
Orbital Materials


Pretrained models for atomic simulations


Install

pip install orb-models

Orb models are expected to work on macOS and Linux. Windows support is not guaranteed.

Alternatively, you can use Docker to run orb-models; see instructions below.

Updates

February 2026: Improved GPU-accelerated graph construction with ALCHEMI Toolkit-Ops and batched simulation with TorchSim:

  • Alchemi-based graph construction (GPU-accelerated, up to 12x faster for large single systems, and sub-linear batch scaling delivering >100x graph construction speed-up for large batches of small systems)
  • TorchSim wrapper for batched optimisation and simulation, see usage with TorchSim
  • Alchemi-based D3 dispersion correction module, see D3 correction

August 2025: Release of the OrbMol potentials:

  • Trained on the Open Molecules 2025 (OMol25) dataset—over 100M high-accuracy DFT calculations (ωB97M-V/def2-TZVPD) on diverse molecular systems including metal complexes, biomolecules, and electrolytes.
  • Architecturally similar to the highly performant Orb-v3 models, but able to take explicit total charge and spin multiplicity as input.
  • To get started with these models, see: How to specify total charge and spin multiplicity for OrbMol.

April 2025: Release of the Orb-v3 set of potentials.

October 2024: Release of the Orb-v2 set of potentials.

September 2024: Release of the v1 models, with state-of-the-art performance on the Matbench Discovery dataset.

Available models

See MODELS.md for a full list of available models along with usage guidance.
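
To inspect the available model names programmatically, you can list the keys of the ORB_PRETRAINED_MODELS registry (referenced in the snippets and the finetuning section below); a minimal sketch:

from orb_models.forcefield.pretrained import ORB_PRETRAINED_MODELS

# Each key is a loadable model name (and a valid --base_model for finetuning).
for name in sorted(ORB_PRETRAINED_MODELS.keys()):
    print(name)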

Usage

Note: These examples are designed to run on the main branch of orb-models. If you are using a pip-installed version of orb-models, you may want to look at the corresponding README.md from that tag.

Direct usage

import ase
from ase.build import bulk

from orb_models.forcefield import pretrained

device = "cpu"  # or device="cuda"
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)
atoms = bulk('Cu', 'fcc', a=3.58, cubic=True)
graph = atoms_adapter.from_ase_atoms(atoms, device=device)

# If you have several graphs, batch them like so:
# graph = atoms_adapter.batch([graph1, graph2])
# or 
# graph = atoms_adapter.from_ase_atoms_list([atoms1, atoms2])

result = orbff.predict(graph, split=False)

# Convert to ASE atoms (unbatches the results and transfers to cpu if necessary)
atoms = graph.to_ase_atoms(
    energy=result["energy"],
    forces=result["grad_forces"],
    stress=result["grad_stress"]
)

Usage with ASE calculator

import ase
from ase.build import bulk

from orb_models.forcefield import pretrained
from orb_models.forcefield.inference.calculator import ORBCalculator

device="cpu" # or device="cuda"
# or choose another model using ORB_PRETRAINED_MODELS[model_name]()
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)
calc = ORBCalculator(orbff, atoms_adapter=atoms_adapter, device=device)
atoms = bulk('Cu', 'fcc', a=3.58, cubic=True)

atoms.calc = calc
atoms.get_potential_energy()

You can use this calculator with any ASE calculator-compatible code. For example, you can use it to perform a geometry optimization:

from ase.optimize import BFGS

# Rattle the atoms to get them out of the minimum energy configuration
atoms.rattle(0.5)
print("Rattled Energy:", atoms.get_potential_energy())

calc = ORBCalculator(orbff, atoms_adapter=atoms_adapter, device="cpu") # or device="cuda"
dyn = BFGS(atoms)
dyn.run(fmax=0.01)
print("Optimized Energy:", atoms.get_potential_energy())

Or you can use it to run MD simulations. The script, an example input xyz file, and a Colab notebook demonstration are available in the examples directory. This should work with any input; simply modify the input_file and cell_size parameters. We recommend constant-volume simulations.
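
As a minimal sketch of a constant-volume (NVT) run with standard ASE molecular dynamics tooling, reusing the atoms object with the ORBCalculator attached from the snippets above (the temperature, friction and step count here are illustrative choices, not recommended settings):

from ase import units
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution

# Initialise velocities at 300 K and run a short Langevin (NVT) trajectory.
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
dyn = Langevin(
    atoms,
    timestep=1.0 * units.fs,
    temperature_K=300,
    friction=0.01 / units.fs,
)
dyn.run(500)  # 500 steps of 1 fs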

Usage with TorchSim

For batched optimisation, we recommend using TorchSim. It's an optional dependency that can be installed with pip install torch-sim-atomistic.

import ase
import torch
import torch_sim as ts
from ase.build import bulk

from orb_models.forcefield import pretrained
from orb_models.forcefield.inference.orb_torchsim import OrbTorchSimModel


device = "cpu"  # or device="cuda"
# or choose another model using ORB_PRETRAINED_MODELS[model_name]()
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)

atoms1 = bulk('Cu', 'fcc', a=3.58, cubic=True)
atoms2 = bulk('Si', 'diamond', a=5.43, cubic=True)
atoms_list = [atoms1, atoms2]
ts_state = ts.io.atoms_to_state(atoms_list, device, dtype=torch.get_default_dtype())

ts_model = OrbTorchSimModel(orbff, atoms_adapter)
results = ts_model(ts_state)
results["energy"]

You can use this module for geometry optimisation and MD simulation:

# Rattle the atoms to get them out of the minimum energy configuration
atoms1.rattle(0.5)
atoms2.rattle(0.5)
atoms_list = [atoms1, atoms2]
ts_state = ts.io.atoms_to_state(atoms_list, device, dtype=torch.get_default_dtype())

ts_model = OrbTorchSimModel(orbff, atoms_adapter)
results = ts_model(ts_state)
print("Rattled energies:", results["energy"])

# Optimise with TorchSim
relaxed_state = ts.optimize(
    system=ts_state,
    convergence_fn=ts.generate_force_convergence_fn(
        force_tol=0.01,
        include_cell_forces=False,
    ),
    model=ts_model,
    optimizer=ts.Optimizer["fire"],
    max_steps=100,
    steps_between_swaps=10,
)
results = ts_model(relaxed_state)
print("Rattled energies:", results["energy"])

How to specify total charge and spin multiplicity for OrbMol

The OrbMol models require the total charge and spin multiplicity to be specified. This can be done by setting them in the atoms.info dictionary.

import ase
from ase.build import molecule

from orb_models.forcefield import pretrained

device = "cpu"  # or device="cuda"
orbff, atoms_adapter = pretrained.orb_v3_conservative_omol(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)
atoms = molecule("C6H6")

atoms.info["charge"] = 0  # total charge
atoms.info["spin"] = 1  #  spin multiplicity
graph = atoms_adapter.from_ase_atoms(atoms, device=device)

result = orbff.predict(graph, split=False)

D3 correction

We provide a D3 dispersion correction module, based on nvalchemiops, for improved modeling of van der Waals interactions. To use the D3 correction, wrap your model with D3SumModel:

import ase
from ase.build import bulk

from orb_models.forcefield import pretrained
from orb_models.forcefield.inference.calculator import ORBCalculator
from orb_models.forcefield.inference.d3_model import D3SumModel, AlchemiDFTD3

device = "cpu"  # or device="cuda"
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)
orbff_d3 = D3SumModel(orbff, AlchemiDFTD3(functional="PBE", damping="BJ", compile=True))

calc = ORBCalculator(orbff_d3, atoms_adapter=atoms_adapter, device=device)
atoms = bulk('Cu', 'fcc', a=3.58, cubic=True)

atoms.calc = calc
atoms.get_potential_energy()

Or with TorchSim:

import torch
import torch_sim as ts
from ase.build import bulk

from orb_models.forcefield import pretrained
from orb_models.forcefield.inference.orb_torchsim import OrbTorchSimModel
from orb_models.forcefield.inference.d3_model import D3SumModel, AlchemiDFTD3

device = "cpu"  # or device="cuda"
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
  precision="float32-high",   # or "float32-highest" / "float64"
)
orbff_d3 = D3SumModel(orbff, AlchemiDFTD3(functional="PBE", damping="BJ", compile=True))

atoms = bulk('Cu', 'fcc', a=3.58, cubic=True)
ts_state = ts.io.atoms_to_state([atoms], device, dtype=torch.get_default_dtype())

ts_model = OrbTorchSimModel(orbff_d3, atoms_adapter)
results = ts_model(ts_state)
results["energy"]

Confidence head (Orb-v3 Models Only)

Orb-v3 models have a confidence head that produces a per-atom, discrete confidence measure: a classifier head learns to predict the binned MAE between predicted and true forces during training. The classifier has 50 bins, linearly spaced between 0 and 0.4 Å.

import ase
import matplotlib.pyplot as plt # optional, for visualization only
import numpy
from ase.build import molecule
from seaborn import heatmap # optional, for visualization only

from orb_models.forcefield import pretrained
from orb_models.forcefield.inference.calculator import ORBCalculator

device="cpu" # or device="cuda"
# or choose another model using ORB_PRETRAINED_MODELS[model_name]()
orbff, atoms_adapter = pretrained.orb_v3_conservative_inf_omat(
  device=device,
)
calc = ORBCalculator(orbff, atoms_adapter=atoms_adapter, device=device)
# Use a molecule (OOD for Orb-Omat, so confidence plot is
# more interesting than a bulk crystal)
atoms = molecule("CH3CH2Cl")
atoms.calc = calc

forces = atoms.get_forces()
confidences = calc.results["confidence"]
predicted_bin_per_atom = numpy.argmax(confidences, axis=-1)

print(forces.shape, confidences.shape) # (num_atoms, 3), (num_atoms, 50)
print(predicted_bin_per_atom) # Array of length num_atoms
heatmap(confidences)
plt.xlabel('Confidence Bin')
plt.ylabel('Atom Index')
plt.title('Confidence Heatmap')
plt.show()
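
If a single scalar per-atom estimate is more convenient than the full 50-bin distribution, one option (a sketch based on the binning described above, mapping each atom's most likely bin to its centre) is:

# 50 bins linearly spaced over [0, 0.4]; convert argmax bins to bin centres.
bin_width = 0.4 / 50
estimated_force_mae_per_atom = (predicted_bin_per_atom + 0.5) * bin_width
print(estimated_force_mae_per_atom)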

Floating Point Precision

As shown in the usage snippets above, we support three floating point precision types: "float32-high", "float32-highest" and "float64".

The default value of "float32-high" is recommended for maximal acceleration when using A100 / H100 Nvidia GPUs. However, we have observed some performance loss for high-precision calculations involving second and third order properties of the PES. In these cases, we recommend "float32-highest".

In stark contrast to other universal forcefields, we have not found any benefit to using "float64".

Graph construction

From version 0.5.6, knn_alchemi is the default and recommended graph construction method. It uses ALCHEMI Toolkit-Ops for fast GPU-accelerated nearest-neighbor search.

The following methods are available via the edge_method parameter of ORBCalculator, OrbTorchSimModel, atoms_adapter.from_ase_atoms(), or atoms_adapter.from_torchsim_state(); see the example below the table:

Method            Status         Notes
knn_alchemi       Recommended    Fast on both CPU and GPU, excellent batch scaling
knn_scipy         Deprecated     Slightly faster for single-system CPU construction
knn_brute_force   Deprecated     Legacy GPU method for small systems
knn_cuml_rbc      Deprecated     Legacy GPU method for larger systems
knn_cuml_brute    Deprecated     Legacy cuML brute force

Note: Deprecated methods will be removed in a future release. For cuML-based methods, install cuml:

pip install "cuml-cu11==25.2.*"  # For CUDA 11.4-11.8
pip install "cuml-cu12==25.2.*"  # For CUDA 12.x

Finetuning

You can finetune the model using your custom dataset. The dataset should be an ASE sqlite database.

📖 For detailed instructions, including custom loss weights, reference energies, and API usage, see the Finetuning Guide.
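
As a sketch of how such a database might be assembled with standard ASE tooling (assuming your reference structures are ASE Atoms objects; note that the finetuning script expects energy, forces and stress on each row, see the caveats below):

import numpy as np
from ase.build import bulk
from ase.calculators.singlepoint import SinglePointCalculator
from ase.db import connect

# Attach reference energy/forces/stress to each Atoms object, then write to sqlite.
atoms = bulk('Cu', 'fcc', a=3.58, cubic=True)
atoms.calc = SinglePointCalculator(
    atoms,
    energy=-3.75,                        # illustrative values only
    forces=np.zeros((len(atoms), 3)),
    stress=np.zeros(6),
)

db = connect("my_dataset.db")
db.write(atoms)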

Basic usage:

python finetune.py --dataset=<dataset_name> --data_path=<your_data_path> --base_model=<base_model>

Where base_model is an element of orb_models.forcefield.pretrained.ORB_PRETRAINED_MODELS.keys().

After the model is finetuned, checkpoints will, by default, be saved to the ckpts folder in the directory from which you ran the finetuning script. You can load the finetuned checkpoint as follows:

from orb_models.forcefield import pretrained

model, atoms_adapter = getattr(pretrained, <base_model>)(
  weights_path=<path_to_ckpt>, 
  device="cpu",               # or device="cuda"
  precision="float32-high",   # or precision="float32-highest"
)

⚠ Caveats

Our finetuning script is designed for simplicity. We strongly advise users to customise it further for their use-case to get the best performance. Please be aware that:

  • The script assumes that your ASE database rows contain energy, forces, and stress data. To train on molecular data without stress, you will need to edit the code.
  • Early stopping is not implemented. However, you can use the command line argument save_every_x_epochs (default is 5), so "retrospective" early stopping can be applied by selecting a suitable checkpoint.
  • The learning rate schedule is hardcoded to be torch.optim.lr_scheduler.OneCycleLR with pct_start=0.05. The max_lr/min_lr will be 10x greater/smaller than the lr specified via the command line. To get the best performance, you may wish to try other schedulers (a rough illustration of this schedule follows this list).
  • The defaults of --num_steps=100 and --max_epochs=50 are small. This may be suitable for very small finetuning datasets (e.g. 100s of systems), but you will likely want to increase the number of steps for larger datasets (e.g. 1,000s of datapoints).
  • The default loss equally weights all loss components (energy, forces, stress), but in practice we've found that adjusting the relative weighting can have a significant effect on the overall performance of the model.
  • The script only tracks a limited set of metrics (energy/force/stress MAEs) which may be insufficient for some downstream use-cases. For instance, if you wish to finetune a model for Molecular Dynamics simulations, we have found (anecdotally) that models that are just on the cusp of overfitting to force MAEs can be substantially worse for simulations. Ideally, more robust "rollout" metrics would be included in the finetuning training loop. In lieu of this, we recommend more aggressive early-stopping i.e. using models several epochs prior to any sign of overfitting.
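
For reference, a rough standalone illustration of the schedule described in the caveat above (not the script's actual code; the dummy model and base lr are placeholders):

import torch

lr = 3e-4                        # stand-in for the lr passed on the command line
num_steps, max_epochs = 100, 50  # the script's defaults mentioned above

model = torch.nn.Linear(8, 1)    # dummy model, for illustration only
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=10 * lr,                      # peak LR is 10x the base lr
    total_steps=num_steps * max_epochs,
    pct_start=0.05,                      # 5% warmup, as in the hardcoded schedule
)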

Docker

You can run orb-models using Docker, which provides a consistent environment with all dependencies pre-installed:

  1. Build the Docker image locally:

    docker build -t orb_models .
  2. Run the Docker container:

    docker run --gpus all --rm --name orb_models -it orb_models /bin/bash

Citing

Preprints describing the models in more detail can be found on arXiv:

@misc{rhodes2025orbv3atomisticsimulationscale,
      title={Orb-v3: atomistic simulation at scale}, 
      author={Benjamin Rhodes and Sander Vandenhaute and Vaidotas Šimkus and James Gin and Jonathan Godwin and Tim Duignan and Mark Neumann},
      year={2025},
      eprint={2504.06231},
      archivePrefix={arXiv},
      primaryClass={cond-mat.mtrl-sci},
      url={https://arxiv.org/abs/2504.06231}, 
}

@misc{neumann2024orbfastscalableneural,
      title={Orb: A Fast, Scalable Neural Network Potential}, 
      author={Mark Neumann and James Gin and Benjamin Rhodes and Steven Bennett and Zhiyi Li and Hitarth Choubisa and Arthur Hussey and Jonathan Godwin},
      year={2024},
      eprint={2410.22570},
      archivePrefix={arXiv},
      primaryClass={cond-mat.mtrl-sci},
      url={https://arxiv.org/abs/2410.22570}, 
}

License

Orb models are licensed under the Apache License, Version 2.0. Please see the LICENSE file for details.

If you have an interesting use case or benchmark for an Orb model, please let us know! We are happy to work with the community to make these models useful for as many applications as possible.

Community

Please join the discussion on Discord by following this link.
