
Alchemiops KNN+D3, TorchSim, Refactor #143

Merged: vsimkus merged 11 commits into main from alchemi-knn-d3-torchsim-refactor, Feb 17, 2026
Conversation

@vsimkus (Contributor) commented Feb 5, 2026

Porting internal changes to orb-models:

  • Refactored the codebase, which is why the diff is so large
  • Added knn_alchemi for graph construction and made it the default
  • Added a D3 dispersion correction module using alchemiops (with an example in the README)
  • Added a TorchSim wrapper module (with an example in the README; the torch-sim dependency is optional)
  • Removed the loss_weights.get(target_name, default) pattern; the defaults are now set in pretrained.py when loading the model
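The knn_alchemi kernel itself is GPU code, but the graph it produces can be sketched with a dense brute-force NumPy version (illustrative only; `knn_edges` and its shapes are assumptions for exposition, not the orb-models API):

```python
import numpy as np

def knn_edges(positions: np.ndarray, k: int):
    """Build a directed k-nearest-neighbour edge list from an (N, d) position array.

    Brute-force O(N^2) sketch of the idea the GPU kernel accelerates.
    Returns (senders, receivers), each of length N * k.
    """
    diff = positions[:, None, :] - positions[None, :, :]   # (N, N, d)
    dist = np.linalg.norm(diff, axis=-1)                   # (N, N)
    np.fill_diagonal(dist, np.inf)                         # exclude self-edges
    neighbours = np.argsort(dist, axis=1)[:, :k]           # (N, k) nearest indices
    senders = np.repeat(np.arange(len(positions)), k)
    receivers = neighbours.ravel()
    return senders, receivers

# Three close atoms plus one far outlier:
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
s, r = knn_edges(pos, k=2)
```

Note that unlike a radius cutoff, KNN guarantees every node gets exactly `k` edges, which is what makes batch scaling predictable.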

Dependency/CI changes:

  • Switched from poetry to uv
  • Updated CI to run ruff linter checks and parallelise pytest
  • Requires Python >= 3.12 (needed by torch-sim)
  • Requires torch >= 2.8.0 (needed by nvalchemiops)
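Under uv these floors live directly in pyproject.toml rather than poetry's tool table. A hedged sketch of the relevant PEP 621 fields (the extra name and exact pins are illustrative assumptions, not copied from the repo):

```toml
[project]
requires-python = ">=3.12"   # floor imposed by torch-sim
dependencies = [
    "torch>=2.8.0",          # floor imposed by nvalchemiops
]

[project.optional-dependencies]
# TorchSim support stays opt-in, e.g. installed via `uv sync --extra torch-sim`
torch-sim = ["torch-sim"]
```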

Checks:

  • README.md updated and examples working
  • NaClWaterMD.py works
  • Tests run
  • Finetune runs
  • Loading finetuned model works
  • Speed test script works
  • Check docker image works (note: had to use a torch==2.10 base image, since the torch==2.8 images ship Python < 3.12, and upgrading Python would invalidate the base image's preinstalled PyTorch, defeating the point of using a base image in the first place)

Regressions:

vsimkus (Author) commented Feb 5, 2026

This stack of pull requests is managed by Graphite.

@vsimkus changed the title from "Switch to uv" to "Updates from internal repo: Alchemiops KNN+D3, TorchSim, Refactor" on Feb 5, 2026
@vsimkus force-pushed the alchemi-knn-d3-torchsim-refactor branch from 1f8402a to 93aee72 on February 5, 2026 16:08
@vsimkus changed the title from "Updates from internal repo: Alchemiops KNN+D3, TorchSim, Refactor" to "Alchemiops KNN+D3, TorchSim, Refactor" on Feb 5, 2026
@vsimkus force-pushed the branch 5 times, most recently from 17d4ed7 to 059405c on February 5, 2026 16:56
@vsimkus force-pushed the branch 2 times, most recently from 3220acb to 7db6381 on February 5, 2026 17:12
@vsimkus marked this pull request as ready for review on February 5, 2026 17:43
@vsimkus force-pushed the branch from 7db6381 to 71efe5c on February 6, 2026 13:09
> ```
> pip install "cuml-cu12==25.2.*"  # for CUDA versions >=12.0, <13.0
> ```

### Updates
Contributor comment:

Add a small update about improvements, plugging alchemiops repo?

vsimkus (Author) replied:

Added this:

**February 2026**: Improved GPU-accelerated graph construction with [ALCHEMI Toolkit-Ops](https://github.com/NVIDIA/nvalchemi-toolkit-ops) and batched simulation with [TorchSim](https://github.com/TorchSim/torch-sim):

* Alchemi-based graph construction (GPU-accelerated, up to 12x faster for large single systems, and sub-linear batch scaling delivering >100x graph construction speed-up for large batches of small systems)
* TorchSim wrapper for batched optimisation and simulation, see [usage with TorchSim](#usage-with-torchsim)
* Alchemi-based D3 dispersion correction module, see [D3 correction](#d3-correction)

What do you think?

benrhodes26 (Contributor) left a comment:
🎉🎉🎉

@vsimkus vsimkus merged commit d5afa35 into main Feb 17, 2026
7 checks passed