Merged

135 commits
75ac959
Add initial pixi environment
sdvillal Nov 3, 2025
a6e60f7
Reorder dependencies for easier read
sdvillal Nov 3, 2025
c7e75f0
Add openfold3 as an editable dependency
sdvillal Nov 4, 2025
6556328
Sync cuda-python pin between pypi package and the conda environment
sdvillal Nov 4, 2025
7563eca
Comments
sdvillal Nov 7, 2025
c16dda1
Add explicitly a conda yml version of the pixi environment
sdvillal Nov 12, 2025
57de8ef
Improve some wordings
sdvillal Nov 12, 2025
b0c8570
Update pixi lockfile
sdvillal Nov 12, 2025
c1036d8
Vendoring pieces of deepspeed
sdvillal Nov 12, 2025
ed5ea48
Swap ninja verification with pytorch's
sdvillal Nov 12, 2025
485bf91
Vendoring pieces of deepspeed
sdvillal Nov 12, 2025
34ab9e9
Use vendored deepspeed evoformer builder
sdvillal Nov 12, 2025
50c768e
Add symlink to vendored deepspeed as in upstream
sdvillal Nov 12, 2025
dfcdf42
Vendor also op_builder.__init__ from deepspeed
sdvillal Nov 12, 2025
5335ec9
Import explicitly EvoformerAttnBuilder, avoiding broken introspection…
sdvillal Nov 12, 2025
4f73cb1
Add a ignore mechanism for cutlass detection in vendored deepspeed
sdvillal Nov 12, 2025
46f7110
Apply cutlass detection workaround and remove all nvidia-cutlass tric…
sdvillal Nov 12, 2025
d0318f4
Remove nvidia-cutlass from openfold-3 dependencies (fix later)
sdvillal Nov 12, 2025
eb4c1c5
Remove pypi ninja dependency in pixi workspace
sdvillal Nov 12, 2025
37094c2
No need for cutlass hacks
sdvillal Nov 12, 2025
71ba5fb
Add pixi config to .gitattributes
sdvillal Nov 12, 2025
f9e84c9
Remove deepspeed hacks for good
sdvillal Nov 12, 2025
827b47e
Update pixi lockfile
sdvillal Nov 12, 2025
cfc7390
Update pixi conda environment
sdvillal Nov 12, 2025
343391e
Remove MKL from pypi dependencies, as it is unused
sdvillal Dec 1, 2025
aeeb193
Remove aria2 from pypi dependencies, unused and not so much of a conv…
sdvillal Dec 1, 2025
0029ae0
Update lockfile
sdvillal Dec 2, 2025
b565211
Re-enable pure PyPI install
Emrys-Merlin Dec 17, 2025
49d4d69
Disable hack when conda is active
Emrys-Merlin Dec 18, 2025
cd0bc2b
More comments on cutlass python API deprecation and pytorch
sdvillal Dec 29, 2025
2e2b1a0
Make pixi environments (CPU, CUDA12, CUDA13, for all major platforms)
sdvillal Dec 30, 2025
8ad5911
Increase LMDB map size to make test pass in osx-arm64
sdvillal Dec 30, 2025
59e4cd2
Better comments of TODOs in pixi.toml
sdvillal Dec 30, 2025
f59ca9f
Pin cuequivariance until test failure is investigated
sdvillal Dec 31, 2025
ab0a5b2
Move deepspeed to optional dependency also in pyproject
sdvillal Jan 11, 2026
6550284
Pyproject: extend python version support
sdvillal Jan 11, 2026
008fb27
Pyproject: move dependencies table together with optional-dependencies
sdvillal Jan 11, 2026
e7ce361
Pyproject: document future decision on dependency-groups
sdvillal Jan 11, 2026
775f601
Pyproject: reformat to consolidate indent to 4 spaces
sdvillal Jan 11, 2026
f5e652b
Pyproject: reorder dependencies for easier read
sdvillal Jan 11, 2026
597774d
Pixi: add scipy
sdvillal Jan 11, 2026
19d5a37
Pixi: add comment on CUDA13
sdvillal Jan 11, 2026
a30a311
Pixi: make cuequivariance CUDA generic for its conda packages
sdvillal Jan 11, 2026
e81a771
Pixi: add reminder about devel install
sdvillal Jan 11, 2026
4804404
Pyproject: fix and improve readability, add URLs
sdvillal Jan 11, 2026
3ad4e81
pixi.toml: make more readable by showing first envs, then base, then …
sdvillal Jan 11, 2026
f1ffcda
pixi.toml: pin deepspeed to 0.18.3, first one with ninja detection fixed
sdvillal Jan 11, 2026
58a6342
pixi.toml: fully enable aarch64 and cuda13, revamp docs
sdvillal Jan 11, 2026
00f6298
pixi.lock: update
sdvillal Jan 11, 2026
81288cb
pixi.toml: add triton to cuequivariance dependencies for CUDA13
sdvillal Jan 11, 2026
f768474
pixi.lock: update
sdvillal Jan 11, 2026
d263faa
pixi.toml: include pip to allow users to play
sdvillal Jan 11, 2026
1ef3eca
pixi.toml: formatting for better readability
sdvillal Jan 12, 2026
abee0d6
pixi.toml: restrict cuequivariance-cu13 to linux-64 until we unpin to…
sdvillal Jan 12, 2026
5a80cbb
pixi.toml: formatting for better readability
sdvillal Jan 12, 2026
fed067f
pixi.toml: make pytorch-gpu an isolated environment feature
sdvillal Jan 12, 2026
cdd4f77
pixi.toml: add environments that combine mostly pypi-based deps with …
sdvillal Jan 12, 2026
5b8ab7b
pixi.toml: add openfold3-editable-full and account for lack of cuequi…
sdvillal Jan 12, 2026
c85d0ba
pixi.toml: brief documentation of the pypi-dominant environments
sdvillal Jan 12, 2026
fa84206
pixi.toml: add also the dev optional dependency group to openfold3-full
sdvillal Jan 12, 2026
09a665e
pyproject.toml: pin cuequivariance to <0.8 until we adapt tests
sdvillal Jan 12, 2026
f99478d
pixi.toml: add kalign to required non-pypi dependencies
sdvillal Jan 12, 2026
cf6a09f
pixi.toml: add more bioinformatics tools to non-pypi
sdvillal Jan 12, 2026
23e7ff2
pixi.toml: make env setup be part of the deepspeed-build feature
sdvillal Jan 12, 2026
968c5dc
pixi.toml: simplify management of pypi features
sdvillal Jan 13, 2026
d51297b
pixi.lock: update, all tests pass A100,B300 x CUDA12,CUDA13
sdvillal Jan 13, 2026
464395c
pixi.toml: add table of what works and what needs test
sdvillal Jan 13, 2026
a96f797
pixi.toml: add tasks for exporting to regular conda environment yamls
sdvillal Jan 15, 2026
bc2d785
conda environments: delete outdated modernized conda env, use new tas…
sdvillal Jan 15, 2026
5da40ca
pixi.toml: bump min pixi version
sdvillal Jan 17, 2026
87cd0b1
pixi.toml: remove unnecessary comments
sdvillal Jan 17, 2026
9a89c86
pixi.toml: remove unnecessary envvar definition for isolating extensi…
sdvillal Jan 17, 2026
324d969
pixi.toml: better definition of maintenance environment
sdvillal Jan 17, 2026
ff00354
pixi.toml: add simple task to run test and save rsults to an environm…
sdvillal Jan 17, 2026
ca10254
of3: enable pickling regardless of forking strategy and platform
sdvillal Jan 17, 2026
6eb140d
of3: enable multiple data loader workers in osx mps backed
sdvillal Jan 17, 2026
946e316
Vendor improved deepspeed builder from upstream PR
sdvillal Jan 17, 2026
f3aef42
pixi.lock: update
sdvillal Jan 18, 2026
23a7f0c
pixi.toml: remove some comment noise
sdvillal Jan 18, 2026
42e7a82
Merge main into modernize-conda-environment
sdvillal Jan 19, 2026
4fbc53e
of3: fix multiprocessing configuration corner case in osx
sdvillal Jan 19, 2026
9d0e7f1
docker: move outdated example dockerfiles to docker/pixi-examples
sdvillal Jan 19, 2026
32dd7e5
examples: add example runner for osx inference
sdvillal Jan 19, 2026
1199a94
pixi.toml: ensure we get the right pytorch from pypi
sdvillal Jan 19, 2026
6970215
pixi.lock: update, fixed torch cuda missmatch in pypi environments
sdvillal Jan 19, 2026
2797a12
pixi.toml: fix lock export + make default environment be maintenance
sdvillal Jan 19, 2026
57f265e
pixi.toml: use a more consitent name for environment arg
sdvillal Jan 19, 2026
9934213
pixi.lock: update
sdvillal Jan 19, 2026
430e600
pixi.toml: workaround for no-default-feature breaking the test task (…
sdvillal Jan 19, 2026
ded3482
pixi.toml: issue with pixi pypi resolution seems solved
sdvillal Jan 21, 2026
7674a58
Revert "pixi.toml: issue with pixi pypi resolution seems solved"
sdvillal Jan 22, 2026
779bcd2
pixi.toml: better document problem and workaround
sdvillal Jan 22, 2026
d9a46a6
pixi.toml: make the test task present in all relevant environments
sdvillal Jan 22, 2026
76de949
pixi.toml: let CUDA13 flow freely
sdvillal Jan 22, 2026
884b5db
pixi.lock: update for initial pytorch 2.10, cuda 13.1 support
sdvillal Jan 22, 2026
58948fa
pixi.toml: add safe cuda environments (no accelerators)
sdvillal Jan 22, 2026
307c3b7
of3: remove deepspeed hacks
sdvillal Jan 31, 2026
083c002
of3: unvendor deepspeed
sdvillal Jan 31, 2026
f8026cb
pixi.toml: simplify deepspeed dependency after our changes made it to…
sdvillal Jan 31, 2026
b0c6cfa
pixi.toml: remove safe environments as we are not maintaining them
sdvillal Jan 31, 2026
11658b3
pixi.toml: enable pytorch-coda in cuda 13 env after 2.10 release
sdvillal Jan 31, 2026
49bd5ff
pyproject.toml: pin deepspeed to >0.18.5, improved evoformer compilation
sdvillal Jan 31, 2026
240d295
Merge main
sdvillal Jan 31, 2026
46025fb
Add awscrt to dependencies, missing from recent PR
sdvillal Jan 31, 2026
0977e8d
pixi.toml: setup correctly path to PTXAS_BLACKWELL for triton >=3.6.0
sdvillal Feb 1, 2026
9122b7c
pixi.toml: add -safe environments, at the moment just without cuequiv…
sdvillal Feb 2, 2026
fb7118b
pixi.lock: update after consolidation (no vendor, pytorch 2.10 + CF c…
sdvillal Feb 2, 2026
779e5f2
pixi.toml: update outdated comments
sdvillal Feb 3, 2026
1a40c35
Merge branch 'main' into modernize-conda-environment
jandom Feb 10, 2026
1e8e0b1
updates with GB10 tests (#2)
jandom Feb 12, 2026
f1db6a4
pixi.toml: remove safe environments
sdvillal Feb 12, 2026
308d6ce
pixi.lock: update after removal of safe environments
sdvillal Feb 12, 2026
70f6050
Remove pixi docker examples, to rework
sdvillal Feb 12, 2026
020a526
Comment-out workaround for hard to reproduce ABI mismatch problem
sdvillal Feb 12, 2026
cb28d15
pixi.toml: bump pixi, improve conda export by including all env varia…
sdvillal Feb 25, 2026
a165438
Merge main
sdvillal Mar 7, 2026
caf46d9
pixi.toml: unpin biotite
sdvillal Mar 7, 2026
c8b1337
pixi.toml: python has its own feature
sdvillal Mar 7, 2026
c43f67f
pixi.toml: bump deepspeed
sdvillal Mar 7, 2026
c47ecde
pyproject.toml: bump deepspeed to version without Evoformer build bug
sdvillal Mar 7, 2026
637c822
pixi.toml: detail on workaround
sdvillal Mar 7, 2026
8838489
pixi.lock: update
sdvillal Mar 7, 2026
094ee72
pixi.toml: add example task to update safely the lockfile
sdvillal Mar 7, 2026
f7f2967
Merge main into modernize-conda-environment
sdvillal Mar 14, 2026
2b95bf0
pixi.toml: remove kalign2
sdvillal Mar 15, 2026
fcfbe34
tests: fix test depending on unspecified glob return order
sdvillal Mar 15, 2026
75ebed6
pixi.toml: better metadata
sdvillal Mar 15, 2026
cda0d57
docs: wip
sdvillal Mar 15, 2026
33b6bca
pixi.lock: update
sdvillal Mar 15, 2026
23811a4
Allow to configure multiprocessing start and set safe defaults
sdvillal Mar 17, 2026
7f97929
Fix capitalization error
sdvillal Mar 20, 2026
580ddb6
Fix capitalization error
sdvillal Mar 20, 2026
1231f2d
Fix typo
sdvillal Mar 20, 2026
287c6c5
Merge branch 'main' into modernize-conda-environment
jandom Mar 21, 2026
e5fbf1e
pixi.lock: update
sdvillal Mar 22, 2026
2 changes: 2 additions & 0 deletions .gitattributes
@@ -0,0 +1,2 @@
# SCM syntax highlighting & preventing 3-way merges
pixi.lock merge=binary linguist-language=YAML linguist-generated=true
7 changes: 7 additions & 0 deletions .gitignore
@@ -16,3 +16,10 @@ cutlass/

# User-specific pre-commit settings
.pre-commit-config.yaml

# pixi environments
.pixi/*
!.pixi/config.toml

# uv environments
.venv/
27 changes: 26 additions & 1 deletion docs/source/Installation.md
@@ -30,8 +30,34 @@ to install GPU accelerated {doc}`cuEquivariance attention kernels <kernels>`, us
pip install openfold3[cuequivariance]
```

### Modern conda environments with pixi

OpenFold3 can now be installed in conda environments with [pixi](https://pixi.prefix.dev/latest/index.html).

First, [install pixi](https://pixi.prefix.dev/latest/installation/):

```shell
# Do this once and enjoy pixi for all your future projects!
curl -fsSL https://pixi.sh/install.sh | sh
# Then restart your shell and optionally install pixi completions
```

You can then run OpenFold3 in one of the provided environments:
```shell
pixi run -e openfold3-cpu setup_openfold
pixi run -e openfold3-cpu run_openfold
```

We provide the following environments:
- openfold3-cpu (linux-64, linux-aarch64, osx-64, osx-arm64)
- openfold3-cuda12 and openfold3-cuda13 (linux-64, linux-aarch64)

For more information, including rationale, tips and tricks, see [Modern Conda Environments with Pixi](./modern-conda-environments-with-pixi.md).

### Environment variables

> **Note:** This may need a revision given the pixi managed envs above (JD).

OpenFold may need a few environment variables set so CUDA, compilation, and JIT-built extensions can be found correctly.

- `CUDA_HOME` should point to the CUDA installation. On many HPC clusters this can be set by loading the appropriate toolchain via environment modules, for example `module load cuda`. If you do not set it you will likely get a `No such file or directory: '/usr/local/cuda/bin/nvcc'` error.
@@ -54,7 +80,6 @@ OpenFold may need a few environment variables set so CUDA, compilation, and JIT-
- Example: `export LIBRARY_PATH="$(echo "$CUDA_HOME" | sed 's|/cuda/|/math_libs/|')/targets/sbsa-linux/lib:${LIBRARY_PATH:-}"`



### OpenFold3 Docker Image

#### Dockerhub
3 changes: 3 additions & 0 deletions docs/source/modern-conda-environments-with-pixi.md
@@ -0,0 +1,3 @@
# Modern OpenFold conda environments with pixi


14 changes: 14 additions & 0 deletions examples/example_runner_yamls/osx.yaml
@@ -0,0 +1,14 @@
model_update:
presets:
- "predict"
- "pae_enabled" # if using PAE enabled model
# - "low_mem" # for lower memory systems
custom:
settings:
memory:
eval:
# otherwise the current default wrongly uses deepspeed, even when it is not available
use_deepspeed_evo_attention: false

data_module_args:
num_workers: 1
3 changes: 1 addition & 2 deletions openfold3/__init__.py
@@ -19,8 +19,6 @@
import gemmi
from packaging import version

from . import hacks # noqa: F401

if version.parse(gemmi.__version__) >= version.parse("0.7.3"):
gemmi.set_leak_warnings(False)

@@ -32,3 +30,4 @@
# This has weird effects with hanging if libaio is not installed and can
# cause restart errors if run is preempted in the middle of autotuning
deepspeed.HAS_TRITON = False
# FIXME: do we need this? it is really invasive with other potential users of DS
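The guarded patch above only touches DeepSpeed when it is importable. A standalone sketch of the same pattern (illustrative only, not the exact OpenFold3 module code):

```python
import importlib.util

# Only patch DeepSpeed when it is actually installed, so CPU-only or
# PyPI-minimal environments keep working.
if importlib.util.find_spec("deepspeed") is not None:
    import deepspeed

    # Disable triton matmul autotuning: it can hang when libaio is missing
    # and cause restart errors if a run is preempted mid-autotune.
    deepspeed.HAS_TRITON = False
```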
80 changes: 79 additions & 1 deletion openfold3/core/data/framework/data_module.py
@@ -42,6 +42,9 @@
import dataclasses
import enum
import logging
import multiprocessing
import platform
import sys
import warnings
from functools import partial
from typing import Any
@@ -148,14 +151,78 @@
return self.get_subset(datasets_stage_mask)



class DataModuleConfig(BaseModel):
datasets: list[SerializeAsAny[BaseModel]]
batch_size: int = 1
num_workers: int = 0
num_workers_validation: int = 0
multiprocessing_context: str = "openfold-default"
data_seed: int = 42
epoch_len: int = 1

@staticmethod
def safe_multiprocessing_context(
multiprocessing_context: str | None, num_workers: int
) -> str | None:
"""
Returns a multiprocessing start method with safer/sensible defaults:
- fork when using MPS
- forkserver for linux, matching the new 3.14 default
- default otherwise

For general info on risks and defaults across platforms and python versions see:

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:174:89: E501 Line too long (89 > 88)]
https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader
https://docs.pytorch.org/docs/stable/notes/multiprocessing.html#multiprocessing-poison-fork-note
https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
"""

# Do not bother if not using multiprocessing
if num_workers == 0:
return None

# Set safe defaults
if multiprocessing_context == "openfold-default":

# Use fork to create processes when using MPS. See:
# - https://github.com/pytorch/pytorch/issues/70344
# - https://github.com/pytorch/pytorch/issues/87688
if platform.system() == "Darwin" and torch.backends.mps.is_available():
return "fork"

# Use forkserver in linux
# Backports the new python 3.14 default in previous python versions.
# An alternative for further safety would be "spawn". Avoid "fork".
# See: https://github.com/python/cpython/issues/84559
if platform.system() == "Linux":
return "forkserver"

# Use the platform default otherwise - "spawn" at the time of writing
return multiprocessing.get_start_method()

# Warn about unsafe defaults
else:
if platform.system() == "Darwin" and torch.backends.mps.is_available():
if multiprocessing_context != "fork":

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:205:13: SIM102 Use a single `if` statement instead of nested `if` statements]
logger.warning(
f"Using multiprocessing context {multiprocessing_context} on MPS may cause "

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:208:89: E501 Line too long (100 > 88)]
"issues. Consider using 'fork' or 'openfold-default' (which resolves to 'fork' on MPS).",

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:209:89: E501 Line too long (113 > 88)]
stacklevel=2,
)
if platform.system() == "Linux":
dangerous_start_method = (
multiprocessing_context == "fork" or
multiprocessing_context is None and sys.version_info < (3, 14)
)
if dangerous_start_method:
logger.warning(
"Using 'fork' multiprocessing context in linux may cause issues. Consider using "

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:219:89: E501 Line too long (105 > 88)]
"'spawn', 'forkserver' or 'openfold-default' (which resolves to 'forkserver' on linux).",

[CI: GitHub Actions / ruff — openfold3/core/data/framework/data_module.py:220:89: E501 Line too long (113 > 88)]
stacklevel=2,
)

return multiprocessing_context


class DataModule(pl.LightningDataModule):
"""A LightningDataModule class for organizing Datasets and DataLoaders."""
@@ -167,6 +234,7 @@
self.batch_size = data_module_config.batch_size
self.num_workers = data_module_config.num_workers
self.num_workers_validation = data_module_config.num_workers_validation
self.multiprocessing_context = data_module_config.multiprocessing_context
self.data_seed = data_module_config.data_seed
self.next_data_seed = data_module_config.data_seed
self.epoch_len = data_module_config.epoch_len
@@ -433,8 +501,17 @@
# instead of pl.seed_everything(workers=True), so this function is
# passed explicitly here.
worker_init_fn = partial(pl_worker_init_function, rank=self.global_rank)

# Set a sensible default for multiprocessing start method
# depending on platform and python version.
multiprocessing_context = DataModuleConfig.safe_multiprocessing_context(
self.multiprocessing_context, num_workers
)

logger.debug(
f"Creating {mode} dataloader: num_workers={num_workers}, "
f"Creating {mode} dataloader: "
f"num_workers={num_workers}, "
f"multiprocessing_context={multiprocessing_context}, "
f"rank={self.global_rank}."
)
return DataLoader(
@@ -445,6 +522,7 @@
collate_fn=openfold_batch_collator,
generator=self.generators[mode],
worker_init_fn=worker_init_fn,
multiprocessing_context=multiprocessing_context,
)

def train_dataloader(self) -> DataLoader:
4 changes: 4 additions & 0 deletions openfold3/core/data/pipelines/featurization/conformer.py
@@ -108,6 +108,10 @@
coords = conf.GetAtomPosition(atom.GetIdx())
mol_ref_mask.append(int(atom.GetBoolProp("annot_used_atom_mask")))
mol_ref_pos.append(coords)
# Some PyPI installations crash here due to ABI mismatch between RDKit and PyTorch

[CI: GitHub Actions / ruff — openfold3/core/data/pipelines/featurization/conformer.py:111:89: E501 Line too long (94 > 88)]
# Leaving a quick fix commented (beware, moving into slow python land)
# Remove if nobody else hits the problem
# mol_ref_pos.append([coords.x, coords.y, coords.z])

# Atom elements (0-indexed)
element_symbol = atom.GetSymbol()
5 changes: 0 additions & 5 deletions openfold3/core/model/primitives/attention.py
@@ -507,11 +507,6 @@ def _deepspeed_evo_attn(
biases:
List of biases that broadcast to [*, H, Q, K]
"""
from openfold3 import hacks

hacks.prep_deepspeed()
hacks.prep_cutlass()

if not ds4s_is_installed:
raise ValueError(
"_deepspeed_evo_attn requires that DeepSpeed be installed "
40 changes: 0 additions & 40 deletions openfold3/hacks.py

This file was deleted.

3 changes: 1 addition & 2 deletions openfold3/tests/__init__.py
@@ -14,12 +14,11 @@

import importlib

from openfold3 import hacks # noqa: F401

if importlib.util.find_spec("deepspeed") is not None:
import deepspeed

# TODO: Resolve this
# This is a hack to prevent deepspeed from doing the triton matmul autotuning
# I'm not sure why it's doing this by default, but it's causing the tests to hang
deepspeed.HAS_TRITON = False
# FIXME: potentially too invasive with other libraries, see also comments up about libaio
@@ -62,7 +62,7 @@ def test_preparse_databases(self, cli_runner, tmp_path):

# Check that npz files were created for both chains
npz_files = list(tmp_path.glob("*.npz"))
assert [f.name for f in npz_files] == ["2q2k_B.npz", "2q2k_A.npz"]
assert set([f.name for f in npz_files]) == {"2q2k_B.npz", "2q2k_A.npz"}

# Check contents of one npz file
npz_data = np.load(tmp_path / "2q2k_A.npz", allow_pickle=True)
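The test fix above works because `Path.glob` yields entries in arbitrary, filesystem-dependent order, so comparing against a fixed list is flaky; comparing as a set (or sorting first) makes the assertion order-independent. A minimal self-contained illustration:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    for name in ("2q2k_B.npz", "2q2k_A.npz"):
        (root / name).touch()
    # Order of glob results is unspecified, so compare as a set.
    found = {p.name for p in root.glob("*.npz")}
    assert found == {"2q2k_A.npz", "2q2k_B.npz"}
```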
2 changes: 1 addition & 1 deletion openfold3/tests/test_lmdb.py
@@ -66,7 +66,7 @@ def test_lmdb_roundtrip(self, tmp_path):

# Create LMDB
test_lmdb_dir = tmp_path / "test_lmdb"
map_size = 20 * 1024
map_size = 200 * 1024
convert_datacache_to_lmdb(test_config_json, test_lmdb_dir, map_size)

# read lmdb