Merged
194 commits
6c6e5c3
New version
avishart Oct 27, 2024
370d99c
Debug concatenate of constraints
avishart Oct 27, 2024
c5d4db7
Make a function that gets the data from database
avishart Oct 27, 2024
5eaf2b9
Get atoms from database and use a new write function
avishart Oct 27, 2024
a62a90a
Enable get_uncertainty from StoredDataCalculator
avishart Oct 27, 2024
19b23ec
Relocate NEB methods and make structure class
avishart Oct 27, 2024
904f9ef
Make optimization methods for active learning
avishart Oct 27, 2024
9af796f
Relocate the acquisition function classes
avishart Oct 27, 2024
287198b
Make a general active learning class and construct previous used clas…
avishart Oct 27, 2024
6fdc680
Make a general active learning class and construct previous used clas…
avishart Oct 27, 2024
efa8050
Make an argument for the minimum needed data points
avishart Oct 27, 2024
d081021
Change README
avishart Oct 28, 2024
2a57c74
Save calculation results in info
avishart Oct 28, 2024
0a77ff7
Update docstring and make None trajectory possible
avishart Oct 28, 2024
fb0c242
Change Bayesian optimzation to active learning
avishart Oct 28, 2024
ce9064b
Make None as default possible
avishart Oct 30, 2024
a210b24
Ensure properties are saved in structures
avishart Nov 1, 2024
667fb50
Save results in info and use get_all in method
avishart Nov 1, 2024
0fdf2b4
Have matplotlib as optional package
avishart Nov 1, 2024
64a900c
Make useful plots
avishart Nov 1, 2024
c21db95
Update and make tests
avishart Nov 1, 2024
f6d770c
Update README
avishart Nov 1, 2024
2429c55
Debug FixBondLengths
avishart Nov 1, 2024
e3eaf1e
Do not use constraint in annealing
avishart Nov 1, 2024
6225648
Make a proper copy of constraints and optimize without FixBondLengths
avishart Nov 1, 2024
8d52fba
Use global random seed
avishart Nov 1, 2024
ff91c1a
Debug to README
avishart Nov 1, 2024
cc215be
Broadcast the prediction
avishart Nov 1, 2024
98a263b
Broadcast the method by using structures
avishart Nov 1, 2024
de22cfe
Minor change in plot
avishart Nov 1, 2024
35dceaa
Use print instead of parprint
avishart Nov 1, 2024
12100bd
Change default ml_steps
avishart Nov 4, 2024
d2a7e26
Debug broadcast in NEB
avishart Nov 4, 2024
60ed1c2
Change initial calculator for adsorption
avishart Nov 4, 2024
4376cf6
Debug in Mie potential name
avishart Nov 4, 2024
3cf53ed
Use extra initial data for better uncertainty in active learning
avishart Nov 4, 2024
48aa027
Changes to min_data. Use minimum 3 trained points
avishart Nov 4, 2024
40cd712
More detailts about usage of code
avishart Nov 5, 2024
99da4c6
Change default reuse_ci_path to True
avishart Nov 5, 2024
3ef9b95
Debug verbose to mlmodel
avishart Nov 5, 2024
b6d6400
Change default ml_steps in MLGO and make ml_steps for local
avishart Nov 6, 2024
fa9a419
Change kappa in BO for MLGO
avishart Nov 6, 2024
ccc636a
Correct docstrings
avishart Nov 6, 2024
28c76c6
Minor change to functions
avishart Nov 6, 2024
b4cfe1c
Enable use_derivatives in setup_mlcalc but give a warning if positive…
avishart Nov 6, 2024
1be8321
Change default kappa value
avishart Nov 7, 2024
8c6bb55
Make it possible to reuse data from previous mlcalc in setup_mlcalc
avishart Nov 7, 2024
69ec882
Chane default pdis of length
avishart Nov 8, 2024
fffd027
Change kappa to -2 for adsorption active learning
avishart Nov 8, 2024
cb9b643
Option to get the mlcalc from active learning
avishart Nov 12, 2024
477ccf8
Remove initial convergence check since it is within the loop
avishart Nov 12, 2024
0dad4f5
Minor change in initiate_structure in activelearning
avishart Nov 12, 2024
a447a93
Store the best structures after initiate check
avishart Nov 12, 2024
4db3896
Make a trajectory file with the initial structure(s)
avishart Nov 13, 2024
37c9942
Make NEB plots with mlcalc curve predictions
avishart Nov 13, 2024
afebe95
Debug restart in active learning
avishart Nov 13, 2024
2180ff2
Save properties also when ploting
avishart Nov 13, 2024
1deaf12
Changes to the README
avishart Nov 13, 2024
d3da5b8
Minor correction to README
avishart Nov 13, 2024
5451d55
Change default alpha to 0.7
avishart Nov 15, 2024
fea73c3
Changes to the README file
avishart Nov 15, 2024
41baa00
Remove unused colors in plot_all_neb
avishart Nov 15, 2024
0f155ce
Minor change in README
avishart Nov 15, 2024
2715a95
Ensure comm and other arguments can be None
avishart Nov 15, 2024
01be4c7
No need to calculate forces in simulated annealing
avishart Nov 15, 2024
d257c72
Use tolerance and bond lengths for FixBondLengths
avishart Nov 18, 2024
5897048
More documentation on NEB interpolation in MLNEB
avishart Nov 20, 2024
c83b9f8
Define optimize_hp in setup_mlcalc. Use kwargs in calc and get_defaul…
avishart Nov 22, 2024
1d730f1
Use parent parameters if not specified
avishart Nov 22, 2024
0b4b199
Copy the end structures and set default remove_rotation_and_translati…
avishart Nov 28, 2024
d63a675
Load the summary table when active learning is restarted
avishart Nov 28, 2024
df38fdd
New version
avishart Nov 29, 2024
715f36c
Set default max_unc value
avishart Dec 3, 2024
2708db7
Parallelize over moving images
avishart Dec 9, 2024
31dbc3d
Ensure any steps are left and converged is used as output in functions
avishart Dec 9, 2024
0808716
Debug parallelization of optimization methods by using the correct ranks
avishart Dec 9, 2024
93dcdc9
Edit docstrings and make updates of mlmodel possible
avishart Dec 18, 2024
36482cd
Edit docstrings and set whether to include noise in uncertainty
avishart Dec 18, 2024
0b4f541
Use the true initial energy as reference
avishart Dec 18, 2024
f27d7eb
Include noise in the uncertainty for plotting NEB band
avishart Dec 18, 2024
2356036
Only make plots on master rank
avishart Jan 7, 2025
0b50608
New version
avishart Mar 26, 2025
2b4e51a
Check if endpoints are the same in NEB
avishart Mar 26, 2025
20255f3
Make it possible to change saving and only write last structure in da…
avishart Mar 26, 2025
e6cf4b1
Better exception handling
avishart Mar 26, 2025
87f8cb4
Better exception handling
avishart Mar 26, 2025
c5d4e2a
New version
avishart Apr 1, 2025
cb81c80
Rounding targets and use data type
avishart Apr 2, 2025
6839cae
Round predictions and use data type
avishart Apr 2, 2025
93781c3
Round hyperparameters and use data type
avishart Apr 2, 2025
3bb3ead
Use data type
avishart Apr 2, 2025
ee5cc1b
Informative ml model
avishart Apr 2, 2025
8f1c1c7
A given result can be given to a copy of atoms
avishart Apr 2, 2025
8ede4b3
Use seed for optimizers
avishart Apr 4, 2025
496128d
Use seed and copy_candidates
avishart Apr 4, 2025
73a9890
Use seed and give all structures
avishart Apr 4, 2025
cc2c175
Use seed and write the full rotation matrix
avishart Apr 4, 2025
30ebc0d
Inherit update arguments
avishart Apr 7, 2025
5a5501f
Use seed for acquisition
avishart Apr 7, 2025
1b7d3e9
Use seed and make get_default_mlcalc
avishart Apr 7, 2025
b0895a0
Change docstring
avishart Apr 7, 2025
9de0331
Use seed and inherit update arguments
avishart Apr 7, 2025
f930b7b
Remove kwargs in update_arguments
avishart Apr 15, 2025
bf30edd
Minor change
avishart May 15, 2025
39fe58e
New fingerprint and baseline
avishart May 15, 2025
85869c9
Informative and rounded values in MLModel
avishart May 15, 2025
7143a1f
Use dtype, seed, update arguments, and import from scipy and numpy. D…
avishart May 15, 2025
c71fbdc
Modified tests
avishart May 15, 2025
71f7996
dtype debug
avishart May 15, 2025
1d63d07
Use item(0) instead of [0][0]
avishart May 15, 2025
b44ad6d
Use Python’s rounding for float
avishart May 15, 2025
afcb835
Use rounding in tests and test BOCalc
avishart May 15, 2025
f056107
Debug of AnneallingOptimizer
avishart May 16, 2025
7776e75
Check if the atoms in the calculator is the same
avishart May 16, 2025
deaa8fc
Update and debug tests
avishart May 19, 2025
ac5c0b7
Better docstrings
avishart May 20, 2025
a3cb1bb
Use warnings
avishart May 21, 2025
f25cc1c
Use the endpoints in the NEB interpolations and implement the born in…
avishart May 21, 2025
9917347
Remove find_mic
avishart May 21, 2025
554877f
Update to the structures and NEB
avishart May 22, 2025
bdb7cdc
New version
avishart May 23, 2025
bc8c889
Set arguments
avishart Jun 3, 2025
7abddbe
Flatten list
avishart Jun 3, 2025
0450e2d
Enable saving of model
avishart Jul 22, 2025
556594d
Write hyperparameters in solution
avishart Jul 22, 2025
0d1d0fb
Change prior distribution of hp back
avishart Jul 22, 2025
59f77db
Remove baseline_targets (bug fix), and more arguments for creating de…
avishart Jul 22, 2025
7e5e69c
Save model option
avishart Jul 23, 2025
260e06b
Remove baseline_targets (bug fix), and more arguments for creating de…
avishart Jul 23, 2025
416ecfd
Debug in reduced database, debug fingerprint for when all atoms are f…
avishart Jul 23, 2025
4f59ad9
Debug baseline for when all atoms are fixed
avishart Jul 23, 2025
4877f4a
Change parameters in baseline
avishart Jul 23, 2025
c9d93e0
Get stored correction
avishart Jul 23, 2025
5930b11
Implement new clustering method
avishart Jul 23, 2025
75fa9c6
Update test due to change in baseline parameters
avishart Jul 23, 2025
5ead323
Changes to the acquisition classes
avishart Jul 23, 2025
7060081
Debug of structures and copy of atoms
avishart Jul 23, 2025
a2139b9
New functions and options in NEB
avishart Jul 23, 2025
a9acc16
A lot of new options in active learning. Including logging of time, s…
avishart Jul 23, 2025
a536396
Option to start with or without CI.
avishart Jul 23, 2025
de8a33e
Log the true predicted energy
avishart Jul 23, 2025
c7bc2e5
Bug fix to local optimizer
avishart Jul 23, 2025
fb19c28
Minor changes to adsorption
avishart Jul 23, 2025
216adfc
Implement option argument in active learning methods
avishart Jul 23, 2025
57118ee
Implementation of new global optimization method
avishart Jul 23, 2025
cd1b8e7
Debug find_minimum_path_length
avishart Jul 24, 2025
f790e5a
Implement rattle function
avishart Jul 24, 2025
a7292a2
Use 2 initial data points
avishart Jul 24, 2025
37ad32d
Reset calculator
avishart Jul 24, 2025
4f90544
Change the b parameter for TP
avishart Jul 24, 2025
8caefb8
Update README
avishart Jul 24, 2025
3d2c41a
Change default kappa value in AdsorptionAL
avishart Jul 24, 2025
1639821
Minor correction to README
avishart Jul 24, 2025
8fa6274
Crucial debug in activelearning
avishart Jul 25, 2025
c8f68f5
Change default values in RandomAdsorption
avishart Jul 25, 2025
ed3d922
Update README for gp
avishart Jul 25, 2025
93d146a
Update and use rng seeds for tests
avishart Jul 25, 2025
64db035
Split the default model, mlmodel, and database into a new file
avishart Jul 25, 2025
caac066
Implementation of default ensemble model function
avishart Jul 25, 2025
9ecb42c
Add the option to specify default mlcalc arguments
avishart Jul 25, 2025
3a4e65a
Minor bug in setup_default_mlcalc when reuse_mlcalc_data
avishart Jul 28, 2025
7fdae39
Minor change in suggested argument
avishart Jul 28, 2025
8441c7c
Include the Python versions 3.12 and 3.13 for testing.
avishart Jul 28, 2025
395959b
New version
avishart Aug 1, 2025
d64e043
Debug point_interest
avishart Aug 1, 2025
57eb4d0
Correct scale_fmax docstring
avishart Aug 1, 2025
70ad052
Use 5 local steps in extra_initial_data
avishart Aug 1, 2025
d8fd74a
Use max steps in parallel method and not sum
avishart Aug 1, 2025
3b71d8d
Change default reuse_data_local to False
avishart Aug 1, 2025
77cbb52
Use cutoff for RepulsionCalculator as default
avishart Aug 4, 2025
1f4fbeb
Reduce scale_fmax if structure is in database
avishart Aug 4, 2025
bfac099
Implement cutoff and activation functions
avishart Aug 7, 2025
920d1df
Maximum allowed uncertainty for BOCalculator
avishart Aug 7, 2025
bea2262
Implement calculation of predicted covariance matrix
avishart Aug 7, 2025
d2157ac
Implement max_unc and trust distance for global search
avishart Aug 11, 2025
42a0547
Use last two structures in local optimization
avishart Aug 11, 2025
ba83828
Debug saving of mlcalc
avishart Aug 11, 2025
ed52f60
Implement max_unc and dtrust in global optimization together with arg…
avishart Aug 12, 2025
dddf997
Implement max_unc and dtrust in global optimization together with res…
avishart Aug 12, 2025
913b58b
Use string or bool for database_reduction
avishart Aug 12, 2025
be09d36
Do not use restart structure for adsorption with annealing as default
avishart Aug 13, 2025
6eb5742
Adapt to new changes in ASE
avishart Aug 13, 2025
f2d593c
More adoptions to new ASE version
avishart Aug 13, 2025
85fc241
More adoptions to the new ASE version
avishart Aug 13, 2025
bd99264
Adopt to new ASE version in NEB
avishart Aug 13, 2025
cb4f90e
Debug restart of last structures
avishart Aug 20, 2025
d6c6b55
New version
avishart Aug 26, 2025
78c1839
Use the maximum energy instead of the first as in literature
avishart Aug 26, 2025
cd25635
Implement AvgEWNEB as an option for neb_method
avishart Aug 26, 2025
0d4a8ba
Only use one local step in extra initial data
avishart Aug 26, 2025
ca6ee93
Add one extra argument in example
avishart Aug 26, 2025
fd2c5a6
Extra information about the NEB interpolation
avishart Sep 22, 2025
a54709d
ASE comment
avishart Sep 22, 2025
6f9ed83
Minor README correction
avishart Sep 22, 2025
2 changes: 1 addition & 1 deletion .github/workflows/test-package.yml
@@ -12,7 +12,7 @@ jobs:
strategy:
  fail-fast: false
  matrix:
    python-version: ["3.8", "3.9", "3.10", "3.11"]
    python-version: ["3.8", "3.9", "3.10", "3.11", "3.12", "3.13"]

steps:
  - uses: actions/checkout@v4
279 changes: 261 additions & 18 deletions README.md
@@ -1,32 +1,101 @@
# CatLearn

CatLearn utilieties machine learning in form of Gaussian Process or Student T process to accelerate catalysis simulations. The Nudged-elastic-band method (NEB) is accelerated with MLNEB code. Furthermore, a global adsorption search is accelerated with the MLGO code.
CalLearn uses ASE for handling the atomic systems and the calculator interface for the potential energy calculations.
CatLearn utilizes machine learning, specifically Gaussian process or Student's t process regression, to accelerate catalysis simulations.

The local optimization of a structure is accelerated with the `LocalAL` code.
The Nudged-elastic-band method (NEB) is accelerated with the `MLNEB` code.
Furthermore, a global adsorption search without local relaxation is accelerated with the `AdsorptionAL` code.
Additionally, a global adsorption search with local relaxation is accelerated with the `MLGO` code.
Finally, a random sampling of adsorbate positions, combined with local relaxation, accelerates the global adsorption search with the `RandomAdsorptionAL` code.

CatLearn uses ASE to handle the atomic systems and the ASE calculator interface for the potential energy calculations.

## Installation

You can simply install CatLearn by dowloading it from github as:
You can install CatLearn by downloading it from GitHub as:
```shell
$ git clone https://github.com/avishart/CatLearn
$ git clone --single-branch --branch activelearning https://github.com/avishart/CatLearn
$ pip install -e CatLearn/.
```

You can also install CatLearn directly from github:
You can also install CatLearn directly from GitHub:
```shell
$ pip install git@github.com:avishart/CatLearn.git
$ pip install git@github.com:avishart/CatLearn.git@activelearning
```

However, it is recommended to install a specific tag to ensure it is a stable version:
```shell
$ pip install git+https://github.com/avishart/CatLearn.git@v.x.x.x
```

The ASE dependency has only been thoroughly tested up to version 3.26.0.

## Usage
The active learning class is generalized to work with any defined optimizer method for ASE `Atoms` structures. The optimization method is executed iteratively with a machine-learning calculator that is retrained at each iteration. The active learning converges when the uncertainty is below `unc_convergence` and either the energy change is within `unc_convergence` or the maximum force is within the set tolerance.
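The convergence logic described above can be sketched as a simple predicate. This is a hypothetical helper for illustration, not CatLearn's actual implementation; the argument names mirror the parameters mentioned in this section:

```python
def al_converged(unc, energy_change, max_force, unc_convergence=0.05, fmax=0.05):
    # Converged when the predicted uncertainty is low and either the
    # energy change or the maximum force is within its tolerance.
    low_uncertainty = unc <= unc_convergence
    energy_ok = abs(energy_change) <= unc_convergence
    force_ok = max_force <= fmax
    return low_uncertainty and (energy_ok or force_ok)

print(al_converged(0.01, 0.002, 0.20))  # True: low uncertainty, small energy change
print(al_converged(0.10, 0.002, 0.01))  # False: uncertainty too high
```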

Predefined active learning methods are created: `LocalAL`, `MLNEB`, `AdsorptionAL`, `MLGO`, and `RandomAdsorptionAL`.

The following code shows how to use MLNEB:
The outputs of the active learning are `predicted.traj`, `evaluated.traj`, `predicted_evaluated.traj`, `converged.traj`, `initial_struc.traj`, `ml_summary.txt`, and `ml_time.txt`:
- The `predicted.traj` file contains the structures predicted by the machine-learning calculator after each optimization loop.
- The training data, i.e. the structures evaluated with the ASE calculator, are stored in the `evaluated.traj` file.
- The `predicted_evaluated.traj` file has the same structures as the `evaluated.traj` file, but with machine-learning predicted properties.
- The converged structures calculated with the machine-learning calculator are saved in the `converged.traj` file.
- The initial structure(s) is/are saved in the `initial_struc.traj` file.
- A summary table of the active learning is saved in the `ml_summary.txt` file.
- The time spent on structure evaluation, machine-learning training, and prediction at each iteration is stored in `ml_time.txt`.

### LocalAL
The following code shows how to use `LocalAL`:
```python
from catlearn.optimize.mlneb import MLNEB
from catlearn.activelearning.local import LocalAL
from ase.io import read
from ase.optimize import FIRE

# Load initial structure
atoms = read("initial.traj")

# Make the ASE calculator
calc = ...

# Initialize local optimization
dyn = LocalAL(
    atoms=atoms,
    ase_calc=calc,
    unc_convergence=0.05,
    local_opt=FIRE,
    local_opt_kwargs={},
    save_memory=False,
    use_restart=True,
    min_data=3,
    restart=False,
    verbose=True,
)
dyn.run(
    fmax=0.05,
    max_unc=0.30,
    steps=100,
    ml_steps=1000,
)

```

The active learning minimization can be visualized by extending the Python script with the following code:
```python
import matplotlib.pyplot as plt
from catlearn.tools.plot import plot_minimize

fig, ax = plt.subplots()
plot_minimize("predicted_evaluated.traj", "evaluated.traj", ax=ax)
plt.savefig('AL_minimization.png')
plt.close()
```

### MLNEB
The following code shows how to use `MLNEB`:
```python
from catlearn.activelearning.mlneb import MLNEB
from ase.io import read
from ase.optimize import FIRE

# Load endpoints
initial = read("initial.traj")
@@ -40,23 +109,78 @@ mlneb = MLNEB(
    start=initial,
    end=final,
    ase_calc=calc,
    interpolation="linear",
    unc_convergence=0.05,
    n_images=15,
    full_output=True,
    neb_method="improvedtangentneb",
    neb_kwargs={},
    neb_interpolation="linear",
    start_without_ci=True,
    reuse_ci_path=True,
    save_memory=False,
    parallel_run=False,
    local_opt=FIRE,
    local_opt_kwargs={},
    use_restart=True,
    min_data=3,
    restart=False,
    verbose=True,
)
mlneb.run(
    fmax=0.05,
    unc_convergence=0.05,
    max_unc=0.30,
    steps=100,
    ml_steps=1000,
)

```

The following code shows how to use MLGO:
The `MLNEB` optimization can be restarted from the last predicted path, reusing the training data, with the argument `restart=True`. Alternatively, the optimization can be restarted from the last predicted path without reusing the training data by setting `neb_interpolation="predicted.traj"`.

The obtained NEB band from the MLNEB optimization can be visualized in three ways.

The converged NEB band with uncertainties can be visualized by extending the Python script with the following code:
```python
import matplotlib.pyplot as plt
from catlearn.tools.plot import plot_neb

fig, ax = plt.subplots()
plot_neb(mlneb.get_structures(), use_uncertainty=True, ax=ax)
plt.savefig('Converged_NEB.png')
plt.close()
```

The converged NEB band can also be plotted with the predicted curve between the images by extending the script with the following code:
```python
import matplotlib.pyplot as plt
from catlearn.tools.plot import plot_neb_fit_mlcalc

fig, ax = plt.subplots()
plot_neb_fit_mlcalc(
    mlneb.get_structures(),
    mlcalc=mlneb.get_mlcalc(),
    use_uncertainty=True,
    include_noise=True,
    ax=ax,
)
plt.savefig('Converged_NEB_fit.png')
plt.close()
```

All the obtained NEB bands from `MLNEB` can also be visualized within the same figure by using the following code:
```python
import matplotlib.pyplot as plt
from catlearn.tools.plot import plot_all_neb

fig, ax = plt.subplots()
plot_all_neb("predicted.traj", n_images=15, ax=ax)
plt.savefig('All_NEB_paths.png')
plt.close()
```

### AdsorptionAL
The following code shows how to use `AdsorptionAL`:
```python
from catlearn.optimize.mlgo import MLGO
from catlearn.activelearning.adsorption import AdsorptionAL
from ase.io import read
import numpy as np

# Load the slab and the adsorbate
@@ -79,17 +203,136 @@ bounds = np.array(
)

# Initialize AdsorptionAL
mlgo = MLGO(slab, ads, ase_calc=calc, bounds=bounds, full_output=True)
dyn = AdsorptionAL(
    slab=slab,
    adsorbate=ads,
    adsorbate2=None,
    ase_calc=calc,
    unc_convergence=0.02,
    bounds=bounds,
    opt_kwargs={},
    parallel_run=False,
    min_data=3,
    restart=False,
    verbose=True,
)
dyn.run(
    fmax=0.05,
    max_unc=0.30,
    steps=100,
    ml_steps=4000,
)

```

The `AdsorptionAL` optimization can be visualized in the same way as the `LocalAL` optimization.

### MLGO
The following code shows how to use `MLGO`:
```python
from catlearn.activelearning.mlgo import MLGO
from ase.io import read
from ase.optimize import FIRE
import numpy as np

# Load the slab and the adsorbate
slab = read("slab.traj")
ads = read("adsorbate.traj")

# Make the ASE calculator
calc = ...

# Make the boundary conditions for the adsorbate
bounds = np.array(
    [
        [0.0, 1.0],
        [0.0, 1.0],
        [0.5, 1.0],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
    ]
)

# Initialize MLGO
mlgo = MLGO(
    slab=slab,
    adsorbate=ads,
    adsorbate2=None,
    ase_calc=calc,
    unc_convergence=0.02,
    bounds=bounds,
    opt_kwargs={},
    local_opt=FIRE,
    local_opt_kwargs={},
    reuse_data_local=True,
    parallel_run=False,
    min_data=3,
    restart=False,
    verbose=True,
)
mlgo.run(
    fmax=0.05,
    max_unc=0.30,
    steps=100,
    ml_steps=4000,
)

```
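The six rows of `bounds` in the examples above appear to span the adsorbate's fractional x, y, z position and three rotation angles (an assumption suggested by the 2π ranges; the exact interpretation is defined by CatLearn). Drawing one candidate placement inside these bounds can be sketched with NumPy:

```python
import numpy as np

# Hypothetical sketch: sample one candidate adsorbate placement within `bounds`.
# Rows are assumed to be fractional x, y, z and three rotation angles.
bounds = np.array(
    [
        [0.0, 1.0],
        [0.0, 1.0],
        [0.5, 1.0],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
    ]
)
rng = np.random.default_rng(seed=1)
low, high = bounds[:, 0], bounds[:, 1]
candidate = low + (high - low) * rng.random(6)
print(candidate.shape)  # (6,)
```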

The `MLGO` optimization can be visualized in the same way as the `LocalAL` optimization.

### RandomAdsorptionAL
The following code shows how to use `RandomAdsorptionAL`:
```python
from catlearn.activelearning.randomadsorption import RandomAdsorptionAL
from ase.io import read
from ase.optimize import FIRE
import numpy as np

# Load the slab and the adsorbate
slab = read("slab.traj")
ads = read("adsorbate.traj")

# Make the ASE calculator
calc = ...

# Make the boundary conditions for the adsorbate
bounds = np.array(
    [
        [0.0, 1.0],
        [0.0, 1.0],
        [0.5, 1.0],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
        [0.0, 2 * np.pi],
    ]
)

# Initialize RandomAdsorptionAL
dyn = RandomAdsorptionAL(
    slab=slab,
    adsorbate=ads,
    adsorbate2=None,
    ase_calc=calc,
    unc_convergence=0.02,
    bounds=bounds,
    n_random_draws=200,
    use_initial_opt=False,
    initial_fmax=0.2,
    use_repulsive_check=True,
    local_opt=FIRE,
    local_opt_kwargs={},
    parallel_run=False,
    min_data=3,
    restart=False,
    verbose=True,
)
dyn.run(
    fmax=0.05,
    max_unc=0.30,
    steps=100,
    ml_steps=4000,
    ml_chains=8,
    relax=True,
    local_steps=500,
)

```

The `RandomAdsorptionAL` optimization can be visualized in the same way as the `LocalAL` optimization.
2 changes: 1 addition & 1 deletion catlearn/_version.py
@@ -1,3 +1,3 @@
__version__ = "5.6.1"
__version__ = "7.2.0"

__all__ = ["__version__"]
@@ -1,5 +1,3 @@
from .mlneb import MLNEB
from .mlgo import MLGO
from .acquisition import (
Acquisition,
AcqEnergy,
@@ -13,10 +11,14 @@
AcqEI,
AcqPI,
)
from .activelearning import ActiveLearning
from .local import LocalAL
from .mlneb import MLNEB
from .adsorption import AdsorptionAL
from .mlgo import MLGO
from .randomadsorption import RandomAdsorptionAL

__all__ = [
"MLNEB",
"MLGO",
"Acquisition",
"AcqEnergy",
"AcqUncertainty",
@@ -28,4 +30,10 @@
"AcqULCB",
"AcqEI",
"AcqPI",
"ActiveLearning",
"LocalAL",
"MLNEB",
"AdsorptionAL",
"MLGO",
"RandomAdsorptionAL",
]