Integration of OpenMM #87

Merged · 23 commits from `openmm-features` into `main` · Apr 8, 2024

Changes shown from 18 of the 23 commits.

Commits
8c9981c
Miscellaneous fixes
JMorado Mar 8, 2024
1f88323
Updated environment_mace.yml
JMorado Mar 8, 2024
27f9f72
Refactored md_openmm.py
JMorado Mar 8, 2024
c573cc5
Added quotes to type hints
JMorado Mar 25, 2024
45845ed
Fixed issues with pre-commit hooks
JMorado Mar 25, 2024
5e532b7
Added option to select OpenMM Platform to use
JMorado Mar 25, 2024
75d04e9
Decorated _run_mlp_md_openmm so that it runs in a temp dir
JMorado Mar 25, 2024
5c8912d
Added restart of OpenMM simulation
JMorado Mar 25, 2024
36823ca
Added quotes to type hints
JMorado Mar 25, 2024
ff4a503
Removed packages from the dependencies to avoid conflicting builds
JMorado Apr 2, 2024
a308887
Remove CUDA from available_platform if a GPU is not available
JMorado Apr 2, 2024
236c21c
Fixed foreach argument (pytorch/pytorch#110940)
JMorado Apr 2, 2024
93fabdc
Ruff fixes
JMorado Apr 2, 2024
3aa6374
Merge branch 'main' into openmm-features
JMorado Apr 2, 2024
801bdaf
Updated README.md
JMorado Apr 3, 2024
cca9c12
Updated md_openmm
JMorado Apr 3, 2024
854ac3f
Added OpenMM tests
JMorado Apr 3, 2024
9e3f29d
Skip tests that start with 'test_openmm' for the GAP CI run
JMorado Apr 3, 2024
2e19e66
Added checks to AL settings before it starts
JMorado Apr 8, 2024
05ef6c7
Merge branch 'openmm-features' of https://github.com/JMorado/mlp-trai…
JMorado Apr 8, 2024
1508641
Added checks to ensure only MACE can be used when using OpenMM
JMorado Apr 8, 2024
d8ffcee
Ruff linting
JMorado Apr 8, 2024
fb54bd1
Merge branch 'main' into openmm-features
JMorado Apr 8, 2024
2 changes: 1 addition & 1 deletion .github/workflows/pytest.yml
@@ -38,7 +38,7 @@ jobs:
run: ./install_gap.sh

- name: Test basic install
run: pytest --cov
run: pytest --cov -k "not test_openmm"

- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v4
9 changes: 7 additions & 2 deletions README.md
@@ -33,14 +33,19 @@ Each model is installed into individual conda environment:

- Units are: distance (Å), energy (eV), force (eV Å$`^{-1}`$), time (fs)

## Using with OpenMM (Experimental!)
## Using with OpenMM

The OpenMM backend only works with MACE at the moment. The necessary dependencies are installed automatically via conda:

```console
```
./install_mace.sh
```

Depending on your machine, you might need to prefix the command above with something like `CONDA_OVERRIDE_CUDA="11.2"` in two scenarios:

- To ensure an environment that is compatible with your CUDA driver.
- To force CUDA builds to be installed, even if the installation is being done from a CPU-only machine. This is typical in a situation where you are installing from a head node without GPUs but intend to run on GPUs and want to install the CUDA builds.

You should now be able to run `water_openmm.py` in `./examples` or run the jupyter notebook on Google Colab [`water_openmm_colab.ipynb`](./examples/water_openmm_colab.ipynb).

You can use OpenMM during active learning by passing the keyword argument `md_program="OpenMM"` to the `al_train` method.
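As a concrete illustration of the `CONDA_OVERRIDE_CUDA` prefix that the README addition above describes (the CUDA version here is the README's own example; match it to your driver):

```shell
# Force conda to resolve CUDA builds of the MACE/OpenMM stack,
# even when installing from a CPU-only head node.
CONDA_OVERRIDE_CUDA="11.2" ./install_mace.sh
```

`CONDA_OVERRIDE_CUDA` sets conda's `__cuda` virtual package, so the solver behaves as if that CUDA driver version were present on the installing machine.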
12 changes: 1 addition & 11 deletions environment_mace.yml
@@ -24,17 +24,7 @@ dependencies:
- openmm
- openmm-torch
- nnpops
# MACE dependencies
- pytorch=2.0
- torchvision
- torchaudio
- torch-ema
# TODO: You might also need CUDA-specific libraries,
# but that depends on CUDA version
# https://pytorch.org/get-started/locally/
# - pytorch-cuda=11.8
# - pytorch-cuda=12.1
- pip:
- mace-torch
- openmmml@git+https://github.com/openmm/openmm-ml.git@main
- ase@git+https://gitlab.com/ase/ase.git@f2615a6e9a # For PLUMED
- mace-torch
3 changes: 3 additions & 0 deletions mlptrain/potentials/mace/mace.py
@@ -689,6 +689,9 @@ def opt_param_options(self) -> Dict:
],
lr=self.args.lr,
amsgrad=Config.mace_params['amsgrad'],
foreach=False
if torch.get_default_dtype() == torch.float64
else True,
)

return param_options
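The `foreach` argument in the hunk above works around pytorch/pytorch#110940, where the multi-tensor ("foreach") optimizer path misbehaved for float64 parameters, which matters because MACE is often trained in double precision. A minimal standalone sketch of the same decision logic (the helper name is ours, not mlp-train's, and it takes the dtype as a string so it runs without torch):

```python
def adam_foreach_flag(default_dtype: str) -> bool:
    """Value to pass as ``foreach`` to torch.optim.Adam.

    The multi-tensor ("foreach") Adam implementation had problems with
    float64 tensors (pytorch/pytorch#110940), so it is only enabled
    when the default dtype is something other than float64.
    """
    return default_dtype != "float64"
```

In the actual diff the condition is written inline against `torch.get_default_dtype() == torch.float64`, which is the live equivalent of the string comparison here.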
13 changes: 10 additions & 3 deletions mlptrain/sampling/md.py
@@ -557,17 +557,24 @@ def add_momenta(idx, vector, energy):


def _initialise_traj(
ase_atoms: 'ase.atoms.Atoms', restart: bool, traj_name: str
ase_atoms: 'ase.atoms.Atoms',
restart: bool,
traj_name: str,
remove_last: bool = True,
) -> 'ase.io.trajectory.Trajectory':
"""Initialise ASE trajectory object"""

if not restart:
traj = ASETrajectory(traj_name, 'w', ase_atoms)

else:
# Remove the last frame to avoid duplicate frames
previous_traj = ASETrajectory(traj_name, 'r', ase_atoms)
previous_atoms = previous_traj[:-1]

if remove_last:
# Remove the last frame to avoid duplicate frames
previous_atoms = previous_traj[:-1]
else:
previous_atoms = previous_traj

os.remove(traj_name)

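The new `remove_last` switch in `_initialise_traj` exists because, on restart, the continued simulation re-writes the frame it starts from; dropping the previous trajectory's final frame avoids that duplicate. A sketch of the same branch using a plain list in place of an ASE `Trajectory` (the helper name is illustrative only):

```python
def frames_to_keep(previous_frames, restart, remove_last=True):
    """Mirror _initialise_traj's restart logic with plain lists.

    On restart, optionally drop the final frame so it is not stored
    twice once the continued run appends its own first frame.
    """
    if not restart:
        return []  # fresh trajectory: start with no frames
    if remove_last:
        return previous_frames[:-1]  # avoid a duplicated frame
    return list(previous_frames)
```

Exposing `remove_last` as a keyword lets callers whose restart path does not re-write the last frame (e.g. the OpenMM restart added in this PR) keep the full previous trajectory.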