Clarification on fine-tuning with MACE-mp-0 large vs. mace-mp-0b medium models #765

Answered by ilyes319
AlghamdiNada asked this question in Q&A

Hey @AlghamdiNada, sorry for the delay.

The reason is that it is important to use actual E0s from DFT for multihead finetuning. The original MP-0 models were trained with E0s estimated as averages over the dataset rather than from DFT single-point calculations. We changed that starting with MP-0b and the subsequent models. If you want to finetune a model now, I would recommend using our newest MPA-0 model, which you can download from https://github.com/ACEsuit/mace-mp/releases/tag/mace_mpa_0. It is suitable for both multihead replay and normal finetuning and is quite accurate.
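
A minimal sketch of loading a downloaded MPA-0 checkpoint as an ASE calculator via `mace.calculators.mace_mp`. The file name `mace-mpa-0-medium.model` is a placeholder for whatever you saved from the release page, not a fixed name:

```python
from mace.calculators import mace_mp
from ase.build import molecule

# Load the downloaded MPA-0 checkpoint as an ASE calculator.
# The model path below is a placeholder for the file saved from the release page.
calc = mace_mp(
    model="mace-mpa-0-medium.model",  # local path to the MPA-0 checkpoint (placeholder)
    device="cpu",                     # or "cuda" if a GPU is available
    default_dtype="float64",
)

# Example use: attach the calculator to an ASE Atoms object and evaluate the energy.
atoms = molecule("H2O")
atoms.calc = calc
print(atoms.get_potential_energy())
```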

For the multihead finetuning, I recommend you use the latest main branch and try different values of --num_samples_pt, from 100 to 100,000.…
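
For reference, here is an illustrative multihead fine-tuning invocation assembled as a Python command list. Only --num_samples_pt comes from the answer above; the other flag names, file paths, and E0 values are assumptions or placeholders and should be checked against `mace_run_train --help` on the current main branch:

```python
import subprocess

# Sketch of a multihead fine-tuning run. Flag names other than --num_samples_pt,
# the file paths, and the E0 values are assumptions/placeholders, not verified settings.
cmd = [
    "mace_run_train",
    "--name", "mpa0_finetune",
    "--foundation_model", "mace-mpa-0-medium.model",  # downloaded MPA-0 checkpoint (placeholder path)
    "--multiheads_finetuning", "True",
    "--train_file", "my_dft_data.xyz",                # your DFT fine-tuning set (placeholder path)
    "--E0s", "{1: -13.57, 8: -2041.36}",              # illustrative DFT isolated-atom energies, not real values
    "--num_samples_pt", "10000",                       # sweep this from 100 up to 100,000
]
subprocess.run(cmd, check=True)
```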
