Replies: 3 comments
-
Hello, I am having the same error when I tried to run an NPT test using the deployed model.
-
Hi all,
Please ensure you are using the latest nequip and pair_allegro on their main branches, and that you have set default_dtype: float64, model_dtype: float32.
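In a training YAML those two settings would sit roughly as follows; only the two keys and their values are taken from this thread, the rest of the config is omitted, and the inline comments reflect my understanding of what each key controls:

    default_dtype: float64   # dtype used for data and reference quantities
    model_dtype: float32     # dtype the model itself computes in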
-
Hello Alby,
The model I wanted to deploy was trained with default_dtype: float32. For model_dtype I did not indicate any specific option. What do I do in this case? (One way to check what the deployed file actually records is sketched after the quoted reply below.)
…On Fri, Sep 20, 2024 at 5:00 AM Alby M. ***@***.***> wrote:
Hi all,
Please ensure you are using the latest nequip and pair_allegro on their main branches, and that you have set default_dtype: float64, model_dtype: float32.
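A minimal sketch for checking this, assuming a reasonably recent nequip that stores default_dtype / model_dtype as metadata inside the deployed TorchScript file (the file name is a placeholder; if I recall correctly, nequip-deploy info prints the same information):

    # Sketch: read the dtype metadata recorded in a deployed nequip/Allegro model.
    # Keys that are absent from older deployed files are simply left empty.
    import torch

    extra = {"default_dtype": "", "model_dtype": "", "nequip_version": ""}
    torch.jit.load("deployed.pth", map_location="cpu", _extra_files=extra)
    for key, value in extra.items():
        print(key, value.decode() if isinstance(value, bytes) else value)

If it only reports float32 (or no model_dtype at all), that would be consistent with the advice above to retrain or redeploy with default_dtype: float64, model_dtype: float32.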
-
Hello everyone,
I successfully trained a model using Allegro and was able to deploy it. The float type I used in my training was float32.
I was also able to compile LAMMPS with pair_allegro as described at https://github.com/mir-group/pair_allegro and generated the executable. However, when I tested the deployed model I got the error below:
Exception: expected scalar type Double but found Float
Exception raised from check_type at aten/src/ATen/core/TensorMethods.cpp:12 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x96 (0x2ad3cef7b156 in /home/sogenyi/miniforge3/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x2ad3cef29d80 in /home/sogenyi/miniforge3/lib/python3.10/site-packages/torch/lib/libc10.so)
frame #2: + 0x3640292 (0x2ad3d3c3a292 in /home/sogenyi/miniforge3/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #3: double* at::TensorBase::mutable_data_ptr() const + 0x43 (0x2ad3d3c3b343 in /home/sogenyi/miniforge3/lib/python3.10/site-packages/torch/lib/libtorch_cpu.so)
frame #4: at::TensorAccessor<double, 2ul, at::DefaultPtrTraits, long> at::TensorBase::accessor<double, 2ul>() const & + 0x3e (0x64d4ae in /home/sogenyi/ALLEGRO/lammps/build/lmp)
frame #5: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x89e696]
frame #6: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x5db3aa]
frame #7: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x56384e]
frame #8: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x450040]
frame #9: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x450396]
frame #10: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x44150d]
frame #11: __libc_start_main + 0xf5 (0x2ad3e8a0ec05 in /lib64/libc.so.6)
frame #12: /home/sogenyi/ALLEGRO/lammps/build/lmp() [0x442c50]
Please, how can I fix this problem?
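For context, the deployed model is invoked from the LAMMPS input roughly along the lines of the pair_allegro README example; the file name and element names here are placeholders:

    pair_style allegro
    pair_coeff * * deployed.pth Si O   # placeholder model file and type names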