When I follow the steps, the process just prints "Killed" when I run inference.
The model first crashed at the tlt-converter step with the same "Killed" output, which I fixed by adding -w 1000000.
Is this model able to run on a Jetson Nano 2GB?
It's been a while, but I can pretty much say that no, this won't run on a 2GB Nano. The "Killed" message means you've run out of RAM: the kernel's OOM killer terminated the process.
In fact, barely anything TensorRT/CUDA/GStreamer-based can run on a 2GB Nano, because the OS, nvargus-daemon, and TensorRT already consume so much RAM by default.
You could throw heaps of swap at it, but swap is so unbelievably slow that it feels pointless. The constant writes are also a good way to destroy your SD card, as I have done many times.
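If you want to confirm it really is the OOM killer (rather than some other crash), the kernel log records the kill. A quick check, assuming a standard L4T/Ubuntu setup:

```shell
#!/bin/sh
# Look for OOM-killer entries in the kernel log; reading dmesg may
# require sudo on some configurations, so failures are tolerated.
dmesg 2>/dev/null | grep -i -E "out of memory|oom-kill|killed process" || true

# Check how much memory is actually free before loading the model.
# On a 2GB Nano the "available" column is what matters.
free -h
```

If a line like "Out of memory: Killed process ..." names your inference process, RAM exhaustion is confirmed.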
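For anyone who wants to experiment with swap anyway, a file-backed swap can be set up roughly like this. The path and size below are examples, not a recommendation; every swap write hits the SD card, which is exactly what wears it out.

```shell
#!/bin/sh
# Create a 4 GB swap file (size/path are examples; needs root).
sudo fallocate -l 4G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile

# Verify the new swap device is active.
swapon --show

# To persist across reboots, append this line to /etc/fstab:
# /var/swapfile none swap sw 0 0
```

A less destructive middle ground is compressed RAM swap (zram), which JetPack ships with, since it avoids SD-card writes entirely at the cost of some CPU.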