
Jetson Nano 2GB #13

Open
vladimirjankov opened this issue Jul 23, 2021 · 2 comments

Comments

@vladimirjankov

When I follow the steps, I just get "Killed" after I run inference.
The model first crashed at the tlt-converter step with the output "Killed", which I fixed by adding `-w 1000000`.
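For context, the `-w` flag of `tlt-converter` caps the maximum TensorRT builder workspace size in bytes, so a smaller value lowers peak memory at the cost of conversion speed and optimization quality. A rough sketch of such an invocation (the key, output node, and model name here are placeholders, not values from this issue):

```shell
# -w 1000000 (~1 MB) is a very small workspace cap; it reduces peak RAM
# during engine building, which is why it avoided the earlier "Killed".
./tlt-converter -k <encryption_key> \
                -w 1000000 \
                -o output_node \
                model.etlt
```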

Is this model able to run on Jetson Nano 2gb?

@vladimirjankov
Author

If someone can give me clarification, I would appreciate it. Thanks in advance.

@atoaster

Hi @vladimirjankov

It's been a while, but I can say with reasonable confidence that no, this won't run on a 2GB Nano. The "Killed" message means the Linux out-of-memory (OOM) killer terminated the process because you ran out of RAM.

In fact, very little TensorRT/CUDA/GStreamer-based software can run on a 2GB Nano, because the OS, nvargus-daemon, and TensorRT consume so much RAM by default.
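You can see how little headroom is actually left before launching anything; these are standard Linux commands (including on JetPack), not anything Jetson-specific:

```shell
# MemAvailable is the kernel's estimate of memory usable without swapping
grep MemAvailable /proc/meminfo

# Human-readable overview of RAM and swap usage
free -h
```

If `MemAvailable` is already down to a few hundred MB at idle, a multi-GB TensorRT engine has nowhere to go.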

You could add a lot of swap, but swap on an SD card is so slow that it largely defeats the purpose. Heavy swapping is also a good way to destroy your SD card, as I have done many times.
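For completeness, the usual way to add a swap file on the Nano's Ubuntu-based image is something like the following (the 4 GB size is illustrative, root is required, and the SD-card wear caveat above very much applies):

```shell
sudo fallocate -l 4G /swapfile   # create a 4 GB file to back the swap
sudo chmod 600 /swapfile         # restrict permissions, as mkswap requires
sudo mkswap /swapfile            # format the file as swap space
sudo swapon /swapfile            # enable it for the current boot
free -h                          # verify the new swap shows up
```

To make it persist across reboots you would also add a `/swapfile none swap sw 0 0` line to `/etc/fstab`.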
