Do not load model if it's already loaded #27

Open

gibiansky wants to merge 2 commits into base: master
Conversation

gibiansky

Not only is reloading the model for every file slow, it will also break on GPUs, because TensorFlow does not release memory it has already allocated. If you run a catalog with multiple files on a GPU, the first file will succeed and the second will fail with an OOM error.

@gibiansky
Author

I'm not totally sure how this interacts with multiple processes, though, or whether it creates some sort of race condition. At first glance it looks fine, but I haven't thought about it deeply.

I made this PR because I hit this issue myself and the change here fixed it, so I figured I'd put it up in case someone else runs into the same problem.

Otherwise using this on a GPU fails, because TensorFlow allocates essentially all of the GPU memory and doesn't release it.