Loading a pre-trained model core to get responses to images #120
Hi there! Thank you for your questions. Yes, it is possible to load the pretrained model. You can just run this:
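(A sketch of how the tutorial notebooks build a model and load weights, assuming the nnfabrik builders and the sensorium model function; the dataset path, checkpoint filename, and config values below are illustrative placeholders rather than the exact ones posted here.)

```python
import torch
from nnfabrik.builder import get_data, get_model

# Build dataloaders from one downloaded dataset file (path is a placeholder).
dataset_fn = "sensorium.datasets.static_loaders"
dataset_config = {
    "paths": ["<path-to-dataset-file>.zip"],
    "normalize": True,
    "batch_size": 128,
}
dataloaders = get_data(dataset_fn, dataset_config)

# Build the model architecture; model_config should match the one used for training
# (abbreviated here), then load the released weights (filename is a placeholder).
model_fn = "sensorium.models.stacked_core_full_gauss_readout"
model_config = {
    "pad_input": False,
    "stack": -1,
    "depth_separable": True,
    # ... remaining keys as in the tutorial notebook ...
}
model = get_model(model_fn=model_fn, model_config=model_config,
                  dataloaders=dataloaders, seed=42)
model.load_state_dict(torch.load("<path-to-pretrained-checkpoint>.pth"))
model.eval()
```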
You'd need the Docker environment, unfortunately. It'd be easiest to use the Docker image and to download the data.
Thanks for the quick response! I followed the code in the notebook you recommended, but I was unable to load the initial model:
I get the error:

```
    170 # This function throws if there's a driver initialization error, no GPUs
    171 # are found or any other error occurs
--> 172 torch._C._cuda_init()
    173 # Some of the queued calls may reentrantly call _lazy_init();
    174 # we need to just return without initializing in that case.

RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
```

Do I need an NVIDIA GPU in order to run the model? Is there a switch I can throw to just run on the CPU?
Oh yes, by default all parts of the model and the dataloaders are transferred to a GPU. But you can disable it by adding …, so this would be your whole dataset_config:
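(A sketch of what that config might look like, assuming the static loaders accept a cuda flag; the key name and the other values are guesses and should be checked against the loader's signature in the repo.)

```python
from nnfabrik.builder import get_data

dataset_fn = "sensorium.datasets.static_loaders"
dataset_config = {
    "paths": ["<path-to-dataset-file>.zip"],  # placeholder
    "normalize": True,
    "batch_size": 128,
    "cuda": False,  # assumption: keep the dataloader tensors on the CPU
}
dataloaders = get_data(dataset_fn, dataset_config)
```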
That should do the trick. Everything else will be on the CPU from that point on, and you can run model inference on the CPU. In the model evaluation notebook, if you want to recreate the plots, you have to set all the …
Thanks for all your help. I believe I am close now. I am working through https://github.com/sinzlab/sensorium/blob/main/notebooks/model_tutorial/2_model_evaluation_and_inspection.ipynb, including … But when I tried to load the pre-trained weights:
I get an error having to do with missing keys when loading the state_dict:
If it is helpful, here is the model description that got printed:
I'm sorry for all of these issues! There have been a few very subtle changes to the core architecture, which also make the loaded weights a bit different, but the impact of this should be negligible. You can solve both issues, though, simply by running:
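(A sketch of the likely workaround, loading the checkpoint onto the CPU and relaxing the key matching with standard PyTorch options; whether this matches the snippet originally posted here is an assumption.)

```python
import torch

# `model` is the network built earlier (e.g. via nnfabrik's get_model).
# map_location="cpu" avoids needing a GPU; strict=False tolerates the keys that
# changed with the small architecture updates (assumption about the posted fix).
state_dict = torch.load("<path-to-pretrained-checkpoint>.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
model.eval()
```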
Hope it'll work now! You should then also be able to compute the accuracy metrics from that notebook. Let me know if the values that you are getting for one or more datasets are comparable.
No worries at all! Thanks for helping me work through them! Woohoo! It works! I did have to change your recommended code a little,
and then I needed to change the … You were correct, I had only loaded one dataset. Does that matter for using the pre-trained SOTA model? It does seem odd to require a dataset to run a model ... Otherwise, thanks for all the help! I am happy to summarize what I needed to do to load and run the SOTA model without a GPU and with a single dataset, if that would be helpful.
Is there a straightforward way to get a pre-trained model core into which I can put images (numpy arrays) and get responses out of the feature map?
I have yet to figure out how to do so. I have managed to install the code. I looked at the 0_baseline_CNN.ipynb notebook, and it appears that getting a baseline model requires training it. For the sake of saving electricity and time, I was hoping to get pre-trained weights and load them into the baseline model. Ideally, I would bypass the custom data loaders and the shifter network; if I could do this outside of the Docker environment, that would be the cherry on top. Sorry in advance if the answer was obvious!
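(For reference, given a model built and loaded as in the replies above, a rough sketch of getting feature-map responses for a numpy image; the input shape, value range, and data_key are placeholder assumptions, and this presumes the usual core/readout encoder structure.)

```python
import numpy as np
import torch

# `model` is the encoder built and loaded as in the replies above (core + readout).
# One grayscale image with batch/channel dims: (batch, channel, height, width).
# The spatial size and value range are placeholders; match the dataloader preprocessing.
image = np.random.rand(1, 1, 144, 256).astype("float32")
x = torch.from_numpy(image)

model.eval()
with torch.no_grad():
    feature_map = model.core(x)                    # responses of the core's feature map
    neurons = model(x, data_key="<dataset-key>")   # per-neuron predictions via the readout

print(feature_map.shape, neurons.shape)
```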