This repository has been archived by the owner on Mar 17, 2022. It is now read-only.
I found why it doesn't work! #16
Comments
Thanks! I would love to see a PR for this. Let me know if I can help.
Great work! Even with your solution I don't seem to get accurate results. I haven't done a direct comparison yet, I've just scanned over the results. I'm going to tinker with it a bit more before I post my model and results. I'm not really sure if it will be useful, though, since it's a pretty deep model. I can say now that the model uses ReLU activation layers, MaxPool2D, Conv2d, and BatchNorm.
In my project I used exactly these layers: Conv2d, MaxPool, BatchNorm, and ReLU activation, and I got the same output.
I'm not really sure if I've made a mistake. Do you think MaxPool2D works properly? I'll try to step through it later to see if it produces the right output.
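One way to check this (a minimal sketch, not from this thread; the input shape and pooling parameters are illustrative assumptions) is to run the same random input through both frameworks' pooling layers, transposing between PyTorch's NCHW layout and Keras's NHWC layout before comparing:

```python
# Sketch: compare a PyTorch MaxPool2d layer against Keras MaxPooling2D
# on the same random input (assumes TensorFlow 2.x with eager execution).
import numpy as np
import torch
import tensorflow as tf

x_nchw = np.random.rand(1, 3, 8, 8).astype("float32")   # PyTorch layout: NCHW

# PyTorch pooling.
pt_out = torch.nn.MaxPool2d(kernel_size=2, stride=2)(torch.from_numpy(x_nchw)).numpy()

# Keras pooling expects NHWC, so transpose the input first.
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))
tf_out = tf.keras.layers.MaxPooling2D(pool_size=2, strides=2)(x_nhwc).numpy()

# Transpose the Keras result back to NCHW and compare.
print(np.allclose(pt_out, np.transpose(tf_out, (0, 3, 1, 2)), atol=1e-6))
```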
It worked for me.
the "view" in pytorch and "flatten" in keras work differently.
To fix this problem add this layer before "Flatten":
x = Lambda(lambda x: K.permute_dimensions(x, (0, 3, 1, 2)))(x)
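For context, here is a minimal sketch of where that Lambda layer fits in a converted model (assuming tf.keras and the functional API; the architecture and layer sizes are illustrative, not the model from this repository):

```python
# Sketch of the permute-before-Flatten fix (layer sizes are made up).
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D,
                                     Lambda, Flatten, Dense)
from tensorflow.keras.models import Model

inp = Input(shape=(28, 28, 1))                    # Keras layout: NHWC
x = Conv2D(16, 3, activation="relu")(inp)
x = MaxPooling2D(2)(x)

# PyTorch's view() flattens an NCHW tensor, while Keras Flatten sees NHWC,
# so without this permute the flattened elements come out in a different
# order than the converted Dense (Linear) weights expect.
x = Lambda(lambda t: K.permute_dimensions(t, (0, 3, 1, 2)))(x)
x = Flatten()(x)
out = Dense(10, activation="softmax")(x)

model = Model(inp, out)
model.summary()
```

The permute restores NCHW order just before flattening, so the flattened vector matches the element order produced by PyTorch's view().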