PyTorch and TensorFlow backends pass through the CPU #1558
Current implementations of the forward and backward projector wrappers in ODL pass through NumPy arrays. Are there any plans to take the GPU context of ASTRA and connect it somehow?

Comments
The main issue is that we don't have CUDA support in ODL directly, so any ODL operator will make the data go through CPU memory. Apart from the issues and PRs @ozanoktem referred to, there are two more: #1231 and #1401 (basically the same, though). But my current stance is that all the tedious work to make ODL element wrappers behave nicely is a waste of time (read: I won't do it), since I want to get away from that concept and use arrays directly, see #1475. So yes, there are plans, but at least for me the order is #1475, then #1401. If anyone else would like to give it a try independently, go ahead. But it's not trivial. In the short run, @jonasteuwen, you could hack something yourself by pulling out the …
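For a concrete picture of the pass-through described above, here is a minimal sketch (assuming `odl` and `torch` with CUDA are available; the identity operator is only a stand-in for a real projector, not the actual wrapper code):

```python
import torch
import odl

# Any ODL space/operator pair; IdentityOperator stands in for a projector.
space = odl.uniform_discr([0, 0], [1, 1], (128, 128))
op = odl.IdentityOperator(space)

x_gpu = torch.rand(128, 128, device="cuda")

# A GPU -> CPU copy is forced here, since ODL elements wrap NumPy arrays.
x_elem = space.element(x_gpu.cpu().numpy())
y_elem = op(x_elem)

# ... and a second copy moves the result back to the GPU afterwards.
y_gpu = torch.from_numpy(y_elem.asarray()).to("cuda")
```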
I guess #1401 would likely bring it much closer, as there seems to be a back and forth possible between cupy <-> pytorch.
Definitely, you can just hand over device memory pointers.
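As a rough sketch of that hand-over via DLPack (assuming CuPy and PyTorch on the same CUDA device; the exact helper names have shifted across releases):

```python
import cupy as cp
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# CuPy -> PyTorch: both sides share the same device buffer, no copy.
x_cp = cp.arange(10, dtype=cp.float32)
x_torch = from_dlpack(x_cp.toDlpack())

# PyTorch -> CuPy: again zero-copy, via a DLPack capsule.
y_torch = torch.ones(10, device="cuda")
y_cp = cp.fromDlpack(to_dlpack(y_torch))

# The two views alias the same memory, so writes are visible on both sides.
x_torch += 1
assert float(x_cp[0]) == 1.0
```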