PyTorch and TensorFlow backends pass through the CPU #1558

Open
jonasteuwen opened this issue May 1, 2020 · 4 comments

@jonasteuwen

The current implementations of the forward and backward projector wrappers in ODL pass through NumPy arrays. Are there any plans to take the GPU context of ASTRA and connect it somehow?

@ozanoktem
Contributor

This has been discussed on several occasions, especially for algorithms that want to make use of automatic differentiation in PyTorch. A closely related issue is #731 (see also #739), as well as the still-open pull request #1546.

@kohr-h
Member

kohr-h commented May 1, 2020

The main issue is that we don't have CUDA support in ODL directly, so any ODL operator will make the data go through CPU memory. Apart from the issues and PRs @ozanoktem referred to, there are two more: #1231 and #1401 (basically the same, though). But my current stance is that all the tedious work to make ODL element wrappers behave nicely is a waste of time (read: I won't do it), since I want to get away from that concept and use arrays directly, see #1475.
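For concreteness, this is roughly the roundtrip that any NumPy-backed operator wrapper forces on a CUDA tensor today (a sketch, not ODL's actual wrapper code; `op` stands in for an arbitrary ODL operator):

```python
import numpy as np
import torch

def apply_numpy_operator(op, x_gpu):
    """Apply a NumPy-based operator `op` to a CUDA tensor `x_gpu`."""
    # 1. Device -> host copy, detached from the autograd graph.
    x_np = x_gpu.detach().cpu().numpy()
    # 2. The operator works on CPU memory, even if ASTRA moves the data
    #    back to the GPU internally for the projection itself.
    y_np = np.asarray(op(x_np))
    # 3. Host -> device copy to get back into PyTorch.
    return torch.from_numpy(y_np).to(x_gpu.device)
```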

So yes, there are plans, but at least for me the order is #1475, then #1401. If anyone else would like to give it a try independently, go ahead. But it's not trivial.

In the short run, @jonasteuwen, you could hack something together yourself by pulling out the Operator-specific parts of OperatorFunction and implementing the calls to ASTRA directly.
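A minimal sketch of that hack: keep the autograd plumbing that OperatorFunction provides, but call ASTRA yourself. The helpers `astra_forward` and `astra_backward` below are placeholders for whatever ASTRA calls you end up using (they are not part of ASTRA's or ODL's API), geometry handling is omitted, and the data still passes through NumPy here; a true GPU-to-GPU path would additionally need ASTRA's direct GPU memory interface.

```python
import numpy as np
import torch

def astra_forward(volume_np):
    """Placeholder: forward projection via ASTRA (e.g. a GPU algorithm)."""
    raise NotImplementedError

def astra_backward(sino_np):
    """Placeholder: back-projection via ASTRA, used as the adjoint."""
    raise NotImplementedError

class AstraRayTrafo(torch.autograd.Function):
    """Forward projection whose gradient is the back-projection (adjoint)."""

    @staticmethod
    def forward(ctx, volume):
        vol_np = volume.detach().cpu().numpy()
        sino_np = np.asarray(astra_forward(vol_np))
        return torch.from_numpy(sino_np).to(volume.device)

    @staticmethod
    def backward(ctx, grad_output):
        grad_np = grad_output.detach().cpu().numpy()
        vol_grad_np = np.asarray(astra_backward(grad_np))
        return torch.from_numpy(vol_grad_np).to(grad_output.device)

# Usage: sino = AstraRayTrafo.apply(volume_tensor)
```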

@jonasteuwen
Author

I guess #1401 would likely bring it much closer, as it seems possible to go back and forth between CuPy and PyTorch.

@kohr-h
Member

kohr-h commented May 1, 2020

Definitely, you can just hand over device memory pointers.
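For reference, the CuPy/PyTorch handover can already be done without copies via DLPack (a sketch; API names are from roughly 2020-era CuPy and PyTorch, so check your installed versions):

```python
import cupy as cp
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# PyTorch CUDA tensor -> CuPy array, sharing the same device memory.
t = torch.arange(6, dtype=torch.float32, device='cuda').reshape(2, 3)
c = cp.fromDlpack(to_dlpack(t))

# CuPy array -> PyTorch tensor, again without a copy.
c2 = cp.ones((2, 3), dtype=cp.float32)
t2 = from_dlpack(c2.toDlpack())

# Each pair views the same memory, so in-place changes are visible on both sides.
```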
