I am doing numerical integration with JAX and am looking for advice on memory optimization for importance sampling. I generate roughly 10^5 samples and use their weights relative to the underlying distribution, but these weight arrays consume substantial GPU memory and frequently cause out-of-memory errors. Since the weights stay constant throughout the computation, I am exploring memory management strategies. My main question is whether CUDA constant memory could be used to store and access these arrays efficiently, easing the pressure on GPU memory. Any insights or recommendations would be appreciated.
I have used the following formalism:
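That is, the standard importance-sampling estimator, with samples $x_i$ drawn from a proposal $q$ and constant weights $w_i = p(x_i)/q(x_i)$ taken with respect to the target density $p$:

$$
\int f(x)\, p(x)\, \mathrm{d}x \;\approx\; \frac{1}{N} \sum_{i=1}^{N} w_i\, f(x_i),
\qquad x_i \sim q, \quad w_i = \frac{p(x_i)}{q(x_i)} .
$$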
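To make the setup concrete, here is a minimal JAX sketch of the computation described above; the integrand and the Gaussian target/proposal densities are placeholders standing in for my actual functions:

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def integrand(x):
    # Placeholder integrand f(x); my real integrand goes here.
    return jnp.sin(x) ** 2

def target_logpdf(x):
    # Placeholder target density p(x): standard normal.
    return norm.logpdf(x, loc=0.0, scale=1.0)

def proposal_logpdf(x):
    # Placeholder proposal density q(x): wider normal, easy to sample.
    return norm.logpdf(x, loc=0.0, scale=3.0)

key = jax.random.PRNGKey(0)
n_samples = 10**5

# Draw samples x_i ~ q from the proposal.
samples = 3.0 * jax.random.normal(key, (n_samples,))

# Constant importance weights w_i = p(x_i) / q(x_i), computed once
# and reused unchanged for the rest of the computation.
weights = jnp.exp(target_logpdf(samples) - proposal_logpdf(samples))

# Importance-sampling estimate of the integral of f(x) p(x) dx.
estimate = jnp.mean(weights * integrand(samples))
print(estimate)
```

The `weights` array here is exactly the constant data I would like to keep resident without it eating into general GPU memory.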