Thanks for your advice, @junyixu! You mentioned Jacobian matrices - are they the only part of the computation that could benefit from automatic differentiation on the GPU?
huiyuxie changed the title from "Integrate TrixiGPU.jl with Enzyme.jl for Differentiable Programming" to "Integrate TrixiCUDA.jl with Enzyme.jl for Differentiable Programming" on Nov 5, 2024.
Some potential benefits: when computing large-scale Jacobians, we avoid data-transfer overhead between CPU and GPU. In DG methods with on the order of 100k elements, the Jacobian becomes too large to store in CPU memory. If we compute derivatives directly on the GPU, we can use matrix-free methods and form Jacobian-vector products on the fly as needed (see the sketch below).
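Here is a minimal sketch of that matrix-free idea, assuming a toy element-wise RHS (`rhs_kernel!` and `jvp_kernel!` are hypothetical stand-ins, not TrixiCUDA.jl code). It uses Enzyme's kernel-level entry point `autodiff_deferred` inside a `@cuda` kernel, following the pattern in the Enzyme.jl GPU docs; the exact call signature may differ across Enzyme.jl versions:

```julia
using CUDA, Enzyme

# Toy stand-in for a DG RHS: du[i] = -u[i]^2 (the real rhs! would come from TrixiCUDA.jl)
function rhs_kernel!(du, u)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(u)
        @inbounds du[i] = -u[i]^2
    end
    return nothing
end

# Forward-mode AD inside the kernel: seeding direction v on u yields jv = J * v,
# so the Jacobian is applied on the fly and never materialized.
function jvp_kernel!(du, jv, u, v)
    Enzyme.autodiff_deferred(Forward, Const(rhs_kernel!), Const,
                             Duplicated(du, jv), Duplicated(u, v))
    return nothing
end

n  = 1024
u  = CUDA.rand(n)
v  = CUDA.ones(n)    # direction vector for the Jacobian-vector product
du = CUDA.zeros(n)
jv = CUDA.zeros(n)   # receives J * v

@cuda threads=256 blocks=cld(n, 256) jvp_kernel!(du, jv, u, v)
# For this toy RHS, jv should equal -2 .* u .* v
```

Products like this plug straight into matrix-free Krylov solvers, so the Jacobian never has to leave the GPU (or be stored at all).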
This is especially useful for optimization problems - e.g., studying how inlet conditions affect shock positions, or doing mesh adaptation, where gradients are needed frequently. Computing them directly on the GPU is much faster than shuffling data back and forth; a reverse-mode sketch follows.
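For the optimization use case, here is a hedged CPU-side sketch of reverse mode (`objective` below is a made-up stand-in for a shock-position functional driven by an inlet parameter `p`, not anything from Trixi.jl):

```julia
using Enzyme

# Hypothetical objective: a mock "solution" driven by an inlet parameter p,
# reduced to a scalar standing in for a shock-position functional.
function objective(p)
    s = 0.0
    for i in 1:100
        u = p * sin(i / 10)  # mock solution value at "node" i
        s += u^2
    end
    return s
end

# Reverse mode yields dJ/dp in a single sweep, regardless of how many
# state values feed into the objective.
dJdp = Enzyme.autodiff(Reverse, objective, Active, Active(2.0))[1][1]
```

The same reverse sweep would run on the GPU version once the RHS kernels are Enzyme-compatible, which is exactly what this integration is about.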
It is trickier to implement than on the CPU, though, and it requires more careful memory management; some existing CPU code might need rewriting.
I propose integrating the GPU version of Trixi.jl with Enzyme.jl for differentiable programming.
Benefits: avoiding CPU-GPU transfer overhead for large Jacobians, enabling matrix-free (Jacobian-free) methods, and providing fast gradients for optimization and mesh adaptation, as discussed above.
Note: Jacobian matrices computed on CPU and GPU may differ slightly due to floating-point effects: parallel reductions reorder operations, GPUs use fused multiply-add instructions, and default precisions may differ (e.g., Float32 on GPU). A small demonstration follows.
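To illustrate the reordering point, a small demonstration (no Trixi.jl involved): the same reduction computed on CPU and GPU can differ in the last bits because the summation order differs.

```julia
using CUDA

x = rand(Float32, 10^7)
cpu_sum = sum(x)            # (mostly) sequential/pairwise reduction on the CPU
gpu_sum = sum(CuArray(x))   # tree-shaped parallel reduction on the GPU
println(abs(cpu_sum - gpu_sum))  # typically nonzero: same data, different order
```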