We've often been asked how to hand data back and forth between Dask and MPI applications. At least to start, this is likely to be handled on an application-by-application basis. It would be nice to provide an example of doing this that others can copy.
As an example application it might be nice to use Elemental, a well-respected distributed array library.
There are probably a few ways to do this. In all cases you'll probably need to start the Dask workers within an MPI comm world, probably using mpi4py. You might then do a little pre-processing with Dask, persist a distributed dask array, and then run a function on each of the workers that takes the local numpy chunks held in the worker's `.data` attribute and either starts some other MPI work or `MPI.send`s that data to the appropriate MPI rank running some other code. You'll then either need to wait for that function to return a bunch of numpy arrays or else `MPI.recv` them from other ranks. Then you can arrange those blocks back into a dask.array (perhaps by putting them into a Queue and letting some other client handle the coordination). A rough sketch of the send-side half of this workflow follows.
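Here is a minimal sketch of that idea, not a vetted recipe. It assumes the job is launched with `mpirun` so every process is an MPI rank, and that dask-mpi's `initialize()` is one convenient way to stand up the scheduler and workers inside that same `MPI_COMM_WORLD`. The array shape, the chunking, the destination rank (0), and the message tag (17) are all illustrative placeholders, and it assumes some other MPI rank is posting matching receives so the sends don't block forever.

```python
import dask.array as da
from dask_mpi import initialize
from dask.distributed import Client, wait

initialize()          # launched under mpirun: some ranks become scheduler/workers
client = Client()     # this rank connects to the scheduler started by initialize()

# A little pre-processing with Dask, then persist so the chunks live on workers.
x = da.random.random((4000, 4000), chunks=(1000, 1000))
x = (x + x.T).persist()
wait(x)


def hand_off_local_chunks(dask_worker):
    """Runs on each Dask worker: walk the worker's local data and MPI-send every
    numpy chunk to the rank running the consuming MPI code (rank 0 here,
    purely as a placeholder)."""
    from mpi4py import MPI    # import inside: this executes in the worker process
    import numpy as np

    comm = MPI.COMM_WORLD
    sent = []
    for key, value in dask_worker.data.items():
        if isinstance(value, np.ndarray):
            comm.send((key, value), dest=0, tag=17)   # tag 17 is arbitrary
            sent.append(key)
    return sent


# Client.run executes the function on every worker; an argument literally named
# `dask_worker` is filled in with the Worker instance, giving access to .data.
sent_keys = client.run(hand_off_local_chunks)
```

The return path could presumably mirror this: a worker-side function that calls `comm.recv` for the blocks destined for that rank and returns the numpy arrays, which the client could then reassemble into a dask.array with something like `da.block` or `da.from_delayed`.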