Discussion:
[PyCUDA] Pycuda multiple gpus
Irving Enrique Reyna Nolasco
2016-08-24 10:42:25 UTC
I am a student in physics, and I am pretty new
to PyCUDA. Currently I am interested in finite volume methods running on
multiple GPUs in a single node. I have not found relevant documentation
on this issue, specifically how to communicate between different contexts
or how to run the same kernel on different devices at the same time.
Could you suggest some literature/documentation about that?

Regards
Andreas Kloeckner
2016-08-24 14:41:22 UTC
Post by Irving Enrique Reyna Nolasco
I am a student in physics, and I am pretty new
to PyCUDA. Currently I am interested in finite volume methods running on
multiple GPUs in a single node. I have not found relevant documentation
on this issue, specifically how to communicate between different contexts
or how to run the same kernel on different devices at the same time.
Could you suggest some literature/documentation about that?
I think the common approach is to have multiple (CPU) threads and have
each thread manage one GPU. Less common (but also possible, if
cumbersome) is to only use one thread and switch contexts. (FWIW,
(Py)OpenCL makes it much easier to talk to multiple devices from a
single thread.)
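A minimal sketch of the one-thread-per-GPU pattern (the names `run_per_device`,
`gpu_worker`, and `demo_worker` are illustrative, not part of PyCUDA; the PyCUDA
calls in the comments require a CUDA-capable machine, so the runnable scaffold
below uses a stand-in worker):

```python
import threading

def run_per_device(device_ids, worker):
    """Launch one CPU thread per device; each thread owns its device's context."""
    results = {}
    threads = [threading.Thread(target=worker, args=(d, results))
               for d in device_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# With PyCUDA, each thread's worker would create (and own) its device's context:
#
#   import pycuda.driver as cuda
#   def gpu_worker(device_id, results):
#       cuda.init()
#       ctx = cuda.Device(device_id).make_context()  # context is now current
#       try:
#           ...  # compile a SourceModule, launch kernels, copy results back
#       finally:
#           ctx.pop()  # release the context before the thread exits
#
# The single-thread variant instead creates one context per device up front
# and switches with ctx.push() / ctx.pop() around each device's work.

# Stand-in worker so the threading scaffold itself runs anywhere:
def demo_worker(device_id, results):
    results[device_id] = device_id * 10

print(run_per_device([0, 1], demo_worker))
```

Because each context is created and popped by the thread that uses it, no
cross-thread context juggling is needed.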

Lastly, if you're thinking of scaling up, you could just have one MPI
rank per device.
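A sketch of the rank-to-device mapping for that approach (assumes mpi4py and
PyCUDA; `device_for_rank` and the script name are illustrative, and the mapping
assumes ranks fill each node in order):

```python
def device_for_rank(rank, gpus_per_node):
    """Map an MPI rank to a GPU index on its node."""
    return rank % gpus_per_node

# In the actual script, launched with one rank per GPU, e.g.
#   mpirun -np 4 python solver.py
#
#   from mpi4py import MPI
#   import pycuda.driver as cuda
#   cuda.init()
#   rank = MPI.COMM_WORLD.rank
#   ctx = cuda.Device(device_for_rank(rank, cuda.Device.count())).make_context()
#   try:
#       ...  # each rank runs the same kernel on its own device;
#            # halo exchange between subdomains goes over MPI messages
#   finally:
#       ctx.pop()

print(device_for_rank(5, 4))  # 1
```

This keeps each process single-context, and the same code scales from one node
to many.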

Hope that helps,
Andreas
