MPI.jl or pmap Lecture?


#1

Hello, all!

Are there any plans for a lecture on MPI.jl or efficient use of pmap? Most examples out there are very un-economics-y, like FFT and diff equations, nothing like the “patterns” we use for NGM (e.g. distribute V0 to workers, construct V1 from V0 in a distributed manner, gather the pieces of V1 and b’ from all workers).
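To make the pattern concrete, here is a rough sketch of what I mean, with a made-up grid and a toy Bellman update (the grid, payoff, and function names are all illustrative, not from any lecture):

```julia
using Distributed
addprocs(4)

@everywhere begin
    const agrid = range(0.1, 10.0, length = 200)  # illustrative asset grid

    # One Bellman update at a single grid point i, given the old value function V0.
    function bellman_at(i, V0)
        vals = [log(max(agrid[i]^0.3 + agrid[i] - ap, 1e-10)) + 0.95 * V0[j]
                for (j, ap) in enumerate(agrid)]
        findmax(vals)  # returns (V1[i], policy index)
    end
end

V0 = zeros(length(agrid))

# V0 is captured by the closure, so it gets shipped to the workers;
# the pieces of V1 and the policy are gathered back on the master.
results = pmap(i -> bellman_at(i, V0), 1:length(agrid))
V1 = first.(results)
policy = last.(results)
```

That "distribute V0, update in pieces, gather V1" loop is the shape of essentially every VFI parallelization I've seen in Fortran/MPI or Matlab/parfor.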

Something like this could be very useful for people either using MPI with Fortran or parfor in Matlab.

If something like this is already out there, I’d love to read about it! Thank you!


#2


Yes, there is an issue in our issue tracker reminding us to treat parallelization in Julia. We just haven’t gotten to it yet. Thanks for prodding us to get it done, and for your suggestions.

I was thinking of something fairly straightforward as a first pass, using shared memory and multiple cores on a local machine. Perhaps with dynamic programming…

Is that what you had in mind, or were you hoping for something more sophisticated?


#3

That would be great! Parallelizing the current baseline value function iteration code would be a natural start?

Presumably many people have prior experience with MPI so showing that you can do the same in Julia, with MPI.jl, could be helpful, albeit not that idiomatic.

In my (very limited) experience with using pmap (over a set of indices across the state space), it’s hard to know which variables are “visible” and/or transmitted to the workers, and when. It seemed like there was quite a bit of network overhead compared to the finer control MPI.jl gives you. I guess this gets to the shared vs. distributed memory point.


#4

Yes, parallelizing the VFI code sounds like a good idea. @cc7768 @spencer.lyon Do you have thoughts on @Gabriel_M 's suggestions?


#5

Sounds like the issue you ran into here is that you need to use a CachingPool if you’re sending data along with the pmap in order to make that part efficient. You can see an example of it here:

The problem is more that it’s a feature of Julia that isn’t well documented yet, and I’m not sure many people actually know it exists (I didn’t until quite recently). That, and you’ll want to control the batching. The Julia manual itself might need an overhaul here.
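A minimal sketch of the idea (the value function and update rule here are placeholders): wrapping the workers in a `CachingPool` lets `pmap` send the closure, and the data it captures, to each worker once instead of re-serializing it for every batch, and the `batch_size` keyword controls how many indices travel per message:

```julia
using Distributed
addprocs(4)

V0 = rand(10_000)            # large object captured by the closure below

wp = CachingPool(workers())  # caches the closure (and its captured V0) per worker

# Without the pool, pmap may re-serialize V0 repeatedly; with it, each
# worker receives V0 once and reuses the cached copy across batches.
V1 = pmap(wp, 1:length(V0); batch_size = 100) do i
    0.95 * V0[i]             # stand-in for a real Bellman update at index i
end

clear!(wp)                   # free the cached copies on the workers when done
```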


#6

I think a lecture that shows how to parallelize VFI in a few ways would be a great start. I think showing a parallel implementation using each of:

  • pmap or @parallel loop
  • SharedArray
  • Threads.@threads
  • MPI.jl

would provide a pretty decent picture of what is possible with parallelism in Julia.
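For the threads option, the shared-memory version is probably the smallest lift. A hypothetical sketch (toy grid and payoff, not lecture code) of one VFI step where each thread fills its slice of `V1` in place:

```julia
using Base.Threads

const agrid = range(0.1, 10.0, length = 500)  # illustrative asset grid
const β = 0.95

function vfi_step!(V1, V0)
    @threads for i in eachindex(agrid)   # threads split the rows of V1
        best = -Inf
        for (j, ap) in enumerate(agrid)
            c = agrid[i]^0.3 + 0.9 * agrid[i] - ap  # toy budget constraint
            c > 0 || continue
            v = log(c) + β * V0[j]
            v > best && (best = v)
        end
        V1[i] = best                     # each i is written by exactly one thread
    end
end

V0 = zeros(length(agrid))
V1 = similar(V0)
vfi_step!(V1, V0)
```

Since every iteration writes a distinct `V1[i]` and only reads `V0`, there are no data races, which is what makes plain `@threads` safe here.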

I don’t have time to write this now, but could give some tips if someone else wants to take it on.


#7

Ooh that’s cool. I’ve never seen CachingPool before! Thanks for the tip