@jit with parallelization in Python

Hello,

I have a model that I solve by value function iteration with a grid search method. Since I have 5 state variables and can't reduce them further, the code takes very long (more than 5 days) to run. I have added @jit to accelerate it, and I would like to parallelize it across multiple CPUs. I have tried the following (a stripped-down sketch of the kind of loop I mean is included after the list):

  1. Manually parallelizing the loops with joblib inside the function, and adding @jit before the function. However, it seems that @jit doesn't work together with joblib.

  2. Using the parallel option of jit, i.e. @jit(nopython=True, parallel=True). However, when I tested it on the cluster, it didn't actually use multiple CPUs.
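For reference, here is a stripped-down sketch of the kind of loop I have in mind (made-up names and only two state variables, so it is just an illustration rather than my actual code):

```python
import numpy as np
from numba import jit

@jit(nopython=True)
def bellman_update(v, k_grid, z_grid, beta):
    # One step of value function iteration with grid search:
    # for each point on the state grid, maximize over the choice grid.
    v_new = np.empty_like(v)
    for i in range(len(k_grid)):            # first state variable
        for j in range(len(z_grid)):        # second state variable
            best = -1e10
            for ip in range(len(k_grid)):   # brute-force search over choices
                c = z_grid[j] * k_grid[i] ** 0.3 - k_grid[ip]
                if c > 0.0:
                    val = np.log(c) + beta * v[ip, j]
                    if val > best:
                        best = val
            v_new[i, j] = best
    return v_new
```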

Am I doing something wrong? Could you please help me?

Best,
Minjie

Hi Minjie,

When you use @jit(nopython=True), does your code run without error?

If so, then inside the function you annotate with @jit(nopython=True, parallel=True), you need to use the prange iterator from numba, instead of range, at the point where you want to parallelize.
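For example, a minimal sketch along the lines of your loop (reusing the illustrative names from your post, not your actual code) would look like this:

```python
import numpy as np
from numba import jit, prange

@jit(nopython=True, parallel=True)
def bellman_update(v, k_grid, z_grid, beta):
    v_new = np.empty_like(v)
    # prange on the outer loop is what tells numba to split its
    # iterations across threads; the inner loops can stay as range.
    for i in prange(len(k_grid)):
        for j in range(len(z_grid)):
            best = -1e10
            for ip in range(len(k_grid)):
                c = z_grid[j] * k_grid[i] ** 0.3 - k_grid[ip]
                if c > 0.0:
                    val = np.log(c) + beta * v[ip, j]
                    if val > best:
                        best = val
            v_new[i, j] = best
    return v_new
```

Each iteration of the prange loop writes to a different part of v_new, so there is no race condition to worry about in a loop structured like this.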

Also, make sure you have the latest version of numba: from a shell or Anaconda prompt, type conda install numba=0.37

Regards, John.

Hi John,

Many thanks for your reply. I installed the latest numba after I received your message.

When I use @jit(nopython=True), the code runs without error.
When I use @jit(nopython=True, parallel=True) with prange, I get the following error:


File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/compiler.py", line 246, in run
raise patched_exception
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/compiler.py", line 238, in run
stage()
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/compiler.py", line 494, in stage_parfor_pass
parfor_pass.run()
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/parfor.py", line 192, in run
self._convert_prange(self.func_ir.blocks)
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/parfor.py", line 352, in _convert_prange
self._convert_prange(parfor_blocks)
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/parfor.py", line 269, in _convert_prange
call_table, _ = get_call_table(blocks)
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/ir_utils.py", line 783, in get_call_table
topo_order = find_topo_order(blocks)
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/ir_utils.py", line 747, in find_topo_order
cfg = compute_cfg_from_blocks(blocks)
File "/software/python3/3.5.2/lib/python3.5/site-packages/numba/analysis.py", line 212, in compute_cfg_from_blocks
for target in term.get_targets():
AttributeError: Failed at nopython (convert to parfors)
'SetItem' object has no attribute 'get_targets'

It seems that it fails at the nopython (convert to parfors) stage, but the code runs fine without the parallel option.
Thanks a lot!

Best wishes,
Minjie

Hi Minjie,

If you run import numba and then numba.__version__, do you see 0.37?
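That is, from a Python session:

```python
import numba
print(numba.__version__)  # should print 0.37 (or newer)
```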

If yes, then I’m not sure what’s going on. If the function is not too large, please post a code snippet with just the function.


Hi John,

Good news! I solved the problem. It turned out to be an incompatibility issue. I'm running on the Bluehive cluster, which has the Python that comes with Anaconda as well as a default Python outside of Anaconda, and the two were getting mixed up when I ran the code. After loading only the Anaconda module (module load anaconda), everything works without problems.

Thanks!
Minjie

Well done, Minjie. Please send a GitHub link to your code when you are ready to share your work with the world.

I will, John. Thanks a lot for your help!

Thanks for the awesome information.

My issue has been fixed.