Unknown attribute error with JIT compilation in DiscreteDP

Greetings:

As something of a pedagogical exercise, I am taking the Aiyagari model from the Python lecture and adding preference shocks with index l_i, which follow a Markov process independent of that of the income shocks. I adapt the aiyagari_household file accordingly. The main challenge is that functions that previously admitted JIT compilation with the option nopython=True now raise an UntypedAttributeError. The error results from a type inference problem when “ravel_multi_index” is called:

UntypedAttributeError: Unknown attribute “ravel_multi_index” of type Module(<module ‘numpy’

To give an example, the populate_Q function looks as follows:

def populate_Q(Q, a_size, z_size, l_size, Pi, Pi_l):
    n = a_size * z_size * l_size
    for s_i in range(n):
        a_i, z_i, l_i = np.unravel_index(s_i, (a_size, z_size, l_size))
        for a_i in range(a_size):
            for next_z_i in range(z_size):
                for next_l_i in range(l_size):
                    s_i_new = np.ravel_multi_index((a_i, next_z_i, next_l_i),
                                                   (a_size, z_size, l_size))
                    Q[s_i, a_i, s_i_new] = Pi[z_i, next_z_i] * Pi_l[l_i, next_l_i]

So the issue here is that there is no Numba-ready implementation of numpy.ravel_multi_index.

This means that we can’t compile that function with nopython=True. To just get something that works you could try nopython=False, but the resulting function will fall back to object mode and be quite slow, which probably defeats the purpose of using Numba at all.

An alternative would be to write your own version of the ravel_multi_index function as a Numba-jitted function in nopython=True mode. Given that the index arithmetic is fairly straightforward, I don’t think this would be more than a few lines of code.
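For example, here is a minimal sketch of such a replacement for the three-index case. The helper names are my own, and the pure-Python fallback is only there so the snippet runs even where Numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:
    # Fallback so the sketch also runs without Numba
    def njit(func):
        return func

@njit
def ravel_index_3d(a_i, z_i, l_i, z_size, l_size):
    # Row-major (C-order) flattening, matching np.ravel_multi_index
    return (a_i * z_size + z_i) * l_size + l_i

@njit
def unravel_index_3d(s_i, z_size, l_size):
    # Inverse mapping: recover (a_i, z_i, l_i) from the flat state index
    l_i = s_i % l_size
    rest = s_i // l_size
    return rest // z_size, rest % z_size, l_i
```

Inside populate_Q, the call to np.ravel_multi_index((a_i, next_z_i, next_l_i), (a_size, z_size, l_size)) would then become ravel_index_3d(a_i, next_z_i, next_l_i, z_size, l_size), and the whole function should compile in nopython mode.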

Dear Spencer:

Yes, disabling the nopython=True option largely defeats the purpose of
using numba and makes the computation time very burdensome. I will write my
own version of ravel_multi_index, as you suggest.

By the way, is there any thread/discussion of alternatives to discrete
dynamic programming for more state variables, with the aim of mitigating
the curse of dimensionality? (I understand this is a big topic; I was just
wondering if there was anything active on quantecon.org).

Thanks so much for the help!

Best,

Mario Silva

Unfortunately I don’t think there is another thread at this time.

I’m actually thinking carefully about this very issue. For my research I’m working on a model with seven state variables and three purely discrete choices, so I’d love to find a solution myself. If I come up with anything I’ll start a thread here about it and mention you so you see it.

Hi:

Excellent! Thank you very much. If I come across something useful, I will
be sure to post it as well.

Thanks,

Mario

By the way, is there any thread/discussion of alternatives to discrete
dynamic programming for more state variables, with the aim of mitigating
the curse of dimensionality? (I understand this is a big topic; I was just
wondering if there was anything active on quantecon.org).

Good question. (This topic is close to my research interests…)

Yes, discretization is bound to suffer from the curse of dimensionality — and we can do better if we use some kind of continuous state approximation. Is this what you have in mind?

We do use continuous state approximation in some lectures, such as this one. The approximation method is piecewise linear. There are some logical reasons for favoring this over something like polynomials, although I don’t know the full story of which is better in every situation.
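As a concrete (if generic) illustration of piecewise linear approximation, np.interp fits and evaluates such an approximant in one call; this is just a NumPy sketch, not the lecture’s actual code:

```python
import numpy as np

# A function known only at a coarse set of knots, standing in
# for a value function computed on a grid
grid = np.linspace(0.0, 10.0, 50)
vals = np.log(1.0 + grid)

# Evaluate the piecewise linear interpolant at off-grid points
x = np.array([0.37, 4.2, 9.99])
approx = np.interp(x, grid, vals)
```

One attraction of piecewise linear interpolation is that it is exact at the knots and preserves monotonicity and concavity of the sampled values, so it never introduces the spurious oscillations a high-degree global polynomial can.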

Of course the lecture mentioned above is still simplistic — not really a challenging problem.

Can you tell us what you’d like to see in a lecture that tackles the curse of dimensionality? Is there a specific application you have in mind?

Greetings!

I am very interested, broadly, in the incomplete markets literature,
particularly the sub-literature building on the interaction between
unemployment and self-insurance. This working paper
http://serdarbirinci.weebly.com/uploads/4/8/6/3/48631293/birinci_and_see_2017.pdf
represents one of the most recent contributions on optimal unemployment
insurance over the business cycle. Generally, I am also interested in
solution methods that take occasionally binding constraints seriously.

In terms of a stripped-down application, a pedagogical example might be a
simple extension of the Aiyagari model, say with both income and preference
shocks. Thus, the state space of the household would be a triple (a, z, l) of
the asset position, productivity, and preference state.

Two alternative solution methods are: (1) the parameterized expectations
algorithm and (2) parametric dynamic programming (Judd, pp. 433-442). Both of
these seem to mitigate the curse of dimensionality while handling
occasionally binding constraints.

I am curious whether parametric dynamic programming is a more stable
algorithm than the parameterized expectations algorithm, because of the
contraction mapping involved.

Thank you very much!

Regards,

Mario Silva