I am working through the Aiyagari example and it ran fine (if a bit slowly). Now I am trying to extend it by endogenizing labour. The issue is that the R and Q matrices become too large (I think), and as a result Python keeps throwing a memory error after a couple of iterations.
I am using 64-bit Python on a 64-bit machine with 8 GB of RAM, running Windows 10.
I have modified the code so that it iterates through a range of r values to find the equilibrium interest rate and capital stock. I read about an issue with the discrete DP QuantEcon package: for larger problems, the fact that we need to specify R and Q beforehand can prove too memory intensive (ref: https://github.com/QuantEcon/QuantEcon.py/issues/185). I am not sure if this was fixed or not.
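To get a sense of the scale: here is a back-of-the-envelope calculation with illustrative grid sizes (not the actual ones from the lecture) showing why the dense Q array alone can dwarf 8 GB of RAM once labour is endogenized.

```python
# Rough memory estimate for dense R and Q in a DiscreteDP-style setup.
# Grid sizes below are illustrative, not the lecture's actual values.
a_size, z_size, l_size = 200, 7, 50   # assets, productivity, labour choices

num_states = a_size * z_size          # 1,400 states
num_actions = a_size * l_size         # 10,000 actions (asset x labour choice)

# Dense formulation: R has shape (num_states, num_actions),
# Q has shape (num_states, num_actions, num_states); float64 = 8 bytes.
r_bytes = num_states * num_actions * 8
q_bytes = num_states * num_actions * num_states * 8

print(f"R: {r_bytes / 1e9:.2f} GB")   # ~0.11 GB
print(f"Q: {q_bytes / 1e9:.2f} GB")   # ~156.80 GB
```

Even modest grids push dense Q far past available memory, which is why the error appears only after labour (and hence the action space) is enlarged.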
Writing my own version of the routine would be slightly complicated to code up, but I feel like it won't be as memory intensive. I am going to try it today, but I was wondering if anyone else had any tips which might save me some unnecessary effort.
Hi @Hariharan_Jayasankar, this is indeed a known issue with the way the DP routines are written. But there are advantages to pre-building R and Q in other situations, so it’s a trade off.
@oyama.daisuke is the author of the code and might have further thoughts.
You can write your own version of the DP routines, certainly, and that’s probably a good option in a high dimensional setting. Here’s some jitted, parallelized optimal savings code that might help get you started:
As discussed at https://github.com/QuantEcon/QuantEcon.py/issues/185, which you already referred to, this is an inherent limitation of the way in which DiscreteDP is implemented, which assumes that R and Q are stored in RAM upfront. The best you can do currently is to use a sparse matrix format for Q (if your problem is sparse enough).
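For concreteness, here is a sketch of building Q as a `scipy.sparse` matrix in the state-action-pairs formulation (where each row of Q corresponds to one (state, action) pair). The transition rule below is a toy example I made up; the point is only the construction pattern — build in LIL format, then convert to CSR before handing it (together with `s_indices` and `a_indices`) to the solver.

```python
import numpy as np
from scipy import sparse

# Toy sizes; in the real problem these come from your grids
num_states, num_actions = 100, 10
num_sa = num_states * num_actions

# LIL format is convenient for incremental construction
Q = sparse.lil_matrix((num_sa, num_states))
for s in range(num_states):
    for a in range(num_actions):
        i = s * num_actions + a        # row index of the (s, a) pair
        # Toy transition: land on one of two successor states
        s_next = min(s + a, num_states - 1)
        Q[i, s_next] += 0.9
        Q[i, (s_next + 1) % num_states] += 0.1

Q = Q.tocsr()   # CSR is the efficient format for solving

# Each row should be a probability distribution over successor states
assert np.allclose(Q.sum(axis=1), 1.0)
```

With at most a handful of nonzeros per row, this stores millions of (state, action) pairs in megabytes rather than gigabytes.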
It would be interesting to see whether the Blaze ecosystem would help, since it can handle "larger-than-memory" arrays. (I have not used it myself, though.)
I will see if the sparse matrix option works for me.
@john.stachurski, if I may ask, what's the difference between the model you posted and the Aiyagari model? It seems like the only difference between the lecture notes and this is how z is defined. Or am I wrong?
Yes that’s right, @Hariharan_Jayasankar. But it’s solved with a much lower memory footprint. Of course that only gives you the household problem. For the equilibrium problem you can follow the Aiyagari lecture.
Sorry if this is too much to ask, but it’s a related question which I am not able to figure out.
Is it possible to use a jitted function to loop over parameter values in these scenarios? For example in the notebook you posted, you loop over values of r to compute the capital supply for different levels of interest rates.
I find a normal loop to be far too slow, but jitting it in a straightforward manner throws too many errors. This is the baseline code for it:
def solve_equilibrium(r_min, r_max, r_size, tol=1e-4):
    r_range = np.linspace(r_min, r_max, r_size)
    error = tol + 1
    n = 0
    while error > tol and n < r_size:
        r_i = r_range[n]   # current candidate interest rate
        # Figure out firm problem
        w_i = r_to_w(r_i)
        # Solve agents' problems
        mod = AiyagariProb(r=r_i, w=w_i)
        T = mod.bellman_operator()
        v_init = np.ones((mod.a_size, mod.z_size))
        v_star, pol = vfi(T, v_init)
        k_s = prices_to_capital_stock()
        # Get back how much the firm is willing to pay for that k_s
        r_star = rd(k_s)
        error = np.absolute(r_star - r_i)
        n = n + 1
    return r_i, k_s
What this is doing is looping over different values of r, getting a value of w, inserting w and r into the consumer problem, getting a level of capital stock, then seeing what interest rate (r_star) firms are willing to pay for that level of capital stock. If r_i and r_star aren’t close enough, the loop goes to the next value of r.
@Hariharan_Jayasankar: I suggest you practice with Numba to get a sense of what's possible and what's not. Have a read of the documentation to see what can be just-in-time compiled. But in this case there won't be much speed gain from jitting the outer loop, given that the inner loops are already jitted. I don't usually do it myself. I try to make the inner loop as fast as possible, and after that it just depends on your hardware.
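To illustrate the "jit the inner loop" point, here is a minimal toy sketch: a Bellman update over a savings grid with log utility, where only the inner double loop is decorated with `@njit`. All names, grid sizes, and parameters here are illustrative, not taken from the lecture code.

```python
import numpy as np

try:
    from numba import njit
except ImportError:            # fall back to plain Python if Numba is absent
    njit = lambda f: f

@njit
def bellman_step(v, grid, r, beta):
    """One toy Bellman update: choose next-period assets on the grid."""
    v_new = np.empty_like(v)
    for i in range(len(grid)):
        best = -1e10
        for j in range(len(grid)):
            c = (1 + r) * grid[i] - grid[j]    # implied consumption
            if c > 0:
                val = np.log(c) + beta * v[j]
                if val > best:
                    best = val
        v_new[i] = best
    return v_new

grid = np.linspace(0.1, 10, 100)
v = np.zeros(len(grid))
for _ in range(50):                # plain Python outer loop is fine
    v = bellman_step(v, grid, r=0.03, beta=0.96)
```

The outer iteration loop stays in plain Python: each call into `bellman_step` does all the heavy work in compiled code, so jitting the wrapper around it buys essentially nothing.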