Borrowing Constraints in Aiyagari using DDP

We want to introduce a negative borrowing limit into the Aiyagari code that uses DDP (DiscreteDP).

We start by changing the values of “a” and introducing a borrowing limit “phi” inside the Reward Function. However, because some values of a are negative, we fail the feasibility test when we create the instance of Household. We understand that the A space stands for the indices of possible actions, but we don’t exactly understand how to impose the constraint in the Reward Function, since the asset values also enter the consumption calculation.
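For reference, here is roughly how we set up the reward matrix, following the structure of the lecture code (the grid bounds and parameter values below are just illustrative):

```python
import numpy as np

phi = -1.0                              # borrowing limit, a' >= phi
a_vals = np.linspace(-2.0, 20.0, 200)   # asset grid, now with negative values

def populate_R(R, a_vals, z_vals, r, w):
    # R[s_i, new_a_i] = utility of choosing assets a' in state (a, z);
    # pairs with non-positive consumption keep their initial value (-inf)
    a_size, z_size = len(a_vals), len(z_vals)
    for s_i in range(a_size * z_size):
        a, z = a_vals[s_i // z_size], z_vals[s_i % z_size]
        for new_a_i in range(a_size):
            a_new = max(a_vals[new_a_i], phi)  # our attempt at a' >= phi
            c = w * z + (1 + r) * a - a_new
            if c > 0:
                R[s_i, new_a_i] = np.log(c)
```

DiscreteDP requires every state to have at least one action with finite reward, and for the most indebted states every choice implies c <= 0, so the whole row stays at -inf and the feasibility test fails.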

We tried to do it by hand (using functions inside QuantEcon like s_wise_max), but we always get a kink at the beginning of the policy function (top graph). Using a code with EGM, the exact same calibration gives us the expected policy functions (middle graph).

Our main goal is to replicate the graph of Aiyagari’s marginal propensities to consume (last graph). However, with that kink, the consumption policy functions are not correct and so we cannot replicate that graph.
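For concreteness, the MPCs we are after are essentially a finite-difference derivative of the consumption policy in assets; schematically, with c_star standing for the (a_size × z_size) consumption policy array:

```python
# MPC(a_i, z) ≈ [c(a_{i+1}, z) - c(a_i, z)] / (a_{i+1} - a_i)
mpc = np.diff(c_star, axis=0) / np.diff(a_vals)[:, np.newaxis]
```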

Do you have a suggestion on how to overcome this problem?

Many thanks in advance,

Best Regards,
Valter

Hi @Valter_Nobrega,

That does look wrong, as you say, but I’m finding it hard to diagnose the bug. Would it be possible to post your Jupyter notebook or Python program as a gist (see, e.g., this one)? Please keep it as simple as possible, while still producing the error.

Hi @john.stachurski

Thank you again for your answer.
I created this gist:

Best,
Valter

I do not know the details of the model, but generally, if you want to add a side constraint, either

  • remove the actions that do not satisfy the constraint from the set of feasible actions (sketched below), or
  • give a reward “negative infinity” to those actions.
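
For the first option, DiscreteDP accepts a state-action pairs formulation in which you list only the feasible pairs. A minimal toy sketch (the numbers are arbitrary, just to show the shape of the inputs):

```python
import numpy as np
from quantecon.markov import DiscreteDP

# Toy problem: 2 states, 2 actions, and a side constraint that rules
# out action 1 in state 0. Only feasible (state, action) pairs are
# listed, so the constraint never has to appear in the rewards.
beta = 0.95
s_indices = [0, 1, 1]                  # feasible pairs: (0,0), (1,0), (1,1)
a_indices = [0, 0, 1]
R_sa = np.array([1.0, 0.5, 2.0])       # one reward per feasible pair
Q_sa = np.array([[0.9, 0.1],           # one transition row per pair
                 [0.5, 0.5],
                 [0.1, 0.9]])
ddp = DiscreteDP(R_sa, Q_sa, beta, s_indices, a_indices)
print(ddp.solve(method='policy_iteration').sigma)  # optimal action per state
```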

This line

```python
a_new = max(a_vals[new_a_i], phi) #Borrowing Constraint a>=phi
```

looks suspicious. Maybe you want to give a reward “negative infinity” if a_vals[new_a_i] < phi?
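
Concretely, assuming the loop structure of the lecture code, something like:

```python
# Inside the reward loop: don't clamp a' up to phi; instead mark
# choices that violate the constraint (or imply c <= 0) as infeasible
a_new = a_vals[new_a_i]
c = w * z + (1 + r) * a - a_new
if a_new >= phi and c > 0:
    R[s_i, new_a_i] = np.log(c)
else:
    R[s_i, new_a_i] = -np.inf
```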

Hi there, thanks for your very useful reply!

I followed your suggestion and saw that, at the grid points below the borrowing constraint (a_vals < phi), the R function gave the same value for every possible action, because consumption was negative for every possible choice.

So I tried changing the reward values for the states below the borrowing constraint (I show that in the picture below).
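
In code, the kind of change I made looks roughly like this (a simplified sketch; the exact version is in the picture):

```python
# States with a < phi have every reward at -inf, which DiscreteDP
# rejects. Give each such state one finite but very low reward so the
# feasibility check passes; these states never arise in equilibrium.
for s_i in range(a_size * z_size):
    if a_vals[s_i // z_size] < phi:
        R[s_i, 0] = -1e10   # arbitrary large penalty
```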

But after this step I have another question that I still cannot manage to solve with the DDP code. As I said, my final goal is to replicate the MPCs of the Aiyagari model, and to do so I need the consumption policy function. The asset policy function we get from DDP has some “discrete steps”, and when I compute the consumption policy function they become very pronounced.

I believe it is possible to solve this using interpolation, but I still haven’t managed to. I tried interpolating inside the value function iteration, but it didn’t solve the problem. Is there a way to overcome this issue?
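
What I have in mind is something like the following, starting from the DiscreteDP solution (names follow the lecture code):

```python
from scipy.interpolate import interp1d

# Consumption policy implied by the optimal asset policy, then
# interpolated in a, for each z, so it can be evaluated off the grid
a_star = a_vals[results.sigma].reshape(a_size, z_size)   # a'(a, z)
c_star = (w * z_vals[np.newaxis, :]
          + (1 + r) * a_vals[:, np.newaxis]
          - a_star)                                      # c(a, z)
c_funcs = [interp1d(a_vals, c_star[:, j]) for j in range(z_size)]
```

But since the interpolant still passes through the stepped grid values, the MPCs implied by its derivative remain very jagged.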

Many thanks!