I was working on the optimal savings / income fluctuations problem and noticed that the code for minimizing in the bellman_operator and for rootfinding in the coleman_operator uses the following bounds when choosing c:

np.min(z_vals) < c < R * a + z + b
More specifically, the optimization/rootfinding lines are:

bellman_operator:
c_star = fminbound(obj, np.min(z_vals), R * a + z + b)

coleman_operator:
Kc[i_a, i_z] = brentq(h, np.min(z_vals), R * a + z + b)

I understand that, given the timing of the problem setup, the agent will always have the income shock “in hand” before making the consumption decision. However, it isn’t immediately clear to me why this should put a lower bound of np.min(z_vals), rather than just 0, on the optimal choice of c. This is particularly true as b grows larger.
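To see why the choice of lower bound can matter in principle, here is a toy two-period version of the problem (not the lecture’s code; all parameter values are made up) where the unconstrained optimum falls below min(z_vals), so fminbound with the tighter bracket gets pushed to a corner:

```python
import numpy as np
from scipy.optimize import fminbound

# Toy two-period problem: max log(c) + beta*log(R*(w - c) + y_next),
# with made-up parameters chosen so the unconstrained optimum < z_min.
R, beta, y_next = 1.03, 0.96, 0.1
a, z, b = 0.0, 0.5, 0.3
w = R * a + z + b          # total resources, the upper bound on c
z_min = 0.5                # stand-in for np.min(z_vals)

obj = lambda c: -(np.log(c) + beta * np.log(R * (w - c) + y_next))

c_tight = fminbound(obj, z_min, w)   # lecture-style bracket
c_free  = fminbound(obj, 1e-8, w)    # near-zero lower bound

# Closed-form unconstrained optimum from the FOC 1/c = beta*R*u'(next):
c_star = (R * w + y_next) / (R * (1 + beta))
print(c_tight, c_free, c_star)  # the tight bracket sits at the corner z_min
```

In this toy example the agent would like to save enough that c falls below z_min, and the tighter bracket mechanically prevents that.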
I played around a bit with the code in a “scratchwork” notebook that I posted here.
I ran the code using two different bounds on c for the coleman and bellman operators:

np.min(z_vals) < c < R * a + z + b
 the original, as it appears in the lecture (and quantecon.applications)

1e-8 < c < R * a + z + b
 near “machine zero” for the lower bound on c
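One practical thing to check when loosening the bracket in the coleman_operator: brentq requires a sign change over [lo, hi]. With a toy Euler-type residual (made up here; not the lecture’s h), it is marginal utility blowing up as c → 0 that guarantees a valid bracket at a near-zero lower bound:

```python
from scipy.optimize import brentq

# Toy Euler-type residual: h(c) = u'(c) - kappa, with CRRA u'(c) = c**(-gamma).
# kappa stands in for the (here constant, made-up) right-hand side.
gamma, kappa = 2.0, 4.0
h = lambda c: c**(-gamma) - kappa

# h(1e-8) is huge and positive, h(2.0) is negative, so the bracket is valid:
root = brentq(h, 1e-8, 2.0)   # analytic root is kappa**(-1/gamma) = 0.5

# A bracket with no sign change makes brentq raise ValueError:
try:
    brentq(h, 1.0, 2.0)       # h < 0 at both endpoints
except ValueError:
    print("no sign change over [1.0, 2.0]")
```

So at least numerically, the near-zero lower bound shouldn’t break the rootfinding step, as long as u'(c) → ∞ as c → 0.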
The key question I have can be seen in the four figures at the end of the section titled 3.2 ReRun Demos.
The plots are titled:
 Replication with z at low value, b=0.1
 Replication with z at high value, b=0.1
 Replication with z at low value, b=2.0
 Replication with z at high value, b=2.0
These are just replications of the figure in Exercise 1 from the Optimal Savings lecture, but for both the z=low and z=high shocks.
Here’s the key thing: the dashed black line represents the solution under the np.min(z_vals) lower bound, and the solid black line represents the solution under the 1e-8 (“machine zero,” yeah, yeah…) lower bound.
You can see that the two solutions diverge in both cases, and the divergence grows as b gets larger.
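To put a number on the divergence rather than eyeballing it, one could compare the two policies in sup-norm; here with hypothetical arrays standing in for the two converged solutions on a common asset grid:

```python
import numpy as np

# Hypothetical stand-ins for the two converged consumption policies on a
# common asset grid (in practice these come from running the lecture code
# twice, once with each lower bound).
a_grid = np.linspace(0.0, 5.0, 6)
sigma_tight = np.array([0.50, 0.55, 0.70, 1.00, 1.40, 1.90])  # min(z) bound
sigma_free  = np.array([0.30, 0.45, 0.68, 1.00, 1.40, 1.90])  # ~0 bound

gap = np.abs(sigma_tight - sigma_free)
i = int(np.argmax(gap))
print(f"max gap {gap[i]:.2f} at a = {a_grid[i]:.2f}")  # → max gap 0.20 at a = 0.00
```

Consistent with the figures, in this made-up example the gap is largest at low asset levels, where a lower bound on c is most likely to bind.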
What am I missing here? Which is correct?
Thanks so much!
Nathan