40.3.4. An Example - on modeling optimal growth

In the example, the model is

$$f(k) = k^{\alpha} \quad \text{and} \quad u(c) = \ln c$$

For the production function we have:

```python
import numpy as np

α = 0.4

def fcd(k):
    return k**α   # Cobb-Douglas production

og = OptimalGrowthModel(u=np.log, f=fcd)
```

But the code in the class `OptimalGrowthModel` has the line `return u(c) + β * np.mean(v(f(y - c) * shocks))` in its value function update.

One can see `f(y - c)` inside `v`, but I would have expected `v(f(y) - c)`. What am I getting wrong?
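In other words, if I understand the timing right, the lecture computes

$$v(y) = \max_{0 \le c \le y} \left\{ u(c) + \beta\, \mathbb{E}\left[ v\bigl(f(y - c)\,\xi\bigr) \right] \right\}$$

where $\xi$ is the production shock, while I expected something like

$$v(y) = \max_{0 \le c \le f(y)} \left\{ u(c) + \beta\, v\bigl(f(y) - c\bigr) \right\}$$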

The reference in the section points to the book, where we can see the whole model:

Hi @used4regnum,

Nice question! I can see this is slightly different from the reference.

It really depends on the timing you are assuming. For this lecture, I think `f(y - c)` is used consistently, so it shouldn’t be an issue. You could also try `v(f(y) - c)` by making a slight modification to the code.

I see, thank you.

Well, I’ve tried to modify the code, but it throws some errors from the optimization routines regarding the bounds. I’m not sure what to tweak to avoid these problems.

Feel free to post them. We can work through them together.

Thank you. Well, I think that in the example the production function is $y = k^\alpha$, but the analytical functions are for $(k - c)^\alpha$, i.e. for the problem that is actually being solved.

From the book, we have:

*(image of the model from the book attached in the original post)*

So, the tweaks to the problem are as follows.

In the class `OptimalGrowthModel`:

```python
def state_action_value(self, c, y, v_array):
    """
    Right hand side of the Bellman equation.
    """
    u, f, β = self.u, self.f, self.β
    # requires `from scipy.interpolate import interp1d` at the top of the file
    v = interp1d(self.grid, v_array, fill_value="extrapolate")    # correction -> fill_value="extrapolate"
    return u(c) + β * v(f(y) - c)                                 # correction -> v(f(y) - c)
```
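(On the `fill_value="extrapolate"` part: if I read the scipy docs right, `interp1d` defaults to `bounds_error=True`, so it raises a `ValueError` as soon as the optimizer evaluates the interpolant outside the grid, which is exactly what happens once the argument becomes `f(y) - c`. Allowing extrapolation is one way around that.)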

Then we need to change the analytical value function `v_star`; call the new one `v_super_star`:

```python
def v_super_star(k, α, β, μ):
    """
    True value function for the A * k**α case.
    """
    A = 1
    c1 = np.log(A * (1 - α * β))
    c2 = α * β * np.log(A * α * β)
    c3 = 1 / (1 - β)
    c4 = 1 / (1 - α * β)
    return c3 * (c1 + c2 * c4) + c4 * α * np.log(k)
```
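For context, these constants are just the pieces of the textbook closed form for the deterministic log / Cobb-Douglas problem (the standard result, written with a general $A$):

$$v^*(k) = \frac{1}{1 - \beta}\left[\ln\bigl(A(1 - \alpha\beta)\bigr) + \frac{\alpha\beta\,\ln(A\alpha\beta)}{1 - \alpha\beta}\right] + \frac{\alpha}{1 - \alpha\beta}\,\ln k$$

Note that `μ` is kept in the signature only so the call matches `v_star`; the deterministic case does not use it.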

Finally, add another plot to see that the values have changed.

```python
import matplotlib.pyplot as plt

grid = og.grid

v_init = v_star(grid, α, og.β, og.μ)            # Start at the solution

v_init_1 = v_super_star(grid, α, og.β, og.μ)    # correction
v_greedy, v = T(v_init_1, og)                   # Apply T once to the correction

fig, ax = plt.subplots()
ax.set_ylim(-35, -24)
ax.plot(grid, v, lw=2, alpha=0.6, label='$Tv_{update}^*$')         # name correction
ax.plot(grid, v_init, lw=2, alpha=0.6, label='$v^*$')
ax.plot(grid, v_init_1, lw=2, alpha=0.6, label='$v_{update}^*$')   # correction
ax.legend()
plt.show()
```
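As a quick sanity check (my own sketch; it assumes the modified `og` and the lecture's `T` are in scope), `v_super_star` should be close to a fixed point of the modified Bellman operator:

```python
import numpy as np

# If v_super_star is the true value function for the modified timing,
# one application of T should leave it nearly unchanged, up to
# interpolation and optimization error.
v_exact = v_super_star(og.grid, α, og.β, og.μ)
_, v_next = T(v_exact, og)
print("max deviation after one Bellman step:", np.max(np.abs(v_next - v_exact)))
```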

As I cannot upload more than one picture per post, here is a follow-up.

Compare (the values are of the same order of magnitude, but different):

*(comparison plot attached in the original post)*

One more (!) crucial point: the bounds of the optimization should be changed.

```python
c_star, v_max = maximize(og.state_action_value, 1e-10, y, (y, v))
```

Here one should check that $y = A k^{\alpha}$, i.e. `y = A * (k**α)`.
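One possible adjustment, which is my own guess rather than code from the lecture: with the `v(f(y) - c)` timing, consumption is naturally capped by current output rather than by the state itself, so the call could become

```python
# Hypothetical bound adjustment for the modified timing: cap c at
# current output so the next state f(y) - c stays non-negative.
y_out = og.f(y)   # output produced from the current state y
c_star, v_max = maximize(og.state_action_value, 1e-10, y_out, (y, v))
```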

Hi @used4regnum,

Thanks for sharing. This looks good.

Could you please share the error you got? Have you already fixed it?

Yes, I’ve fixed two major issues with the bounds.
The first was technical: one needs to add `fill_value="extrapolate"` to the code, as shown above.
The other is trickier: one needs to think carefully about the optimization bounds when the form of the value function changes. I’ve flagged the line above where one should check the bounds `1e-10, y`; the upper bound `y` changes when we use a different production function.