Discrete DP error - 'Rows of P must sum to 1'



I was trying to solve a discrete DP problem and got an error saying that 'the rows of P must sum to 1'.

It seems to be related to 'Q_sigma', which is the transition matrix the program computes for a given policy function.

I suspect this is because I haven't specified my R and Q matrices properly, but going over them again I still can't find the fault. Can someone point me more specifically to why this problem might arise?
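If I understand correctly, for a given policy sigma the program builds Q_sigma by picking, for each state, the row of Q at the chosen action, so the row sums of Q_sigma come straight from Q. A minimal sketch of my understanding (toy sizes, not the library's actual code):

```python
import numpy as np

# Toy setup: 2 states, 2 actions, every Q[s, a, :] a valid distribution
Q = np.full((2, 2, 2), 0.5)
sigma = np.array([0, 1])  # a policy: the action chosen in each state

# Q_sigma[s, :] = Q[s, sigma[s], :], so a bad row in Q_sigma
# means the corresponding Q[s, sigma[s], :] doesn't sum to 1
Q_sigma = Q[np.arange(2), sigma, :]
print(Q_sigma.sum(axis=1))  # both rows sum to 1 here
```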



Hi @Hariharan_Jayasankar, can you give us a minimal piece of code that reproduces the error?


Hi John,

Unfortunately I couldn't come up with code different from my actual problem that reproduces the error.

The code is here: https://github.com/HariharanJayashankar/aiyagari/blob/master/endo_lab/model.py

Here are the relevant bits of code:

The initial definitions which build the R and Q matrices:

```python
import numpy as np


class agent():

    def __init__(self,
                 r = 0.02,
                 w = 1.0,
                 cbeta = 0.96,
                 a_min = 1e-10,
                 a_max = 18,
                 a_size = 200,
                 P = [[0.9, 0.1],
                      [0.1, 0.9]],
                 z_vals = [0.1, 1.0],
                 n_controls = 2):

        self.r, self.w, self.cbeta, self.a_min, self.a_max, self.a_size = r, w, cbeta, a_min, a_max, a_size
        self.n_controls = n_controls
        self.P = np.asarray(P)
        self.z_vals = np.asarray(z_vals)
        self.z_size = len(z_vals)

        self.a_vals = np.linspace(a_min, a_max, a_size)
        self.l_vals = np.linspace(1e-10, 1.0, a_size)
        self.n = self.z_size * self.a_size

        # -- setting R and Q -- #

        # R
        self.R = np.empty((self.n, self.n_controls * self.a_size))
        self.R.fill(-np.inf)  # invalid values of utility have to be impossible to choose

        # Q
        self.Q = np.empty((self.n, self.n_controls * self.a_size, self.n))

    def set_R(self):
        build_R(self.R, self.z_vals, self.z_size, self.a_size, self.a_vals, self.l_vals, self.r, self.w)

    def set_Q(self):
        build_Q(self.Q, self.z_vals, self.z_size, self.a_vals, self.a_size, self.P)

    def set_prices(self, r, w):
        """Use this method to reset prices. Calling the method will trigger a
        re-build of R."""
        self.r, self.w = r, w


def build_R(R, z_vals, z_size, a_size, a_vals, l_vals, r, w):
    n = z_size * a_size
    for i in range(n):
        for j in range(a_size):
            for k in range(a_size):
                a_i = i // z_size
                z_i = i % z_size

                z = z_vals[z_i]
                a = a_vals[a_i]

                a_1 = a_vals[j]
                l = l_vals[k]

                c = w * l * z + (1 + r) * a - a_1

                if c > 0:
                    R[i, j + k] = np.log(c) - cpsi * (l ** (1.0 + ceta)) / (1.0 + ceta)


def build_Q(Q, z_vals, z_size, a_vals, a_size, P):
    n = a_size * z_size
    for i in range(n):
        z_i = i % z_size
        for z_i_1 in range(z_size):
            for a_i in range(a_size):
                for l_i in range(a_size):
                    Q[i, a_i + l_i, a_i * z_size + z_i_1] = P[z_i, z_i_1]
```

The function that solves for the equilibrium:

```python
def solve_st_eq(r_min = 0.02, r_max = 0.04,
                n_vals = 100, tol = 1e-2):

    r_range = np.linspace(r_min, r_max, n_vals)
    #n_range = np.linspace(n_min, n_max, n_vals)

    iter = 1

    for r_i in r_range:

        # == Initialize agent == #
        am = agent(a_max=20)

        # == k supply == #
        k_s, l_s, w = cap_lab_stock(am, r_i)

        # == corresponding firm r using inverse capital demand func == #
        r_star = rd(k_s, l_s)

        # Check deviation
        error = np.absolute(r_i - r_star)

        #print(f'## == Iter {iter}')
        print('R guess: ', r_i)
        print('R star: ', r_star)
        print('Error: ', error)

        iter += 1
        if error < tol:
            r_eq = r_i
            k_eq = k_s
            w_eq = w
            l_eq = l_s
            return r_eq, k_eq, w_eq, l_eq
```

Actually running the code:

```python
if __name__ == '__main__':
    # parameters
    cpsi = 1
    ceta = 2
    A = 1
    calpha = 0.33
    cbeta = 0.96
    cdelta = 0.05

    r, k, w, l = solve_st_eq()
    print(f'Equilibrium Interest Rate: {r}')
    print(f'Equilibrium Capital Stock: {k}')
    print(f'Equilibrium Wage: {w}')
    print(f'Equilibrium Labour Stock: {l}')
```

Sorry for the long post. If I had to guess, the problem is in how I define R and Q.
The indexing rule I used for the states is the same as in the lecture notes. For the actions, since I basically have two control variables (k_{t+1} and l_t), I use the indexing rule a_i = l_i + k_i, where a_i is the i-th position in the action vector a.
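In case it helps anyone narrow this down, a quick diagnostic I can run (assuming each Q[s, a, :] is meant to be a probability distribution over next states) is to check the row sums of Q directly, here on a toy stand-in for my real Q:

```python
import numpy as np

# Toy stand-in for the real Q, deliberately left with some unfilled
# rows (which is what a faulty build_Q could produce)
n, m = 4, 6
Q = np.zeros((n, m, n))
Q[:, 0, 0] = 1.0  # only action 0 gets a valid distribution

# Each Q[s, a, :] should sum to 1; flag every (state, action) that doesn't
row_sums = Q.sum(axis=2)               # shape (n, m)
bad = np.argwhere(~np.isclose(row_sums, 1.0))
print(f'{len(bad)} of {n * m} state-action rows do not sum to 1')
```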



For any future observers: the mistake I had made was in my indexing rule for the action variables. I simply used i_k + i_l to compute the index into the vector defining the action space.

The obvious problem is that this leads to multiplicity in assignment (different combinations of k and l can map to the same position), which is what led to the error.

I changed the scheme to i_a = i_k * l_size + i_l and it worked perfectly.
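For anyone curious, here is a quick sketch of the difference between the two rules (toy sizes, not the actual model):

```python
k_size, l_size = 3, 3

# Old rule: i_k + i_l -- not injective, different (k, l) pairs collide,
# so some rows of Q are written repeatedly and others never get filled
old = {i_k + i_l for i_k in range(k_size) for i_l in range(l_size)}

# Fixed rule: i_k * l_size + i_l -- one distinct index per (k, l) pair
new = {i_k * l_size + i_l for i_k in range(k_size) for i_l in range(l_size)}

print(len(old), len(new))  # 5 distinct indices vs 9, for 9 (k, l) pairs
```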



Thanks for letting us know. Good luck solving your problem.