Continuous State Markov Chains ... exercise 1 ... help


Hello -

I have been staring at this problem for so long now that I decided to ask …

I have a problem plotting the stochastic kernel. I believe I have done the right calculations, but I am unable to show the resulting data in a graph. An empty graph appears … and then nothing … Can anyone please help?

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

n = 1000
theta = 0.8

d = np.sqrt(1-theta**2)

def p(x, y):
    "Stochastic kernel for the TAR model"
    return norm().pdf((y - theta * np.abs(x)) / d) / d

Z = norm().rvs(n)
X = np.empty(n)

for t in range(n-1):
    X[t+1] = theta * np.abs(X[t]) + d * Z[t+1]

n = len(X)
X = X.reshape((n, 1))

ys = np.linspace(-3,3,200)
k = len(ys)
ys = ys.reshape((1,k))

v = p(X,ys)
kernel = np.mean(v, axis=0)
h = len(kernel)
kernel = kernel.reshape((1,h))

fig, ax = plt.subplots(figsize=(10,7))
ax.plot(ys, kernel, 'b-', lw=2, alpha=0.6, label='look ahead estimate')


Hey there terman,

I think the issue is that the arrays you were trying to plot weren't one-dimensional, and matplotlib was making the wrong guess about what you wanted to do.
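To see what matplotlib does with your 2-D arrays, here's a small demonstration. The shapes mirror your `ys` and `kernel`; the Gaussian values are just placeholder data. When `x` and `y` are 2-D, matplotlib plots column against column, so a pair of `(1, 200)` arrays becomes 200 separate "lines" of one point each, and a line style like `'b-'` draws nothing visible:

```python
import numpy as np
import matplotlib.pyplot as plt

ys = np.linspace(-3, 3, 200).reshape((1, 200))   # shape (1, 200), like yours
kernel = np.exp(-ys**2).reshape((1, 200))        # placeholder values, same shape

fig, ax = plt.subplots()

# With 2-D inputs, matplotlib plots each column against the matching
# column: 200 single-point lines, so the axes look empty.
lines = ax.plot(ys, kernel, 'b-')
print(len(lines))      # 200

# Flattened to 1-D, you get the single line you expected.
lines_1d = ax.plot(ys.flatten(), kernel.flatten(), 'b-')
print(len(lines_1d))   # 1
```

So the quickest fix is to pass 1-D arrays (or call `.flatten()`) when plotting.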

I rewrote it a bit to make it work, and also removed the 'vectorized' approach in favour of explicit loops:
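Something along these lines should work (a sketch: I've kept your variable names, initialized `X` with zeros rather than `np.empty` so `X[0]` is defined, replaced the broadcasting trick with a plain loop over the grid, and added `ax.legend()` and `plt.show()`):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

n = 1000
theta = 0.8
d = np.sqrt(1 - theta**2)

def p(x, y):
    "Stochastic kernel for the TAR model"
    return norm().pdf((y - theta * np.abs(x)) / d) / d

# Simulate the TAR process X_{t+1} = theta*|X_t| + d*Z_{t+1}
Z = norm().rvs(n, random_state=42)
X = np.zeros(n)          # X[0] = 0, unlike np.empty which leaves it arbitrary
for t in range(n - 1):
    X[t+1] = theta * np.abs(X[t]) + d * Z[t+1]

# Look-ahead estimate: average the kernel over the sample,
# with an explicit loop over the evaluation grid (all arrays stay 1-D)
ys = np.linspace(-3, 3, 200)
kernel = np.empty(len(ys))
for i, y in enumerate(ys):
    kernel[i] = np.mean(p(X, y))

fig, ax = plt.subplots(figsize=(10, 7))
ax.plot(ys, kernel, 'b-', lw=2, alpha=0.6, label='look ahead estimate')
ax.legend()
plt.show()
```

Because everything handed to `ax.plot` is one-dimensional, matplotlib draws the single curve you were after.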

I try to avoid vectorized code more and more these days, replacing it with explicit loops. Vectorized code is sometimes succinct, but it's often less clear than the explicit loop version.

Yes, explicit loops are slower in Python, but when it's an outer loop the difference is negligible. When it's an inner loop, I speed it up with Numba and jit.


John Stachurski.


Hi John - thank you very much for your answer and suggestions.