This question is not directly related to a specific QuantEcon lecture.

I am writing a program that implements value function iteration on a discretized grid, using interpolation to evaluate the value function off the grid.
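For concreteness, here is a minimal sketch of the kind of loop I mean (a toy deterministic growth model with log utility; the model, parameter values, and function names are illustrative, not my actual code):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

alpha, beta = 0.4, 0.96                  # illustrative technology / discount parameters
grid = np.linspace(1e-4, 4.0, 200)       # discretized state (capital) grid

def bellman_step(v):
    """Apply the Bellman operator to v (values on `grid`), using linear
    interpolation to evaluate the continuation value off the grid."""
    v_func = interp1d(grid, v, kind="linear", fill_value="extrapolate")
    v_new = np.empty_like(v)
    for i, k in enumerate(grid):
        y = k ** alpha                   # output available in state k
        obj = lambda c: -(np.log(c) + beta * v_func(y - c))
        res = minimize_scalar(obj, bounds=(1e-10, y - 1e-10), method="bounded")
        v_new[i] = -res.fun
    return v_new

def solve(v0, tol=1e-6, max_iter=1000):
    """Iterate the Bellman operator from v0, recording the sup-norm gap
    between successive iterates at every step."""
    v, gaps = v0, []
    for _ in range(max_iter):
        v_new = bellman_step(v)
        gaps.append(np.max(np.abs(v_new - v)))
        v = v_new
        if gaps[-1] < tol:
            break
    return v, gaps
```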
As I understand it, value function iteration should converge monotonically, at least in the continuous (exact) case, because the Bellman operator is a contraction in the sup norm.
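My reasoning, for the exact operator $T$ with discount factor $\beta$ (so before any discretization error enters):

$$
\lVert V_{n+2} - V_{n+1} \rVert_\infty
  = \lVert T V_{n+1} - T V_n \rVert_\infty
  \le \beta \, \lVert V_{n+1} - V_n \rVert_\infty ,
$$

so the sup-norm gap between successive iterates should fall at least geometrically at rate $\beta < 1$, and in particular should never increase.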
However, I sometimes encounter cases in which the distance between the old and new value functions, measured by the sup norm of the gap, increases for a couple of iterations before it resumes declining; in other words, the convergence is non-monotonic.
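Using the `gaps` list recorded by the `solve` sketch above, this is roughly how I spot the non-monotone stretch:

```python
v_star, gaps = solve(np.zeros(len(grid)))
# iterations at which the sup-norm gap rose instead of falling
bumps = [i for i in range(1, len(gaps)) if gaps[i] > gaps[i - 1]]
print("gap increased at iterations:", bumps)
```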
The iteration eventually converges to a fixed point that does not depend on the initial guess (so it is likely the true value function?).
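This is how I check that the limit does not depend on the starting point (again just a sketch; the second initial guess is arbitrary):

```python
v_a, _ = solve(np.zeros(len(grid)))
v_b, _ = solve(np.log(1.0 + grid))   # an arbitrary alternative initial guess
print(np.max(np.abs(v_a - v_b)))     # small => same limit from both guesses
```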
Does this mean I am doing something wrong, or is it a matter of the discretization and interpolation?