Has anyone ever used fmin_l_bfgs_b to solve a simple utility maximization problem?

I have tried my best to use the fmin algorithms provided by scipy, but I couldn't get the correct answer. I couldn't find anyone describing how to apply them to utility maximization. (The same error happens with fmin_tnc and fmin_cobyla too.)

I keep getting error messages. Here is the version that produces no error message, but the answer is incorrect. The utility function is the one from the Datascience Lecture on Optimization.

The answer is A = B = 6.67, but I failed to get this.
If anyone was able to solve this using fmin_l_bfgs_b, could you please share your code here? Thanks a million!

Another related question is ex 2 in the Python Scientific Computation: Optimization

It was pretty hard for me to do this using the Khun-Tucker condition. (The course material says one needs no more than ECON101, so I think Khun-Tucker is probably not the way to go.)

Just by looking at the function:

W = 10: C = 0, P = 10
W = 50 and W = 150: C = 1, P = 20

It feels a bit strange to solve this without any standard approach.

Any clarification is appreciated. Thanks!

Because you didn't read the documentation of fmin_l_bfgs_b. It requires you to provide the gradient along with the function value; if you don't, you need to set approx_grad=True. There are some other errors in your code that I won't dig into, but the following would work:

from scipy.optimize import fmin_l_bfgs_b
import numpy as np

def U(Z, alpha=1/3):
    # Negative Cobb-Douglas utility: fmin_l_bfgs_b minimizes,
    # so minimizing -U maximizes utility.
    A, B = Z
    return -(B**alpha * A**(1 - alpha))

fmin_l_bfgs_b(U, x0=np.array([2.0, 1.0]),
              bounds=[(0.01, 10), (0.01, 20)],
              approx_grad=True)
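Note that fmin_l_bfgs_b returns a tuple (x, f, d) – the minimizer, the minimum value, and an info dict – so to read off the allocation, unpack it:

x_opt, f_opt, info = fmin_l_bfgs_b(U, x0=np.array([2.0, 1.0]),
                                   bounds=[(0.01, 10), (0.01, 20)],
                                   approx_grad=True)
print(x_opt)  # the (A, B) pair that minimizes U, i.e. maximizes utility, within the box bounds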

It was pretty hard for me to do this using the Khun-Tucker condition.

  1. It’s Kuhn-Tucker.
  2. You don’t apply KT in a non-convex separation scenario – you need to understand KT before applying it, and it is irrelevant to this satiation-point exercise.

My suggestion is to stare at the utility function and think about its properties before you start applying tools you are not very familiar with. A satiation point means that your optimal point may not lie on the budget constraint.


I misspoke above, as KT is more general. But you don't have to think about the shadow price (the Lagrange multiplier of the constraint) in this simple exercise. Just ask whether the constraint is binding or not – a sketch of that logic follows below.
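To illustrate with a toy example – the quadratic utility and prices below are made up to reproduce the numbers you quoted, not necessarily the ones from the exercise – first check whether the satiation point is affordable; if it is, the constraint is slack, and if not, it binds and you can substitute the budget line into the utility:

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Hypothetical bliss-point utility with satiation at (C, P) = (1, 20)
# and prices p_C = 10, p_P = 1; chosen only to match the quoted values.
p_C, p_P = 10.0, 1.0

def U(C, P):
    return -(C - 1) ** 2 - (P - 20) ** 2 / 100

def solve(W):
    if p_C * 1 + p_P * 20 <= W:
        return 1.0, 20.0                  # satiation point affordable: constraint slack
    def neg_U(x):                         # constraint binds: P = (W - p_C * C) / p_P
        C = x[0]
        return -U(C, (W - p_C * C) / p_P)
    x_opt, _, _ = fmin_l_bfgs_b(neg_U, x0=np.array([0.5]),
                                bounds=[(0.0, W / p_C)], approx_grad=True)
    C = x_opt[0]
    return C, (W - p_C * C) / p_P

for W in (10, 50, 150):
    print(W, solve(W))                    # (0, 10) at W = 10; (1, 20) at W = 50 and 150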

Thanks for your help! It still yields an incorrect answer from what you typed. A and B seem to become the upper bounds of their constraints.

It still yields an incorrect answer from what you typed. A and B seem to become the upper bounds of their constraints.

Because the utility grows as A and B increase. Without a budget constraint you'd end up at infinity, so within the box bounds the utility is maximized at the upper bounds.
I suggest you go through the lecture rather than just running the code, in order to understand the logic behind it.
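A quick way to see this monotonicity: along the ray A = B = s, the Cobb-Douglas utility collapses to U(s, s) = s, which grows without bound:

# U(A, B) = B**(1/3) * A**(2/3); along A = B = s this is just s.
for s in (1.0, 5.0, 10.0):
    print(s, (s ** (1/3)) * (s ** (2/3)))  # strictly increasing in s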


Thanks for pointing this out. I realized that the constraint must be binding in this simple case, but I had missed the fact that the keyword argument "bounds" specifies box bounds on each variable rather than the budget constraint. I admit that I did not go to the link in the lecture, but instead went to the link in the quantecon part and looked for constrained optimization.

The link in the datascience part shows something easier, and now I can solve the problem. Thanks very much!
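In case it helps other beginners, here is roughly what worked for me. I am assuming the budget constraint is A + 0.5*B <= 10 (any prices with p_A = 2*p_B and W/p_A = 10 give the same optimum); since the constraint binds, substitute B = 20 - 2*A and optimize over A alone:

from scipy.optimize import fmin_l_bfgs_b
import numpy as np

def neg_U(x, alpha=1/3):
    # Assumed budget constraint A + 0.5*B = 10 binds, so B = 20 - 2*A.
    A = x[0]
    B = 20 - 2 * A
    return -(B**alpha * A**(1 - alpha))

x_opt, f_opt, info = fmin_l_bfgs_b(neg_U, x0=np.array([5.0]),
                                   bounds=[(0.01, 9.99)], approx_grad=True)
A_star = x_opt[0]
print(A_star, 20 - 2 * A_star)  # both come out to about 6.67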

PS: For anyone who is a beginner, I have put what I did for exercises 1 and 2 on optimization here.