Distances in LQ Dynamic Programming Problems

When discussing preferences in LQ DP, it is said that, for both the state and the control, loss is measured as squared distance from the origin. It is also said that R and Q can encode other, non-Euclidean, notions of “distance” from the zero vector.
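For concreteness, the loss expression (5) I have in mind is, if I am reading the lecture correctly, the standard weighted quadratic per-period loss

$$
x_t' R x_t + u_t' Q u_t,
$$

which reduces to the squared Euclidean distances of $x_t$ and $u_t$ from the origin when $R$ and $Q$ are identity matrices.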

My question is: how can I transform R and Q and the loss expression (5) to account for Manhattan distances (L_1) or even Chebyshev distances (L_{\infty})?

Good question. I’m not sure if that’s possible. These are more like weighted quadratic distances. I doubt they generalize in that direction — you’re going to end up with nondifferentiabilities and boundary solutions that the LQ methods can’t handle.
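To see the “weighted quadratic distance” point numerically, here is a minimal numpy sketch (the R below is just an illustrative matrix, not one from the lecture):

```python
import numpy as np

# A state vector and an example positive definite weight matrix
x = np.array([1.0, -2.0])
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# With R = I, the loss x' R x is the squared Euclidean distance from 0
print(x @ np.eye(2) @ x)          # 5.0 == ||x||_2**2
print(np.linalg.norm(x)**2)       # 5.0

# A general positive definite R re-weights (and mixes) the coordinates,
# giving a different quadratic notion of distance from the origin
print(x @ R @ x)                  # 4.0

# By contrast, the L1 and Linf norms are not quadratic forms in x
print(np.abs(x).sum(), np.abs(x).max())   # 3.0, 2.0
```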


Some ideas are:

The Manhattan distance from the zero vector is at least piecewise linear: it can be computed as L_1 = 1' R |x_t| (or simply 1' R x_t whenever the state is known to be nonnegative), and I assume we can derive an analytic solution for this case.
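A quick numpy check of that expression (with an example diagonal R, not one from the lecture):

```python
import numpy as np

ones = np.ones(3)
R = np.diag([1.0, 2.0, 0.5])          # example diagonal weight matrix

x_pos = np.array([1.0, 3.0, 4.0])     # nonnegative state
x_gen = np.array([1.0, -3.0, 4.0])    # state with a negative entry

# For a nonnegative state, 1' R x_t equals the R-weighted L1 distance from 0
print(ones @ R @ x_pos, (np.diag(R) * np.abs(x_pos)).sum())   # 9.0, 9.0

# In general the absolute values are needed: 1' R |x_t|
print(ones @ R @ x_gen)              # -3.0 (not a distance)
print(ones @ R @ np.abs(x_gen))      # 9.0
```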

The Chebyshev distance of x_t from zero, L_{\infty} = max_i |x_{t,i}| where the x_{t,i} are the elements of x_t, looks harder to linearize at first glance. A promising route might be to introduce a very large number M and as many comparisons as the dimension of x_t, along the lines of the methods/tricks suggested in the reference below (a small LP sketch of this idea follows it):

Bemporad, A., & Morari, M. (1999). Control of systems integrating logic, dynamics, and constraints. *Automatica*, 35(3), 407-427.
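As a sketch of that direction (with made-up A, B, and x_t, and a one-step problem rather than the full DP): when the L_{\infty} loss is being minimized, the max can be linearized without binaries by introducing an auxiliary scalar t and the constraints -t <= x_{t+1,i} <= t; the big-M/binary machinery from Bemporad and Morari would only be needed to enforce the max exactly inside a more general mixed logical dynamical model.

```python
import numpy as np
from scipy.optimize import linprog

# One-step illustration: choose u_t to minimize ||x_{t+1}||_inf, where
# x_{t+1} = A x_t + B u_t, via the epigraph linearization
#   min t   s.t.   -t <= (A x_t + B u_t)_i <= t   for every component i.
A = np.array([[1.0, 0.2],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
x_t = np.array([1.0, -2.0])

n, m = A.shape[0], B.shape[1]
c = np.r_[np.zeros(m), 1.0]                  # decision vector is (u_t, t); minimize t

Ax = A @ x_t
# (A x_t + B u)_i - t <= 0   and   -(A x_t + B u)_i - t <= 0
A_ub = np.block([[ B, -np.ones((n, 1))],
                 [-B, -np.ones((n, 1))]])
b_ub = np.r_[-Ax, Ax]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + 1))
u_opt, t_opt = res.x[:m], res.x[-1]
print(u_opt, t_opt)   # t_opt equals min over u of ||A x_t + B u||_inf
```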