Stability implies that the optimal control has finite cost, but it does not guarantee that we can reach any state. (For example, the system might always move to x = 0 regardless of where we want it to go.)

We would like to reach x = 0, and you should think of it as "normalizing" the system so that the origin (x = 0) is the desired operating point.
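To make the normalization concrete, here is a small sketch (the matrices and target state are illustrative, not from the lecture): if we want to hold a target state x_star, and there is a steady-state input u_star with x_star = A x_star + B u_star, then defining z_t = x_t - x_star turns "reach x_star" into "drive z to 0".

```python
import numpy as np

# Illustrative discrete-time system x_{t+1} = A x_t + B u_t
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])

# Target state we want to hold (chosen so an equilibrium input exists).
x_star = np.array([2.0, 2.0])

# Solve the steady-state condition (I - A) x_star = B u_star for u_star.
u_star, *_ = np.linalg.lstsq(B, (np.eye(2) - A) @ x_star, rcond=None)

# Check that x_star is indeed an equilibrium under u_star; the shifted
# dynamics z_{t+1} = A z_t + B (u_t - u_star) then have the origin as
# their equilibrium, so regulating z to 0 regulates x to x_star.
print(np.allclose(x_star, A @ x_star + B @ u_star))  # True
```

So "everything goes to zero" is not a restriction: it is a choice of coordinates.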

In the LQR lecture we defined controllability as a sufficient condition for solving the algebraic Riccati equation (ARE).
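For reference, the standard controllability test checks whether the matrix [B, AB, ..., A^{n-1}B] has full rank. A quick sketch with an illustrative system (the matrices are my own example, not from the lecture):

```python
import numpy as np

# Example pair (A, B): a discrete-time double integrator with one input.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

# Controllability matrix C = [B, AB, ..., A^{n-1} B]
n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Full rank <=> any state can be reached from any other state.
controllable = np.linalg.matrix_rank(C) == n
print(controllable)  # True
```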

Then we defined stability, which basically tells us whether our system will blow up or not, depending on the eigenvalues of the proposed optimal solution.
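The eigenvalue check I have in mind: with feedback u_t = -K x_t, the closed loop is x_{t+1} = (A - BK) x_t, and it is stable when every eigenvalue of A - BK has magnitude below 1. A sketch with an assumed gain K (not the LQR-optimal one):

```python
import numpy as np

# Same illustrative double-integrator pair as a running example.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[0.5, 1.0]])  # assumed feedback gain for illustration

# Closed-loop dynamics x_{t+1} = (A - B K) x_t are stable iff all
# eigenvalues of A - B K lie strictly inside the unit circle.
eigs = np.linalg.eigvals(A - B @ K)
stable = bool(np.all(np.abs(eigs) < 1))
print(stable)  # True: both eigenvalues have magnitude ~0.707
```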

Can someone explain how the two are related?

Could it be that we can reach every state but then cannot stay there? That is, we will try to reach it, but the system will be very unstable?

Also, it says that a good system is one where the eigenvalues have magnitude lower than 1, hence x_t goes to 0. Why is that good?

We want x_t to reach a specific state, not zero.

Thanks!
