This book introduces the reader to a variety of problem statements in classical optimal control, in optimal control with non-scalar performance criteria, and in optimal estimation and filtering. The optimal control theory rests on rather elementary methods from the calculus of variations, in particular on Lagrange multiplier methods, and is presented in a very transparent way.
Many engineering optimal control problems are solved completely. Each chapter also contains a small collection of additional optimal control problems, whose solutions are sketched in the appendix.
Contents: Optimal Control.- Optimal State Feedback Control.- Differential Games.