
Review of optimal control

An optimal control system seeks to maximize the return from a system for the minimum cost. In general terms, the optimal control problem is to find a control u(t) which causes the system to follow an optimal trajectory x(t) that minimizes a performance criterion, or cost function.
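
Written out, the problem takes the following standard form; the symbols g and h and the time interval [t_0, t_1] are conventional placeholders rather than notation from the original text:

```latex
% State equation: find u(t) on [t_0, t_1] driving the system
\dot{x}(t) = g\bigl(x(t),\, u(t),\, t\bigr)
% along a trajectory x(t) that minimizes the performance criterion
J = \int_{t_0}^{t_1} h\bigl(x(t),\, u(t),\, t\bigr)\, dt
```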

The problem is one of constrained functional minimization, for which several solution approaches exist.

Variational calculus (Dreyfus, 1962) may be employed to obtain a set of differential equations with certain boundary-condition properties, known as the Euler-Lagrange equations. The maximum principle of Pontryagin (1962) can also be applied to provide the same boundary conditions through the use of a Hamiltonian function.
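
As a brief sketch of where these conditions come from, the Hamiltonian adjoins the state equation to the cost integrand through a costate (Lagrange multiplier) vector; the notation below is the conventional one, not taken from the original text:

```latex
% Hamiltonian: cost integrand plus costate-weighted dynamics
H(x, u, \lambda, t) = h(x, u, t) + \lambda^{T} g(x, u, t)

% Necessary conditions (Euler-Lagrange / Pontryagin form):
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\frac{\partial H}{\partial u} = 0
```

When the control is constrained, Pontryagin's principle replaces the stationarity condition on u with minimization of H over the admissible set of controls.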

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. The Euler-Lagrange and Pontryagin equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important special case of a linear time-invariant plant with a quadratic performance criterion (called the performance index), in which case it takes the form of the matrix Riccati (1724) equation. This produces an optimal control law as a linear function of the state vector components which is always stable, provided the system is controllable.
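
To make the linear-quadratic case concrete, here is a minimal sketch that solves the steady-state matrix Riccati equation and forms the linear state-feedback law u = -Kx. The double-integrator plant and the Q, R weights are illustrative assumptions, and the Riccati solve uses scipy.linalg.solve_continuous_are:

```python
# Sketch: steady-state LQR via the continuous-time algebraic Riccati equation.
# The double-integrator plant and the Q, R weights are illustrative
# assumptions, not taken from the original text.
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant: x_dot = A x + B u  (double integrator: position and velocity states)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic performance index J = integral of (x'Qx + u'Ru) dt
Q = np.diag([1.0, 0.1])   # state weighting
R = np.array([[0.01]])    # control-effort weighting

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P
P = solve_continuous_are(A, B, Q, R)

# Optimal control law: u = -K x, with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

# Closed-loop matrix A - BK should be stable (eigenvalues in left half-plane)
eigs = np.linalg.eigvals(A - B @ K)
print("K =", K)
print("closed-loop eigenvalues:", eigs)
```

The printed closed-loop eigenvalues lie in the left half-plane, illustrating the stability property noted above for a controllable plant.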

