
Hamilton-Jacobi-Bellman equation

In short, the principle of optimality states that the minimum value of the objective functional is a function only of the initial state and the initial time; applying it leads to the Hamilton-Jacobi-Bellman (HJB) equation given below. [Pg.88]
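As a hedged sketch in generic symbols (none of the names below come from the cited source), suppose the system obeys the state equation $\dot{x}(\tau) = f(x(\tau), u(\tau), \tau)$ and the cost to be minimized is $J = S(x(t_f)) + \int_{t_0}^{t_f} L(x, u, \tau)\,d\tau$. The principle of optimality is then expressed through the value (return) function, the minimum cost attainable from state $x$ at time $t$:

\[
\varphi(t, x) \;=\; \min_{u(\cdot)} \left[\, S\big(x(t_f)\big) + \int_{t}^{t_f} L\big(x(\tau), u(\tau), \tau\big)\, d\tau \,\right],
\qquad \dot{x}(\tau) = f\big(x(\tau), u(\tau), \tau\big), \quad x(t) = x ,
\]

so that $\varphi$ depends only on the initial state and the initial time, as the excerpt states.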

Thus, for the optimal trajectory, we have the following PDE for the value function $\varphi(t, x)$, known as the Hamilton-Jacobi-Bellman (HJB) equation ... [Pg.249]
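A standard form of this equation (a sketch under the same generic assumptions as above; the cited source may write it in different notation) is

\[
-\frac{\partial \varphi}{\partial t}(t, x) \;=\; \min_{u} \left[\, L(x, u, t) + \left(\frac{\partial \varphi}{\partial x}(t, x)\right)^{\!\top} f(x, u, t) \,\right],
\qquad \varphi(t_f, x) = S(x),
\]

a first-order partial differential equation whose solution $\varphi$ yields the optimal feedback control as the minimizing $u$ at each $(t, x)$.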

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. Euler-Lagrange and Pontryagin's equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important special case of the linear time-invariant plant with a quadratic performance criterion (called the performance index), in which case it takes the form of the matrix Riccati (1724) equation. This produces an optimal control law that is a linear function of the state vector components and is always stable, provided the system is controllable. [Pg.272]
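As a small numerical illustration of that linear-quadratic special case (a sketch only: the plant matrices A, B and the weights Q, R below are invented for demonstration and are not taken from the cited source, and the steady-state algebraic form of the Riccati equation is used), the matrix Riccati equation can be solved and the linear state-feedback law recovered, for example with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linear time-invariant plant  x_dot = A x + B u  with quadratic
# performance index  J = integral( x' Q x + u' R u ) dt.
# All four matrices are illustrative only; they are not from the source text.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 1.0])   # state weighting
R = np.array([[1.0]])     # control weighting

# Steady-state matrix Riccati equation:  A'P + P A - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal control law u = -K x, i.e. a linear function of the state vector
K = np.linalg.solve(R, B.T @ P)

# Closed-loop dynamics A - B K; for a controllable plant the eigenvalues
# lie in the left half-plane, so the closed loop is stable.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)

print("Riccati solution P:\n", P)
print("Feedback gain K:", K)
print("Closed-loop eigenvalues:", closed_loop_eigs)
```

Here solve_continuous_are returns the stabilizing solution P of the algebraic Riccati equation, and K = R⁻¹BᵀP gives the state-feedback gain; for a controllable plant the eigenvalues of A − BK lie in the left half-plane, matching the stability claim in the excerpt.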





