
Bellman’s principle

Bellman's principle of optimality states that an optimal strategy (a sequence of actions) of a decision process with T stages is characterized by the fact that the remaining T - 1 decisions after the decision at t = 1 constitute an optimal strategy for the state resulting at t = 2, regardless of the initial state and the action chosen at t = 1 (Bellman 1957; Bamberg et al. 2008). [Pg.932]

In the first case, we seek to minimize one or more criteria (for instance, the hydrogen consumption of a fuel cell/battery vehicle) expressed in the form of mathematical function(s). To carry out this minimization, the dynamic programming method based on Bellman's principle of optimality [BEL 55] ... [Pg.290]

Bellman's (1957) principle of optimality: an optimal policy has the property that, whatever the initial state and the initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision. ... [Pg.29]
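To make the recursion concrete, the sketch below applies backward dynamic programming to a toy version of the fuel cell/battery power-split problem mentioned above: the battery state of charge is the state, the fuel-cell share of the power demand is the decision, and hydrogen use is the stage cost. The cost model, battery dynamics, grids, and terminal penalty are illustrative assumptions, not taken from the cited sources.

import numpy as np

T = 10                           # number of decision stages
soc = np.linspace(0.2, 0.9, 15)  # discretized battery state of charge
u = np.linspace(0.0, 1.0, 11)    # fraction of demand covered by the fuel cell

def h2_cost(ui):
    # Hypothetical hydrogen consumed in one stage for fuel-cell share ui.
    return 0.5 * ui + 0.8 * ui**2

def next_soc(si, ui):
    # Hypothetical battery dynamics: the battery covers the rest of demand.
    return np.clip(si - 0.05 * (1.0 - ui), soc[0], soc[-1])

V = np.zeros((T + 1, soc.size))        # V[t, i]: optimal cost-to-go at stage t
policy = np.zeros((T, soc.size), int)  # optimal action index per (t, state)
V[T] = 20.0 * np.clip(0.55 - soc, 0.0, None)  # penalty for a depleted battery

for t in range(T - 1, -1, -1):         # sweep backwards from the final stage
    for i, si in enumerate(soc):
        # Bellman recursion: stage cost plus optimal cost-to-go of successor.
        j = np.searchsorted(soc, next_soc(si, u))  # map successors onto grid
        j = np.clip(j, 0, soc.size - 1)
        costs = h2_cost(u) + V[t + 1, j]
        policy[t, i] = np.argmin(costs)
        V[t, i] = costs[policy[t, i]]

print("minimal total cost from SOC = 0.55:", V[0, np.abs(soc - 0.55).argmin()])

Whatever first action the table prescribes, the remaining T - 1 decisions stored in policy[1:] are themselves optimal for the state they start from, which is exactly the property stated above.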

To find this maximum it suffices to choose the conditions in the first of the two beds, since the optimal policy of the second has already been calculated. Equation (11.5.d-2) is Bellman's optimum principle. Consider now the last three beds. These can be decomposed into a first bed (in the direction of process flow) and a pseudo-stage consisting of the last two beds, for which the optimal policy has already been calculated for a series of inlet conditions. The procedure is continued in the same way towards the inlet of the multibed reactor. [Pg.496]

Consider now two steps, the last two of the multibed adiabatic reactor. From Bellman's maximum principle it follows that the optimal policy of bed 1 is preserved. This time x2 and T1 have to be chosen in an optimal way to arrive at ... [Pg.498]
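The bed-by-bed backward recursion described in these two paragraphs can be sketched as follows. The single-bed model bed_outcome below is a hypothetical stand-in for the integration of the adiabatic bed equations, and the grids and profit definition are assumptions; only the stage-wise structure of the computation mirrors the procedure described above.

import numpy as np

x_grid = np.linspace(0.0, 0.9, 46)      # tabulated inlet conversions
T_grid = np.linspace(600.0, 800.0, 21)  # candidate bed inlet temperatures (K)

def bed_outcome(x_in, T_in):
    # Hypothetical single-bed model returning (outlet conversion, profit).
    # Stand-in for the adiabatic bed integration; purely illustrative.
    dx = (0.95 - x_in) * 0.3 * np.exp(-((T_in - 700.0) / 80.0) ** 2)
    return x_in + dx, dx  # profit taken proportional to conversion gained

# g[i]: best achievable profit of the beds already treated, as a function
# of their inlet conversion x_grid[i]. With zero beds there is no profit.
g = np.zeros(x_grid.size)

n_beds = 4
for bed in range(n_beds):  # add one bed at a time, moving towards the inlet
    g_new = np.empty_like(g)
    for i, x_in in enumerate(x_grid):
        best = -np.inf
        for T_in in T_grid:  # choose this bed's inlet temperature optimally
            x_out, profit = bed_outcome(x_in, T_in)
            j = min(np.searchsorted(x_grid, x_out), x_grid.size - 1)
            best = max(best, profit + g[j])  # this bed + optimal downstream
        g_new[i] = best
    g = g_new  # the (bed+1)-bed pseudo-stage is now tabulated

print("optimal profit of the 4-bed cascade from x_in = 0:", g[0])

Each pass of the outer loop turns the beds treated so far into the pseudo-stage of the text, tabulated over a series of inlet conditions, so that only the newly added bed has to be optimized.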

Moreover, there are often cases in which the derivatives of functions and functionals do not exist at specific points or in specific regions (e.g., discontinuous or piecewise-linear functions). To overcome these difficulties, non-classical methods of the calculus of variations are applied. Among these, the most effective, and naturally the most popular, are Bellman's method of dynamic programming and Pontryagin's maximum principle. [Pg.64]

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. Euler-Lagrange and Pontryagin's equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important special case of a linear time-invariant plant with a quadratic performance criterion (called the performance index), in which it takes the form of the matrix Riccati (1724) equation. This produces an optimal control law as a linear function of the state vector components, which is always stable provided the system is controllable. [Pg.272]
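The sketch below works this linear-quadratic special case with SciPy: for a plant dx/dt = Ax + Bu and index J = integral of (x'Qx + u'Ru) dt, the Hamilton-Jacobi equation reduces to the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0, and the optimal law is the linear state feedback u = -Kx. The plant and weighting matrices are arbitrary illustrative values.

import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])  # plant dynamics (controllable with B below)
B = np.array([[0.0],
              [1.0]])         # input matrix
Q = np.diag([1.0, 0.1])       # state weighting
R = np.array([[0.05]])        # control weighting

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal gain K = R^-1 B' P

print("Riccati solution P:\n", P)
print("state-feedback gain K:", K)

# With (A, B) controllable, the closed loop A - B K is guaranteed stable:
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))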

