Big Chemical Encyclopedia


Optimal control dynamic programming

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. The Euler-Lagrange and Pontryagin equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important special case of the linear time-invariant plant with a quadratic performance criterion (called the performance index), in which case it takes the form of the matrix Riccati (1724) equation. This produces an optimal control law as a linear function of the state vector components which is always stable, provided the system is controllable. [Pg.272]
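The linear-quadratic case above can be sketched in a few lines. Applying the dynamic programming recursion to a scalar discrete-time plant x(k+1) = a x(k) + b u(k) with stage cost q x² + r u² yields the discrete-time Riccati recursion; the plant and weight values below are illustrative choices, not taken from the source.

```python
# Minimal sketch (pure Python, scalar system): dynamic programming applied to
# the linear-quadratic problem produces the discrete-time Riccati recursion.
# System x_{k+1} = a x_k + b u_k; stage cost q x^2 + r u^2 (illustrative values).
a, b, q, r = 1.0, 1.0, 1.0, 1.0
N = 50                      # horizon length

p = q                       # terminal cost weight P_N = q
for _ in range(N):          # backward sweep of the DP (Riccati) recursion
    k_gain = (b * p * a) / (r + b * p * b)   # optimal feedback gain u = -k x
    p = q + a * p * a - a * p * b * k_gain   # Riccati update

# For these values the gain converges to the stationary (infinite-horizon)
# LQR gain (sqrt(5)-1)/2, and the closed loop |a - b*k_gain| < 1 is stable.
print(round(k_gain, 4), abs(a - b * k_gain) < 1.0)
```

The closed-loop stability check at the end mirrors the claim in the excerpt: for a controllable plant, the resulting linear state-feedback law is stabilizing.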

Dunn, J. C., and Bertsekas, D., Efficient dynamic programming implementations of Newton's method for unconstrained optimal control problems, J. Optim. Theory Appl. 63(1), 23 (1989). [Pg.253]

R. Luus, Optimal control by dynamic programming using systematic reduction in grid size, Int. J. Control 51 (1990) 995-1013. [Pg.114]

Culver, T. B., and Shoemaker, C. A. (1993). "Optimal control for groundwater remediation by differential dynamic programming with quasi-Newton approximations." Water Resour. Res., 29(4), 823-831. [Pg.19]

Jones, L., Willis, R., and Yeh, W. W.-G. (1987). Optimal control of nonlinear groundwater hydraulics using differential dynamic programming. Water Resources Research, 23(11), 2097-2106. [Pg.43]

R. Luus, M. Galli, 1991, Multiplicity of Solutions in Using Dynamic Programming for Optimal Control, Hung. J. Ind. Chem., vol. 19, p. 55... [Pg.318]

I. Development of the Mathematical Model and Algorithm, Rev. Chim., vol. 37, p. 697
A. Woinaroschy, 2007, Time-Optimal Control of Distillation Columns by Iterative Dynamic Programming, Chem. Eng. Trans., vol. 11, p. 253... [Pg.318]

DP Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont, MA, 2nd edition, 2000. [Pg.278]

Dynamic programming (DP) is an approach for modeling dynamic and stochastic decision problems, analyzing the structural properties of these problems, and solving them. Dynamic programs are also referred to as Markov decision processes (MDPs). Slight distinctions can be made between DP and MDP; for example, for some deterministic problems the term dynamic programming is used rather than Markov decision process. The term stochastic optimal control is also often used for these types of problems. We shall use these terms synonymously. [Pg.2636]
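A small worked example makes the DP/MDP connection concrete. The two-state process below, with its transition probabilities, rewards, and discount factor, is entirely made up for illustration; value iteration is the basic Bellman recursion applied until (near) convergence.

```python
# Illustrative sketch: a tiny two-state Markov decision process solved by
# value iteration (the basic dynamic programming recursion). All numbers
# below are invented for the example.
states = [0, 1]
actions = [0, 1]
# P[s][a] = list of (next_state, probability); R[s][a] = expected reward
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
     1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 2.0, 1: 0.5}}
gamma = 0.9                           # discount factor

V = {s: 0.0 for s in states}
for _ in range(500):                  # repeated Bellman backups
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions)
         for s in states}

# The optimal policy is the greedy action with respect to the converged values.
policy = {s: max(actions,
                 key=lambda a: R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a]))
          for s in states}
print(policy)
```

Because the dictionary comprehension builds the new value function from the old one, this is the synchronous form of value iteration; the same backup is what a stochastic optimal control formulation would write as the Bellman equation.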

Chang, C. S. (1966), Discrete-Sample Curve Fitting Using Chebyshev Polynomials and the Approximate Determination of Optimal Trajectories via Dynamic Programming, IEEE Transactions on Automatic Control, Vol. AC-11, pp. 116-118. [Pg.2646]

A numerical technique that has become very popular in the control field for optimization of dynamic problems is iterative dynamic programming (IDP). To apply the IDP procedure, the dynamic trajectory is first divided into NS piecewise-constant discrete stages. Then Bellman's theory of dynamic programming [175] is used to divide the optimization problem into NS smaller optimization problems, which are solved iteratively backwards from the desired target values to the initial conditions. Both SQP and RSA can be used to solve the NS smaller optimization problems. IDP has been used to compute optimum solutions in different problems for different purposes. For example, it was used to minimize energy consumption and byproduct formation in poly(ethylene terephthalate) processes [176]. It was also used to develop optimum feed rate policies for the simultaneous control of copolymer composition and MWDs in emulsion reactions [36, 37]. [Pg.346]
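The core IDP ideas (piecewise-constant stages, backward sweeps, shrinking search regions) can be sketched in a much-simplified form. The plant model (x' = -x + u), cost function, stage count, and contraction factor below are illustrative choices; full IDP as described by Luus also grids the state space, which is omitted here.

```python
# Much-simplified sketch of the region-reduction idea behind IDP: the horizon
# is split into NS piecewise-constant control stages, each pass sweeps the
# stages backwards, and the control search region contracts between passes.
# All model and tuning values are illustrative.

def simulate_cost(u_stages, x0=0.0, t_final=2.0, n_sub=20):
    """Forward-Euler rollout of x' = -x + u; integral cost (x-1)^2 + 0.1 u^2."""
    dt = t_final / (len(u_stages) * n_sub)
    x, cost = x0, 0.0
    for u in u_stages:
        for _ in range(n_sub):
            cost += ((x - 1.0) ** 2 + 0.1 * u * u) * dt
            x += (-x + u) * dt
    return cost

NS = 4
u = [0.0] * NS                          # initial control policy
region = 2.0                            # initial search-region half-width
history = [simulate_cost(u)]
for _ in range(30):                     # IDP-style passes
    for k in range(NS - 1, -1, -1):     # sweep the stages backwards
        candidates = [u[k] + region * d for d in (-1.0, -0.5, 0.0, 0.5, 1.0)]
        u[k] = min(candidates,
                   key=lambda c: simulate_cost(u[:k] + [c] + u[k + 1:]))
    region *= 0.8                       # contract the region each pass
    history.append(simulate_cost(u))

print(round(history[0], 3), round(history[-1], 3))
```

Because the current control is always among the candidates, the cost history is non-increasing; the shrinking region is what lets the method refine a coarse first pass into a tight final policy.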

The optimal control problem is one of the most difficult optimization problems, as it involves determination of optimal variables that are vectors. There are three methods to solve these problems: the calculus of variations, which results in second-order differential equations; the maximum principle, which adds adjoint variables and adjoint equations; and dynamic programming, which involves partial differential equations. For details of these methods, please refer to [23]. If we can discretize the whole system or use the model as a black box, then we can use NLP techniques; however, this results in discontinuous profiles. Since we need to manipulate the techno-socio-economic policy, we can consider the intermediate and integrated model for this purpose, as it includes economics in the sustainability models. As stated earlier, when we study the increase in per capita consumption, the system becomes unsustainable. Here we present the derivation of techno-socio-economic policies using optimal control applied to the two models. [Pg.196]
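The "discretize and use NLP techniques" route mentioned above can be sketched as follows: parameterize the control as NS piecewise-constant values and hand the resulting finite-dimensional problem to a generic NLP solver. The plant (x' = -x + u), cost, horizon, and the use of SciPy's Nelder-Mead solver are all illustrative assumptions, not the source's method.

```python
# Hedged sketch of control parameterization ("direct" NLP approach): the
# control profile becomes NS decision variables for a generic optimizer.
# Model, cost, and solver choice are illustrative.
import numpy as np
from scipy.optimize import minimize

NS, t_final, n_sub = 5, 2.0, 20
dt = t_final / (NS * n_sub)

def cost(u_stages):
    """Forward-Euler rollout of x' = -x + u; integral cost (x-1)^2 + 0.1 u^2."""
    x, total = 0.0, 0.0
    for u in u_stages:
        for _ in range(n_sub):
            total += ((x - 1.0) ** 2 + 0.1 * u * u) * dt
            x += (-x + u) * dt
    return total

res = minimize(cost, x0=np.zeros(NS), method="Nelder-Mead")
print(round(res.fun, 3))
```

Note the trade-off the excerpt points out: the optimizer only sees NS constant levels, so the recovered control profile is discontinuous at the stage boundaries rather than a smooth function of time.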

The value-based approach significantly improves the effectiveness of procedures for controlling chemical reactions. Optimal control on the basis of the value method is widely used with Pontryagin's maximum principle, while simultaneously calculating the dynamics of the value contributions of individual steps and species in a reaction kinetic model. At the same time, other methods of optimal control are briefly summarized: (a) the calculus of variations, (b) dynamic programming, and (c) nonlinear mathematical programming. [Pg.59]

For the simplest cases, the dynamic programming method leads to analytical solutions. However, with several phase variables and control parameters, the search for optimal solutions becomes an extraordinarily complicated problem. Therefore, in practice the dynamic programming method is applied through numerical computations that rely on powerful computing techniques. [Pg.64]

Despite the obvious success of numerical methods for nonlinear mathematical programming, their weaknesses were discovered early on. The main one is the absence of physicochemical visualization; to some extent this also applies to Bellman's dynamic programming method. Naturally, incomplete information about the nature of the studied process on the way to the optimal result strongly constrains the creative capabilities of a researcher. In particular, identification of the most active control parameters among a variety of candidates is complicated, thus also complicating the solution of the defined problem. [Pg.69]

Simultaneous process and control design using mixed integer dynamic optimization and parametric programming... [Pg.187]





