Big Chemical Encyclopedia


Optimal control theory problem solutions

The above formulation is a well-posed problem in optimal control theory, and its solution can be obtained by applying Pontryagin's Minimum Principle (Sage and White, 1977). [Pg.326]
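For reference, the necessary conditions supplied by the Minimum Principle, stated here for a generic problem of this type (the notation is illustrative, not taken from the excerpt above), read:

```latex
% Minimize J = \phi(x(t_f)) + \int_0^{t_f} L(x,u)\,dt
% subject to \dot{x} = f(x,u), \; x(0) = x_0, \; u(t) \in U.
\begin{align*}
H(x,\lambda,u) &= L(x,u) + \lambda^{\mathsf T} f(x,u)
  && \text{(Hamiltonian)}\\
\dot{x} &= \frac{\partial H}{\partial \lambda}, \qquad x(0) = x_0
  && \text{(state equation)}\\
\dot{\lambda} &= -\frac{\partial H}{\partial x}, \qquad
  \lambda(t_f) = \left.\frac{\partial \phi}{\partial x}\right|_{t_f}
  && \text{(costate equation)}\\
u^{*}(t) &= \arg\min_{u \in U} H\bigl(x^{*}(t), \lambda(t), u\bigr)
  && \text{(minimum condition)}
\end{align*}
```

The split boundary conditions (initial condition on the state, terminal condition on the costate) are what give rise to the two-point boundary-value problems discussed below.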

For the numerical solution of optimal control problems there are basically two well-established approaches: the indirect approach, e.g., via the solution of multipoint boundary-value problems based on the necessary conditions of optimal control theory, and the direct approach, via the solution of constrained nonlinear programming problems based on discretizations of the control and/or the state variables. The application of an indirect method is not advisable if the equations are too complicated or if a moderate accuracy of the numerical solution is commensurate with the model accuracy. Therefore, the easier-to-handle direct approach has been chosen here. Direct collocation methods, see, e.g., Stryk [6], as well as direct multiple shooting methods, see, e.g., Bock and Plitt [1], belong to this approach. In view of forthcoming large-scale problems, we focus here on the direct multiple shooting method, since only the control variables have to be discretized for this method. This leads to lower-dimensional nonlinear programming problems. [Pg.78]
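The essence of the direct approach can be sketched on a toy problem (all problem data below are illustrative, not from the excerpt): the control is discretized into N piecewise-constant values, the dynamics are integrated numerically, and the resulting finite-dimensional problem is handed to an NLP solver. Here a crude finite-difference coordinate descent stands in for the SQP solvers used in practice.

```python
# Direct (single) shooting sketch for the illustrative problem
#   minimize  J = integral_0^1 (x^2 + u^2) dt,   x' = u,   x(0) = 1.
# The control u is discretized into N piecewise-constant values.

N, T = 20, 1.0
h = T / N

def cost(u):
    """Euler-integrate the state and accumulate the running cost."""
    x, J = 1.0, 0.0
    for k in range(N):
        J += (x * x + u[k] * u[k]) * h
        x += h * u[k]          # forward Euler step of x' = u
    return J

def solve(iters=500, step=0.05, eps=1e-6):
    u = [0.0] * N              # initial guess: zero control
    for _ in range(iters):
        for k in range(N):     # finite-difference gradient, one component at a time
            up = list(u)
            up[k] += eps
            g = (cost(up) - cost(u)) / eps
            u[k] -= step * g
    return u

u_opt = solve()
print(cost([0.0] * N), "->", cost(u_opt))   # the optimized cost is lower
```

With zero control the state stays at 1 and the cost is exactly 1.0; the optimizer drives the state toward zero and reduces the cost toward the analytic optimum of roughly tanh(1) ≈ 0.76 for this LQ problem.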

Method of Solution. The fundamental numerical problem of optimal control theory is the solution of the two-point boundary-value problem, which invariably arises from the application of the maximum principle to determine optimal control profiles. The state and... [Pg.332]
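A minimal shooting example shows how such a two-point boundary-value problem is solved numerically (the problem data are illustrative). For minimizing the integral of ½u² with x' = u, x(0) = 0, x(1) = 1, the maximum principle gives H = ½u² + λu, hence u = -λ and λ' = -∂H/∂x = 0, so the single unknown is λ(0):

```python
# Shooting method for the TPBVP arising from the maximum principle:
# guess lambda(0), integrate the state forward, and bisect on the
# endpoint residual x(1) - 1.

def shoot(lam0, n=1000):
    """Integrate x' = u = -lambda forward; return the endpoint error x(1) - 1."""
    x, h = 0.0, 1.0 / n
    for _ in range(n):
        x += h * (-lam0)       # lambda is constant here, so u(t) = -lambda(0)
    return x - 1.0

def bisect(lo=-10.0, hi=10.0, tol=1e-10):
    # shoot() is decreasing in lam0: shoot(lo) > 0 > shoot(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if shoot(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam0 = bisect()
print("lambda(0) =", lam0, " optimal u =", -lam0)   # expect lambda(0) = -1, u = 1
```

In realistic problems the costate equation must be integrated alongside the state and the shooting unknown is a vector, but the structure (guess missing initial conditions, integrate, correct the guess from the endpoint residual) is the same.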

We emphasize that the question of stability of a CA under small random perturbations is in itself an important unsolved problem in the theory of fluctuations [92-94], and the difficulties in solving it are similar to those mentioned above. Thus it is unclear at first glance how an analogy between these two unsolved problems could be of any help. However, as already noted above, the new method for statistical analysis of fluctuational trajectories [60,62,95,112] based on the prehistory probability distribution allows direct experimental insight into the almost deterministic dynamics of fluctuations in the limit of small noise intensity. Using this technique, it turns out to be possible to verify experimentally the existence of a unique solution, to identify the boundary condition on a CA, and to find an accurate approximation of the optimal control function. [Pg.502]

The situation is quite different when inequality constraints are included in the MPC on-line optimization problem. In the sequel, we will refer to inequality-constrained MPC simply as constrained MPC. For constrained MPC, no closed-form (explicit) solution can be written. Because different inequality constraints may be active at each time, a constrained MPC controller is not linear, making the entire closed loop nonlinear. Analyzing and designing constrained MPC systems therefore requires an approach that is not based on linear control theory. We will present the basic ideas in Section III. We will then present some examples that show the interesting behavior that MPC may demonstrate, and we will subsequently explain how MPC theory can conceptually simplify and practically improve MPC. [Pg.145]
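The nonlinearity introduced by active inequality constraints can be seen in the smallest possible case (all numbers below are illustrative, not from the text): for a scalar system x⁺ = a·x + b·u with a horizon-one cost (a·x + b·u)² + r·u² and the input bound |u| ≤ u_max, the on-line QP is one-dimensional, so its solution is simply the unconstrained minimizer projected onto the feasible interval:

```python
# One-dimensional constrained-MPC sketch: the control law is linear
# while the input bound is inactive and saturated once it becomes
# active, so the overall law -- and hence the closed loop -- is nonlinear.

a, b, r, u_max = 1.2, 1.0, 0.1, 1.0

def mpc(x):
    u_unc = -a * b * x / (b * b + r)       # unconstrained minimizer (linear in x)
    return max(-u_max, min(u_max, u_unc))  # projection onto |u| <= u_max

# Linear for small states, saturated for large ones:
for x in (0.5, 1.0, 5.0):
    print(x, "->", mpc(x))
```

With longer horizons and state constraints the active set can change in more intricate ways, but the mechanism is the same: each active-set pattern yields a different affine law, and the pieces join into a nonlinear controller.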

A numerical technique that has become very popular in the control field for the optimization of dynamic problems is the IDP (iterative dynamic programming) technique. For application of the IDP procedure, the dynamic trajectory is first divided into NS piecewise-constant discrete trajectories. Then Bellman's theory of dynamic programming [175] is used to divide the optimization problem into NS smaller optimization problems, which are solved iteratively backwards from the desired target values to the initial conditions. Both SQP and RSA can be used to solve the NS smaller optimization problems. IDP has been used for the computation of optimum solutions in different problems for different purposes. For example, it was used to minimize energy consumption and byproduct formation in poly(ethylene terephthalate) processes [176]. It was also used to develop optimum feed-rate policies for the simultaneous control of copolymer composition and MWDs in emulsion reactions [36, 37]. [Pg.346]
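A heavily simplified sketch of the IDP idea (illustrative problem; a single trajectory stands in for the state grid used in full IDP): the horizon is split into NS piecewise-constant stages, each pass adjusts the stages backwards from the final stage, and the candidate-control region shrinks from pass to pass.

```python
# Simplified IDP sketch for the illustrative problem
#   minimize  J = integral_0^1 (x^2 + u^2) dt,   x' = u,   x(0) = 1.

NS, T = 10, 1.0
h = T / NS

def cost(u):
    """Euler-integrate the state and accumulate the running cost."""
    x, J = 1.0, 0.0
    for k in range(NS):
        J += (x * x + u[k] * u[k]) * h
        x += h * u[k]
    return J

def idp(passes=40, region=1.0, gamma=0.85):
    u = [0.0] * NS
    R = region
    for _ in range(passes):
        for k in reversed(range(NS)):      # backward sweep over the stages
            best_u, best_J = u[k], cost(u)
            for cand in (u[k] - R, u[k] + R):
                trial = list(u)
                trial[k] = cand
                J = cost(trial)
                if J < best_J:             # keep the best candidate control
                    best_u, best_J = cand, J
            u[k] = best_u
        R *= gamma                         # shrink the search region each pass
    return u

u_idp = idp()
print(cost([0.0] * NS), "->", cost(u_idp))
```

Full IDP additionally carries a grid of state values per stage so that the backward sweep approximates the dynamic-programming recursion rather than a single-trajectory search, but the region-shrinking iteration shown here is its characteristic ingredient.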

Nonlinear mathematical programming. For a wide variety of problems concerning the optimal control of chemical-engineering processes, mathematical programming (exploratory methods) is the most routine tool for obtaining numerical solutions [13,16-20,24,25]. In contrast to classical optimization theory, in mathematical programming special... [Pg.64]

Theory and numerical methods for the solution of optimal control problems have reached a high standard. There is a wide range of applications, the most challenging of which are from the fields of aerospace engineering and robotics; see, for example, the survey paper [5]. [Pg.75]

Mathematical theory and state-of-the-art numerical methods offer great power for computing optimal solutions for process control in chemical engineering, a potential that, compared with other fields, remains largely untapped. The investigation of the optimal temperature control of a semi-batch polymerization reactor, still a comparatively simple problem, might show some... [Pg.79]










© 2024 chempedia.info