Big Chemical Encyclopedia


Optimal control problems general

An optimal control system seeks to maximize the return from a system for the minimum cost. In general terms, the optimal control problem is to find a control u which causes the system... [Pg.272]

Therefore, for large optimal control problems, the efficient exploitation of the structure (to obtain O(NE) algorithms) still remains an unsolved problem. As seen above, the structure of the problem can be complicated greatly by general inequality constraints. Moreover, the number of these constraints will also grow linearly with the number of elements. One can, in fact, formulate an infinite number of constraints for these problems to keep the profiles bounded. Of course, only a small number will be active at the optimal solution; thus, adaptive constraint addition algorithms can be constructed for selecting active constraints. [Pg.249]

Optimal Control. Optimal control is an extension of the principles of parameter optimization to dynamic systems. In this case one wishes to optimize a scalar objective function, which may be a definite integral of some function of the state and control variables, subject to a constraint, namely a dynamic equation such as Equation (1). The solution to this problem requires the use of time-varying Lagrange multipliers. For a general objective function and state equation, an analytical solution is rarely forthcoming. However, a specific case of the optimal control problem does lend itself to analytical solution, namely a state equation described by Equation (1) and a quadratic objective function given by... [Pg.104]

The LQP is the only general optimal control problem for which there exists an analytical representation of the optimal control in closed-loop, or feedback, form. For the LQP, the optimal controller gain matrix K becomes a constant matrix as tf → ∞. K is independent of the initial conditions, so it can be used for any initial condition displacement, except those which, due to model nonlinearities, invalidate the computed state matrices. [Pg.105]
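The constant-gain property can be illustrated with a minimal sketch. The scalar discrete-time system, weights, and numbers below are illustrative assumptions, not taken from the source; the point is that iterating the Riccati recursion to convergence yields a steady-state feedback gain K, corresponding to the tf → ∞ case described above.

```python
# Scalar discrete-time LQR sketch: iterate the Riccati recursion until it
# converges, so the feedback gain K settles to a constant (the tf -> infinity
# case). System x[k+1] = a*x[k] + b*u[k], stage cost q*x^2 + r*u^2.

def lqr_gain(a, b, q, r, tol=1e-12, max_iter=10_000):
    """Return the steady-state gain K for the feedback law u = -K*x."""
    p = q  # initialize the recursion with the terminal cost weight
    for _ in range(max_iter):
        k = (a * b * p) / (r + b * b * p)
        p_next = q + a * a * p - a * b * p * k
        if abs(p_next - p) < tol:
            break
        p = p_next
    return (a * b * p) / (r + b * b * p)

K = lqr_gain(a=0.9, b=1.0, q=1.0, r=1.0)
# The closed loop x[k+1] = (a - b*K)*x[k] is stable: |a - b*K| < 1
print(K, abs(0.9 - 1.0 * K) < 1.0)
```

Because K depends only on (a, b, q, r), the same gain serves any initial condition, matching the statement above.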

We have used sensitivity equation methods (Leis and Kramer, 1985) for gradient evaluation as these are simple and efficient for problems with few parameters and constraints. In general, the balance in efficiency between sensitivity and adjoint methods depends on the type of problem being addressed. Adjoint methods are particularly advantageous for optimal control problems in which the inputs are represented as a large number of piecewise constant input values and few interior point constraints exist. Sensitivity methods are preferable for problems with few parameters and many constraints. [Pg.334]
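A toy illustration of the sensitivity-equation idea (the one-state example, step sizes, and parameter value are assumptions for illustration, not from Leis and Kramer): for dx/dt = -p·x, differentiating the state equation with respect to p gives an auxiliary ODE for the sensitivity s = dx/dp, and integrating both together yields the gradient of an endpoint objective J = x(tf).

```python
import math

# Sensitivity-equation sketch: for dx/dt = -p*x, the sensitivity s = dx/dp
# obeys ds/dt = -x - p*s with s(0) = 0 (the initial condition does not
# depend on p). Integrating x and s together gives dJ/dp for J = x(tf).

def gradient_via_sensitivity(p, x0=1.0, tf=1.0, n=100_000):
    dt = tf / n
    x, s = x0, 0.0
    for _ in range(n):
        # tuple assignment: the s-update uses the pre-step x (explicit Euler)
        x, s = x + dt * (-p * x), s + dt * (-x - p * s)
    return x, s

x_tf, dJ_dp = gradient_via_sensitivity(p=2.0)
# Analytic check: x(tf) = exp(-p*tf), so dx/dp at tf = -tf*exp(-p*tf)
print(x_tf, dJ_dp, -1.0 * math.exp(-2.0))
```

With many parameters, one such auxiliary ODE is needed per parameter, which is why adjoint methods win when inputs are finely parameterized, as noted above.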

In Figure 1.11 we see a general trend that the steam flow rate must follow to optimize the profit O. Therefore a control system is needed which will (1) compute the best steam flow rate at every time during the reaction period and (2) adjust the valve (inserted in the steam line) so that the steam flow rate takes its best value [as computed in (1)]. Such problems are known as optimal control problems. [Pg.373]

The terms Pmax and Pmax denote the limiting capacities of the inspiratory muscles, and n is an efficiency index. The optimal Pmus(t) output is found by minimization of J subject to the constraints set by the chemical and mechanical plants, Equation 11.1 and Equation 11.9. Because Pmus(t) is generally a continuous time function with sharp phase transitions, this amounts to solving a difficult dynamic nonlinear optimal control problem with piecewise smooth trajectories. An alternative approach adopted by Poon and coworkers [1992] is to model Pmus(t) as a biphasic function... [Pg.184]

In an optimal control problem, we arrive at an integral objective functional having the general form... [Pg.44]
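A widely used general statement of such a problem is the Bolza form, sketched here with standard textbook notation (the symbols are assumed for illustration, not quoted from the source): a terminal cost plus an integral cost, constrained by the state equation.

```latex
\min_{u(t)} \; J = \varphi\bigl(x(t_f)\bigr)
  + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\, dt
\quad \text{subject to} \quad
\dot{x} = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0 .
```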

In general, for the simplest optimal control problem with m control functions and n state variables, the objective functional to be optimized is... [Pg.67]

It must be noted that the necessary conditions are frequently nonlinear and cannot be solved analytically. Therefore, optimal control problems are generally solved using numerical algorithms, the focus of Chapter 7. [Pg.71]

The above result can be readily generalized for the optimal control problem in which J is dependent on vectors y and u of state and control functions and is subject to m constraints, Ki = ki, i = 1,2,..., m. In this case, the Lagrange multipliers are given by... [Pg.109]

The above cases show that Pontryagin's minimum principle provides an overarching necessary condition for the minimum. Appreciating this fact, we present a general optimal control problem involving a wide class of controls for which we will derive Pontryagin's minimum principle. [Pg.126]
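As a sketch of what the principle asserts (standard textbook form with assumed notation, not quoted from the source), define the Hamiltonian H(x, u, λ, t) = L(x, u, t) + λᵀf(x, u, t); then along an optimal trajectory,

```latex
\dot{x} = \frac{\partial H}{\partial \lambda}, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
u^{*}(t) = \arg\min_{u \in U} H\bigl(x^{*}(t), u, \lambda^{*}(t), t\bigr),
```

i.e., the optimal control minimizes the Hamiltonian pointwise over the admissible set U, which covers bounded and discontinuous controls that the classical Euler–Lagrange conditions cannot.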

Optimal control problems involving multiple integrals are constrained by partial differential equations. A general theory similar to Pontryagin's minimum principle is not available to handle these problems. To find the necessary conditions for the minimum in these problems, we assume that the variations of the involved integrals are weakly continuous and find the equations that eliminate the variation of the augmented objective functional. [Pg.178]

Dynamic optimization problems, also referred to as optimal control problems, are generally nonlinear, and most of them do not have analytical solutions. There are two main numerical approaches for solving dynamic optimization problems: indirect and direct methods. [Pg.546]

The nonlinear programming (NLP) technique is used to solve the problems resulting from synthesis optimisation. This NLP approach involves transforming the general optimal control problem, which is of infinite dimension (the control variables are time-dependent), into a finite-dimensional NLP problem by means of control vector parameterisation. According to this parameterisation technique, the control variables are restricted to a predefined form of temporal variation, often referred to as a basis function: Lagrange polynomials (piecewise constant, piecewise linear) or exponential-based functions. A successive quadratic programming method is then applied to solve the resultant NLP. [Pg.642]
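A minimal control-vector-parameterization sketch (the example problem, numbers, and the simple coordinate search standing in for SQP are all assumptions for illustration): the control is restricted to be piecewise constant on N segments, turning the infinite-dimensional problem into an N-variable parameter optimization.

```python
# CVP sketch: piecewise-constant control on N segments for the integrator
# dx/dt = u, minimizing a control-effort integral plus a terminal penalty.
# A plain coordinate search replaces the SQP solver mentioned in the text.

def objective(u_segments, x0=1.0, tf=1.0, steps_per_seg=50):
    n = len(u_segments)
    dt = tf / (n * steps_per_seg)
    x, cost = x0, 0.0
    for u in u_segments:               # dx/dt = u on each segment
        for _ in range(steps_per_seg):
            cost += 0.1 * u * u * dt   # running control-effort penalty
            x += u * dt
    return cost + x * x                # terminal penalty x(tf)^2

def coordinate_search(u, step=1.0, shrink=0.5, rounds=40):
    best = objective(u)
    for _ in range(rounds):
        for i in range(len(u)):
            for delta in (step, -step):
                trial = u[:]
                trial[i] += delta
                c = objective(trial)
                if c < best:
                    u, best = trial, c
        step *= shrink                 # refine the search as it converges
    return u, best

u_opt, J_opt = coordinate_search([0.0, 0.0, 0.0, 0.0])
print(u_opt, J_opt)
```

Refining the segment grid (larger N) tightens the approximation to the true infinite-dimensional optimum, which is the essence of the parameterisation described above.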

However, the accurate treatment of state variable inequality constraints presents a few problems. Parameter optimization problems obtained by discretizing the control profile generally allow inequality constraints to be active only at a finite set of points, simply because a finite set of decisions cannot influence an infinite number of values (i.e., cannot keep the state fixed at every point over a finite time period). [Pg.238]

