
Optimal control law

An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. Euler-Lagrange and Pontryagin's equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important special case of the linear time-invariant plant with a quadratic performance criterion (called the performance index), for which it takes the form of the matrix Riccati (1724) equation. This produces an optimal control law that is a linear function of the state-vector components and is always stable, provided the system is controllable. [Pg.272]
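
For reference, the linear-quadratic case referred to above can be written out explicitly (a standard textbook statement with the usual notation assumed: x is the state vector, u the control, and Q, R symmetric weighting matrices):

```latex
% Linear time-invariant plant with quadratic performance index
\dot{x} = A x + B u, \qquad
J = \int_0^{\infty} \left( x^{\top} Q x + u^{\top} R u \right) dt

% Algebraic matrix Riccati equation for the symmetric matrix P
A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0

% Resulting optimal control law: linear in the state
u = -K x, \qquad K = R^{-1} B^{\top} P
```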

The Linear Quadratic Regulator (LQR) provides an optimal control law for a linear system with a quadratic performance index. [Pg.274]
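
A minimal numerical sketch of the LQR computation, assuming SciPy is available (the double-integrator plant below is an illustrative placeholder, not an example from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting (positive semidefinite)
R = np.array([[1.0]])  # control weighting (positive definite)

# Solve the algebraic matrix Riccati equation for P
P = solve_continuous_are(A, B, Q, R)

# Optimal control law u = -K x, linear in the state
K = np.linalg.inv(R) @ B.T @ P

# Provided (A, B) is controllable, the closed loop A - BK is stable
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

The gain K is computed once, offline; the law u = -Kx is then evaluated at every instant from the current state.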

Using a direct search technique on the performance index and the steepest-ascent method, Seinfeld and Kumar (1968) reported computational results on non-linear distributed systems. Computational results were also reported by Paynter et al. (1969). Both the gradient and the accelerated gradient methods were used and reported (Beveridge and Schechter, 1970; Wilde, 1964). All the reported computational results were obtained through discretization. However, a property of hyperbolic systems makes them solvable without discretization. This property was first exploited by Chang and Bankoff (1969), who used the method of characteristics (Lapidus, 1962a,b) to synthesize the optimal control laws of hyperbolic systems. [Pg.218]

In most optimal control problems, it is not possible to obtain optimal control laws, i.e., optimal controls expressed as explicit functions of the system state. Note that the system state is the set of system properties, such as temperature, pressure, and concentration, which change with independent variables like time and space. In the absence of an optimal control law, the optimal control must be determined all over again whenever the initial system state changes. [Pg.20]

Controls that are not given by optimal control laws are often called open-loop controls. They are simply functions of the independent variables, specific to the initial system state. The application of open-loop controls is termed open-loop control, which is the subject matter of this book. [Pg.20]

Sometimes it is possible to derive optimal control laws when the underlying mathematical models describing the system are simple enough. In many problems, though, obtaining optimal control laws requires drastic simplifications of the underlying mathematical models, thereby compromising the accuracy of control. [Pg.20]

Nonetheless, when they can be determined, optimal control laws are easier to implement for the control of continuous processes, where inputs are susceptible to change during long operation periods. Optimal controls are then readily obtained from the system state and applied, or fed back, to the system, as shown in Figure 1.12. These controls are called feedback controls, and the control strategy is termed feedback control, which is a type of closed-loop control. [Pg.20]
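
The distinction can be made concrete with a toy scalar plant (a sketch under assumed dynamics, not an example from the text): the open-loop control below is a precomputed function of time tailored to one initial state, while the feedback control is a function of whatever state is measured.

```python
import numpy as np

# Toy scalar plant x' = a*x + u, integrated with Euler steps
a, dt, steps = 0.5, 0.01, 1000
k = 2.0          # feedback gain, chosen so that a - k < 0 (stable)
x0_design = 1.0  # initial state the open-loop control was designed for

def simulate(x_init, control):
    """control(t, x) returns the input; open-loop laws ignore x."""
    x = x_init
    for i in range(steps):
        t = i * dt
        x += dt * (a * x + control(t, x))
    return x

# Open-loop: u(t) = -k * x0 * exp((a - k) t), which reproduces the
# feedback trajectory only if the plant really starts at x0_design
open_loop = lambda t, x: -k * x0_design * np.exp((a - k) * t)
# Feedback: u = -k x, valid for any initial state
feedback = lambda t, x: -k * x

print(simulate(1.0, open_loop), simulate(1.0, feedback))  # both decay
print(simulate(2.0, open_loop), simulate(2.0, feedback))  # open-loop diverges
```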

Chapter 12 considers the combination of optimal control with state and parameter estimation. The separation principle is developed, which states that a control problem with measurement and model uncertainty can be treated by first performing a Kalman filter estimate of the states and then developing the optimal control law based upon the estimated states. For linear regulator problems, this is known as the linear quadratic Gaussian (LQG) problem. The inclusion of model parameter identification results in adaptive control algorithms. [Pg.2]
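
A schematic illustration of the separation principle (a discrete-time sketch assuming SciPy; the plant and noise covariances below are made-up placeholders): the regulator gain and the steady-state Kalman gain are designed independently, and the control is applied to the filter's estimate rather than to the true state.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical discrete-time plant: x[k+1] = A x + B u + w,  y = C x + v
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])
W = 1e-4 * np.eye(2)    # process-noise covariance (assumed)
V = np.array([[1e-2]])  # measurement-noise covariance (assumed)
Q, R = np.eye(2), np.array([[1.0]])

# Separation principle: design the regulator and the estimator independently.
P = solve_discrete_are(A, B, Q, R)                 # LQR Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # regulator gain
S = solve_discrete_are(A.T, C.T, W, V)             # filter Riccati (dual problem)
L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + V)   # Kalman gain (predictor form)

x = np.array([[1.0], [0.0]])   # true state (unknown to the controller)
xh = np.zeros((2, 1))          # filter estimate
for _ in range(200):
    y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]), (1, 1))  # noisy measurement
    u = -K @ xh                                            # control uses the ESTIMATE
    xh = A @ xh + B @ u + L @ (y - C @ xh)                 # predictor-form filter update
    x = A @ x + B @ u + rng.multivariate_normal([0, 0], W).reshape(2, 1)

print("true state:", x.ravel(), " estimate:", xh.ravel())
```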

According to this method, an artificial earthquake record is modeled and the response of an undamped structure to this earthquake is obtained. After that, dampers are connected to the structure at all floors and the structural response to the artificial ground motion is calculated. The control forces correspond to the requirements of the optimal control law, and the number of actively controlled devices at each floor is obtained from the maximum force that a single device can develop. The energy at each floor is calculated and its share of the total energy over the entire structure is obtained. It is further assumed that the cost of the dampers and the energy required to activate them are limited. Hence, dampers are placed at the most effective positions, and their number is increased until the desired reduction in seismic response is achieved. [Pg.237]
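
The placement loop described above can be sketched as a simple greedy procedure (a pure sketch: the energy fractions and the response model below are toy stand-ins, not the method of the cited work):

```python
# Toy greedy damper-placement loop in the spirit of the procedure above.
# energy_fraction[i] stands in for floor i's share of the total energy
# under the artificial ground motion; respond() is a crude response model.
energy_fraction = [0.05, 0.10, 0.15, 0.30, 0.40]  # made-up values, sum to 1
max_devices_per_floor = 4                         # limited by cost/energy
target_reduction = 0.60       # desired fractional cut in seismic response

def respond(devices):
    """Toy model: each device on floor i trims response ~ its energy share."""
    cut = sum(0.25 * energy_fraction[i] * n for i, n in enumerate(devices))
    return max(0.0, 1.0 - cut)    # 1.0 = undamped response

devices = [0] * len(energy_fraction)
while respond(devices) > 1.0 - target_reduction:
    # place the next device on the most effective floor that still has room
    candidates = [i for i, n in enumerate(devices) if n < max_devices_per_floor]
    if not candidates:
        break                      # budget exhausted before the target was met
    best = max(candidates, key=lambda i: energy_fraction[i])
    devices[best] += 1

print("devices per floor:", devices, " response:", respond(devices))
```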

In Ref. 24, an approximative approach to quantify the nonlinearity of the controller u = k(x) is used, the so-called Optimal Control Structure (OCS). In this presentation, we use the more rigorous approach introduced in Ref. 25, which is based on the following definition: the optimal control law (OCL) nonlinearity measure for a given control problem is defined as the quantity... [Pg.87]

Fig. 4. Definition of the operator N_OCL and setup for the optimal control law (OCL) nonlinearity measure.
In this section, the optimal control law (OCL) nonlinearity measure has been introduced. The two important ideas of control-relevant nonlinearity (or control-problem nonlinearity, or controller nonlinearity) that have been adapted from Ref. 24 and further expanded are (1) the three-fold problem structure comprising plant dynamics, region of operation, and cost criterion, and (2) the concept of using optimal control theory to obtain a benchmark controller for a wide class of nonlinear systems. [Pg.89]

Important advantages of the presented approach are, firstly, that exact solutions of the optimal control problem are considered and, secondly, that the optimal static state feedback control law is compared to linear static state feedback, respecting the static nature of the optimal control law. This makes it possible to evaluate the nonlinearity of the optimal control law in closed-loop operation without having to compute the feedback law itself. A further advantage is that the OCL nonlinearity can be computed for stable as well as unstable systems, since the optimal control law is a static relation in either case. [Pg.89]

