Solving an Optimal Control Problem

Although it is a little early, it is worthwhile to know how we can actually solve an optimal control problem. As indicated earlier, the answer lies in the necessary conditions we have established above. They must be satisfied by the optimal control functions. [Pg.71]

It must be noted that the necessary conditions are frequently nonlinear and cannot be solved analytically. Therefore, optimal control problems are generally solved using numerical algorithms, the focus of Chapter 7. [Pg.71]

At this point, we outline a simple numerical algorithm, which begins by guessing the control functions and involves the following steps: [Pg.71]

forward integration of state equations using the initial conditions [Pg.71]

backward integration of costate equations using the final conditions [Pg.71]
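The two sweeps above, combined with a control update from the stationarity condition, form the classical forward-backward sweep. A minimal sketch for an illustrative linear-quadratic problem follows; the dynamics, cost, horizon, and relaxation factor are assumptions chosen for the example, not taken from the source:

```python
import numpy as np

# Forward-backward sweep for  min ∫0^T (y^2 + u^2) dt,  dy/dt = -y + u,  y(0) = 1.
# Hamiltonian H = y^2 + u^2 + lam*(-y + u); costate: dlam/dt = -2y + lam, lam(T) = 0.
# Stationarity dH/du = 2u + lam = 0 suggests the update u = -lam/2.
T, n = 2.0, 2001
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
u = np.zeros(n)                      # initial guess for the control

for it in range(200):
    # forward Euler integration of the state equation from the initial condition
    y = np.empty(n); y[0] = 1.0
    for k in range(n - 1):
        y[k + 1] = y[k] + dt * (-y[k] + u[k])
    # backward Euler integration of the costate equation from the final condition
    lam = np.empty(n); lam[-1] = 0.0
    for k in range(n - 1, 0, -1):
        lam[k - 1] = lam[k] - dt * (-2.0 * y[k] + lam[k])
    u_new = -lam / 2.0               # from the stationarity condition
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = 0.5 * u + 0.5 * u_new        # relaxation keeps the iteration stable
```

The relaxation step is one common way to make the sweep converge; without it the plain substitution u = -lam/2 can oscillate.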


As we know, the vertical displacements of the plate defined from (2.7), (2.8) can be found as a limit of solutions to the problem (2.9)-(2.11). Two questions arise in this case. The first is the following: is it possible to solve an optimal control problem like (2.19) when w is defined from (2.9)-(2.11)? The second question concerns relationships between solutions of (2.19) and those of the regularized optimal control problem. Our goal in this subsection is to answer these questions. [Pg.75]

When solving an optimal control problem, it has to be kept in mind that several local optima may exist. Consider for example a problem with a single control function. The objective functional value may be locally optimal, i.e., optimal only in a vicinity of the obtained optimal control function. In another location within the space of all admissible control functions, the objective functional may again be locally optimal, corresponding to some other optimal control function. This new optimal objective functional value may be better or worse than, or even the same as, the previous one. [Pg.73]
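A standard practical response to multiple local optima is a multistart: run a local optimizer from several initial guesses and keep the best result. A toy sketch with a one-dimensional objective that has two local minima (the objective itself is an invented illustration, not from the source):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical objective with two local minima near u = -1 and u = +1;
# the tilt term 0.3*u makes their objective values differ.
f = lambda u: (u[0]**2 - 1.0)**2 + 0.3 * u[0]

results = []
for u0 in np.linspace(-2.0, 2.0, 9):          # spread of starting guesses
    r = minimize(f, x0=[u0])
    results.append((r.x[0], r.fun))

best = min(results, key=lambda p: p[1])        # keep the best local optimum
```

Different starting guesses land in different basins; only comparing the collected optima reveals which local optimum is best.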

In practice, we solve an optimal control problem with increasing values of N up to a limit beyond which either there is no significant difference in the solution or the computations become very intensive. Note that the larger N is, the larger the number of discrete control values to be optimized, and the harder the problem is to solve. [Pg.191]
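This refinement loop can be sketched with a piecewise-constant control on N intervals; as N doubles, the optimal objective should settle. The problem, horizon, and solver choices below are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import solve_ivp

# Refine a piecewise-constant control on N intervals for the illustrative
# problem  min ∫0^T (y^2 + u^2) dt,  dy/dt = -y + u,  y(0) = 1,
# and watch the optimal objective settle as N grows.
T = 2.0

def objective(p):
    N = len(p)
    edges = np.linspace(0.0, T, N + 1)
    def rhs(t, z):                       # z = [y, running cost]
        u = p[min(np.searchsorted(edges, t, side='right') - 1, N - 1)]
        return [-z[0] + u, z[0]**2 + u**2]
    sol = solve_ivp(rhs, [0.0, T], [1.0, 0.0], max_step=T / 200)
    return sol.y[1, -1]                  # accumulated cost at t = T

J = {}
for N in (1, 2, 4, 8):
    r = minimize(objective, np.zeros(N), method='Nelder-Mead',
                 options={'xatol': 1e-6, 'fatol': 1e-9, 'maxiter': 4000})
    J[N] = r.fun
```

Because each coarser control grid is nested in the finer ones here, the optimal objective is non-increasing in N, and the shrinking gap between successive values signals when further refinement is no longer worthwhile.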

This is a simple method for solving an optimal control problem with inequality constraints. As the name suggests, the method penalizes the objective functional in proportion to the violation of the constraints, which are not enforced directly. A constrained problem is solved using successive applications of an optimization method with increasing penalties for constraint violations. This strategy gradually leads to the solution, which satisfies the inequality constraints. [Pg.201]
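The successive-penalty idea can be sketched on a scalar toy problem (the problem and penalty schedule are invented for illustration):

```python
from scipy.optimize import minimize_scalar

# Penalty-function sketch on a toy problem:  min (u - 2)^2  s.t.  u <= 1.
# The constraint is folded into the objective as rho * max(0, u - 1)^2,
# and rho is increased until the violation is negligible.
def penalized(u, rho):
    return (u - 2.0)**2 + rho * max(0.0, u - 1.0)**2

u = 0.0
for rho in [1.0, 10.0, 100.0, 1000.0]:
    r = minimize_scalar(lambda v: penalized(v, rho))
    u = r.x   # violation shrinks roughly like 1/rho for a quadratic penalty
```

Each unconstrained solve slightly violates the constraint; for this quadratic penalty the minimizer is (2 + rho)/(1 + rho), which approaches the constrained solution u = 1 as rho grows.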

Steady state operability is a necessary but not sufficient requirement for a well-designed plant, as the dynamic characteristics should also be considered. Dynamic operability is examined using a dynamic model of the process and considers whether a given disturbance will be rejected quickly, whether a set-point change can be implemented within a given time interval, or both. This is addressed by solving an optimal control problem to find the minimum time within which the process can respond to a disturbance or move to a new operating point with the available ranges of inputs. Such performance represents the best possible performance of any feedback controller and, similar to the steady state case, identifies the inherent operability characteristics of the process. [Pg.122]

There are a number of different methods to solve an optimal control problem, as discussed in the chapter on optimization and optimal control. Benavides and Diwekar (2011, 2013) [32, 34] used the maximum principle to solve the maximum concentration problem where the batch time was fixed at 100 minutes. Figure 3.10 shows the concentration profile for the base case versus the profile obtained using the optimal temperature profile shown in Figure 3.11. It can be seen that the optimal concentration of methyl ester is 0.7944 mol/L, while at constant temperature the maximum concentration is 0.7324 mol/L (an 8.46% gain). Alternatively, if we fix the concentration at 0.7324 mol/L, the required reaction time is 69.5% less than in the base case. [Pg.37]

Here, the last two equations define the flow rate and the mean residence time, respectively. This formulation is an optimal control problem, where the control profiles are q(a), f(a), and r(a). The solution to this problem will give us a lower bound on the objective function for the nonisothermal reactor network along with the optimal temperature and mixing profiles. Similar to the isothermal formulation (P3), we discretize (P6) based on orthogonal collocation (Cuthrell and Biegler, 1987) on finite elements, as the differential equations can no longer be solved offline. This type of discretization leads to a reactor network more... [Pg.267]

Sometimes the stationarity condition ∂H/∂u = 0 of an optimal control problem can be solved to obtain an explicit expression for u in terms of the state y and the costate λ. That expression, when substituted into the state and costate equations, couples them. Thus, the state equations become dependent on λ and must be integrated simultaneously with the costate equations. The simultaneous integration constitutes a two-point boundary value problem in which... [Pg.223]
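Such a coupled two-point boundary value problem can be handed to a collocation-based BVP solver. A sketch for an illustrative linear-quadratic problem (the dynamics and cost are assumptions for the example):

```python
import numpy as np
from scipy.integrate import solve_bvp

# TPBVP sketch for  min ∫0^T (y^2 + u^2) dt,  dy/dt = -y + u,  y(0) = 1.
# Stationarity dH/du = 2u + lam = 0 gives u = -lam/2 explicitly; substituting
# it couples the state and costate equations:
#   dy/dt   = -y - lam/2,    y(0) = 1      (initial condition on the state)
#   dlam/dt = -2y + lam,     lam(T) = 0    (final condition on the costate)
T = 2.0

def rhs(t, z):
    y, lam = z
    return np.vstack([-y - lam / 2.0, -2.0 * y + lam])

def bc(z0, zT):
    return np.array([z0[0] - 1.0, zT[1]])   # y(0) = 1 and lam(T) = 0

t = np.linspace(0.0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
u = -sol.y[1] / 2.0                          # recover the optimal control
```

The boundary conditions are split between the two ends of the horizon, which is exactly why a BVP solver (rather than plain forward integration) is needed here.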

In this section, an approach to solving the optimal control problem is introduced for reactor-separator processes. The approach involves the simultaneous determination of the batch times and size factors for both of the process units. Furthermore, the interplay between the two units involves trade-offs between them that are adjusted in the optimization. It should be noted that simpler models, than in normal practice, are used here to demonstrate the concept and, in the first example, provide an analytical solution that is obtained with relative ease. [Pg.391]

On the other hand, the optimal control problem with a discretized control profile can be treated as a nonlinear program. The earliest studies come under the heading of control vector parameterization (Rosenbrock and Storey, 1966), with a representation of u(t) as a polynomial or piecewise constant function. Here the model is solved repeatedly in an inner loop while the parameters representing u(t) are updated on the outside. While hill climbing algorithms were used initially, recent efficient and sophisticated optimization methods require techniques for accurate gradient calculation from the DAE model. [Pg.218]

Unlike parameter optimization, the optimal control problem has degrees of freedom that increase linearly with the number of finite elements. Here, for problems with many finite elements, the decomposition strategy for SQP becomes less efficient. As an alternative, we discussed the application of Newton-type algorithms for unconstrained optimal control problems. Through the application of Riccati-like transformations, as well as parallel solvers for banded matrices, these problems can be solved very efficiently. However, the efficient solution of large optimal control problems with... [Pg.250]

In principle, a laser control field ε(t) could be designed with the evolution reliably producing an acceptable value for ⟨ψf|O|ψf⟩, where O is a chosen observable operator. This design problem may be best treated variationally, seeking an optimal control ε(t) for this purpose [14,15]. The practical implementation of quantum optimal control theory (OCT) poses challenging numerical tasks due to the need to repeatedly solve the Schrödinger equation, Eq. ... [Pg.80]

The basins of attraction of the coexisting CA (strange attractor) and SC (stable cycle) are shown in Fig. 14 for the Poincaré cross-section ωF t = 0.6π (mod 2π) in the absence of noise [169]. The value of the maximal Lyapunov exponent for the CA is 0.0449. The presence of the control function effectively doubles the dimension of the phase space (compare (35) and (37)) and changes its geometry. In the extended phase space the attractor is connected to the basin of attraction of the stable limit cycle via an unstable invariant manifold. It is precisely the complexity of the structure of the phase space of the auxiliary Hamiltonian system (37) near the nonhyperbolic attractor that makes it difficult to solve the energy-optimal control problem. [Pg.504]

The last approach, based on discretization techniques, has received major attention and is considered an efficient solution method. The concept of this approach is to transform the original optimal control problem into a finite-dimensional optimization problem, typically a nonlinear programming problem (NLP). The optimal control solution is then obtained by applying a standard NLP solver directly to the optimization problem. For this reason, the method is known as a direct method. The transformation can be made by discretizing either only the control variables (partial discretization) or both the state and control variables (complete discretization). Based on this con-... [Pg.105]
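A complete-discretization sketch, where both states and controls become NLP variables and the ODE appears as equality (defect) constraints; the problem, grid, and solver below are assumptions chosen for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Complete discretization of  min ∫0^T (y^2 + u^2) dt,  dy/dt = -y + u,
# y(0) = 1, using implicit Euler on N intervals.  Decision vector
# z = [y_0..y_N, u_1..u_N]; the dynamics become equality constraints.
T, N = 2.0, 40
dt = T / N

def unpack(z):
    return z[:N + 1], z[N + 1:]

def obj(z):
    y, u = unpack(z)
    return dt * np.sum(y[1:]**2 + u**2)          # rectangle-rule cost

def defects(z):
    y, u = unpack(z)
    d = y[1:] - y[:-1] - dt * (-y[1:] + u)       # implicit Euler residuals
    return np.concatenate([[y[0] - 1.0], d])     # plus the initial condition

z0 = np.concatenate([np.ones(N + 1), np.zeros(N)])
res = minimize(obj, z0, method='SLSQP',
               constraints={'type': 'eq', 'fun': defects},
               options={'maxiter': 500})
y_opt, u_opt = unpack(res.x)
```

Unlike partial discretization, no inner ODE solve is needed: the integrator is baked into the constraint set, and the NLP solver converges states and controls simultaneously.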

The terms Pmax and Ṗmax denote the limiting capacities of the inspiratory muscles, and n is an efficiency index. The optimal Pmus(t) output is found by minimization of J subject to the constraints set by the chemical and mechanical plants, Equation 11.1 and Equation 11.9. Because Pmus(t) is generally a continuous time function with sharp phase transitions, this amounts to solving a difficult dynamic nonlinear optimal control problem with piecewise smooth trajectories. An alternative approach adopted by Poon and coworkers [1992] is to model Pmus(t) as a biphasic function... [Pg.184]

Hence, to solve the problem, we need to first obtain an explicit solution y = y(u) and then substitute it into the expression for F. However, such solutions do not exist for most optimal control problems, which are typically constrained by highly non-linear state equations. [Pg.59]
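In the rare case where the explicit solution y = y(u) does exist, the substitution reduces the problem to ordinary minimization. A toy sketch (both F and the assumed explicit solution are invented for illustration):

```python
import sympy as sp

# Suppose F = y^2 + u^2 and the state equation happens to have the
# explicit solution y = 1 - u.  Substituting eliminates y entirely.
u = sp.symbols('u', real=True)
F = (1 - u)**2 + u**2                    # F after substituting y = 1 - u
u_opt = sp.solve(sp.diff(F, u), u)[0]    # ordinary stationarity in u
```

Setting dF/du = 4u - 2 = 0 gives u = 1/2; no costate or boundary value problem is needed once y has been eliminated.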

When solving an inequality-constrained optimal control problem numerically, it is impossible to determine exactly which constraints are active. The reason is that one cannot obtain a μ exactly equal to zero. This difficulty is surmounted by considering a constraint to be active if the corresponding μ < α, where α is a small positive number chosen according to the problem. Slack variables may be used to convert inequalities into equalities and utilize the Lagrange Multiplier Rule. [Pg.115]
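The slack-variable conversion can be sketched symbolically on a toy problem (the problem itself is an invented illustration):

```python
import sympy as sp

# Slack-variable sketch: the inequality x >= 1 in  min x^2  is rewritten as
# the equality  1 - x + s^2 = 0  and handled with a Lagrange multiplier.
x, s, lam = sp.symbols('x s lam', real=True)
L = x**2 + lam * (1 - x + s**2)          # Lagrangian of the converted problem
eqs = [sp.diff(L, v) for v in (x, s, lam)]
sols = sp.solve(eqs, [x, s, lam], dict=True)
# the constrained minimum appears as the stationary point x = 1, s = 0, lam = 2
```

A zero slack (s = 0) marks the constraint as active; a nonzero slack would force its multiplier to zero by the stationarity condition 2·lam·s = 0.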

In this chapter, we first describe how to solve an optimal periodic control problem. Next, we derive the π criterion to determine whether a better periodic operation is possible in the vicinity of an optimal steady state operation. [Pg.235]

The performance of this model-based parametric controller is optimal in terms of the given performance criteria, the plant model, and the imposed constraints. The implementation of the parametric controller is based merely on simple function evaluations rather than solving an optimization problem on-line, which makes it attractive for a wide range of systems. A summary of the principle of parametric controllers is shown in Figure 6. [Pg.202]

Processing parameters have to be set according to the mold cavity design and its size, material properties, and the required quality of the molded product. Parameters are a focus since an optimal processing parameter design can help solve most quality control problems. [Pg.75]

