Big Chemical Encyclopedia


Optimal control, discrete-time

For unconstrained optimal control problems, however, methods exist whose computational effort grows only linearly with NE. To deal with more complex problems such as (16), it is therefore necessary to reconsider these methods and exploit the structure of the problem. Consider the simpler discrete-time optimal control problem given as follows ... [Pg.247]

Wright, S., Solution of discrete-time optimal control problems on parallel computers, preprint MCS-P89-0789, Argonne National Lab, Argonne, Illinois (1989). [Pg.256]

Recently there has been great interest in discrete-time optimal control based on a one-step ahead optimization criterion, also known as minimum variance control. A number of different approaches to minimum variance control have been developed in the last decade. MacGregor (51) and Palmor and Shinnar (52) have provided overviews of these minimum variance controller design techniques. [Pg.106]

It may be useful to point out a few topics that go beyond a first course in control. With certain processes, we cannot take data continuously, but rather only at selected, slow intervals (cf. titration in freshman chemistry). These are called sampled-data systems. With computers, the analysis evolves into a new area of its own: discrete-time or digital control systems. Here, differential equations and the Laplace transform no longer apply. The mathematical techniques for handling discrete-time systems are difference equations and the z-transform. Furthermore, there are multivariable and state-space control, to which we will give a brief introduction. Beyond the introductory level are optimal control, nonlinear control, adaptive control, stochastic control, and fuzzy logic control. Do not lose the perspective that control is an immense field. Classical control may appear insignificant, but we have to start somewhere, and onward we crawl. [Pg.8]
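As a small illustration of a difference-equation description, a first-order process sampled with a zero-order hold reduces to one line of algebra per time step; the gain and time constant below are illustrative, not from the text:

```python
import math

def simulate_first_order(K=2.0, tau=5.0, T=1.0, n_steps=20):
    """Step response of a first-order process sampled with period T.

    The continuous model K/(tau*s + 1) under a zero-order hold becomes
    the difference equation
        y[k+1] = a*y[k] + K*(1 - a)*u[k],   a = exp(-T/tau),
    which is exactly the kind of object the z-transform operates on.
    """
    a = math.exp(-T / tau)
    y, ys = 0.0, []
    for _ in range(n_steps):
        y = a * y + K * (1.0 - a) * 1.0   # unit step input u[k] = 1
        ys.append(y)
    return ys

ys = simulate_first_order()
```

In closed form y[k] = K(1 - a^k), so the sampled response approaches the steady-state gain K just as the continuous step response would.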

As shown in the above works, an optimal feedback/feedforward controller can be derived as an analytical function of the numerator and denominator polynomials of Gp(B) and Gn(B). No iteration or integration is required to generate the feedback law, as a consequence of the one-step-ahead criterion. Shinnar and Palmor (52) have also clearly demonstrated how dead time compensation (a discrete-time Smith predictor) arises naturally out of the minimum variance controller. These minimum variance techniques can also be extended to multivariable systems, as shown by MacGregor (51). [Pg.107]
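A toy instance of the minimum variance idea, assuming a hypothetical first-order model rather than the general Gp(B)/Gn(B) polynomials treated in the cited works; the coefficients are made up:

```python
def minimum_variance_u(y, a=0.8, b=0.5):
    """Minimum variance control for the one-step-ahead model
        y[k+1] = a*y[k] + b*u[k] + e[k+1].

    Choosing u[k] = -a*y[k]/b cancels the predictable part of the
    output, so the closed loop leaves only the unpredictable noise:
        y[k+1] = e[k+1].
    No iteration is needed; the law is an analytical function of the
    model coefficients, as in the text.
    """
    return -a * y / b

# Closed-loop check against a known disturbance sequence.
e = [0.3, -0.1, 0.2, 0.0, -0.4]
y, ys = 0.0, []
for ek in e:
    u = minimum_variance_u(y)
    y = 0.8 * y + 0.5 * u + ek
    ys.append(y)
```

The recorded outputs reproduce the disturbance sequence itself, i.e. the output variance equals the (irreducible) noise variance.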

DO profile during control. The sensitivity of the DO concentration to load changes is largest in subreactor 3, so DO(3) has been chosen for control. With a discrete-time controller supplied with a feed-forward signal from the influent flow rate, the total air supply to the four subreactors was controlled to keep DO(3) as constant as possible (Fig. 9). Even if the regulator is not optimally tuned, the improvement of the aerator performance, as seen in the DO profile, is significant. [Pg.369]

The kicked rotor is often described only at discrete times immediately after/before the periodic kicks. In our control problem, however, we must represent the dynamics driven by e(t) between those kicks. Then, we can apply the Zhu-Botina-Rabitz scheme as usual. According to Eq. (5), the optimal external field is given by... [Pg.442]

Prett and Garcia (1988) pose the validation problem as a discrete time linear optimal control problem under uncertainty. The uncertainty is defined by simple bounds, giving a polyhedral set of uncertain parameters V. For this problem, certain forms of uncertainty, e.g., in gains only, together with a quadratic performance index can be shown to satisfy the convexity requirements for the worst-case parameters to lie at vertices of V. This allows the algorithm of Grossmann et al., based on examination only of vertices of V, to be applied (see Section II.A.l). The mathematical formulation is... [Pg.323]

In the classical concept of predictive control, the trajectory (or set-point) of the process is assumed to be known. Control is implemented in a discrete-time fashion with a fixed sampling rate, i.e. measurements are assumed to be available at a certain frequency and the control inputs are changed accordingly. The inputs are piecewise constant over the sampling intervals. The prediction horizon Hp represents the number of time intervals over which the future process behavior will be predicted using the model and the assumed future inputs, and over which the performance of the process is optimized (Fig. 9.1). Only those inputs located in the control horizon Hc are considered as optimization variables, whereas the remaining variables between Hc+1 and Hp are set equal to the input variables in the time interval Hc. The result of the optimization step is a sequence of input vectors. The first input vector is applied immediately to the plant. The control and the prediction horizon are then shifted one interval forward in time and the optimization run is repeated, taking into account new data on the process state and, possibly, newly estimated process parameters. The full process state is usually not measurable, so state estimation techniques must be used. Most model-predictive controllers employed in industry use input-output models of the process rather than a state-based approach. [Pg.402]
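A minimal numerical sketch of this receding-horizon scheme, for a hypothetical scalar plant x[k+1] = a*x[k] + b*u[k] with a control horizon of one (the single free input is held constant over the rest of the prediction horizon, as described above); all numbers are illustrative:

```python
def mpc_step(x0, r, a=0.9, b=0.5, Hp=10, lam=0.1):
    """One receding-horizon step for x[k+1] = a*x[k] + b*u[k].

    Control horizon Hc = 1: a single free input u, held constant over
    the prediction horizon Hp, minimizing
        J(u) = sum_{j=1..Hp} (x_j - r)**2 + lam * u**2.
    With x_j = a**j * x0 + c_j * u, J is quadratic in u and has the
    closed-form minimizer computed below.
    """
    num = den = 0.0
    c, d = 0.0, x0
    for _ in range(Hp):
        d = a * d          # free response  a**j * x0
        c = a * c + b      # forced response coefficient b*(1 + a + ... + a**(j-1))
        num += c * (r - d)
        den += c * c
    return num / (den + lam)

# Receding horizon: apply only the first input, then re-optimize.
x, traj = 0.0, []
for _ in range(30):
    u = mpc_step(x, r=1.0)
    x = 0.9 * x + 0.5 * u
    traj.append(x)
```

Re-solving at every sample is what lets the controller absorb new measurements (and, in practice, re-estimated parameters) before committing to the next input.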

It should be noted that mixed-integer programming (MIP) problems, and their special cases, may be regarded as steady-state models. Hence, one important extension is the case of dynamic models, which in the case of discrete time models gives rise to multiperiod optimization problems, while for the case of continuous time it gives rise to optimal control problems that contain differential-algebraic equation (DAE) models. [Pg.300]

Bertsekas, D. P., and Shreve, S. E. (1978), Stochastic Optimal Control: The Discrete-Time Case, Academic Press.

Chow, C. S., and Tsitsiklis, J. N. (1991), An Optimal One-Way Multigrid Algorithm for Discrete-Time Stochastic Control, IEEE Transactions on Automatic Control, Vol. AC-36, pp. 898-914. [Pg.2646]

In this equation, k is the number of discrete time sections in the step-by-step solution (t = kΔt). Meteorological data are available hourly, so a time step of 1 h can be applied in the calculation. For stability it is advantageous if the system of equations is well conditioned; accordingly, heat capacities of insignificant influence are best neglected. As a result of the calculation, the time dependence of the medium outlet temperature, as well as the time variation of the approximate temperature distribution of the collector, is obtained. The results can be used for instantaneous and steady-state efficiency diagrams, for collector design and optimization, and for the solution of process control problems [33,34,43,50,56,98]. It is of course desirable to verify the calculation results with experimental measurements where possible. [Pg.323]
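A sketch of such a step-by-step solution for a single-node (lumped-capacitance) collector model with a 1 h time step; the model and every parameter value are illustrative assumptions, not taken from the cited work:

```python
def collector_outlet_temps(T_amb, G, T0=20.0, dt=3600.0,
                           C=5.0e5, A=2.0, eta0=0.75, UL=6.0):
    """Hour-by-hour explicit march of a one-node collector model:

        C * dT/dt = eta0*A*G(t) - UL*A*(T - T_amb(t))

    T_amb: hourly ambient temperatures [deg C], G: hourly irradiance
    [W/m^2].  Dropping small heat capacities (lumping everything into
    the single C) is what keeps the stepping well conditioned; here
    the explicit scheme is stable because dt*UL*A/C << 1.
    """
    T, out = T0, []
    for Ta, g in zip(T_amb, G):
        T += dt / C * (eta0 * A * g - UL * A * (T - Ta))
        out.append(T)
    return out

temps = collector_outlet_temps(T_amb=[20.0] * 8,
                               G=[0, 200, 400, 600, 600, 400, 200, 0])
```

The resulting temperature history is the raw material for the efficiency diagrams and control studies mentioned above.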

Chapter 9 develops necessary conditions for optimality of discrete time problems. In implementing optimal control problems using digital computers, the control is usually kept constant over a period of time. Problems that were originally described by differential equations defined over a continuous time domain are transformed to problems that are described by a set of discrete algebraic equations. Necessary conditions for optimality are derived for this class of problems and are applied to several process control situations. [Pg.2]
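For illustration, in the linear-quadratic case the discrete-time necessary conditions reduce to a backward Riccati recursion; the scalar sketch below (with made-up system numbers) also shows the computational effort growing only linearly with the horizon length:

```python
def lqr_gains(a, b, q, r, N):
    """Finite-horizon discrete LQR for the scalar system
        x[k+1] = a*x[k] + b*u[k]
    with cost  sum_{k=0}^{N-1} (q*x_k**2 + r*u_k**2) + q*x_N**2.

    The necessary conditions yield u_k = -K_k * x_k, where K_k comes
    from a backward Riccati recursion: one pass, O(N) effort.
    """
    p = q                      # terminal cost weight P_N = q
    gains = []
    for _ in range(N):
        k = a * b * p / (r + b * b * p)    # feedback gain K_k
        p = q + a * a * p - a * b * p * k  # Riccati update for P_k
        gains.append(k)
    gains.reverse()            # gains[0] is K_0
    return gains

gains = lqr_gains(a=1.2, b=1.0, q=1.0, r=1.0, N=50)
```

For a long horizon the gains converge to a stationary value that stabilizes even this open-loop-unstable plant (|a| > 1 but |a - b*K| < 1).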

Minimum-time polynomial manipulator trajectory algorithms based on constrained objective optimization with goal programming are developed in the fourth paper. Bezier curves are used to fit the control points along the manipulator path. An efficient discrete time algorithmic search method is proposed in the fifth paper to find a locally minimum time trajectory for the motion of coordinated robots. [Pg.480]

MPC (Model Predictive Control) uses predictive control methods with a dynamic model (linear or non-linear) to compute a control signal trajectory that minimizes a quality indicator over a given time horizon. In each step of the algorithm, the control vector at consecutive moments x(k), x(k+1), ..., x(k + Ns - 1) is computed (k - current time, Ns - control horizon). In each discrete time step k, the first control vector x(k) from the optimized control trajectory is applied; then, when the prediction and control time horizons are moved one step forward, the whole procedure is repeated. [Pg.58]

However, the accurate treatment of state variable inequality constraints presents a few problems. Parameter optimization problems obtained by discretizing the control profile generally allow inequality constraints to be active only at a finite set of points, simply because a finite set of decisions cannot influence an infinite number of values (i.e., cannot keep the state fixed at every point in a finite time period). [Pg.238]

It is evident that with the discrete cycles of the non-flame atomizers, several reactions (desolvation, decomposition, etc.) which occur simultaneously, albeit over rather broad zones, in a flame (due to droplet size distributions) are separated in time using a non-flame atomizer. This allows time and temperature optimization for each step and presumably improves atomization efficiencies. Unfortunately, the chemical composition and crystal size at the end of the dry cycle is matrix-determined, and only minimal control of the composition at the end of the ash cycle is possible, depending on the relative volatilities and reactivities of the matrix and analyte. These poorly controlled parameters can and do lead to changes in atomization efficiencies and hence to matrix interferences. [Pg.102]

Vol. 529 W. Krabs, S. W. Pickl, Analysis, Controllability and Optimization of Time-Discrete Systems and Dynamical Games. XII, 187 pages. 2003. [Pg.244]

In the sequential strategy, a control (manipulated) variable profile is discretized over a time interval. The discretized control profile can be represented as a piecewise constant, a piecewise linear, or a piecewise polynomial function. The parameters in such functions and the lengths of the time subintervals become decision variables in the optimization problem. This strategy is also referred to as control vector parameterization (CVP). [Pg.105]
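A minimal CVP sketch, assuming a made-up scalar dynamic system, piecewise-constant control, and fixed subinterval lengths; a coarse grid search stands in for the NLP solver used in practice:

```python
import itertools

def simulate(u_segments, x0=1.0, t_seg=1.0, dt=0.01):
    """Integrate dx/dt = -x + u with explicit Euler; u is piecewise
    constant, one value per segment of length t_seg.  Returns the
    discretized cost  integral of (x**2 + 0.1*u**2) dt.
    """
    x, cost = x0, 0.0
    for u in u_segments:
        for _ in range(int(round(t_seg / dt))):
            cost += (x * x + 0.1 * u * u) * dt
            x += (-x + u) * dt
    return cost

# CVP: the segment values are the decision variables.  Here: two
# segments, each value searched over a coarse grid in [-2, 2].
grid = [i / 10.0 for i in range(-20, 21)]
best = min(itertools.product(grid, grid), key=simulate)
```

The first segment takes a negative value to drive the state toward zero quickly, while the second stays near zero once the state has settled, mirroring the shape an optimal continuous profile would have.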

These control schemes are very effective for a certain class of processes, but they are not versatile and are ineffective for, for example, the multilevel-multilevel transitions we shall consider in this chapter. There exist several mathematical studies that investigate the controllability of general quantum mechanical systems [11,12]. The controllability theorem says that quantum mechanical systems with a discrete spectrum under certain conditions have complete controllability, in the sense that an initial state can be guided to a chosen target state after some time. Although the theorem guarantees the existence of optimal fields, it does not tell us how to construct such a field for a given problem. [Pg.436]

After phase inversion, the dispersed phase is the rubber phase. This is the first time discrete rubber particles are present in the reaction mixture. The particle size in the final product is an important parameter for optimizing the physical properties. To be successful in the manufacture of mass ABS, it is necessary to understand which parameters can be used to control the final rubber particle size. [Pg.308]

Here, the last two equations define the flow rate and the mean residence time, respectively. This formulation is an optimal control problem, where the control profiles are q a), f(a), and r(a). The solution to this problem will give us a lower bound on the objective function for the nonisothermal reactor network along with the optimal temperature and mixing profiles. Similar to the isothermal formulation (P3), we discretize (P6) based on orthogonal collocation (Cuthrell and Biegler, 1987) on finite elements, as the differential equations can no longer be solved offline. This type of discretization leads to a reactor network more... [Pg.267]

A linear model predictive control law is retained in both cases because of its attractive characteristics, such as its multivariable aspects and the possibility of taking into account hard constraints on inputs and input variations, as well as soft constraints on outputs (constraint violation is authorized during a short period of time). To practice model predictive control, a linear model of the process must first be obtained off-line before applying the optimization strategy to calculate the manipulated inputs on-line. The model of the SMB is described in [8] with its parameters. It is based on the partial differential equation for the mass balance and a mass transfer equation between the liquid and the solid phase, plus an equilibrium law. The PDE is discretized as an equivalent system of mixers in series. A typical SMB is divided into four zones; each zone includes two columns and each column is composed of twenty mixers. A nonlinear Langmuir isotherm describes the binary equilibrium for each component between the adsorbent and the liquid phase. [Pg.332]

After the 20% increase in production is achieved, the optimizer is asked to ensure a new steady state is reached and production is kept constant for a while. The two control variables are discretized into 25 time intervals. The size of the first 20 intervals is free, while for the last 5 it is fixed. [Pg.341]

This work presents the on-line level control of a batch reactor. The on-line strategy is required to accommodate the reaction rate disturbances which arise due to catalyst dosing uncertainties (catalyst mass and feeding time). It is concluded that the implemented shrinking horizon on-line optimization strategy is able to calculate the optimal temperature profile without causing swelling or sub-optimal operation. Additionally, it is concluded that, for this process, a closed-loop formulation of the model predictive controller is needed where an output feedback controller ensures the level is controlled within the discretization intervals. [Pg.530]


See other pages where Optimal control, discrete-time is mentioned: [Pg.77]    [Pg.178]    [Pg.75]    [Pg.178]    [Pg.528]    [Pg.529]    [Pg.308]    [Pg.205]    [Pg.75]    [Pg.271]    [Pg.322]    [Pg.371]    [Pg.27]    [Pg.13]    [Pg.651]    [Pg.44]    [Pg.109]    [Pg.248]    [Pg.2]    [Pg.102]    [Pg.166]    [Pg.2760]    [Pg.65]   
See also in sourсe #XX -- [ Pg.101 ]







Control discrete

Control optimization

Control optimizing

Control optimizing controllers

Discrete Optimization

Discrete-time

Time control

© 2024 chempedia.info