Big Chemical Encyclopedia


Finite prediction horizon

Example 9—A finite prediction horizon may not be a good approximation of an infinite one for nonlinear processes. In the second example, consider the single-state, single-input system... [Pg.163]

Rawlings and Muske (1993) have shown that this idea can be extended to unstable processes. In addition to guaranteeing stability, their approach provides a computationally efficient method of on-line implementation. Their idea is to start with a finite control (decision) horizon but an infinite prediction (objective function) horizon, i.e., m < ∞ and p = ∞, and then use the principle of optimality and results from optimal control theory to replace the infinite prediction horizon objective with a finite prediction horizon objective plus a terminal penalty term of the form... [Pg.175]
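The terminal penalty idea can be illustrated numerically for a stable linear model x[k+1] = A x[k] + B u[k] with stage cost xᵀQx: if the input is held at zero beyond the control horizon, the infinite tail of the objective collapses to x[m]ᵀP x[m], where P solves the discrete Lyapunov equation P = Q + AᵀPA. The sketch below is a minimal illustration of that equivalence, not the full Rawlings and Muske algorithm; the matrices A and Q are assumed for demonstration.

```python
import numpy as np

def terminal_penalty(A, Q, tol=1e-12, max_iter=10000):
    """Solve P = Q + A.T @ P @ A by fixed-point iteration (requires A stable).

    P equals the infinite tail sum_{k>=0} x[k].T Q x[k] under u = 0, so an
    infinite prediction horizon reduces to a finite one plus the terminal
    term x[m].T P x[m]."""
    P = Q.copy()
    for _ in range(max_iter):
        P_next = Q + A.T @ P @ A
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    raise RuntimeError("iteration did not converge; is A stable?")

# Illustrative stable system (spectral radius < 1)
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = np.eye(2)
P = terminal_penalty(A, Q)
```

For a scalar system with a = 0.5 and q = 1, the closed form is p = q / (1 − a²) = 4/3, which the iteration reproduces.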

Stochastic objective function. The preceding MPC formulation assumes that future process outputs are deterministic over the finite optimization horizon. For a more realistic representation of future process outputs, one may consider a probabilistic (stochastic) prediction for y[k + i|k] and formulate an objective function that contains the expectation of appropriate functionals. For example, if y[k + i|k] is probabilistic, then the expectation of the functional in Eq. (4) could be used. This formulation, known as open-loop optimal feedback, does not take into account the fact that additional information would be available at future time points k + i and assumes that the system will essentially run in open-loop fashion over the optimization horizon. An alternative, producing a closed-loop optimal feedback law relies... [Pg.140]
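For a quadratic functional the expectation has a simple closed form: if the predicted output is Gaussian with mean ŷ and standard deviation σ, then E[(y − r)²] = (ŷ − r)² + σ², i.e. the deterministic tracking cost plus a variance penalty. A small sketch verifying this identity by Monte Carlo; the numbers and the Gaussian assumption are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

y_hat, sigma, r = 1.2, 0.5, 1.0   # predicted mean, std dev, setpoint

# Closed-form expected quadratic tracking cost:
# E[(y - r)^2] = (mean error)^2 + variance
expected_cost = (y_hat - r) ** 2 + sigma ** 2

# Monte Carlo estimate of the same expectation
samples = rng.normal(y_hat, sigma, size=200_000)
mc_cost = np.mean((samples - r) ** 2)
```

The variance term is what distinguishes the stochastic objective from simply plugging the mean prediction into the deterministic cost.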

A second proposition relies on the idea that the on-line optimization problem is unconstrained after a certain time step in the finite moving horizon. Where in the finite horizon that happens is determined by examining whether the state has entered a certain invariant set (Mayne, 1997). Once that happens, closed-form expressions can be used for the objective function from that time point to the end of the optimization horizon, p. The idea is particularly useful for MPC with nonlinear models, for which the computational load of the on-line optimization is substantial. A related idea was presented by Rawlings and Muske (1993), where the on-line optimization problem has a finite control horizon length, m, and infinite prediction horizon length, p, but the objective function is truncated, because the result of the optimization is known after a certain time point. [Pg.186]

The prediction horizon is discretized in cycles, where a cycle is the switching time t_shift multiplied by the total number of columns. Equation 9.1 constitutes a dynamic optimization problem with the transient behavior of the process as a constraint: f describes the continuous dynamics of the columns based on the general rate model (GRM) as well as the discrete switching from period to period. To solve the PDE models of the columns, a Galerkin method on finite elements is used for the liquid... [Pg.408]

The MPC strategy can be summarized as follows. A dynamic process model (usually linear) is used to predict the expected behavior of the controlled output variable over a finite horizon into the future. On-line measurement of the output is used to make corrections to this predicted output trajectory, and hence provide a feedback correction. The moves of the manipulated variable required in the near future are computed to bring the predicted output as close to the desired target as possible without violating the constraints. The procedure is repeated each time a new output measurement becomes available. [Pg.279]
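The cycle just described (predict over a finite horizon, optimize the input moves, apply only the first move, repeat when the next measurement arrives) can be sketched for an unconstrained scalar linear model. This is a minimal illustration of the receding-horizon mechanism; the model x[k+1] = a·x[k] + b·u[k], the weights, and the horizon length are assumptions for the example, not from the text.

```python
import numpy as np

def mpc_step(x0, a, b, p, r):
    """One MPC iteration: minimise sum_i x[i]^2 + r*u[i]^2 over horizon p
    for x[k+1] = a*x[k] + b*u[k]; return only the first input move."""
    # Prediction over i = 1..p: x = F*x0 + G*u
    F = np.array([a ** i for i in range(1, p + 1)])
    G = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1):
            G[i, j] = b * a ** (i - j)
    # Unconstrained least squares: u* = -(G'G + rI)^{-1} G'F x0
    u = -np.linalg.solve(G.T @ G + r * np.eye(p), G.T @ F) * x0
    return u[0]                 # receding horizon: apply the first move only

a, b, p, r = 0.9, 0.5, 10, 0.1
x = 5.0
for _ in range(20):             # closed loop: re-solve at every sample
    u0 = mpc_step(x, a, b, p, r)
    x = a * x + b * u0          # plant update (model assumed perfect here)
```

In practice the plant update would come from the real process measurement rather than the model, which is exactly where the feedback correction described above enters.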

