Finite horizon

The MPC strategy can be summarized as follows. A dynamic process model (usually linear) is used to predict the expected behavior of the controlled output variable over a finite horizon into the future. On-line measurement of the output is used to make corrections to this predicted output trajectory, and hence provide a feedback correction. The moves of the manipulated variable required in the near future are computed to bring the predicted output as close to the desired target as possible without violating the constraints. The procedure is repeated each time a new output measurement becomes available. [Pg.279]
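As a concrete illustration of this receding-horizon loop, the sketch below applies it to a scalar linear model x[k+1] = a·x[k] + b·u[k] with a bounded input; the model coefficients, horizon length, cost weights, and noise level are illustrative assumptions, not taken from the source.

```python
# A minimal receding-horizon (MPC) sketch for a scalar linear model
# x[k+1] = a*x[k] + b*u[k].  All parameters below are illustrative.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5           # assumed process model
target = 1.0              # desired output value
N = 10                    # finite prediction horizon
u_min, u_max = -1.0, 1.0  # input constraint

def predicted_cost(u_seq, x0):
    """Sum of squared tracking errors plus a small input penalty over the horizon."""
    x, cost = x0, 0.0
    for u in u_seq:
        x = a * x + b * u                      # predict one step ahead
        cost += (x - target) ** 2 + 0.01 * u ** 2
    return cost

def mpc_step(x_measured):
    """Compute the move sequence over the horizon and return only its first element."""
    res = minimize(predicted_cost, np.zeros(N), args=(x_measured,),
                   bounds=[(u_min, u_max)] * N)
    return res.x[0]

# Closed-loop simulation: re-optimize each time a new measurement arrives.
x = 0.0
for k in range(20):
    u = mpc_step(x)                                      # first move only
    x = a * x + b * u + np.random.normal(scale=0.01)     # "plant" with small noise
    print(f"k={k:2d}  u={u:+.3f}  x={x:+.3f}")
```

At every step only the first computed move is applied; the optimization is then repeated from the newly measured state, which is what provides the feedback correction described above.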

For sophisticates, there can be multiple perception-perfect strategies when there is an infinite horizon. However, there is a unique perception-perfect strategy for sophisticates when there is a finite horizon (given the assumption of hitting when indifferent). Throughout this chapter, we focus on perception-perfect strategies for an infinite... [Pg.182]

For both TCs and naifs, the unique infinite-horizon perception-perfect strategy corresponds to the unique finite-horizon perception-perfect strategy as the horizon becomes long. [Pg.202]

This conclusion relies on restricting our attention to infinite-horizon perception-perfect strategies that correspond to a perception-perfect strategy for some long, finite horizon. [Pg.202]

Recall that we restrict attention to perception-perfect strategies corresponding to the unique perception-perfect strategy for a finite horizon as the horizon becomes long. For a finite horizon, we suppose the last period is a weekend, and of course people hit whether or not they are hooked on this weekend. [Pg.202]

A second proposition relies on the idea that the on-line optimization problem is unconstrained after a certain time step in the finite moving horizon. Where in the finite horizon that happens is determined by examining whether the state has entered a certain invariant set (Mayne, 1997). Once that happens, closed-form expressions can be used for the objective function from that time point to the end of the optimization horizon, p. The idea is particularly useful for MPC with nonlinear models, for which the computational load of the on-line optimization is substantial. A related idea was presented by Rawlings and Muske (1993), where the on-line optimization problem has a finite control horizon length, m, and infinite prediction horizon length, p, but the objective function is truncated, because the result of the optimization is known after a certain time point. [Pg.186]
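A hedged sketch of the truncation idea: for a stable linear model with the input held at zero beyond the control horizon, the remaining infinite-horizon quadratic cost can be summed in closed form as x'Px, with P obtained from a discrete Lyapunov equation. The matrices below are illustrative and are not the examples used by Mayne (1997) or Rawlings and Muske (1993).

```python
# Illustrative only: closed-form "tail" of an infinite prediction horizon.
# After the last control move the input is held at zero, so the remaining cost
# sum_{k>=m} x_k' Q x_k equals x_m' P x_m, where P solves A' P A - P + Q = 0.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.1],
              [0.0, 0.7]])          # assumed stable open-loop model
Q = np.eye(2)                       # state weighting in the objective

# solve_discrete_lyapunov solves A P A' - P + Q = 0, so pass A transposed.
P = solve_discrete_lyapunov(A.T, Q)

x_m = np.array([1.0, -0.5])         # predicted state at the end of the control horizon
tail_cost = x_m @ P @ x_m
print(f"closed-form tail cost: {tail_cost:.4f}")
```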

Persistent excitation constraints on inputs over finite horizon. (171)... [Pg.192]

The periodic Lorentz gas model with finite-horizon configuration is of particular interest since it resembles the situation of diffusive motions in multi-valley potential surfaces of molecular systems. The local dynamics inside the array of disks is strongly hyperbolic as mentioned above, which reminds us of the intrabasin mixing within a potential well. If the distance between disks is sufficiently small, the channel between arrays might play the role of the bottleneck on the configuration space. The interbasin diffusion process may be modeled by the large-scale diffusion represented by the Lorentz gas model. The similarity will be discussed more closely in the final section. [Pg.387]

Suppose that the process is considered over a finite horizon with time periods t = 1, . . . , T. Our... [Pg.2631]

From the memoryless properties of the feasible sets, transition probabilities, and rewards, it is intuitive that it should be sufficient to consider memoryless deterministic policies. This can be shown to be true for finite horizon problems of the form (34). [Pg.2641]

Solving a finite horizon dynamic program usually involves using (38) to compute the optimal value function V with the following backward induction algorithm. An optimal policy π* ∈ Π is then obtained using (40), or an ε-optimal policy π_ε ∈ Π is obtained using (41). [Pg.2641]
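A minimal sketch of the backward induction algorithm on a small, randomly generated finite-horizon Markov decision process; the states, actions, rewards, and transition probabilities are illustrative and do not correspond to equations (38)-(41) of the source.

```python
# Backward induction for a tiny finite-horizon Markov decision process.
# States, actions, rewards, and transition probabilities are illustrative.
import numpy as np

T = 5                              # number of decision periods
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
reward = rng.uniform(0, 1, size=(n_states, n_actions))                # r(s, a)
trans = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # p(s'|s,a)

V = np.zeros((T + 1, n_states))    # terminal value V[T] = 0
policy = np.zeros((T, n_states), dtype=int)

# Work backwards from the final period to the first.
for t in range(T - 1, -1, -1):
    # Q[s, a] = immediate reward plus expected value of the next period.
    Q = reward + trans @ V[t + 1]
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)   # memoryless deterministic optimal policy

print("optimal first-period values: ", np.round(V[0], 3))
print("optimal first-period actions:", policy[0])
```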

In this section we present dynamic programming models with an infinite time horizon. Although an infinite time horizon is a figment of the imagination, these models often are useful for decision problems with many decision points. Many infinite horizon models also have the desirable feature that there exist stationary deterministic optimal policies. Thus, optimal decisions depend only on the current state of the process and not on the sometimes artificial notion of time, as in finite horizon problems. This characteristic makes optimal policies easier to understand, compute, and implement, which is desirable in applications. [Pg.2643]

Again motivated by the stationary input data, it is intuitive, and can be shown to be true, that the value function V of a stationary policy π satisfies an equation similar to (37) for the finite horizon case, that is, ... [Pg.2643]

Unlike the finite horizon case, V is not computed directly using backward induction. An approach that is often used is to compute a sequence of approximating functions V_i, i = 0, 1, 2, . . . , such that V_i → V as i → ∞. [Pg.2644]
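A common instance of such an approximating sequence is value iteration for a discounted problem, sketched below with illustrative data; the discount factor, stopping tolerance, and problem data are assumptions.

```python
# Value iteration: compute a sequence V_0, V_1, ... converging to the optimal
# value function of a discounted infinite-horizon problem.  Data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 4, 2, 0.9
reward = rng.uniform(0, 1, size=(n_states, n_actions))
trans = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

V = np.zeros(n_states)
for i in range(1000):
    Q = reward + gamma * (trans @ V)           # one Bellman update
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:       # stop once the approximations settle
        break
    V = V_new

stationary_policy = Q.argmax(axis=1)           # depends only on the current state
print("approximate optimal values:", np.round(V, 4))
print("stationary optimal policy: ", stationary_policy)
```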

Recall that solving a dynamic program usually involves using (38) in the finite horizon case or (46) in the infinite horizon case to compute the optimal value function V, and an optimal policy π*. To accomplish this, the following major computational tasks are performed ... [Pg.2645]


Maintenance optimization aims at finding the grouping structure with a minimal total maintenance cost on a finite horizon. [Pg.542]

Grouping maintenance optimization: This step provides the optimized maintenance planning of the multi-component system on a finite horizon. [Pg.543]

First, at each decision time, a planning horizon is defined from the previous step. Suppose that all the first individual optimal maintenance dates are sorted. To consider all the components (Fig. 4), the finite planning horizon is defined so that it covers all of these dates. [Pg.543]

ABSTRACT This paper deals with a system which can be, for example, a large structure such as a bridge, a dike or a pipeline. Such a system should be preventively maintained in order to avoid failures which would have disastrous consequences. An imperfect condition-based maintenance policy is then proposed considering a finite horizon. As preventive maintenance actions are imperfect, their impact on the system is determined through a function called the "improvement function". The aim of this paper is to compare two types of improvement functions. An imperfect condition-based maintenance policy is then proposed and evaluated on a finite time span for each improvement function. [Pg.556]

If the deterioration level of the system, collected through perfect periodic inspections, exceeds a safety level, the system is said to have failed, and as the system is not repaired after a failure, it is unavailable until the end of the finite horizon. Let us call the remaining time to the end of the horizon the residual time. In order to avoid failures and to stay in an operating state, the system should be preventively maintained. The strategy of the condition-based maintenance policy considered in this paper is a control limit strategy, as done in Wang (2002), Nicolai et al. (2009) and Moustafa et al. (2004). [Pg.556]

The system is studied on the finite time span [0, T], where T is called the finite horizon of the system. [Pg.557]

As seen in Section 1, Kijima and Nakagawa (1991) proposed a cumulative damage shock model with imperfect periodic maintenance actions. Each action reduces the deterioration level by 100(1 − b)% of the total damage, where b ∈ [0, 1]. The same authors introduced in Kijima and Nakagawa (1992) an improvement factor b_i: the amount of damage after the preventive maintenance becomes b_i Y_i when it was Y_i before the maintenance action. Moreover, the imperfect maintenance action can impact the failure rate of the system; this method is called the "improvement factor method". This concept had been introduced by Malik (1979). Nicolai et al. (2009) consider different imperfect maintenance actions which have a random impact on the deterioration level of the system. A random improvement according to the residual time on a finite horizon is rarely considered; that is why such a random improvement is considered in this paper. [Pg.558]

To illustrate the maintenance policy, a numerical implementation is done using Monte Carlo simulations. The following results are obtained using 60,000 simulation runs for the maintenance cost with improvement function G1 and 100,000 runs for the maintenance cost with improvement function G2. Such numbers of runs are needed to reduce uncertainties and thus obtain exploitable results for each improvement function. Parameters that are supposed to be known are the deterioration parameters of the system (α, β), the safety threshold L, the finite horizon T and the cost of each maintenance action. The deterioration parameters are chosen in order to be coherent with the slow deterioration of a large structure such as the considered system. The deterioration parameters of the system are given in Table 1, and the parameters of the maintenance policy and the costs of the maintenance actions are given in Table 2. [Pg.560]
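For readers who want to reproduce the flavour of such a study, the sketch below runs a Monte Carlo evaluation of a generic imperfect condition-based maintenance policy on a finite horizon [0, T]: gamma-process deterioration, periodic perfect inspection, a control-limit preventive rule, and an imperfect repair that keeps a fraction b of the damage. All parameter values, costs, the number of runs, and the improvement rule are assumptions; they do not reproduce the paper's improvement functions G1 and G2 or its Tables 1-2.

```python
# Monte Carlo sketch of an imperfect condition-based maintenance policy on a
# finite horizon [0, T].  Deterioration, thresholds, costs and the improvement
# rule are illustrative assumptions, not the paper's values.
import numpy as np

rng = np.random.default_rng(42)
alpha, beta = 0.5, 1.0        # gamma-process deterioration parameters (assumed)
T, dt = 100.0, 1.0            # finite horizon and inspection period
L, M = 10.0, 7.0              # safety (failure) level and preventive threshold
c_insp, c_prev, c_fail = 1.0, 10.0, 100.0   # assumed unit costs
b = 0.5                       # imperfect repair keeps 100*b % of the damage

def one_history():
    """Simulate one maintenance history and return its total cost."""
    x, t, cost = 0.0, 0.0, 0.0
    while t < T:
        t += dt
        x += rng.gamma(alpha * dt, beta)   # stationary gamma increments
        cost += c_insp                     # periodic perfect inspection
        if x >= L:                         # failure: unavailable until the horizon
            return cost + c_fail
        if x >= M:                         # control-limit preventive rule
            x *= b                         # imperfect improvement of the state
            cost += c_prev
    return cost

n_runs = 10_000                            # the paper reports far more runs
costs = np.array([one_history() for _ in range(n_runs)])
print(f"mean cost over [0, T]: {costs.mean():.2f} "
      f"+/- {costs.std() / np.sqrt(n_runs):.2f}")
```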

Gallego, G., and G. van Ryzin. 1994. Optimal Dynamic Pricing of Inventories with Stochastic Demand over Finite Horizons. Management Science 40(8), 999-1020. [Pg.326]

In the second paper, Thomas considers a related problem but incorporates a general stochastic demand function and backlogging of excess demand. Specifically, Thomas considers a periodic review, finite horizon model with a fixed ordering cost and stochastic, price-dependent demand. The paper postulates a simple policy, referred to by Thomas as (s, S, p), which can be described as follows. The inventory strategy is an (s, S) policy: if the inventory level at the beginning of period t is below the reorder point, s_t, an order is placed to raise the inventory level to the order-up-to level, S_t. Otherwise, no order is placed. Price depends on the initial inventory level at the beginning of the period. Thomas provides a counterexample which shows that when price is restricted to a discrete set this policy may fail to be optimal. Thomas goes on to say ... [Pg.348]
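A small, hedged sketch of the single-period decision under such an (s, S, p) policy; the reorder point, order-up-to level, and the pricing rule below are invented for illustration and are not Thomas's specification.

```python
# One period of an (s, S, p) policy: order up to S_t when inventory is below
# s_t, otherwise do not order; the price is a function of the inventory level
# at the beginning of the period.  Numbers and pricing rule are illustrative.

def sSp_decision(inventory, s_t, S_t):
    """Return (order_quantity, price) for the period."""
    order_qty = S_t - inventory if inventory < s_t else 0
    # Assumed pricing rule: charge less when the period starts with more stock.
    price = max(5.0, 20.0 - 0.1 * inventory)
    return order_qty, price

# Example: reorder point 20, order-up-to level 100.
for inv in (5, 25, 80):
    q, p = sSp_decision(inv, s_t=20, S_t=100)
    print(f"start inventory {inv:3d}: order {q:3d}, price {p:.2f}")
```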

