Infinite horizon

The decisions of the core problem may interact with each other over an infinite horizon. The number of polymerization reactors is the long-term bottleneck of the production process, whereas the capacity of the safety ventilation system imposes only short-term constraints. The run-away phase (four hours) of a polymerization... [Pg.143]

Example 1 illustrates some basic intuitions of the stationary model. We now show that these intuitions hold more generally. To do so, we focus on the case where there is an infinite horizon (T = ∞). We do so for two reasons. First, it is expositionally easier to describe the results for an infinite horizon. Second, this assumption is closer in spirit to the rational choice models of addiction and yields more realistic results. [Pg.182]

In an infinite-horizon model with stationary instantaneous utilities, TCs and naifs both follow a stationary strategy, wherein behavior depends only on the current addiction level and not on the specific period t. In any period, both TCs and naifs choose today's behavior by determining their optimal lifetime path of behavior beginning from today. Given an infinite horizon, stationary instantaneous utilities, and our assumption that people hit when indifferent, for any t there is a unique optimal lifetime path of behavior, and this path depends on the current addiction level but not on the current period t. This logic is summarized in the following lemma ... [Pg.182]

For sophisticates, there can be multiple perception-perfect strategies when there is an infinite horizon. However, there is a unique perception-perfect strategy for sophisticates when there is a finite horizon (given the assumption of hitting when indifferent). Throughout this chapter, we focus on perception-perfect strategies for an infinite... [Pg.182]

Table 6.7 illustrates example 5. In example 5, consider an infinite horizon with δ = .99. To model the time variance of myopia in a simple and extreme way, suppose that βt = 1 for odd t and βt = 0 for even t. TCs always refrain, since any other course of action yields a negative average utility. Hitting always yields the utility profile 2, -1, -1, -1, ..., and the cost of refraining when hooked (-5) outweighs the benefit of hitting when unhooked (2), so any pattern of moderate consumption is also unattractive. [Pg.196]
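
The comparison in example 5 can be checked by direct computation. The sketch below assumes the standard β-δ evaluation U = u0 + β · Σt≥1 δ^t · ut, takes the payoffs quoted in the excerpt (2 for hitting when unhooked, -1 per subsequent hooked period, -5 for refraining when hooked), and assumes a payoff of 0 for refraining when unhooked; those modeling details are assumptions, not something spelled out in the excerpt itself.

```python
# A minimal check of example 5's long-run comparison, assuming the standard
# beta-delta evaluation U = u_0 + beta * sum_{t>=1} delta**t * u_t.
# Payoffs come from the excerpt (hit when unhooked: 2, each subsequent hooked
# period: -1, refrain when hooked: -5); a payoff of 0 for refraining when
# unhooked is assumed here.

delta = 0.99

def discounted(head, tail_per_period, beta=1.0, horizon=10_000):
    """Approximate U = head + beta * sum_{t=1..horizon} delta**t * tail_per_period."""
    tail = sum(delta**t * tail_per_period for t in range(1, horizon + 1))
    return head + beta * tail

# A time-consistent agent (beta = 1) compares "hit always" with "refrain always":
hit_always = discounted(2, -1)       # utility profile 2, -1, -1, -1, ...
refrain_always = discounted(0, 0)    # utility profile 0, 0, 0, ...

print(f"hit always:     {hit_always:8.2f}")      # about 2 - 0.99/0.01 = -97
print(f"refrain always: {refrain_always:8.2f}")  # 0, so TCs refrain
```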

For both TCs and naifs, the unique infinite-horizon perception-perfect strategy corresponds to the unique finite-horizon perception-perfect strategy as the horizon becomes long. [Pg.202]

This conclusion relies on our restricting our attention to infinite-horizon, perception-perfect strategies that correspond to a perception-perfect strategy for some long, finite horizon. [Pg.202]

Chen, H., and Allgower, F., A quasi-infinite horizon nonlinear predictive control scheme with guaranteed stability, Report AUT96-28, ETH, http://www.aut.ee.ethz.ch/cgi-bin/reports.cgi (1996). [Pg.200]

Keerthi, S. S., and Gilbert, E. G., Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations, JOTA, 57, 265-293 (1988). [Pg.201]

In this section we present dynamic programming models with an infinite time horizon. Although an infinite time horizon is a figment of the imagination, these models often are useful for decision problems with many decision points. Many infinite horizon models also have the desirable feature that there exist stationary deterministic optimal policies. Thus, optimal decisions depend only on the current state of the process and not on the sometimes artificial notion of time, as in finite horizon problems. This characteristic makes optimal policies easier to understand, compute, and implement, which is desirable in applications. [Pg.2643]

In this article we focus on dynamic programs with total discounted reward objectives. As illustrated in the example of Section 4.1.7, infinite horizon dynamic programs with other types of objectives, such as long-run average reward objectives, may exhibit undesirable behavior. A proper treatment of dynamic programs with these types of objectives requires more space than we have available here, and therefore we refer the interested reader to the references. Besides, in most practical applications, rewards and costs in the near future are valued more than rewards and costs in the more distant future, and hence total discounted reward objectives are preferred for applications. [Pg.2643]
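
Equation (42) itself is not reproduced in these excerpts. For orientation, a generic total discounted reward objective of the kind discussed here can be written as below; the symbols (policy π, discount factor α, reward r) are standard placeholders rather than the handbook's own notation.

```latex
% Generic total discounted reward objective (placeholder notation).
\[
  V^{\pi}(s) \;=\; \mathrm{E}^{\pi}\!\Bigl[\,\textstyle\sum_{t=0}^{\infty} \alpha^{t}\, r(s_t, a_t) \;\Bigm|\; s_0 = s\Bigr],
  \qquad 0 \le \alpha < 1,
  \qquad V^{*}(s) \;=\; \sup_{\pi} V^{\pi}(s).
\]
```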

From the stationary properties of the feasible sets, transition probabilities, and rewards, one would expect that it should be sufficient to consider stationary deterministic policies. This can be shown to be true for infinite horizon discounted problems of the form (42). [Pg.2643]

Solving an infinite horizon discounted dynamic program usually involves computing V. An optimal policy π* ∈ Π or an ε-optimal policy πε ∈ Π can then be obtained, as shown in this section. [Pg.2644]

Recall that solving a dynamic program usually involves using (38) in the finite horizon case or (46) in the infinite horizon case to compute the optimal value function V and an optimal policy π*. To accomplish this, the following major computational tasks are performed ... [Pg.2645]
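
Equations (38) and (46) are likewise not reproduced in these excerpts. As a stand-in, the sketch below runs plain value iteration on the standard Bellman recursion V(s) ← max_a [ r(s, a) + α Σ_s' p(s'|s, a) V(s') ] for a tiny made-up MDP; the states, actions, rewards, and transition probabilities are illustrative placeholders, not data from the text.

```python
# Value iteration sketch for an infinite-horizon discounted MDP.
# The update used here is the standard Bellman recursion; it stands in for the
# handbook's equation (46), which is not reproduced in these excerpts.
# All problem data below are illustrative placeholders.

import numpy as np

n_states, n_actions = 3, 2
alpha = 0.9                                        # discount factor (assumed)

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # r(s, a)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # p(s' | s, a)

V = np.zeros(n_states)
for _ in range(1_000):
    # Q[s, a] = r(s, a) + alpha * sum_{s'} p(s'|s, a) * V(s')
    Q = r + alpha * P @ V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:          # successive approximations converged
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)                          # stationary deterministic policy
print("approximate V*:", np.round(V, 4))
print("greedy policy: ", policy)
```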

Bertsekas, D. P, and Castanon, D. A. (1989), Adaptive Aggregation Methods for Infinite Horizon Dynamic Programming, IEEE Transactions on Automatic Control, Vol. AC-34, pp. 589-598. [Pg.2646]

White, D. J. (1980a), Finite-State Approximations for Denumerable-State Infinite-Horizon Discounted Markov Decision Processes: The Method of Successive Approximations, in Recent Developments in Markov Decision Processes, R. Hartley, L. C. Thomas, and D. J. White, Eds., Academic Press, New York, pp. 57-72. [Pg.2648]

Rolling horizon methods use properties of finite- and infinite-horizon methods. The maintenance decisions are based on a long-term plan but are updated according to short-term information. [Pg.542]

To illustrate, consider an infinite-horizon variant of the newsvendor game with lost sales in each period and inventory carry-over to the subsequent period (see Netessine et al. 2002 for a complete analysis). The solution to this problem in a non-competitive setting is an order-up-to policy. In addition to the unit revenue r and unit cost c, we introduce an inventory holding cost h incurred by each unit carried over to the next period and a discount factor. Also denote by x the inventory position at the beginning of the period and by y_i the order-up-to quantity. Then the infinite-horizon profit of each player is... [Pg.44]
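
The excerpt cuts off before the profit expression, so the sketch below is not Netessine et al.'s formula. It only simulates, for one player following a stationary order-up-to policy, the cash flows the text names: revenue r, ordering cost c, holding cost h on carried-over units, lost sales, and discounting. The demand distribution, all parameter values, and the symbol delta for the discount factor are assumptions made for illustration.

```python
# Simulated discounted profit of a single player following an order-up-to-y
# policy with lost sales and inventory carry-over. This is an illustrative
# sketch, not Netessine et al.'s infinite-horizon expression (which the
# excerpt truncates); demand and all parameter values are placeholders.

import random

def discounted_profit(y, r=10.0, c=6.0, h=1.0, delta=0.95,
                      demand=lambda: random.expovariate(1 / 8.0),
                      periods=2_000, seed=42):
    random.seed(seed)
    x, total = 0.0, 0.0                  # starting inventory, running discounted profit
    for t in range(periods):
        order = max(y - x, 0.0)          # raise the inventory position up to y
        d = demand()
        sales = min(y, d)                # unmet demand is lost
        leftover = max(y - d, 0.0)       # carried over to the next period
        profit = r * sales - c * order - h * leftover
        total += delta**t * profit
        x = leftover
    return total

print(round(discounted_profit(y=12.0), 2))
```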

This dynamic optimization problem can be solved using a standard dynamic programming recursion, and for brevity we will only consider the infinite horizon case T → ∞. Let V(x, r) denote the principal's infinite horizon optimal... [Pg.132]

A number of problems consider a single-period problem, similar to a newsvendor setting. Other possibilities include a multiple-period horizon or an infinite horizon. Occasionally papers assume a two-period problem; this is also noted when appropriate. [Pg.338]

Similarly, Federgruen and Heching [51] study a problem where price and inventory decisions are coordinated under linear production cost, stochastic demand and backlogging of excess demand. They assume revenue is concave (e.g., as under a linear demand curve with an additive stochastic component), and inventory holding cost is convex. All parameters are allowed to vary over time, and price changes may be bi-directional over a finite or infinite horizon. [Pg.345]

In [34], Chen and Simchi-Levi consider the infinite horizon model with stationary parameters and general demand processes. They show that in this case, the (s, S, p) policy identified by Thomas is optimal under both the average and discounted expected profit criteria. They further consider the problem with continuous inventory review in [35], and show that a stationary (s, S, p) policy is optimal for both the discounted and average profit models with general demand functions and general inter-arrival time distributions. [Pg.349]
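
As a rough illustration of how an (s, S, p) policy acts within one review period (order up to S whenever inventory is at or below the reorder point s, and post a price that depends on the resulting inventory level), here is a sketch; the reorder point, order-up-to level, and price schedule are invented placeholders, not the optimal values from Chen and Simchi-Levi's model.

```python
# One-period decision rule of an illustrative (s, S, p) policy.
# The numbers s = 20, S = 100 and the linear price schedule are placeholders.

def s_S_p_decision(inventory, s=20, S=100,
                   price_schedule=lambda level: 12.0 - 0.02 * level):
    """Return (order quantity, posted price) for the current inventory level."""
    if inventory <= s:                   # at or below the reorder point: order up to S
        order = S - inventory
        level = S
    else:                                # otherwise do not order
        order = 0
        level = inventory
    return order, price_schedule(level)  # price is keyed to the post-ordering level

print(s_S_p_decision(15))    # (85, 10.0)  -- order up to S, price off level S
print(s_S_p_decision(60))    # (0, 10.8)   -- no order, price off current inventory
```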

X. Chen and D. Simchi-Levi. Coordinating inventory control and pricing strategies with random demand and fixed ordering cost: The infinite horizon case. Working Paper, MIT, 2002b. [Pg.384]

U_r, U_w, U: discounted infinite-horizon expected profit functions of the retailer, the wholesaler, and the total supply chain, respectively. [Pg.615]

(C2) Infinite horizon with constant demand rate. [Pg.714]

The structure of this class of problems is symmetric to that of the problems reviewed in Section 4 in the following sense. They share most problem characteristics (e.g., infinite horizon, constant production and demand rates, batch production). The key difference is in their structure: the problems of Section 4 integrate production with outbound transportation (delivery of finished products to customers), while the problems in this section integrate inbound transportation (delivery of raw material from suppliers) and production. [Pg.720]

