Big Chemical Encyclopedia


The VALUE Function

Occasionally, number values will be entered in cells as text. When these text values are used in formulas, Excel normally evaluates them as numbers and calculates the desired result. In the rare instance where Excel does not perform this conversion, VALUE(text) can be used to convert a text argument to a number. [Pg.81]
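The conversion can be sketched in Python. This is only an illustrative analogue of what VALUE(text) does, not Excel's exact parsing rules (locale handling, dates, and currency formats are omitted):

```python
def value(text):
    """Convert a text argument to a number, loosely mimicking Excel's VALUE().

    Illustrative sketch only: strips whitespace and thousands separators,
    then tries int, then float.  A non-numeric argument raises ValueError,
    analogous to VALUE() returning a #VALUE! error.
    """
    s = str(text).strip().replace(",", "")
    try:
        return int(s)
    except ValueError:
        return float(s)

print(value("42"))       # 42
print(value(" 3.14 "))   # 3.14
print(value("1,000"))    # 1000
```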

Let the set of service parameters be denoted by s, and the value function by W(s); it has the dimension of monetary units per unit time. The revenue in an accounting period, T, is then given by [Pg.235]
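The revenue formula itself is elided here. Since W(s) has the dimension of monetary units per unit time, one dimensionally consistent reading is that revenue is the time integral of W over the accounting period; the sketch below rests on that assumption and is not a quote of the omitted equation:

```python
# Assumption (not from the source): revenue R = integral_0^T W(s(t)) dt,
# consistent with W having units of monetary units per unit time.
def revenue(W, s_of_t, T, n=1000):
    """Midpoint-rule approximation of R = integral_0^T W(s(t)) dt."""
    dt = T / n
    return sum(W(s_of_t((i + 0.5) * dt)) * dt for i in range(n))

# With constant service parameters, R reduces to W(s) * T:
R = revenue(lambda s: 5.0, lambda t: "steady-state", T=10.0)
print(R)  # ~50.0
```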

Of course, revenue is restricted to that part of the life cycle in which the plant is operating. [Pg.235]


The value functions appearing in equation 3 may be expanded in Taylor series about x and, because the concentration changes effected by a single stage are relatively small, only the first nonvanishing term is retained. When the value is replaced by its material balance equivalent, i.e., equation 4 ... [Pg.77]

The Value Function. The value function itself is defined, as has been indicated above, by the second-order differential equation ... [Pg.77]

In the design of cascades, a tabulation of the value function V(x) and of its derivative V'(x) is useful. The solution of the above differential equation contains two arbitrary constants. A simple form of this solution results when the constants are evaluated from the boundary conditions V(0.5) = V'(0.5) = 0. The expression for the value function is then ... [Pg.77]
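The closed form alluded to here is the standard (Cohen) value function V(x) = (2x − 1) ln[x/(1 − x)], which satisfies both boundary conditions. A quick numerical check:

```python
import math

def value_function(x):
    """Cohen's value function V(x) = (2x - 1) * ln(x / (1 - x)) for the
    mole fraction x of the desired isotope, 0 < x < 1.  It solves the
    second-order equation referred to in the text with the boundary
    conditions V(0.5) = V'(0.5) = 0."""
    return (2.0 * x - 1.0) * math.log(x / (1.0 - x))

print(value_function(0.5))                               # 0.0 at x = 0.5
print(value_function(0.9))                               # ~1.758
print(abs(value_function(0.9) - value_function(0.1)))    # symmetric in x, 1 - x
```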

However, recalling the definition of the value function, equation 11, and assuming that the value of a is the same for all stages, the integral may be written in the form ... [Pg.81]

When the second-stage decisions are real-valued variables, the value function Q(x) is piecewise-linear and convex in x. However, when some of the second-stage variables are integer-valued, the convexity property is lost. The value function Q(x) is in general non-convex and non-differentiable in x. The latter property prohibits the use of gradient-based search methods for solving (MASTER). [Pg.201]
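A toy illustration (not from the source) of how integrality destroys convexity: take the hypothetical second stage min{y : y ≥ x}. With real-valued y the optimum is y = x, so Q(x) = x is linear and convex; restricting y to the integers gives Q(x) = ⌈x⌉, a step function that is neither convex nor differentiable.

```python
import math

def Q_continuous(x):
    """Second-stage value min{y : y >= x} with real-valued y."""
    return x                # piecewise-linear and convex

def Q_integer(x):
    """Same recourse problem with y restricted to the integers."""
    return math.ceil(x)     # a non-convex, non-differentiable step function

# Convexity requires Q(mid) <= (Q(a) + Q(b)) / 2; the integer version fails:
a, b = 0.0, 1.0
mid = 0.5 * (a + b)
print(Q_integer(mid), 0.5 * (Q_integer(a) + Q_integer(b)))  # 1 vs 0.5
```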

This is also demonstrated numerically in the example presented in the preceding section. An indifference surface (or curve) is defined as a locus of different conditions in the objective space, any two of which cannot be distinguished by the preference criterion of the decision maker. An indifference curve or surface can be expressed in terms of the value function, v(f), as... [Pg.320]

Note that the indifference surfaces are obtained without knowing the functional form of the value function, v(f). They are generally determined by directly comparing many sampled points in the objective space based on the decision maker's preference. [Pg.320]
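For intuition only, with a made-up value function v(f1, f2) = f1·f2: points of equal v lie on one indifference curve, and pairwise comparisons alone (which of two points is preferred) suffice to group sampled points without ever writing v down.

```python
# Hypothetical value function for illustration; the method in the text
# needs only pairwise preference comparisons, never v itself.
def v(f1, f2):
    return f1 * f2

points = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (1.0, 6.0), (3.0, 2.0)]

# Group sampled points into indifference classes by their value:
classes = {}
for p in points:
    classes.setdefault(v(*p), []).append(p)
print(classes)  # two indifference classes: value 4.0 and value 6.0
```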

The value functions and criteria weightings used in MCDA can be solicited from different stakeholders in an open and transparent way. The conversion of a performance measure to a monetary value in CBA can be rather opaque. [Pg.22]

The separation potential may be thought of as related to the value of a mixture of isotopes, and has, in fact, been called the value function by Cohen [C3]. [Pg.677]

Fig. 5. Effect of transition-state symmetry on the isotope effect.

Wetland values are dependent on social perceptions. The valued functions have historically included water storage, flood control, erosion control, sediment control, nutrient removal, protection of general water quality, habitat for crops and fisheries, recreation, and wildlife... [Pg.63]

Note the similarity between the above formulation for SEU and the earlier equation for expected value. EV and SEU are equivalent if the value function equals the utility function. Methods for eliciting value and utility functions differ in nature (Section 3): preferences elicited for uncertain outcomes measure utility; preferences elicited for certain outcomes measure value. It accordingly has often been assumed that value functions differ from utility functions, but there are reasons to treat value and utility functions as equivalent (Winterfeldt and Edwards 1986). The latter authors claim that the differences between elicited value and utility functions are small, that severe limitations constrain those relationships, and that only a few possibilities exist, one of which is that they are the same. [Pg.2182]
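The equivalence claim is easy to check numerically. With a made-up lottery and the same function used both as value and as utility, EV and SEU coincide term by term:

```python
# Hypothetical lottery (outcome, probability) pairs; not from the source.
lottery = [(100.0, 0.3), (50.0, 0.5), (0.0, 0.2)]

def value(x):            # value function v(x), chosen arbitrarily
    return x ** 0.5

utility = value          # SEU == EV exactly when u(x) == v(x)

ev  = sum(p * value(x)   for x, p in lottery)
seu = sum(p * utility(x) for x, p in lottery)
print(ev, seu)           # identical by construction
```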

Assume that the value function is additive as follows ... [Pg.2606]

This simply means that we need only know the value of the value function at the point (yi, y ) to know Wj. If this is difficult to accomplish, we can alternatively identify q — pairs of indifferent... [Pg.2606]

Using the additive form of the value function, we obtain... [Pg.2606]
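Under the additive form, a sketch with hypothetical weights and single-attribute value functions shows how an indifferent pair of alternatives pins down a linear relation between the weights:

```python
# Hypothetical additive value function v(y1, y2) = w1*v1(y1) + w2*v2(y2);
# the weights and component functions below are made up for illustration.
w1, w2 = 0.6, 0.4

def v1(y):               # single-attribute value functions scaled to [0, 1]
    return y / 10.0

def v2(y):
    return y / 10.0

def v(y1, y2):
    return w1 * v1(y1) + w2 * v2(y2)

# Two alternatives judged indifferent by the decision maker have equal v,
# which yields one equation in the unknown weights:
print(v(5.0, 5.0), v(3.0, 8.0))  # both ~0.5, so (5,5) ~ (3,8)
```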

Method 2. Saaty's Eigenweight Vector Method (Saaty 1980). Suppose that the value function is of the form... [Pg.2606]

Thus, the value function is determined if the weights are known. Without loss of generality, we can... [Pg.2607]
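Saaty's method recovers the weights as the normalized principal eigenvector of a pairwise-comparison matrix A, where entry A[i][j] records the decision maker's judged ratio w_i/w_j. A minimal power-iteration sketch with a made-up (and perfectly consistent) matrix:

```python
# Made-up pairwise-comparison matrix; A[i][j] estimates w_i / w_j.
A = [
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
]

def eigenweights(A, iters=100):
    """Power iteration for the principal eigenvector of A, normalized to sum 1."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

print(eigenweights(A))  # ~[4/7, 2/7, 1/7] for this consistent matrix
```

For a real (inconsistent) judgment matrix the same iteration still converges to the principal eigenvector; Saaty additionally checks a consistency ratio before accepting the weights.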

As mentioned in the introduction, preferences may be represented by one-dimensional comparison, which we discussed in the previous two sections, or in terms of multidimensional comparison, which we will discuss in this and the following section. Note that in one-dimensional comparison, we implicitly or explicitly assume that = 0 and that no ambiguity exists in preference. Once the value function or proper regret function is determined, MCDM becomes a one-dimensional comparison or a mathematical programming problem. In this section we shall tackle the problems with 0. [Pg.2614]

Dynamic programs and multistage stochastic programs deal with essentially the same types of problems, namely dynamic and stochastic decision problems. The major distinction between dynamic programming and stochastic programming is in the structures that are used to formulate the models. For example, in DP, the so-called state of the process, as well as the value function that depends on the state, are two structures that play a central role, but these concepts are usually not used in stochastic programs. Section 4.1 provides an introduction to concepts that are important in dynamic programming. [Pg.2636]

Example 4. This example shows that even if the dynamic program has stationary input data, it does not always hold that for any policy π and any ε > 0, there exists a stationary deterministic policy π′ that has value function V^π′ within ε of the value function V^π of policy π. [Pg.2640]

It is easy to see that the value function V of a memoryless policy π satisfies the following inductive equation ... [Pg.2641]

Recall that π(s, t) denotes the decision under policy π if the process is in state s at time t. If π is a randomized policy, then the understanding is that the expected value is computed with the decision distributed according to the probability distribution π(s, t). Also, even history-dependent policies satisfy a similar inductive equation, except that the value function depends on the history up to time t. Similarly, the optimal value function V* satisfies the following inductive optimality equation ... [Pg.2641]

The value function V of a policy π can be calculated with a similar algorithm, except that (37) is used instead of (39), that is, the maximization on the right-hand side of (39) is replaced by the decision under policy π, and step 3 is omitted. [Pg.2642]
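The two finite-horizon recursions can be sketched on a made-up two-state example (all numbers hypothetical): backward induction with a max over decisions computes the optimal value function, and replacing the max by the policy's decision evaluates a fixed policy, exactly as described above.

```python
# Made-up two-state, two-decision finite-horizon example.
S, A, T = [0, 1], [0, 1], 3

def r(s, a):                    # one-step reward
    return 1.0 if s == a else 0.0

def p(s2, s, a):                # transition probability P(s2 | s, a)
    return 0.7 if s2 == a else 0.3

# Optimal value function by backward induction (max over decisions):
#   V*(s, t) = max_a [ r(s, a) + sum_s2 p(s2 | s, a) * V*(s2, t + 1) ]
V = {(s, T): 0.0 for s in S}
for t in range(T - 1, -1, -1):
    for s in S:
        V[s, t] = max(r(s, a) + sum(p(s2, s, a) * V[s2, t + 1] for s2 in S)
                      for a in A)

# Evaluating a fixed policy pi: same recursion, max replaced by pi(s, t).
def pi(s, t):                   # a simple (and here optimal) policy
    return s

Vpi = {(s, T): 0.0 for s in S}
for t in range(T - 1, -1, -1):
    for s in S:
        a = pi(s, t)
        Vpi[s, t] = r(s, a) + sum(p(s2, s, a) * Vpi[s2, t + 1] for s2 in S)

print(V[0, 0], Vpi[0, 0])       # policy value never exceeds the optimal value
```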

Again motivated by the stationary input data, it is intuitive, and can be shown to be true, that the value function V of a stationary policy π satisfies an equation similar to (37) for the finite horizon case, that is. [Pg.2643]

For many interesting applications the state space S is too big for any of the algorithms discussed so far to be used. This is usually due to the "curse of dimensionality": the phenomenon that the number of states grows exponentially in the number of dimensions of the state space. When the state space is too large, not only is the computational effort required by these algorithms excessive, but storing the value function and policy values for each state is impossible with current technology. [Pg.2645]

The objectivity is high in terms of comparisons between alternatives and between projects. The value-function curves have been standardized, and the rationale for the shapes of these curves is public knowledge. [Pg.34]

The original MCDA approach is focused on the point estimate of the BR score. Conditional on the value functions and weights selected, an interval estimate can be constructed for S and ΔS to account for the sampling variation of the data. The correlation matrix needs to be estimated, or a resampling-based method can be employed, to construct the interval estimate. [Pg.278]

Consider a set of K alternatives under evaluation. Denote the vector of endpoint values as V, which could be the original endpoint measurements transformed via the value functions. Assume that V is a random vector with density function f_V(v) in the evaluation space. Let W be a vector of weights for the criteria. Instead of soliciting fixed values for W, assume that W is a vector of random variables with a joint distribution f_W(w) in the feasible weight space ... [Pg.280]

Given the value functions, the weighted BR score S = S(V, W) is a random variable; based on the observed values v for V, S = S(v, W). The benefit-risk comparison between treatment options will be made based on S. When there are no data available on W, several statistical indices have been proposed to facilitate such comparison (Tervonen et al. 2011): the rank acceptability index, central weight vectors, and confidence factors. [Pg.281]
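When no weight data are available, such indices can be estimated by sampling W from the feasible weight space. A rough sketch of a first-rank-acceptability-style index; the alternative names, endpoint scores, and uniform-simplex weight distribution are all made up for illustration:

```python
import random

random.seed(0)

# Hypothetical value-transformed endpoint scores for two alternatives.
scores = {"A": [0.8, 0.3], "B": [0.5, 0.7]}

def sample_simplex(k):
    """Uniform sample from the k-dimensional weight simplex (sums to 1)."""
    cuts = sorted(random.random() for _ in range(k - 1))
    return [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]

# Fraction of sampled weight vectors for which each alternative attains the
# best weighted score S = w . v  (a first-rank acceptability estimate).
N = 20000
wins = {name: 0 for name in scores}
for _ in range(N):
    w = sample_simplex(2)
    best = max(scores, key=lambda n: sum(wi * vi for wi, vi in zip(w, scores[n])))
    wins[best] += 1

print({n: wins[n] / N for n in wins})  # here A wins iff w1 > 4/7, i.e. ~42.9%
```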

