Big Chemical Encyclopedia


Effect sparsity

Effect Sparsity: Only a small fraction of all possible effects is likely to be active. Thus, the prior probability that an effect is active (δj = 1) will be set to less than 0.5. [Pg.244]

Effect sparsity and effect hierarchy are represented through the choice of hyperparameters π, π0, π1, π2. Typically π0 < π1 < π2 < π < 0.5. Section 2.3 provides details on the selection of these hyperparameters. [Pg.245]
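The ordering of priors above can be illustrated with a minimal sketch. The probability values and the `prior_active` helper are hypothetical, chosen only to satisfy the two stated constraints: all priors below 0.5 (effect sparsity) and lower-order effects more likely to be active than higher-order ones (effect hierarchy).

```python
from itertools import combinations

# Hypothetical prior activity probabilities, indexed by interaction order.
# Values are illustrative; the constraints from the text are that every
# prior is below 0.5 and that higher-order effects get smaller priors.
PRIORS = {1: 0.25, 2: 0.10, 3: 0.05}

def prior_active(effect):
    """Prior probability that an effect (a tuple of factor names) is active."""
    return PRIORS.get(len(effect), 0.01)

factors = ["A", "B", "C"]
effects = [c for r in (1, 2, 3) for c in combinations(factors, r)]

# Effect sparsity: every prior is below 0.5
assert all(prior_active(e) < 0.5 for e in effects)
# Effect hierarchy: main effects are more likely active than interactions
assert prior_active(("A",)) > prior_active(("A", "B")) > prior_active(("A", "B", "C"))
```
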

For a given experiment with k factors conducted using a 2^(k-q) fractional factorial design, there are h = 2^(k-q) - 1 independent factorial effect estimators β̂i, i = 1, ..., h, where β̂i ~ N(βi, σ²). Suppose that effect sparsity holds, so that most of the effects βi are zero (or negligible), with only a few of the effects being large in magnitude. [Pg.272]

In summary, Lenth's method is robust in the sense that it maintains good power as long as effect sparsity holds, and it is adaptive to the degree of effect sparsity, using a pseudo standard error that attempts to involve only the estimates of negligible effects. [Pg.275]
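The pseudo standard error (PSE) mentioned above can be sketched as follows, using the standard two-step form of Lenth's (1989) estimator: an initial robust scale estimate from all effects, then a refined median taken only over effects that look negligible. The illustrative data are invented.

```python
import statistics

def lenth_pse(effects):
    """Lenth's pseudo standard error: an initial robust scale estimate s0
    is refined by re-taking the median over the effects that appear
    negligible (|effect| < 2.5 * s0), so large active effects drop out."""
    abs_e = [abs(e) for e in effects]
    s0 = 1.5 * statistics.median(abs_e)          # initial scale estimate
    trimmed = [a for a in abs_e if a < 2.5 * s0]  # keep only "negligible" effects
    return 1.5 * statistics.median(trimmed)

# Invented example: many small (negligible) effects plus two large active ones
effects = [0.1, -0.2, 0.15, 0.05, -0.1, 8.0, -6.5]
pse = lenth_pse(effects)
```

Because the trimming step discards the two large effects, the PSE here reflects only the negligible ones, which is what makes the method adaptive to the degree of sparsity.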

We now apply Voss's method to the data from the plasma etching experiment to construct individual 95% confidence intervals for each effect using u = 8. Pooling 8 sums of squares into the denominator (3) provides a reasonably robust procedure without undue loss of power. One could, of course, pool more than 8 sums of squares into the denominator, since one would usually anticipate greater effect sparsity (more than 8 negligible effects) in a screening experiment; still, 8 provides a reasonable trade-off between power and robustness. Also, for simultaneous confidence intervals, Dean and Voss (1999) provided critical values for this choice because, in a single-replicate 2^4 factorial experiment, an inactive factor is involved in 8 inactive main effects and interactions, which can then be used to provide the denominator (3) in Voss's method. [Pg.277]
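The pooling step can be sketched as below. This is a simplified illustration, not the exact denominator (3) from the source: I assume the denominator is formed as the mean of the u smallest effect sums of squares, and the function name and data are hypothetical. The critical values needed to turn this into confidence intervals come from special tables (e.g., Dean and Voss, 1999) and are left as an input.

```python
def pooled_denominator(effect_estimates, u=8):
    """Pool the u smallest effect sums of squares (assumed form: their mean).
    Under effect sparsity, the u smallest squared estimates mostly belong to
    negligible effects, so the pool behaves like an error estimate."""
    ss = sorted(b ** 2 for b in effect_estimates)
    return sum(ss[:u]) / u

def interval_half_width(effect_estimates, critical_value, u=8):
    """Half-width of a confidence interval: tabulated critical value times
    the square root of the pooled denominator (critical_value must come
    from the appropriate table; it is not computed here)."""
    return critical_value * pooled_denominator(effect_estimates, u) ** 0.5

# Invented data: 8 negligible effects and one large active effect
estimates = [0.1] * 8 + [5.0]
denom = pooled_denominator(estimates)  # uses only the 8 smallest squares
```
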

This procedure typically stops within a few steps due to effect sparsity. Voss and Wang (2005) proved, for the above test of H0,i: βi = 0 (i = 1, ..., h), that the probability of making any false inferences (Type I errors) is at most α for any values of the parameters β1, ..., βh. [Pg.282]

Wang, W. and Voss, D. T. (2001b). On the analysis of nonorthogonal saturated designs using effect sparsity. Statistics and Applications 3, 177-192. [Pg.286]

In Section 1.4, 92 factors are studied, but we find only 11 of these to be important. This is in line with the principle of effect sparsity; see Chapters 1 and 8. The simplest definition of importance occurs when an experiment has a single response (output from computer code) and the factors have only additive effects; that is, the input-output relation is modelled by a first-order polynomial in regression terminology, or a main-effects-only model in analysis-of-variance terminology (also see Chapter 8). The most important factor is then the one that has the largest absolute value for its first-order effect or main effect; the least important factor is the one whose effect is closest to zero. [Pg.288]
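The importance ordering described above reduces to sorting factors by the absolute value of their main effect. A minimal sketch, with invented effect values:

```python
def rank_factors(main_effects):
    """Rank factors from most to least important by |main effect|,
    under an additive (main-effects-only) model."""
    return sorted(main_effects, key=lambda f: abs(main_effects[f]), reverse=True)

# Invented main effects for four factors
effects = {"A": -3.2, "B": 0.4, "C": 1.7, "D": -0.1}
ranking = rank_factors(effects)  # "A" is most important, "D" least
```

Note that the sign of the effect is irrelevant to importance; a large negative effect ranks above a small positive one.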

From Table 2.20 (27), different numbers of effects are considered important for response S when evaluating the graphical and statistical methods. From the plots, only one clearly deviating effect was observed. It is nevertheless clear that factor A, responsible for that effect, should be examined further. For response Rs, the same number of effects is usually considered important, except for the approach based on dummies, where the critical effect seems somewhat underestimated. For response t, all approaches lead to the same number of significant effects, except the algorithm of Dong, which overestimates the critical effect, probably because the effect sparsity principle is violated (about half of the effects are important). [Pg.59]

We will continue with the discussion of some practical aspects of the SM approach, but first we need to introduce the concepts of effect sparsity, active variables, and variable transformation. [Pg.257]

In considering optimization of Φ, it is not necessary to include all model parameters (e.g., not all rate coefficients). Under the conditions of an individual experiment or a set of experiments, the model responses that correspond to the experimental observations do not depend sensitively on all the parameters. In fact, it has been noted by many that usually only a small fraction of the parameters, called active variables, show a significant effect on measured responses. This phenomenon has been termed effect sparsity [27,32]. In the rest of the chapter we designate active variables by x, to distinguish them from the complete set of model parameters θ; note that x is a subset of θ. [Pg.258]

Computing sensitivities of a given response with respect to all θ and ranking them by absolute value produces results illustrated in Fig. 5. In this particular example the main effects are concentrated in just the first few (a dozen or so) parameters, consistent with effect sparsity, and it is these parameters that can be selected as active variables for model optimization. In practice, however, one should rather consider a similar ranking of the parameter impact factors, the product of a parameter's sensitivity and its uncertainty. [Pg.259]
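The impact-factor ranking can be sketched as below. The function name and the numbers are hypothetical; the point, per the text, is that a highly sensitive but well-known parameter can rank below a less sensitive but very uncertain one.

```python
def impact_ranking(sensitivities, uncertainties):
    """Rank parameters by impact factor = |sensitivity| * uncertainty,
    rather than by sensitivity alone."""
    impact = {p: abs(s) * uncertainties[p] for p, s in sensitivities.items()}
    ranked = sorted(impact, key=impact.get, reverse=True)
    return ranked, impact

# Invented example: k1 is most sensitive but precisely known;
# k3 is less sensitive but highly uncertain.
sens = {"k1": 10.0, "k2": 0.5, "k3": 2.0}
unc = {"k1": 0.01, "k2": 1.0, "k3": 0.5}
ranked, impact = impact_ranking(sens, unc)  # k3 ranks first despite k1's sensitivity
```
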










© 2024 chempedia.info