
Discrimination Among Rival Models

Rival models M1, M2, ..., Mj are to be compared on a common data matrix Y of weighted observations from n independent multiresponse events. Prior ... [Pg.156]

Case 1: Full Data, Known Covariance Matrix [Pg.157]

For a given experimental design and expectation model, let p(Y | Mj, S) denote the probability density of prospective data in Y-space, predicted via Eq. (7.1-2). According to Bayes' theorem, the posterior probability of Model Mj conditional on the given arrays Y and S is then [Pg.157]

If Model Mj contains an unknown parameter vector θj, then integration of Eq. (7.5-9) over the permitted range of θj gives the total posterior probability for this model: [Pg.157]

This integration includes only the parameters in the estimated set θje, since any other parameters are held constant. Equation (7.5-10) can be expressed more naturally as [Pg.157]
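The exact forms of Eqs. (7.5-9) and (7.5-10) are not reproduced in this excerpt; as a hedged illustration, the two statements above correspond to the generic Bayesian relations

\[
p(M_j \mid Y, S) \;\propto\; p(M_j)\, p(Y \mid M_j, S),
\qquad
p(M_j \mid Y, S) \;\propto\; p(M_j) \int p(Y \mid \theta_{je}, M_j, S)\, p(\theta_{je} \mid M_j)\, d\theta_{je},
\]

where p(Mj) is the prior probability of Model Mj and the integral runs only over the permitted range of the estimated parameter set θje, with all other parameters held constant.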


Buzzi-Ferraris et al. (1983, 1984) proposed the use of a more powerful statistic for the discrimination among rival models; however, it is computationally more intensive, as it requires the calculation of the sensitivity coefficients at each grid point of the operability region. Thus, the simple divergence criterion of Hunter and Reimer (1965) appears to be the most attractive. [Pg.193]
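A minimal sketch of a divergence-type design criterion in this spirit is given below. The model functions, parameter values, and candidate grid are hypothetical placeholders; the exact statistics of Hunter and Reimer (1965) or Buzzi-Ferraris et al. (1983, 1984) are not reproduced here.

```python
import numpy as np

def divergence_criterion(candidate_inputs, models, thetas, weights=None):
    """Rank candidate experimental conditions by the pairwise squared
    divergence between the predictions of rival models.

    candidate_inputs : (n_grid, n_inputs) array of operability-region grid points
    models           : list of callables f(x, theta) -> predicted response(s)
    thetas           : list of current parameter estimates, one per model
    weights          : optional weight per response (defaults to 1)
    """
    scores = []
    for x in candidate_inputs:
        preds = [np.atleast_1d(f(x, th)) for f, th in zip(models, thetas)]
        w = np.ones_like(preds[0]) if weights is None else np.asarray(weights)
        d = 0.0
        # Sum of weighted squared differences over all model pairs.
        for i in range(len(preds)):
            for j in range(i + 1, len(preds)):
                d += np.sum(w * (preds[i] - preds[j]) ** 2)
        scores.append(d)
    return np.asarray(scores)

# Example with two hypothetical rate expressions on a 1-D grid of conditions.
model_a = lambda x, th: th[0] * x / (1.0 + th[1] * x)
model_b = lambda x, th: th[0] * x
grid = np.linspace(0.1, 10.0, 50).reshape(-1, 1)
scores = divergence_criterion(grid, [model_a, model_b], [(2.0, 0.5), (1.2,)])
best = grid[np.argmax(scores)]   # condition where the rival models disagree most
```

Under this simple criterion, the candidate with the largest score is the condition at which the rival predictions diverge most, and hence the most informative next experiment.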

Buzzi-Ferraris, G., and P. Forzatti, "A New Sequential Experimental Design Procedure for Discriminating Among Rival Models", Chem. Eng. Sci., 38, 225-232 (1983). [Pg.392]

The logic and rigor of statistics also reside at the heart of modern science. Statistical evaluation is the first step in considering whether a body of measurements allows one to discriminate among rival models for a biochemical process. Beyond their value in experimental sciences, probabilistic considerations also help us to formulate theories about the behavior of molecules and particles and to conceptualize stochastic and chaotic events. [Pg.648]

The discrimination among rival models has to take into account the fact that, in general, as the number of parameters of a model increases, the quality of fit improves (the sum S(a) of squared deviations decreases), but, at the same time, the size of the confidence regions for the parameters increases. Thus, in most cases, there is a compromise between the wish to lower the residuals and the wish to narrow the confidence intervals for the parameters. The simplest way to achieve the discrimination of models consists of comparing their respective experimental error variances. Other methods and examples have been given in refs. 25, 32 and 195-207. [Pg.316]
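A minimal sketch of the variance-comparison idea, assuming two rival models have already been fitted by least squares to the same data (the observations and fits below are synthetic placeholders), might look like this:

```python
import numpy as np
from scipy import stats

def residual_variance(y_obs, y_pred, n_params):
    """Unbiased estimate of the error variance, s^2 = RSS / (n - p)."""
    resid = np.asarray(y_obs, float) - np.asarray(y_pred, float)
    dof = resid.size - n_params
    return float(resid @ resid) / dof, dof

def f_test_variances(s2_a, dof_a, s2_b, dof_b):
    """Two-sided F-test for equality of two residual variances."""
    f = s2_a / s2_b
    p = 2.0 * min(stats.f.cdf(f, dof_a, dof_b), stats.f.sf(f, dof_a, dof_b))
    return f, p

# Hypothetical example: two fitted models on the same 20 observations.
rng = np.random.default_rng(0)
x = np.linspace(0.5, 5.0, 20)
y = 2.0 * x / (1.0 + 0.4 * x) + rng.normal(0.0, 0.05, x.size)
fit_a = 2.0 * x / (1.0 + 0.4 * x)   # two-parameter model evaluated at its fit
fit_b = 0.9 * x                     # rival one-parameter model
s2_a, dof_a = residual_variance(y, fit_a, n_params=2)
s2_b, dof_b = residual_variance(y, fit_b, n_params=1)
print(f_test_variances(s2_b, dof_b, s2_a, dof_a))
```

A residual variance for one model that is significantly larger than that of its rival (or than a replicate-based estimate of the experimental error variance) is evidence against that model.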

Note, however, that in the case of fundamental models there is not always a need to discriminate among rival models since, often, only a single model has been built up. Furthermore, the best criterion of the quality of a model is the consistency of its fundamental parameter estimates with values obtained by several other methods under a large range of experimental conditions. Let us not be misled about the principal enemy: the systematic errors, both in the experiments and in the reaction and reactor models. [Pg.316]

For small data sets, the choice of the prior remains important. Investigators A and B could then reach agreement more readily by using a noninformative prior. Such priors are discussed in the following sections. Another type of prior, appropriate for discrimination among rival models, will appear in Chapters 6 and 7. [Pg.84]

The sequential procedures discussed above are appropriate when the investigator wants either to estimate parameters or to discriminate among rival models. However, often an investigator wants to do both. A design procedure for this purpose should emphasize discrimination until one model is gaining favor and should emphasize parameter estimation after that point. The criterion should guard against premature choice of a model, with consequent waste of effort on parameter estimation for an inferior model. [Pg.118]
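A hedged sketch of such a switch-over is shown below: a single design score blends a discrimination term with a parameter-precision term, and the weight is driven by how strongly one model is currently favoured. The weighting scheme is purely illustrative and is not the criterion of any particular author.

```python
import numpy as np

def combined_design_score(disc_score, info_score, model_probs):
    """Blend discrimination and estimation objectives for one candidate run.

    disc_score  : divergence-type discrimination score for the candidate
    info_score  : information-type (e.g. D-optimality) score for the
                  currently favoured model at the candidate
    model_probs : current posterior probabilities of the rival models
    """
    p = np.asarray(model_probs, float)
    p = p / p.sum()
    # 'Undecidedness' is high when the probabilities are spread out and low
    # when one model dominates; it weights discrimination vs. estimation.
    undecided = 1.0 - p.max()
    w = undecided / (1.0 - 1.0 / p.size)   # scaled to lie in [0, 1]
    return w * disc_score + (1.0 - w) * info_score

# Early on (probabilities spread out) discrimination dominates ...
print(combined_design_score(3.0, 1.0, [0.4, 0.35, 0.25]))
# ... later (one model clearly favoured) parameter estimation dominates.
print(combined_design_score(3.0, 1.0, [0.9, 0.07, 0.03]))
```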

A computational algorithm for sequential discrimination among rival models is presented in Figure CS3.1. [Pg.876]
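Figure CS3.1 itself is not reproduced here; a minimal sketch of such a sequential loop, with hypothetical helper callables for design, experimentation, refitting, and model-probability updating, could be organized as follows.

```python
def sequential_discrimination(models, data, design_next, run_experiment,
                              refit, update_probs, prob_threshold=0.95,
                              max_runs=20):
    """Generic sequential model-discrimination loop (illustrative only).

    models         : list of rival model objects or functions
    data           : list holding the preliminary observations
    design_next    : callable(models, params, data) -> next experimental condition
    run_experiment : callable(condition) -> new observation(s)
    refit          : callable(model, data) -> updated parameter estimates
    update_probs   : callable(models, params, data) -> posterior model probabilities
    """
    params = [refit(m, data) for m in models]
    probs = update_probs(models, params, data)
    for _ in range(max_runs):
        if max(probs) >= prob_threshold:
            break                                      # one model clearly favoured
        condition = design_next(models, params, data)  # e.g. maximum-divergence design
        data = data + [run_experiment(condition)]      # append the new run
        params = [refit(m, data) for m in models]      # re-estimate all rival models
        probs = update_probs(models, params, data)     # update model probabilities
    return probs, params, data
```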

It is unusual for only one model to be compatible with experimental observations. Often data are not sufficiently extensive to discriminate among rival models and new experiments must be designed to answer the outstanding questions. The statistical, graph theoretical, and sensitivity analysis methods... can identify the areas for further investigation that are likely to produce significant new results. [Pg.104]

What are the best conditions to perform the next experiment so that with the additional information we will maximize our ability to discriminate among the rival models ... [Pg.191]

Based on the material presented above, the implementation steps to design the next experiment for the discrimination among r rival models are ... [Pg.195]

The use of time stages of varying lengths in iterative dynamic programming (Luus, 2000) may indeed provide a computationally acceptable solution. Such an approach may prove to be feasible particularly for model discrimination purposes. In model discrimination we seek the optimal inputs, u(t), that will maximize the overall divergence among the r rival models, given by Equation 12.23. [Pg.201]
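Equation 12.23 is not reproduced in this excerpt; as a hedged illustration, an overall divergence of this type is commonly written as a weighted sum, over the sampling times and over all pairs of rival models, of squared differences between the predicted outputs,

\[
D\big(u(t)\big) \;=\; \sum_{k=1}^{N} \sum_{i=1}^{r-1} \sum_{j=i+1}^{r}
\big[\hat{y}^{(i)}(t_k, u) - \hat{y}^{(j)}(t_k, u)\big]^{\mathsf T} Q\,
\big[\hat{y}^{(i)}(t_k, u) - \hat{y}^{(j)}(t_k, u)\big],
\]

where \(\hat{y}^{(i)}(t_k, u)\) is the output predicted by model i at sampling time t_k under input u(t) and Q is a weighting matrix; the exact form used in the text may differ.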

We have several options for designing the next experiment so that it has the maximum discriminating power among all four of the above rival models ... [Pg.214]

The ability of the sequential design to discriminate among the rival models should be examined as a function of the standard error in the measurements (σe). For this reason, artificial data were generated by integrating the governing ODEs for Model 1 with "true" parameter values k1=0.31, k2=0.18, k3=0.55 and k4=0.03 and by adding noise to the noise-free data. The error terms are taken from independent normal distributions with zero mean and constant standard deviation (σe). [Pg.215]
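A minimal sketch of this data-generation step is shown below. The governing ODEs of Model 1 are not reproduced in this excerpt, so the right-hand side is a hypothetical two-state stand-in; only the "true" parameter values and the additive zero-mean Gaussian noise follow the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# "True" parameter values quoted in the text.
k1, k2, k3, k4 = 0.31, 0.18, 0.55, 0.03

def model1_rhs(t, y, k1, k2, k3, k4):
    """Hypothetical stand-in for the governing ODEs of Model 1."""
    x1, x2 = y
    dx1 = -k1 * x1 - k3 * x1 * x2
    dx2 = k2 * x1 - k4 * x2
    return [dx1, dx2]

t_eval = np.linspace(0.0, 10.0, 21)                 # sampling times (assumed)
sol = solve_ivp(model1_rhs, (0.0, 10.0), [1.0, 0.0],
                t_eval=t_eval, args=(k1, k2, k3, k4), rtol=1e-8)

sigma_e = 0.02                                      # assumed measurement standard error
rng = np.random.default_rng(1)
y_noisy = sol.y + rng.normal(0.0, sigma_e, sol.y.shape)   # add N(0, sigma_e^2) noise
```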

Recently certain diagnostic parameters have been exploited to allow a discrimination among several rival models. These diagnostic parameters can be grouped into two broad classes—those that are inherently present in the model, and those that are introduced solely for the purpose of model discrimination. [Pg.142]

If this linear analysis is to be used, the experimental conversion-space time data should first be taken at several pressure levels. Using the C1 analysis alone, then, the plots of C1 versus total pressure should be made for a preliminary indication of model adequacy. If several models are found to provide near-linear C1 plots, the complete linear analysis using the C2 plots should assist in the discrimination among the remaining rival models. If a model is adequate, both the C1 and C2 points should be correlatable by a straight line with a common intercept, as demanded by Eqs. (85) and (86). If only one model is found to be adequate following the initial C1 analysis, the complete C1 and C2 analysis should still be carried out on this model to verify its ability to fit the high-conversion data. [Pg.146]
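Since Eqs. (85) and (86) and the definitions of C1 and C2 are not reproduced here, the sketch below only illustrates the final consistency check with synthetic numbers: fitting the C1 and C2 points jointly with two straight lines constrained to a common intercept.

```python
import numpy as np

# Hypothetical C1 and C2 values at several total pressures (illustrative only).
P = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
C1 = np.array([0.52, 0.61, 0.79, 0.98, 1.18])
C2 = np.array([0.50, 0.72, 1.15, 1.58, 2.02])

# Joint least-squares fit of C1 = a + b1*P and C2 = a + b2*P with a common intercept a.
A = np.block([
    [np.ones_like(P)[:, None], P[:, None], np.zeros_like(P)[:, None]],
    [np.ones_like(P)[:, None], np.zeros_like(P)[:, None], P[:, None]],
])
y = np.concatenate([C1, C2])
(a, b1, b2), *_ = np.linalg.lstsq(A, y, rcond=None)
```

Comparing the residuals of this constrained joint fit with those of two unconstrained fits indicates whether the common-intercept requirement is consistent with the data.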

The kinetics of methane combustion over a perovskite catalyst (La0.9Ce0.1CoO3) has been studied in Micro-Berty and fixed bed reactors. Discrimination among twenty-three rival kinetic models of the Eley-Rideal, LHHW and Mars-van Krevelen (MVK) types has been achieved by means of (a) the initial rate method as well as (b) integral kinetic data analysis. Two MVK-type models could be retained as a result of the two studies, with a steady-state assumption implying the equality of the rates of three elementary steps. [Pg.599]

The block diagram in Figure 3 shows how the discrimination among the 23 rival kinetic models was carried out on the basis of results from either reactor. Fixed bed reactor data were treated at three hierarchical levels of increasing computational complexity. Kinetic parameters were evaluated and tested at each level; models that could not meet the statistical criteria were rejected, and the calculated parameters were passed to the next level as initial values. [Pg.603]

Van Parijs et al. [1986] applied sequential discrimination in their experimental study of benzothiophene hydrogenolysis. The final rate equations were already given in Example 2.6.4.A. The efficiency of the sequential discrimination was remarkable. At 533 K, a total of seven experiments (four preliminary, three designed) were sufficient to reject 14 out of 16 rivals. One of the two remaining models was also consistently the best at 513, 553, and 573 K, after four, five, and nine designed experiments, respectively. A total of 41 experiments sufficed to select the best among the postulated rate equations. The location of the settings of these experiments is shown in Fig. 2.7.2.2.A-1 of Example 2.7.2.2.A. [Pg.137]

