Temperature data were converted to a dimensionless form using the following equation: θ = (T − T₀)/(T₁ − T₀) [Pg.507]

T₀ and T₁ are reference temperatures, chosen so that θ takes values between 0 and 1 (T₀ = 30°C and T₁ = 1000°C in this work) [Pg.507]

An optimization method was applied to obtain the optimal set of parameters of the probability distribution function. The optimization criterion was the minimization of the residual sum of squares (RSS) defined by Equation 12.146 [Pg.507]
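As a minimal sketch of this kind of RSS minimization, the fragment below fits a two-parameter curve to distillation-style data by grid search. The data points and the logistic model form are illustrative assumptions made here to keep the sketch dependency-free; the chapter itself fits a beta distribution function.

```python
# Minimal sketch: fit a two-parameter curve to distillation-style data by
# minimizing the residual sum of squares (RSS) over a parameter grid.
# Data and the logistic model form are illustrative assumptions.
import math

# hypothetical (dimensionless temperature, fraction distilled) pairs
data = [(0.05, 0.04), (0.15, 0.18), (0.30, 0.45),
        (0.45, 0.70), (0.60, 0.88), (0.80, 0.97)]

def model(theta, m, s):
    """Logistic stand-in for the cumulative distillation curve."""
    return 1.0 / (1.0 + math.exp(-(theta - m) / s))

def rss(m, s):
    """Residual sum of squares: the optimization criterion."""
    return sum((y - model(x, m, s)) ** 2 for x, y in data)

# coarse grid search for the minimum-RSS parameter pair
best = min(((rss(m / 100, s / 100), m / 100, s / 100)
            for m in range(10, 60) for s in range(5, 30)),
           key=lambda t: t[0])
print(best)  # (minimum RSS, m*, s*)
```

In practice a gradient-based optimizer would replace the grid search, but the objective being minimized is the same RSS.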

FIGURE 12.12 Experimental (o) and predicted (-) distillation values with the beta function (hydrocracked Maya crude oil). [Pg.508]

Before proceeding, let us consider some simple examples of parameter estimation problems. We are studying the kinetics of the chemical reaction A + B → C, which, if assumed elementary, has the rate law... [Pg.373]
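A hedged sketch of such a problem: for the elementary reaction A + B → C with rate r = k·C_A·C_B in a batch reactor, and assuming equal initial concentrations (C_A0 = C_B0), the integrated form is 1/C_A = 1/C_A0 + k·t, so k follows from a linear fit. The "measurements" below are synthetic, for illustration only.

```python
# Sketch: estimating the rate constant k for A + B -> C with r = k*CA*CB,
# assuming a batch reactor and CA0 = CB0, so 1/CA = 1/CA0 + k*t.
CA0, k_true = 1.0, 0.5              # mol/L and L/(mol*s) -- assumed values
times = [0.0, 1.0, 2.0, 4.0, 8.0]
CA = [CA0 / (1.0 + k_true * CA0 * t) for t in times]   # noise-free "data"

# least-squares slope of (1/CA - 1/CA0) versus t, intercept fixed at 0
num = sum(t * (1.0 / c - 1.0 / CA0) for t, c in zip(times, CA))
den = sum(t * t for t in times)
k_hat = num / den
print(k_hat)  # recovers k_true = 0.5 exactly for noise-free data
```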

Furthermore, the implementation of the Gauss-Newton method also incorporated the use of the pseudo-inverse method to avoid instabilities caused by the ill-conditioning of matrix A as discussed in Chapter 8. In reservoir simulation this may occur for example when a parameter zone is outside the drainage radius of a well and is therefore not observable from the well data. Most importantly, in order to realize substantial savings in computation time, the sequential computation of the sensitivity coefficients discussed in detail in Section 10.3.1 was implemented. Finally, the numerical integration procedure that was used was a fully implicit one to ensure stability and convergence over a wide range of parameter estimates. [Pg.372]
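The Gauss-Newton iteration described above can be sketched for a toy one-parameter model y = exp(-k·t). With a single parameter the matrix A = JᵀJ collapses to a scalar, so the pseudo-inverse reduces to a guarded division; a real reservoir problem needs a true (e.g. SVD-based) pseudo-inverse as the text describes. The data here are synthetic.

```python
# Sketch of a Gauss-Newton iteration for the one-parameter model y = exp(-k*t).
# A = J^T J is a scalar here; the guard on A stands in for the pseudo-inverse
# treatment of ill-conditioning discussed in the text.
import math

times = [0.5, 1.0, 2.0, 3.0]
k_true = 0.8
y_obs = [math.exp(-k_true * t) for t in times]   # synthetic observations

k = 0.2                                          # initial guess
for _ in range(20):
    resid = [y - math.exp(-k * t) for t, y in zip(times, y_obs)]
    sens = [-t * math.exp(-k * t) for t in times]          # dy/dk sensitivities
    A = sum(s * s for s in sens)                           # J^T J (scalar)
    if A < 1e-12:                    # ill-conditioned: skip the update
        break
    k += sum(s * r for s, r in zip(sens, resid)) / A       # Gauss-Newton step
print(round(k, 6))  # converges to the true value 0.8
```

The sequential computation of sensitivity coefficients mentioned in the text corresponds to updating `sens` alongside the state integration rather than by separate runs.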

This chapter contains examples of optimization techniques applied to the design and operation of two of the most common staged and continuous processes, namely, distillation and extraction. We also illustrate the use of parameter estimation for fitting a function to thermodynamic data. [Pg.443]

The amount of uncertainty in parameter estimates obtained for the hyperbolic models is particularly large. It has been pointed out, for example, that parameter estimates obtained for hyperbolic models are usually highly correlated and of low precision (B16). Also, the number of parameters contained in such models can be too great for the range of the experimental data (W3). Quantitative measures of the precision of parameter estimates are thus particularly important for the hyperbolic models (C1). [Pg.125]

Full second-order polynomial models used with central composite experimental designs are very powerful tools for approximating the true behavior of many systems. However, the interpretation of the large number of estimated parameters in multifactor systems is not always straightforward. As an example, the parameter estimates of the coded and uncoded models in the previous section are quite different, even though the two models describe essentially the same response surface (see Equations 12.63 and 12.64). It is difficult to see this similarity by simple inspection of the two equations. Fortunately, canonical analysis is a mathematical technique that can be applied to full second-order polynomial models to reveal the essential features of the response surface and allow a simpler understanding of the factor effects and their interactions. [Pg.254]
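Canonical analysis as described above amounts to an eigendecomposition of the matrix of second-order coefficients. For the two-factor model y = b₀ + b₁x₁ + b₂x₂ + b₁₁x₁² + b₂₂x₂² + b₁₂x₁x₂, the eigenvalues of B = [[b₁₁, b₁₂/2], [b₁₂/2, b₂₂]] give the curvatures along the canonical axes (both negative indicates a maximum, both positive a minimum, mixed signs a saddle). The coefficients below are made-up illustrative values.

```python
# Sketch of canonical analysis for a two-factor full second-order model:
# eigenvalues of B = [[b11, b12/2], [b12/2, b22]] via the 2x2 closed form.
import math

b11, b22, b12 = -2.0, -3.0, 1.0       # illustrative fitted coefficients
tr = b11 + b22                        # trace of B
det = b11 * b22 - (b12 / 2.0) ** 2    # determinant of B
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0

kind = ("maximum" if max(lam1, lam2) < 0
        else "minimum" if min(lam1, lam2) > 0
        else "saddle")
print(lam1, lam2, kind)  # both eigenvalues negative -> a maximum
```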

The art of experimental design is made richer by a knowledge of how the placement of experiments in factor space affects the quality of information in the fitted model. The basic concepts underlying this interaction between experimental design and information quality were introduced in Chapters 7 and 8. Several examples showed the effect of the location of one experiment (in an otherwise fixed design) on the variance and co-variance of parameter estimates in simple single-factor models. [Pg.279]

In this chapter we investigate the interaction between experimental design and information quality in two-factor systems. However, instead of looking again at the uncertainty of parameter estimates, we will focus attention on uncertainty in the response surface itself. Although the examples are somewhat specific (i.e., limited to two factors and to full second-order polynomial models), the concepts are general and can be extended to factor spaces of other dimensions and to other models. [Pg.279]

Examples of Parameters Used to Estimate Melting Point. [Pg.25]

Estimations of a random variable have, apart from their mean, their own variance. It has been proved that, when choosing an estimation, it is not sufficient to require it to be consistent and unbiased. It is easy to cite examples of different consistent and unbiased estimations of a basic population mean. The criterion for a better estimation is this: an estimation is better the smaller its dispersion. Let us assume that we have two consistent and unbiased estimations θ₁ and θ₂ of a population parameter, and let us suppose that θ₁ has the smaller dispersion. Fig. 1.9 presents the distributions of the given estimations. [Pg.32]
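The "smaller dispersion wins" criterion can be illustrated by simulation: the sample mean and the sample median are both consistent, unbiased estimations of a normal population mean, yet the mean has the smaller variance. The sample size and replication count below are arbitrary choices.

```python
# Sketch: comparing the dispersion of two unbiased estimations of a normal
# population mean (sample mean vs. sample median) by simulation.
import random
import statistics

random.seed(0)
means, medians = [], []
for _ in range(2000):
    sample = [random.gauss(0.0, 1.0) for _ in range(15)]
    means.append(statistics.mean(sample))
    medians.append(statistics.median(sample))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)
print(var_mean < var_median)  # True: the mean is the better estimation here
```

Asymptotically the variance of the median is about π/2 times that of the mean for normal data, which the simulated variances reflect.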

The characteristic features of parameter estimation in a molecular model of adsorption are illustrated in Table 9.9, taking the simple example of the constant-capacitance model as applied to the acid-base reactions on a hydroxylated mineral surface. (It is instructive to work out the correspondence between equation (9.2) and the two reactions in Table 9.9.) Given the assumption of an average surface hydroxyl, there are just two chemical reactions involved (the background electrolyte is not considered). The constraint equations prescribe mass and charge balance (in terms of mole fractions, x) and two complex stability constants. Parameter estimation then requires the determination of the two equilibrium constants and the capacitance density simultaneously from experimental data on the species mole fractions as functions of pH. [Pg.252]

The first application of hierarchical SA for parameter estimation included refinement of the pre-exponentials in a surface kinetics mechanism of CO oxidation on Pt (a lattice KMC model with parameters) (Raimondeau et al., 2003). A second example entailed parameter estimation of a dual site 3D lattice KMC model for the benzene/faujasite zeolite system where benzene-benzene interactions, equilibrium constants for adsorption/desorption of benzene on different types of sites, and diffusion parameters of benzene (a total of 15 parameters) were determined (Snyder and Vlachos, 2004). While this approach appears promising, the development of accurate but inexpensive surfaces (reduced models) deserves further attention to fully understand its successes and limitations. [Pg.53]

The unknown quantities of interest described in the previous section are examples of parameters. A parameter is a numerical property of a population. One may be interested in measures of central tendency or dispersion in populations. Two parameters of interest for our purposes are the mean and standard deviation. The population mean and standard deviation are represented by μ and σ, respectively. The population mean, μ, could represent the average treatment effect in the population of individuals with a particular condition. The standard deviation, σ, could represent the typical variability of treatment responses about the population mean. The corresponding properties of a sample, the sample mean and the sample standard deviation, are typically represented by x̄ and s, which were introduced in Chapter 5. Recall that the term "parameter" was encountered in Section 6.5 when describing the two quantities that define the normal distribution. In statistical applications, the values of the parameters of the normal distribution cannot be known, but are estimated by sample statistics. In this sense, the use of the word "parameter" is consistent between the earlier context and the present one. We have adhered to convention by using the term "parameter" in these two slightly different contexts. [Pg.69]

To compare this model to the simpler base model, a likelihood ratio test may be utilized as is commonly applied in population model building. This test considers the log likelihood values (in NONMEM, the minimum values of the objective function) from two hierarchical models and compares the difference in these values to a χ² statistic with the number of degrees of freedom equal to the difference in the number of parameters estimated in the two models. When the model including the effect of AUC was estimated in the example rash data set, the minimum value of the objective function was 1605.344. Thus, the difference in the log-likelihood values for the two models is 11.470 with 1 degree of freedom, relating to a p-value of 0.0007, and the conclusion that AUC is a statistically significant predictor of response at a = 0.05. The estimates (SE) for θ₁ and θ₂ were -2.54 (0.176) and 0.000969 (0.000287), respectively, and the variance parameter was estimated at 2.80 (0.525). [Pg.642]
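The p-value quoted above can be reproduced directly. For 1 degree of freedom, the chi-square survival function has the closed form P(X > x) = erfc(√(x/2)), so no statistics library is needed for this check.

```python
# Sketch: likelihood ratio test p-value for a 1-degree-of-freedom comparison,
# using the closed-form chi-square(1) survival function P(X > x) = erfc(sqrt(x/2)).
import math

delta_ofv = 11.470                           # difference in objective function values
p_value = math.erfc(math.sqrt(delta_ofv / 2.0))
print(round(p_value, 4))                     # ~0.0007, as quoted in the text
```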

Local SA involves repeated groups of simulations, where in each group a fixed-point perturbation of one parameter is used for the simulations. Trial simulation output metrics are then calculated for each group and the impact on outcome is considered relative to this range of parameter estimates. For example, the degree of sensitivity may be considered by the rate of change in response relative to the unit change in the parameter. This process is then repeated for each parameter of interest. A limitation of the local sensitivity approach is that it reflects sensitivity to uncertainty in only one parameter or assumption at a time. It is therefore inefficient, and conclusions about sensitivity are conditional on the assumed values of all other parameters. [Pg.888]
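The one-at-a-time procedure can be sketched as follows. The model (a first-order response y = A·(1 − exp(−k·t))) and its parameter values are illustrative assumptions; each parameter is perturbed by a fixed +1% and the normalized rate of change of the output is recorded.

```python
# Sketch of local sensitivity analysis: perturb one parameter at a time and
# record the normalized rate of change of a model output.
import math

def output(params, t=2.0):
    """Illustrative model response at a fixed time point."""
    A, k = params["A"], params["k"]
    return A * (1.0 - math.exp(-k * t))

base = {"A": 10.0, "k": 0.5}          # assumed nominal parameter values
y0 = output(base)

sensitivity = {}
for name in base:                      # one parameter at a time
    perturbed = dict(base)
    perturbed[name] *= 1.01            # fixed +1% perturbation
    dy = output(perturbed) - y0
    dp = perturbed[name] - base[name]
    # normalized sensitivity: relative change in output per relative change
    sensitivity[name] = (dy / y0) / (dp / base[name])

print(sensitivity)
```

Note that each entry of `sensitivity` is conditional on the other parameter staying at its nominal value, which is exactly the limitation described above.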

See also Nicolaides [13] and Spyriouni and Vergelati [14] for examples of the estimation of χ by the method developed in this book, and by atomistic simulations, respectively, to be used as an input parameter in mesoscale simulations of the dynamics of multiphase materials. [Pg.178]

Often in pharmacokinetics the analyst has data on more than one individual. In a typical Phase 1 clinical trial, there might be 12-18 subjects who have pharmacokinetic data collected. Applying a compartmental model to each individual's data generates a vector of parameter estimates, each row of which represents a realization from some probability distribution. For example,... [Pg.119]

A standard way to describe the results of parameter estimation is a contour plot. The value of the objective function (Q) is investigated as a function of a parameter pair, and the parameter values at which the objective function attains the same specified value are depicted as a single curve. An example of a contour plot is provided in Figure 10.16. [Pg.442]
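The raw material for such a plot is just the objective function evaluated over a grid of the parameter pair; a plotting library then connects grid points of equal Q into contours. A minimal sketch, with an assumed model y = a·exp(−k·t) and hypothetical observations:

```python
# Sketch: evaluating the objective function Q over a grid of a parameter pair
# (a, k), which is the raw material for a contour plot. Data are illustrative.
import math

t_data = [1.0, 2.0, 3.0]
y_data = [2.7, 1.5, 0.8]              # hypothetical observations

def Q(a, k):
    """Sum-of-squares objective for the model y = a * exp(-k * t)."""
    return sum((y - a * math.exp(-k * t)) ** 2 for t, y in zip(t_data, y_data))

# coarse grid over the (a, k) parameter pair; the minimum locates the estimate,
# and level sets of Q over this grid are the contours
grid = [(Q(a / 10, k / 100), a / 10, k / 100)
        for a in range(30, 70) for k in range(30, 90)]
q_min, a_best, k_best = min(grid)
print(q_min, a_best, k_best)
```

Elongated or tilted contours around (a_best, k_best) would indicate correlated, poorly identified parameters, which is what such plots are typically used to diagnose.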

In much of statistics, the notion of a population is stressed and the subject is sometimes even defined as the science of making statements about populations using samples. However, the notion of a population can be extremely elusive. In survey work, for example, we often have a definite population of units in mind and a sample is taken from this population, sometimes according to some well-specified probabilistic rule. If this rule is used as the basis for calculation of parameter estimates and their standard errors, then this is referred to as design-based inference (Lehtonen and Pahkinen, 2004). Because there is a form of design-based inference which applies to experiments also, we shall refer to it when used for samples as sampling-based inference. [Pg.41]

© 2019 chempedia.info