
Implicit Estimation

Now we turn our attention to algebraic models that can only be represented implicitly through an equation of the form, [Pg.19]

In implicit estimation, rather than minimizing a weighted sum of squares of the residuals in the response variables, we minimize a suitable implicit function of the measured variables dictated by the model equations. Namely, if we substitute the actual measured variables into Equation 2.8, an error term always arises, even if the mathematical model is exact. [Pg.20]

The residual is not equal to zero because of the experimental error in the measured variables, i.e., [Pg.20]

Even if we make the stringent assumption that the errors in the measurement of each variable (ε_ij, i=1,2,…,N, j=1,2,…,R) are independently and identically distributed (i.i.d.) normally with zero mean and constant variance, it is rather difficult to establish the exact distribution of the error term e_i in Equation 2.35. This is particularly true when the expression is highly nonlinear. For example, this situation arises in the estimation of parameters for nonlinear thermodynamic models and in the treatment of potentiometric titration data (Sutton and MacGregor, 1977; Sachs, 1976; Englezos et al., 1990a, 1990b). [Pg.20]

If we assume that the residuals in Equation 2.35 (e_i) are normally distributed, their covariance matrix (Σ_e,i) can be related to the covariance matrix of the measured variables (COV(ε_y,i) = Σ_y,i) through the error propagation law. Hence, if for example we consider the case of independent measurements with a constant variance, i.e. [Pg.20]
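To make the mechanics concrete, here is a minimal sketch of implicit least squares estimation, assuming a hypothetical two-parameter implicit model and synthetic data; the model function, parameter names, and noise level are illustrative and are not taken from the text. The measured values are substituted directly into the model equation and the resulting implicit residuals are minimized; with independent measurements of constant variance, the weighting reduces to a constant and ordinary least squares on the implicit residuals suffices.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical implicit model: phi(x; k) = k1*x1 + k2*x2**2 - x1*x2 = 0.
# Substituting the measured x into phi gives a nonzero residual e_i
# even if the model were exact, because of measurement error.
def phi(x_meas, k):
    x1, x2 = x_meas[:, 0], x_meas[:, 1]
    return k[0] * x1 + k[1] * x2**2 - x1 * x2

# Synthetic "measured" data (illustrative only).
rng = np.random.default_rng(0)
k_true = np.array([0.8, 1.5])
x2 = np.linspace(1.0, 5.0, 20)
x1 = k_true[1] * x2**2 / (x2 - k_true[0])        # satisfies phi = 0 exactly
x_meas = np.column_stack([x1, x2]) + rng.normal(scale=0.05, size=(20, 2))

# Independent measurements with constant variance: minimize the sum of
# squared implicit residuals directly.
fit = least_squares(lambda k: phi(x_meas, k), x0=np.array([1.0, 1.0]))
print("estimated parameters:", fit.x)
```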


As we shall discuss later, in many instances implicit estimation provides the easiest and computationally the most efficient solution to this class of problems. [Pg.10]

The choice of the objective function is very important, as it dictates not only the values of the parameters but also their statistical properties. We may encounter two broad estimation cases. Explicit estimation refers to situations where the output vector is expressed as an explicit function of the input vector and the parameters. Implicit estimation refers to algebraic models in which the output and input vectors are related through an implicit function. [Pg.14]
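As an illustration of the two cases, the formulations can be written generically as follows, with y the output vector, x the input vector, k the parameters and a hat denoting a measured value; the symbols are generic rather than those of any specific equation cited in the excerpts.

```latex
\text{Explicit:}\quad
\mathbf{y}_i = \mathbf{f}(\mathbf{x}_i,\mathbf{k}) + \boldsymbol{\varepsilon}_i
\qquad\qquad
\text{Implicit:}\quad
\boldsymbol{\varphi}(\mathbf{x}_i,\mathbf{y}_i;\mathbf{k}) = \mathbf{0},
\quad
\boldsymbol{\varphi}(\hat{\mathbf{x}}_i,\hat{\mathbf{y}}_i;\mathbf{k}) = \mathbf{e}_i
```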

The above implicit formulation of maximum likelihood estimation is valid only under the assumption that the residuals are normally distributed and the model is adequate. From our own experience we have found that implicit estimation provides the easiest and computationally the most efficient solution to many parameter estimation problems. [Pg.21]
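A compact way to state the implicit maximum likelihood objective under these assumptions (normally distributed residuals, adequate model) is given below; the notation is generic, z collects the measured variables, and the residual covariance is the one obtained from the error propagation law mentioned above.

```latex
\hat{\mathbf{k}} \;=\; \arg\min_{\mathbf{k}} \sum_{i=1}^{N}
\mathbf{e}_i^{\mathrm{T}}\,\boldsymbol{\Sigma}_{e,i}^{-1}\,\mathbf{e}_i ,
\qquad
\mathbf{e}_i = \boldsymbol{\varphi}(\hat{\mathbf{z}}_i;\mathbf{k}),
\qquad
\boldsymbol{\Sigma}_{e,i} \approx
\left(\frac{\partial\boldsymbol{\varphi}}{\partial\mathbf{z}}\right)_{\!i}
\boldsymbol{\Sigma}_{z,i}
\left(\frac{\partial\boldsymbol{\varphi}}{\partial\mathbf{z}}\right)_{\!i}^{\mathrm{T}}
```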

Implicit estimation offers the opportunity to avoid the computationally demanding state estimation by formulating a suitable optimality criterion. The penalty one pays is that additional distributional assumptions must be made. The implicit formulation is based on residuals that are implicit functions of the state variables, as opposed to explicit estimation, where the residuals are the errors in the state variables. The assumptions that are made are the following ... [Pg.234]

Many multiresponse investigations have used procedures based on Eq. (7.1-7) or (7.1-15), which exclude S from the parameter set but implicitly estimate S nonetheless. Additional formulas of this type are provided in Eqs. (7.1-8,9) and (7.1-16,17), which give estimates of θ that are more consistent with the full posterior density functions for those problem structures; see Eqs. (7.1-6) and (7.1-14). [Pg.166]
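The device of removing the error covariance from the parameter set while still estimating it implicitly is typically realized through a determinant criterion. The sketch below, with a made-up two-response model and synthetic data, shows the general shape of such an objective; it is an illustration of the idea, not the specific Eqs. (7.1-7) or (7.1-15) cited above.

```python
import numpy as np
from scipy.optimize import minimize

def residual_matrix(theta, t, Y):
    """Hypothetical multiresponse model: two responses, one rate parameter each."""
    y1 = np.exp(-theta[0] * t)            # predicted response 1
    y2 = 1.0 - np.exp(-theta[1] * t)      # predicted response 2
    return Y - np.column_stack([y1, y2])  # n x m matrix of residuals

def det_criterion(theta, t, Y):
    """Determinant criterion: minimizing det(E^T E) marginalizes the unknown
    response covariance instead of carrying it as extra parameters."""
    E = residual_matrix(theta, t, Y)
    return np.linalg.det(E.T @ E)

# Illustrative data
rng = np.random.default_rng(1)
t = np.linspace(0.1, 5.0, 30)
Y = np.column_stack([np.exp(-0.7 * t), 1.0 - np.exp(-1.3 * t)])
Y += rng.normal(scale=0.02, size=Y.shape)

fit = minimize(det_criterion, x0=[1.0, 1.0], args=(t, Y), method="Nelder-Mead")
print("theta estimate:", fit.x)
```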

Effective ways to estimate free energies: since solvent degrees of freedom are taken into account implicitly, estimating free energies of solvated structures is much more straightforward than with explicit water models. [Pg.126]

The problems that occur when one tries to estimate affinity in terms of component terms do not arise when perturbation methods are used with simulations in order to compute potentials of mean force or free energies for molecular transformations; simulations use a simple physical force field and thereby implicitly include all component terms discussed earlier. We have used the molecular transformation approach to compute binding affinities from these first principles [14]. The basic approach had been introduced in early work, in which we studied the affinity of xenon for myoglobin [11]. The procedure was to gradually decrease the interactions between the xenon atom and the protein, and compute the free energy change by standard perturbation methods, cf. (10). An essential component is to impose a restraint on the... [Pg.137]
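For readers unfamiliar with the standard perturbation estimator alluded to as (10), the free energy change between two states can be estimated from energy differences sampled in the reference ensemble. The sketch below uses synthetic energy differences and an assumed temperature simply to show the estimator; it is not the xenon–myoglobin calculation itself.

```python
import numpy as np

kB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)

def fep_free_energy(dU, T=300.0):
    """Zwanzig perturbation estimator: dA = -kT ln < exp(-dU/kT) >_0,
    where dU = U_1 - U_0 is sampled in the reference ensemble 0."""
    beta = 1.0 / (kB * T)
    return -np.log(np.mean(np.exp(-beta * dU))) / beta

# Synthetic energy differences (kcal/mol) standing in for one window of a
# gradual decoupling of a ligand-protein interaction.
rng = np.random.default_rng(2)
dU = rng.normal(loc=0.3, scale=0.2, size=5000)

print("estimated dA for this window: %.3f kcal/mol" % fep_free_energy(dU))
# In practice the transformation is split into many small windows (or run as
# thermodynamic integration) and the window contributions are summed.
```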

UNIQUAC is significant because it provides a means to estimate multicomponent interactions using no more than binary interaction experimental data, bond angles, and bond distances. There is an implicit assumption that the combinatorial portion of the model, i.e., the size and shape effects, can be averaged over a molecule and that these can be directly related to molecular surface area and volume. This assumption can be found in many QSAR methods and probably makes a significant contribution to the generally low accuracy of many QSAR prediction techniques. [Pg.252]
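For reference, the combinatorial (size-and-shape) portion referred to here is usually written in terms of molecular volume parameters r_i and surface-area parameters q_i; the following is the standard Staverman–Guggenheim form quoted from general UNIQUAC literature rather than from the cited page.

```latex
\ln\gamma_i^{\mathrm{C}} =
\ln\frac{\Phi_i}{x_i} + \frac{z}{2}\,q_i \ln\frac{\theta_i}{\Phi_i}
+ l_i - \frac{\Phi_i}{x_i}\sum_j x_j l_j ,
\qquad
l_i = \frac{z}{2}\,(r_i - q_i) - (r_i - 1),
\qquad
\Phi_i = \frac{r_i x_i}{\sum_j r_j x_j},\quad
\theta_i = \frac{q_i x_i}{\sum_j q_j x_j}
```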

Another principal difficulty is that the precise effect of local dynamics on the NOE intensity cannot be determined from the data. The dynamic correction factor [85] describes the ratio of the effects of distance and angular fluctuations. Theoretical studies based on NOE intensities extracted from molecular dynamics trajectories [86,87] are helpful to understand the detailed relationship between NMR parameters and local dynamics and may lead to structure-dependent corrections. In an implicit way, an estimate of the dynamic correction factor has been used in an ensemble relaxation matrix refinement by including order parameters for proton-proton vectors derived from molecular dynamics calculations [72]. One remaining challenge is to incorporate data describing the local dynamics of the molecule directly into the refinement, in such a way that an order parameter calculated from the computed ensemble is similar to the measured order parameter. [Pg.270]
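As a concrete illustration of the kind of order parameter meant here, a Lipari–Szabo-type generalized order parameter can be computed from the interproton unit vectors of a simulated ensemble. The snippet below is a generic sketch with a made-up trajectory array; it is not the refinement protocol of Ref. [72].

```python
import numpy as np

def order_parameter(vectors):
    """Generalized order parameter for one interproton vector:
    S^2 = 0.5 * (3 * sum_ab <u_a u_b>^2 - 1), averaged over the ensemble."""
    u = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    # Ensemble-averaged outer product <u_a u_b>, a 3x3 matrix
    M = np.einsum('na,nb->ab', u, u) / len(u)
    return 0.5 * (3.0 * np.sum(M**2) - 1.0)

# Made-up "trajectory": a vector wobbling in a cone about the z-axis
rng = np.random.default_rng(3)
n = 2000
tilt = rng.normal(scale=0.25, size=(n, 2))
traj = np.column_stack([tilt, np.ones(n)])

print("S^2 = %.3f" % order_parameter(traj))  # approaches 1 for a rigid vector
```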

The life cycle cost of a process is the net total of all expenses incurred over the entire lifetime of a process. The choice of process chemistry can dramatically affect this life cycle cost. A quantitative life cycle cost cannot be estimated with sufficient accuracy to be of practical value. There is benefit, however, in making a qualitative estimate of the life cycle costs of competing chemistries. Implicit in any estimate of life cycle cost is the estimate of risk. One alternative may seem more attractive than another until the risks associated with product liability issues, environmental concerns, and process hazards are given due consideration. Value of life concepts and cost-benefit analyses (CCPS, 1995a, pp. 23-27 and Chapter 8) are useful in predicting and comparing the life cycle costs of alternatives. [Pg.65]

The potential usefulness of x-ray emission spectrography for trace analysis is implicit in the results of approximate calculations presented in Chapter 4. Thus, it was estimated that the intensity of cobalt Ka generated under practicable conditions in a monolayer (area, 1 sq cm) of cobalt atoms might give 133 counts per second (4.16). Such a sample weighs 0.2 pg. [Pg.226]

If we decide to estimate only a finite number of basis modes, we implicitly assume that the coefficients of all the other modes are zero and that the covariance of the estimated modes is very large. Thus QN Q becomes large relative to C, and in this case Eq. 16 simplifies to a weighted least squares formula... [Pg.381]
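The limit being invoked can be written out in a generic linear Gaussian form; the letters below (H for the mode-to-measurement matrix, C for the measurement noise covariance, P for the prior covariance of the retained mode coefficients) are our labels for the quantities discussed and may not match the notation of Eq. 16, which is not reproduced here.

```latex
\hat{\mathbf{a}} =
\left(\mathbf{H}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{H} + \mathbf{P}^{-1}\right)^{-1}
\mathbf{H}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{y}
\;\;\xrightarrow[\;\mathbf{P}\to\infty\;]{}\;\;
\left(\mathbf{H}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{H}\right)^{-1}
\mathbf{H}^{\mathrm{T}}\mathbf{C}^{-1}\mathbf{y}
```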

Truncating the plane constrains the centroid estimate to a certain region, making the variance finite. Since the truncated plane is placed where the centre is expected to be we are implicitly adding prior information (van Dam and Lane, 2000). The smaller the plane, the more the centroid is effectively localized and the more prior information is assumed. Therefore, by adding prior information, truncating the plane can improve the centroid estimate, even though some photons are lost. The optimal solution is to maximize the likelihood directly. [Pg.389]
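A minimal sketch of the effect being described: the centroid of detected photon positions is computed once over the full detector and once restricted to a truncation window centred on the expected position. The window size and photon statistics below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Photon arrival positions: a compact spot plus a uniform background
spot = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
background = rng.uniform(-8.0, 8.0, size=(50, 2))
photons = np.vstack([spot, background])

def centroid(points):
    return points.mean(axis=0)

def truncated_centroid(points, half_width):
    """Keep only photons inside a window centred where the spot is expected.
    Some photons are lost, but the estimator variance is reduced."""
    mask = np.all(np.abs(points) <= half_width, axis=1)
    return points[mask].mean(axis=0)

print("full-plane centroid   :", centroid(photons))
print("truncated (|x|,|y|<=3):", truncated_centroid(photons, 3.0))
```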

A second approach is to compare total mining production of a metal to an estimate of its total natural flux, making the implicit assumption that all mined materials will be released to the environment in the near future (a reasonable... [Pg.379]

When one attempts to estimate some parameter, the possibility of error is implicitly assumed. What sort of errors are possible? Why is it necessary to distinguish between two types of error? Reality (as hindsight would show later on, but unknown at the time) could be red or blue, and by the same token, any assumptions or decisions reached at the time were either red ... [Pg.87]
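The two types of error implied here can be laid out against the red/blue picture in a small decision table; since the original passage is truncated, we take "reality is red" as the null hypothesis for concreteness, so the labels below are an illustration rather than the book's own layout.

```latex
\begin{tabular}{l|cc}
 & \text{Reality: red} & \text{Reality: blue} \\ \hline
\text{Decide ``red''}  & \text{correct} & \text{Type II error (false acceptance)} \\
\text{Decide ``blue''} & \text{Type I error (false rejection)} & \text{correct} \\
\end{tabular}
```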

Summarizing, the weighted scheme (9) is stable in the space C, provided condition (17) holds. For the purely implicit scheme with [Pg.467]
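For orientation, the weighted (σ-weighted) scheme referred to has the generic form below, with the purely implicit scheme recovered at σ = 1; the operator Λ and right-hand side φ are generic stand-ins, and the exact stability condition (17) is not reproduced in this excerpt.

```latex
\frac{y^{n+1}-y^{n}}{\tau} =
\Lambda\!\left(\sigma\,y^{n+1} + (1-\sigma)\,y^{n}\right) + \varphi^{n},
\qquad 0 \le \sigma \le 1,
\qquad \sigma = 1 \;\;\text{(purely implicit scheme)}
```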

Recall that Theorem 2 was proved in Section 2 with special emphasis on one particular case of explicit schemes with the identity operator B = E involved. To make our exposition more transparent, the implicit scheme transforms into the explicit scheme (17). Having stipulated the conditions y E < C < estimate... [Pg.682]


See other pages where Implicit Estimation is mentioned: [Pg.19]    [Pg.232]    [Pg.432]    [Pg.333]    [Pg.343]    [Pg.40]    [Pg.253]    [Pg.53]    [Pg.244]    [Pg.296]    [Pg.475]    [Pg.806]    [Pg.2548]    [Pg.133]    [Pg.498]    [Pg.412]    [Pg.303]    [Pg.408]    [Pg.34]    [Pg.347]    [Pg.84]    [Pg.103]    [Pg.782]    [Pg.189]    [Pg.167]





Implicit

Implicit Least Squares Estimation

Implicit Maximum Likelihood Parameter Estimation
