Big Chemical Encyclopedia


Explicit Least Squares Estimation

The error-in-variables method can be simplified to weighted least-squares estimation if the independent variables are assumed to be known precisely, or if their error variance is negligible compared to that of the dependent variables. In practice, however, the vapor-liquid equilibrium (VLE) behavior of the binary system dictates the choice of the pairs (T,x) or (T,P) as independent variables. In systems with a ... [Pg.233]

Assuming that the variance of the errors in the measurement of each dependent variable is known, the following explicit LS objective functions may be formulated: [Pg.234]
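The objective functions themselves (Equations 14.16a,b) are not reproduced in this excerpt. A generic form consistent with the surrounding text, assuming (T,x) are treated as the independent variables and the pressure P and vapor composition y as the measured dependent variables with known error variances, would be a weighted sum of squared residuals:

```latex
S_{\mathrm{LS}}(\mathbf{k}) \;=\; \sum_{i=1}^{N}
\left[
\frac{\bigl(P_i - \hat{P}_i(\mathbf{k})\bigr)^2}{\sigma_{P,i}^2}
\;+\;
\frac{\bigl(y_i - \hat{y}_i(\mathbf{k})\bigr)^2}{\sigma_{y,i}^2}
\right]
```

with an analogous expression when (T,P) are the independent variables and the phase compositions x and y are the measured quantities.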

The calculation of y and P in Equation 14.16a is achieved by bubble-point pressure-type calculations, whereas that of x and y in Equation 14.16b is by isothermal-isobaric flash-type calculations. These calculations must be performed during each iteration of the minimization procedure using the current estimates of the parameters. Given that both the bubble-point and the flash calculations are themselves iterative, the overall computational requirements are significant. Furthermore, convergence problems in the thermodynamic calculations may be encountered when the parameter values are far from their optimal values. [Pg.234]


A comparison of the various fitting techniques is given in Table 5. Most of these techniques depend either explicitly or implicitly on a least-squares minimization. This is appropriate provided the noise present is normally distributed; in that case, least-squares estimation is equivalent to maximum-likelihood estimation [147]. If the noise is not normally distributed, a least-squares estimation is inappropriate. Table 5 includes an indication of how each technique scales with N, the number of data points, for the case in which N is large. A detailed discussion of how different techniques scale with N, and also with the number of parameters, is given in the PhD thesis of Vanhamme [148]. [Pg.112]
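The equivalence noted above can be made explicit. For independent Gaussian noise with common variance sigma^2, the likelihood of the data under a model f(x; theta) is

```latex
L(\theta) \;=\; \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\left(-\frac{\bigl(y_i - f(x_i;\theta)\bigr)^2}{2\sigma^2}\right),
\qquad
-\ln L(\theta) \;=\; \frac{N}{2}\ln(2\pi\sigma^2)
\;+\; \frac{1}{2\sigma^2}\sum_{i=1}^{N}\bigl(y_i - f(x_i;\theta)\bigr)^2 .
```

Since the first term and the prefactor 1/(2 sigma^2) do not depend on theta, maximizing the likelihood is the same as minimizing the sum of squared residuals; for non-Gaussian noise the negative log-likelihood is no longer a sum of squares, which is why least squares then loses its optimality.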

The rate constants (together with the model and initial concentrations) define the matrix C of concentration profiles. Earlier, we have shown how C can be computed for simple reaction schemes. For any particular matrix C we can calculate the best set of molar absorptivities A. Note that, during the fitting, this will not be the correct, final version of A, as it is only based on an intermediate matrix C, which itself is based on an intermediate set of rate constants (k). Note also that the calculation of A is a linear least-squares estimate; its calculation is explicit, i.e., noniterative. [Pg.229]
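The explicit step can be sketched as follows: given a concentration matrix C (from the current rate constants) and a measured data matrix Y, the best A solves the linear least-squares problem min ||Y - C A|| in one shot. The rate constant, dimensions, and noise level below are hypothetical values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)          # 50 time points
k = 0.5                                  # assumed first-order rate constant, A -> B
# Concentration profiles for the two species, defined by the rate constant k:
C = np.column_stack([np.exp(-k * t), 1.0 - np.exp(-k * t)])

A_true = rng.random((2, 30))             # molar absorptivities (unknown in practice)
Y = C @ A_true + 1e-4 * rng.standard_normal((50, 30))  # simulated absorbance data

# Explicit (noniterative) linear least-squares estimate of A for this C:
A_hat, *_ = np.linalg.lstsq(C, Y, rcond=None)
```

Inside an iterative fit of the rate constants, only C changes from iteration to iteration; this linear solve for A is repeated at each step.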

Ridge estimation is suitable for situations in which many correlated variables are present in the model (Hastie et al., 2001). In these cases, the least-squares estimator may be poorly determined, since a large positive coefficient on one variable may be canceled by a correspondingly large negative coefficient on a correlated variable (Hastie et al., 2001). Ridge regression can effectively prevent this from happening. As the unique solution to (2), the ridge estimator has the explicit form ... [Pg.208]
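The explicit form referred to is the standard ridge solution beta = (X^T X + lambda*I)^{-1} X^T y. A minimal sketch with synthetic data (the penalty value and the data below are assumptions for illustration; in practice lambda is chosen by cross-validation):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)   # near-collinear pair of predictors
beta_true = np.array([1.0, 1.0, 0.5, 0.0, -0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 1.0  # ridge penalty (assumed value)

# Explicit ridge solution: beta = (X^T X + lam * I)^{-1} X^T y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Ordinary least squares for comparison; ridge shrinks the coefficient norm.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

For any lambda > 0 the ridge coefficient vector has a strictly smaller Euclidean norm than the OLS solution, which is exactly the mechanism that suppresses the canceling large positive and negative coefficients.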

While you will use the least-squares method in most cases, do not forget that by selecting an estimation criterion you make assumptions about the error structure, even without intending to engage with this problem. Therefore, it is better to be explicit on this issue, for the sake of consistency in the further steps of the estimation. [Pg.143]

Estimated standard deviations in the last place(s) are given in parentheses following the respective parameter values. These are adapted from the values given by the program ORFLS by taking into account the fact that the number of observations (powder-line intensities) is smaller than the number of reflections used explicitly in the least-squares treatment. [Pg.119]

This is a very popular method because it allows us to compute the regression estimates explicitly as θ̂ = (XᵀX)⁻¹Xᵀy (where the design matrix X is enlarged with a column of ones for the intercept term and y = (y₁, ..., yₙ)ᵀ); moreover, the least-squares method is optimal if the errors are normally distributed. [Pg.177]
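A minimal sketch of this explicit normal-equations computation, using hypothetical straight-line data (intercept 2, slope 3):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(20)   # toy data, assumed values

# Design matrix enlarged with a column of ones for the intercept term.
X = np.column_stack([np.ones_like(x), x])

# Explicit solution of the normal equations: theta_hat = (X^T X)^{-1} X^T y
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```

(Forming and solving the normal equations is shown here because it mirrors the formula; numerically, a QR-based solver such as `np.linalg.lstsq` is preferred when XᵀX is ill-conditioned.)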

The standard method of least-squares fitting of model parameters to experimental data is only applicable if the deviation of the parameter-based model results from the measured data can be explicitly calculated. To obtain the matrix of model-error changes with the individual changes of each parameter, for a series of time-dependent data points as measured in a ... [Pg.158]
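The matrix described, model-output changes with individual changes of each parameter over a series of time points, is a numerical Jacobian (sensitivity matrix). A sketch using forward differences on a hypothetical single-exponential model (the model, parameter values, and step size are assumptions for illustration):

```python
import numpy as np

def model(params, t):
    # Hypothetical model: single-exponential decay y = a * exp(-k * t).
    a, k = params
    return a * np.exp(-k * t)

def jacobian_fd(params, t, h=1e-6):
    """Matrix of model-output changes with individual changes of each
    parameter, approximated column by column with forward differences."""
    y0 = model(params, t)
    J = np.empty((t.size, len(params)))
    for j in range(len(params)):
        p = np.array(params, dtype=float)
        p[j] += h                       # perturb one parameter at a time
        J[:, j] = (model(p, t) - y0) / h
    return J

t = np.linspace(0.0, 5.0, 10)
J = jacobian_fd([1.0, 0.3], t)          # one column per parameter
```

Each column of J is the sensitivity of the time series to one parameter; this is the matrix a least-squares routine needs at every iteration.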

In the original version of the r₀-method, ground-state inertial moments calculated for all isotopomers in terms of internal coordinates are least-squares fitted to the experimental moments I⁰. The internal coordinates represent a reference system that is identical for all isotopomers, and the resulting r₀ structure is obtained as the final set of internal coordinates determined by the criterion of optimum fit. All atomic positions must either be included in the list of those to be determined, or estimated values must be supplied and then kept fixed in the fit; the result depends on these assumed values. Schwendeman has suggested a useful r₀-derived variant [6], the "p-Kr method", where the isotopic differences between the calculated inertial moments of the isotopomers and the parent species are fitted to the respective experimental differences, in an attempt to compensate for the (isotopomer-independent) part of the rovib contribution. The same result is achieved explicitly by the rIε-method, an r₀-derived variant presented later in this chapter, where the calculated inertial moments plus three isotopomer-independent rovib contributions εg are fitted to the experimental ground-state moments I⁰g. [Pg.66]

Unlike linear models, where the normal equations can be solved explicitly in terms of the model parameters, Eqs. (3.14) and (3.15) are nonlinear in the parameter estimates and must be solved iteratively, usually using the method of nonlinear least squares or some modification thereof. The focus of this chapter will be on nonlinear least squares, while the problems of weighted least squares, data transformations, and variance models will be dealt with in another chapter.
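The iterative idea can be sketched with an undamped Gauss-Newton loop: at each step the nonlinear model is linearized and an explicit linear least-squares problem is solved for the parameter update. The model, starting values, and data below are assumptions for illustration; production code would add damping (Levenberg-Marquardt) and a convergence test.

```python
import numpy as np

def model(theta, x):
    # Hypothetical nonlinear model: y = theta0 * (1 - exp(-theta1 * x)).
    return theta[0] * (1.0 - np.exp(-theta[1] * x))

def jacobian(theta, x):
    # Analytical derivatives of the model w.r.t. theta0 and theta1.
    e = np.exp(-theta[1] * x)
    return np.column_stack([1.0 - e, theta[0] * x * e])

def gauss_newton(theta, x, y, n_iter=100):
    # At each iteration, solve the linearized least-squares problem
    # (the normal equations of the local linear model) for the update.
    for _ in range(n_iter):
        r = y - model(theta, x)
        J = jacobian(theta, x)
        theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
    return theta

x = np.linspace(0.1, 5.0, 30)
y = model(np.array([2.0, 0.8]), x)               # noise-free synthetic data
theta_hat = gauss_newton(np.array([1.0, 1.0]), x, y)
```

Each pass through the loop is exactly one explicit linear solve, which is why nonlinear least squares inherits its per-iteration machinery from the linear case.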

An important finding is that if one has initial estimates of the basic parameters, one can determine local identifiability numerically at the initial estimates directly, without having to generate the observational parameters as explicit functions of the basic parameters. That is the approach used in the IDENT programs, which use the method of least squares (Jacquez and Perry, 1990; Perry, 1991). It is important to realize that the method works for linear and nonlinear systems, compartmental or noncompartmental. Furthermore, for linear systems it gives structural local identifiability. [Pg.318]
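One common way to perform such a numerical check is to evaluate the sensitivity (Jacobian) matrix of the observations with respect to the parameters at the initial estimates and inspect its rank: full column rank indicates local identifiability, a rank deficiency indicates it fails. A sketch with a deliberately non-identifiable hypothetical model, in which the two parameters enter only through their product (the model, time grid, and tolerances are assumptions for illustration):

```python
import numpy as np

def observations(p, t):
    # Hypothetical model observed at times t. The parameters p = (a, b)
    # appear only through their product a*b, so only the product is
    # identifiable and the local check should report a rank deficiency.
    a, b = p
    return a * b * np.exp(-t)

def sensitivity_rank(p, t, h=1e-6):
    """Rank of the sensitivity matrix d(observations)/d(parameters),
    built by forward differences at the initial estimates p."""
    y0 = observations(p, t)
    S = np.empty((t.size, len(p)))
    for j in range(len(p)):
        q = np.array(p, dtype=float)
        q[j] += h
        S[:, j] = (observations(q, t) - y0) / h
    return np.linalg.matrix_rank(S, tol=1e-8)

t = np.linspace(0.0, 3.0, 20)
rank = sensitivity_rank([1.0, 2.0], t)   # rank 1 < 2 parameters: not locally identifiable
```

Note that this verdict is local to the chosen initial estimates, which matches the point made above: no symbolic expression of the observational parameters is ever needed.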

