Big Chemical Encyclopedia


Parameter estimation squares

Unfortunately, many commonly used methods for parameter estimation give only estimates of the parameters and no measures of their uncertainty. Estimation is usually accomplished by calculating the dependent variable at each experimental point, summing the squared differences between the calculated and measured values, and adjusting the parameters to minimize this sum. Such methods routinely ignore errors in the measured independent variables. For example, in vapor-liquid equilibrium data reduction, errors in the liquid-phase mole fraction and temperature measurements are often assumed to be absent; the total pressure is then calculated as a function of the estimated parameters, the measured temperature, and the measured liquid-phase mole fraction. [Pg.97]

The diagonal elements of this matrix approximate the variances of the corresponding parameters. The square roots of these variances are estimates of the standard errors in the parameters and, in effect, are a measure of the uncertainties of those parameters. [Pg.102]
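The step described above can be sketched in a few lines of numpy. This is a minimal illustration with hypothetical straight-line data, not from the source: the parameter covariance is approximated as s²(XᵀX)⁻¹, and the square roots of its diagonal give the standard errors.

```python
import numpy as np

# Hypothetical straight-line data, roughly y = 2 + 0.5 x plus small noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.05, 2.48, 3.02, 3.51, 3.96, 4.55])

X = np.column_stack([np.ones_like(x), x])        # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares parameter estimates

resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])       # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)                # approximate parameter covariance

std_err = np.sqrt(np.diag(cov))                  # standard errors of the parameters
print(beta, std_err)
```

The printed standard errors quantify the uncertainty in the intercept and slope in the same units as the parameters themselves.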

x_ij is the ith observation of variable x_j, y_i is the ith observation of variable y, and ŷ_i is the ith value of the dependent variable calculated with the model function and the final least-squares parameter estimates. [Pg.42]

If we consider the relative merits of the two forms of the optimal reconstructor, Eqs. 16 and 17, we note that both require a matrix inversion. Computationally, the size of that inversion matters: Eq. 16 inverts an M x M (measurements) matrix and Eq. 17 a P x P (parameters) matrix. In a traditional least-squares system fewer parameters are estimated than there are measurements, i.e., M > P, indicating Eq. 16 should be used. In a Bayesian framework we are trying to reconstruct more modes than we have measurements, i.e., P > M, so Eq. 17 is more convenient. [Pg.380]

For the data the squared correlation coefficient was 0.93 with a root mean square error of 2.2. The graph of predicted versus actual observed MS(1 +4) along with the summary of fit statistics and parameter estimates is shown in Figure 16.7. [Pg.494]

Parameter estimation to fit the data is carried out with VARY YM Y1 Y2, FIT M, and OPTIMIZE. The result is optimized values for Ym (0.7835), Y1 (0.6346), and Y2 (1.1770). The statistical summary shows that the residual sum of squares decreases from 0.494 with the starting values (Ym = Y1 = Y2 = 1.0) to 0.294 with the optimized parameters. The values after optimization of Ym, Y1, and Y2 are shown in Figure 2, which illustrates the anchor-pivot method and forced linearization with optimization of the initiator parameters through Y1 and Y2. [Pg.314]

The standard way to answer the above question would be to compute the probability distribution of the parameter and, from it, the 95% confidence region on the parameter estimate obtained. We would, in other words, find a set of values Iθ such that the probability that we are correct in asserting that the true value θ of the parameter lies in Iθ is 95%. If we assume that the parameter estimates are at least approximately normally distributed around the true parameter value (which is asymptotically true for least squares under some mild regularity assumptions), then it suffices to know the parameter dispersion (variance-covariance matrix) in order to compute approximate ellipsoidal confidence regions. [Pg.80]
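The normal approximation above can be sketched concretely. This is an illustrative numpy example with hypothetical numbers, not from the source: a per-parameter interval is estimate ± z·(standard error), and the ellipsoidal region test compares the Mahalanobis distance against a chi-square quantile (5.991 is the 95% quantile for 2 parameters).

```python
import numpy as np

def approx_confidence_interval(estimate, std_err, z=1.96):
    """Approximate 95% interval under asymptotic normality."""
    return estimate - z * std_err, estimate + z * std_err

def in_confidence_ellipsoid(theta, theta_hat, cov, chi2_crit=5.991):
    """Is theta inside the approximate 95% ellipsoid?
    (5.991 = 95% chi-square quantile for 2 parameters)."""
    d = theta - theta_hat
    return float(d @ np.linalg.solve(cov, d)) <= chi2_crit

lo, hi = approx_confidence_interval(0.75, 0.10)   # hypothetical estimate and std error

theta_hat = np.array([1.0, 2.0])                  # hypothetical parameter estimates
cov = np.array([[0.04, 0.0], [0.0, 0.01]])        # hypothetical covariance matrix
print(lo, hi, in_confidence_ellipsoid(np.array([1.1, 2.05]), theta_hat, cov))
```

For correlated parameters the off-diagonal covariance terms tilt the ellipsoid, which is why the joint region is more informative than the two marginal intervals.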

Parameter estimation. Integral reactor behavior was used for the interpretation of the experimental data, using N2O conversion levels up to 70%. The temperature dependency of the rate parameters was expressed in the Arrhenius form. The apparent rate parameters have been estimated by nonlinear least-squares methods, minimizing the sum of squares of the residual N2O conversion. Transport limitations could be neglected. [Pg.643]
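The Arrhenius temperature dependence mentioned above can be illustrated with a small fit. This is a hedged sketch on synthetic, hypothetical data (A = 1e7 s⁻¹, Ea = 80 kJ/mol are invented values, not from the source); it uses the log-linearized form ln k = ln A − Ea/(RT), which is a common way to obtain starting values before a nonlinear least-squares refinement like the one the passage describes.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate-constant data generated from A = 1e7 s^-1, Ea = 80 kJ/mol.
T = np.array([600.0, 650.0, 700.0, 750.0])
k = 1e7 * np.exp(-80_000.0 / (R * T))

# Fit ln k = ln A - Ea/(R T), which is linear in (ln A, Ea).
X = np.column_stack([np.ones_like(T), -1.0 / (R * T)])
coef, *_ = np.linalg.lstsq(X, np.log(k), rcond=None)
A_hat, Ea_hat = np.exp(coef[0]), coef[1]
print(A_hat, Ea_hat)
```

Because A and Ea are strongly correlated in this parameterization, reparameterizing around a reference temperature is often used in practice to improve conditioning of the nonlinear fit.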

The application of optimisation techniques for parameter estimation requires a useful statistical criterion (e.g., least-squares). A very important criterion in non-linear parameter estimation is the likelihood or probability density function. This can be combined with an error model which allows the errors to be a function of the measured value. A simple but flexible and useful error model is used in SIMUSOLV (Steiner et al., 1986; Burt, 1989). [Pg.114]

Figure 2.40. Parameter estimation example using ESL to fit kinetic data to a model; the smooth curve with squares is the fitted model and the irregular curve is the data.
The parameter values found by the two methods differ slightly, owing to the different criteria used (the least-squares method for ESL and the maximum-likelihood method for SIMUSOLV) and because the T=10 data point was included in the ESL run. The output curves are very similar and the parameters agree within the expected standard deviation. The quality of parameter estimation can also be judged from a contour plot as given in Fig. 2.41. [Pg.122]

Parameter estimation and identification are an essential step in the development of mathematical models that describe the behavior of physical processes (Seinfeld and Lapidus, 1974; Aris, 1994). The reader is strongly advised to consult the above references for discussions on what a model is, types of models, and model formulation and evaluation. The paper by Plackett that presents the history of the discovery of the least squares method is also recommended (Plackett, 1972). [Pg.2]

The structure of such models can be exploited in reducing the dimensionality of the nonlinear parameter estimation problem, since the conditionally linear parameters, k1, can be obtained by linear least squares in one step and without the need for initial estimates. Further details are provided in Chapter 8, where we exploit the structure of the model either to reduce the dimensionality of the nonlinear regression problem or to arrive at consistent initial guesses for any iterative parameter search algorithm. [Pg.10]
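The idea of conditionally linear parameters can be sketched with a separable model. This is a hedged numpy illustration on hypothetical data (the model y = k1·exp(−k2·t) and its parameter values are invented for the example): for any trial value of the nonlinear parameter k2, the linear parameter k1 follows from linear least squares in one step, with no initial guess for k1 needed.

```python
import numpy as np

# Separable model y = k1 * exp(-k2 * t): k1 is conditionally linear.
t = np.linspace(0.0, 2.0, 9)
y = 3.0 * np.exp(-1.5 * t)          # hypothetical noise-free data, k1=3, k2=1.5

def k1_given_k2(k2):
    """Closed-form linear least-squares solution for k1 at a fixed k2."""
    phi = np.exp(-k2 * t)           # basis column for the linear parameter
    return float(phi @ y / (phi @ phi))

# Scan only the nonlinear parameter (a crude search; any iterative
# algorithm could take over from here, now in one dimension instead of two).
k2_grid = np.linspace(0.5, 2.5, 201)
sse = [np.sum((y - k1_given_k2(k2) * np.exp(-k2 * t)) ** 2) for k2 in k2_grid]
k2_best = k2_grid[int(np.argmin(sse))]
print(k2_best, k1_given_k2(k2_best))
```

The payoff is exactly the dimensionality reduction the passage describes: the search runs over k2 alone, while k1 is recovered exactly at each step.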

The determinant criterion is very powerful and it should be used to refine the parameter estimates obtained with least squares estimation if our assumptions about the covariance matrix are suspect. [Pg.19]

The computation of the parameter estimates is accomplished by minimizing the least squares (LS) objective function given by Equation 3.8 which is shown next... [Pg.27]

The least squares estimator has several desirable properties. Namely, the parameter estimates are normally distributed, unbiased (i.e., E(k̂) = k), and their covariance matrix is given by... [Pg.32]

The condition number is always greater than or equal to one and represents the maximum amplification of errors in the right-hand side into the solution vector. The condition number is also equal to the ratio of the largest to the smallest singular value of A. In parameter estimation applications, A is a positive definite symmetric matrix and hence cond(A) is also equal to the ratio of the largest to the smallest eigenvalue of A, i.e.,... [Pg.142]
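The two equivalent characterizations above are easy to verify numerically. This is a small numpy check on an arbitrary positive definite symmetric matrix (the matrix itself is just an example): the singular-value ratio, the eigenvalue ratio, and numpy's built-in 2-norm condition number all coincide.

```python
import numpy as np

# An example positive definite symmetric matrix, as arises in
# parameter estimation (e.g., A = J^T J).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

s = np.linalg.svd(A, compute_uv=False)      # singular values, descending
lam = np.linalg.eigvalsh(A)                 # eigenvalues, ascending
cond_svd = s[0] / s[-1]                     # sigma_max / sigma_min
cond_eig = lam[-1] / lam[0]                 # lambda_max / lambda_min (SPD case)
print(cond_svd, cond_eig, np.linalg.cond(A))
```

A large condition number warns that small errors in the data can be strongly amplified in the estimated parameters, which is why ill-conditioned A matrices are a central concern in parameter estimation.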

When the Gauss-Newton method is used to estimate the unknown parameters, we linearize the model equations and at each iteration solve the corresponding linear least squares problem. As a result, the estimated parameter values have linear least squares properties. Namely, the parameter estimates are normally distributed, unbiased (i.e., E(k̂) = k), and their covariance matrix is given by... [Pg.177]
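The linearize-and-solve loop can be sketched for a one-parameter model. This is a minimal Gauss-Newton illustration on hypothetical noise-free data for y = exp(−k·t) (the model and values are invented for the example); a production implementation would add step-length control and a convergence test rather than a fixed iteration count.

```python
import numpy as np

# Hypothetical data from y = exp(-k t) with true k = 1.3.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.exp(-1.3 * t)

k = 0.5                                  # starting guess
for _ in range(20):
    f = np.exp(-k * t)                   # model prediction at current k
    r = y - f                            # residuals
    J = -t * f                           # Jacobian df/dk (a vector here)
    # Linearized least-squares step: delta = (J^T J)^{-1} J^T r
    k += float(J @ r) / float(J @ J)
print(k)
```

Each pass solves the linear least-squares problem obtained by linearizing the model about the current estimate, which is exactly the mechanism the passage attributes the estimator's linear least-squares properties to.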

Murray and Reiff (1984) showed that the use of an optimally selected square wave for the input variables can offer considerable improvement in parameter estimation. Of course, it should be noted that the use of constant inputs is often... [Pg.200]

It is well known that cubic equations of state may predict erroneous binary vapor-liquid equilibria when using interaction parameter estimates from an unconstrained regression of binary VLE data (Schwartzentruber et al., 1987; Englezos et al., 1989). In other words, the liquid-phase stability criterion is violated. Modell and Reid (1983) discuss the phase stability criteria extensively. A general method to alleviate the problem is to perform the least squares estimation subject to satisfying the liquid-phase stability criterion. In other... [Pg.236]

If incorrect phase behavior is predicted by the EOS then constrained least squares (CLS) estimation should be performed and new parameter estimates be obtained. Subsequently, the phase behavior should be computed again and if the fit is found to be acceptable for the intended applications, then the CLS estimates should suffice. This was found to be the case for the carbon dioxide-n-hexane system presented later in this chapter. [Pg.243]

Prior work on the use of critical point data to estimate binary interaction parameters employed the minimization of a summation of squared differences between experimental and calculated critical temperatures and/or pressures (Equation 14.39). During that minimization the EoS uses the current parameter estimates to compute the critical pressure and/or temperature. However, the initial estimates are often far from the optimum and, as a consequence, such iterative computations are difficult to converge and the overall computational requirements are significant. [Pg.261]

Kittrell et al. (1965a) also performed two types of estimation. First the data at each isotherm were used separately and subsequently all data were regressed simultaneously. The regression of the isothermal data was also done with linear least squares by linearizing the model equation. In Tables 16.7 and 16.8 the reported parameter estimates are given together with the reported standard error. Ayen and Peters (1962) have also reported values for the unknown parameters and they are given here in Table 16.9. [Pg.290]

Subsequently, Watts performed a parameter estimation using the data from all temperatures simultaneously, employing the formulation of the rate constants as in Equation 16.19. The parameter values found, as well as their standard errors, are reported in Table 16.18. It is noted that the residuals from the fit were well behaved except for two at 375°C; these two residuals were found to account for 40% of the residual sum of squares of deviations between experimental data and calculated values. [Pg.299]

In this example the number of measured variables is less than the number of state variables. Zhu et al. (1997) minimized an unweighted sum of squares of deviations of calculated and experimental concentrations of HPA and PD. They used Marquardt's modification of the Gauss-Newton method and reported the parameter estimates shown in Table 16.24. [Pg.308]

Procedures on how to make inferences on the parameters and the response variables are introduced in Chapter 11. The design of experiments has a direct impact on the quality of the estimated parameters and is presented in Chapter 12. The emphasis is on sequential experimental design for parameter estimation and for model discrimination. Recursive least squares estimation, used for on-line data analysis, is briefly covered in Chapter 13. [Pg.448]
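The recursive least-squares idea mentioned above, updating the estimate one observation at a time for on-line use, can be sketched briefly. This is a hedged numpy illustration with an invented linear model y = a + b·x and hypothetical streaming data; it uses the standard RLS gain and covariance recursions, without the forgetting factor a real on-line implementation would often include.

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """One recursive least-squares update for a new observation (phi, y)."""
    K = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
    theta = theta + K * (y - phi @ theta)  # correct with the prediction error
    P = P - np.outer(K, phi) @ P           # covariance update
    return theta, P

theta = np.zeros(2)            # current estimate of [a, b]
P = np.eye(2) * 1e6            # large initial covariance: little prior information

# Hypothetical noise-free data stream from y = 2 + 0.5 x.
for x in np.linspace(0.0, 5.0, 30):
    phi = np.array([1.0, x])
    theta, P = rls_update(theta, P, phi, 2.0 + 0.5 * x)
print(theta)
```

Each update costs only a few vector operations, which is what makes the recursion suitable for the on-line data analysis setting the passage refers to.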

These problems are provided to afford an opportunity for the reader to analyze binding data of different sorts. The problems do not require nonlinear least squares analysis, but this would be recommended to those with access to appropriate facilities. It must be emphasized that, while linearizing transformations allow binding data to be clearly visualized, parameter estimation should... [Pg.174]

Parameter estimation including nonlinear least-squares methods... [Pg.177]

A method is described for fitting the Cole-Cole phenomenological equation to isochronal mechanical relaxation scans. The basic parameters in the equation are the unrelaxed and relaxed moduli, a width parameter and the central relaxation time. The first three are given linear temperature coefficients and the latter can have WLF or Arrhenius behavior. A set of these parameters is determined for each relaxation in the specimen by means of nonlinear least squares optimization of the fit of the equation to the data. An interactive front-end is present in the fitting routine to aid in initial parameter estimation for the iterative fitting process. The use of the determined parameters in assisting in the interpretation of relaxation processes is discussed. [Pg.89]

