The unknown parameter vector k is obtained by minimizing the corresponding least squares objective function, where the weighting matrix Q is chosen based on the statistical characteristics of the error term e, as already discussed in Chapter 2. [Pg.169]

The computation of the parameter estimates is accomplished by minimizing the least squares (LS) objective function given by Equation 3.8, which is shown next. [Pg.27]

Firstly, various criteria for estimation, different from the least squares criterion E[(P(x) - P*(x))^2], may now be considered. Consider a general loss function L(e), a function of the error of estimation e = p(x) - p*(x). The objective is to build an estimator that minimizes the expected value of that loss function, and more precisely, its conditional expectation given the N data values and configuration. [Pg.113]

Given N measurements of the output vector, the parameters can be obtained by minimizing the Least Squares (LS) objective function, which is given below as the weighted sum of squares of the residuals, namely, [Pg.14]
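The weighted sum of squares described above can be sketched in Python. This is a minimal illustration, not code from the source: the function names are invented, and the weighting matrix Q is assumed to be supplied by the caller (e.g. an inverse measurement covariance).

```python
def quadratic_form(e, Q):
    """Compute e^T Q e for a residual vector e and weighting matrix Q."""
    n = len(e)
    return sum(e[i] * Q[i][j] * e[j] for i in range(n) for j in range(n))

def ls_objective(residual_vectors, Q):
    """Weighted least squares objective: S = sum_i e_i^T Q e_i,
    summed over the N measured output vectors."""
    return sum(quadratic_form(e, Q) for e in residual_vectors)
```

With Q equal to the identity matrix this reduces to the ordinary (unweighted) sum of squared residuals.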

Referring to the earlier treatment of linear least-squares regression, we saw that the key step in obtaining the normal equations was to take the partial derivatives of the objective function with respect to each parameter, setting these equal to zero. The general form of this operation is [Pg.49]
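For the simplest case, a straight-line model y = a + b*x, setting the two partial derivatives of S = sum_i (y_i - a - b*x_i)^2 to zero yields two normal equations in a and b. A minimal sketch (function and variable names are illustrative, not from the text):

```python
def fit_line(xs, ys):
    """Fit y = a + b*x by solving the normal equations obtained from
    setting dS/da = 0 and dS/db = 0, where S = sum_i (y_i - a - b*x_i)**2."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:
    #   n*a  + sx*b  = sy
    #   sx*a + sxx*b = sxy
    det = n * sxx - sx * sx
    a = (sy * sxx - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b
```

The same pattern generalizes to any model that is linear in its parameters: one equation per parameter, solved simultaneously.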

Having computed (dy^T/dk)^T we can proceed and obtain a linear equation for Dk(j+1) by substituting Equation 10.9 into the least squares objective function and using the stationary criterion (dS/dk(j+1)) = 0. The resulting equation is of the form [Pg.172]

As was indicated in Section 7.2, the vector of measurement adjustments, e, has a multivariate normal distribution with zero mean and covariance matrix V. Thus, the objective function value of the least squares estimation problem (7.21), ofv = e^T V^-1 e, has a central chi-square distribution with a number of degrees of freedom equal to the rank of A. [Pg.144]

Furthermore, as a first approximation one can use implicit least squares estimation to obtain very good estimates of the parameters (Englezos et al., 1990). Namely, the parameters are obtained by minimizing the following Implicit Least Squares (ILS) objective function, [Pg.21]

Given a set of data points (x_i, y_i), i = 1,...,N, and a mathematical model of the form y = f(x,k), the objective is to determine the unknown parameter vector k by minimizing the least squares objective function subject to the equality constraint, namely [Pg.159]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of measurements, and it has the covariance matrix of measurement errors as weights. Thus, this matrix is essential in obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]

Consider, in general, the overall problem consisting of m balances and divide it into m smaller subproblems; that is, we will be processing one equation at a time. Then, after the ith balance has been processed, a new value of the least squares objective (test function) can be computed. Let J_i denote the value of the objective evaluated after the ith equation has been considered. The approach for the detection of a gross error in this balance is based on the fact that J_i is a random variable whose probability distribution can be calculated. [Pg.137]

Extended Kalman filtering has been a popular method used in the literature to solve the dynamic data reconciliation problem (Muske and Edgar, 1998). As an alternative, the nonlinear dynamic data reconciliation problem with a weighted least squares objective function can be expressed as a moving horizon problem (Liebman et al., 1992), similar to that used for model predictive control discussed earlier. [Pg.577]

The preceding results are applied to develop a strategy that allows us to isolate the source of gross errors from a set of constraints and measurements. Different least squares estimation problems are solved by adding one equation at a time to the set of process constraints. After each incorporation, the least squares objective function value is calculated and compared with the critical value. [Pg.145]

It is well known that cubic equations of state have inherent limitations in accurately describing fluid phase behavior. Thus our objective is often restricted to the determination of a set of interaction parameters that will yield an "acceptable fit" of the binary VLE data. The following implicit least squares objective function is suitable for this purpose [Pg.236]

The point where the constraint is satisfied, (x0, y0), may or may not belong to the data set (x_i, y_i), i = 1,...,N. The above constrained minimization problem can be transformed into an unconstrained one by introducing the Lagrange multiplier, w, and augmenting the least squares objective function to form the Lagrangian, [Pg.159]
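As a concrete instance of this construction, consider forcing a straight-line fit through a fixed point (x0, y0). Stationarity of the Lagrangian with respect to a, b, and the multiplier w gives a 3x3 linear system. The sketch below uses illustrative names and pure Python, and is not the formulation from the source:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def constrained_line_fit(xs, ys, x0, y0):
    """Minimize sum_i (y_i - a - b*x_i)^2 subject to a + b*x0 = y0 via the
    Lagrangian L = S(a, b) + w*(a + b*x0 - y0); returns (a, b, w)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Stationarity conditions dL/da = 0, dL/db = 0, plus the constraint:
    A = [[2.0 * n, 2.0 * sx, 1.0],
         [2.0 * sx, 2.0 * sxx, x0],
         [1.0, x0, 0.0]]
    rhs = [2.0 * sy, 2.0 * sxy, y0]
    return tuple(solve_linear(A, rhs))
```

If the unconstrained optimum already satisfies the constraint, the multiplier w comes out as zero, as expected.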

Historically, treatment of measurement noise has been addressed through two distinct avenues. For steady-state data and processes, Kuehn and Davidson (1961) presented the seminal paper describing the data reconciliation problem based on least squares optimization. For dynamic data and processes, Kalman filtering (Gelb, 1974) has been successfully used to recursively smooth measurement data and estimate parameters. Both techniques were developed for linear systems and weighted least squares objective functions. [Pg.577]

The first factor, k1 = 35, is expected to be temperature dependent via an Arrhenius-type relationship; the second factor defines functionality dependence on molecular size; the third factor indicates that smaller molecules are more likely to react than larger species, perhaps due to steric hindrance potentials and molecular mobility. The last term expresses a bulk diffusional effect on the inherent reactivity of all polymeric species. The specific constants were obtained by minimizing a least squares objective function for the cure at 60°C. Representative data are presented in Figure 5. The fit was good. [Pg.285]

It is assumed that there are available NCP experimental binary critical point data. These data include values of the pressure, Pc, the temperature, Tc, and the mole fraction, xc, of one of the components at each of the critical points for the binary mixture. The vector k of interaction parameters is determined by fitting the EoS to the critical data. In explicit formulations the interaction parameters are obtained by the minimization of the following least squares objective function [Pg.261]

Note in Table 5.10 that many of the integrals are common to different kinetic models. This is specific to this reaction where all the stoichiometric coefficients are unity and the initial reaction mixture was equimolar. In other words, the change in the number of moles is the same for all components. Rather than determine the integrals analytically, they could have been determined numerically. Analytical integrals are simply more convenient if they can be obtained, especially if the model is to be fitted in a spreadsheet, rather than purpose-written software. The least squares fit varies the reaction rate constants to minimize the objective function [Pg.89]
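For a single rate constant, varying k to minimize the objective can be sketched with a one-dimensional golden-section search. The first-order model c(t) = c0*exp(-k*t) and all names below are illustrative assumptions, not the reaction or software from the text:

```python
import math

def model(t, k, c0=1.0):
    """Assumed first-order decay model c(t) = c0 * exp(-k*t)."""
    return c0 * math.exp(-k * t)

def objective(k, ts, cs):
    """Least squares objective: sum of squared residuals for rate constant k."""
    return sum((c - model(t, k)) ** 2 for t, c in zip(ts, cs))

def fit_rate_constant(ts, cs, lo=0.0, hi=5.0, iters=80):
    """Minimize the objective over [lo, hi] by golden-section search."""
    gr = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c = b - gr * (b - a)
        d = a + gr * (b - a)
        if objective(c, ts, cs) < objective(d, ts, cs):
            b = d
        else:
            a = c
    return (a + b) / 2.0
```

A spreadsheet solver or a library minimizer would do the same job; the point is simply that the rate constants are the decision variables and the sum of squared residuals is the objective.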

© 2019 chempedia.info