SOLUTION. This is an example of linear least-squares analysis (LLSA), where the objective function is continuous. Typically, LLSA is performed on a discrete set of data points and one seeks to minimize the sum of squares of differences between the data and a continuous model function. In this case, we seek to minimize the square of the difference between two continuous functions over the complete range of reactant conversions that are possible (i.e., 0 < x < 1 for irreversible reactions). Hence, the sum of squares in the objective function to be [Pg.453]

This is the most common type of linear regression. In ordinary least-squares regression, the objective function to be optimised is given as [Pg.93]
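As a minimal sketch of such an objective function, the following Python snippet evaluates the ordinary least-squares sum of squared residuals for a straight-line model y = a*x + b; the names (`ols_objective`, `a`, `b`) are illustrative, not taken from the text.

```python
def ols_objective(a, b, xs, ys):
    """Sum of squared residuals between data (xs, ys) and the model a*x + b."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]               # exactly y = 2x + 1
print(ols_objective(2.0, 1.0, xs, ys))  # 0.0 at the true parameters
```

Minimizing this quantity over (a, b) is what the regression procedures below formalize.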

Answer (b) The linear least-squares prescription described in this chapter is used to replace a complex kinetic rate law by a zeroth-order rate law. Hence, the objective function that must be minimized is [Pg.459]
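Following the continuous formulation described earlier, setting dS/dk0 = 0 for S(k0) = ∫₀¹ (r(x) − k0)² dx shows that the best zeroth-order constant is the mean of the rate law over all conversions. A sketch in Python, using a hypothetical first-order rate law r(x) = k(1 − x) as the complex law being replaced:

```python
def best_zeroth_order(rate, n=100_000):
    # Minimizing S(k0) = integral over 0 < x < 1 of (r(x) - k0)^2 gives
    # k0 = integral of r(x) dx, i.e. the mean rate over all conversions.
    # Midpoint-rule quadrature stands in for the integral here.
    h = 1.0 / n
    return sum(rate((i + 0.5) * h) for i in range(n)) * h

# Hypothetical first-order rate law r(x) = k*(1 - x) with k = 2:
print(best_zeroth_order(lambda x: 2.0 * (1.0 - x)))   # ≈ 1.0
```

The result, the average of 2(1 − x) on [0, 1], is the zeroth-order rate constant that minimizes the continuous sum of squares.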

Referring to the earlier treatment of linear least-squares regression, we saw that the key step in obtaining the normal equations was to take the partial derivatives of the objective function with respect to each parameter, setting these equal to zero. The general form of this operation is [Pg.49]
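For the concrete case of a straight-line model y = a*x + b, carrying out that operation (∂S/∂a = 0, ∂S/∂b = 0) yields a 2×2 linear system, which the following sketch solves directly; the function name is illustrative.

```python
def normal_equations_fit(xs, ys):
    # Setting dS/da = 0 and dS/db = 0 for S = sum of (y_i - a*x_i - b)^2
    # gives the normal equations:
    #   a*Sxx + b*Sx = Sxy
    #   a*Sx  + b*n  = Sy
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * Sxx - Sx * Sx
    a = (n * Sxy - Sx * Sy) / det
    b = (Sxx * Sy - Sx * Sxy) / det
    return a, b

print(normal_equations_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))  # (2.0, 1.0)
```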

With normal weighted non-linear regression, the objective is to minimize this objective function. As described later, additional terms may be added to the objective function when performing extended least-squares, Eq. (23), or Bayesian, Eq. (24), analyses. [Pg.2758]

The numerical value of the rate constant can now be estimated by linear regression, by the method of least squares. The model is compressed to Equation A10.30, and the objective function thus becomes [Pg.595]
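Equation A10.30 itself is not reproduced in this excerpt, but the idea can be sketched for an assumed first-order model, where ln(C/C0) = −k·t is linear in k and least squares through the origin gives a closed-form estimate; all names and data here are illustrative.

```python
import math

def estimate_k(ts, cs, c0):
    # For an assumed first-order model, ln(C/C0) = -k*t, and least squares
    # through the origin gives:  k = -sum(t_i * ln(C_i/C0)) / sum(t_i^2)
    num = sum(t * math.log(c / c0) for t, c in zip(ts, cs))
    den = sum(t * t for t in ts)
    return -num / den

ts = [1.0, 2.0, 3.0, 4.0]
cs = [math.exp(-0.5 * t) for t in ts]   # synthetic data with k = 0.5
print(estimate_k(ts, cs, 1.0))          # ≈ 0.5
```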

Having computed (∂y/∂k) we can proceed and obtain a linear equation for k by substituting Equation 10.9 into the least squares objective function and using the stationary criterion (∂S/∂k) = 0. The resulting equation is of the form [Pg.193]

Given N measurements of the response variables (output vector), the parameters are obtained by minimizing the Linear Least Squares (LS) objective function, which is given below as the weighted sum of squares of the residuals, namely, [Pg.26]
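A minimal sketch of such a weighted sum of squares, assuming a diagonal weight matrix (weights are often taken as inverse measurement variances, 1/σᵢ²); the names are illustrative.

```python
def wls_objective(y_meas, y_model, weights):
    # S(theta) = sum of w_i * (y_meas_i - y_model_i)^2; with w_i = 1/sigma_i^2
    # each residual is weighted by the inverse variance of its measurement.
    return sum(w * (ym - yc) ** 2 for w, ym, yc in zip(weights, y_meas, y_model))

print(wls_objective([1.0, 2.0], [0.5, 2.5], [4.0, 1.0]))  # 4*0.25 + 1*0.25 = 1.25
```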

For models in which the dependent variables are linear functions of the parameters, the solution to the above-mentioned optimization problems can be obtained in closed form when the least squares objective functions (3.22) and (3.24) are considered. However, in chemical kinetics, linear problems are encountered only in very simple cases, so that optimization techniques for nonlinear models must be considered. [Pg.48]
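To illustrate why nonlinear models require iteration, the sketch below fits y = exp(−k·t), which is nonlinear in k, by a one-parameter Gauss-Newton scheme; this is a generic technique chosen for illustration, not the specific method of the text, and all names and data are hypothetical.

```python
import math

def gauss_newton_k(ts, ys, k0, iters=50):
    # Fit y = exp(-k*t); no closed form exists since the model is nonlinear
    # in k. Gauss-Newton linearizes the model at each step:
    #   delta_k = sum(J_i * r_i) / sum(J_i^2),
    #   J_i = d f_i / dk = -t_i * exp(-k*t_i),   r_i = y_i - exp(-k*t_i)
    k = k0
    for _ in range(iters):
        f = [math.exp(-k * t) for t in ts]
        J = [-t * fi for t, fi in zip(ts, f)]            # model sensitivities
        r = [y - fi for y, fi in zip(ys, f)]             # residuals
        k += sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
    return k

ts = [0.5, 1.0, 2.0, 3.0]
ys = [math.exp(-0.8 * t) for t in ts]   # synthetic data with k = 0.8
print(gauss_newton_k(ts, ys, k0=0.3))   # ≈ 0.8
```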

Once the selection of a possible kinetic model and a suitable reactor model is complete (equation (8-1)), a non-linear least-squares method can be adopted to determine the kinetic and adsorption parameters. This is achieved by minimizing an objective function representing the sum of the squared differences between the model concentration estimates and the measured experimental concentrations. This non-linear least-squares fit can be performed using the curve-fitting functions available in Matlab, as recommended by Ibrahim (2001). [Pg.151]

The response of many instruments is linear as a function of the measured variable, provided variations due to experimental conditions or the instrument are taken into account. The objective is to determine the parameters of the linear equation that best represents the observations. The primary hypothesis in using the method of least squares is that one of the two variables should be without error while the second is subject to random errors. This is the most frequently applied method. The coefficients a and b of the linear equation y = ax + b, as well as the standard deviation on a and on the estimate of y, have been obtained in the past using a variety of similar equations. The choice of formula depended on whether calculations were carried out manually, with a calculator, or using a spreadsheet. However, appropriate computer software is now widely used. [Pg.394]
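The quantities mentioned here can be sketched in Python: the fit of y = ax + b together with standard deviations derived from the residual variance s² = Σr²/(n−2). The function name and data are illustrative, and the error formulas shown are the standard textbook expressions, not necessarily the exact equations of this passage.

```python
import math

def line_fit_with_errors(xs, ys):
    # Least-squares fit of y = a*x + b, plus the standard deviation on a
    # and on the estimate of y, from the residual variance s2.
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    d = n * Sxx - Sx * Sx
    a = (n * Sxy - Sx * Sy) / d
    b = (Sxx * Sy - Sx * Sxy) / d
    s2 = sum((y - a * x - b) ** 2 for x, y in zip(xs, ys)) / (n - 2)
    s_y = math.sqrt(s2)               # std deviation of the estimate of y
    s_a = math.sqrt(n * s2 / d)       # std deviation on the slope a
    return a, b, s_a, s_y

a, b, s_a, s_y = line_fit_with_errors([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.1, 7.9])
print(a, b)   # ≈ 1.96, 0.1
```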

In a strict sense, parameter estimation is the procedure of computing the estimates by locating the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since then the estimates are found by linear regression without the iteration inherent in nonlinear optimization problems. [Pg.143]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of measurements, and it has the covariance matrix of measurement errors as weights. Thus, this matrix is essential in obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]
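For the simplest case of one linear balance constraint and a diagonal covariance matrix, the reconciliation problem has a closed-form Lagrange-multiplier solution; the sketch below uses a hypothetical mass balance f1 + f2 − f3 = 0, with all names and data invented for illustration.

```python
def reconcile(measured, variances, coeffs):
    # Adjust measurements m to satisfy one linear balance sum(c_i * x_i) = 0
    # while minimizing sum((x_i - m_i)^2 / v_i), where v_i are the measurement
    # variances (diagonal of the covariance matrix). Lagrange multipliers give:
    #   x_i = m_i - v_i * c_i * (c . m) / sum(c_j^2 * v_j)
    cm = sum(c * m for c, m in zip(coeffs, measured))
    den = sum(c * c * v for c, v in zip(coeffs, variances))
    return [m - v * c * cm / den for m, v, c in zip(measured, variances, coeffs)]

# Hypothetical balance f1 + f2 - f3 = 0 with inconsistent raw measurements:
adj = reconcile([10.0, 5.2, 14.5], [1.0, 1.0, 1.0], [1.0, 1.0, -1.0])
print(adj)   # balance now closes: adj[0] + adj[1] - adj[2] == 0
```

Larger variances attract larger adjustments, which is why a reliable covariance matrix matters, as the passage notes.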

This comparison is performed on the basis of an optimality criterion, which allows one to adapt the model to the data by changing the values of the adjustable parameters. Thus, the optimality criteria and the objective functions of maximum likelihood and of weighted least squares are derived from the concept of conditioned probability. Then, optimization techniques are discussed in the cases of both linear and nonlinear explicit models and of nonlinear implicit models, which are very often encountered in chemical kinetics. Finally, a short account of the methods of statistical analysis of the results is given. [Pg.4]

Our objective is to find the PID controller parameters such that the actual closed-loop frequency response is in some sense close to the desired closed-loop frequency response G_r→y(jω). However, the direct approach to this problem leads to a nonlinear optimization problem. Instead, we choose to work with the equivalent open-loop transfer function because, in this case, the problem becomes linear in the controller parameters, enabling us to consider a linear least-squares approach to solving this problem. [Pg.143]

Historically, treatment of measurement noise has been addressed through two distinct avenues. For steady-state data and processes, Kuehn and Davidson (1961) presented the seminal paper describing the data reconciliation problem based on least squares optimization. For dynamic data and processes, Kalman filtering (Gelb, 1974) has been successfully used to recursively smooth measurement data and estimate parameters. Both techniques were developed for linear systems and weighted least squares objective functions. [Pg.577]

© 2019 chempedia.info