Least-Squares Minimization Regression Analysis

The most common way in which models are fitted to data is by using least-squares minimization procedures (regression analysis). All these procedures, linear or nonlinear, seek to find estimates of the equation parameters (α, β, γ) by determining parameter values for which the sum of squared residuals is at a minimum, and therefore [Pg.29]
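The equation that follows "therefore" is not reproduced in the excerpt; in standard notation (θ here denotes a generic parameter vector, a symbol not used in the original), the criterion being described is the residual sum of squares:

```latex
\mathrm{SSR}(\theta) \;=\; \sum_{i=1}^{n} \bigl( y_i - \hat{y}_i(\theta) \bigr)^2 \;\longrightarrow\; \min_{\theta}
```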

At this point it is necessary to discuss differences between uniresponse and multiresponse modeling. Take, for example, the reaction A → B → C. Usually, equations in differential or algebraic form are fitted to the individual data sets for A, B, and C, and a set of parameter estimates is obtained. [Pg.29]

However, if changes in the concentrations of A, B, and C as a function of time are determined, it is possible to use the entire data set (A, B, C) simultaneously to obtain parameter estimates. This procedure entails fitting the functions that describe changes in the concentration of A, B, and C to the experimental data simultaneously, thus obtaining one global estimate of the rate constants. This multivariate response modeling helps increase the precision of the parameter estimates by using all available information from the various responses. [Pg.30]
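As an illustration of the simultaneous (multiresponse) fit described above, the following sketch uses SciPy rather than any software named in the excerpts; the rate constants k1 and k2, the data, and the noise level are all hypothetical.

```python
# Minimal sketch of multiresponse fitting for A -> B -> C (first-order steps),
# with hypothetical rate constants k1, k2 and synthetic "measured" data.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def rates(t, c, k1, k2):
    A, B, C = c
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

t_obs = np.linspace(0.0, 10.0, 11)
c0 = [1.0, 0.0, 0.0]
# Synthetic data generated with k1=0.8, k2=0.3 plus noise (illustration only)
true = solve_ivp(rates, (0, 10), c0, t_eval=t_obs, args=(0.8, 0.3)).y
data = true + np.random.default_rng(1).normal(0.0, 0.01, true.shape)

def residuals(k):
    pred = solve_ivp(rates, (0, 10), c0, t_eval=t_obs, args=(k[0], k[1])).y
    # All three responses (A, B, C) enter the objective simultaneously
    return (pred - data).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5], bounds=(0.0, np.inf))
print(fit.x)  # one global estimate of k1, k2 from all responses at once
```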

A determinant criterion is used to obtain least-squares estimates of model parameters. This entails minimizing the determinant of the matrix of cross products of the various residuals. The maximum likelihood estimates of the model parameters are thus obtained without knowledge of the variance-covariance matrix. The residuals e_i, e_j, and e_k correspond to the difference between predicted and actual values of the dependent variables at the different values u of the independent variable (u = t_0 to u = t_n), for the ith, jth, and kth experiments (A, B, and C), respectively. It is possible to construct an error covariance matrix with elements ν_ij. [Pg.30]

The determinant of this matrix needs to be minimized with respect to the parameters. The diagonal of this matrix corresponds to the sums of squares for each response (ν_ii, ν_jj, ν_kk). [Pg.30]
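In the notation of the preceding excerpt, the matrix elements and the quantity minimized can be written as follows (a reconstruction of the standard determinant criterion for multiresponse estimation; the original equations are not shown in the excerpt):

```latex
\nu_{ij} \;=\; \sum_{u=t_0}^{t_n} e_{i,u}\, e_{j,u},
\qquad
\min_{\text{parameters}} \; \det\bigl[\,\nu_{ij}\,\bigr]
```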


The total residual sum of squares, taken over all elements of E, achieves its minimum when each column of E separately has minimum sum of squares. The latter occurs if each (univariate) column of Y is fitted by X in the least-squares way. Consequently, the least-squares minimization of E is obtained if each separate dependent variable is fitted by multiple regression on X. In other words, the multivariate regression analysis is essentially identical to a set of univariate regressions. Thus, from a methodological point of view nothing new is added, and we may refer to Chapter 10 for a more thorough discussion of the theory and application of multiple regression. [Pg.323]
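A small numerical check of this equivalence (synthetic X and Y; NumPy used purely for illustration):

```python
# Sketch illustrating the statement above: multivariate least squares on Y
# gives the same coefficients as separate univariate regressions on each
# column of Y (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # predictor matrix
B_true = rng.normal(size=(3, 2))      # two response columns
Y = X @ B_true + rng.normal(scale=0.1, size=(50, 2))

B_joint, *_ = np.linalg.lstsq(X, Y, rcond=None)            # all responses at once
B_cols = np.column_stack([np.linalg.lstsq(X, Y[:, j], rcond=None)[0]
                          for j in range(Y.shape[1])])      # one column at a time

print(np.allclose(B_joint, B_cols))   # True: identical coefficient estimates
```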

These two equations present the extension of the Frumkin model to the adsorption of a one-surfactant system with two orientational states at the interface. The model equations now contain four free parameters, including ω1, ω2, and b. The equations are highly nonlinear, and the regression used in the analysis of surface tension data involves special combinations of Eqs. 23 and 24, which produces a special model function used in the least-squares minimization with measured surface tension data. Since the model function also contains surface... [Pg.32]

The estimation of RRs is usually performed by ordinary least-squares linear regression. This should preferably be done by testing at least six different concentration levels within the linear range, plus the blank (if the blank exists or is measurable). To minimize possible experimental drifts, test samples should be examined in random order (not in order of increasing concentration). If possible, standard or test samples should be analyzed in duplicate (or more), especially when it is not possible to analyze at least six standards. If repeating the analysis of some... [Pg.424]
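A minimal sketch of that calibration protocol, assuming hypothetical concentration levels, a made-up instrument response, and NumPy for the ordinary least-squares fit:

```python
# Minimal sketch of the protocol above: blank plus six concentration levels,
# duplicates, randomized measurement order (all values hypothetical).
import numpy as np

rng = np.random.default_rng(7)
levels = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])    # blank + 6 standards
conc = np.repeat(levels, 2)                                 # duplicates
order = rng.permutation(conc.size)                          # run in random order
signal = 0.05 + 0.8 * conc[order] + rng.normal(0, 0.02, conc.size)  # fake readings

slope, intercept = np.polyfit(conc[order], signal, 1)       # ordinary least squares
print(slope, intercept)
```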

The rate expressions R_j = R_j(T, c_k, θ_m, x) typically contain functional dependencies on the reaction conditions (temperature, gas-phase and surface concentrations of reactants and products) as well as on the adaptive parameters x (i.e., selected pre-exponential factors k_0j, activation energies E_j, inhibition constants K, effective storage capacities ψ_ec, and adsorption capacities Γ and Ω). Such rate parameters are estimated by multiresponse non-linear regression according to the integral method of kinetic analysis, based on classical least-squares principles (Froment and Bischoff, 1979). The objective function to be minimized in the weighted least squares method is... [Pg.127]
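The excerpt breaks off before the equation; a standard weighted least-squares objective for multiresponse kinetic data (the weights w_j and the index limits are generic, not taken from the original) has the form:

```latex
S(\mathbf{x}) \;=\; \sum_{j=1}^{N_{\mathrm{resp}}} \sum_{i=1}^{N_{\mathrm{exp}}}
w_j \,\bigl( y_{ij} - \hat{y}_{ij}(\mathbf{x}) \bigr)^2
\;\longrightarrow\; \min_{\mathbf{x}}
```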

Assumptions of a multiple regression analysis are identical to those for linear regression, except that there are p independent variables in this case. To obtain the regression coefficient estimates b by the method of least squares, we again have to minimize... [Pg.136]
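The quantity minimized (written here in standard form, since the excerpt's own equation is not shown) and the resulting least-squares solution are:

```latex
S(\mathbf{b}) \;=\; \sum_{i=1}^{n}\Bigl( y_i - b_0 - \sum_{k=1}^{p} b_k x_{ik} \Bigr)^2,
\qquad
\hat{\mathbf{b}} \;=\; (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\mathbf{y}
```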

Excel provides some built-in tools for fitting models to data sets. By far the most common routine method for experimental data analysis is linear regression, from which the best-fit model is obtained by minimizing the least-squares error between the y-test data and an array of predicted y data calculated according to a linear... [Pg.23]

In regression analysis, we identify the best-fitting straight line through the observed data points. This is selected on the basis that it minimizes the sum of the squared vertical deviations of the points from the line - the 'least squares' fit. The goodness of fit is reported as an r-squared value, which can vary between 0 (no fit) and 100 per cent (perfect fit of line to points). [Pg.192]
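A short sketch of that calculation with made-up data, reporting r-squared as a percentage as described above:

```python
# Sketch of the line fit and goodness-of-fit measure described above
# (synthetic x, y data; R-squared reported as a percentage).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)          # minimizes squared vertical deviations
y_hat = slope * x + intercept
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {100 * r_squared:.1f} %")          # 0 % = no fit, 100 % = perfect fit
```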

The calibration model referred to as partial least squares regression (PLSR) is a relatively modern technique, developed and popularized in analytical science by Wold. The method differs from PCR by including the dependent variable in the data compression and decomposition operations, i.e. both y and x data are actively used in the data analysis. This action serves to minimize the potential effects of x variables having large variances but which are irrelevant to the calibration model. The simultaneous use of X and y information makes the method more complex than PCR, as two loading vectors are required to provide orthogonality of the factors. [Pg.197]
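A minimal PLSR sketch using scikit-learn; the data dimensions and noise are invented, and only the idea that both X and y drive the factor extraction is taken from the text.

```python
# Minimal sketch contrasting the idea above: PLS uses y in the decomposition,
# whereas PCR compresses X alone (synthetic spectra-like data, hypothetical sizes).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 100))                 # 30 samples, 100 "wavelengths"
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=30)

pls = PLSRegression(n_components=3)            # both X and y drive the factors
pls.fit(X, y)
print(pls.score(X, y))                         # R^2 of the calibration model
```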

In nonlinear regression analysis, we search for those parameter values that minimize the sum of the squares of the differences between the measured values and the calculated values for all the data points. Not only can nonlinear regression find the best estimates of parameter values, it can also be used to discriminate between different rate law models, such as the Langmuir-Hinshelwood models discussed in Chapter 10. Many software programs are available to find these parameter values, so that all one has to do is enter the data. The Polymath software will be used to illustrate this technique. In order to carry out the search efficiently, in some cases one has to enter initial estimates of the parameter values close to the actual values. These estimates can be obtained using the linear least-squares technique discussed on the CD-ROM Professional Reference Shelf. [Pg.271]
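The excerpt uses Polymath; the sketch below substitutes SciPy's curve_fit and a generic Langmuir-Hinshelwood-type rate form with invented data, to show the role of initial estimates in the nonlinear search.

```python
# Hedged sketch of the nonlinear search described above, using a
# Langmuir-Hinshelwood-type form r = k*K*P / (1 + K*P) with made-up data.
import numpy as np
from scipy.optimize import curve_fit

def rate(P, k, K):
    return k * K * P / (1.0 + K * P)

P = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # partial pressures (arbitrary units)
r = np.array([0.30, 0.48, 0.67, 0.82, 0.91])   # measured rates (illustrative)

# Initial estimates (e.g., from a linearized fit) help the search converge
p0 = [1.0, 1.0]
(k_fit, K_fit), cov = curve_fit(rate, P, r, p0=p0)
print(k_fit, K_fit)
```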

Regression analysis. Assume that the process is originally at steady state with T = T_ss and F = F_ss. Introduce a unit step change in the coolant flow rate. Then F_n = 1 for n = 1, 2, ..., N. Let T_n be the measured response of the liquid's temperature at the sampling instants n = 1, 2, ..., N. The objective of the regression analysis is to find the values of τ1, τ2, and Kp which minimize the following least-squares objective... [Pg.340]
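A sketch of that discrete least-squares problem, assuming an overdamped second-order step response T(t) = Kp[1 − (τ1 e^(−t/τ1) − τ2 e^(−t/τ2))/(τ1 − τ2)] and synthetic sampled data; the model form and all numerical values are assumptions, not taken from the original.

```python
# Sketch of the discrete least-squares fit described above: estimate Kp, tau1,
# tau2 from sampled temperature data after a unit step in coolant flow
# (synthetic T_n samples; assumes an overdamped second-order process).
import numpy as np
from scipy.optimize import least_squares

def step_response(t, Kp, tau1, tau2):
    return Kp * (1 - (tau1 * np.exp(-t / tau1) - tau2 * np.exp(-t / tau2))
                 / (tau1 - tau2))

t_n = np.arange(1, 21) * 0.5                                  # sampling instants
T_n = step_response(t_n, 2.0, 3.0, 1.0) \
      + np.random.default_rng(5).normal(0, 0.02, t_n.size)    # "measured" deviations

def resid(p):
    Kp, tau1, tau2 = p
    return step_response(t_n, Kp, tau1, tau2) - T_n

fit = least_squares(resid, x0=[1.0, 2.0, 0.5], bounds=(1e-3, np.inf))
print(fit.x)    # estimates of Kp, tau1, tau2 minimizing the sum of squares
```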

Reduced major axis regression is a more appropriate form of regression analysis for geochemistry than the more popular ordinary least squares regression. The method (Kermack and Haldane, 1950) is based upon minimizing the areas of the triangles formed between the points and... [Pg.29]
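For reference, the reduced major axis (geometric mean) slope and intercept can be computed directly from the data moments; the data below are invented.

```python
# Minimal sketch of reduced major axis (geometric mean) regression,
# which treats scatter in x and y symmetrically (illustrative data).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 2.3, 2.9, 4.2, 5.1])

r = np.corrcoef(x, y)[0, 1]
slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)   # RMA slope
intercept = y.mean() - slope * x.mean()
print(slope, intercept)
```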

However, regression theory requires that the errors be normally distributed around the rate (−r_a) itself, and not around f as in the linearized version just described. Hence, use the values just determined as initial estimates to obtain more accurate values of the constants by minimizing the sum of squares of the residuals of the rates directly, from the raw rate equation, by nonlinear least-squares analysis. [Pg.178]
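A sketch of that two-step workflow for a hypothetical power-law rate equation r = k*C**n; the rate law and the data are illustrative, since the excerpt does not specify them.

```python
# Sketch of the two-step workflow above: (1) linearize a rate law r = k*C**n by
# taking logs to get initial estimates, (2) refine k and n by nonlinear least
# squares on the raw rates (synthetic data; names are illustrative).
import numpy as np
from scipy.optimize import curve_fit

C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
r = np.array([0.071, 0.102, 0.139, 0.198, 0.283])      # measured rates

# Step 1: linearized fit, ln r = ln k + n ln C  (errors normal around ln r)
n0, lnk0 = np.polyfit(np.log(C), np.log(r), 1)

# Step 2: nonlinear fit with errors normal around r itself, as the theory requires
(k_fit, n_fit), _ = curve_fit(lambda C, k, n: k * C**n, C, r, p0=[np.exp(lnk0), n0])
print(k_fit, n_fit)
```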

A note about outliers is appropriate here. Many data analysis methods, including least-squares regression and backpropagation ANNs, are sensitive to outliers; that is, the methods are not robust. This is because they rely on minimizing a function of squared errors, so the outliers are too influential. Some recent work attempts to make backpropagation methods robust [218-221]. (There are also robust statistical methods.) Nonetheless, we strongly encourage you to study your data and make appropriate transformations. [Pg.103]
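As an illustration of the sensitivity mentioned above, the sketch below compares ordinary least squares with one robust statistical alternative (Huber loss, via scikit-learn); the data and the single outlier are invented.

```python
# Sketch of the point above: an outlier pulls an ordinary least-squares fit,
# while a robust method (Huber loss here, one of many robust options) resists it.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

x = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * x.ravel() + 1.0
y[7] += 25.0                       # single gross outlier

ols = LinearRegression().fit(x, y)
rob = HuberRegressor().fit(x, y)
print(ols.coef_[0], rob.coef_[0])  # robust slope stays much closer to the true value of 2
```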

Regression analysis is one of the main tools for generating mathematical models by fitting a model equation to experimental data. In general, regression analysis is based on the application of the least squares method for the estimation of unknown coefficients in the model equation. This method minimizes the sum of squares of the differences between the experimental values of the dependent variable and those estimated by the model, ŷ. Polynomials of various degrees are often used to describe complex non-linear relationships between the dependent and independent variables, because the model equation is linear with respect to the unknown coefficients, and therefore the procedure for the calculation of the coefficients reduces to the solution of a system of linear simultaneous equations. [Pg.14]
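A minimal example of that point: a cubic polynomial is non-linear in x but linear in its coefficients, so the fit reduces to solving a linear system (synthetic data).

```python
# Sketch of the point above: fitting a cubic by least squares only requires
# solving linear equations for the coefficients (np.polyfit does this internally).
import numpy as np

x = np.linspace(0, 5, 20)
y = 1.0 + 0.5 * x - 0.3 * x**2 + 0.05 * x**3 \
    + np.random.default_rng(2).normal(0, 0.05, x.size)

coeffs = np.polyfit(x, y, 3)       # linear least-squares solution for the coefficients
y_hat = np.polyval(coeffs, x)
print(coeffs)
```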

When the assumption of error-free x-values is not valid, either in method comparisons or, in a conventional calibration analysis, because the standards are unreliable (this problem sometimes arises with solid reference materials), an alternative comparison method is available. This technique is known as the functional relationship by maximum likelihood (FREML) method, and seeks to minimize and estimate both x- and y-direction errors. (The conventional least squares approach can be regarded as a special and simple case of FREML.) FREML involves an iterative numerical calculation, but a macro for Minitab now offers this facility (see Bibliography), and provides standard errors for the slope and intercept of the calculated line. The method is reversible (i.e. in a method comparison it does not matter which method is plotted on the x-axis and which on the y-axis), and can also be used in weighted regression calculations (see Section 5.10). [Pg.130]
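FREML itself is an iterative maximum-likelihood calculation available as a Minitab macro; the sketch below shows the closely related Deming regression, which likewise allows for errors in both x and y once the ratio of the two error variances is known or assumed. The deming helper and the data are illustrative; this is not the FREML algorithm.

```python
# Hedged sketch of an errors-in-both-variables fit (Deming regression),
# related to but simpler than the FREML method described above.
import numpy as np

def deming(x, y, delta=1.0):
    """Slope/intercept allowing errors in both axes (delta = var of y errors / var of x errors)."""
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
             + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

x = np.array([1.0, 2.1, 2.9, 4.2, 5.0])   # method A results (with error)
y = np.array([1.2, 1.9, 3.1, 4.0, 5.2])   # method B results (with error)
print(deming(x, y))
```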

Often a relationship is sought between corrosion performance and some controllable variable, such as exposure time. Random error can produce enough scatter in the data to make visual curve fitting imprecise. What is desired is the best fit to the data. One way frequently used to accomplish a good fit is to use regression analysis to minimize the sum of the squares of the data deviations about the fitted curve. This is the method of least squares. Many physical relationships are not linear functions. For example... [Pg.53]
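A sketch of the least-squares approach described, fitting a power-law form W = k·t^n that is often used for such data; the model form and the numbers are illustrative assumptions, not taken from the original.

```python
# Sketch of the approach above: least-squares fit of corrosion data against
# exposure time with a power-law model (values are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # exposure time
W = np.array([0.9, 1.4, 2.1, 3.0, 4.4])         # corrosion penetration / weight loss

(k_fit, n_fit), _ = curve_fit(lambda t, k, n: k * t**n, t, W, p0=[1.0, 0.5])
print(k_fit, n_fit)    # parameters minimizing the sum of squared deviations
```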


