
Regression ordinary

We could write the regression as yᵢ = (α + μ) + βxᵢ + (εᵢ - μ) = α* + βxᵢ + εᵢ*. Then, we know that E[εᵢ*] = 0, and that it is independent of xᵢ. Therefore, the second form of the model satisfies all of our assumptions for the classical regression. Ordinary least squares will give unbiased estimators of α* and β. As long as μ is not zero, the constant term will differ from α. [Pg.8]
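A minimal numerical check of this argument, as a sketch with invented values (none of it from the source): OLS on data whose disturbances have nonzero mean μ still recovers the slope β, while the fitted constant estimates α + μ rather than α.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta, mu = 1.0, 2.0, 0.5       # assumed "true" intercept, slope, error mean
    x = rng.uniform(0, 10, 10_000)
    eps = rng.normal(mu, 1.0, x.size)     # disturbances with E[eps] = mu != 0
    y = alpha + beta * x + eps

    # Ordinary least squares fit of y = intercept + slope * x
    slope, intercept = np.polyfit(x, y, 1)
    print(f"slope     ~ {slope:.3f}   (true beta = {beta})")
    print(f"intercept ~ {intercept:.3f}   (alpha + mu = {alpha + mu})")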

Another problem is to determine the optimal number of descriptors for the objects (patterns), such as for the structure of a molecule. A widespread observation is that the number of descriptors should be kept below about 20% of the number of objects in the dataset. However, this is correct only in the case of ordinary Multilinear Regression Analysis. Some more advanced methods, such as Projection to Latent Structures (or Partial Least Squares, PLS), use so-called latent variables to achieve both modeling and prediction. [Pg.205]
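A sketch of the latent-variable idea using scikit-learn's PLSRegression on synthetic data (the dimensions and the three-component choice are illustrative assumptions, not from the cited text): PLS can be fitted even when the descriptors far outnumber the objects.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_objects, n_descriptors = 40, 100        # many more descriptors than objects
    X = rng.normal(size=(n_objects, n_descriptors))
    y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=n_objects)

    # A few latent variables summarize the 100 descriptors; an ordinary
    # multilinear regression would be underdetermined here.
    pls = PLSRegression(n_components=3).fit(X, y)
    print("R^2 on training data:", round(pls.score(X, y), 3))
    print("latent variable scores:", pls.transform(X).shape)   # (40, 3)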

The partial differential equations describing the catalyst particle are discretized with central finite difference formulae with respect to the spatial coordinate [50]. Typically, around 10-20 discretization points are enough for the particle. The resulting ordinary differential equations (ODEs) are solved with respect to time together with the ODEs of the bulk phase. Since the system is stiff, the computer code of Hindmarsh [51] is used as the ODE solver. In general, the simulations progressed without numerical problems. The final values of the rate constants, along with their temperature dependencies, can be obtained by nonlinear regression analysis. The differential equations were solved in situ with the backward... [Pg.172]
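A minimal method-of-lines sketch in the spirit of this scheme: central differences in the spatial coordinate of a slab-shaped particle, and a stiff BDF solver in time (scipy's "BDF" method stands in for the Hindmarsh code; the geometry, first-order rate law, and parameter values are pure assumptions).

    import numpy as np
    from scipy.integrate import solve_ivp

    n = 15                      # 10-20 interior discretization points are typical
    h = 1.0 / (n + 1)           # dimensionless grid spacing
    D, k = 1.0, 50.0            # diffusivity and rate constant (stiff when k >> D)

    def rhs(t, c):
        # ghost points: symmetry at the particle center, bulk c = 1 at the surface
        cfull = np.concatenate(([c[0]], c, [1.0]))
        d2c = (cfull[2:] - 2 * cfull[1:-1] + cfull[:-2]) / h**2   # central difference
        return D * d2c - k * c   # diffusion minus first-order reaction

    sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(n), method="BDF")
    print("steady concentration profile:", sol.y[:, -1].round(3))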

One of the earliest interpretations of latent vectors is that of lines of closest fit [9]. Indeed, if the inertia along v₁ is maximal, then the inertia along all directions perpendicular to v₁ must be minimal. This is similar to the regression criterion in orthogonal least squares regression, which minimizes the sum of squared deviations perpendicular to the regression line (Section 8.2.11). In ordinary least squares regression one minimizes the sum of squared deviations from the regression line in the direction of the dependent measurement, which assumes that the independent measurement is without error. Similarly, the plane formed by v₁ and v₂ is a plane of closest fit, in the sense that the sum of squared deviations perpendicular to the plane is minimal. Since latent vectors vᵢ contribute... [Pg.106]
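The two criteria can be contrasted numerically. The sketch below (synthetic data, all values invented) compares the OLS slope with the slope of the first latent vector obtained by SVD, i.e. the line of closest fit:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=200)
    y = 0.8 * x + 0.3 * rng.normal(size=200)
    xc, yc = x - x.mean(), y - y.mean()         # mean-centered data

    # OLS slope: minimizes squared deviations in the direction of y only.
    slope_ols = (xc @ yc) / (xc @ xc)

    # First latent vector: direction of maximal inertia, i.e. the line of
    # closest fit minimizing squared deviations perpendicular to the line.
    _, _, Vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
    slope_pca = Vt[0, 1] / Vt[0, 0]

    print(f"OLS slope: {slope_ols:.3f}, closest-fit slope: {slope_pca:.3f}")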

We have seen that PLS regression (covariance criterion) forms a compromise between ordinary least squares regression (OLS, correlation criterion) and principal components regression (PCR, variance criterion). This has inspired Stone and Brooks [15] to devise a method in such a way that a continuum of models can be generated, embracing OLS, PLS and PCR. To this end the PLS covariance criterion, cov(t, y) = s_t s_y r, is modified into a criterion T = r... (For... [Pg.342]

M. Stone and R.J. Brooks, Continuum regression: cross-validated sequentially constructed prediction embracing ordinary least squares, partial least squares, and principal components regression. J. Roy. Stat. Soc. B 52 (1990) 237-269. [Pg.347]

Ordinary least squares regression of MV upon MX produces a slope of 9.32 and an intercept of 2.36. From these we derive the parameters of the simple Michaelis-Menten reaction (eq. (39.116)) ... [Pg.504]

Non-linear models, such as described by the Michaelis-Menten equation, can sometimes be linearized by a suitable transformation of the variables. In that case they are called intrinsically linear (Section 11.2.1) and are amenable to ordinary linear regression. This way, the use of non-linear regression can be obviated. As we have pointed out, the price for this convenience may have to be paid in the form of a serious violation of the requirement for homoscedasticity, in which case one must resort to non-parametric methods of regression (Section 12.1.5). [Pg.505]
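A sketch of both routes on invented Michaelis-Menten data: the Lineweaver-Burk transform 1/v = (Km/Vmax)(1/x) + 1/Vmax makes the model intrinsically linear, so ordinary linear regression applies, while curve_fit performs the direct non-linear regression (the parameter values and noise level are assumptions for illustration).

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)
    Vmax, Km = 10.0, 2.0                        # illustrative "true" parameters
    x = np.array([0.5, 1, 2, 4, 8, 16], float)
    v = Vmax * x / (Km + x) + rng.normal(0, 0.2, x.size)

    # Intrinsically linear route: OLS on the transformed (1/x, 1/v) variables.
    slope, intercept = np.polyfit(1 / x, 1 / v, 1)
    print("linearized  Vmax, Km:", 1 / intercept, slope / intercept)

    # Direct non-linear regression on the untransformed variables.
    popt, _ = curve_fit(lambda x, Vm, K: Vm * x / (K + x), x, v, p0=[5, 1])
    print("non-linear  Vmax, Km:", popt)

Note that the transform also distorts the error structure: measurement errors that are homoscedastic in v become heteroscedastic in 1/v, which is exactly the violation the text warns about.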

The expression xᵀ(j)P(j-1)x(j) in eq. (41.4) represents the variance of the predictions, ŷ(j), at the value x(j) of the independent variable, given the uncertainty in the regression parameters P(j). This expression is equivalent to eq. (10.9) for ordinary least squares regression. The term r(j) is the variance of the experimental error in the response y(j). How to select the value of r(j) and its influence on the final result are discussed later. The expression between parentheses is a scalar. Therefore, the recursive least squares method does not require the inversion of a matrix. When inspecting eqs. (41.3) and (41.4), we can see that the variance-covariance matrix only depends on the design of the experiments given by x and on the variance of the experimental error given by r, which is in accordance with the ordinary least-squares procedure. [Pg.579]
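The following sketch implements a standard recursive least-squares update consistent with this description (the equation numbers refer to the source text; the straight-line data and the value of r are invented):

    import numpy as np

    def rls_step(b, P, x, y, r):
        """One recursive update for a new observation (x, y)."""
        denom = x @ P @ x + r            # scalar: prediction variance + error variance
        k = P @ x / denom                # gain vector; no matrix inversion needed
        b = b + k * (y - x @ b)          # correct parameters by gain times innovation
        P = P - np.outer(k, x @ P)       # update variance-covariance matrix
        return b, P

    b, P = np.zeros(2), 1e6 * np.eye(2)  # vague prior on intercept and slope
    for xi, yi in [(0.0, 0.1), (1.0, 1.1), (2.0, 2.0), (3.0, 3.1)]:
        b, P = rls_step(b, P, np.array([1.0, xi]), yi, r=0.01)
    print("intercept, slope:", b.round(3))

As the text notes, the update involves only a scalar division, and P depends only on the sequence of x-vectors and on r, not on the responses.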

The sequence of the innovation, gain vector, variance-covariance matrix and estimated parameters of the calibration lines is shown in Figs. 41.1-41.4. We can clearly see that after four measurements the innovation has stabilized at the measurement error, which is 0.005 absorbance units. The gain vector decreases monotonically and the estimates of the two parameters stabilize after four measurements. It should be remarked that the design of the measurements fully defines the variance-covariance matrix and the gain vector in eqs. (41.3) and (41.4), as is the case in ordinary regression. Thus, once the design of the experiments is chosen... [Pg.580]

The species are inseparable by ordinary analytical measures. Further work is being done to more clearly understand the role of regressive reactions in low severity liquefaction. In addition, recent work has resulted in techniques for obtaining high SRC recoveries from less desirable feedstocks. [Pg.210]

The linearity of a method is defined as its ability to provide measurement results that are directly proportional to the concentration of the analyte, either directly or after some type of mathematical transformation. Linearity is usually documented as the ordinary least squares (OLS) curve, or simply the linear regression curve, of the measured instrumental responses (either peak area or height) as a function of increasing analyte concentration [22, 23]. The use of peak areas is preferred over peak heights for constructing the calibration curve [24]. ... [Pg.249]
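A minimal calibration-curve sketch with invented peak areas, fitting instrumental response against analyte concentration by OLS as described:

    import numpy as np

    conc = np.array([1, 2, 5, 10, 20, 50], float)             # analyte concentration
    area = np.array([10.2, 19.8, 50.5, 99.0, 201.3, 498.0])   # measured peak areas

    slope, intercept = np.polyfit(conc, area, 1)
    pred = slope * conc + intercept
    r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)
    print(f"area = {slope:.2f} * conc + {intercept:.2f}, R^2 = {r2:.5f}")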

Now, what is interesting about this situation is that ordinary regression theory and the theory of PCA and PLS specify that the model generated must be linear in the coefficients. Nothing is specified about the nature of the data (except that it be noise-free, as our simulated data is); the data may be non-linear to any degree. Ordinarily this is not a problem, because any data transform may be used to linearize the data, if that is desirable. [Pg.132]
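A short worked example of "linear in the coefficients" (all values invented): a model that is quadratic in x is still fitted by ordinary regression, because the transformed regressors 1, x, x² enter linearly, and the data here are noise-free by construction, as in the simulated-data situation described above.

    import numpy as np

    x = np.linspace(0, 4, 20)
    y = 1.0 + 0.5 * x - 0.25 * x**2        # exactly quadratic, no noise

    X = np.column_stack([np.ones_like(x), x, x**2])   # transformed regressors
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("recovered coefficients:", b.round(3))       # [1.0, 0.5, -0.25]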

In recent years Pagani and coworkers have made detailed studies of the problem. In the space available we can only outline their work, and interested readers should consult the very detailed papers. The authors have developed special scales of substituent constants for dealing with contiguous functionalities [193]. These new substituent constants are σ (which seems to be related fairly closely to the ordinary σ), σ_IB (which bears some relationship to σ_I, but not that close), and σ_R⁻, a special delocalization parameter. It is claimed [194] that these scales are appropriate for describing interactions between contiguous functionalities, as opposed to literature values which account for remote interactions. Various C—H acidities in gas phase and in solution were successfully correlated by means of multiple regressions on σ_m and chemical shifts for the central carbon in the carbanions. [Pg.509]

Regression can be performed directly with the values of the variables (ordinary least-squares regression, OLS), but in the most powerful methods, such as principal component regression (PCR) and partial least-squares regression (PLS), it is done via a small set of intermediate linear latent variables (the components). This approach has important advantages ... [Pg.118]
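A sketch of this latent-variable route on synthetic data; principal component regression is shown because its construction is the simplest to write down (PLS would replace PCA by a criterion that also involves y). The dimensions and data are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(4)
    X = rng.normal(size=(50, 30))            # 30 measured variables
    y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=50)

    # PCR: regress y on a small set of latent variables instead of all 30 columns.
    pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(X, y)
    print("R^2 with 5 components:", round(pcr.score(X, y), 3))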

For only one x-variable and one y-variable—simple x/y-regression—the basic equations are summarized in Section 4.3.1. Ordinary least-squares (OLS) regression is the classical method, but also a number of robust... [Pg.119]

Ordinary least squares regression requires constant variance across the range of data. This has typically not been satisfied with chromatographic data (4, 9, 10). Some have adjusted data to constant variance by a weighted least squares method ( ). The other general adjustment method has been transformation of the data; the log-log transformation is commonly used (9, 10). One author compares the robustness of nonweighted, weighted linear, and maximum likelihood estimation methods ( ). Another has... [Pg.134]

The transformed response values were regressed on the transformed amount values using the simple linear regression model and ordinary least squares estimation. The standard deviation of the response values about the regression line was calculated, and plots were made of the transformed response values, and of the residuals, versus the transformed amounts. [Pg.136]
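A sketch of that procedure with invented chromatographic data: log-log transform, simple linear OLS, and the standard deviation of the transformed responses about the regression line.

    import numpy as np

    amount = np.array([1, 2, 5, 10, 20, 50, 100], float)
    response = np.array([2.1, 4.3, 10.2, 21.5, 40.8, 103.0, 199.0])

    lx, ly = np.log10(amount), np.log10(response)     # variance-stabilizing transform
    slope, intercept = np.polyfit(lx, ly, 1)
    resid = ly - (slope * lx + intercept)
    sd = np.sqrt(np.sum(resid**2) / (lx.size - 2))    # SD about the regression line
    print(f"slope={slope:.3f}, intercept={intercept:.3f}, SD={sd:.4f}")
    # resid plotted versus lx would then be inspected for remaining structure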

To model the relationship between PLA and PLR, we used ordinary least squares (OLS) multiple regression to explore the relationship between the dependent variables (Mean PLR or Mean PLA) and the independent variables (Berry and Feldman, 1985). OLS regression was used because the data satisfied the OLS assumptions for the model as the best linear unbiased estimator (BLUE): the distribution of errors (residuals) is normal, and the errors are uncorrelated with each other and homoscedastic (constant variance among residuals), with a mean of 0. We also analyzed predicted values plotted against residuals, as they are a better indicator of non-normality in aggregated data, and found them also to be homoscedastic and independent of one another. [Pg.152]
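A minimal sketch of these residual checks on invented data (not the study's): fit OLS, then examine the residuals against the predicted values for zero mean and constant variance.

    import numpy as np

    rng = np.random.default_rng(5)
    X = np.column_stack([np.ones(60), rng.normal(size=(60, 2))])  # two predictors
    y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(0, 0.2, 60)

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ beta
    resid = y - pred
    print("mean of residuals (should be ~0):", resid.mean().round(4))
    # a plot of pred versus resid should show a structureless, even band if
    # the homoscedasticity and independence assumptions hold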

