Big Chemical Encyclopedia


Transformed regression

On the other hand, the non-transformed regression gave close control at the highest end of the regression, which is expected, since the large values dominate the direction of the fit... [Pg.161]

In accordance with the transformation performed, the regression coefficient b0 is determined from the expression... [Pg.351]

Solutions to the system of equations are X1s = +1.37, X2s = +1.53 and X3s = -0.55. By substituting these values into the initial regression we get Ys = +73.0. The transformed regression equation looks like this... [Pg.440]

Step 3: Evaluate the transformed regression equation by using the Durbin-Watson test to determine whether it is still significantly serially correlated. If the test shows no serial correlation, the procedure stops. If not, the residuals from the fitted equation are used to repeat the entire process, the new regression that results is tested again with the Durbin-Watson test, and so on. [Pg.127]
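The following Python sketch illustrates this test-and-refit loop in the spirit of a Cochrane-Orcutt procedure. It is not taken from the cited text; the function name, the rho-difference transformation, and the acceptance band used for the Durbin-Watson statistic are illustrative assumptions, not tabulated critical values.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def iterative_refit(y, x, dw_low=1.5, dw_high=2.5, max_iter=10):
    """Refit a simple regression on rho-differenced data until the
    Durbin-Watson statistic no longer signals serial correlation.
    Illustrative sketch only; dw_low/dw_high are placeholder bounds."""
    y_t, x_t = np.asarray(y, float).copy(), np.asarray(x, float).copy()
    for _ in range(max_iter):
        model = sm.OLS(y_t, sm.add_constant(x_t)).fit()
        dw = durbin_watson(model.resid)
        if dw_low < dw < dw_high:        # no evidence of serial correlation: stop
            return model, dw
        # estimate rho from the lag-1 autocorrelation of the residuals
        r = model.resid
        rho = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)
        # apply the rho-difference (quasi-difference) transformation and refit
        y_t = y_t[1:] - rho * y_t[:-1]
        x_t = x_t[1:] - rho * x_t[:-1]
    return model, dw
```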

Table 3.26B provides the data manipulation for determining y′ and x′, which are, in turn, used to perform a regression analysis. Table 3.27 provides the transformed regression analysis. The new Durbin-Watson value is 2.29 > dU = 1.45, which is not significant for serial correlation at α = 0.05; the transformation was therefore successful. We can now transform the data back to the original scale... [Pg.144]

It may look odd to treat the Singular Value Decomposition (SVD) technique as a tool for data transformation, simply because SVD is, in essence, the same as PCA. However, if we recall how PCR (Principal Component Regression) works, then we are indeed justified in handling SVD in this way. What we do in PCR is, first of all, to transform the initial data matrix X in the way described by Eqs. (10) and (11). [Pg.217]
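As a minimal sketch of the point being made, the snippet below builds a principal component regression directly on the SVD of the centered data matrix. The function names, the choice of two components, and the centering convention are assumptions for illustration, not details from the cited source.

```python
import numpy as np

def pcr_fit(X, y, n_components=2):
    """PCR via SVD: project centered X onto its leading right singular
    vectors (loadings), then regress y on the resulting scores."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # loadings (p x k)
    T = Xc @ V                              # scores   (n x k)
    b = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]
    coef = V @ b                            # back-transform to original variables
    return coef, x_mean, y_mean

def pcr_predict(Xnew, coef, x_mean, y_mean):
    return (Xnew - x_mean) @ coef + y_mean
```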

On the other hand, techniques like Principal Component Analysis (PCA) or Partial Least Squares Regression (PLS) (see Section 9.4.6) are used to transform the descriptor set into smaller sets with higher information density. The disadvantage of such methods is that the transformed descriptors may not be directly related to single physical effects or structural features, and the derived models are thus less interpretable. [Pg.490]
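A brief illustrative sketch (assumed toy data and component counts, using scikit-learn) of how PLS and PCA compress a wide descriptor block into a few latent variables:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))              # 50 compounds, 200 descriptors (toy data)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=50)

# PLS: latent variables chosen to maximize covariance with the response y
t_pls = PLSRegression(n_components=3).fit(X, y).transform(X)   # 50 x 3

# PCA: variance-driven compression; the response y is not used
t_pca = PCA(n_components=3).fit_transform(X)                   # 50 x 3
```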

The following paper discusses some of the problems that may be encountered when using linear regression to model data that have been mathematically transformed into a linear form. [Pg.134]

Numeric-to-numeric transformations are used as empirical mathematical models where the adaptive characteristics of neural networks learn to map between numeric sets of input-output data. In these modeling applications, neural networks are used as an alternative to traditional regression schemes based on plant data. Backpropagation networks have been widely used for this purpose. [Pg.509]
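As a hedged illustration of such a numeric-to-numeric mapping, the sketch below uses scikit-learn's MLPRegressor (a backpropagation-trained multilayer perceptron) on toy data; the network size and the data are assumptions, not from the cited application.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))          # toy "plant" input records
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]        # toy nonlinear response

# Small backpropagation network used as an empirical input-output model
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.score(X, y))                         # R^2 of the fitted mapping
```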

Figure 4.49 Linear regression for the 2-parameter Weibull transformed load frequency data.
Figure 4.60 Linear regression for the Normal distribution transformed hardness data.
Thus, Y is a linear function of the new independent variables X1, X2, .... Linear regression analysis is used to fit linear models to experimental data. The case of three independent variables will be used for illustrative purposes, although there can be any number of independent variables provided the model remains linear. The dependent variable Y can be directly measured, or it can be a mathematical transformation of a directly measured variable. If transformed variables are used, the fitting procedure minimizes the sum-of-squares for the differences... [Pg.255]

The various independent variables can be the actual experimental variables or transformations of them. Different transformations can be used for different variables. The independent variables need not be truly independent. For example, linear regression analysis can be used to fit a cubic equation by setting X, X², and X³ as the independent variables. [Pg.256]
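For example, a minimal sketch (toy data, illustrative coefficients) of fitting a cubic by ordinary linear least squares after setting the independent variables to X, X², and X³:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * x**3 + rng.normal(scale=0.2, size=x.size)

# Treat x, x^2 and x^3 as three "independent" variables X1, X2, X3;
# the model stays linear in the coefficients, so ordinary least squares applies.
A = np.column_stack([np.ones_like(x), x, x**2, x**3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # b0, b1, b2, b3
```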

However, it can easily be shown that the regression line of a given set of points does not remain the regression line after the transformation, and also that the correlation coefficient is altered. Let us denote, in the original coordinates log k2 versus log k1, r the correlation coefficient, and b2.1 and 1/b1.2 the slopes of... [Pg.434]

One can further compute the slopes b2.1 and 1/b1.2 of the real regression lines, drawn in the log k2 versus log k1 plane and transformed into the E versus log A plane... [Pg.435]
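The non-invariance argued here can be demonstrated numerically. The sketch below uses an arbitrary invertible linear map as a stand-in for the change from the log k2 versus log k1 plane to the E versus log A plane; the data and the transformation matrix are illustrative assumptions.

```python
import numpy as np

def slope_and_r(u, v):
    """Least-squares slope of v on u and the correlation coefficient."""
    slope = np.cov(u, v, bias=True)[0, 1] / np.var(u)
    r = np.corrcoef(u, v)[0, 1]
    return slope, r

rng = np.random.default_rng(3)
logk1 = rng.normal(size=100)
logk2 = 0.8 * logk1 + rng.normal(scale=0.3, size=100)

# Arbitrary invertible linear map (u, v) = M @ (logk1, logk2),
# standing in for the change to (log A, E)-type coordinates.
M = np.array([[1.0, 0.5],
              [-2.0, 2.0]])
u, v = M @ np.vstack([logk1, logk2])

print(slope_and_r(logk1, logk2))   # slope and r in the original plane
print(slope_and_r(u, v))           # both change after the transformation
```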

However, it is not proper to apply regression analysis in the coordinates ΔH versus ΔS or ΔS versus ΔG, nor to draw lines in these coordinates. The reasons are the same as in Sec. IV.B, and the problem can likewise be treated as a coordinate transformation. Let us denote by rGH the correlation coefficient in the original (statistically correct) coordinates ΔH versus ΔG, in which sG and sH are the standard deviations of the two variables from their averages. After transformation to the coordinates TΔS versus ΔG or ΔH versus TΔS, the new correlation coefficients rGS and rSH, respectively, are given by the following equations. (The constant T is without effect on the correlation coefficient.)... [Pg.453]

It can further be shown how the slopes of the regression lines are changed during the transformation. Let sH rGH / sG be the slope of the real... [Pg.455]

Both aspects are combined in Fig. 2.20 and Table 2.16, where the linear coordinates are x resp. y, and the logarithmic ones u resp. v. The regression coefficients established for the lin/lin plot are a, b, whereas those for the transformed coordinates are p, q. [Pg.130]

LEGEND: NN, linear regression without data transformation; LL, idem, using logarithmically transformed axes; ... not interpretable [Pg.259]

To benchmark our learning methodology against alternative conventional approaches, we used the same 500 (x, y) data records and followed the usual regression analysis steps (including stepwise variable selection, examination of residuals, and variable transformations) to find an approximate empirical model, f(x), with a coefficient of determination of 0.79. This model is given by... [Pg.127]
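A hedged sketch of that conventional workflow (fit, inspect residuals, transform, refit) on assumed toy data, using statsmodels; it does not reproduce the 500-record data set or the reported model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.uniform(0.5, 5.0, size=(500, 2))                 # toy stand-in for the records
y = np.exp(0.4 * x[:, 0] - 0.2 * x[:, 1] + rng.normal(scale=0.1, size=500))

# Step 1: fit a first candidate model and examine the residuals.
m1 = sm.OLS(y, sm.add_constant(x)).fit()

# Step 2: a curved, funnel-shaped residual pattern suggests a log transformation of y.
m2 = sm.OLS(np.log(y), sm.add_constant(x)).fit()

print(m1.rsquared, m2.rsquared)   # compare coefficients of determination
```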

Because of peak overlap in the first- and second-derivative spectra, conventional spectrophotometry cannot be applied satisfactorily for quantitative analysis, and the interpretation cannot be resolved by the zero-crossing technique. A chemometric approach improves precision and predictability: for example, by the application of classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), and iterative target transformation factor analysis (ITTFA), appropriate interpretations were found from the direct and first- and second-derivative absorption spectra. When five colorant combinations of sixteen mixtures of colorants from commercial food products were evaluated, the results were compared by the application of the different chemometric approaches. The ITTFA analysis offered better precision than CLS, PCR, and PLS, and calibrations based on first-derivative data provided some advantages for all four methods... [Pg.541]

The plot of pH against titrant volume added is called a potentiometric titration curve. The latter curve is usually transformed into a Bjerrum plot [8, 24, 27], for better visual indication of overlapping pKa values or for pKa values below 3 or above 10. The actual values of pKa are determined by weighted nonlinear regression analysis [25-27]. [Pg.60]
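As an illustrative sketch only, the snippet below fits a monoprotic Bjerrum function to assumed titration data by weighted nonlinear regression with scipy's curve_fit; the model, weights, and data are assumptions, not the procedure of references [25-27].

```python
import numpy as np
from scipy.optimize import curve_fit

def nbar(pH, pKa):
    """Bjerrum function for a monoprotic acid: mean number of bound protons."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# Toy (pH, nbar) points with scatter; real values would come from the
# transformed potentiometric titration curve.
pH = np.linspace(3, 9, 25)
rng = np.random.default_rng(5)
n_obs = nbar(pH, 6.2) + rng.normal(scale=0.02, size=pH.size)
sigmas = np.full(pH.size, 0.02)            # per-point standard deviations (weights)

pKa_fit, cov = curve_fit(nbar, pH, n_obs, p0=[7.0], sigma=sigmas, absolute_sigma=True)
print(pKa_fit[0], np.sqrt(cov[0, 0]))      # pKa estimate and its standard error
```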

It must be emphasized that Procrustes analysis is not a regression technique. It involves only the allowed operations of translation, rotation and reflection, which preserve distances between objects. Regression allows any linear transformation; there is no normality or orthogonality restriction on the columns of the matrix B transforming X. Because such restrictions are relaxed in a regression setting, Y = XB will fit Y more closely than the Procrustes match Y = XR (see Section 35.3). [Pg.314]
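The contrast can be made concrete with a short sketch (toy data; the translation/centering step is omitted for brevity): an orthogonal Procrustes rotation R is compared with an unrestricted least-squares matrix B, and the regression residual can never be larger.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(6)
X = rng.normal(size=(30, 4))
Y = X @ rng.normal(size=(4, 4)) + rng.normal(scale=0.1, size=(30, 4))

# Procrustes: the transformation is restricted to a rotation/reflection matrix R.
R, _ = orthogonal_procrustes(X, Y)
resid_procrustes = np.linalg.norm(Y - X @ R)

# Regression: B may be any linear transformation, so the fit is at least as close.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
resid_regression = np.linalg.norm(Y - X @ B)

print(resid_procrustes, resid_regression)   # regression residual <= Procrustes residual
```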

