Big Chemical Encyclopedia


Regression analysis, 5.24

Regression equations do not indicate the accuracy and spread of the data. Consequently, they are normally accompanied by additional data, which as a minimum requirement should include the number of observations used (n), the standard deviation of the observations (s) and the correlation coefficient (r). [Pg.250]

The value of the correlation coefficient is a measure of how closely the data matches the equation. It varies from zero to one. A value of r = 1 indicates a perfect match. In medicinal chemistry r values greater than 0.9 are usually regarded as representing an acceptable degree of accuracy, provided they are obtained using a reasonable number of results with a suitable standard deviation. [Pg.250]
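As an illustrative sketch (the data and function name below are invented, not from the text), the three minimum statistics n, s, and r that should accompany a regression equation can be computed for a fitted straight line as follows:

```python
import math

def regression_report(x, y):
    """Return n, s, and r for a least-squares straight-line fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    # standard deviation of the observations about the line (n - 2 dof)
    s = math.sqrt(sum(e ** 2 for e in residuals) / (n - 2))
    r = sxy / math.sqrt(sxx * syy)  # correlation coefficient
    return n, s, r

# invented data, nearly linear
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
n, s, r = regression_report(x, y)
```

Here r is close to 1, so by the rule of thumb above the fit would be considered acceptable, provided n and s are also reasonable.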

Enzymes are large protein molecules (apoenzymes), which act as catalysts for almost all the chemical reactions that occur in living organisms. The structures of a number of enzymes contain groups of metal ions, known as metal clusters, coordinated to the peptide chain. These enzymes are often referred to as metalloenzymes. Many enzymes require the presence of organic compounds (co-enzymes) and/or metal ions and inorganic compounds (co-factors) in order to function. These composite active enzyme systems are known as holoenzymes. [Pg.252]

Enzymes are found embedded in cell walls and membranes as well as occurring in the various fluids found in living organisms. A number of enzymes are produced in specific areas of the body by the metabolism of inactive protein precursors known as proenzymes or zymogens. [Pg.252]

This allows the body to produce the active form of the enzyme only in the region of the body where it is required. [Pg.252]

Regression is a highly useful statistical technique for developing a quantitative relationship between a dependent variable (response) and one or more independent variables (factors). It utilizes experimental data on the pertinent variables to develop a numerical relationship showing the influence of the independent variables on a dependent variable of the system. [Pg.120]

Throughout engineering, regression may be applied to correlating data in a wide variety of problems ranging from the simple correlation of physical properties to the analysis of a complex industrial system. For example, in a catalytic reactor involving [Pg.120]

In this case experimental data will be used for determining constants b and n by applying regression analysis. [Pg.121]
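As a hedged sketch of that procedure (the data below are synthetic, generated from b = 2.0 and n = 1.5, since the original experimental data are not given): a power-law correlation y = b·xⁿ is linearized by taking logarithms, and b and n are then found by simple linear regression.

```python
import math

# synthetic data generated from y = 2.0 * x**1.5
x = [1.0, 2.0, 4.0, 8.0]
y = [2.0 * xi ** 1.5 for xi in x]

# log-transform: ln(y) = ln(b) + n * ln(x), a straight line
lx = [math.log(xi) for xi in x]
ly = [math.log(yi) for yi in y]
m = len(lx)
mx, my = sum(lx) / m, sum(ly) / m
n_exp = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
b_const = math.exp(my - n_exp * mx)
```

With noise-free data the regression recovers b and n exactly; with real data, the same procedure gives least-squares estimates.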

Regression analysis has been the favoured statistical method for QSAR analyses because it is both simple and illustrative. Regression analysis is an exact mathematical procedure (Draper and Smith, 1981; Weisberg, 1985) to correlate independent X variables (the descriptors X1, X2, ...) with dependent Y variables (the activity measures Y) in the form of [Pg.66]

To comply with good statistical practice, several criteria have to be met for a significant regression model based on either the physico-chemical descriptors (Hansch approach) or the indicator variables (Free-Wilson method): [Pg.67]

The minimum number of data points per independent X variable is 4-5; the desirable range is 10-15. [Pg.67]

The significance of a derived QSAR model is indicated by the following statistical terms: [Pg.67]

The correlation coefficient r is a relative measure of the goodness of fit of the data points to the model, and takes the value of 1.0 for a perfect relationship. [Pg.67]

Regression analysis is commonly applied when comparing the results of analytical methods. Typically an experiment is carried out in which a series of paired values is collected when comparing a new method with an established method. This series of paired observations (x1i, x2i) is then used to establish the nature and strength of the relationship between the two methods. [Pg.378]

As outlined previously, we distinguish between the measured value (xi) and the target value (Xtarget) of a sample subjected to measurement. [Pg.378]

With Positive Intercept (a0 = 20) and Slope Below Unity (b = 0.80), N = 50 (x1, x2) Measurements. [Pg.378]

This model is generally useful when the systematic difference between Xtarget and the measured value depends on the measured concentration. [Pg.379]

The systematic difference is thus composed of a fixed part and a proportional part. [Pg.379]
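A minimal sketch of this decomposition (the data are synthetic, constructed with the intercept a0 = 20 and slope b = 0.80 quoted above; all names are illustrative): regressing method 2 on method 1 recovers the fixed part (the intercept) and the proportional part ((b − 1)·x) of the systematic difference.

```python
# synthetic method comparison: method 2 differs from method 1 by a fixed
# offset of 20 plus a proportional effect with slope 0.80 (no random error)
x1 = [50.0, 100.0, 150.0, 200.0, 250.0]
x2 = [20.0 + 0.80 * v for v in x1]

n = len(x1)
mx, my = sum(x1) / n, sum(x2) / n
b = sum((u - mx) * (v - my) for u, v in zip(x1, x2)) / sum((u - mx) ** 2 for u in x1)
a0 = my - b * mx

# systematic difference at a measured value x:
# fixed part a0 plus proportional part (b - 1) * x
diff_at_100 = a0 + (b - 1.0) * 100.0
```

At x = 100 the fixed (+20) and proportional (−20) parts happen to cancel, which illustrates why the difference must be evaluated across the whole measuring range, not at one point.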

Regression analysis is an exact mathematical procedure, despite the fact that it derives correlations from data containing experimental error (Table 18). [Pg.91]

Eqs. 113 and 114 describe a regression model containing two X variables, which is the simplest case of a linear multiple regression analysis. [Pg.91]

Most often these equations are written in matrix form: [Pg.92]

Inversion of the symmetrical matrix as shown below, or via triangularisation [572], gives the values cjj, from which the coefficients a, b, and c of the regression model (eq. 114) are calculated by eqs. 121-123. [Pg.93]

The correlation coefficient r (eq. 124) is a relative measure of the quality of fit of the model because its value depends on the overall variance of the dependent variable (this is illustrated by eqs. 58-60, chapter 3.8: while the correlation coefficients r of the two subsets are relatively small, the correlation coefficient derived from the combined set is much larger, due to the increase in the overall variance). The squared correlation coefficient r² is a measure of the explained variance, most often presented as a percentage value. The overall (total) variance is defined by eq. 125, and the unexplained variance (SSQ = sum of squared error residuals, i.e. the variance not explained by the model) by eq. 126. [Pg.93]
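As a small sketch with invented data, the relationship between r², the total variance, and the unexplained (residual) variance can be written out directly:

```python
# invented, nearly linear data
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 2.1, 2.9, 4.2, 4.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
intercept = my - slope * mx

# total variance: squared deviations of y about its mean
ssq_total = sum((v - my) ** 2 for v in y)
# unexplained variance: squared residuals about the fitted line
ssq_resid = sum((v - (intercept + slope * u)) ** 2 for u, v in zip(x, y))
# squared correlation coefficient = fraction of variance explained
r_squared = 1.0 - ssq_resid / ssq_total
```

Multiplying r_squared by 100 gives the explained variance as a percentage, the form in which it is most often reported.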

Correlation analysis only asks whether there is a relationship between two sets of data. Regression goes a step further and asks how they are related. More specifically, it derives a mathematical equation that will allow us to predict one of the parameters if we know the value of the other. [Pg.178]

Regression analysis produces an equation by which the value of the dependent variable can be predicted from the independent variable. [Pg.178]

The box above emphasizes the fact that the equation operates in a specific direction. In order to undertake regression analysis, we have to decide which is the dependent and which the independent variable. [Pg.179]

2 An example of regression - fungal toxin contamination and rainfall [Pg.179]

3 Identifying the line of best fit for the data - the least squares fit [Pg.179]

Correlation analysis allows us to eliminate false dependent variables from the statistical model. When we obtain r = 1 for a process with two dependent variables (y1, y2), there is a linear dependence between these variables. In this case, the two variables do not show the independence required of the output process variables, so y1 or y2 can be eliminated from the list of dependent process variables. [Pg.353]

Regression analysis is the statistical computing procedure that begins when the [Pg.353]

The items described above have already been introduced in Fig. 5.3 where the steps of the development of the statistical model of a process are presented. It should be pointed out that throughout the regression analysis, attention is commonly concentrated on the first and second aspects, despite the fact that virgin [Pg.353]

A linear regression occurs when a process has only one input (x) and one output variable (y) and both variables are correlated by a linear relationship  [Pg.354]

This relation is a particularization of the general relation (5.3); indeed, it is a polynomial regression restricted to first order. In accordance with Eq. (5.46), the system of equations (5.9) results in the following system for the identification of β0 and β1: [Pg.354]
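The resulting 2×2 system of normal equations can be sketched as follows (synthetic, noise-free data; the variable names are illustrative). Setting the derivatives of the sum of squares to zero gives n·β0 + Sx·β1 = Sy and Sx·β0 + Sxx·β1 = Sxy, solved here by Cramer's rule:

```python
# synthetic data generated from y = 3 + 2x (beta0 = 3, beta1 = 2)
x = [0.0, 1.0, 2.0, 3.0]
y = [3.0 + 2.0 * v for v in x]

n = len(x)
Sx, Sy = sum(x), sum(y)
Sxx = sum(v * v for v in x)
Sxy = sum(u * v for u, v in zip(x, y))

# normal equations:  n*beta0  + Sx*beta1  = Sy
#                    Sx*beta0 + Sxx*beta1 = Sxy
det = n * Sxx - Sx * Sx
beta1 = (n * Sxy - Sx * Sy) / det
beta0 = (Sy * Sxx - Sx * Sxy) / det
```

With exact data the estimates coincide with the generating parameters; with noisy data the same formulas give the least-squares estimates.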

The most common statistical procedure for deriving correlations involves regression analysis as mentioned earlier. We discuss it here in some detail. Basically, it is a least-squares method for more than one variable and is suitable for small descriptor sets. Other methods for handling large descriptor sets exist, and some of them are mentioned later along with appropriate references providing more detail. The reader is directed to almost any statistical textbook (e.g., Belsley, Kuh, and Welsch) for further elaboration. [Pg.227]

A bulk property, Y, is measured for a set of n compounds, leading to a set of values Yi, 1 ≤ i ≤ n. For each of the n compounds, a set of m molecular descriptors, Xij, 1 ≤ j ≤ m, with the requirement that (m + 1) ≤ n, is obtained by empirical or computational methods. The (m + 1) arises because of the possibility of a nonzero intercept appearing in a relationship. A minimum of (m + 1) measurements is required, one for each parameter. The regression coefficients have greater statistical validity if there are more measurements than coefficients; a common rule of thumb is n ≥ 5m (i.e., at least five [Pg.227]

If the relationship between dependent (Y) and independent (X) variables were perfect, the Y values for any (m + 1) compounds could be used, resulting in a square matrix. If the m parameters, X, are independent (orthogonal, i.e., have no intercorrelation), the matrix may be inverted and the coefficients, aj, calculated. However, the relationships are seldom perfect, so using another set of compounds would lead to another set of coefficients with values different from the previous set. This process could be repeated until all combinations had been tried, giving, ultimately, a range of values (a distribution) for each coefficient. For even a medium-sized data set, this is a daunting task. [Pg.228]

Fortunately, statistical methods exist that may be used to help derive the coefficients, thus minimizing the work. The full data matrix is employed to find the set of coefficient values, ai, using the requirement that the variance, s² (Eq. [15]), is a minimum. [Pg.228]

Here δi is the difference between the observed and calculated values for Yi. [Pg.228]
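A minimal sketch of this procedure (synthetic, noise-free data; the descriptor values and coefficients are invented): the full data matrix, with a column of ones for the intercept, is used to find the coefficients that minimize the sum of squared residuals δi, here via numpy's least-squares solver.

```python
import numpy as np

# synthetic data: Y = 1.0 + 2.0*X1 - 0.5*X2 for 20 "compounds"
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(20, 2))
Y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1]

# (m + 1) columns: one for the intercept plus m descriptors
A = np.column_stack([np.ones(len(Y)), X])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)   # minimizes sum of delta_i**2
```

Because n = 20 exceeds the rule-of-thumb n ≥ 5m for m = 2 descriptors, the coefficients here are well determined rather than resting on a minimal (m + 1)-point solution.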

If the rate law depends on the concentration of more than one component, and it is not possible to use the method of one component being in excess, a linearized least squares method can be used. The purpose of regression analysis is to determine a functional relationship between the dependent variable (e.g., the reaction rate) and the various independent variables (e.g., the concentrations). [Pg.171]

Consider a mole balance on a constant volume batch reactor represented by [Pg.172]

Taking the logarithms of both sides of Equation 3-228 gives [Pg.172]

Another method for determining rate law parameters is to employ a search for those parameter values that minimize the sum of the squared difference of measured reaction rate and the calculated reaction rate. In performing N experiments, the parameter values can be determined (e.g., E, Cg, Cj, and C2) that minimize the quantity  [Pg.173]

K = number of parameters to be determined, r(meas,i) = measured reaction rate for run i, r(calc,i) = calculated reaction rate for run i [Pg.173]
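As an illustrative sketch of such a search (the rate law r = k·Cⁿ, the grid, and the data are assumptions; the measured rates are synthetic, generated from k = 0.05 and n = 2), a simple grid search finds the parameter values that minimize the sum of squared differences between measured and calculated rates:

```python
# synthetic rate data for r = k * C**n with k = 0.05, n = 2
C = [0.5, 1.0, 2.0, 4.0]
r_meas = [0.05 * c ** 2 for c in C]

def ssq(k, n):
    """Sum of squared differences between measured and calculated rates."""
    return sum((rm - k * c ** n) ** 2 for c, rm in zip(C, r_meas))

# brute-force search over k in [0.001, 0.100] and n in [1.0, 3.0]
best_k, best_n = min(
    ((k / 1000.0, n / 10.0) for k in range(1, 101) for n in range(10, 31)),
    key=lambda p: ssq(*p),
)
```

In practice a gradient-based minimizer would replace the grid, but the objective (the sum of squared rate differences) is the same.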


Calibration is one of the most important tasks in quantitative spectrochemical analysis. The subject continues to be extensively examined and discussed in the chemometrics literature as ever more complex chemical systems are studied. The computational procedures discussed in this chapter are concerned with describing quantitative relationships between two or more variables. In particular we are interested in studying how measured dependent or response variables vary as a function of a single so-called independent variable. The class of techniques studied is referred to as regression analysis. [Pg.155]

The principal aim in undertaking regression analysis is to develop a suitable mathematical model for descriptive or predictive purposes. The model can be used to confirm some idea or theory regarding the relationship between variables or it can be used to predict some general, continuous response function from discrete and possibly relatively few measurements. [Pg.155]

Not all relationships can be adequately described using the simple linear model, however, and more complex functions, such as quadratic and higher-order polynomial equations, may be required to fit the experimental data. Finally, more than one variable may be measured. For example, multiwavelength calibration procedures are finding increasing applications in analytical spectrometry and multivariate regression analysis forms the basis for many chemometric methods reported in the literature. [Pg.155]

Determination of the model parameters in Equation (7.7) usually requires numerical minimization of the sum-of-squares, but an analytical solution is possible when the model is a linear function of the independent variables. Take the logarithm of Equation (7.4) to obtain [Pg.255]

Y is a linear function of the new independent variables X1, X2, .... Linear regression analysis is used to fit linear models to experimental data. The case of three independent variables will be used for illustrative purposes, although there can be any number of independent variables provided the model remains linear. The dependent variable Y can be directly measured or it can be a mathematical transformation of a directly measured variable. If transformed variables are used, the fitting procedure minimizes the sum-of-squares for the differences [Pg.255]

The various independent variables can be the actual experimental variables or transformations of them. Different transformations can be used for different variables. The independent variables need not be actually independent. For example, linear regression analysis can be used to fit a cubic equation by setting x, x², and x³ as the independent variables X1, X2, and X3. [Pg.256]
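A short sketch of that cubic example (synthetic, noise-free data; the coefficients are invented): treating x, x², and x³ as three separate "independent" variables turns the cubic fit into an ordinary linear regression.

```python
import numpy as np

# synthetic data generated from y = 1 - 2x + 0.5x^2 + 0.25x^3
x = np.linspace(-2.0, 2.0, 15)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.25 * x**3

# design matrix: intercept, X1 = x, X2 = x^2, X3 = x^3
A = np.column_stack([np.ones_like(x), x, x**2, x**3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The model is still linear in the coefficients, which is all linear regression requires; the nonlinearity lives entirely in the transformed variables.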

We now regard the experimental data as fixed and treat the model parameters as the variables. The goal is to choose C, m, n, and r such that S² achieves its minimum possible value. A necessary condition for S² to be a minimum is that [Pg.256]

Example 7.20 Use linear regression analysis to determine k, m, and n for the data taken at 1 atm total pressure for the ethane iodination reaction in Problem 7.1. [Pg.257]

Thus F is a linear function of the new independent variables X1, X2, ... Linear [Pg.274]

The variables used to model each of the properties in this study were TIs, HBi and three geometry-related parameters, volume (Vw) and the two 3D Wiener numbers [Pg.107]

All-subsets regression was used for the development of the models. The criteria used for defining the best model were R² and Mallows' Cp. For each of the properties examined, initial models used only the TIs and HBi as potential variables. Subsequently, we added the three geometric variables to examine the improvement provided by the addition of geometric information. [Pg.108]

The researcher ordinarily will not know the population values of β0 or β1. They have to be estimated by b0 and b1, computed by the method of least squares. In this design, two types of data are collected: the response or dependent variable (yi) and the independent variable (xi). The xi values are usually preset and not random variables; hence, they are considered to be measured without error (Kutner et al., 2005; Neter et al., 1983). [Pg.29]

In experimental designs, usually the values of x are preselected at specific levels, and the y values corresponding to these are dependent on the x levels set. This provides y or x values, and a controlled regimen or process is implemented. Generally, multiple observations of y at a specific x value are taken to increase the precision of the error term estimate. [Pg.29]

On the other hand, in completely randomized regression design, the designated values of x are selected randomly, not specifically set. Hence, both X and y are random variables. Although this is a useful design, it is not as common as the other two. [Pg.29]

There is no particular method that is ideal for all problems. The choice of an algorithm should be based on the nature of the data, and also whether the final goal is to build a predictive or interpretative model. [Pg.116]

Various statistical methods are nowadays available to build models that describe the empirical relationship between the structure and property of interest. Classical [Pg.116]

Therefore, according to the number of data points available in the data set, simple or multiple linear regression remains a popular choice for QSPR studies of glasses, since it allows an easier interpretation of the phenomena that determine the variation in the observable properties. [Pg.117]

The final model built from the optimal parameters will then undergo validation with a testing set of glasses to ensure that the model is appropriate and useful for prediction and/or interpretation. [Pg.117]


Draper, N. R., Smith, H. "Applied Regression Analysis," John Wiley, New York (1966). [Pg.80]

Draper, N. R., and H. Smith, Applied Regression Analysis, Wiley, New York, 1966. [Pg.241]

Another problem is to determine the optimal number of descriptors for the objects (patterns), such as for the structure of the molecule. A widespread observation is that one has to keep the number of descriptors as low as 20% of the number of objects in the dataset. However, this is correct only in the case of ordinary Multilinear Regression Analysis. Some more advanced methods, such as Projection to Latent Structures (or Partial Least Squares, PLS), use so-called latent variables to achieve both modeling and prediction. [Pg.205]

To gain insight into chemometric methods such as correlation analysis, Multiple Linear Regression Analysis, Principal Component Analysis, Principal Component Regression, and Partial Least Squares regression/Projection to Latent Structures... [Pg.439]

Furthermore, QSPR models for the prediction of free-energy based properties that are based on multilinear regression analysis are often referred to as LFER models, especially in the wide field of quantitative structure-activity relationships (QSAR). [Pg.489]

In order to obtain the fragmental constants, sets of molecules for which experimental log P values are available are cut into the predefined fragments, and the numerical contribution of each fragment is determined by multiple linear regression analysis. [Pg.492]
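A hypothetical sketch of that procedure (the fragment set, counts, and log P values below are all invented for illustration): each molecule is described by its fragment counts, and least squares recovers the fragmental constants from the measured log P values.

```python
import numpy as np

# rows: molecules; columns: counts of three hypothetical fragments
counts = np.array([
    [2, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [2, 1, 1],
], dtype=float)

# assumed "true" fragmental constants used to generate synthetic log P data
true_f = np.array([0.5, -1.1, 1.9])
logP = counts @ true_f

# multiple linear regression: solve counts @ f ≈ logP for f
f, *_ = np.linalg.lstsq(counts, logP, rcond=None)
```

With real data the recovered constants would carry residual error, and the quality of the fit (r, s) would be reported alongside them.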

Step 5: Building a Multiple Linear Regression Analysis (MLRA) Model [Pg.500]

Multiple linear regression analysis is a widely used method, in this case assuming that a linear relationship exists between solubility and the 18 input variables. The multilinear regression analysis was performed by the SPSS program [30]. The training set was used to build a model, and the test set was used for the prediction of solubility. The MLRA model provided, for the training set, a correlation coefficient r = 0.92 and a standard deviation of s = 0.78, and for the test set, r = 0.94 and s = 0.68. [Pg.500]

The models are applicable to large data sets; with a rapid calculation speed, a wide range of compounds can be processed. Neural networks provided better models than multilinear regression analysis. [Pg.504]

Montgomery, D. C. and E. A. Peck, 1992. Introduction to Linear Regression Analysis. New York, John Wiley & Sons. [Pg.735]

Chatterjee, S. and Price, B., 1977. Regression Analysis by Example. Wiley, New York. [Pg.334]

Although equations 5.13 and 5.14 appear formidable, it is only necessary to evaluate four summation terms. In addition, many calculators, spreadsheets, and other computer software packages are capable of performing a linear regression analysis based on this model. To save time and to avoid tedious calculations, learn how to use one of these tools. For illustrative purposes, the necessary calculations are shown in detail in the following example. [Pg.119]

There is an obvious similarity between equation 5.15 and the standard deviation introduced in Chapter 4, except that the sum-of-squares term for sr is determined relative to ŷ instead of ȳ, and the denominator is n - 2 instead of n - 1; n - 2 indicates that the linear regression analysis has only n - 2 degrees of freedom, since two parameters, the slope and the intercept, are used to calculate the values of ŷ. [Pg.121]
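The n − 2 degrees of freedom can be made concrete with a small sketch (invented calibration data): the residuals are taken about the fitted line ŷ, and the divisor is n − 2 because both slope and intercept were estimated from the same data.

```python
import math

# invented calibration data
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.1, 1.1, 1.9, 3.2, 3.9]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sum((u - mx) ** 2 for u in x)
intercept = my - slope * mx
yhat = [intercept + slope * u for u in x]

# standard error about the regression: residuals relative to yhat, n - 2 dof
s_r = math.sqrt(sum((v - yh) ** 2 for v, yh in zip(y, yhat)) / (n - 2))
```

Using n − 1 here would understate the loss of information to the two fitted parameters and bias s_r low.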

A linear regression analysis should not be accepted without evaluating the validity of the model on which the calculations were based. Perhaps the simplest way to evaluate a regression analysis is to calculate and plot the residual error for each value of x. The residual error for a single calibration standard, ri, is given as... [Pg.124]

Standardizations using a single standard are common, but also are subject to greater uncertainty. Whenever possible, a multiple-point standardization is preferred. The results of a multiple-point standardization are graphed as a calibration curve. A linear regression analysis can provide an equation for the standardization. [Pg.130]

Construct an appropriate standard additions calibration curve, and use a linear regression analysis to determine the concentration of analyte in the original sample and its 95% confidence interval. [Pg.133]

Two additional methods for determining the composition of a mixture deserve mention. In multiwavelength linear regression analysis (MLRA) the absorbance of a mixture is compared with that of standard solutions at several wavelengths. If Asx and Asy are the absorbances of standard solutions of components X and Y at any wavelength, then... [Pg.401]
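A sketch of MLRA under the usual Beer's-law assumptions (all absorptivities, concentrations, and names below are invented): at each wavelength the ratio Amix/Asx is plotted against Asy/Asx, and the resulting straight line has intercept Cx/Csx and slope Cy/Csy.

```python
# invented molar absorptivities of X and Y at five wavelengths (unit path)
eps_X = [0.20, 0.50, 0.80, 0.40, 0.10]
eps_Y = [0.90, 0.40, 0.20, 0.70, 0.60]
C_SX, C_SY = 1.0e-4, 1.0e-4        # standard concentrations
C_X, C_Y = 0.3e-4, 0.7e-4          # mixture concentrations (to be recovered)

# synthetic absorbances via Beer's law
Asx = [e * C_SX for e in eps_X]
Asy = [e * C_SY for e in eps_Y]
Amix = [ex * C_X + ey * C_Y for ex, ey in zip(eps_X, eps_Y)]

# regress Amix/Asx (v) against Asy/Asx (u) over the wavelengths
u = [ay / ax for ax, ay in zip(Asx, Asy)]
v = [am / ax for ax, am in zip(Asx, Amix)]
n = len(u)
mu, mv = sum(u) / n, sum(v) / n
slope = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / sum((a - mu) ** 2 for a in u)
intercept = mv - slope * mu

C_X_found = intercept * C_SX       # intercept = C_X / C_SX
C_Y_found = slope * C_SY           # slope = C_Y / C_SY
```

With noise-free synthetic absorbances the mixture concentrations are recovered exactly; with measured spectra the scatter about the line indicates how well the two-component model holds.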

Although this experiment is written as a dry-lab, it can be adapted to the laboratory. Details are given for the determination of the equilibrium constant for the binding of the Lewis base 1-methylimidazole to the Lewis acid cobalt(II)4-trifluoromethyl-o-phenylene-4,6-methoxysalicylideniminate in toluene. The equilibrium constant is found by a linear regression analysis of the absorbance data to a theoretical equilibrium model. [Pg.447]

Blanco and co-workers" reported several examples of the application of multiwavelength linear regression analysis for the simultaneous determination of mixtures containing two components with overlapping spectra. For each of the following, determine the molar concentration of each analyte in the mixture. [Pg.453]

In a curve-fitting method the concentration of a reactant or product is monitored continuously as a function of time, and a regression analysis is used to fit an appropriate differential or integral rate equation to the data. For example, the initial concentration of analyte for a pseudo-first-order reaction, in which the concentration of a product is followed as a function of time, can be determined by fitting a rearranged form of equation 13.12 [Pg.631]

If a standard method is available, the performance of a new method can be evaluated by comparing results with those obtained with an approved standard method. The comparison should be done at a minimum of three concentrations to evaluate the applicability of the new method for different amounts of analyte. Alternatively, we can plot the results obtained by the new method against those obtained by the approved standard method. A linear regression analysis should give a slope of 1 and a y-intercept of 0 if the results of the two methods are equivalent. [Pg.687]
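A minimal sketch of that equivalence check (the paired results below are invented): regressing the new method's results on the standard method's results and inspecting whether the slope is near 1 and the intercept near 0.

```python
# invented paired results: standard method vs. new method
std = [10.0, 20.0, 30.0, 40.0, 50.0]
new = [10.2, 19.8, 30.1, 40.0, 49.9]

n = len(std)
mx, my = sum(std) / n, sum(new) / n
slope = sum((u - mx) * (v - my) for u, v in zip(std, new)) / sum((u - mx) ** 2 for u in std)
intercept = my - slope * mx
# equivalence of methods implies slope close to 1 and intercept close to 0
```

In practice the confidence intervals of the slope and intercept, not just their point estimates, should be checked against 1 and 0.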

The reaction of H2O2 and H2SO4 generates a reddish brown solution whose absorbance is measured at a wavelength of 450 nm. A regression analysis on their data yielded the following uncoded equation for the response (Absorbance X 1000). [Pg.703]

Statistical analysis can range from relatively simple regression analysis to complex input/output and mathematical models. The advent of the computer and its accessibility in most companies has broadened the tools a researcher has to manipulate data. However, the results are only as good as the inputs. Most veteran market researchers accept the statistical tools available to them but use the results to implement their judgment rather than uncritically accepting the machine output. [Pg.535]





An Example of Regression Analysis on Existing Data

Analysis in Multiple Regression

Analysis of Variance for Regression Models

Analysis of regression residuals

Analysis of variance for regression

Analysis of variance regression

Analytical methods Regression analysis

Appendix 7.1 Linear Regression Analysis

Application of Regression Analysis

Arrhenius regression analysis

Arrhenius regression analysis linear

Arrhenius regression analysis nonlinear

Bayesian Regression Analysis

Bivariate data regression analysis

Calibration curve and regression analysis

Calibration regression analysis

Complex Non-Linear Regression Least-Squares (CNRLS) for the Analysis of Impedance Data

Complex Piecewise Regression Analysis

Computer regression analysis

Conditions to be met for Linear Regression Analysis

Curve fitting with nonlinear regression analysis

Curve fitting, nonlinear regression analysis

Deming regression analysis

Deming regression analysis weighted

Discriminant-regression analysis

Ethanol regression analysis

Example, regression analysis

Excel spreadsheet linear regression analysis with

Exposure regression analyses

Foam regression analysis

Initial regression analysis

Input-output analysis, process data regression

Kinetics regression analysis

Latent variable regression analysis

Least squares linear regression analysis

Least-Squares Minimization (Regression Analysis)

Least-squares linear regression analysis of variable temperature

Linear least-squares regression analysis

Linear least-squares regression analysis kinetic data

Linear regression analyses

Linear regression analysis stability constants

Linear regression analysis, calibration

Linear regression analysis, calibration graphs

Linear regression using Analysis Toolpak

Logistic regression analysis

Logistic regression analysis prediction

Measurement Regression analysis

Model regression analysis

Multilinear Regression Analysis

Multilinear regression analysis for the derivation of CLND response factors

Multilinear regression and principal component analysis

Multiple Linear Regression Analysis (MLRA)

Multiple linear regression analysis

Multiple linear regression analysis Subject

Multiple regression analyses

Multiple regression analyses product

Multiple regression analysis data

Multivariate chemometric techniques multiple linear regression analysis

Multivariate regression analysis

Multivariate regression analysis approach

Non-linear regression analysis

Nonlinear least squares regression analysis

Nonlinear least-squares regression analysis kinetic data

Nonparametric regression analysis

Normal equations, regression analysis

Optimization regression analysis

Ordinary least-squares regression analysis

Overview regression analysis (Consequences of lead userness)

Oxidation regression analyses

PCA with multiple linear regression analysis

Partial least squares-discriminant analysis vectors, regression

Partial least-squares regression analysis

Planning experiments regression analysis

Polynomial regression analysis

Principal component regression analysis

Principal component regression chemometrical analysis

Procedure Stepwise regression analysis

Qualitative Regression Analysis

Regression Analysis Framework

Regression Analysis and Parameter Estimation

Regression Analysis in Context of Error Structure

Regression Using the Analysis ToolPak

Regression analyses optimal technique

Regression analyses, computerized

Regression analysis acids

Regression analysis application

Regression analysis between methods

Regression analysis classical least squares

Regression analysis clustered activity data

Regression analysis coefficients

Regression analysis comparison studies

Regression analysis correlation coefficient

Regression analysis diagnostic statistics

Regression analysis equations

Regression analysis error models

Regression analysis highly variable activity data

Regression analysis inappropriate application

Regression analysis inverse least squares

Regression analysis linear least squares method

Regression analysis methods

Regression analysis model refinement

Regression analysis nonlinear

Regression analysis nonlinear least squares method

Regression analysis of the

Regression analysis of the initial model

Regression analysis output

Regression analysis output for

Regression analysis output response

Regression analysis parameters

Regression analysis periods

Regression analysis relationships

Regression analysis ridge

Regression analysis stepwise

Regression analysis structure-activity

Regression analysis to determine

Regression analysis using distribution coefficient

Regression analysis, calibration graphs

Regression analysis, cost data

Regression analysis, initial estimates

Regression analysis, least-squares

Regression analysis, parameters from

Regression analysis, parameters from equations)

Regression analysis, results

Regression analysis, stability constants

Regression analysis, value

Regression batch reactor data analysis

Regression influence analysis

Regression residual analysis

Regression trace analysis

Representing Data by Continuous Functions Regression Analysis

Residuals, in regression analysis

Review of Statistical Terminology Used in Regression Analysis

Simple linear regression analysis

Simulation regression analysis

Sodium sulfite regression analyses

Solvent regression analysis

Standard error, regression analysis

Statistical Formulas Used in Linear Regression (Least Squares) Analyses

Statistical analysis least-square regression

Statistical analysis linear regression

Statistical analysis nonlinear regression

Statistical analysis regression coefficient

Statistical methods multiple regression analysis

Statistics Regression analysis

Transfer factors regression analyses

Unweighted least squares regression analysis

© 2024 chempedia.info