
Predictor variables

We can go one step further, however. Each of the above multiple regression relations is between a single variable (response) of one data set and a linear combination of the variables (predictors) from the other set. Instead, one may consider the multiple-multiple correlation, i.e. the correlation of a linear combination from one set with a linear combination of the other set. Such linear combinations of the original variables are variously called factors, components, latent variables, canonical variables or canonical variates (see also Chapters 9, 17, 29, and 31). [Pg.319]

Start with all the potential predictors. In a statistical package, the data are entered into five columns, and you then indicate that toxin is the dependent variable/response and the four meteorological factors are the independent variables/predictors. Generic output is shown in Table 14.7. [Pg.187]

Data set variables can be distinguished by their role in the models as independent and dependent variables. Independent variables (or explanatory variables, predictor variables) are those variables assumed capable of entering a function used to model the response variable. Dependent variables (or response variables) are variables (often obtained from experimental measurements) for which one seeks a statistical dependence on one or more independent variables. Independent variables constitute the data matrix X, while dependent variables are collected into a matrix Y with n rows and r columns (r = 1 when only one response variable is defined) (Figure D-2). [Pg.98]

Multiple linear regression is a direct extension of simple linear regression. In simple linear regression models, only one predictor variable x is present, but in multiple linear regression there are k predictor variables, $x_1, x_2, \dots, x_k$. For example, a two-variable predictor model is presented in the following equation:

$y = b_0 + b_1 x_1 + b_2 x_2 + e$ [Pg.153]

A four-variable predictor model is presented in the following equation:

$y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4 + e$ [Pg.153]
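As a concrete illustration of the two-variable predictor model above, the following sketch fits $y = b_0 + b_1 x_1 + b_2 x_2$ by ordinary least squares; the data and the coefficient values are synthetic assumptions, not from the source.

```python
import numpy as np

# Hypothetical calibration data: n samples, two predictor variables x1, x2.
rng = np.random.default_rng(0)
n = 50
X = rng.uniform(0.0, 10.0, size=(n, 2))           # predictor matrix (n x 2)
y = 1.5 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0.0, 0.2, n)

# Augment with a column of ones so the intercept b0 is estimated too.
X_aug = np.column_stack([np.ones(n), X])

# Ordinary least squares: solve min ||X_aug b - y||^2 for b = (b0, b1, b2).
b, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
print("estimated coefficients:", b)               # close to (1.5, 0.8, -0.3)
```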

Predictor variables | Dependent variable | Predictors in final model | Significance of model ... [Pg.212]

Regression: PLS is used to correlate spectra (X matrix) with known or measured properties (Y matrix). In PLS regression, there are many x-variables (predictor variables) and one or many y-variables (response variables) [21]. [Pg.756]
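A minimal sketch of such a PLS calibration, assuming scikit-learn's PLSRegression and synthetic "spectra"; the dimensions, component count, and noise level are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 40, 200
concentrations = rng.uniform(0.0, 1.0, size=(n_samples, 2))   # Y: two analytes
pure_spectra = rng.uniform(0.0, 1.0, size=(2, n_wavelengths)) # hypothetical pure-component spectra
X = concentrations @ pure_spectra + rng.normal(0.0, 0.01, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=2)   # many x-variables, two y-variables
pls.fit(X, concentrations)
print(pls.predict(X[:3]))             # predicted concentrations for 3 samples
```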

This section introduces the regression theory that is needed for the establishment of the calibration models in the forthcoming sections and chapters. The multivariate linear models considered in this chapter relate several independent variables ("predictors" or "regressors", x) to one dependent variable (or "predictand", y) in the form of a first-order polynomial:

$y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_p x_p + e$ [Pg.278]

Vectors: A series of scalars can be arranged in a column or in a row; they are then called a column vector or a row vector. If the elements of a column vector can be attributed to particular objects, e.g., to compounds, then data analysis can be carried out. The chemical structures of compounds can be characterized with different numbers called descriptors, variables, predictors, or factors. For example, toxicity data were measured for a series of aromatic phenols. Their toxicity values can be arranged in a column in arbitrary order; each row then corresponds to a phenolic compound. Many descriptors can be calculated for each compound (e.g., molecular mass, van der Waals volume, polarity parameters, quantum chemical descriptors, etc.). After building a multivariate model (generally one variable cannot encode the toxicity properly), we will be able to predict toxicity values for phenolic compounds for which no toxicity has been measured yet. This approach is generally called searching for quantitative structure-activity relationships, or simply the QSAR approach. [Pg.144]
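To make the arrangement concrete, here is a QSAR-style sketch; rows are compounds, columns are descriptors, and every descriptor and toxicity value below is an invented placeholder, not measured data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Rows = phenolic compounds, columns = descriptors
# (e.g., molecular mass, van der Waals volume, a polarity parameter).
X_train = np.array([
    [94.1, 87.8, 1.46],
    [108.1, 102.3, 1.95],
    [128.6, 98.1, 2.39],
    [138.1, 105.4, 1.91],
])
y_train = np.array([3.51, 3.29, 4.27, 3.12])   # measured toxicities (hypothetical)

model = LinearRegression().fit(X_train, y_train)

# Predict toxicity for a phenol with no measured value yet.
x_new = np.array([[122.2, 110.6, 2.10]])
print(model.predict(x_new))
```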

Thus we find that the choice of quaternion variables introduces barriers to efficient symplectic-reversible discretization, typically forcing us to use some off-the-shelf explicit numerical integrator for general systems such as a Runge-Kutta or predictor-corrector method. [Pg.355]

The Smith predictor is a model-based control strategy that involves a more complicated block diagram than that for a conventional feedback controller, although a PID controller is still central to the control strategy (see Fig. 8-37). The key concept is based on better coordination of the timing of manipulated-variable action. The loop configuration takes into account the fact that the current controlled-variable measurement is not a result of the current manipulated-variable action, but of the value taken θ time units earlier. Time-delay compensation can yield excellent performance; however, if the process model parameters change (especially the time delay), the performance of the Smith predictor will deteriorate, and it is not recommended unless other precautions are taken. [Pg.733]
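The timing idea can be sketched in discrete time: the controller is fed the undelayed internal-model output plus the plant/model mismatch (zero when the model is perfect). The first-order process, dead time, and PI tuning below are illustrative assumptions, not the handbook's example.

```python
import numpy as np

a, b, d = 0.9, 0.1, 10          # first-order process y[k+1] = a*y[k] + b*u[k-d]
Kp, Ki = 2.0, 0.1               # PI controller gains (hand-picked for the demo)
n = 200
setpoint = 1.0

y = np.zeros(n)                 # actual (delayed) process output
ym = np.zeros(n)                # internal model output WITHOUT the delay
ymd = np.zeros(n)               # internal model output WITH the delay
u = np.zeros(n)
integral = 0.0

for k in range(n - 1):
    # Smith predictor: feed back the undelayed model output, corrected by
    # the mismatch between plant and delayed model (zero for a perfect model).
    e = setpoint - (ym[k] + (y[k] - ymd[k]))
    integral += e
    u[k] = Kp * e + Ki * integral

    u_delayed = u[k - d] if k >= d else 0.0
    y[k + 1] = a * y[k] + b * u_delayed      # real plant (with dead time)
    ym[k + 1] = a * ym[k] + b * u[k]         # internal model, no delay
    ymd[k + 1] = a * ymd[k] + b * u_delayed  # internal model, with delay

print("final output:", y[-1])    # settles near the setpoint of 1.0
```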

The relationship between a criterion variable and two or more predictor variables is given by a linear multivariate model:

$y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_k x_k + e$ [Pg.106]

The standard deviation gives the accuracy of prediction. If Y is related to one or more predictor variables, the error of prediction is reduced to the standard error of estimate $s_e$ (the standard deviation of the errors), where, for n observations and k predictors,

$s_e = \sqrt{\sum_i (y_i - \hat{y}_i)^2 / (n - k - 1)}$ [Pg.107]

Another classification technique is logistic regression [76], which is based on the assumption that a sigmoidal dependency exists between the probability of group membership and one or more predictor variables. It has been used [72] to model eye irritation data. [Pg.482]
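As an illustration, the sketch below fits a logistic (sigmoidal) model of group-membership probability on three predictor variables; the data are synthetic placeholders, not the eye-irritation data of [72].

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(100, 3))          # three predictor variables
# True class probability follows a sigmoid of a linear combination:
p = 1.0 / (1.0 + np.exp(-(1.0 * X[:, 0] - 2.0 * X[:, 1])))
y = (rng.uniform(size=100) < p).astype(int)      # group membership (0/1)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))   # probability of membership in each group
```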

Temperature is used as the first splitting variable in Figure 1 because numerical calculations show that temperature is a better predictor of life than either relative humidity or ultraviolet radiation at this stage. For both the low- and high-temperature branches of the tree, the numerical calculations show that the second most important predictor is relative humidity. Because no other variables remain, the final splits are necessarily based on ultraviolet radiation. [Pg.74]
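The same recursive-splitting logic can be reproduced with a regression tree; the weathering data below are synthetic placeholders constructed so that temperature dominates, in the spirit of the text's Figure 1.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(3)
n = 300
temperature = rng.uniform(10.0, 60.0, n)
humidity = rng.uniform(20.0, 90.0, n)
uv = rng.uniform(0.0, 5.0, n)
# Hypothetical "life" response dominated by temperature, then humidity, then UV.
life = 100.0 - 1.2 * temperature - 0.4 * humidity - 2.0 * uv + rng.normal(0.0, 2.0, n)

X = np.column_stack([temperature, humidity, uv])
tree = DecisionTreeRegressor(max_depth=3).fit(X, life)
print(export_text(tree, feature_names=["temperature", "humidity", "uv"]))
```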

The set of possible dependent properties and independent predictor variables, i.e. the number of possible applications of predictive modelling, is virtually boundless. A major application is in analytical chemistry, specifically the development and application of quantitative predictive calibration models, e.g. for the simultaneous determination of the concentrations of various analytes in a multi-component mixture where one may choose from a large arsenal of spectroscopic methods (e.g. UV, IR, NIR, XRF, NMR). The emerging field of process analysis,... [Pg.349]

The ultimate goal of multivariate calibration is the indirect determination of a property of interest (y) by measuring predictor variables (X) only. Therefore, an adequate description of the calibration data is not sufficient: the model should be generalizable to future observations. The optimum extent to which this is possible has to be assessed carefully: when the calibration model chosen is too simple (underfitting), systematic errors are introduced; when it is too complex (overfitting), large random errors may result (cf. Section 10.3.4). [Pg.350]
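One common way to assess this under/overfitting trade-off is cross-validation over model complexity; the sketch below scans the number of PLS components for a synthetic data set (all dimensions and values illustrative).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 50, 100
C = rng.uniform(0.0, 1.0, size=(n, 3))                    # 3 underlying factors
X = C @ rng.uniform(size=(3, p)) + rng.normal(0.0, 0.02, (n, p))
y = C @ np.array([1.0, -0.5, 0.3]) + rng.normal(0.0, 0.02, n)

for k in range(1, 8):
    score = cross_val_score(PLSRegression(n_components=k), X, y, cv=5).mean()
    print(f"{k} components: mean CV R^2 = {score:.3f}")   # here peaks near 3
```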

Let us assume that we have collected a set of calibration data (X, Y), where the matrix X (nxp) contains the p > 1 predictor variables (columns) measured for each of n samples (rows). The data matrix Y (nxq) contains the q variables which depend on the X-data. The general model in calibration reads

$Y = f(X) + E$ [Pg.351]
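Assuming the linear special case $Y = XB + E$ of this general model, the least-squares estimate of B can be sketched as follows; the dimensions follow the text, but the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 30, 4, 2
X = rng.normal(size=(n, p))                    # predictor variables
B_true = rng.normal(size=(p, q))               # "true" coefficients (for the demo)
Y = X @ B_true + rng.normal(0.0, 0.05, (n, q)) # dependent variables with noise

# Least-squares estimate B_hat = (X'X)^(-1) X'Y, computed stably via lstsq.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.round(B_hat - B_true, 2))             # residual estimation error
```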

Often, it is not quite feasible to control the calibration variables at will. When the process under study is complex, e.g. a sewage system, it is impossible to produce realistic samples that are representative of the process and at the same time optimally designed for calibration. Often, one may at best collect representative samples from the population of interest and measure both the dependent properties Y and the predictor variables X. In that case, both Y and X are random, and one may just as well model the concentrations X, given the observed Y. This case of natural calibration (also known as random calibration) is compatible with the linear regression model

$X = Y B + E$ [Pg.352]

We will see that CLS and ILS calibration modelling have limited applicability, especially when dealing with complex situations, such as highly correlated predictors (spectra), the presence of chemical or physical interferents (uncontrolled and undesired covariates that affect the measurements), fewer samples than variables, etc. More recently, methods such as principal components regression (PCR, Section 17.8) and partial least squares regression (PLS, Section 35.7) have been... [Pg.352]

This GLS estimator is akin to the inverse variance-weighted regression discussed in Section 8.2.3. Again there is a limitation: V can be inverted only when the number of calibration samples is larger than the number of predictor variables, i.e. spectral wavelengths. Thus, one either has to work with a limited set of selected wavelengths or one must apply other solutions which have been proposed for tackling this problem [5]. [Pg.356]
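A generic numeric sketch of the GLS estimator $b = (X^T V^{-1} X)^{-1} X^T V^{-1} y$, assuming a known (here diagonal) error covariance V rather than one estimated from the calibration samples; all values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 25, 3                                  # n > p so the estimator is defined
X = rng.normal(size=(n, p))
b_true = np.array([1.0, -0.5, 2.0])
V = np.diag(rng.uniform(0.5, 2.0, n))         # error covariance (unequal variances)
y = X @ b_true + rng.multivariate_normal(np.zeros(n), V)

Vinv = np.linalg.inv(V)
b_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print(np.round(b_gls, 2))                     # close to b_true
```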

The application of principal components regression (PCR) to multivariate calibration introduces a new element, viz. data compression through the construction of a small set of new orthogonal components or factors. Henceforth, we will mainly use the term "factor" rather than "component" in order to avoid confusion with the chemical components of a mixture. The factors play an intermediary role as regressors in the calibration process. In PCR the factors are obtained as the principal components (PCs) from a principal component analysis (PCA) of the predictor data, i.e. the calibration spectra S (nxp). In Chapters 17 and 31 we saw that any data matrix can be decomposed ("factored") into a product of (object) score vectors T (nxr) and (variable) loadings P (pxr). The number of columns in T and P is equal to the rank r of the matrix S, usually the smaller of n or p. It is customary and advisable to do this factoring on the data after column-centering. This allows one to write the mean-centered spectra $S_0$ as

$S_0 = T P^T$ [Pg.358]
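Putting the pieces together, a minimal PCR sketch: column-center the spectra, factor $S_0$ into scores T and loadings P via SVD, then regress y on a few leading scores. The data and the number of retained factors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, r_keep = 40, 120, 3
C = rng.uniform(size=(n, 3))
S = C @ rng.uniform(size=(3, p)) + rng.normal(0.0, 0.01, (n, p))  # spectra
y = C @ np.array([2.0, -1.0, 0.5]) + rng.normal(0.0, 0.01, n)

S0 = S - S.mean(axis=0)              # column-centered spectra
U, d, Vt = np.linalg.svd(S0, full_matrices=False)
T = U[:, :r_keep] * d[:r_keep]       # scores (n x r_keep)
P = Vt[:r_keep].T                    # loadings (p x r_keep), so S0 ~ T @ P.T

# Regress centered y on the orthogonal scores, then map the coefficients
# back to the original wavelength space: b = P q.
q, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
b = P @ q
y_pred = y.mean() + S0 @ b
print("RMSE:", np.sqrt(np.mean((y - y_pred) ** 2)))
```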

