Big Chemical Encyclopedia


Y block

As we will soon see, the nature of the work makes it extremely convenient to organize our data into matrices. (If you are not familiar with data matrices, please see the explanation of matrices in Appendix A before continuing.) In particular, it is useful to organize the dependent and independent variables into separate matrices. In the case of spectroscopy, if we measure the absorbance spectra of a number of samples of known composition, we assemble all of these spectra into one matrix which we will call the absorbance matrix. We also assemble all of the concentration values for the sample's components into a separate matrix called the concentration matrix. For those who are keeping score, the absorbance matrix contains the independent variables (also known as the x-data or the x-block), and the concentration matrix contains the dependent variables (also called the y-data or the y-block). [Pg.7]
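As a concrete sketch of this organisation (the numbers below are invented purely for illustration), the spectra and concentrations can be stacked row by row into NumPy arrays:

```python
import numpy as np

# Invented numbers: 4 samples, absorbances at 3 wavelengths,
# known concentrations of 2 components per sample.
spectra = [
    [0.12, 0.45, 0.30],   # sample 1
    [0.25, 0.40, 0.28],   # sample 2
    [0.18, 0.52, 0.35],   # sample 3
    [0.30, 0.38, 0.26],   # sample 4
]
concentrations = [
    [1.0, 0.5],           # sample 1: component A, component B
    [2.0, 0.4],
    [1.5, 0.6],
    [2.5, 0.3],
]

X = np.array(spectra)         # x-block: independent variables (samples x wavelengths)
Y = np.array(concentrations)  # y-block: dependent variables  (samples x components)

print(X.shape, Y.shape)       # rows of X and Y correspond sample-by-sample
```

The only convention that matters is that row i of X and row i of Y describe the same sample.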

In addition to the set of new coordinate axes (basis space) for the spectral data (the x-block), we also find a set of new coordinate axes (basis space) for the concentration data (the y-block). [Pg.131]

PLS is more complex than PCR because we are simultaneously using degrees of freedom in both the x-block and the y-block data. In the absence of a rigorous derivation of the proper number of degrees of freedom to use for PLS, a simple approximation is the number of samples, n, minus the number of factors (latent variables), f, minus 1. [Pg.170]
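The approximation described above is simple enough to state directly in code (the function name is ours, not from the text):

```python
def pls_degrees_of_freedom(n_samples, n_factors):
    """Approximate PLS degrees of freedom as n - f - 1 (the text's rule of thumb)."""
    return n_samples - n_factors - 1

# e.g. 25 calibration samples modelled with 4 latent variables
print(pls_degrees_of_freedom(25, 4))  # -> 20
```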

The decision whether or not to variance scale the x-block data is independent from the decision about scaling the y-block data. We can decide to scale either, both, or neither. [Pg.176]
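A minimal sketch of that independence, using invented data: here the x-block is variance scaled because its columns have wildly different magnitudes, while the y-block is only mean-centred:

```python
import numpy as np

def variance_scale(block):
    """Divide each mean-centred column by its standard deviation (unit variance)."""
    centred = block - block.mean(axis=0)
    return centred / centred.std(axis=0, ddof=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5)) * [1, 10, 100, 1000, 10000]  # columns on very different scales
Y = rng.normal(size=(10, 2))

Xs = variance_scale(X)        # scale the x-block ...
Yc = Y - Y.mean(axis=0)       # ... but only mean-centre the y-block
print(Xs.std(axis=0, ddof=1)) # every scaled column now has unit variance
```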

It is often helpful to examine the regression errors for each data point in a calibration or validation set with respect to the leverage of each data point or its distance from the origin or from the centroid of the data set. In this context, errors can be considered as the difference between expected and predicted (concentration, or y-block) values for the regression, or, for PCA, PCR, or PLS, errors can instead be considered in terms of the magnitude of the spectral... [Pg.185]
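Leverage is commonly computed from the score matrix; the sketch below assumes the usual definition h_i = 1/n + t_i'(T'T)^-1 t_i, and the score matrix itself is invented for illustration:

```python
import numpy as np

def leverages(T):
    """Leverage of each sample from its scores: h_i = 1/n + t_i' (T'T)^-1 t_i."""
    n = T.shape[0]
    G = np.linalg.inv(T.T @ T)
    return 1.0 / n + np.einsum('ij,jk,ik->i', T, G, T)

rng = np.random.default_rng(1)
T = rng.normal(size=(20, 3))   # invented score matrix: 20 samples, 3 factors
h = leverages(T)
print(h.sum())                 # leverages sum to 1 + n_factors
```

Plotting the regression error of each sample against its leverage then highlights points that are both poorly fitted and far from the bulk of the data.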


Partial least squares regression (PLS). Partial least squares regression applies to the simultaneous analysis of two sets of variables on the same objects. It allows for the modeling of inter- and intra-block relationships from an X-block and Y-block of variables in terms of a lower-dimensional table of latent variables [4]. The main purpose of regression is to build a predictive model enabling the prediction of wanted characteristics (y) from measured spectra (X). In matrix notation we have the linear model with regression coefficients b ... [Pg.544]
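The linear model y = Xb can be sketched on simulated data; note that this illustration estimates b by ordinary least squares purely to show the model form, whereas PLS differs in how b is obtained (via latent variables), not in the form of the model:

```python
import numpy as np

# Simulated data: 30 samples, 4 predictor variables, known coefficients.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
b_true = np.array([0.5, -1.2, 0.0, 2.0])
y = X @ b_true + 0.01 * rng.normal(size=30)    # small measurement noise

# Ordinary least-squares estimate of b -- shown only to illustrate y = X b.
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ b_hat                             # prediction step is the same for PLS
print(np.round(b_hat, 1))
```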

Figure 1 Schematic sequence of the direct and indirect competitive ELISA. The principal difference is that for the direct competitive immunoassay the well is coated with primary antibody directly, whereas for the indirect competitive immunoassay the well is coated with antigen. Primary antibody (Y), blocking protein (Y), analyte (T), analyte-tracer ( ), enzyme-labeled secondary antibody ( ), color development ( J)...
PLS should have, in principle, rejected a portion of the non-linear variance, resulting in a better, although not completely exact, fit to the data with just 1 factor. PLS does tend to reject (exclude) those portions of the x-data which do not correlate linearly with the y-block. (Richard Kramer)... [Pg.153]

In principle, in the absence of noise, the PLS factor should completely reject the nonlinear data by rotating the first factor into orthogonality with the dimensions of the x-data space which are spanned by the nonlinearity. The PLS algorithm is supposed to find the (first) factor which maximizes the linear relationship between the x-block scores and the y-block scores. So clearly, in the absence of noise, a good implementation of PLS should completely reject all of the nonlinearity and return a factor which is exactly linearly related to the y-block variances. (Richard Kramer)... [Pg.153]

The PLS-2 program uses the partial least squares (PLS) method. This method was proposed by H. Wold (37) and discussed by S. Wold (25). In such a problem there are two blocks of data, Y and X. It is assumed that Y is related to X by latent variables u and t, where t is derived from the X block and u is derived from the Y block. [Pg.209]

This example demonstrates that the PLS method gives a stable estimate of the Y-block, even though there are many more X variables than samples, a condition that rules out applying multiple regression. Another advantage of the method... [Pg.221]
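A sketch of this situation with invented data: 10 samples and 50 x-variables, where the normal equations of multiple regression are singular but a single NIPALS-style PLS factor still gives a stable fit:

```python
import numpy as np

# 10 samples but 50 x-variables: X'X cannot be inverted, so multiple regression
# has no unique solution, while one PLS factor is still well defined.
rng = np.random.default_rng(3)
t = rng.normal(size=(10, 1))              # one underlying latent variable
X = t @ rng.normal(size=(1, 50)) + 0.01 * rng.normal(size=(10, 50))
y = (t[:, 0] * 2.0) + 0.01 * rng.normal(size=10)

print(np.linalg.matrix_rank(X.T @ X))     # far below 50: normal equations are singular

# One PLS factor (NIPALS-style): weight vector, score, scalar inner regression.
w = X.T @ y; w /= np.linalg.norm(w)
t_hat = X @ w
b = (t_hat @ y) / (t_hat @ t_hat)
y_fit = b * t_hat
print(np.corrcoef(y, y_fit)[0, 1])        # close to 1: stable one-factor fit
```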

Kataoka K, Kwon GS, Yokoyama M, Okano T, Sakurai Y. Block-copolymer micelles as vehicles for drug delivery. J Controlled Release 1993;24:119-132. [Pg.201]

At first, optimization methods are not interested in the relationships between the two blocks of variables, but only in the X value that produces the best value of Y (often this Y block contains only one variable), according to some kind of requirement. This is generally a search for a maximum (or a minimum) of the hypersurface y = f(X). Obviously, when the equation of this surface is known, this maximum search is much easier, so that the spread of correlation techniques will be profitable to optimization problems too. [Pg.135]
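When the surface y = f(X) is known, the search is indeed easy; a toy one-dimensional grid search on an assumed quadratic response illustrates the idea:

```python
import numpy as np

def f(x):
    return -(x - 1.7) ** 2 + 3.0     # invented response surface, maximum at x = 1.7

grid = np.linspace(0.0, 3.0, 3001)   # step of 0.001 over the search range
best = grid[np.argmax(f(grid))]
print(round(best, 2))                # -> 1.7
```

With an unknown surface, the same search would have to be replaced by sequential experimentation (e.g. simplex or response-surface designs).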

To test the potential of PLS to predict odour quality, it was used in a QSAR study of volatile phenols. A group of trained sensory panelists used descriptive analysis (28) to provide odour profiles for 17 phenols. The vocabulary consisted of 44 descriptive terms, and a scale from 0 (absent) to 5 (very strong) was used. The panel average sensory scores for the term sweet were extracted and used as the Y-block of data, to be predicted from physico-chemical data. [Pg.105]

With the molecular descriptors as the X-block, and the sensory scores for sweet as the Y-block, PLS was used to calculate a predictive model using the Unscrambler program version 3.1 (CAMO A/S, Jarleveien 4, N-7041 Trondheim, Norway). When the full set of 17 phenols was used, optimal prediction of sweet odour was shown with 1 factor. Loadings of variables and scores of compounds on the first two factors are shown in Figures 1 and 2 respectively. Figure 3 shows the predicted sweet odour score plotted against that provided by the sensory panel. Vanillin, with a sensory score of 3.3, was an obvious outlier in this set, and so the model was recalculated without it. Again 1 factor was required for optimal prediction, shown in Figure 4. [Pg.105]

M | Y | Stacking of M and Y blocks | Length of c-axis (Å) | Ideal chemical composition [Pg.179]

Thus, it has been confirmed that there are many phases on the line MY of Fig. 2.83, which originate from the ordered stacking of M and Y blocks along the c-axis. As is clear, an infinite number of discrete compounds can, in principle, exist between the M and Y phases. This is a typical example of an intergrowth structure. [Pg.180]

In PLS, the spectral measurements (atomic absorbances or intensities registered at different times or wavelengths) constitute a set of independent variables (or, better, predictors) which, in general, is called the X-block. The variable that has to be predicted is the dependent variable (or predictand) and is called the Y-block. Actually, PLS can be used to predict not only one y-variable, but several, hence the term 'block'. This was, indeed, the problem that H. Wold addressed: how to use a set of variables to predict the behaviour of several others. To be as general as possible, we will consider that situation here. Obvious simplifications will be made to consider the prediction of only one dependent y variable. Although the prediction of several analytes is not common in atomic spectroscopy so far, it has sometimes been applied in molecular spectroscopy. Potential applications of this specific PLS ability may be the... [Pg.182]
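The several-y-variables case can be sketched with a single NIPALS-style PLS2 factor on simulated data for two analytes; the iteration below is a minimal illustration under those assumptions, not a complete algorithm:

```python
import numpy as np

def pls2_factor(X, Y, tol=1e-10, max_iter=500):
    """One PLS2 factor by NIPALS iteration (assumes X and Y already centred)."""
    u = Y[:, [0]]                             # start u from a column of the y-block
    for _ in range(max_iter):
        w = X.T @ u; w /= np.linalg.norm(w)   # x-weights
        t = X @ w                             # x-block scores
        q = Y.T @ t; q /= np.linalg.norm(q)   # y-loadings
        u_new = Y @ q                         # y-block scores
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return t, u, w, q

rng = np.random.default_rng(4)
t_true = rng.normal(size=(15, 1))
X = t_true @ rng.normal(size=(1, 6)) + 0.01 * rng.normal(size=(15, 6))
Y = t_true @ np.array([[1.0, -0.5]]) + 0.01 * rng.normal(size=(15, 2))  # two analytes

t, u, w, q = pls2_factor(X - X.mean(0), Y - Y.mean(0))
print(abs(np.corrcoef(t.ravel(), u.ravel())[0, 1]))  # block scores correlate strongly
```

With a single y-variable the inner loop collapses: u is simply the centred y, which is the usual PLS1 simplification mentioned in the text.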

Figure 4.4 Step 2 of the NIPALS algorithm: scores for the X-block, taking into account the information in the concentrations (Y-block).
Step 4: So far we have only extracted information from the X- and Y-blocks. Now a regression step is needed to obtain a predictive model. This is achieved by establishing an inner relationship between u (the scores representing the concentrations of the analytes we want to predict) and t (the scores representing the spectral absorbances) that we just calculated for the X- and Y-blocks. The simplest is an ordinary regression (note that the regression coefficient is just a scalar because we are relating two vectors): ... [Pg.188]
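With invented score vectors, the scalar inner-relation coefficient is simply b = (t'u)/(t't):

```python
import numpy as np

t = np.array([0.5, -1.0, 2.0, 0.3])   # invented x-block scores
u = np.array([1.1, -1.9, 4.2, 0.5])   # invented y-block scores

b = (t @ u) / (t @ t)                 # scalar inner-relation coefficient
u_fit = b * t                         # inner relation: u predicted from t
print(round(b, 3))                    # -> 2.06
```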

Step 6: To extract the second factor (or latent variable), the information linked to the first factor has to be subtracted from the original data, and residual matrices are obtained for the X- and Y-blocks as... [Pg.189]
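A sketch of this deflation on simulated, centred data (the symbols E and f for the residuals are assumptions of this illustration): subtracting the first factor's contribution leaves residuals orthogonal to the extracted scores:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(12, 5)); X -= X.mean(0)   # invented centred x-block
y = rng.normal(size=12);      y -= y.mean()    # invented centred y-block (one analyte)

w = X.T @ y; w /= np.linalg.norm(w)   # weights and scores of factor 1
t = X @ w
p = X.T @ t / (t @ t)                 # x-loadings
b = (t @ y) / (t @ t)                 # inner-relation coefficient

E = X - np.outer(t, p)                # residual matrix for the x-block
f = y - b * t                         # residual vector for the y-block
print(abs(E.T @ t).max(), abs(f @ t)) # both residuals are orthogonal to t
```

The second factor is then extracted from E and f exactly as the first was from X and y.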

A very important advantage of PLS is that it supports errors in both the X- and Y-blocks. Questions about whether the y-values (the concentrations of the analyte in the standards) can be wrong may appear exotic, but nowadays instruments are so powerful, precise and sensitive that the claim that there are no errors in the concentrations is debatable. As Brereton pointed out [18], the standards are, most of the time, prepared by weighing and/or diluting, and the quality of the volumetric hardware (pipettes, flasks, etc.) has not improved as dramatically as the electronics, accessories and, in general, instruments. Therefore, the concentration of the analyte may have a non-negligible uncertainty. This is of special concern for trace and ultra-trace analyses. Thus, MLR cannot cope with such uncertainties, but PLS can because it performs a sort of averaging (i.e. over factors) which can remove most... [Pg.191]

A way to start evaluating how a model performs when different numbers of factors are considered is to evaluate how much of the information (i.e. variance) in the X- and Y-blocks is explained. We expect that, whatever number of factors is optimal, little relevant information should enter the model beyond that value. Almost any software will calculate the amount of variance explained by the model. A typical output will appear as in Table 4.1. There, it is clear that not all the information in X is useful to predict the concentration of Sb in the standards, probably because of the interfering phenomena caused by the concomitants. It is worth noting that only around 68% of the information in X is related to around 98% of the information in Y ([Sb]). This type of table is not always so clear, and a fairly large number of factors may be required to model a large percentage of the information in X and, more importantly, in Y. As a first gross approach, one can say that the optimal dimensionality should be... [Pg.204]

Number of factors | Variance explained in X-block (%) | Variance explained in Y-block (%) [Pg.205]
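The percentages in such a table come from the residuals of the factor model; the sketch below uses invented data and a one-factor SVD fit purely to illustrate the calculation:

```python
import numpy as np

def explained_variance(block, scores, loadings):
    """Percent of a centred block's total variance captured by the factor model."""
    resid = block - scores @ loadings.T
    return 100.0 * (1.0 - (resid ** 2).sum() / (block ** 2).sum())

rng = np.random.default_rng(6)
t = rng.normal(size=(20, 1))                              # one dominant factor
X = t @ rng.normal(size=(1, 8)) + 0.3 * rng.normal(size=(20, 8))
X -= X.mean(0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)          # one-factor fit for illustration
scores, loadings = U[:, :1] * s[:1], Vt[:1].T
print(round(explained_variance(X, scores, loadings), 1))  # most of X sits in factor 1
```

Applying the same function to the Y-block residuals, factor by factor, reproduces the two columns of the table above.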

The relationship between the X-block (here, atomic spectra visualised in Figure 4.9) and the Y-block (here, [Sb]) can be evaluated by the t-u plot. This plot shows how the different factors (latent variables) account for the relation between X and Y. Hence we can inspect whether a sample stands out because of an anomalous position in the relation. Also, we can visualise whether there is a fairly good straight-line relationship between the blocks. Whenever the samples follow a clear curved shape, we can reasonably assume that the model has not... [Pg.211]
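The data behind a t-u plot are just the paired first-factor scores of the two blocks; with invented, purely linear data the pairs fall on a nearly straight line (a curved point cloud would warn that the linear inner relation is inadequate):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4)); X -= X.mean(0)          # invented centred x-block
y = X @ np.array([1.0, 0.5, 0.0, -0.3]); y -= y.mean() # exactly linear y

w = X.T @ y; w /= np.linalg.norm(w)
t = X @ w                          # first-factor x-block scores
u = y                              # with a single y-variable, u is centred y itself
pairs = np.column_stack([t, u])    # plot column 0 against column 1 for the t-u plot
print(np.corrcoef(t, u)[0, 1])     # near 1: a straight t-u relation, no curvature
```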

