Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Matrix latent solutions

This result can be generalized into the statement that any arbitrary vector in n dimensions can always be expressed as a linear combination of n basis vectors, provided these are linearly independent. It will be shown that the latent solutions of a singular matrix provide an acceptable set of basis vectors, just as the eigensolutions of certain differential equations provide an acceptable set of basis functions. [Pg.19]
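As a minimal numpy sketch of the statement above (the basis vectors and the target vector here are invented for illustration): given n linearly independent basis vectors in n dimensions, the expansion coefficients of any vector are unique and can be found by solving one linear system.

```python
import numpy as np

# Three linearly independent basis vectors in R^3, stacked as the
# columns of B (hypothetical example data).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 4.0])

# Because the columns of B are linearly independent, B is invertible
# and the coefficients c in v = B @ c are unique.
c = np.linalg.solve(B, v)

# Reconstructing v as the linear combination of basis vectors
# recovers it exactly.
assert np.allclose(B @ c, v)
print(c)
```

If the columns of B were linearly dependent, `np.linalg.solve` would raise a singularity error, which is exactly the case the latent solutions of a singular matrix address.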

Many solutions for getting rid of collinearity exist [31, 52]. Some of them make use of latent variables: the K variables in X are replaced by A variables in a new matrix, called T (figure 12.15 c). Note first that A is always less than or equal to N, so the requirement of condition 1 is fulfilled. The way the values in T are calculated also forces its column vectors to be orthogonal, so condition 2 above (no collinearity) is met as well. The regression equation using T is expressed as follows (see also figure 12.15 d). [Pg.407]
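The replacement of X by a score matrix T with orthogonal columns can be sketched in numpy. This is a generic illustration on invented data, using the leading right singular vectors of X as the projection (a PCA-style choice; PLS would construct the directions using the response as well):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: N samples, K predictors that are collinear by
# construction (X has rank A < K).
N, K, A = 20, 5, 2
base = rng.normal(size=(N, A))
X = base @ rng.normal(size=(A, K))            # collinear columns
y = base @ np.array([1.5, -2.0]) + 0.01 * rng.normal(size=N)

# Replace the K variables by A latent variables: T = X @ W, with W the
# leading right singular vectors of X.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
T = X @ Vt[:A].T                              # N x A score matrix

# Condition 2: the columns of T are mutually orthogonal.
G = T.T @ T
assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-8)

# Regression on T is now well conditioned, whereas regressing on the
# rank-deficient X directly would be ill-posed.
b, *_ = np.linalg.lstsq(T, y, rcond=None)
print(b)
```

The point of the sketch is only conditions 1 and 2: T has fewer columns than samples, and those columns are orthogonal, so the normal equations for the regression on T are invertible.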

In the SR method, temperatures are the dominant variables and are found by a Newton-Raphson solution of the stage energy balances. Compositions have less influence on the calculated temperatures than do heat effects and latent heats of vaporization. The component flow rates are found by the tridiagonal matrix method and are summed to obtain the total rates, hence the name sum rates. [Pg.161]
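The tridiagonal systems that arise from the component balances can be solved with the standard Thomas algorithm. The sketch below is generic, not tied to any particular column model; the coefficient values are invented for the demonstration.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, main diagonal b,
    super-diagonal c and right-hand side d (a[0] and c[-1] are unused).
    Returns the solution vector x."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    # Forward elimination.
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 3-stage example with diagonal 2 and off-diagonals -1; the right-hand
# side is chosen so that the exact solution is x = [1, 2, 3].
a = [0.0, -1.0, -1.0]
b = [2.0, 2.0, 2.0]
c = [-1.0, -1.0, 0.0]
d = [0.0, 0.0, 4.0]
print(thomas(a, b, c, d))
```

The cost is linear in the number of stages, which is why the tridiagonal formulation is preferred over a general dense solve in stage-to-stage calculations.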

For multivariate calibration in analytical chemistry, the partial least squares (PLS) method [19] is very efficient. Here, the relations between a set of predictors and a set (not just one) of response variables are modeled. In multicomponent calibration the known concentrations of l components in n calibration samples are collected to constitute the response matrix Y (n rows, l columns). Digitization of the spectra of calibration samples using p wavelengths yields the predictor matrix X (n rows, p columns). The relations between X and Y are modeled by latent variables for both data sets. These latent variables (PLS components) are constructed to exhaust maximal variance (information) within both data sets on the one hand and to be maximally correlated for the purpose of good prediction on the other hand. From the computational viewpoint, solutions are obtained by a simple iterative procedure. Having established the model for calibration samples, component concentrations for future mixtures can be predicted from their spectra. A survey of multicomponent regression is contained in [20]. [Pg.59]
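The "simple iterative procedure" mentioned above can be sketched for the first PLS component in the NIPALS style. The data here are synthetic (invented pure-component spectra and concentrations), and only one latent variable is extracted; a full implementation would deflate X and Y and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration set: n samples, p wavelengths, l components.
n, p, l = 15, 30, 2
C = rng.uniform(0.1, 1.0, size=(n, l))        # concentrations -> Y (n x l)
S = rng.uniform(size=(l, p))                  # assumed pure spectra
X = C @ S + 0.001 * rng.normal(size=(n, p))   # mixture spectra (n x p)
Y = C

# NIPALS-style iteration for the first latent variable: alternate
# between X-side and Y-side scores until convergence.
u = Y[:, [0]]                                 # start from a column of Y
for _ in range(100):
    w = X.T @ u / (u.T @ u)                   # X weights
    w /= np.linalg.norm(w)
    t = X @ w                                 # X scores (latent variable)
    q = Y.T @ t / (t.T @ t)                   # Y loadings
    u_new = Y @ q / (q.T @ q)                 # Y scores
    if np.linalg.norm(u_new - u) < 1e-12:
        u = u_new
        break
    u = u_new

# The first PLS component maximizes the covariance between t and u, so
# the two score vectors end up strongly correlated.
r = float(np.corrcoef(t.ravel(), u.ravel())[0, 1])
print(round(r, 3))
```

This illustrates the "maximally correlated" construction in the text: the X-scores t and Y-scores u of the converged component carry the shared information used for prediction.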

As the rank can be at most G - 1, as is evident from Equation (20), this represents the maximum number of canonical variates which can be computed, consistent with what was already discussed in the case of two classes, where only a single latent variable can be extracted. It must be stressed here that, whatever the number of categories involved, LDA requires inversion of the pooled (within-groups) covariance matrix S. For this matrix to be invertible, the total number of training samples should be at least equal to the number of variables; otherwise its determinant is zero and no inverse exists. Some authors indicate an even larger ratio of the number of samples to the number of variables (at least 3) to obtain a meaningful solution. Therefore, these conditions pose a strict limitation on the kind of problems where LDA can be applied, or suggest the need for some form of variable selection/feature reduction prior to the classification analysis. [Pg.198]
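The invertibility condition above is easy to demonstrate numerically. In this sketch (random data, invented sample sizes), a covariance matrix estimated from fewer samples than variables has rank at most n - 1 < p, so its determinant vanishes; with ample samples it is comfortably invertible.

```python
import numpy as np

rng = np.random.default_rng(2)

n_small, n_large, p = 5, 40, 10   # sample sizes vs number of variables

def cov_det(n, p):
    """Determinant of a p x p covariance matrix estimated from n samples."""
    X = rng.normal(size=(n, p))
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (n - 1)        # covariance estimate (p x p)
    return np.linalg.det(S)

det_small = cov_det(n_small, p)    # rank <= n-1 < p: determinant is ~0
det_large = cov_det(n_large, p)    # full rank: determinant is nonzero
print(det_small, det_large)
```

This is why, for n < p, the pooled within-groups matrix S cannot be inverted and LDA fails unless the dimensionality is first reduced.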



© 2024 chempedia.info