Eigenvector plot

Fig. 8. Plot of data from patients having liver diseases A or B or unknown X: (a) on two blood enzymes, (b) scores of points on the first two eigenvectors obtained from an eight-dimensional enzyme space, and (c) eigenvector plot of the variance-weighted data. Variance weights ranged from 3.5 to 1.2 for the eight blood enzymes measured; a weight of 1.0 indicates no discrimination information (22).
Eigenvector Plots. Our prejudice was that distinguishing classes with differing functional groups would be less difficult than distinguishing different carbon skeletons. We therefore chose the latter as the more stringent test of this means of data display. [Pg.167]
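A minimal sketch of how such an eigenvector (scores) plot can be produced, assuming synthetic stand-in data and a simple multiplicative form of variance weighting; neither the patient data nor the exact weighting formula from the original study is available here, so every name and number below is hypothetical:

```python
import numpy as np

# Hypothetical example only: random stand-ins for the 8 blood-enzyme
# measurements described in Fig. 8; labels A, B, and unknown X are arbitrary.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                      # 60 patients x 8 enzymes
labels = np.repeat(["A", "B", "X"], 20)

# Assumed form of variance weighting: autoscale each variable, then scale it
# by a per-variable discrimination weight (1.0 = no discrimination information).
weights = np.array([3.5, 2.8, 2.3, 2.0, 1.8, 1.5, 1.3, 1.2])
Xa = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
Xw = Xa * weights

# Scores on the first two eigenvectors of the weighted data (the axes of the
# "eigenvector plot" in Fig. 8c): project onto the leading right singular vectors.
Xc = Xw - Xw.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                            # one (x, y) point per patient
```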

When there are more than two or three significant eigenvectors, the information cannot easily be visualized in a few eigenvector plots. In these cases, nonlinear mapping (NLM) can give a planar representation of the objects with greater fidelity to the structure of the information in the hyperspace of the variables... [Pg.104]
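A minimal sketch of obtaining a planar, NLM-style representation. The classical NLM in this literature is Sammon's mapping; metric MDS from scikit-learn is used here only as a closely related, readily available stand-in, on made-up data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical data with more than 2-3 significant eigenvectors, so a single
# pair of eigenvector axes would not capture the structure well.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))

# Metric MDS seeks a 2-D configuration whose inter-object distances match
# those in the original variable hyperspace as closely as possible.
nlm = MDS(n_components=2, random_state=0)
coords = nlm.fit_transform(X)        # 50 x 2 planar map coordinates for plotting
```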

In practice, almost all studies on food have some prerecognized categories, and the detection of new categories in an eigenvector plot shows that some factors are unknown or that their importance has been underestimated, so that the classification problem has to be reformulated. [Pg.131]

Fig. 37. Eigenvector plot of Chivas data after feature selection (6 selected variables). (Adapted from Ref. )
A set of 43 features was derived from physical and chemical data (IR-, UV-, and NMR-spectra, molecular weight, melting point, boiling point, density, specific rotation, solubility in water and alcohol) for each of a total of 47 compounds. Feature selection and eigenvector plots indicate that there are only two meaningful axes in the 43-dimensional data space. The axes were found to relate to the molecule's electron donor ability and its directed dipole. [Pg.181]
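One way to see how many meaningful axes a feature space contains is to examine the fraction of total variance carried by each eigenvector. A hedged sketch on a hypothetical stand-in matrix of the same size (the real 47 x 43 feature table is not reproduced in the excerpt):

```python
import numpy as np

# Hypothetical stand-in for the 47-compound x 43-feature matrix; the point is
# the diagnostic, not the data.
rng = np.random.default_rng(2)
X = rng.normal(size=(47, 43))

Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)

# If only the first two fractions are appreciable, the 43-dimensional feature
# space collapses onto two meaningful axes, as reported for the study above.
print(np.round(explained[:5], 3))
```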

Figure 33. The data from Figure 32 plotted together with the first eigenvector (factor) for the data.
Even though two factors are all we need to span this data, we could find as many factors as there are wavelengths in the spectra. Each successive factor is identical to each successive eigenvector of the data. Each successive factor will capture the maximum variance of the data that was not yet spanned by the earlier factors. Each successive factor must be mutually orthogonal to all the factors that precede it. Let's continue on and plot the third factor for this data set. The plots are shown in Figures 37 and 38. [Pg.88]
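A short sketch illustrating these two properties, mutual orthogonality and successive capture of the remaining variance, on hypothetical two-component mixture spectra (not the actual data of Figures 37 and 38):

```python
import numpy as np

# Hypothetical two-component mixture spectra (20 samples x 100 wavelengths).
rng = np.random.default_rng(3)
pure = rng.random(size=(2, 100))                 # two pure-component spectra
conc = rng.random(size=(20, 2))                  # mixture concentrations
A = conc @ pure

Ac = A - A.mean(axis=0)
_, s, Vt = np.linalg.svd(Ac, full_matrices=False)
factors = Vt                                     # row i = (i+1)-th factor

# Mutual orthogonality: the factors form an orthonormal set.
print(np.allclose(factors @ factors.T, np.eye(factors.shape[0])))

# Fraction of total variance captured by each successive factor; after the
# first two it drops to essentially zero for this two-component data.
print(np.round(s[:4]**2 / np.sum(s**2), 4))
```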

Next, we find the first eigenvector of the noisy data set and plot it in Figures 41 and 42. We see that it is nearly identical to the first eigenvector of the noise-free data. [Pg.91]

Continuing, we find the second eigenvector for the noisy data. Figures 43 and 44 contain plots of the first two eigenvectors for the noisy data. Again, the second eigenvector for the noisy data is nearly identical to that of the noise-free data. [Pg.92]

Completing the cycle, we calculate the third eigenvector for the noisy data. Figures 45 and 46 contain the plots of all three eigenvectors for the noisy data. [Pg.93]
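The comparison between the noisy and noise-free eigenvectors can be made quantitative by taking the cosine (dot product) between corresponding eigenvectors. A sketch on hypothetical two-component spectra, not the data of Figures 39-46:

```python
import numpy as np

# Hypothetical noise-free two-component spectra and a noisy copy of them.
rng = np.random.default_rng(4)
clean = rng.random(size=(20, 2)) @ rng.random(size=(2, 100))
noisy = clean + rng.normal(scale=0.01, size=clean.shape)

def first_eigenvectors(data, k):
    centered = data - data.mean(axis=0)
    return np.linalg.svd(centered, full_matrices=False)[2][:k]

v_clean = first_eigenvectors(clean, 3)
v_noisy = first_eigenvectors(noisy, 3)

# |cosine| near 1 means the noisy-data eigenvector is nearly identical to the
# noise-free one; the third, which spans only noise, should not match.
for i in range(3):
    print(i + 1, round(abs(float(np.dot(v_clean[i], v_noisy[i]))), 4))
```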

Figure 45. The noisy data from Figure 39 plotted together with all three eigenvectors (factors) for the data.
Thus, if we wish to compare the eigenvectors to one another, we can normalize them by dividing each one as shown in equation [57]. Malinowski named these normalized eigenvectors reduced eigenvectors, or REV. Figure 52 also contains a plot of the REV for this isotropic data. We can see that they are all roughly equal to one another. If there had been actual information present along with the noise, the information content could not, itself, be isotropically distributed. (If the information were isotropically distributed, it would be, by definition, noise.) Thus, the information would be preferentially captured by the earliest... [Pg.106]
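A sketch of this normalization on a hypothetical pure-noise matrix. Equation [57] itself is not reproduced in the excerpt, so Malinowski's usual reduced-eigenvalue normalization is assumed here as a stand-in:

```python
import numpy as np

# Hypothetical pure-noise data matrix, r rows by c columns.
rng = np.random.default_rng(5)
r, c = 30, 100
X = rng.normal(size=(r, c))

eigenvalues = np.linalg.svd(X, compute_uv=False) ** 2

# Assumed normalization (Malinowski): divide the j-th eigenvalue by
# (r - j + 1)(c - j + 1); this stands in for the text's equation [57].
j = np.arange(1, eigenvalues.size + 1)
rev = eigenvalues / ((r - j + 1) * (c - j + 1))

# For isotropic noise, the REV values come out roughly equal to one another.
print(np.round(rev[:10], 4))
```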

Again, it is clear from these plots that the errors are minimized when 5 factors are used. Thus, we will construct our calibration matrices using a basis space comprised of the first 5 eigenvectors (factors). [Pg.116]
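A hedged sketch of building such a calibration from the first 5 eigenvectors (principal component regression). The spectra, reference values, and dimensions below are made up; only the procedure follows the text:

```python
import numpy as np

# Hypothetical calibration data: 25 spectra x 100 wavelengths, one property y.
rng = np.random.default_rng(6)
A = rng.random(size=(25, 100))
y = rng.random(size=25)

k = 5                                            # number of factors retained
A_mean, y_mean = A.mean(axis=0), y.mean()
_, _, Vt = np.linalg.svd(A - A_mean, full_matrices=False)
basis = Vt[:k].T                                 # first 5 eigenvectors (factors)

scores = (A - A_mean) @ basis                    # spectra in the factor space
coefs = np.linalg.lstsq(scores, y - y_mean, rcond=None)[0]

def predict(spectra):
    # Principal-component-regression prediction using the 5-factor basis.
    return ((spectra - A_mean) @ basis) @ coefs + y_mean

print(np.round(predict(A[:3]), 3))
```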

Now, let's calculate the eigenvectors for the spectra in Figure 66. In fact, we've already done this in the chapter on Factor Spaces. They were plotted in Figures 37 and 38. For convenience, we reproduce these plots here as Figures 70 and 71. [Pg.134]

