Big Chemical Encyclopedia


Spectral vectors

If a is a spectral vector (dimension f by 1) and A is the matrix of calibration spectra (of dimension n by f), then the Mahalanobis Distance is defined as ... [Pg.497]

In MLR, if m is the vector of the selected absorbance values obtained from a spectral vector a (its dimension is the number of selected wavelengths by 1), and M is the matrix of selected absorbance values for the calibration samples, then the Mahalanobis Distance is defined as in equation 74-6a ... [Pg.498]
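The elided definition can be sketched numerically. The following assumes the usual covariance-based form of the Mahalanobis distance; the synthetic data and the use of the calibration covariance matrix are illustrative assumptions, not the exact equation of the cited text:

```python
import numpy as np

rng = np.random.default_rng(0)

n, f = 50, 5                          # n calibration spectra, f wavelengths
A = rng.normal(size=(n, f))           # calibration spectra (n by f)
a = rng.normal(size=f)                # unknown spectral vector (f by 1)

mean = A.mean(axis=0)                 # mean calibration spectrum
cov = np.cov(A, rowvar=False)         # f x f covariance of calibration spectra
diff = a - mean
D2 = float(diff @ np.linalg.inv(cov) @ diff)   # squared Mahalanobis distance
D = np.sqrt(D2)
```

A spectrum identical to the mean calibration spectrum has distance zero; spectra far from the calibration cloud (relative to its covariance) get large distances.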

The original spectra, measured at 21 wavelengths, are of course represented as 21-dimensional spectral vectors in a 21-dimensional space. [Pg.226]

An important question arises: where in the 21-dimensional space (or 3-dimensional space) can all the measured vectors be found? Is it possible to restrict the potential locations of the spectral vectors to a subspace? A first restriction is obvious: since absorbances can only be positive, only those parts of the space with positive coordinates are available to the spectral vectors. Is there anything more specific? Figure 5-9 represents this question in a 3-dimensional space. [Pg.226]

To be able to represent the spectral vectors in the plane, we need a system of axes, preferably an orthonormal system. As it turns out, the two eigenvectors V form an orthonormal system of axes in that plane. This is represented in Figure 5-11. [Pg.228]

The next question arises immediately: how do we determine the coordinates b of the spectral vectors y in this new system of axes V? ... [Pg.229]

The fact that the spectral vectors in a closed system lie in a further reduced sub-space (in a 2-component system they lie on a straight line, in a 3-component system in a plane, etc.) suggests that we could move the origin of the system of axes into that sub-space, so that the number of relevant dimensions is reduced by one. [Pg.240]

We subtract the mean spectrum from each measured spectrum y and, as a result, the origin of the system of axes is moved into the mean; in the above example, it moves into the plane of all spectral vectors. This is called mean-centring. Mean-centring is numerically superior to subtraction of one particular spectrum, e.g. the first one. The Matlab program Main_MeanCenter.m performs mean-centring on the titration data and displays the resulting curve in such a way that we see the zero u_s,3-component, i.e. the fact that the origin (+) lies in the (u_s,1, u_s,2)-plane. [Pg.240]
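Mean-centring itself is a one-line operation. A minimal sketch with made-up spectra (the matrix Y and its values are illustrative, not the titration data):

```python
import numpy as np

Y = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])      # rows = measured spectra

mean_spectrum = Y.mean(axis=0)       # the mean spectrum
Yc = Y - mean_spectrum               # mean-centred spectra

# After centring, every wavelength channel (column) of Yc sums to zero,
# i.e. the origin now sits at the mean of the data cloud.
```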

The path starts and ends at the origin (marked by O). Figure 5-22 reveals that there are no components eluting at the beginning and end of the chromatogram, and therefore the respective spectral vectors contain just noise. [Pg.241]

The useful aspect of this is as follows: we can determine the regions in the series of spectra in which there is only one component. There, the spectral vectors are all parallel, and the average over all spectra in the region is a good estimate of the pure component spectrum. The main difficulty with this approach is deciding exactly where the deviation from a straight line starts, and thus which selection of spectra to average. [Pg.242]
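One way to check the "parallel vectors" condition is via the cosine between each spectrum and the region average. A sketch with a synthetic single-component region (the pure spectrum and the concentration values are invented for illustration):

```python
import numpy as np

pure = np.array([0.2, 0.5, 0.3])                         # hypothetical pure spectrum
region = np.vstack([c * pure for c in (0.5, 1.0, 1.5)])  # one component only

estimate = region.mean(axis=0)
estimate /= np.linalg.norm(estimate)     # normalized estimate of the pure spectrum

# In a true single-component region every spectrum is parallel to the
# estimate, so all cosines are 1 (up to noise).
cosines = region @ estimate / np.linalg.norm(region, axis=1)
```

A drop of the cosine below 1 marks the onset of a second component, which is exactly the boundary decision the text calls difficult.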

All pixel vectors in a test image are first normalized by a constant, namely the maximum value over all spectral components of the spectral vectors in that test image, so that the entries of the normalized pixel vectors fall into the interval [0, 1]. This rescaling of the pixel vectors is performed mainly to use the dynamic range of the Gaussian RBF kernel effectively. [Pg.196]
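A minimal sketch of this rescaling, with invented pixel values:

```python
import numpy as np

pixels = np.array([[10.0, 40.0],
                   [ 0.0, 80.0],
                   [20.0, 16.0]])    # rows = pixel (spectral) vectors

global_max = pixels.max()            # maximum over all spectral components
normalized = pixels / global_max     # all entries now lie in [0, 1]
```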

For PCA, the entire spectral data set, containing n spectra, is written as a matrix S in which each column represents one spectral vector S(v) of m intensity data points. The spectral vectors may be raw or smoothed intensities, or first or second derivatives. [Pg.180]
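Assembling S and extracting the principal components via the SVD can be sketched as follows. The data are synthetic, and the mean-centring across the n spectra is a common but here assumed preprocessing step:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 100, 8
S = rng.normal(size=(m, n))               # each column: one spectral vector S(v)

Sc = S - S.mean(axis=1, keepdims=True)    # centre across the n spectra
U, s, Vt = np.linalg.svd(Sc, full_matrices=False)
# Columns of U: spectral loadings; rows of Vt: scores per spectrum;
# s**2 / (n - 1): variance captured by each principal component.
```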

Because of (a) the overlap of the spectral and retention-time vectors, and (b) the noise in the data, there will in general be a range of possible Ks which can transform the U and V sets into sets with all-positive elements. Thus the estimates for the spectral vectors will have some degree of ambiguity. The exact degree of uncertainty has been shown to depend on the degree of overlap of the spectra and retention profiles and on whether there are regions in the time, detector... [Pg.182]

As mentioned previously, one of the main advantages of PLS is that the resulting spectral vectors are directly related to the constituents of interest. This is entirely unlike PCR, where the vectors merely represent the most common spectral variations in the data, completely ignoring their relation to the constituents of interest until the final regression step. [Pg.43]

A normalized scalar product of two spectral vectors was used as the similarity measure during matching, and sequential searches through the entire library were always performed. Thus, each spectrum in turn was treated as a query. To simulate small variances in data acquisition and/or spectral differences between very similar compounds, 1% and 5% random white noise was added. The appropriate decomposition (wavelet or PCA) was then performed, and the resulting vector was compared to each of the spectral vectors in the compressed library. [Pg.296]
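The matching procedure can be sketched as follows. The library contents are random, only the 1% noise case is simulated, and the compression step is omitted; the normalized scalar product plays the role of the similarity measure:

```python
import numpy as np

rng = np.random.default_rng(2)
library = rng.random(size=(20, 64))                # 20 spectra, 64 points each
library /= np.linalg.norm(library, axis=1, keepdims=True)   # unit length

# Treat library entry 7 as the query, with 1% white noise added (as in the text).
query = library[7] + 0.01 * rng.normal(size=64)
query /= np.linalg.norm(query)

similarities = library @ query                     # normalized scalar products
best = int(np.argmax(similarities))                # best-matching library entry
```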

The appropriate SVD-derived spectral and temporal eigenvectors were selected and the temporal vectors were modeled. Ideally, the temporal vectors are the kinetic traces of individual components, each one being associated with the spectrum of a pure component (i.e., the spectral vector). Once the temporal vectors had been modeled, the pure component spectra were reconstructed as a function of the pre-exponential multiplier obtained from the analysis, the SVD-determined spectral eigenvectors, and the corresponding eigenvalues. After the spectra of the component species were determined, the extinction profile was calculated and used along with the calculated decay times to construct a linear combination of the pure component species contributions to the observed... [Pg.201]
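The SVD step can be illustrated on a synthetic two-component kinetic data set. The spectra and rate constant below are invented, and the exponential modeling of the temporal vectors is omitted; only the separation into spectral and temporal eigenvectors is shown:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 50)
w = np.linspace(-3.0, 3.0, 40)
spec1 = np.exp(-0.5 * w ** 2)                # hypothetical pure spectrum 1
spec2 = np.exp(-0.5 * (w - 1.0) ** 2)        # hypothetical pure spectrum 2
c1 = np.exp(-1.2 * t)                        # decay of component 1
c2 = 1.0 - np.exp(-1.2 * t)                  # growth of component 2

D = np.outer(spec1, c1) + np.outer(spec2, c2)    # wavelengths x times
U, s, Vt = np.linalg.svd(D, full_matrices=False)
# Columns of U: spectral eigenvectors; rows of Vt: temporal eigenvectors.
rank = int(np.sum(s > 1e-10 * s[0]))             # significant components
```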

Among the spectral vectors that fulfil the system of equations there is an optimal one, i.e., the spectrum of a signal sampled in a uniform manner. Finding it, however, is not a simple task (even if thermal noise could be neglected). Many approaches have been presented, differing in the type of constraints that limit the number of solutions. Some of these include the following ... [Pg.100]

In the case of a full-spectra search, the complete set of spectral features (absorbance values at p wavelength positions) is compared between the spectrum of the unknown and all spectra contained in the data bank. So-called similarity measures are computed for each individual comparison. In the case of the most commonly employed similarity measure, the Euclidean distance, the spectrum is regarded as a p-dimensional spectral vector (data points at p wavelength positions). The comparison... [Pg.1041]
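A minimal sketch of a full-spectra search with the Euclidean distance; the data bank and the unknown are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 32                                     # wavelength positions
bank = rng.random(size=(10, p))            # 10 library spectra in the data bank
unknown = bank[4] + 0.005 * rng.normal(size=p)   # a slightly noisy copy of entry 4

distances = np.linalg.norm(bank - unknown, axis=1)   # Euclidean distances
hit = int(np.argmin(distances))            # best match: smallest distance
```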

In contemporary search algorithms each spectral vector is normalized to unit length (a unit vector). The length of a vector x (also called absolute value or norm) is given by ... [Pg.1043]

The calculated result corresponds to the cosine of the angle between the spectral vector u of the unknown and the vector b of the library spectrum. If the two vectors u and b are congruent, we obtain cos(u, b) = 1, and thus d_j = 0. [Pg.1043]

A common problem for contemporary search algorithms, caused by varying baseline offsets, can be overcome by centering the spectra (cf. Section 22.2). Centered spectra are obtained by calculating the average ū of a spectral vector u measured at p wavelengths ... [Pg.1043]
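Centering followed by unit-length normalization makes the cosine comparison invariant to a fixed baseline offset, as a small sketch shows (the example spectrum and the offset value are invented):

```python
import numpy as np

u = np.array([0.1, 0.4, 0.9, 0.4, 0.1])   # library spectrum
b = u + 0.25                               # same spectrum with a baseline offset

def centre_normalize(x):
    xc = x - x.mean()                      # subtract the average over p points
    return xc / np.linalg.norm(xc)         # then normalize to unit length

# Cosine between the centred, normalized vectors: the offset cancels exactly.
cos_ub = float(centre_normalize(u) @ centre_normalize(b))
```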

Kullback-Leibler style distances. The Kullback-Leibler divergence is a measure of the distance between two probability distributions. From this, we can derive an expression which calculates the distance between two spectral vectors [251], [472] ... [Pg.512]
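A sketch of such a distance, assuming the spectra are strictly positive and normalized to behave like probability distributions. The symmetrised form used here is one common choice and may differ from the exact expression in the cited references:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence of distribution p from q (p, q > 0, sum to 1)
    return float(np.sum(p * np.log(p / q)))

def kl_distance(x, y):
    # Symmetrised KL divergence between two positive spectral vectors:
    # normalize each to sum to one, then average the two directed divergences.
    p = x / x.sum()
    q = y / y.sum()
    return 0.5 * (kl(p, q) + kl(q, p))

a = np.array([1.0, 2.0, 3.0])
b = np.array([3.0, 2.0, 1.0])
```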

Mean centering eliminates any spectral difference due to a fixed baseline offset that might be present in an individual spectrum. Eq. 3 further shows that the normalization of the spectral vectors to unit length is equivalent, in statistical terms, to the scaling of each spectrum by its standard deviation. [Pg.608]
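The stated equivalence can be checked directly: for a mean-centred spectrum of p points, the Euclidean norm equals sqrt(p) times the (population) standard deviation, so the two scalings differ only by a constant factor. The example values are invented:

```python
import numpy as np

x = np.array([1.0, 4.0, 2.0, 7.0, 6.0])   # a small example spectrum
xc = x - x.mean()                          # mean centring removes a fixed offset

p = x.size
unit = xc / np.linalg.norm(xc)             # normalization to unit length
scaled = xc / xc.std()                     # scaling by the standard deviation
# scaled == sqrt(p) * unit: the two normalizations agree up to a constant.
```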

