Lower dimensional spaces

The high-dimensional nature of LIBS signals can lead to several computational issues when they are used with many machine learning techniques. Dimensionality reduction is the process by which the high-dimensional signals are mapped into a lower dimensional space. The resulting lower dimensional space can enable more robust performance when used in conjunction with pattern recognition techniques. [Pg.278]
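
As a concrete illustration of this idea, here is a minimal sketch, assuming scikit-learn, that maps synthetic high-dimensional "spectra" into a lower dimensional space before classification; the array shapes, class count, and component count are illustrative assumptions, not values from the source.

```python
# Hedged sketch: dimensionality reduction ahead of pattern recognition.
# The synthetic data stands in for high-dimensional spectra (assumption).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))   # 200 signals with 5000 channels (made up)
y = rng.integers(0, 3, size=200)   # 3 hypothetical classes

# Map the 5000-dimensional signals into a 20-dimensional space, then classify.
model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)
print(model.score(X, y))
```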

What PCA actually does is project the data matrix X onto a lower-dimensional space. You start with J variables and end with R orthogonal PCs (where R < J). This projection gives a simplified view of the data, highlighting the important variables. Figure 6.22 shows an example where two points are projected from a three-dimensional space onto a two-dimensional surface. [Pg.260]
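
A small sketch of that projection in NumPy, assuming mean-centred data and an SVD-based PCA; the random 3-D points are stand-ins for the example of Figure 6.22.

```python
import numpy as np

# Project 3-D points (J = 3 variables) onto the plane spanned by the
# first two principal components (R = 2), via the SVD of the centred data.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Xc = X - X.mean(axis=0)              # mean-centre the data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T               # coordinates in the 2-D PC space
print(scores.shape)                  # (50, 2)
```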

When the true intrinsic rank of a data matrix (the number of factors) is properly determined, the corresponding eigenvectors form an orthonormal set of basis vectors that span the space of the original data set. The coordinates of a vector a in an m-dimensional space (for example, a 1 x m mixture spectrum measured at m wavelengths) can be expressed in a new coordinate system defined by a set of orthonormal basis vectors (eigenvectors) in the lower-dimensional space. Figure 4.14 illustrates this concept. The projection of a onto the plane defined by the basis vectors x and y is given by a*. To find the coordinates of any vector on a normalized basis vector, we simply form the inner product. The new vector a*, therefore, has the coordinates a1 = aTx and a2 = aTy in the two-dimensional plane defined by x and y. [Pg.96]
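
The inner-product recipe is short enough to show directly; a minimal sketch with made-up vectors (the orthonormal basis here is simply the standard one, purely for illustration):

```python
import numpy as np

# Coordinates on an orthonormal basis via inner products: a1 = aT x, a2 = aT y.
x = np.array([1.0, 0.0, 0.0])        # orthonormal basis vectors (assumed)
y = np.array([0.0, 1.0, 0.0])
a = np.array([3.0, 4.0, 5.0])        # a vector in m = 3 dimensional space

a1, a2 = a @ x, a @ y                # inner products give the coordinates
a_star = a1 * x + a2 * y             # projection of a onto the x-y plane
print(a1, a2, a_star)                # 3.0 4.0 [3. 4. 0.]
```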

The above intersections take place in the four-dimensional phase space. It is preferable to study them in a space of reduced dimensionality, which we introduce so that the intersections can be seen in a lower-dimensional space [12]. Figure 23 gives a schematic picture of this idea, and Fig. 24 shows the results. [Pg.381]

This approach is particularly efficient when combined with the Cosine coefficient (69) and was used by Pickett et al. in combination with pharmacophore descriptors (70). In lower dimensional spaces the maxsum measure tends to force selection from the corners of diversity space (6b, 71) and hence maxmin is the preferred function in these cases. A similar conclusion was drawn from a comparison of algorithms for dissimilarity-based compound selection (72). [Pg.208]
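
To make the maxmin idea concrete, here is a generic greedy sketch using the Cosine coefficient as the similarity measure; the descriptor matrix is synthetic, and the function illustrates the general technique rather than the exact published algorithms.

```python
import numpy as np

def maxmin_select(X, k, seed=0):
    """Greedy maxmin selection: repeatedly add the candidate whose minimum
    cosine dissimilarity to the already-selected subset is largest."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    D = 1.0 - Xn @ Xn.T                 # dissimilarity = 1 - cosine coefficient
    selected = [seed]
    min_d = D[seed].copy()              # each point's distance to the subset
    for _ in range(k - 1):
        min_d[selected] = -np.inf       # never reselect a chosen compound
        nxt = int(np.argmax(min_d))
        selected.append(nxt)
        min_d = np.minimum(min_d, D[nxt])
    return selected

rng = np.random.default_rng(2)
descriptors = rng.random((100, 8))      # hypothetical pharmacophore descriptors
print(maxmin_select(descriptors, k=5))
```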

In concluding this brief review, we remark that the principal motivation for working with the Poincare map as opposed to the solutions of (2.1), neither of which is explicitly computable in general, is that the Poincare map has its domain in Euclidean n-space whereas solutions of (2.1) must be viewed in Euclidean (n+1)-space - that is, as (t, x(t)). The advantage of working in a lower-dimensional space is the key. We hope the utility of the Poincare map will be evident in the remaining sections of this chapter. [Pg.164]
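
For a periodically forced system, the Poincare map can be realized as a stroboscopic map: sample the state once per forcing period instead of following (t, x(t)) continuously. A minimal sketch with SciPy; the damped forced oscillator and its parameters are illustrative choices, not the system (2.1) of the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

OMEGA = 1.2                          # forcing frequency (assumed)
T = 2 * np.pi / OMEGA                # forcing period

def rhs(t, x):
    # damped oscillator with periodic forcing (illustrative system)
    return [x[1], -0.2 * x[1] - x[0] + 0.5 * np.cos(OMEGA * t)]

x = np.array([1.0, 0.0])
points = []
for i in range(50):                  # iterate the Poincare map 50 times
    sol = solve_ivp(rhs, (i * T, (i + 1) * T), x, rtol=1e-9, atol=1e-12)
    x = sol.y[:, -1]
    points.append(x.copy())
print(np.array(points)[-3:])         # iterates settle toward a fixed point
```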

What degree of lumping can be achieved while maintaining the accuracy of the reduced scheme, i.e., how small can the lower dimensional space be ... [Pg.343]

Concepts that are important for general n-dimensional fuzzy relations include projections to lower-dimensional spaces, cylindric extensions of projections, and cylindric closures. These concepts are simple generalizations of their classical counterparts, and it is not essential to cover them here. It is more important to introduce some key concepts regarding fuzzy binary relations, which have a broad applicability. [Pg.41]
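
Even though the text passes over them, the projection and cylindric-extension operations are easy to sketch for a binary relation; the membership grades below are made up for illustration.

```python
import numpy as np

# Fuzzy binary relation R on X x Y, given by membership grades (made up).
R = np.array([[0.2, 0.9, 0.4],
              [0.0, 0.5, 1.0]])

proj_X = R.max(axis=1)               # projection onto X: sup over y
proj_Y = R.max(axis=0)               # projection onto Y: sup over x

# Cylindric extension of proj_X back to X x Y, and the cylindric closure
# of the two projections (elementwise minimum of the extensions).
cyl_X = np.repeat(proj_X[:, None], R.shape[1], axis=1)
closure = np.minimum(cyl_X, proj_Y[None, :])
print(proj_X, proj_Y, closure, sep="\n")
```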

To apply this method to the situation instances, it was necessary to convert the data items into a vector space. For each situation, the distance to all others could be calculated, based on the signal-wise distance between the recognized patterns. This made it possible to apply the FastMap method [664] to create a lower dimensional space that approximately represented the situations' distance matrix. Unfortunately, this preprocessing made it impossible to apply the fuzzy-ID3-based rule generation mechanism of MIDAS [969], as no conclusions about the process parameters could be drawn from rules about the generated vector space. [Pg.690]
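
A compact sketch of the FastMap idea (Faloutsos and Lin): each pass picks two distant pivot objects, assigns every object a coordinate from its distances to the pivots, and recurses on the residual distances. The pivot heuristic is simplified here, and the distance matrix is synthetic.

```python
import numpy as np

def fastmap(D, k):
    """Embed objects with pairwise distance matrix D into k dimensions."""
    n = D.shape[0]
    X = np.zeros((n, k))
    D2 = D.astype(float) ** 2                  # work with squared distances
    for dim in range(k):
        a = int(np.argmax(D2[0]))              # simplified pivot heuristic:
        b = int(np.argmax(D2[a]))              # far from object 0, then far from a
        dab2 = D2[a, b]
        if dab2 <= 1e-12:
            break                              # remaining distances ~ 0
        X[:, dim] = (D2[a] + dab2 - D2[b]) / (2.0 * np.sqrt(dab2))
        # residual squared distances for the next coordinate
        D2 = np.clip(D2 - (X[:, dim, None] - X[None, :, dim]) ** 2, 0.0, None)
    return X

rng = np.random.default_rng(3)
pts = rng.normal(size=(30, 10))                # synthetic "situations"
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = fastmap(D, k=2)                          # 2-D map approximating D
```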

We can now consider a quantitative approach to chirality. Chirality can be defined for objects of n-dimensional space. When n = 2 we have the case of chirality of objects embedded in a plane. For example, benzanthracene (Figure 33) when embedded in a plane is chiral, since it cannot be brought into coincidence with its mirror image if we allow only sliding of the figure within the plane. Transformation of one of the benzanthracene 2D enantiomers into the other requires one to take the molecule out of the plane in which it is embedded. Hence, an object that is chiral in a lower dimensional space will not remain chiral if placed into a space of a higher dimension. [Pg.223]

Grouping of analytical data is possible either by means of clustering methods or by projecting the high-dimensional data onto a lower dimensional space. Since there is no supervisor in the sense of known membership of objects in classes, these methods are performed in an unsupervised manner. [Pg.140]
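
A minimal sketch of the two unsupervised routes side by side, assuming scikit-learn; the data matrix is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = rng.normal(size=(150, 12))       # hypothetical analytical data

# Both steps use no class labels, i.e. they are unsupervised.
groups = KMeans(n_clusters=3, n_init=10).fit_predict(X)    # clustering
scores = PCA(n_components=2).fit_transform(X)              # projection
print(groups[:10], scores.shape)
```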

Principal component analysis A statistical technique to summarize the observed high-dimensional data in a lower dimensional space through the linear combination of observed variables. It is commonly used to summarize genetic background information across study subjects. [Pg.308]

The location of each observation in the lower dimensional space is described by the distance along the latent variables. These values (i.e., the distances from the origin after mean centering) are called the scores. Since the data are mean centered prior to PCA, the scores may have positive or negative values. The location of all observations in the lower dimensional space can be visualized in score plots. These plots are most commonly viewed in two dimensions and provide a way to illustrate the structured variation in the data. [Pg.756]
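
A short sketch of a two-dimensional score plot, assuming scikit-learn and matplotlib; note that scikit-learn's PCA mean-centres internally, so the scores scatter around the origin with positive and negative values.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 20))                  # synthetic observations
scores = PCA(n_components=2).fit_transform(X)  # centred, so signed scores

plt.scatter(scores[:, 0], scores[:, 1])
plt.axhline(0, lw=0.5)
plt.axvline(0, lw=0.5)
plt.xlabel("PC1 score")
plt.ylabel("PC2 score")
plt.show()
```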

The last four chapters of this book have all been concerned with methods that handle multiple independent (descriptor) variables. This has included techniques for displaying multivariate data in lower dimensional space, determining relationships between points in N dimensions and fitting models between multiple descriptors and a single response variable, continuous or discrete. Hopefully, these examples have shown the power of multivariate techniques in data analysis and have demonstrated that the information contained in a data set will often be revealed only by consideration of all of the data at once. What is true for the analysis of multiple descriptor variables is also true for the analysis of multiple... [Pg.162]

The main feature of clustering algorithms is that they reduce the number of data cases by grouping them into classes. The projection methods described below can be used for reducing the data dimensionality. The goal of these techniques is to represent the data set in a lower-dimensional space in such a way that certain properties of the data set are preserved as much as possible. [Pg.252]

The treatment here, due to Wei and Kuo, lumps first-order reactions with f = -Kc. It projects the system onto a lower dimensional space via a linear transformation ĉ = Mc, where M is an ĥ × n lumping matrix. Thus M transforms the n-tuple vector c into an ĥ-tuple vector ĉ of a lower rank ĥ < n. [Pg.221]
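
A tiny numerical sketch of exact linear lumping for the first-order network A → B → C, merging B and C into one pseudo-species; the rate constants are made-up values, and the sign convention follows the text's f = -Kc.

```python
import numpy as np

# First-order network A -> B -> C with dc/dt = -K c (sign convention as above).
k1, k2 = 2.0, 5.0
K = np.array([[ k1, 0.0, 0.0],
              [-k1,  k2, 0.0],
              [0.0, -k2, 0.0]])

# Lumping matrix M (2 x 3): keep A, merge B and C into one pseudo-species.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])

# Exact lumping requires M K = K_hat M for some lower-dimensional K_hat.
K_hat = np.array([[ k1, 0.0],
                  [-k1, 0.0]])
print(np.allclose(M @ K, K_hat @ M))   # True: this lumping is exact
```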

Such dimensional reduction may be achieved by Multidimensional Scaling (MDS) techniques. MDS projects the n-dimensional distances into a lower-dimensional space (2 or 3 dimensions) under the constraint of maximizing the retention of the structure of the inter-molecular distance matrix. The representation of the n-dimensional space is optimized in the lower-dimensional projection by minimizing what is known as 'stress'. The smaller the stress, the better the projection, up to the (generally unattainable) limit of zero, which is a projection that preserves the distance matrix completely. The quality of the projection can be gleaned from what is known as a 'Shepard diagram'. [Pg.76]
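
A minimal sketch with scikit-learn's MDS on a synthetic distance matrix, reporting the stress and collecting the distance pairs a Shepard diagram would plot; all data here are made up.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 15))                # hypothetical descriptor space
D = squareform(pdist(X))                     # inter-molecular distance matrix

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
Y = mds.fit_transform(D)                     # 2-D projection
print(mds.stress_)                           # smaller stress = better projection

# Shepard diagram data: original vs. projected distances; points close to
# the diagonal indicate that the distance matrix is well preserved.
d_high, d_low = pdist(X), pdist(Y)
```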

The nonclassical mathematical description of the world follows Eq. (1), which in practice is solved by the separation of space and time variables. Although this is a good approximation, it cannot render four-dimensional effects intelligible in three dimensions. The problem is highlighted by analogy with efforts to describe geometrical shapes in lower-dimensional space. [Pg.140]

