
Vector characteristic

We must now mention that, traditionally, outliers have a different definition and even a different interpretation, especially in chemometrics. Suppose that we have a k-dimensional characteristic vector, i.e., k different molecular descriptors are used. If we imagine a k-dimensional hyperspace, then the dataset objects will occupy different places in it. Some of them will tend to group together, while others will be allocated to more remote regions. One can by convention define a margin beyond which the realm of "strong" outliers starts; "moderate" outliers stay near this margin. [Pg.213]

The benefits of using this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS. Therefore, an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time; as our observations have shown, the same is not the case with PLS. SVD as a data transformation technique therefore enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables as an input vector of much lower dimensionality for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95%) in a few initial columns of the scores matrix. [Pg.217]
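
A minimal sketch of this compression step. The 100 × 50 descriptor matrix, the random seed, and the 95 % variance threshold are illustrative assumptions, not values from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # 100 compounds x 50 molecular descriptors (synthetic)
X -= X.mean(axis=0)                   # mean-center before decomposition

# SVD: the first few columns of the scores matrix T = U * s carry most of the information
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(explained, 0.95)) + 1

T = U[:, :k] * s[:k]                  # low-dimensional latent-variable input for a neural network
print(f"{k} latent variables retain {explained[k - 1]:.1%} of the variance")
```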

The idea behind this approach is simple. First, we compose the characteristic vector from all the descriptors we can compute. Then, we define the maximum length of the optimal subset, i.e., the input vector we shall actually use during modeling. As mentioned in Section 9.7, there is always some threshold beyond which an increase in the dimensionality of the input vector decreases the predictive power of the model. Note that the correlation coefficient will always improve with an increase in the input vector dimensionality. [Pg.218]
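
The second point is easy to demonstrate on pure noise (all sizes and values below are illustrative assumptions): the in-sample fit correlation climbs as descriptors are added even when there is nothing to model, which is exactly why it cannot guide the subset size:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 60, 30
X = rng.normal(size=(n, p))
y = rng.normal(size=n)                 # pure noise: no real relationship to find

for k in (1, 5, 10, 20, 30):
    Xk = X[:, :k]                      # take the first k descriptors
    beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    r = np.corrcoef(Xk @ beta, y)[0, 1]
    print(k, round(r, 3))              # correlation keeps climbing on noise
```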

Next, we select some "pillar" compounds inside each (or some) of those subclasses, i.e., those having the highest norm of the characteristic vector. We can employ two pillars, the "lowest" (that with the lowest norm) along with the "highest", and keep only those compounds which are reasonably dissimilar to the pillar (or to both pillars). The threshold of "reasonability" is to be set by the user. [Pg.221]
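
A sketch of this pillar filter, assuming Euclidean distance as the dissimilarity measure (the source does not fix the measure, and the threshold is left to the user):

```python
import numpy as np

def pillar_filter(X, threshold):
    """Keep only compounds reasonably dissimilar to both 'pillar' compounds,
    chosen as the characteristic vectors of highest and lowest norm."""
    norms = np.linalg.norm(X, axis=1)
    high, low = X[np.argmax(norms)], X[np.argmin(norms)]
    d_high = np.linalg.norm(X - high, axis=1)   # distance to the "highest" pillar
    d_low = np.linalg.norm(X - low, axis=1)     # distance to the "lowest" pillar
    return np.where((d_high > threshold) & (d_low > threshold))[0]
```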

First, one can check whether a randomly compiled test set is within the modeling space before employing it for PCA/PLS applications. Suppose one has calculated the scores matrix T and the loading matrix P with the help of a training set. Let z be the characteristic vector (that is, the set of independent variables) of an object in a test set. Then we must first calculate the scores vector of the object (Eq. (14)). [Pg.223]
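
The projection itself is a single matrix product, t = z P. The acceptance rule sketched below (scores within a few standard deviations of the training scores) is an assumed stand-in for the source's full Eq. (14)-based check:

```python
import numpy as np

def within_modeling_space(z, P, T_train, k=3.0):
    """Project the test object's characteristic vector z onto the loadings P
    (t = z @ P) and compare t with the spread of the training scores T_train."""
    t = z @ P
    mu, sd = T_train.mean(axis=0), T_train.std(axis=0)
    return bool(np.all(np.abs(t - mu) <= k * sd))
```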

The Kohonen Self-Organizing Maps can be used in a similar manner. Suppose x_k, k = 1, ..., N, is the set of input (characteristic) vectors and w_ij, i = 1, ..., I, j = 1, ..., J, is that of the trained network, one for each (i, j) cell of the map; N is the number of objects in the training set, and I and J are the dimensionalities of the map. Now, we can compare each x_k with the w_ij of the particular cell to which the object was allocated. This procedure will enable us to detect the maximal (e_max) and minimal (e_min) errors of fitting. Hence, if the error calculated in the way just mentioned is outside the range between e_min and e_max, the object probably does not belong to the training population. [Pg.223]
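
A minimal sketch of this outlier test, assuming the trained map is stored as an I × J × d weight array (the data layout and the Euclidean error measure are assumptions):

```python
import numpy as np

def fit_error(x, W):
    """Distance from x to the weight vector of its best-matching map cell."""
    d = np.linalg.norm(W - x, axis=2)             # distance to every (i, j) cell
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return d[i, j]

def in_training_population(x, W, e_min, e_max):
    """Flag x as foreign if its fit error lies outside [e_min, e_max],
    the range observed over the training set."""
    return e_min <= fit_error(x, W) <= e_max
```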

This can be modified, if desirable, by the introduction of a convenient scale factor from time to time. If |λ_1| > |λ_2|, then for sufficiently large n, the vectors approach the direction of the characteristic vector belonging to λ_1, and... [Pg.69]
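
This is the power method. A compact sketch, with the "convenient scale factor" applied at every step for simplicity:

```python
import numpy as np

def power_iteration(A, n_iter=200, seed=0):
    """Repeated multiplication by A turns a random start vector toward the
    characteristic vector of the dominant root when |lambda_1| > |lambda_2|."""
    v = np.random.default_rng(seed).normal(size=A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)      # the scale factor keeps the iterates bounded
    return v @ A @ v, v             # Rayleigh-quotient estimate of lambda_1, and its vector
```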

Then the first column of S is the characteristic vector belonging to λ_1. But... [Pg.69]

Just as a known root of an algebraic equation can be divided out, and the equation reduced to one of lower order, so a known root and the vector belonging to it can be used to reduce the matrix to one of lower order whose roots are the yet unknown roots. In principle this can be continued until the matrix reduces to a scalar, which is the last remaining root. The process is known as deflation. Quite generally, in fact, let P be a matrix of, say, p linearly independent columns such that each column of AP is a linear combination of columns of P itself. In particular, this will be true if the columns of P are characteristic vectors. Then... [Pg.71]
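
For a symmetric matrix, the simplest deflation subtracts the known root/vector pair directly (Hotelling deflation). This sketch assumes symmetry and a normalized vector, neither of which the text requires in general:

```python
import numpy as np

def deflate(A, lam, v):
    """Remove the known root lam and its characteristic vector v from a
    symmetric A; the power method can then find the next root of the result."""
    v = v / np.linalg.norm(v)
    return A - lam * np.outer(v, v)
```

Combined with the power method sketched above, deflation can in principle be repeated until the matrix is exhausted, although rounding errors accumulate with each step.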

A final word may be said about appraising the accuracy of a computed root. No usable rigorous bound to the errors is known that does not require approximations to all roots and vectors. Suppose, then, that X is a matrix whose columns are approximations to the characteristic vectors, and form... [Pg.78]

Banachiewicz method, 67; characteristic roots, 67; characteristic vectors, 67; Cholesky method, 67; Danilevskii method, 74; deflation, 71; derogatory form, 73; equations of motion, 418; Givens method, 75; Hessenberg form, 73; Hessenberg method, 75; Householder method, 75; Jacobi method, 71; Krylov method, 73; Lanczos form, 78; method of modification, 67; method of relaxation, 62; method of successive displacements, ... [Pg.778]

An eigenvector or characteristic vector is a nontrivial normalized vector v (distinct from 0) which satisfies the eigenvector relation ... [Pg.33]
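
The elided relation is presumably the standard one, with λ the corresponding characteristic root (eigenvalue):

```latex
A\mathbf{v} = \lambda \mathbf{v}, \qquad \mathbf{v} \neq \mathbf{0}
```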

A model of lamellae formation in stretched networks is proposed. Approximately one-half of the chains do not fold. Formation of such lamellae is accompanied by declining stress. Highly folded systems (high crystallinity), however, can cause a stress increase. In the calculations, crosslinks are assigned to their most probable positions through the use of a characteristic vector. A contingent of amorphous chains is also included. The calculations suggest that the concept of fibrillar-lamellar transformations may be unnecessary to explain observed stress-temperature profiles in some cases. [Pg.293]

Most probable positions of the chains are determined by the use of a characteristic vector r. This vector is representative of an average network chain of N links (the average number of links per chain). It deforms affinely, whereas the actual network chains might not, and its value depends only upon network deformation. Crystallization leaves r essentially unaltered, since the minuscule volume contraction brought about by crystallization can be ignored. But real network chains are severely displaced by crystallization. These displacements, however, must be compatible with the immutability of r. So, in a sense, the characteristic vector r limits the configurational variations of the chains to those consistent with a fixed network shape and size at a given deformation. [Pg.305]

Another technique that has been used in recent years to deal with medium effects on basicity determinations is factor analysis,104 also known as characteristic vector analysis.105 This technique, first developed by Reeves,100 can be used for correcting for medium effects, but only if used with considerable care. It has been shown that the basic technique developed by Edward and Wong105 does... [Pg.22]

In the generalized regression model, if the K columns of X are characteristic vectors of Ω, then ordinary least squares and generalized least squares are identical. (The result is actually a bit broader: X may be any linear combination of exactly K characteristic vectors. This result is Kruskal's Theorem.)... [Pg.39]
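
A quick numerical confirmation of Kruskal's Theorem; the matrix sizes and random data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 8, 2
M = rng.normal(size=(n, n))
Omega = M @ M.T + n * np.eye(n)       # a positive-definite disturbance covariance
w, V = np.linalg.eigh(Omega)
X = V[:, :K]                          # K columns of X = characteristic vectors of Omega
y = rng.normal(size=n)

Oinv = np.linalg.inv(Omega)
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
print(np.allclose(b_ols, b_gls))      # True: OLS and GLS coincide
```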

Suppose that A is an n×n matrix of the form A = (1 − ρ)I + ρii′, where i is a column of 1s and 0 < ρ < 1. Write out the format of A explicitly for n = 4. Find all of the characteristic roots and vectors of A. (Hint: There are only two distinct characteristic roots, which occur with multiplicity 1 and n−1. Every vector c of a certain type is a characteristic vector of A.) For an application which uses a matrix of this type, see Section 14.5 on the random effects model. [Pg.120]
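
A numerical check for n = 4 with an assumed ρ = 0.5 (any 0 < ρ < 1 behaves the same way): the two roots are 1 + (n − 1)ρ, with multiplicity 1 and the vector of 1s as its characteristic vector, and 1 − ρ, with multiplicity n − 1, belonging to every vector whose elements sum to zero.

```python
import numpy as np

n, rho = 4, 0.5
A = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
print(A)                                # the explicit 4 x 4 form
print(np.linalg.eigvalsh(A))            # [0.5, 0.5, 0.5, 2.5]
print(A @ np.array([1., -1., 0., 0.]))  # = (1 - rho) * v: a zero-sum vector is a characteristic vector
```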

The problem lies in the model. The Euclidean distance calculation is inappropriate for use with correlated variables because it is based only on pairwise comparisons, without regard to the elongation of data-point swarms along particular axes. In effect, Euclidean distance imposes a spherical constraint on the data set (18). When correlation has been removed from the data (by derivation of standardized characteristic vectors), Euclidean distance and average-linkage cluster analysis return the three groups. [Pg.66]
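
A sketch of that standardization step (whitening onto standardized characteristic vectors), assuming full-rank data; in the whitened space, Euclidean distance equals Mahalanobis distance in the original space:

```python
import numpy as np

def whiten(X):
    """Rotate mean-centered data onto the characteristic vectors of its
    covariance matrix and scale each axis to unit variance, so Euclidean
    distance no longer sees the elongation of the point swarm."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ vecs / np.sqrt(vals)   # assumes all vals > 0 (full-rank data)
```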

Silvestri, A. J., Prater, C. D., and Wei, J., On the structure and analysis of complex systems of first order chemical reactions containing irreversible steps. II. Projection properties of the characteristic vectors. Chem. Eng. Sci. 23, 1191 (1968). [Pg.78]

Eigenanalysis: An analysis to determine the characteristic vectors (eigenvectors) of a matrix. These are a measure of the principal axes of the matrix. [Pg.723]

III. A Convenient Method for Computing the Characteristic Vectors and Roots... [Pg.204]

At equilibrium, (da_i/dt) = 0 for all a_i. Therefore, Ka = 0 = 0·a; consequently, the equilibrium vector a is a characteristic vector of the system and has a characteristic root of zero. We shall limit our attention to reversible systems in which it is possible to go from any species A_i to any other species A_j either directly or through a sequence of other species. Such systems do not contain subsystems that are isolated from each other, and each system has, therefore, a unique equilibrium point. For such systems, there can be no other characteristic vectors with λ = 0, since the equilibrium vector, which does not decay, already accounts for all the mass in the system. Let this equilibrium vector correspond to the species B_0; then the first equation of Eqs. (24) is replaced by... [Pg.223]
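
A small numerical illustration for a reversible three-species chain A1 ⇌ A2 ⇌ A3 (the rate constants are assumptions): the rate matrix K has column sums of zero, so zero is a characteristic root, and its characteristic vector is the equilibrium composition.

```python
import numpy as np

k12, k21, k23, k32 = 1.0, 2.0, 0.5, 1.5   # assumed forward/reverse rate constants
K = np.array([[-k12,          k21,   0.0],
              [ k12, -(k21 + k23),   k32],
              [ 0.0,          k23,  -k32]])

vals, vecs = np.linalg.eig(K)
i = np.argmin(np.abs(vals))               # the root closest to zero
a_eq = np.real(vecs[:, i])
a_eq /= a_eq.sum()                        # normalize to unit total mass
print(np.round(a_eq, 4))                  # [0.6, 0.3, 0.1]: the equilibrium vector
```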

