Big Chemical Encyclopedia


Matrices characteristic vectors

The benefits of this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS, so an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time. As our observations have shown, the same is not the case with PLS. SVD as a data-transformation technique therefore enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables of much lower dimensionality as the input vector for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95%) in a few initial columns of the scores matrix. [Pg.217]
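As a minimal sketch of this idea (the data, dimensions, and variable names here are invented for illustration), SVD compresses a wide descriptor matrix into a few score columns that carry nearly all of the variance:

```python
import numpy as np

# Hypothetical data: 50 objects described by 200 molecular descriptors,
# generated from only 3 underlying factors plus a little noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 3))
mixing = rng.normal(size=(3, 200))
X = latent @ mixing + 0.01 * rng.normal(size=(50, 200))

# SVD: X = U S V^T.  The scores matrix is T = U S.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
variance = s**2 / np.sum(s**2)

# Most of the relevant information sits in the first few score columns,
# so a 3-column score matrix can replace the 200-descriptor input vector
# when training a neural network.
T_reduced = (U * s)[:, :3]   # variance[:3].sum() is close to 1 here
```

The network is then trained on `T_reduced` instead of the full descriptor block, cutting the input dimensionality from 200 to 3.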

First, one can check whether a randomly compiled test set lies within the modeling space before employing it for PCA/PLS applications. Suppose one has calculated the scores matrix T and the loading matrix P with the help of a training set. Let z be the characteristic vector (that is, the set of independent variables) of an object in the test set. We must then first calculate the scores vector of the object (Eq. (14)). [Pg.223]
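A sketch of this check, assuming mean-centred data and the standard projection t = zP for the scores vector (Eq. (14) is not reproduced in the excerpt; the data and dimensions below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 6))    # training set: 20 objects, 6 variables
mean = X.mean(axis=0)
Xc = X - mean                   # mean-centre with the training mean

# PCA via SVD: Xc = T P^T, with scores T and loadings P (2 components kept).
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                    # loadings, 6 x 2
T = Xc @ P                      # scores of the training objects

# Scores vector of a new test object z: project with the training loadings.
z = rng.normal(size=6)
t = (z - mean) @ P

# Crude modelling-space check: does t fall inside the box spanned by
# the training scores?  (Real applications use leverage/residual limits.)
inside = bool(np.all((t >= T.min(axis=0)) & (t <= T.max(axis=0))))
```

Only if the test object passes such a check should its prediction from the PCR/PLS model be trusted.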

Just as a known root of an algebraic equation can be divided out, and the equation reduced to one of lower order, so a known root and the vector belonging to it can be used to reduce the matrix to one of lower order whose roots are the yet unknown roots. In principle this can be continued until the matrix reduces to a scalar, which is the last remaining root. The process is known as deflation. Quite generally, in fact, let P be a matrix of, say, p linearly independent columns such that each column of AP is a linear combination of columns of P itself. In particular, this will be true if the columns of P are characteristic vectors. Then... [Pg.71]
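For the symmetric case, deflation can be sketched with Hotelling's construction: subtract λvv′ for a known root λ and its normalised characteristic vector v, and the remaining roots of the deflated matrix are the yet-unknown roots. The matrix below is an arbitrary example, not one from the text:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Reference answer, used only to pick a known root/vector pair.
lam, V = np.linalg.eigh(A)       # roots ascending: lam[0] < lam[1]
v = V[:, -1]                     # normalised vector of the largest root

# Deflate: remove the known root.  The deflated matrix has roots
# {lam[0], 0}, so its largest root is the next unknown root of A.
A_deflated = A - lam[-1] * np.outer(v, v)
roots_left = np.linalg.eigvalsh(A_deflated)
```

In principle this is repeated until the matrix is exhausted, exactly as the passage describes for the general case.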

A final word may be said about appraising the accuracy of a computed root. No usable rigorous bound to the errors is known that does not require approximations to all roots and vectors. Suppose, then, that X is a matrix whose columns are approximations to the characteristic vectors, and form... [Pg.78]

The final matrix characteristic covered here involves differentiation of a function of a vector with respect to that vector. Suppose f(x) is a scalar function of n variables (x1, x2, ..., xn). The first partial derivative of f(x) with respect to x is ...
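As an illustration of differentiating a scalar function with respect to a vector (the excerpt breaks off before the formula), the gradient of the quadratic form f(x) = x′Ax is (A + A′)x; a finite-difference check confirms it. A and x are arbitrary examples:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([1.0, -2.0])

def f(v):
    # Scalar function of the vector v: the quadratic form v'Av.
    return v @ A @ v

# Analytic gradient of x'Ax with respect to x.
grad_analytic = (A + A.T) @ x

# Central finite differences along each coordinate direction.
eps = 1e-6
grad_numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
```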

Suppose that A is an n×n matrix of the form A = (1 − ρ)I + ρii′, where i is a column of 1s and 0 < ρ < 1. Write out the format of A explicitly for n = 4. Find all of the characteristic roots and vectors of A. (Hint: there are only two distinct characteristic roots, which occur with multiplicities 1 and n − 1. Every vector c of a certain type is a characteristic vector of A.) For an application which uses a matrix of this type, see Section 14.5 on the random effects model. [Pg.120]
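A quick numerical check of the hinted answer: the roots are 1 − ρ with multiplicity n − 1 (with characteristic vectors c whose elements sum to zero, i.e. c′i = 0) and 1 + (n − 1)ρ with multiplicity 1 (with vector i itself). The values of n and ρ below are arbitrary:

```python
import numpy as np

n, rho = 4, 0.3
i = np.ones((n, 1))
A = (1 - rho) * np.eye(n) + rho * (i @ i.T)   # equicorrelation matrix

roots = np.linalg.eigvalsh(A)    # ascending order
# roots[:-1] should all equal 1 - rho; roots[-1] should be 1 + (n-1)*rho.
```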

Generally, a set of coupled diffusion equations arises for multiple-component diffusion when N > 3. The least complicated case is for ternary (N = 3) systems that have two independent concentrations (or fluxes) and a 2 x 2 matrix of interdiffusivities. A matrix and vector notation simplifies the general case. Below, the equations are developed for the ternary case along with a parallel development using compact notation for the more extended general case. Many characteristic features of general multicomponent diffusion can be illustrated through specific solutions of the ternary case. [Pg.134]
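A sketch of the decoupling idea for the ternary case: the characteristic vectors of the 2×2 interdiffusivity matrix transform the two coupled diffusion equations into independent Fick-type equations, one per characteristic root. The matrix entries here are invented:

```python
import numpy as np

# Hypothetical 2 x 2 matrix of interdiffusivities for a ternary system
# (two independent concentrations).
D = np.array([[2.0, 0.5],
              [0.3, 1.0]])

# Characteristic roots and vectors of D.
lam, V = np.linalg.eig(D)

# In the basis of characteristic vectors the system is diagonal:
# each transformed concentration diffuses independently with its
# own eigen-diffusivity lam[k].
D_decoupled = np.linalg.inv(V) @ D @ V   # diagonal up to round-off
```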

Eigenanalysis An analysis to determine the characteristic vectors (eigenvectors) of a matrix. These are a measure of the principal axes of the matrix. [Pg.723]

The vectors x1 and x2 could have been chosen to have a direction opposite to the choice made in Fig. 9a. This choice corresponds to the displaced vectors x″1 and x″2 in Fig. 9b. They form with x′1 and x′2 two straight lines that extend across the reaction triangle and that intersect at the equilibrium point as shown. Either of the unit characteristic vectors corresponding to x′1 and x″1 may be combined with either of the unit characteristic vectors corresponding to x′2 and x″2 and the vector x0 to give a matrix X for making the required transformations. [Pg.229]

The principle of detailed balancing provides the means for making a further transformation to a third coordinate system in which the characteristic directions are orthogonal to each other. The transformation is discussed in detail in Appendix I, but we have already made use of this orthogonal B system in obtaining the inverse matrix (Section II,B,2,c). The n(n − 1)/2 relations provided by the principle of detailed balancing are the requirements that the unit characteristic vectors xj must be orthogonal to each other after this transformation. [Pg.239]

Although only four figures are obtained in the experimental characteristic composition, we shall make the characteristic vectors self-consistent to six figures, since the accuracy of the method for obtaining the inverse matrix given in Section II,B,2,c depends on the self-consistency of the characteristic vectors. In addition, the use of six figures will reduce the accumulation of errors caused by the computation procedure. Using Eq. (85) to calculate x1 from Eq. (134), we have... [Pg.262]

Up to this point we have used the set of characteristic vectors obtained by multiplying column matrices from the left by the matrix K. On the other... [Pg.281]

The lengths of the column vectors that compose the matrix X are arbitrary insofar as they are defined by Eqs. (59) and (62), and the particular choice is governed by other considerations (Section II,B,2,i). Thus, except for an arbitrary choice of lengths, the left characteristic vectors are the rows of the inverse matrix X⁻¹, and the characteristic roots corresponding to the left and right characteristic vectors are the same. [Pg.282]

For this matrix, the left characteristic vectors with λ = 0 are... [Pg.283]

The same results may be obtained more easily by calculating the characteristic vectors and roots (Appendix III) for the original and perturbed rate constant matrix K and then comparing the compositions calculated by means of the matrix [Eq. (78)] corresponding to each rate constant matrix. But the same results may be obtained still more easily by means of a first order perturbation calculation when the changes in the values of the rate constants are relatively small. The equations needed for this perturbation calculation will now be derived. Since almost all monomolecular... [Pg.303]

To summarize, the characteristic vectors and roots of the matrix K = A + ... are given, to first-order terms in ε, by... [Pg.306]

Acrivos and Amundson (50, 51) applied matrix algebra to the unsteady-state behavior of stagewise operations in chemical processes. Instead of using a characteristic vector expansion, they emphasize the use of the Sylvester-Lagrange-Buchheim formula (52). Even though this formula is equivalent to the characteristic vector expansion, it is more difficult to manipulate and is not easily related to physical concepts such as straight-line reaction paths. [Pg.357]

Thus, the matrix P⁻¹KP has the same characteristic roots −λi as K, even though the characteristic vectors differ. [Pg.366]

It will be convenient to determine the left rather than the right characteristic vectors. Except for a possible discrepancy in length, which determines the size of unit amounts of the various species, these vectors form the rows of the inverse matrix used to transform compositions from the A to the B system of coordinates (see footnote, Section IV,A,4,a). These ... [Pg.374]

III. A Convenient Method for Computing the Characteristic Vectors and Roots of the Rate Constant Matrix K... [Pg.376]

The vector g1 only changes its length, within the accuracy limits set, under the action of the matrix G, and is therefore the characteristic vector sought. Since... [Pg.379]

The method given above always converges to the characteristic vector with the largest decay constant. Hence, to determine the characteristic vector with the second largest decay constant, a matrix must be determined for which the effects of the vectors with the largest roots are removed. To do this, we shall return to the use of the rate constant matrix K. Furthermore, we shall use the rate constant matrix K for the orthogonal system, which is related to the rate constant matrix K by (Appendix I,A)... [Pg.379]
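The procedure described above can be sketched as power iteration followed by deflation. This toy version uses a small symmetric matrix with positive roots (the text works with the rate constant matrix K and its decay constants, so this is an illustration of the idea, not the book's exact scheme):

```python
import numpy as np

def power_iteration(G, tol=1e-12, max_iter=10_000):
    """Return the characteristic vector with the largest root of G,
    together with that root (Rayleigh quotient)."""
    g = np.ones(G.shape[0])
    g /= np.linalg.norm(g)
    g_new = g
    for _ in range(max_iter):
        g_new = G @ g
        g_new /= np.linalg.norm(g_new)    # only the length changes at convergence
        if np.linalg.norm(g_new - g) < tol:
            break
        g = g_new
    return g_new, g_new @ G @ g_new

G = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Dominant root and vector.
g1, root1 = power_iteration(G)

# Remove the effect of the largest root, then iterate again to get
# the vector with the second largest root.
g2, root2 = power_iteration(G - root1 * np.outer(g1, g1))
```

For this G the exact roots are (5 ± √5)/2, which the iteration recovers.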

A diagonal matrix with diagonal elements equal to ai
The equilibrium point in composition space
The equilibrium point corresponding to the rth choice of the characteristic vector x0(r) [Pg.383]

Gibbs free energy
Gibbs free energy at equilibrium
Characteristic directions of the matrix G
Characteristic vectors of the matrix G
The constant... [Pg.383]

The diagonal matrix with diagonal elements equal to the lengths of the characteristic vectors zj
The inverse of the matrix... [Pg.384]

A vector terminating rth of the distance between two characteristic vectors with zero characteristic roots
The characteristic matrix for the unperturbed rate constant matrix... [Pg.386]

