Big Chemical Encyclopedia


Eigenvalue algorithms

CPSA Charged Partial Surface Area; FEVA First Eigenvalue Algorithm... [Pg.1214]

M. Geradin, N. Kill, "Eigenvalue Algorithms for Stability and Critical Speeds Analysis of Rotating Systems," 6th International Modal Analysis Conference, February 1-4, 1988. [Pg.130]

In order to determine whether the system is stable or unstable, the two polynomials are combined as shown in the Method of Solution, with the polynomial from the numerator of the transfer function scaled by the multiplier. Function NRsdivision (which uses the Newton-Raphson method with the synthetic division algorithm) or function roots (which uses an eigenvalue algorithm) is called to calculate the roots of the overall polynomial, and the roots are checked for positive real parts. A flag named stbl indicates whether the system is stable (all roots negative, stbl = 1) or unstable (at least one positive root, stbl = 0). [Pg.39]
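The stability test described above can be sketched in a few lines. This is an illustration, not the book's NRsdivision/roots code: `numpy.roots` finds polynomial roots via an eigenvalue algorithm applied to the companion matrix (the same approach as MATLAB's `roots`), and the flag is set from the signs of the real parts.

```python
import numpy as np

def stability_flag(coeffs):
    """Return stbl = 1 if all roots lie in the left half-plane, else 0.

    coeffs -- characteristic-polynomial coefficients, highest degree first.
    """
    roots = np.roots(coeffs)              # eigenvalues of the companion matrix
    return 1 if np.all(roots.real < 0) else 0

# s^2 + 3s + 2 = (s + 1)(s + 2): both roots negative -> stable
print(stability_flag([1, 3, 2]))      # 1
# s^2 - s - 2 = (s - 2)(s + 1): root at s = 2 -> unstable
print(stability_flag([1, -1, -2]))    # 0
```

The eigenvalue route is generally more robust than Newton-Raphson with synthetic division for polynomials with clustered or complex roots, at the cost of building and diagonalizing an n-by-n companion matrix.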

The problem is then reduced to the representation of the time-evolution operator [104,105]. For example, the Lanczos algorithm can be used to generate eigenvalues of H, from which the representation of the exponentiated operator is set up. Again, the methods are based on matrix-vector operations, but now much larger steps are possible. [Pg.259]
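A minimal sketch of this idea (our own illustration, not the cited work's code, and assuming no Krylov breakdown): a small Krylov basis for H is built by Lanczos recursion, H restricted to that basis is a tridiagonal matrix T, and exp(-iHt)v is approximated through the eigenvalues of T rather than of H itself.

```python
import numpy as np

def lanczos_expm(H, v, t, m=20):
    """Approximate exp(-i*H*t) @ v in an m-dimensional Krylov subspace."""
    n = len(v)
    m = min(m, n)
    Q = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = H @ Q[:, j]
        alpha[j] = np.vdot(Q[:, j], w).real
        # Full reorthogonalisation keeps the small basis numerically orthogonal.
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].conj().T @ w)
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # H in the Krylov basis: a real symmetric tridiagonal matrix T.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)          # cheap small eigenvalue problem
    phases = np.exp(-1j * evals * t)
    # exp(-i*T*t) applied to the first unit vector, lifted back to full space.
    return np.linalg.norm(v) * Q @ (evecs @ (phases * evecs[0].conj()))
```

Because only H @ vector products are needed, H never has to be stored or diagonalized in full, which is what permits the larger time steps mentioned above.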

The basic scheme of this algorithm is similar to cell-to-cell mapping techniques [14] but differs substantially in one important aspect: applied to larger problems, a direct cell-to-cell approach quickly leads to tremendous computational effort. Only a proper exploitation of the multi-level structure of the subdivision algorithm (also for the eigenvalue problem) may allow for application to molecules of real chemical interest. But even this more sophisticated approach suffers from combinatorial explosion, already for moderate-size molecules. In a next stage of development [19] this restriction will be circumvented using certain hybrid Monte Carlo methods. [Pg.110]

Fig. 7. Eigenmeasure v2 of the Frobenius-Perron operator to the second largest eigenvalue λ2 = 0.9963 for the test system (15) with γ = 3. v2 was computed via our new subdivision algorithm (cf. Section 4).
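Discretizing the Frobenius-Perron operator on a collection of boxes and reading off its subdominant eigenvectors can be sketched with a basic Ulam-type construction (our illustration under assumed names, not the multi-level subdivision algorithm itself): each interval is sampled, a transition matrix between intervals is assembled, and eigenmeasures such as v2 correspond to its subdominant eigenvectors.

```python
import numpy as np

def ulam_matrix(f, n_boxes=50, samples_per_box=200, seed=0):
    """Monte Carlo discretization of the transfer operator of a map on [0, 1)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_boxes + 1)
    P = np.zeros((n_boxes, n_boxes))
    for i in range(n_boxes):
        # Sample box i and record which boxes its points are mapped into.
        x = rng.uniform(edges[i], edges[i + 1], samples_per_box)
        j = np.clip((f(x) * n_boxes).astype(int), 0, n_boxes - 1)
        np.add.at(P[i], j, 1.0 / samples_per_box)
    return P  # row-stochastic transition matrix between boxes

P = ulam_matrix(lambda x: (2.0 * x) % 1.0)       # doubling map on [0, 1)
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
# The leading eigenvalue is 1 (the invariant measure); the gap to the
# second-largest eigenvalue reflects how fast the system mixes.
```

A second eigenvalue close to 1, as in the figure above, signals slow mixing, i.e. long-lived almost-invariant sets.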
Various methods have been developed for a unique and unambiguous numbering of the atoms of a molecule and thus for deriving a canonical code for this molecule [76]. Besides eigenvalues of adjacency matrices [77], it is mainly the Morgan Algorithm that is used [79]. [Pg.59]

It uses a linear or quadratic synchronous transit approach to get closer to the quadratic region of the transition state and then uses a quasi-Newton or eigenvalue-following algorithm to complete the optimization. [Pg.46]

The determinants can be developed into a polynomial equation of degree r, of which the r positive roots are the eigenvalues λk. Practical methods for computing the eigenvalues will be discussed in Section 31.4 on algorithms. [Pg.94]
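For a small symmetric matrix the polynomial route can be shown directly (a sketch of the equivalence only; the algorithms of Section 31.4 work on the matrix itself, since the characteristic polynomial is numerically fragile for larger dimensions):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
# Characteristic polynomial det(A - lambda*I) = lambda^2 - 7*lambda + 11
coeffs = np.poly(A)                       # [1, -7, 11]
poly_roots = np.sort(np.roots(coeffs))    # roots of the polynomial ...
eigs = np.sort(np.linalg.eigvalsh(A))     # ... equal the eigenvalues of A
print(poly_roots, eigs)
```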

Fig. 31.13. Schematic example of three common algorithms for singular value and eigenvalue decomposition.
A comparison of the performance of the three algorithms for eigenvalue decomposition has been made on a PC (IBM AT) equipped with a mathematical coprocessor [38]. The results, displayed in Fig. 31.14, show that the Householder-QR algorithm outperforms Jacobi's by a factor of about 4 and is superior to the power method by a factor of about 20. The time required by Householder-QR for diagonalization of a square symmetric matrix increases with the power 2.6 of the dimension of the matrix. [Pg.140]
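Of the three algorithms compared, the power method is the simplest to write down. A minimal sketch (our illustration): repeated multiplication by a symmetric matrix, with normalization, converges to the dominant eigenvector, and the norm growth estimates the dominant eigenvalue.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10_000):
    """Dominant eigenvalue (in magnitude) and eigenvector of a symmetric A."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)       # arbitrary normalized starting vector
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)   # eigenvalue estimate from norm growth
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, x = power_method(A)
print(round(lam, 6))                  # 3.0: the dominant eigenvalue of A
```

Its slowness relative to Householder-QR comes from its convergence rate, which is governed by the ratio of the two largest eigenvalues, and from the need to deflate and repeat for each further eigenvalue.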

Fig. 31.14. Performance of three computer algorithms for eigenvalue decomposition as a function of the dimension of the input matrix. Both axes are scaled logarithmically. Execution time is proportional to the power 2.6 of the dimension.
The PLS algorithm is relatively fast because it only involves simple matrix multiplications. Eigenvalue/eigenvector analysis or matrix inversions are not needed. The determination of how many factors to take is a major decision. Just as for the other methods the right number of components can be determined by assessing the predictive ability of models of increasing dimensionality. This is more fully discussed in Section 36.5 on validation. [Pg.335]
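The factor-extraction step can be sketched as a minimal NIPALS-style PLS1 loop (an illustration under our own naming, not a reference implementation), assuming mean-centred X and y. Each factor indeed needs only matrix products; no eigendecomposition or inversion of X'X appears.

```python
import numpy as np

def pls1(X, y, n_factors):
    """NIPALS-style PLS1: weights W, X-loadings P, y-loadings q."""
    X, y = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_factors):
        w = X.T @ y
        w /= np.linalg.norm(w)        # weight vector for this factor
        t = X @ w                     # scores
        tt = t @ t
        p = X.T @ t / tt              # X loadings
        qa = y @ t / tt               # y loading
        X -= np.outer(t, p)           # deflate X ...
        y -= qa * t                   # ... and y before the next factor
        W.append(w); P.append(p); q.append(qa)
    return np.array(W).T, np.array(P).T, np.array(q)
```

Recovering a regression vector afterwards, b = W (PᵀW)⁻¹ q, involves only a small k-by-k solve, where k is the chosen number of factors, which is the dimensionality decision discussed above.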

The ratio of the largest to the smallest eigenvalue of the Hessian matrix at the minimum is defined as the condition number. For most algorithms the larger the condition number, the larger the limit in Equation 5.5 and the more difficult it is for the minimization to converge (Scales, 1985). [Pg.72]
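The definition is easy to illustrate (a small sketch with an assumed quadratic objective): an elongated, narrow valley has a Hessian whose eigenvalues differ greatly, and the ratio quantifies how much harder such a minimum is for a minimizer.

```python
import numpy as np

# Hessian of f(x, y) = 50*x**2 + 0.5*y**2 at its minimum (illustrative choice)
H = np.array([[100.0, 0.0],
              [0.0,    1.0]])
eigs = np.linalg.eigvalsh(H)     # eigvalsh returns eigenvalues in ascending order
cond = eigs[-1] / eigs[0]        # condition number: largest over smallest
print(cond)                      # 100.0: a fairly ill-conditioned minimum
```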

