
Matrix multiplication algorithm

The PLS algorithm is relatively fast because it involves only simple matrix multiplications; eigenvalue/eigenvector analysis and matrix inversions are not needed. The determination of how many factors to take is a major decision. Just as for the other methods, the right number of components can be determined by assessing the predictive ability of models of increasing dimensionality. This is more fully discussed in Section 36.5 on validation. [Pg.335]
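For concreteness, here is a minimal NIPALS-style PLS1 sketch (assuming mean-centered X and y; all names are illustrative). Each factor is extracted with nothing more than matrix-vector products:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """Minimal PLS1 (NIPALS) sketch: each factor needs only
    matrix-vector products -- no eigendecomposition or inversion."""
    X, y = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)      # weight vector
        t = X @ w                   # scores
        tt = t @ t
        p = X.T @ t / tt            # X loadings
        qk = y @ t / tt             # y loading
        X -= np.outer(t, p)         # deflate X
        y -= qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    return np.array(W).T, np.array(P).T, np.array(q)
```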

Interestingly, the spectral transform Lanczos algorithm can be made more efficient if the filtering is not executed to the fullest extent. This can be achieved by truncating the Chebyshev expansion of the filter,76,81 or by terminating the recursive linear equation solver prematurely.82 In doing so, the number of vector-matrix multiplications can be reduced substantially. [Pg.302]
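A hedged sketch of the truncation idea follows (assuming the Hamiltonian has already been scaled so that its spectrum lies in [-1, 1]; the coefficients c_k stand for whatever filter expansion is in use). Each extra term in the expansion costs exactly one additional matrix-vector product, so truncating the coefficient list directly reduces the multiplication count:

```python
import numpy as np

def chebyshev_filter(Hs, v, coeffs):
    """Apply f(H)v ~ sum_k c_k T_k(Hs) v via the Chebyshev recursion.
    Hs must be scaled so its spectrum lies in [-1, 1], and at least
    two coefficients are assumed. Truncating `coeffs` directly cuts
    the number of matrix-vector products performed."""
    t_prev, t_curr = v, Hs @ v
    out = coeffs[0] * t_prev + coeffs[1] * t_curr
    for c in coeffs[2:]:
        # Chebyshev recursion: T_{k+1} v = 2 Hs (T_k v) - T_{k-1} v
        t_prev, t_curr = t_curr, 2.0 * (Hs @ t_curr) - t_prev
        out += c * t_curr
    return out
```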

Thus, four-center integrals may be obtained as simple matrix multiplications of the three-index fields. This allows one to combine RI with most approximations without major modifications to existing algorithms and codes. [Pg.9]
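A generic sketch of this contraction (B here is a random stand-in for the usual three-index RI intermediates, e.g. (ij|P) contracted with the inverse-square-root metric; the dimensions are illustrative):

```python
import numpy as np

# B[P, i, j] = three-index field B^P_ij (random stand-in for RI intermediates)
naux, nbf = 40, 8
B = np.random.rand(naux, nbf, nbf)

# Four-center integrals (ij|kl) ~ sum_P B^P_ij B^P_kl as one matrix product:
Bf = B.reshape(naux, nbf * nbf)
eri = (Bf.T @ Bf).reshape(nbf, nbf, nbf, nbf)
```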

Figure 8. Pictorial representation of the outer product matrix multiplication algorithm.
However, Harrison and Zarrabian suggest that for parallel-vector machines, it is better to revert to a matrix multiplication such as that used by Knowles and Handy.109 This algorithm is reproduced in Fig. 11. These loops are run... [Pg.204]

A general way of exploiting this sparseness in both the gradient and the Hessian construction involves the use of outer product algorithms to perform the matrix element assembly. In the case of the matrix multiplications required in the F matrix construction, this simply means that the innermost DO loop is over X in Eq. (260). (If t were the innermost DO loop, the result would be a series of dot products or an inner product algorithm.) When an outer product algorithm is used, the magnitude of the density matrix elements may be tested and the innermost DO loop is only performed for non-zero elements. (In the case of Hessian matrix construction, the test may occur outside of the two... [Pg.176]
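A schematic outer-product assembly with the magnitude test, written in Python rather than DO-loop Fortran (the matrices are generic stand-ins for the quantities in Eq. (260)):

```python
import numpy as np

def outer_product_matmul(A, D, tol=1e-12):
    """C = A @ D assembled as a sum of outer-product updates.
    The magnitude test on D[k, j] skips a whole column update when the
    (density-matrix-like) element is negligible -- the sparsity
    screening described in the text."""
    m, n = A.shape[0], D.shape[1]
    C = np.zeros((m, n))
    for k in range(A.shape[1]):       # middle (contracted) index
        for j in range(n):
            d = D[k, j]
            if abs(d) < tol:          # screen zero/small elements
                continue
            C[:, j] += A[:, k] * d    # innermost loop runs over the free row index
    return C
```

With the innermost loop over the free row index, the update is a vector operation (an outer-product column update); with the contracted index innermost, each element would instead be a dot product, and no element-wise screening of D would be possible.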

T. Akutsu, S. Miyano, and S. Kuhara, Algorithms for identifying Boolean networks and related biological networks based on matrix multiplication and fingerprint function. J. Comput. Biol. 7(3), 331-343 (2000). [Pg.505]

A major advance in the efficiency of FCI calculations was the introduction of a factorized direct CI algorithm by Siegbahn. This involves formulating the FCI calculation as a series of matrix multiplications, an ideal algorithm for exploiting the power of current vector supercomputers. This algorithm is fundamental to our present ability to perform FCI benchmarks, and we discuss it in detail. We consider only the two-electron contribution to σ given by Eq. (11), which can be written as... [Pg.112]
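The shape of the factorization idea, stripped of all CI-specific detail (this is not Siegbahn's actual algorithm, only an illustration of replacing one large contraction by a series of multiplications):

```python
import numpy as np

# Schematic only: if the Hamiltonian action factorizes as H = A @ B,
# sigma = H @ c can be evaluated as two matrix multiplications
# without ever forming the full H explicitly.
n, m = 1000, 50
A = np.random.rand(n, m)
B = np.random.rand(m, n)
c = np.random.rand(n)

sigma = A @ (B @ c)   # O(n*m) work per multiply, vs O(n^2) storage/work for H
```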

Because every integral and amplitude contributes to multiple contractions, reordering steps are required to guarantee optimal performance in the inner loops of the algorithm. In practice one needs to make a trade-off to balance the work between integral reordering and matrix multiplication. [Pg.325]
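A small illustration of the reorder-then-multiply pattern (indices and dimensions are made up; the transpose is the "reordering step", the final product is the fast multiplication):

```python
import numpy as np

# Contraction r[a,i] = sum_{b,j} t[a,b,i,j] * v[b,j] as one multiplication,
# after reordering t so the contracted indices (b, j) are adjacent.
na, nb, ni, nj = 16, 16, 8, 8
t = np.random.rand(na, nb, ni, nj)
v = np.random.rand(nb, nj)

t_re = t.transpose(0, 2, 1, 3).reshape(na * ni, nb * nj)  # reordering step
r = (t_re @ v.reshape(nb * nj)).reshape(na, ni)           # one multiplication

assert np.allclose(r, np.einsum('abij,bj->ai', t, v))     # sanity check
```

The transpose costs pure memory traffic while the multiplication runs at near-peak speed, which is exactly the balance the text describes.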

Because the Thomas algorithm can be applied to the block tridiagonal structure (15-67) of (15-70), submatrices of partial derivatives are computed only as needed. The solution of (15-65) follows the scheme in Section 15.3, given by (15-13) to (15-18) and represented in Fig. 15.4, where the matrices and vectors Aj, Bj, Cj, -Fj, and ΔXj correspond to the variables Aj, Bj, Cj, Dj, and xj, respectively. However, the simple multiplication and division operations of Section 15.3 are changed to matrix multiplication and inversion, respectively. The steps are as follows... [Pg.313]
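A generic block-Thomas sketch (the equation numbers above refer to the source text; here the blocks are plain NumPy arrays, and explicit inverses are replaced by linear solves):

```python
import numpy as np

def block_thomas(A, B, C, D):
    """Solve a block-tridiagonal system
        A[j] x[j-1] + B[j] x[j] + C[j] x[j+1] = D[j].
    Same sweep as the scalar Thomas algorithm, with scalar multiply and
    divide replaced by matrix multiplication and (implicit) inversion.
    A, B, C: lists of (b, b) blocks (A[0] and C[-1] unused);
    D: list of (b,) right-hand-side vectors."""
    n = len(B)
    Cp, Dp = [None] * n, [None] * n
    Cp[0] = np.linalg.solve(B[0], C[0])           # B0^{-1} C0
    Dp[0] = np.linalg.solve(B[0], D[0])
    for j in range(1, n):                         # forward elimination
        denom = B[j] - A[j] @ Cp[j - 1]
        if j < n - 1:
            Cp[j] = np.linalg.solve(denom, C[j])
        Dp[j] = np.linalg.solve(denom, D[j] - A[j] @ Dp[j - 1])
    x = [None] * n
    x[-1] = Dp[-1]
    for j in range(n - 2, -1, -1):                # back-substitution
        x[j] = Dp[j] - Cp[j] @ x[j + 1]
    return x
```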

A more detailed analysis of the scalability of an algorithm, one that can reveal the rate at which the problem size must grow to maintain a constant efficiency, requires an explicit functional form for E(p, n). We will show an example of the derivation of an efficiency function in section 5.3.2. Additionally, it is important to clearly define the problem size. In computational complexity theory, the problem size is usually defined to be a measure for the size of the input required for an algorithm. For example, the problem size for a matrix-matrix multiplication involving matrices of dimensions m × m would be m. This definition of the problem size is also consistent with the one usually employed in computational chemistry, where the problem size is defined as the size of the molecule being studied, expressed, for example, in terms of the number of atoms, electrons, or basis functions. [Pg.80]
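A toy efficiency function for parallel matrix multiplication (the communication term and all constants are assumed purely for illustration, not taken from the text; E(p, n) = T(1, n) / (p T(p, n)) is the standard definition):

```python
def efficiency(p, m, t_flop=1.0, t_comm=50.0):
    """E(p, m) for a toy cost model of parallel matrix-matrix
    multiplication with problem size m (m x m matrices). The
    communication term t_comm * m^2 / sqrt(p) is an assumed model;
    it only illustrates how E falls as p grows at fixed m."""
    t_serial = t_flop * m**3
    t_parallel = t_flop * m**3 / p + t_comm * m**2 / p**0.5
    return t_serial / (p * t_parallel)

# Isoefficiency in this model: keeping E constant as p grows requires
# m to grow like sqrt(p), so that m^3/p keeps pace with m^2/sqrt(p).
print(efficiency(16, 512))
```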

These matrix multiplications can be avoided using a combination of the CG and BiCG algorithms to bypass construction of K [19]. In the first stage, the BiCG algorithm is used to solve the equation... [Pg.395]
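The two-stage CG/BiCG scheme is specific to Ref. [19]; the general matrix-free idea it relies on can be sketched as follows, where K = GᵀG is never formed and the solver sees only the action v → Kv (the small diagonal shift is added only to keep this random demo system well conditioned):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Matrix-free sketch: K = G.T @ G is never constructed; CG only needs
# the action v -> K v, supplied as two multiplications with G.
n = 200
G = np.random.rand(n, n)
b = np.random.rand(n)

K_action = LinearOperator(
    (n, n),
    matvec=lambda v: G.T @ (G @ v) + 0.1 * v,  # (K + 0.1 I) v, shift for the demo
)
x, info = cg(K_action, b)   # info == 0 on convergence
```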

In order to evaluate the Variables techniques, four algorithms were used: a 6 × 6 matrix multiplication, a bubble sort, a tiny encryption algorithm, and a run-length algorithm. We used the HPCT tool to automatically harden each application according to the Variables transformation. [Pg.47]

Tables 4.3 and 4.4 show the original and modified programs' execution time, code size, and data size for the matrix multiplication and bubble sort algorithms, respectively. They present results for the original unhardened program, as well as the version hardened with PODER and hardened with PODER combined with the Inverted Branches and Variables software-based techniques (Combined Techniques).
In order to evaluate HETA, two case-study applications based on a 6 × 6 matrix multiplication algorithm and a bubble sort classification algorithm were chosen to be hardened. [Pg.74]
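For reference, the two benchmark kernels have roughly this shape (the actual hardened applications are compiled programs transformed by the hardening tools; this Python sketch only shows the algorithms themselves):

```python
N = 6

def matmul6(a, b):
    """Naive 6 x 6 matrix multiplication -- one benchmark kernel."""
    c = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            for k in range(N):
                c[i][j] += a[i][k] * b[k][j]
    return c

def bubble_sort(v):
    """Bubble sort -- the classification benchmark kernel."""
    v = list(v)
    for i in range(len(v) - 1):
        for j in range(len(v) - 1 - i):
            if v[j] > v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
    return v
```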

The recent development of computer technology greatly increased the already high popularity and importance of block matrix algorithms (and consequently, of matrix multiplication and inversion) for solving linear systems, because block matrix computations turned out to be... [Pg.184]

Numerical stability of all such fast n × n matrix multipliers has been both theoretically proven in the most general case (by D. Bini and G. Lotti in 1980) and experimentally confirmed, but so far only two algorithms (Strassen's and Winograd's), both supporting the upper bounds, have become practical and compete with the classical algorithm for matrix multiplication [see Pan (1984); Golub and Van Loan (1996); Higham (1996)]. [Pg.191]
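A compact recursive Strassen sketch (assuming square matrices whose order is a power of two times the leaf size; Winograd's variant differs only in how the seven half-size products are formed and combined):

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiplication: 7 half-size products per recursion level
    instead of the classical 8, giving O(n^2.807) arithmetic."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B                    # fall back to the classical product
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```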

The algorithms for matrix multiplication and inversion are applied to the solution of linear systems via block algorithms. The popularity and importance of such applications are growing because of the high efficiency of the implementation of matrix multiplication on modern supercomputers, particularly on loosely coupled multiprocessors. [Pg.191]

If A is a well-conditioned n × n matrix, then the choice B₀ = tAᵀ with t = 1/(‖A‖₁‖A‖∞) ensures that ‖A⁻¹ − Bₖ‖ is already sufficiently small when k = O(log n). Matrix multiplications can be performed rapidly on some parallel computers, so Newton's algorithm can be useful for solving linear systems on some parallel machines, although as a sequential algorithm it is certainly inferior to Gaussian elimination. [Pg.196]
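A minimal sketch of the iteration (the update B_{k+1} = B_k(2I − AB_k) is the standard Newton–Schulz step; the iteration count below is a heuristic stand-in for the O(log n) bound, which holds for well-conditioned A):

```python
import numpy as np

def newton_inverse(A, iters=None):
    """Newton's iteration for A^{-1}: B_{k+1} = B_k (2I - A B_k),
    started from B_0 = A^T / (||A||_1 ||A||_inf) as in the text.
    Each step costs two matrix multiplications, hence parallelizes well."""
    n = A.shape[0]
    B = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    if iters is None:
        # Heuristic count in the spirit of the O(log n) bound.
        iters = max(1, 10 * int(np.ceil(np.log2(n + 1))))
    I = np.eye(n)
    for _ in range(iters):
        B = B @ (2 * I - A @ B)
    return B
```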

T. Risset. Linear systolic arrays for matrix multiplication: comparisons of existing methods and new results. In Proc. 2nd Workshop on Algorithms and VLSI Parallel Architecture, pages 163-174, 1991. [Pg.68]

