Big Chemical Encyclopedia


Sparse matrix methods

Sparse Matrix Methods. To get around the limitations of the sequential modular architecture for design and optimization, alternative approaches to solving flowsheeting problems have been investigated. Attempts to solve all or many of the nonlinear equations simultaneously have led to considerable interest in sparse matrix methods, generally as a result of using the Newton-Raphson method or Broyden's method (22, 23, 24). ... [Pg.11]
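A minimal sketch of the idea: each Newton-Raphson step solves the sparse linear system J Δx = −f(x). The two-equation system below is purely illustrative (not an actual flowsheeting model), and SciPy's sparse LU solver stands in for the specialized sparse codes the excerpt refers to.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

def newton_sparse(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: at each step solve the sparse linear system J dx = -f(x)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        dx = spsolve(csc_matrix(jac(x)), -fx)  # sparse LU factorization
        x = x + dx
    return x

# Hypothetical 2-equation system standing in for the flowsheet equations:
#   x0^2 + x1 = 3,   x0 + x1^2 = 5   (solution near x0 = 1, x1 = 2)
f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
sol = newton_sparse(f, jac, np.array([1.0, 1.0]))
```

In a real flowsheeting code the Jacobian would be assembled directly in sparse form rather than converted from a dense array; a Broyden variant would replace the Jacobian evaluation with a rank-one update.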

Distillation Calculations. Work done with flash calculations and sparse matrix methods was extended to distillation calculations. Holland and Gallun (51) explored the use of Broyden's method coupled with sparse updating procedures for distillation calculations with highly non-ideal solutions. Shah and Boston (52), and Ross and Seider (53), discuss the case of multiple liquid phases on a tray. [Pg.14]

Stadherr, M. A., "A New Sparse Matrix Method for Process Design", Paper presented at Miami AIChE Meeting, November 1978. [Pg.36]

In the years since the 2nd Edition, much has happened in electrochemical digital simulation. Problems that ten years ago seemed insurmountable have been solved, such as the thin reaction layer formed by very fast homogeneous reactions, or sets of coupled reactions. Two-dimensional simulations are now commonplace, and with the help of unequal intervals, conformal maps and sparse matrix methods, these too can be solved within a reasonable time. Techniques have been developed that make simulation much more efficient, so that accurate results can be achieved in a short computing time. Stable higher-order methods have been adapted to the electrochemical context. [Pg.345]

The Newton/sparse matrix methods now used by electrical engineers have become the solution method of choice. Hutchison and his students at Cambridge were among the first chemical engineers to publish this approach, in the early 1970s. They used a quasi-linear model rather than a Newton one, but the ideas were really very similar. (It appears that the COPE flowsheeting system of Exxon was Newton-based; it existed in the mid-1960s but slowly evolved into a sequential modular system. One must assume the Newton method failed to compete.)... [Pg.512]

S. C. Eisenstat, M. H. Shultz, and A. H. Sherman, "Application of Sparse Matrix Methods to Partial Differential Equations," in Advances in Computer Methods for Partial Differential Equations, R. Vichnevetsky, Ed., pp. 40-45, AICA, New Brunswick, N.J., 1975. [Pg.70]

A method for solving individual models, such as Newton or quasi-Newton methods combined with sparse matrix methods to convert the nonlinear algebraic ... [Pg.557]

For N parameters the least-squares equations lead to an N x N normal matrix. Because the restraints involve only near-neighbour atoms, the matrix is sparse, with the majority of off-diagonal terms being zero and less than 1% of the elements nonzero [117]. For n atoms and m distance restraints the number of elements to be stored is 6n + 9m. For example, with a small protein of 812 atoms and 2030 restraints (approximately 3x the number of atoms), the number of elements is 23,142. For phosphorylase b with 6640 atoms there are 26,561 parameters and some 229,451 nonzero elements in the normal matrix, which is still only 0.03% of the total matrix elements. In the restrained least-squares refinement (and many of the other refinement methods) the normal equations are solved by the conjugate-gradient algorithm [129]. [Pg.375]
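The conjugate-gradient approach mentioned above can be sketched on a generic sparse, symmetric positive-definite system. The tridiagonal matrix below is a stand-in for a real normal matrix, chosen so the exact solution is known.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Stand-in for a sparse, symmetric positive-definite normal matrix:
# a diagonally dominant tridiagonal system with known solution x = 1.
n = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = A @ np.ones(n)

x, info = cg(A, b)   # info == 0 signals convergence
```

Conjugate gradients only ever touches A through matrix-vector products, which is why the sparsity pays off: each iteration costs O(nonzeros) rather than O(N^2).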

One method is Non-Negative Matrix Factorization (NMF), which learns to recognize semantic features of the text [34]. A corpus of documents can be summarized by a matrix of words versus documents. This matrix is sparse, with many zero values. The algorithm extracts a set of semantic features, combinations of which can... [Pg.164]
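A minimal sketch of the factorization itself, in plain NumPy with Lee-Seung multiplicative updates (not necessarily the algorithm of [34]); the small word-by-document count matrix is invented for illustration and has two hidden "topics".

```python
import numpy as np

def nmf(V, k, n_iter=1000, seed=0):
    """Factor V ~ W @ H with W, H >= 0 (Lee-Seung multiplicative updates)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Invented word-by-document count matrix: sparse, with two hidden "topics"
V = np.array([[3.0, 0.0, 2.0, 0.0],
              [6.0, 0.0, 4.0, 0.0],
              [0.0, 4.0, 0.0, 1.0],
              [0.0, 2.0, 0.0, 0.5]])
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
```

The columns of W play the role of the semantic features: each document (column of V) is approximated as a nonnegative combination of them.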

The expansion coefficients are eigenvectors of the interaction matrix. Sparse matrix methods are used since, as the size of the expansion increases, more and more matrix elements are zero. An implementation of the Davidson method [14] is used for large cases. Since it is based on the multiplication of the interaction matrix by a vector, the method can readily be parallelized [15]. [Pg.119]
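The Davidson method itself is not in SciPy, but the key property described here, that the eigensolver needs only matrix-vector products, can be illustrated with SciPy's Lanczos-based `eigsh` driven through a `LinearOperator`. The sparse symmetric "interaction matrix" below is invented: a dominant diagonal plus weak couplings, the regime where Davidson-type methods work well.

```python
import numpy as np
from scipy.sparse import diags, random as sprandom
from scipy.sparse.linalg import LinearOperator, eigsh

# Invented sparse symmetric "interaction matrix": dominant diagonal
# (1, 2, ..., n) plus weak random off-diagonal couplings.
n = 200
off = sprandom(n, n, density=0.01, random_state=0)
A = diags(np.arange(1.0, n + 1.0)) + 0.01 * (off + off.T)

# The eigensolver sees only matrix-vector products -- exactly the property
# that Davidson-type methods exploit, and what makes them parallelizable.
op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=float)
vals, vecs = eigsh(op, k=3, which="SA")   # three lowest eigenvalues
```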

There are a number of methods available for solving a given set of linear algebraic equations. One class is the direct methods (requiring no iteration); the other is the iterative methods, which, as the name indicates, require iteration and must be provided with an initial guess. We will first discuss the direct methods in Section B.5; the iterative methods will be dealt with in Section B.6. The iterative methods are preferable when the number of equations to be solved is large, the coefficient matrix is sparse, and the matrix is diagonally dominant (Eqs. B.8 and B.9). [Pg.651]

When dealing with large sets of equations, especially if the coefficient matrix is sparse, the iterative methods provide an attractive option for getting the solution. In the iterative methods, an initial solution vector is assumed, and the process is iterated to reduce the error between the iterated solution x_k and the exact solution x, where k is the iteration number. Since the exact solution is not known, the iteration process is stopped by using the difference Dx_k = ... [Pg.659]
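A minimal sketch of such an iterative scheme: Jacobi iteration on an invented diagonally dominant system, stopped (as described above) when the difference between successive iterates falls below a tolerance.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10000):
    """Jacobi iteration; stops when the step ||x_(k+1) - x_k|| drops below tol."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)          # initial guess
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:    # stopping criterion on the step
            return x_new
        x = x_new
    return x

# Invented diagonally dominant system, for which Jacobi is guaranteed to converge
A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
x = jacobi(A, b)
```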

A third class of methods is the sparse matrix methods (60), which solve the Hartree-Fock equations for large systems. Disadvantages include that the approach is limited to the ground state, formulated (much less implemented) only for semiempirical methods, and intrinsically incapable of treating electron transport and optical properties such as two-photon absorption. [Pg.287]

The inner loop starts at this step. In this loop the variables Sj, Rj, and Qj are calculated to satisfy Equations 13.49, 13.50, 13.51, and 13.52, using the simple thermodynamic models. Begin by computing from Equation set 13.49: for each component i, these are N linear equations represented by a tridiagonal matrix, which can be solved by a special sparse matrix method, the Thomas algorithm, described further along in this section. Next, Vj are calculated from Equation set 13.50, and Lj, Vj, and Xj are calculated directly ... [Pg.336]
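The Thomas algorithm referred to here is a direct O(N) solver for tridiagonal systems (forward elimination followed by back substitution). A generic sketch, with an invented 4x4 system for illustration:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N).
    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]              # forward elimination sweep
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]                   # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Invented 4x4 tridiagonal system for illustration
a = np.array([-1.0, -1.0, -1.0])     # sub-diagonal
b = np.array([4.0, 4.0, 4.0, 4.0])   # main diagonal
c = np.array([-1.0, -1.0, -1.0])     # super-diagonal
d = np.array([5.0, 5.0, 5.0, 5.0])
x = thomas(a, b, c, d)
```

Because only the three diagonals are stored and visited, the cost is linear in N, versus O(N^3) for general Gaussian elimination.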

Unlike the global approaches previously described, Locally Linear Embedding (LLE) is a local approach, and as described in Sect. 2.3.4 the feature matrix is sparse. This sparse feature matrix is beneficial as it lowers the computational cost of the eigendecomposition. Specifically, the computational cost of eigendecomposition for a sparse n x n matrix, F, is O(rn^2) when using specific sparse analysis methods [8]. Here, r is the ratio of nonzero elements in F to the total number of elements n^2. [Pg.72]

The conceptually simplest approach to solve for the S-matrix elements is to require the wavefunction to have the form of equation (B3.4.4), supplemented by a bound function which vanishes in the asymptote [32, 33, 34 and 35]. This approach is analogous to the full configuration-interaction (CI) expansion in electronic structure calculations, except that now one is expanding the nuclear wavefunction. While successful for intermediate-size problems, the resulting matrices are not very sparse because of the use of multiple coordinate systems, so that this type of method is prohibitively expensive for diatom-diatom reactions at high energies. [Pg.2295]

An alternative to split operator methods is to use iterative approaches. In these methods, one notes that the wavefunction is formally psi(t) = exp(-iHt/hbar) psi(0), and the action of the exponential operator is obtained by repetitive application of H on a function (i.e. on the computer, by repetitive applications of the sparse matrix... [Pg.2301]
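A toy illustration of this idea, assuming hbar = 1: the propagator exp(-iHt) applied via a truncated Taylor series, so that the only operation ever performed on H is a sparse matrix-vector product. (Production codes use Chebyshev or Lanczos expansions instead; the tight-binding Hamiltonian below is invented.)

```python
import numpy as np
from scipy.sparse import diags

# Invented sparse Hermitian Hamiltonian: a tight-binding chain
n = 64
H = diags([-1.0, 0.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

def propagate_taylor(H, psi0, t, terms=40):
    """psi(t) = exp(-iHt) psi(0), built purely from repeated sparse H @ v
    products (truncated Taylor series, for illustration only)."""
    psi = psi0.astype(complex)
    term = psi0.astype(complex)
    for k in range(1, terms):
        term = (-1j * t / k) * (H @ term)   # next Taylor term: one matvec each
        psi = psi + term
    return psi

psi0 = np.zeros(n, dtype=complex)
psi0[n // 2] = 1.0                          # particle localized mid-chain
psi_t = propagate_taylor(H, psi0, t=1.0)
```

Since the evolution is unitary, the norm of psi_t should remain 1; checking it is a cheap test that the series was truncated late enough for the chosen timestep.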

The LIN method (described below) was constructed on the premise of filtering out the high-frequency motion by NM analysis and using a large-timestep implicit method to resolve the remaining motion components. This technique turned out to work when properly implemented for up to moderate timesteps (e.g., 15 fs) [73] (each timestep interval is associated with a new linearization model). However, the CPU gain for biomolecules is modest even when substantial work is expended on sparse matrix techniques, adaptive timestep selection, and fast minimization [73]. Still, LIN can be considered a true long-timestep method. [Pg.245]

Note that in equation system (2.64) the coefficient matrix is symmetric, sparse (i.e. a significant number of its members are zero) and banded. The symmetry of the coefficient matrix in the global finite element equations is not guaranteed for all applications (in particular, in most fluid flow problems this matrix will not be symmetric). However, the finite element method always yields sparse and banded sets of equations. This property should be utilized to minimize computing costs in complex problems. [Pg.48]
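When the system is both symmetric and banded, only the diagonal and the bands need to be stored and factorized. A sketch using SciPy's banded Cholesky-type solver on an invented 1D-Poisson-style stiffness matrix:

```python
import numpy as np
from scipy.linalg import solveh_banded

# Symmetric banded "stiffness" matrix (1D Poisson stencil), stored in the
# compact upper-banded form solveh_banded expects:
# row 0 = superdiagonal, row 1 = main diagonal.
n = 100
ab = np.zeros((2, n))
ab[0, 1:] = -1.0
ab[1, :] = 2.0
b = np.ones(n)

u = solveh_banded(ab, b)
```

Storage here is 2n numbers instead of n^2, and the factorization cost scales with the bandwidth rather than with n, which is exactly the saving the excerpt recommends exploiting.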



© 2024 chempedia.info