
Linear sparse systems

In principle, the task of solving a linear algebraic system seems trivial: with Gauss elimination a solution method exists which allows one to solve a problem of dimension N (i.e. N equations with N unknowns) at a cost of O(N^3) elementary operations [85]. Solution methods which, apart from round-off errors and machine accuracy, produce an exact solution of an equation system after a predetermined number of operations are called direct solvers. However, for problems related to the solution of partial differential equations, direct solvers are usually very inefficient. Methods such as Gauss elimination do not exploit a special feature of the coefficient matrices of the corresponding linear systems, namely that most of the entries are zero. Such sparse matrices are characteristic of problems originating from the discretization of partial or ordinary differential equations. As an example, consider the discretization of the one-dimensional Poisson equation... [Pg.165]
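
To make the sparsity concrete, the sketch below (Python with NumPy/SciPy; the excerpt itself gives no code) builds the tridiagonal matrix that a central-difference discretization of the one-dimensional Poisson equation -u''(x) = f(x) produces, and solves it with a sparse direct solver. The grid size and right-hand side are illustrative choices, not taken from the source.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 1000                          # number of interior grid points (illustrative)
h = 1.0 / (N + 1)                 # grid spacing on [0, 1]

# Tridiagonal coefficient matrix: only 3N - 2 of the N^2 entries are nonzero.
main = 2.0 * np.ones(N)
off = -1.0 * np.ones(N - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr") / h**2

f = np.ones(N)                    # right-hand side f(x) = 1 (illustrative)
u = spla.spsolve(A, f)            # sparse direct solve exploits the band structure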

Minimizing Round-Off Error in Direct Solution of Large Sparse Systems of Linear Equations... [Pg.266]

The improved algorithm for solving sparse systems of linear equations follows ... [Pg.272]

The round-off error propagation associated with the use of Shacham and Kehat's direct method for the solution of large sparse systems of linear equations is investigated. A reordering scheme for reducing error propagation is proposed, as well as a method for iterative refinement of the solution. Accurate solutions for linear systems containing up to 500 equations have been obtained using the proposed method in very short computer times. [Pg.274]
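
The excerpt does not reproduce Shacham and Kehat's refinement procedure itself, but the general idea of iterative refinement can be sketched as follows: reuse an existing factorization to solve for a correction driven by the residual. The function name, step count, and test matrix here are all illustrative.

import numpy as np
import scipy.linalg as la

def refine(A, b, lu_piv, x, n_steps=3):
    # Classical iterative refinement: correct x using the residual r = b - A x.
    for _ in range(n_steps):
        r = b - A @ x               # residual (ideally computed in higher precision)
        d = la.lu_solve(lu_piv, r)  # reuse the existing LU factorization
        x = x + d
    return x

A = np.random.rand(100, 100) + 100 * np.eye(100)   # well-conditioned test matrix
b = np.random.rand(100)
lu_piv = la.lu_factor(A)
x0 = la.lu_solve(lu_piv, b)       # initial direct solution
x = refine(A, b, lu_piv, x0)      # refined solution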

Shacham, M.; Kehat, E., "A Direct Method for the Solution of Large Sparse Systems of Linear Equations", Comp. J., 1976, 19, 353. [Pg.275]

The new coefficient matrix is symmetric, as M^(-1)A can be written as M^(-1/2) A M^(-1/2). Preconditioning aims to produce a more clustered eigenvalue structure for M^(-1)A and/or a lower condition number than for A, to improve the relevant convergence ratio; however, preconditioning also adds to the computational effort by requiring that a linear system involving M (namely, Mz = r) be solved at every step. Thus, it is essential for efficiency of the method that M be factored very rapidly in relation to the original A. This can be achieved, for example, if M is a sparse component of the dense A. Whereas the solution of an n x n dense linear system requires order n^3 operations, the work for sparse systems can be as low as order n [13, 14]... [Pg.33]
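
As a hedged illustration of the trade-off described above, the sketch below solves a sparse symmetric positive definite system with SciPy's conjugate gradient routine, using the diagonal of A as the preconditioner M; such an M is a sparse component of A whose "factorization" is just elementwise division. The matrix and size are arbitrary test choices, not from the source.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Sparse SPD test matrix (the 1-D Laplacian again, for convenience).
A = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# M = diag(A): the Jacobi preconditioner. Solving M z = r costs only O(n).
M_inv = spla.LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = spla.cg(A, b, M=M_inv)
assert info == 0                  # info == 0 means the iteration converged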

The extensive experience with Gaussian-type basis sets shows that basis-set sequences that increase rapidly in accuracy can be constructed in a systematic way [9]. At the same time, a compact description of the wave functions is maintained, and this opens the way for efficient methods to solve the self-consistent field (SCF) equations. Furthermore, as Gaussian functions are localized, the representations of the KS, overlap, and density matrices in this basis become sparse with increasing system size [11]. This eventually allows the KS equations to be solved using computational resources that scale linearly with system size. [Pg.290]

For stiff differential equations, the backward difference algorithm should be preferred to the Adams-Moulton method. The well-known code LSODE, with different options, was published in the 1980s by Hindmarsh for the solution of stiff differential equations with linear multistep methods. The code is very efficient, and different variations of it have been developed, for instance a version for sparse systems (LSODES). In the International Mathematical and Statistical Library (IMSL), the code of Hindmarsh is called IVPAG and DIVPAG. [Pg.439]
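
LSODE itself is Fortran code and is not shown in the excerpt; as a rough modern analogue, SciPy's solve_ivp offers a BDF (backward-difference) integrator, sketched below on Robertson's kinetics problem, a standard stiff test case. (SciPy's "LSODA" option wraps Hindmarsh's ODEPACK code; the tolerances here are illustrative.)

import numpy as np
from scipy.integrate import solve_ivp

# Robertson's chemical kinetics problem: three species, rate constants
# spanning nine orders of magnitude, hence severe stiffness.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2**2,
            3e7 * y2**2]

sol = solve_ivp(robertson, (0.0, 1e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-6, atol=1e-10)
print(sol.y[:, -1])               # concentrations at t = 1e5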

In the Equation-Oriented (EO) approach, all the modelling equations are assembled into one large sparse system, producing Non-linear Algebraic Equations (NAE) in steady-state simulation and stiff Differential Algebraic Equations (DAE) in dynamic simulation. Thus, the solution is obtained by solving all the modelling equations simultaneously. [Pg.47]

Hestenes and Stiefel (1952) explained their approach for the solution of linear equation systems with sparse symmetric positive definite matrices ... [Pg.103]

From an appropriate process modelling it follows that the system (1.1) has differential index 1. Usually it is a stiff system, whose discretization and linearization yield systems of equations with sparse nonsymmetric Jacobian matrices. The systems can comprise several tens of thousands of equations and are hierarchically structured into subsystems in accordance with the functional units of the chemical plant... [Pg.68]

Thus we obtain Δy_n by solving (2.4) for w. This is a linear equation system of dimension s, and the matrix ∇_yQ can be generated using the sparse structure of ∇_y. This already reduces the essential part of the computation, i.e., the decomposition of the matrix of the linear equation system. [Pg.125]

In the former option, the user supplies a sparse matrix S whose sparsity pattern (location of nonzero elements) matches that of the Jacobian. That is, even though the Jacobian may be difficult to compute analytically, the user can at least specify which small subset of Jacobian elements is known to be nonzero. fsolve can use this information to reduce the computational burden and memory requirement when generating an approximate Jacobian. With 'JacobMult', the user supplies the name of a routine that returns the product of the Jacobian matrix with an input vector. The usefulness of this option will become clearer after our discussion of iterative methods for solving linear algebraic systems in Chapter 6. [Pg.99]
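
The excerpt refers to MATLAB's fsolve; as an assumed-comparable illustration in Python, SciPy's newton_krylov solves a nonlinear system using only Jacobian-vector products (approximated from extra residual evaluations), which is the same kind of capability the 'JacobMult' option exposes. The test system below is illustrative, not from the source.

import numpy as np
from scipy.optimize import newton_krylov

# Nonlinear system F(x) = 0 with a sparse (tridiagonal) Jacobian:
# a 1-D discrete diffusion balance with a cubic reaction term.
def F(x):
    r = np.empty_like(x)
    r[0] = 2 * x[0] - x[1] + x[0]**3 - 1.0
    r[1:-1] = -x[:-2] + 2 * x[1:-1] - x[2:] + x[1:-1]**3 - 1.0
    r[-1] = -x[-2] + 2 * x[-1] + x[-1]**3 - 1.0
    return r

# newton_krylov never forms the Jacobian: the inner Krylov solver only
# needs J @ v, which it approximates from additional evaluations of F.
x = newton_krylov(F, np.zeros(200), f_tol=1e-8)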

Sparse matrices are ones in which the majority of the elements are zero. If the structure of the matrix is exploited, the solution time on a computer is greatly reduced. See Duff, I. S., J. K. Reid, and A. M. Erisman (eds.), Direct Methods for Sparse Matrices, Clarendon Press, Oxford (1986); Saad, Y., Iterative Methods for Sparse Linear Systems, 2d ed., Society for Industrial and Applied Mathematics, Philadelphia (2003). The conjugate gradient method is well suited to sparse matrix problems, since it involves only multiplication of a matrix by a vector; thus the sparseness of the matrix is easy to exploit. The conjugate gradient method is an iterative method that, in exact arithmetic, is guaranteed to converge within n iterations when the matrix is an n x n matrix. [Pg.42]
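
A minimal sketch of the conjugate gradient method, written so that the coefficient matrix enters only through a matrix-vector product (the property the passage highlights). This is the textbook formulation, not code taken from the cited sources.

import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, maxiter=None):
    # Solve A x = b for symmetric positive definite A,
    # given only the action v -> A @ v.
    x = np.zeros_like(b)
    r = b.copy()                   # residual b - A x for x = 0
    p = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(maxiter or len(b)):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # new direction, conjugate to the old ones
        rs = rs_new
    return x

# The matrix is needed only through its product with a vector, so any
# sparse format (or no stored matrix at all) suffices.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x = conjugate_gradient(lambda v: A @ v, np.array([1.0, 2.0]))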

Saad, Y., Iterative Methods for Sparse Linear Systems. Available online: http://www-users.cs.umn.edu/~saad/books.html, 2000. [Pg.107]

Spectral data are highly redundant (many vibrational modes of the same molecules) and sparse (large spectral segments with no informative features). Hence, before a full-scale chemometric treatment of the data is undertaken, it is very instructive to understand the structure and variance in the recorded spectra. Eigenvector-based analyses of spectra are therefore common, and a primary technique is principal components analysis (PCA). PCA is a linear transformation of the data into a new coordinate system (axes) such that the largest variance lies on the first axis and decreases thereafter for each successive axis. PCA can also be considered a view of the data set that aims to explain all deviations from an average spectral property. Data are typically mean-centered prior to the transformation, and the mean spectrum is used as the base comparator. The transformation to a new coordinate set is performed via matrix multiplication as... [Pg.187]
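
A minimal PCA sketch consistent with the passage: mean-center the spectra, then obtain the principal axes from a singular value decomposition. The data here are random stand-ins for real spectra, and the component count is arbitrary.

import numpy as np

# X: spectra as rows (n_samples x n_wavenumbers); hypothetical data.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 600))

Xc = X - X.mean(axis=0)            # mean-center: the mean spectrum is the comparator
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

loadings = Vt[:3]                  # first three principal axes (rows)
scores = Xc @ loadings.T           # coordinates of each spectrum on those axes
explained = s**2 / np.sum(s**2)    # fraction of total variance per component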

Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edn, Society for Industrial and Applied Mathematics (2003). [Pg.47]

The use of this algorithm requires (m+1) evaluations of the system (3a) and (3b) and the solution of an (m x m) linear system of equations (in Eq. (4)). The storage requirements are mainly the sparse storage for the matrix A and storage for the m x m dense matrix of the derivatives. [Pg.269]

There are four methods for solving systems of linear equations. Cramer's rule and computing the inverse matrix of A are inefficient and produce inaccurate solutions; these methods must be absolutely avoided. Direct methods are convenient for dense matrices, i.e. matrices having only a few zero elements, whereas iterative methods generally work better for sparse matrices, i.e. matrices having only a few non-zero elements (e.g. band matrices). Special procedures are used to store and fetch sparse matrices, in order to save memory allocations and computer time. [Pg.287]
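
To illustrate the "special procedures" for storing sparse matrices, the sketch below converts a small band matrix to compressed sparse row (CSR) form, one common scheme: only the nonzero values plus two index arrays are kept. The matrix is an illustrative example.

import numpy as np
import scipy.sparse as sp

# A small band (tridiagonal) matrix, built densely and then compressed.
dense = (np.diag(2.0 * np.ones(5))
         + np.diag(-np.ones(4), 1)
         + np.diag(-np.ones(4), -1))
A = sp.csr_matrix(dense)

print(A.data)      # only the 13 nonzero values are stored ...
print(A.indices)   # ... with the column index of each value ...
print(A.indptr)    # ... and pointers to where each row starts.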

