
Sparse structured matrices

In the case of sparse structural matrices, the matrices were normally rearranged into independent blocks in which the R-groups of one block did not cross over with those of other blocks, and statistical analysis was applied to each block separately to estimate the activity contribution of each R-group. Blocks whose R-group activity contributions could not be estimated because of a lack of R-group crossovers were then eliminated. The block-separation and compound-removal procedure maximized the total number of R-group activity contributions that could be estimated. [Pg.107]

Thus we obtain Δyₙ by solving (2.4) for w. This is a linear equation system of dimension s, and the matrix ∇yQ can be generated using the sparse structure of ∇y. This already reduces the essential part of the computation, i.e., the decomposition of the matrix of the linear equation system. [Pg.125]

In addition, the Jacobian matrix of F2 in (2.3) has a characteristic block-sparse structure due to the special form of the continuity and consistency conditions (2.2). This multiple shooting structure can be exploited by a condensing algorithm to considerably reduce the size of the QP subproblem, which is then solved by a standard QP solver. Alternatively, the original QP subproblem can be directly solved by specialized, large-scale QP solvers, see e.g. [18]. [Pg.145]
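As a rough illustration of this block sparsity (a sketch, not code from the cited work), the following Python snippet assembles the Jacobian of hypothetical continuity conditions s_{i+1} − x(t_{i+1}; s_i) = 0 with scipy.sparse; the sensitivity blocks G_i and all dimensions are invented for the example.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical continuity conditions s_{i+1} - x(t_{i+1}; s_i) = 0 for
# m shooting intervals with state dimension n.  Each row block holds a
# (here random) sensitivity block G_i and a negative identity, so the
# Jacobian is block bidiagonal and very sparse.
m, n = 5, 3
rng = np.random.default_rng(0)

blocks = [[None] * (m + 1) for _ in range(m)]
for i in range(m):
    blocks[i][i] = rng.standard_normal((n, n))   # G_i = dx/ds_i
    blocks[i][i + 1] = -sp.identity(n)           # couples to s_{i+1}

J = sp.bmat(blocks, format="csr")                # only nonzero blocks stored
print(J.shape, J.nnz, f"{J.nnz / (J.shape[0] * J.shape[1]):.0%}")
```

A condensing algorithm then eliminates the coupled state variables block by block, which is what shrinks the QP subproblem in the text.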

The conceptually simplest approach to solve for the S-matrix elements is to require the wavefunction to have the form of equation (B3.4.4), supplemented by a bound function which vanishes in the asymptotic region [32, 33, 34 and 35]. This approach is analogous to the full configuration-interaction (CI) expansion in electronic structure calculations, except that now one is expanding the nuclear wavefunction. While successful for intermediate-size problems, the resulting matrices are not very sparse because of the use of multiple coordinate systems, so this type of method is prohibitively expensive for diatom-diatom reactions at high energies. [Pg.2295]

Note that although the bounds on the distances satisfy the triangle inequalities, particular choices of distances between these bounds will in general violate them. Therefore, if all distances are chosen within their bounds independently of each other (the method used in most applications of distance geometry for NMR structure determination), the final distance matrix will contain many violations of the triangle inequalities. The main consequence is a very limited sampling of the conformational space of the embedded structures for very sparse data sets [48,50,51], despite the intrinsic randomness of the technique...

For medium and large networks, the occurrence matrix, which has the same structure as (is isomorphic to) the coefficient matrix of the governing equations, is usually quite sparse. For example, Stoner (S5) showed a 155-vertex network with a density of 3.2% for the occurrence matrix (i.e., 775 nonzeros out of a total of 155² = 24,025 entries) using formulation C. Still lower densities have been observed on larger networks. In these applications it is of paramount importance that the data structure and data manipulations take full advantage of the sparsity of the governing equations. Sparse computation techniques are also needed in order to capture the full benefit of cycle selection and of row and column reordering. [Pg.166]
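As a quick check of the quoted figure, 775 nonzeros in a 155 × 155 matrix indeed gives a density of about 3.2%. The short Python sketch below (illustrative only; a random matrix stands in for Stoner's actual occurrence matrix) computes the same quantity for any SciPy sparse matrix.

```python
import scipy.sparse as sp

# The quoted figure: 775 nonzeros in a 155 x 155 occurrence matrix.
n, nnz = 155, 775
print(f"density = {nnz / n**2:.1%}")          # -> 3.2%

# The same quantity for an arbitrary SciPy sparse matrix (random
# matrix here, standing in for an actual occurrence matrix):
A = sp.random(n, n, density=0.032, format="csr", random_state=0)
print(f"density = {A.nnz / (A.shape[0] * A.shape[1]):.1%}")
```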

Sparse matrices are ones in which the majority of the elements are zero. If the structure of the matrix is exploited, the solution time on a computer is greatly reduced. See Duff, I. S., J. K. Reid, and A. M. Erisman (eds.), Direct Methods for Sparse Matrices, Clarendon Press, Oxford (1986); Saad, Y., Iterative Methods for Sparse Linear Systems, 2d ed., Society for Industrial and Applied Mathematics, Philadelphia (2003). The conjugate gradient method is one method for solving sparse matrix problems, since it involves only multiplication of a matrix by a vector; thus the sparsity of the matrix is easy to exploit. The conjugate gradient method is an iterative method that, in exact arithmetic, is guaranteed to converge within n iterations when the matrix is an n × n matrix. [Pg.42]
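A minimal sketch of the idea, using SciPy's conjugate gradient solver on a sparse tridiagonal test system (the 1-D discrete Laplacian, chosen here purely for illustration):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# 1-D discrete Laplacian: a sparse, symmetric positive definite
# tridiagonal matrix, stored without its zeros.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# CG needs only matrix-vector products A @ v, so sparsity is
# exploited automatically; info == 0 signals convergence.
x, info = cg(A, b)
print(info, np.linalg.norm(A @ x - b))
```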

SQP. This is a sister code to GRG2, available from the same source. The interfaces to SQP are very similar to those of GRG2. SQP is useful for small problems as well as large sparse ones, employing sparse matrix structures throughout. The implementation and performance of SQP are documented in Fan et al. (1988). [Pg.321]

Matrix D has 1s along the diagonal, reflecting the use of a normalized discrete Laplace equation with α = −k²/[2(h² + 1)], and B is a multiple of the identity matrix with β = −h²/[2(h² + 1)]. Matrix A displays a sparse block structure whose off-diagonal coefficients must be less than 1 in magnitude for the iteration to converge to a solution. [Pg.255]
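The convergence requirement quoted above is an instance of the general rule that a stationary iteration x_{k+1} = Cx_k + d converges exactly when the spectral radius of the iteration matrix C is below 1. A small illustrative check (the coupling values alpha and beta below are invented, not those of the text):

```python
import numpy as np

# Stationary iteration x_{k+1} = C x_k + d converges iff the spectral
# radius of C is < 1.  The coupling strengths alpha and beta below are
# invented for illustration, not the values from the text.
alpha, beta = -0.3, -0.2
n = 8
C = np.zeros((n, n))
for i in range(n - 1):
    C[i, i + 1] = C[i + 1, i] = alpha   # nearest-neighbour coefficients
for i in range(n - 2):
    C[i, i + 2] = C[i + 2, i] = beta    # next-nearest coefficients

rho = max(abs(np.linalg.eigvals(C)))
print(f"spectral radius = {rho:.3f}",
      "-> converges" if rho < 1 else "-> diverges")
```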

One of the most popular refinement programs is the state-of-the-art package Refmac (Murshudov et al., 1997). Refmac uses atomic parameters (xyz, B, occ) but also offers optimization of TLS and anisotropic displacement parameters. The objective function is a maximum-likelihood-derived residual that is available for structure factor amplitudes but can also include experimental phase information. Refmac offers both a sparse-matrix approximation to the normal matrix and a full-matrix calculation. The program is extremely fast, very robust, and capable of delivering excellent results over a wide range of resolutions. [Pg.164]

Figure 7.4 Structure of the CI matrix as blocked by classes of determinants. The HF block is the (1,1) position, the matrix elements between the HF and singly excited determinants are zero by Brillouin's theorem, and those between the HF and triply excited determinants are zero by the Condon-Slater rules. In a system of reasonable size, the remaining regions of the matrix become increasingly sparse, but the number of determinants in each block grows to be extremely large. Thus, the (1,1) eigenvalue is most affected by the doubles, then by the singles, then by the triples, etc.
Spectral data are highly redundant (many vibrational modes of the same molecules) and sparse (large spectral segments with no informative features). Hence, before a full-scale chemometric treatment of the data is undertaken, it is very instructive to understand the structure and variance in the recorded spectra, and eigenvector-based analyses of spectra are common; a primary technique is principal components analysis (PCA). PCA is a linear transformation of the data into a new coordinate system (axes) such that the largest variance lies on the first axis and decreases thereafter for each successive axis. PCA can also be considered a view of the data set that aims to explain all deviations from an average spectral property. Data are typically mean-centered prior to the transformation, and the mean spectrum is used as the base comparator. The transformation to a new coordinate set is performed via matrix multiplication as... [Pg.187]
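A minimal sketch of mean-centered PCA via the singular value decomposition, run on synthetic spectra rather than real data (the matrix shape and the row/column layout are assumptions for illustration):

```python
import numpy as np

# Synthetic stand-in for a matrix of recorded spectra:
# rows = samples, columns = spectral channels (assumed layout).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 400))

# Mean-center so the mean spectrum is the base comparator,
# then take the SVD; right singular vectors are the new axes.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s              # coordinates of each spectrum on the new axes
loadings = Vt               # principal component "spectra"
explained = s**2 / np.sum(s**2)
print(f"variance on the first axis: {explained[0]:.1%}")
```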

Crystallization additives are a common method to improve crystal quality in proteins (McPherson and Cudney, 2006). With RNA, it is common to screen a series of different cations, as their role in RNA structure and catalysis is well documented for many RNA and RNA-protein systems (Pyle, 2002). Since various cations will interact with the RNA differently, our lab uses a cation screen comprising simple metal cations and polyamines (Table 6.1). Each solution in this table is a 10× stock that is added to an optimized condition (or one found in a sparse matrix), and the set of 24 conditions is assessed for potential improvement in crystal quality. [Pg.127]

The matrix elements Hkk′ follow directly from (1) and correspond to directional cosines in a vector space. Transfers between adjacent sites are proportional to tpq. The local structure of VB diagrams limits the outcome to possibilities for Hkk′ that can readily be enumerated [13]. Spin problems in the covalent basis have even simpler [28] Hkk′. The matrix H is not symmetric when the basis is not orthogonal, but it is extremely sparse. This follows because N sites yield about N bonds and each transfer integral gives at most two diagrams. There are consequently 2N off-diagonal Hkk′ in matrices of order PS(N, Ne)/4 for systems with inversion and... [Pg.649]

We see that, because of the structure of the equations, the coefficient matrix is very sparse. For this reason iterative methods of solution may be very efficient. [Pg.94]

There are several general characteristics of a matrix that are particularly useful for the analysis of minimization algorithms. The density of a matrix is the ratio of its nonzero to its zero components. A matrix is said to be dense when this ratio is large and sparse when it is small. A sparse matrix may be structured (e.g., block diagonal, banded) or unstructured (Figure 2). [Pg.4]

Figure 2 Sample matrix patterns for (a) block diagonal and (b-e) sparse unstructured. Pattern (b) corresponds to the Hessian approximation (preconditioner) for a potential energy model from the local energy terms (bond length, bond angle, and dihedral angle terms), and (c) is a reordered matrix pattern that reduces fill-in during the factorization. Pattern (d) comes from a molecular dynamics simulation of super-coiled DNA36 and describes pairs of points along a ribbonlike model of the duplex that come in close contact during the dynamics trajectory; pattern (e) is the associated reordered structure that reduces fill-in.
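To make the structured patterns of Figure 2 concrete, a short sketch (sizes and values are illustrative only) builds a banded and a block diagonal matrix with scipy.sparse and reports their densities:

```python
import numpy as np
import scipy.sparse as sp

# A banded (tridiagonal) matrix and a block diagonal matrix, stored
# sparsely; sizes and values are illustrative only.
n = 12
band = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
block = sp.block_diag([np.ones((3, 3))] * 4)

for name, A in [("banded", band), ("block diagonal", block)]:
    A = A.tocsr()
    print(f"{name:>14s}: density {A.nnz / (A.shape[0] * A.shape[1]):.1%}")
```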
The new coefficient matrix is symmetric, as M⁻¹A can be written as M⁻¹/²AM⁻¹/². Preconditioning aims to produce a more clustered eigenvalue structure for M⁻¹A and/or a lower condition number than for A, so as to improve the relevant convergence ratio; however, preconditioning also adds to the computational effort by requiring that a linear system involving M (namely, Mz = r) be solved at every step. Thus, it is essential for the efficiency of the method that M be factored very rapidly in relation to the original A. This can be achieved, for example, if M is a sparse component of the dense A. Whereas the solution of an n × n dense linear system requires order n³ operations, the work for sparse systems can be as low as order n.13,14
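A minimal sketch of the idea, assuming a simple Jacobi (diagonal) preconditioner rather than any particular M from the text: M is taken as the diagonal of A, so the system Mz = r costs only n divisions per step.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# Sparse SPD test system (1-D discrete Laplacian, illustrative only).
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M = diag(A), so solving Mz = r at each step
# costs only n divisions.  SciPy's cg expects an operator that applies
# M^(-1) to a vector.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d)

iters = []
x, info = cg(A, b, M=M, callback=lambda xk: iters.append(0))
print(f"converged: {info == 0} after {len(iters)} iterations")
```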

