Big Chemical Encyclopedia

Matrix in computers

Matrix in computation of multicomponent transport properties; momentum flux factor, N/m³ ... [Pg.869]

S. Nikolić, N. Trinajstić, and B. Zhou, On the eigenvalues of the ordinary and reciprocal resistance-distance matrix, in Computational Methods in Science and Engineering, Vol. I, ed. G. Maroulis and T.E. Simos, American Institute of Physics, Melville, NY, 2009, pp. 205-214. [Pg.111]

Head and Silva used occupation numbers obtained from a periodic HF density matrix for the substrate to define localized orbitals in the chemisorption region, which then define a cluster subspace on which to carry out HF calculations [181]. Contributions from the surroundings also come only from the bare slab, as in the Green's matrix approach. Increases in computational power and improvements in minimization techniques have made it easier to obtain the electronic properties of adsorbates by supercell slab techniques, leading to the Green's function methods becoming less popular [182]. [Pg.2226]

Baker J and Hehre W J 1991 Geometry optimization in Cartesian coordinates: the end of the Z-matrix J. Comput. Chem. 12 606... [Pg.2357]

The benefits of using this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS. Therefore, an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time. As our observations have shown, the same is not the case with PLS. Therefore, SVD as a data transformation technique enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables as an input vector of much lower dimensionality for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95%) in a few initial columns of the scores matrix. [Pg.217]
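A minimal numpy sketch of this kind of SVD compression (the matrix sizes and the number of retained latent variables k are illustrative, not taken from the text):

```python
import numpy as np

# Hypothetical descriptor matrix: 50 molecules x 200 descriptors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))

# SVD: X = U @ diag(s) @ Vt.  The scores T = U * s concentrate the
# variance in their leading columns, so a truncated score matrix can
# replace the full descriptor set as network input.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 10                               # number of latent variables kept
T = U[:, :k] * s[:k]                 # reduced input, 50 x 10

# Fraction of total variance captured by the first k scores.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(T.shape, round(float(explained), 3))
```

The same scores are obtained by projecting X onto the first k loading vectors, which is how new samples would be compressed before being fed to the network.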

Note that in equation system (2.64) the coefficients matrix is symmetric, sparse (i.e. a significant number of its members are zero) and banded. The symmetry of the coefficients matrix in the global finite element equations is not guaranteed for all applications (in particular, in most fluid flow problems this matrix will not be symmetric). However, the finite element method always yields sparse and banded sets of equations. This property should be utilized to minimize computing costs in complex problems. [Pg.48]
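The banded, symmetric structure can be exploited directly; a sketch using scipy's symmetric banded solver on a small tridiagonal system (the system itself is illustrative):

```python
import numpy as np
from scipy.linalg import solveh_banded

# Symmetric tridiagonal system, as arises from a 1-D discretisation:
# only the diagonal and one off-diagonal are stored, so storage and
# factorisation cost scale linearly with n instead of n^2 / n^3.
n = 6
ab = np.zeros((2, n))        # upper-banded storage
ab[0, 1:] = -1.0             # superdiagonal
ab[1, :] = 2.0               # main diagonal
b = np.ones(n)

x = solveh_banded(ab, b)

# Verify against the equivalent dense system.
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
print(np.allclose(A @ x, b))
```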

Note that the definite integrals in the members of the elemental stiffness matrix in Equation (2.77) are given, uniformly, between the limits of -1 and +1. This provides an important facility for the evaluation of the members of the elemental matrices in finite element computations by a systematic numerical integration procedure (see Section 1.8). [Pg.53]

A diagonal matrix has nonzero elements only on the principal diagonal and zeros elsewhere. The unit matrix is a diagonal matrix. Large matrices with small matrices symmetrically lined up along the principal diagonal are sometimes encountered in computational chemistry. [Pg.40]
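A small illustration of such a block-diagonal matrix (the blocks are made up for the example); each block can be treated as an independent sub-problem:

```python
import numpy as np
from scipy.linalg import block_diag

# Two small matrices lined up along the principal diagonal.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0]])
M = block_diag(A, B)
print(M)

# Entries outside the blocks are zero, and the determinant factors
# into the determinants of the blocks.
print(np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(B)))
```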

The orbitals used for methane, for example, are four Is Slater orbitals of hydrogen and one 2s and three 2p Slater orbitals of carbon, leading to an 8 x 8 secular matrix. Slater orbitals are systematic approximations to atomic orbitals that are widely used in computer applications. We will investigate Slater orbitals in more detail in later chapters. [Pg.221]

The most recent developments in computational structural analysis are almost all based on the direct stiffness matrix method. As a result, piping stress computer programs such as SIMPLEX, ADLPIPE, NUPIPE, PIPESD, and CAESAR, to name a few, use the stiffness method. [Pg.63]

The simple-minded approach for minimizing a function is to step one variable at a time until the function has reached a minimum, and then switch to another variable. This requires only the ability to calculate the function value for a given set of variables. However, as the variables are not independent, several cycles through the whole set are necessary for finding a minimum. This is impractical for more than 5-10 variables, and may not work anyway. Essentially all optimization methods used in computational chemistry thus assume that at least the first derivative of the function with respect to all variables, the gradient g, can be calculated analytically (i.e. directly, and not as a numerical differentiation by stepping the variables). Some methods also assume that the second derivative matrix, the Hessian H, can be calculated. [Pg.316]
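A minimal sketch of a gradient-based minimization of the kind described: steepest descent on a coupled quadratic with an analytic gradient (the Hessian and step size are illustrative):

```python
import numpy as np

# f(x) = 0.5 x^T H x, with coupled variables (off-diagonal Hessian
# terms), so one-variable-at-a-time stepping would need many cycles.
H = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # second-derivative matrix (Hessian)
x = np.array([1.0, 1.0])

for _ in range(200):
    g = H @ x                       # analytic gradient, no numerical stepping
    x = x - 0.2 * g                 # fixed step along the descent direction -g

print(np.allclose(x, 0.0, atol=1e-6))   # converged to the minimum at 0
```

Methods that also use the Hessian (e.g. Newton steps x - H⁻¹g) would reach the minimum of this quadratic in a single step.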

A few comments on the layout of the book. Definitions or common phrases are marked in italic; these can be found in the index. Underlining is used for emphasizing important points. Operators, vectors and matrices are denoted in bold, scalars in normal text. Although I have tried to keep the notation as consistent as possible, different branches in computational chemistry often use different symbols for the same quantity. In order to comply with common usage, I have elected sometimes to switch notation between chapters. The second derivative of the energy, for example, is called the force constant k in force field theory; the corresponding matrix is denoted F when discussing vibrations, and called the Hessian H for optimization purposes. [Pg.443]

If we consider the relative merits of the two forms of the optimal reconstructor, Eqs. 16 and 17, we note that both require a matrix inversion. Computationally, the size of the matrix inversion is important. Eq. 16 inverts an M x M (measurements) matrix and Eq. 17 a P x P (parameters) matrix. In a traditional least-squares system there are fewer parameters estimated than there are measurements, i.e. M > P, indicating Eq. 16 should be used. In a Bayesian framework we are trying to reconstruct more modes than we have measurements, i.e. P > M, so Eq. 17 is more convenient. [Pg.380]
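The equivalence of the two inversion sizes can be illustrated with a ridge-regularised least-squares reconstructor, written once with a P x P inverse and once with an M x M inverse (a generic stand-in for the text's Eqs. 16 and 17, not the exact reconstructor discussed there):

```python
import numpy as np

# Two algebraically equivalent solutions of min ||A x - y||^2 + lam ||x||^2:
#   x = (A^T A + lam I_P)^-1 A^T y    -- inverts a P x P matrix
#   x = A^T (A A^T + lam I_M)^-1 y    -- inverts an M x M matrix
rng = np.random.default_rng(1)
M, P = 8, 20                        # fewer measurements than parameters
A = rng.normal(size=(M, P))
y = rng.normal(size=M)
lam = 0.1

x_p = np.linalg.solve(A.T @ A + lam * np.eye(P), A.T @ y)
x_m = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(M), y)
print(np.allclose(x_p, x_m))        # identical result from either form
```

Since both forms give the same estimate, the cheaper choice is whichever matrix (M x M or P x P) is smaller for the problem at hand.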

Steady-state solutions are found by iterative solution of the nonlinear residual equations R(a,P) = 0 using Newton's method, as described elsewhere (28). Contributions to the Jacobian matrix are formed explicitly in terms of the finite element coefficients for the interface shape and the field variables. Special matrix software (31) is used for Gaussian elimination of the linear equation sets which result at each Newton iteration. This software accounts for the special "arrow" structure of the Jacobian matrix and computes an LU-decomposition of the matrix so that quasi-Newton iteration schemes can be used for additional savings. [Pg.309]

The standard way to answer the above question would be to compute the probability distribution of the parameter and, from it, to compute, for example, the 95% confidence region on the parameter estimate obtained. We would, in other words, find a set of values Iθ such that the probability that we are correct in asserting that the true value θ of the parameter lies in Iθ is 95%. If we assumed that the parameter estimates are at least approximately normally distributed around the true parameter value (which is asymptotically true in the case of least squares under some mild regularity assumptions), then it would be sufficient to know the parameter dispersion (variance-covariance matrix) in order to be able to compute approximate ellipsoidal confidence regions. [Pg.80]
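A sketch of such an approximate ellipsoidal region under the normality assumption (the estimate and covariance values are made up; 5.991 is the 95% chi-squared quantile for two parameters):

```python
import numpy as np

# A point p lies inside the approximate 95% confidence ellipsoid when
#   (p - p_hat)^T C^-1 (p - p_hat) <= chi2_{0.95, df=2}
p_hat = np.array([1.0, 2.0])                  # illustrative estimate
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])                  # variance-covariance matrix
R2 = 5.991                                    # chi-squared 95% quantile, 2 dof

def inside(p):
    d = p - p_hat
    return float(d @ np.linalg.solve(C, d)) <= R2

print(inside(p_hat), inside(p_hat + np.array([1.0, 1.0])))
```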

Within explicit schemes the computational effort to obtain the solution at the new time step is very small; the main effort lies in a multiplication of the old solution vector with the coefficient matrix. In contrast, implicit schemes require the solution of an algebraic system of equations to obtain the new solution vector. However, the major disadvantage of explicit schemes is their instability [84]. The term stability is defined via the behavior of the numerical solution for t → ∞. A numerical method is regarded as stable if the approximate solution remains bounded for t → ∞, given that the exact solution is also bounded. Explicit time-step schemes tend to become unstable when the time step size exceeds a certain value (an example of a stability limit for PDE solvers is the von Neumann criterion [85]). In contrast, implicit methods are usually stable. [Pg.156]
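The stability contrast can be demonstrated on the scalar test problem dy/dt = -lam*y with a step size that violates the explicit stability limit dt < 2/lam (all values are illustrative):

```python
# Explicit vs implicit Euler on dy/dt = -lam * y, exact solution decays.
lam, dt, n = 10.0, 0.25, 40        # dt exceeds the explicit limit 2/lam = 0.2
y_exp = y_imp = 1.0

for _ in range(n):
    y_exp = y_exp + dt * (-lam * y_exp)   # explicit Euler: amplifies, |1-lam*dt|>1
    y_imp = y_imp / (1.0 + lam * dt)      # implicit Euler: damped for any dt > 0

print(abs(y_exp) > 1.0, abs(y_imp) < 1.0)   # unbounded growth vs bounded decay
```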

The NIPALS algorithm is easy to program, particularly with a matrix-oriented computer notation, and is highly efficient when only a few latent vectors are required, such as for the construction of a two-dimensional biplot. It is also suitable for implementation in personal or portable computers with limited hardware resources. [Pg.136]
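A compact sketch of the NIPALS idea for principal components, extracting one latent vector at a time by alternating regressions and deflation (a generic PCA variant written for illustration, not the exact published routine):

```python
import numpy as np

def nipals(X, n_comp, tol=1e-10, max_iter=500):
    """Extract latent vectors one at a time: alternate score/loading
    regressions until convergence, then deflate X and repeat."""
    X = X.copy()
    scores, loadings = [], []
    for _ in range(n_comp):
        t = X[:, 0].copy()                   # start from a column of X
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)            # loadings from current scores
            p /= np.linalg.norm(p)
            t_new = X @ p                    # scores from current loadings
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X -= np.outer(t, p)                  # deflate: remove this component
        scores.append(t)
        loadings.append(p)
    return np.column_stack(scores), np.column_stack(loadings)

rng = np.random.default_rng(2)
Xc = rng.normal(size=(20, 5))
Xc -= Xc.mean(axis=0)                        # column-centred data

T, P = nipals(Xc, 2)
# The first NIPALS score vector matches the leading principal
# component from a full SVD, up to an overall sign.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.allclose(np.abs(T[:, 0]), np.abs(U[:, 0] * s[0]), atol=1e-6))
```

Because components are computed one at a time, the loop can stop after the two latent vectors needed for a biplot, without ever forming the full decomposition.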

J.J. Dongarra, C.B. Moler, J.R. Bunch and G.W. Stewart, LINPACK Users' Guide. SIAM, Philadelphia, 1979. B.T. Smith, Matrix Eigensystem Routines — EISPACK Guide, 2nd edn., Lecture Notes in Computer Science, Vol. 6. Springer, New York, 1976. [Pg.159]

Furthermore, the implementation of the Gauss-Newton method also incorporated the use of the pseudo-inverse method to avoid instabilities caused by the ill-conditioning of matrix A, as discussed in Chapter 8. In reservoir simulation this may occur, for example, when a parameter zone is outside the drainage radius of a well and is therefore not observable from the well data. Most importantly, in order to realize substantial savings in computation time, the sequential computation of the sensitivity coefficients discussed in detail in Section 10.3.1 was implemented. Finally, the numerical integration procedure that was used was a fully implicit one to ensure stability and convergence over a wide range of parameter estimates. [Pg.372]
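A toy illustration of the pseudo-inverse idea: a near-zero singular value, such as would arise from an unobservable parameter, is truncated rather than inverted (the matrix and cutoff are made up for the example):

```python
import numpy as np

# Nearly singular system: the second parameter is effectively
# unobservable, so a direct inverse would amplify noise by ~1e14.
A = np.array([[1.0, 0.0],
              [0.0, 1e-14]])
b = np.array([1.0, 1.0])

# pinv discards singular values below rcond * s_max instead of
# inverting them, stabilising the step.
x = np.linalg.pinv(A, rcond=1e-10) @ b
print(x)                                   # -> [1. 0.] ; no blow-up
```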

What we have not discussed so far is how the contribution of the final components of the Kohn-Sham matrix in equation (7-12), i.e., the exchange-correlation part, can be computed. What we need to solve are terms such as... [Pg.121]

The dynamics of inter- vs intrastrand hole transport has also been the subject of several theoretical investigations. Bixon and Jortner [38] initially estimated a penalty factor of ca. 1/30 for interstrand vs intrastrand G to G hole transport via a single intervening A T base pair, based on the matrix elements computed by Voityuk et al. [56]. A more recent analysis by Jortner et al. [50] of strand cleavage results reported by Barton et al. [45] led to the proposal that the penalty factor depends on strand polarity, with a factor of 1/3 found for a 5′-GAC(G) sequence and 1/40 for a 3′-GAC(G) sequence (interstrand hole acceptor in parentheses). The origin of this penalty is the reduced electronic coupling between bases in complementary strands. [Pg.70]

