Big Chemical Encyclopedia



Inverse-iteration method

Tan, T.B. and J.P. Letkeman, "Application of D4 Ordering and Minimization in an Effective Partial Matrix Inverse Iterative Method", paper SPE 10493 presented at the 1982 SPE Symposium on Reservoir Simulation, San Antonio, TX (1982). [Pg.401]

The diagonal elements of the matrix A are the inverse polarizabilities α_i^-1, and the off-diagonal elements A_ij are the dipole field tensors T_ij. Equation (9-21) determines how the dipoles are coupled to the static electric field. There are three major methods to determine the dipoles: matrix inversion, iterative methods, and predictive methods. [Pg.225]

Finding the inducible dipoles requires a self-consistent method, because the field that each dipole feels depends on all of the other induced dipoles. There exist three methods for determining the dipoles: matrix inversion, iterative methods, and predictive methods. We describe each of these in turn. [Pg.97]
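As a toy illustration of the iterative (self-consistent) route, the sketch below solves for induced dipoles on three collinear polarizable sites by simple fixed-point iteration. All numbers (polarizability, field, geometry) and the scalar axial-field expression 2μ/r³ are illustrative assumptions, not values from the text.

```python
# Toy self-consistent induced dipoles, solved by simple iteration.
# Three collinear polarizable sites; the axial field of a point dipole
# mu at distance r is taken as 2*mu/r**3 (reduced units). Polarizability,
# external field and geometry are illustrative, not from the text.
alpha = 0.05           # site polarizability (assumed)
e0 = 1.0               # uniform external field (assumed)
pos = [0.0, 1.0, 2.0]  # site coordinates

mu = [alpha * e0] * 3  # zeroth-order guess: uncoupled dipoles
for step in range(100):
    new = []
    for i in range(3):
        field = e0
        for j in range(3):
            if j != i:
                r = abs(pos[i] - pos[j])
                field += 2.0 * mu[j] / r ** 3  # field from the other dipoles
        new.append(alpha * field)
    if max(abs(a - b) for a, b in zip(new, mu)) < 1e-12:
        mu = new
        break
    mu = new
```

Because the coupling here is weak, a few iterations suffice; the converged dipoles satisfy the self-consistency condition μ_i = α(E0 + ΣT μ_j) to machine precision.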

The method established by iterating (11.5.19) is known as the inverse-iteration method with the Rayleigh quotient or simply as the Rayleigh method [7,9]. As should be clear from our discussion, the Rayleigh method is just Newton's method applied to the eigenvalue problem (11.4.28). [Pg.23]

The methods of simple and of inverse iteration apply to arbitrary matrices, but many steps may be required to obtain sufficiently good convergence. It is, therefore, desirable to replace A, if possible, by a matrix that is similar (having the same roots) but having as many zeros as are reasonably obtainable in order that each step of the iteration require as few computations as possible. At the extreme, the characteristic polynomial itself could be obtained, but this is not necessarily advisable. The nature of the disadvantage can perhaps be made understandable from the following observation: in the case of a full matrix, having no null elements, the n roots are functions of the n^2 elements. They are also functions of the n coefficients of the characteristic equation, and cannot be expressed as functions of a smaller number of variables. It is to be expected, therefore, that they... [Pg.72]

For the solution of Equation 10.25 the inverse of matrix A is computed by iterative techniques as opposed to direct methods often employed for matrices of low order. Since matrix A is normally very large, its inverse is more economically found by an iterative method. Many iterative methods have been published, such as successive over-relaxation (SOR) and its variants, the strongly implicit procedure (SIP) and its variants, Orthomin and its variants (Stone, 1968), nested factorization (Appleyard and Cheshire, 1983), and iterative D4 with minimization (Tan and Letkeman, 1982), to name a few. [Pg.176]

Although a direct comparison between the iterative and the extended Lagrangian methods has not been published, the two methods are inferred to have comparable computational speeds based on indirect evidence. The extended Lagrangian method was found to be approximately 20 times faster than the standard matrix inversion procedure [117] and, according to the calculation of Bernardo et al. [208] using different polarizable water potentials, the iterative method is roughly 17 times faster than direct matrix inversion to achieve a convergence of 1.0 x 10^-8 D in the induced dipole. [Pg.242]

We have noted the noise-sensitivity problem of the simple inverse filter and introduced modifications to alleviate these difficulties. Modifications yielded different functional forms for y(ω). The convenient single-step property of the basic method was nevertheless retained. This property contrasts with the need for possibly arbitrary stopping criteria when we use iterative methods, which are computationally more expensive. The iterative methods do, however, allow the user to control the signal-to-noise versus resolution tradeoff by stopping the process when the growth of spurious... [Pg.86]
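The single-step character of such modified inverse filters can be sketched as a Fourier-domain restoration. The blur kernel, signal, and the Wiener-style form conj(H)/(|H|² + ε) below are illustrative stand-ins for the modified y(ω) of the text, not the author's specific functional form.

```python
import cmath

# Single-step regularized inverse filter in the Fourier domain:
# Xr(w) = Y(w) * conj(H(w)) / (|H(w)|^2 + eps).
# Signal, kernel and eps are illustrative; the kernel has an exact
# spectral zero, which the eps term regularizes.
def dft(x, sign=-1):
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=+1)]

x = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]    # true signal: a spike
h = [0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25]  # circular blur kernel
X, H = dft(x), dft(h)
Y = [a * b for a, b in zip(X, H)]                # blurred spectrum
eps = 1e-6                                       # noise-control parameter
Xr = [y * hh.conjugate() / (abs(hh) ** 2 + eps) for y, hh in zip(Y, H)]
restored = [v.real for v in idft(Xr)]            # one-step restoration
```

The spike is recovered in a single pass; the ε term suppresses the frequency where H(ω) vanishes, which is exactly the noise-sensitivity fix the text describes.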

However, there are important advantages to iterative methods when the number of equations to be solved is large. Once the coefficients have been obtained, they may be converted to complex form and added to the original spectrum. Taking the inverse DFT would then yield the restored function. However, if the number of solved coefficients is small, it may be quicker simply to substitute the coefficients into the series representation for v(k) and add this series to u(k). [Pg.280]

The determination of eigenvalues and eigenvectors of the matrix A is based on a routine by Grad and Brebner (1968). The matrix is first scaled by a sequence of similarity transformations and then normalized to have the Euclidean norm equal to one. The matrix is reduced to an upper Hessenberg form by Householder's method. Then the QR double-step iterative process is performed on the Hessenberg matrix to compute the eigenvalues. The eigenvectors are obtained by inverse iteration. [Pg.174]
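The last step of that pipeline, recovering an eigenvector by inverse iteration once an eigenvalue estimate is known, can be sketched on a 2x2 toy matrix (matrix, shift, and start vector are illustrative choices, not from the routine):

```python
# Eigenvector by inverse iteration, given an eigenvalue estimate
# (e.g. one produced beforehand by a QR eigenvalue step).
A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 1 and 3 (toy matrix)
lam = 2.99                    # slightly de-tuned estimate of lambda = 3
x = [1.0, 0.0]                # arbitrary start vector
for _ in range(3):
    # solve (A - lam*I) y = x by Cramer's rule (fine for 2x2)
    a, b = A[0][0] - lam, A[0][1]
    c, d = A[1][0], A[1][1] - lam
    det = a * d - b * c
    y = [(d * x[0] - b * x[1]) / det, (a * x[1] - c * x[0]) / det]
    n = (y[0] ** 2 + y[1] ** 2) ** 0.5
    x = [y[0] / n, y[1] / n]  # normalized eigenvector estimate
```

Because (A - λI) is nearly singular, each solve strongly amplifies the eigencomponent belonging to the nearby eigenvalue; a couple of iterations already give the eigenvector (1, 1)/√2 to high accuracy.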

It may appear as if this is no great improvement, since finding a solution to a linear equation system with direct methods requires about n^3 operations, about half as many as the inversion. However, the solution of the linear equation system can be accomplished by iterative methods where, in each step, some product Jv is formed. Superficially, this cuts down the number of operations, but still requires the Jacobian to be computed and stored. However, for a very large class of important problems, such a product can be efficiently computed without the need of precalculating or storing the Jacobian. [Pg.31]
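A minimal sketch of the point made above: the product Jv can be formed without computing or storing the Jacobian, here via a forward-difference approximation on an illustrative two-equation residual function.

```python
# Matrix-free Jacobian-vector product: J v ~ (F(x + eps*v) - F(x)) / eps,
# the building block of Krylov-type iterative solvers. The residual F
# is an illustrative two-equation nonlinear system.
def F(x):
    return [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]

def jv(F, x, v, eps=1e-7):
    """Forward-difference approximation of the product J(x) v."""
    fx = F(x)
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    return [(fp - f0) / eps for fp, f0 in zip(F(xp), fx)]

# At x = (1, 2) the analytic Jacobian is [[2, 1], [1, 4]],
# so J.(1, 1) = (3, 5); the finite difference reproduces this
# with two residual evaluations and no stored matrix.
approx = jv(F, [1.0, 2.0], [1.0, 1.0])
```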

Venkat Venkatasubramanian: No. There are two ways we handle the knowledge-based guidance. It can be done in real time with the modeler going back and forth between iterations and then guiding the iterations either in the forward model development method, for which the modeler proposes different scenarios, or in the inverse model method, for which the algorithm does the search. The modeler can actually stop the search and force it to go some other direction, based on intuition and experience in how the molecular structure evolves. [Pg.88]

The iterative method is very sensitive to the cavity quality, especially for CPCM and IEFPCM, in which the interaction between two tesserae depends on the inverse of the distance. Some unpublished tests performed by the author on slowly convergent iterative calculations have shown that in the last steps almost all the error norm is due to a few charges that still have very large variations with respect to the previous iteration cycle, whereas all the other charge variations are several orders of magnitude smaller. [Pg.61]

A method similar to the iterative one is the partial closure method [37]. It was formulated originally as an approximate extrapolation of the iterative method at an infinite number of iterations. A subsequent, more general formulation has shown that it is equivalent to using a truncated Taylor expansion with respect to the nondiagonal part of T instead of T^-1 in the inversion method. An interpolation of two sets of charges obtained at two consecutive levels of truncation (e.g. to the third and fourth order) accelerates the convergence rate of the power series [38]. This method is no longer in use, because it has shown serious numerical problems with CPCM and IEFPCM. [Pg.61]

Monte Carlo simulation can involve several methods for using a pseudo-random number generator to simulate random values from the probability distribution of each model input. The conceptually simplest method is the inverse cumulative distribution function (CDF) method, in which each pseudo-random number represents a percentile of the CDF of the model input. The corresponding numerical value of the model input, or fractile, is then sampled and entered into the model for one iteration of the model. For a given model iteration, one random number is sampled in a similar way for all probabilistic inputs to the model. For example, if there are 10 inputs with probability distributions, there will be one random sample drawn from each of the 10 and entered into the model, to produce one estimate of the model output of interest. This process is repeated perhaps hundreds or thousands of times to arrive at many estimates of the model output. These estimates are used to describe an empirical CDF of the model output. From the empirical CDF, any statistic of interest can be inferred, such as a particular fractile, the mean, the variance and so on. However, in practice, the inverse CDF method is just one of several methods used by Monte Carlo simulation software in order to generate samples from model inputs. Others include the composition method and the function-of-random-variables method (e.g. Ang & Tang, 1984). However, the details of the random number generation process are typically contained within the chosen Monte Carlo simulation software and thus are not usually chosen by the user. [Pg.55]
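A minimal sketch of the inverse-CDF step, assuming an exponential input distribution (an illustrative choice) whose inverse CDF is known in closed form, and a trivial pass-through "model":

```python
import math
import random

# Inverse-CDF Monte Carlo sampling sketch: each uniform pseudo-random
# number u is a percentile of the input's CDF; mapping it through the
# inverse CDF gives the corresponding fractile. For an exponential
# distribution with rate lam, the inverse CDF is -ln(1 - u)/lam.
random.seed(42)
lam = 2.0          # rate parameter (assumed)
n_iter = 20000     # number of model iterations
outputs = []
for _ in range(n_iter):
    u = random.random()            # percentile of the CDF
    x = -math.log(1.0 - u) / lam   # fractile of the model input
    outputs.append(x)              # one (trivial) model evaluation
outputs.sort()                     # empirical CDF of the model output
median = outputs[n_iter // 2]      # any statistic can now be read off
```

From the sorted outputs, any fractile, the mean, or the variance can be estimated, exactly as the text describes for the empirical output CDF.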

There are four methods for solving systems of linear equations: Cramer's rule, computing the inverse matrix of A, direct methods, and iterative methods. Cramer's rule and computing the inverse matrix are inefficient and produce inaccurate solutions; these methods must be absolutely avoided. Direct methods are convenient for dense matrices, i.e. matrices having only a few zero elements, whereas iterative methods generally work better for sparse matrices, i.e. matrices having only a few non-zero elements (e.g. band matrices). Special procedures are used to store and fetch sparse matrices, in order to save memory allocations and computer time. [Pg.287]
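A small sketch of why iterative methods suit band matrices: Jacobi iteration on an illustrative tridiagonal system only ever touches the three stored diagonals, never a full n x n array.

```python
# Jacobi iteration on a tridiagonal (band) system A x = b, storing only
# the nonzero diagonals. The coefficients are illustrative; the matrix
# [-1, 4, -1] is diagonally dominant, so Jacobi converges.
n = 6
diag, off = 4.0, -1.0  # main and off-diagonal entries
b = [1.0] * n
x = [0.0] * n
for _ in range(200):
    new = []
    for i in range(n):
        s = b[i]
        if i > 0:
            s -= off * x[i - 1]      # left band neighbor
        if i < n - 1:
            s -= off * x[i + 1]      # right band neighbor
        new.append(s / diag)
    if max(abs(u - v) for u, v in zip(new, x)) < 1e-12:
        x = new
        break
    x = new
```

Each sweep costs O(n) work and O(n) storage, versus O(n^2) storage and O(n^3) work for a naive dense direct solve.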

It is worth noting that corrections δ(k) are computed by Gaussian elimination or iterative methods and not by direct inversion of J(x(k)) (see Sect. 4.2). [Pg.290]

A defining feature of the models discussed in the previous section, regardless of whether they are implemented via matrix inversion, iterative techniques, or predictive methods, is that they all treat the polarization response in each polarizable center using point dipoles. An alternative approach is to model the polarizable centers using dipoles of finite length, represented by a pair of point charges. A variety of different models of polarizability have used this approach, but especially noteworthy are the shell models frequently used in simulations of solid-state ionic materials. [Pg.99]

In most electronegativity equalization models, if the energy is quadratic in the charges (as in Eq. [36]), the minimization condition (Eq. [41]) leads to a coupled set of linear equations for the charges. As with the polarizable point dipole and shell models, solving for the charges can be done by matrix inversion, iteration, or extended Lagrangian methods. [Pg.113]

Information on the particle size distribution can be found by measuring the scattered light flux at several radial locations to characterize the series of annular rings from the particle's diffraction pattern. It is then necessary to invert the relationship between the scattering pattern and the particle size distribution. This can be done using iterative methods or analytical inversion techniques. [Pg.550]

In general cases, the inverse problem (4.1) is an ill-posed problem. Therefore, the linear operator L = A*A on the left-hand side of the Euler equation (4.4) may not satisfy condition (4.35) required for the convergence of the iterative methods. In this case, one should use the principles of regularization theory outlined in Chapter 3. Note that we assume again that both M and D are some real or complex Hilbert spaces with the corresponding inner product operations. [Pg.113]

Matrix A is a 3N x 3N dense matrix. For a small number of unknowns, direct solvers are practical, especially in the case of multiple sources. One can use different types of iterative methods, discussed in Chapter 4, for the solution of this problem. However, if N is large, the storage of A is extremely memory consuming, not to mention the complexity of direct matrix inversion. [Pg.274]

A powerful tool for EM modeling and inversion is the integral equation (IE) method and the corresponding linear and nonlinear approximations, introduced in the previous chapter. One important advantage which the IE method has over the finite difference (FD) and finite element (FE) methods is its greater suitability for inversion. Integral equation formulation readily contains a sensitivity matrix, which can be recomputed at each inversion iteration at little expense. With finite differences, however, this matrix has to be established anew on each iteration at a cost at least equal to the cost of the full forward simulation. [Pg.288]

We denote by e_j the element of the vector corresponding to the r_j-th receiver position. It can be treated as an electric field, generated by an electric source located in the cell V_q. The Fréchet derivative matrix, F, is formed by the components δe_j/δσ. Therefore, the direct, brute-force method of computing the Fréchet matrix would require N_m forward modeling solutions for each inversion iteration. [Pg.387]

If we work at a fixed nuclear conformation (clamped nuclei), Vel will be a function of pMe, i.e. Vel(pMe). The Schrödinger equation (36) is not linear, as the Hamiltonian depends on the eigenfunction Φ. To solve this equation three methods have been devised: a) iterative solution, b) closure solution, and c) matrix inversion. All methods are in current use. According to the type of problem, it is convenient to adopt a different procedure. a) Iterative solution. [Pg.31]

