Big Chemical Encyclopedia


An Iterative Algorithm

The first illustrative problem comes from quantum mechanics. An equation in radiation density can be set up but not solved by conventional means. We shall guess a solution, substitute it into the equation, and apply a test to see whether the guess was right. Of course it isn't on the first try, but a second guess can be made and tested to see whether it is closer to the solution than the first. An iterative routine can be set up to carry out very many guesses in a methodical way until the test indicates that the solution has been approximated within some narrow limit. [Pg.2]
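The guess-and-test loop described above can be sketched generically. The radiation-density equation itself is not given, so a simple fixed-point problem (x = cos x) stands in for it here; the structure of the loop is the point.

```python
import math

def iterate(f, x0, tol=1e-10, max_iter=1000):
    """Guess, test, and re-guess until successive guesses agree."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)                # substitute the guess into the equation
        if abs(x_new - x) < tol:    # test: within the narrow limit?
            return x_new
        x = x_new                   # the new guess replaces the old
    raise RuntimeError("no convergence within max_iter guesses")

root = iterate(math.cos, 1.0)       # solves x = cos(x)
```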


Statistical and algebraic methods, too, can be classed as either rugged or not: they are rugged when algorithms are chosen that, on repetition of the experiment, do not get derailed by the random analytical error inherent in every measurement, that is, when similar coefficients are found for the mathematical model, and equivalent conclusions are drawn. Obviously, the choice of the fitted model plays a pivotal role. If a model is to be fitted by means of an iterative algorithm, the initial guess for the coefficients should not be too critical. In a simple calculation a combination of numbers and truncation errors might lead to a division by zero and crash the computer. If the data evaluation scheme is such that errors of this type could occur, the validation plan must make provisions to test this aspect. [Pg.146]

In real practice, the location m and the variance have to be estimated from real data. An iterative algorithm, similar to the one used in Chapter 10 for the robust covariance estimation, is used to calculate the trust function. The main advantage of using this algorithm is that convergence is guaranteed. [Pg.235]

As expected in an iterative algorithm, we start from an initial guess for the parameters. This parameter vector is subsequently improved by the addition of an appropriate parameter shift vector δp, resulting in a better, but probably still not perfect, fit. From this new parameter vector the process is repeated until the optimum is reached. [Pg.148]
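A minimal sketch of such a shift-vector scheme, using a Gauss-Newton step as one common way to compute the shift δp (the snippet does not specify how the shift is obtained); the model and data here are invented for illustration.

```python
import numpy as np

def gauss_newton(model, jac, x, y, p0, tol=1e-8, max_iter=50):
    """Improve the parameter vector by repeated shift vectors dp."""
    p = np.asarray(p0, dtype=float)         # initial guess for the parameters
    for _ in range(max_iter):
        r = y - model(x, p)                 # residuals of the current fit
        J = jac(x, p)                       # Jacobian d(model)/dp
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)  # parameter shift vector
        p = p + dp                          # better, but maybe not yet optimal
        if np.linalg.norm(dp) < tol:        # optimum reached
            break
    return p

# Fit y = a*exp(b*x) to noise-free synthetic data (invented example).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
model = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
p = gauss_newton(model, jac, x, y, [1.0, -1.0])
```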

It is possible to use an iterative algorithm to determine the exact positions of the minima. Again, in such a program the rate constants can be fitted individually, irrespective of the others. [Pg.257]

This, however, means that both the y-data and the scores have to be multiplied by the appropriate weights √wi, after which the classical OLS-based procedure can be applied. In practice, starting values for the weights have to be determined, and they are then updated using an iterative algorithm. [Pg.177]
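A sketch of the scheme just described: multiply the response and the regressors by √wi, run ordinary least squares, update the weights, and repeat. The weight update used here (inverse absolute residual, giving a robust L1-style fit) is only one illustrative choice; the text does not prescribe it.

```python
import numpy as np

def iteratively_reweighted_ols(X, y, n_iter=25, eps=1e-6):
    w = np.ones(len(y))                       # starting values for the weights
    for _ in range(n_iter):
        sw = np.sqrt(w)                       # multiply data by sqrt(w_i) ...
        b, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ b                         # ... then apply classical OLS
        w = 1.0 / np.maximum(np.abs(r), eps)  # update the weights
    return b

# Robust line fit: nine points on y = 2x + 1 plus one gross outlier.
x = np.arange(10.0)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0
y[5] += 50.0
slope, intercept = iteratively_reweighted_ols(X, y)
```

The outlier is progressively down-weighted, so the final coefficients track the nine well-behaved points.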

Here we synthesize the concepts of the last four sections, (i) CSE, (ii) reconstruction, (iii) purification, and (iv) a contracted power method, to obtain an iterative algorithm for the direct calculation of the 2-RDM. [Pg.193]

The rest of the exchange and correlation effects will be taken into account to the first two orders of PT by the total interelectron interaction [13-19]. The electron density is determined by an iterative algorithm [11, 14]. In the first iteration we... [Pg.290]

The advantage of the PLS method is its unproblematic handling of multicollinearities. In contrast with the other methods of multivariate data analysis, the PLS algorithm is an iterative algorithm, which makes it possible to treat data that have more features than objects [GELADI, 1988]. [Pg.200]

The calculated total concentration of component j (Tj) is then compared to the total analytical (input) concentration of component j to calculate the residual in the mass balance. From this point an iterative algorithm based on the Newton-Raphson method and Gaussian elimination (to convert the non-linear equations to linear ones) is used to refine the initial estimates of each component concentration. With each refinement the residual in the mass balance is reduced, until some acceptable limit is reached. [Pg.126]
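A hedged sketch of that refinement loop: a Newton-Raphson iteration driving the mass-balance residual below an acceptance limit. A toy one-component balance, T = c + K·c², stands in for the full multicomponent system solved in the text.

```python
def newton_mass_balance(T_input, K, c0=1e-3, tol=1e-12, max_iter=100):
    """Refine the free concentration c until the mass balance closes."""
    c = c0                               # initial estimate of the concentration
    for _ in range(max_iter):
        T_calc = c + K * c * c           # calculated total concentration
        resid = T_calc - T_input         # residual in the mass balance
        if abs(resid) < tol:             # acceptable limit reached
            break
        dT_dc = 1.0 + 2.0 * K * c        # derivative for the Newton step
        c -= resid / dT_dc               # refined estimate
    return c

c = newton_mass_balance(T_input=0.01, K=100.0)
```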

As we can see, the problem can be given a precise formulation, but what really counts is that it can also be given a solution. I have demonstrated that structures can indeed be reconstructed by using only 10% of the minimum number of projections (Barbieri, 1974a, 1974b, 1987), and an iterative algorithm which exploits memory matrices. More precisely, a reconstruction from incomplete projections is possible if two conditions are met: (1) if the reconstruction method employs memory matrices where new information appears, and (2) if the reconstruction method employs codes, or conventions, which transfer information from the memory space to the real space. [Pg.205]

The solution of the resulting nonlinear equations is usually achieved via an iterative algorithm. Once a converged solution has been obtained it is essential to assess the invariance of the computational results with respect to the temporal and/or spatial discretisation. This aspect is unfortunately often not addressed in computational studies due to, amongst others, computer time constraints. [Pg.247]

Instead of adjusting the parameters of equation (17) empirically, we developed an iterative algorithm based on histogram analysis which permits the best biasing function to be found automatically (see Figure 11). For this, a series of MD simulations with different biasing potentials are performed. The iterations begin with an unbiased MD simulation ... [Pg.881]

Little more could be said of this problem, for such an iterated algorithm is simplicity itself to a digital computer, were it not that... [Pg.71]

As mentioned earlier, the start and end times of various operations must satisfy the various operational constraints. We classify these constraints into four categories and use an iterative algorithm to satisfy them. [Pg.196]

To compute a nonlinear map, the distances between all pairs of descriptors are calculated. The initial positions of the compounds on the map are chosen randomly and then modified in an iterative algorithm until all distances are represented as well as possible. The core algorithm of NLM is a partial least-squares error minimization (PLS). The total error of mapping must be smaller than the distances between the molecules and is therefore given on the NLM, e.g., as the sum of squared errors, E². [Pg.591]
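The mapping loop can be sketched as follows; plain gradient descent on the sum of squared distance errors is used here for simplicity (the weighting and minimizer details of the actual NLM procedure are not reproduced, and the descriptor data are invented).

```python
import numpy as np

def pairwise(P):
    """Distances between all pairs of rows of P."""
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

def nonlinear_map(X, n_iter=3000, lr=0.005, seed=0):
    rng = np.random.default_rng(seed)
    D = pairwise(X)                         # distances between all descriptor pairs
    Y = rng.normal(size=(len(X), 2))        # random initial positions on the map
    for _ in range(n_iter):                 # modify positions iteratively
        diff = Y[:, None, :] - Y[None, :, :]
        d = pairwise(Y) + np.eye(len(X))    # guard the zero diagonal
        err = d - D
        np.fill_diagonal(err, 0.0)
        grad = 2.0 * np.sum((err / d)[:, :, None] * diff, axis=1)
        Y -= lr * grad                      # move points to shrink the error
    return Y

X = np.random.default_rng(1).normal(size=(8, 5))   # 8 "compounds", 5 descriptors
Y = nonlinear_map(X)
E2 = np.sum((pairwise(Y) - pairwise(X)) ** 2) / 2  # sum of squared errors
```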

Fig. 15 Derived building blocks from a clustered virtual library by an iterative algorithm (e.g., a genetic algorithm). The cycle must be repeated until the final library satisfies the required criteria.
The set of nonlinear equations (Equation 12.17) can be solved by different techniques, depending on what pair of variables is specified. An iterative algorithm is outlined here for the case where the number of stages, N, and the bottoms rate, B, are specified. [Pg.388]

We shall let N be the dimension of the finite basis subset used to represent f and v. The calculation can be performed with great efficiency using an iterative algorithm, such as the Lanczos algorithm, that transforms it into tridiagonal form. A continued fraction expansion is then obtained ... [Pg.118]
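A hedged sketch of the Lanczos step mentioned above: starting from a single vector, the recursion builds a tridiagonal representation whose diagonal (αk) and off-diagonal (βk) elements are exactly the coefficients that enter the continued-fraction expansion. No reorthogonalization is done, which is adequate for this small invented example.

```python
import numpy as np

def lanczos(A, v0, m):
    """m-step Lanczos recursion: returns the tridiagonal coefficients."""
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)          # normalized starting vector
    v_prev = np.zeros_like(v)
    b = 0.0
    for k in range(m):
        w = A @ v - b * v_prev           # apply A, subtract previous direction
        alpha[k] = v @ w                 # diagonal element alpha_k
        w -= alpha[k] * v
        if k < m - 1:
            b = np.linalg.norm(w)
            beta[k] = b                  # off-diagonal element beta_k
            v_prev, v = v, w / b
    return alpha, beta

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
A = (M + M.T) / 2                        # symmetric test matrix
alpha, beta = lanczos(A, np.ones(6), 6)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
```

After a full set of steps the tridiagonal matrix T has the same spectrum as A.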

There are different routes to estimating the parameters of a model. Finding the parameters is an optimization problem, and in some situations, a directly computable solution may exist. In other situations an iterative algorithm has to be used. The two most important tools for fitting models in multi-way analysis are called alternating least squares and eigenvalue based solutions. Other approaches also exist, but these are beyond the scope of this book. [Pg.111]
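A minimal sketch of the alternating-least-squares idea on the simplest possible case, a rank-one bilinear model X ≈ a bᵀ: each sub-step is a closed-form least-squares update with the other factor held fixed, and the two are alternated until the fit stops changing. Real multi-way models alternate over more factors but follow the same pattern.

```python
import numpy as np

def als_rank1(X, n_iter=50):
    """Alternating least squares for X ~ outer(a, b)."""
    a = np.ones(X.shape[0])
    b = np.ones(X.shape[1])
    for _ in range(n_iter):
        a = X @ b / (b @ b)       # LS update of a with b held fixed
        b = X.T @ a / (a @ a)     # LS update of b with a held fixed
    return a, b

X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])   # synthetic rank-one data
a, b = als_rank1(X)
```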

These relations lead, on expansion in a Taylor series around an estimated equilibrium composition, to a set of linear equations. The number of unknowns is reduced to the number of elements and phases assumed to be present in the equilibrium. The solution approximates to the Gibbs energy surface on application of an iterative algorithm. Further explanation is given in references [4] and [6]. [Pg.1986]


© 2024 chempedia.info