Big Chemical Encyclopedia


Adams-Moulton Algorithms

The initial value problem, Eqs. 1-3, can be integrated by any marching algorithm based on the Runge-Kutta or Adams-Moulton techniques. Based on the calculated space profiles of C,... [Pg.384]

The predictors in the Adams-Moulton and Milne algorithms call for four previous values. We obtain these by the fourth-order Runge-Kutta method, and we can reduce the step size to improve the accuracy of these methods. Milne's method is unstable in certain cases because the errors do not approach zero as the step size h is reduced. Because of this instability, the Adams-Moulton method is more widely used. [Pg.45]

For stiff differential equations, the backward difference algorithms should be preferred to the Adams-Moulton methods. The well-known code LSODE with different options was published in the 1980s by Hindmarsh for the solution of stiff differential equations with linear multistep methods. The code is very efficient, and different variants of it have been developed, for instance, a version for sparse systems (LSODES). In the International Mathematical and Statistical Library (IMSL), the code of Hindmarsh is called IVPAG and DIVPAG. [Pg.439]
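The stability difference that motivates this preference can be seen even with the simplest members of the two families. Below is a minimal sketch (not from the source; function names and the test problem are illustrative) comparing explicit Euler with the first-order backward-difference method on a stiff linear equation. The implicit equation is solved by Newton iteration with a finite-difference Jacobian, since simple substitution would diverge for a stiff problem:

```python
def forward_euler(f, y0, h, n):
    # Explicit Euler: y_{k+1} = y_k + h f(t_k, y_k)
    t, y = 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def backward_euler(f, y0, h, n, tol=1e-12):
    # First-order backward-difference method: solve
    #   y_{k+1} = y_k + h f(t_{k+1}, y_{k+1})
    # by Newton iteration, estimating df/dy by a finite difference.
    t, y = 0.0, y0
    for _ in range(n):
        t_next = t + h
        z = y  # initial Newton guess
        for _ in range(50):
            g = z - y - h * f(t_next, z)
            eps = 1e-7 * (1.0 + abs(z))
            dg = 1.0 - h * (f(t_next, z + eps) - f(t_next, z)) / eps
            step = g / dg
            z -= step
            if abs(step) < tol:
                break
        y, t = z, t_next
    return y

# Stiff test problem: y' = -50 y, y(0) = 1; exact solution exp(-50 t).
f = lambda t, y: -50.0 * y
h, n = 0.1, 20  # |h * lambda| = 5: far outside the explicit stability region
print(abs(forward_euler(f, 1.0, h, n)))   # explodes: amplified by |1 - 5| = 4 per step
print(abs(backward_euler(f, 1.0, h, n)))  # damped by 1/(1 + 5) per step, like the true solution
```

The explicit method amplifies the solution by a factor of 4 per step at this step size, while the implicit method damps it, which is exactly the behavior that makes backward-difference (Gear-type) algorithms the choice for stiff systems.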

For example, the trapezium algorithm, also called the Crank-Nicolson or the second-order Adams-Moulton ... [Pg.60]
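The trapezium / second-order Adams-Moulton step is y_{n+1} = y_n + (h/2)(f_n + f_{n+1}), which is implicit in y_{n+1}. A minimal sketch (the function names and fixed-point solver are illustrative assumptions, not from the source) solves it by a few substitution sweeps, which converge for nonstiff problems:

```python
import math

def adams_moulton_2(f, y0, h, n, sweeps=3):
    # Trapezium rule (second-order Adams-Moulton / Crank-Nicolson):
    #   y_{k+1} = y_k + (h/2) * (f(t_k, y_k) + f(t_{k+1}, y_{k+1}))
    # The implicit equation is solved by fixed-point sweeps, which
    # converge for nonstiff problems (roughly |h * df/dy| / 2 < 1).
    t, y = 0.0, y0
    for _ in range(n):
        fk = f(t, y)
        z = y + h * fk          # explicit Euler predictor
        for _ in range(sweeps):
            z = y + 0.5 * h * (fk + f(t + h, z))
        y, t = z, t + h
    return y

# Test on y' = -y, y(0) = 1; exact solution exp(-t).
y_num = adams_moulton_2(lambda t, y: -y, 1.0, 0.01, 100)
print(abs(y_num - math.exp(-1.0)))  # second-order accuracy: error on the order of 1e-6
```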

The third-order implicit Adams-Moulton algorithm ... [Pg.63]

To better understand this new way of looking at the problem, we can take the example of a particular algorithm, such as the fourth-order Adams-Moulton algorithm in its multivalue version. [Pg.90]

For example, the multivalue fourth-order Adams-Moulton algorithm has the same local error as the corresponding multistep fourth-order Adams-Moulton algorithm ... [Pg.94]

For example, the fourth-order multivalue Adams-Moulton algorithm has a local error ... [Pg.95]

To understand how to change the order of the integration method, let us suppose we are using the Adams-Moulton algorithms and that we have already initialized them for the generic order p, which is smaller than the maximum order permitted. [Pg.98]

The Gear algorithms are stable for stiff problems, whereas the Adams-Moulton algorithms are unstable for orders larger than 2. [Pg.104]

For reasons that will be explained in due course, a substitution iterative method is adopted when the system to be solved is nonstiff and, in this case, the algorithm adopted belongs to the Adams-Moulton family. Conversely, in stiff problems, the nonlinear system is solved using the Newton method and the algorithm belongs to the Gear family. [Pg.105]

For example, in the case of the third-order multistep algorithm of the family of explicit Adams-Bashforth methods and of implicit Adams-Moulton methods,... [Pg.106]
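Writing f_j = f(t_j, y_j), the standard three-step explicit Adams-Bashforth formula and the corresponding implicit Adams-Moulton formula (order conventions vary between authors; these are the usual third-order forms) are

\begin{align}
\text{AB3 (explicit):}\quad y_{n+1} &= y_n + \frac{h}{12}\bigl(23 f_n - 16 f_{n-1} + 5 f_{n-2}\bigr),\\
\text{AM3 (implicit):}\quad y_{n+1} &= y_n + \frac{h}{12}\bigl(5 f_{n+1} + 8 f_n - f_{n-1}\bigr).
\end{align}

The Adams-Moulton formula achieves the same order with one fewer back value, at the price of the implicit appearance of f_{n+1}.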

Since the predictor-corrector method is particularly advantageous (when it works) in terms of computation time and memory allocation (it does not need to store the Jacobian), it is used for nonstiff problems, together with algorithms that are not good at solving stiff problems but have better accuracy features (usually the Adams-Moulton methods are adopted). [Pg.108]

For the nonstiff problems, based on multivalue algorithms of the Adams-Moulton family ... [Pg.117]

In practice, implicit multistep methods are used to improve upon approximations obtained by explicit methods. This combination is the so-called predictor-corrector method. Predictor-corrector methods employ a single-step method, such as the Runge-Kutta method of order 4, to generate the starting values for an explicit method, such as an Adams-Bashforth method. Then the approximation from the explicit method is improved by use of an implicit method, such as an Adams-Moulton method. There are also variable step size algorithms associated with the predictor-corrector strategy in the literature [5,25]. [Pg.409]
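The scheme just described can be sketched as follows (a minimal PECE illustration, assuming the classical fourth-order Adams-Bashforth/Adams-Moulton pair; the function names are not from the source):

```python
import math

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta step, used to start the multistep method.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, y0, t0, h, n):
    # Fourth-order Adams-Bashforth-Moulton predictor-corrector (PECE):
    #   predictor (AB4): y* = y_k + h/24 (55 f_k - 59 f_{k-1} + 37 f_{k-2} - 9 f_{k-3})
    #   corrector (AM4): y_{k+1} = y_k + h/24 (9 f(t_{k+1}, y*) + 19 f_k - 5 f_{k-1} + f_{k-2})
    ts = [t0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                      # RK4 supplies the three extra starting values
        ys.append(rk4_step(f, ts[i], ys[i], h))
    fs = [f(t, y) for t, y in zip(ts[:4], ys)]
    for k in range(3, n):
        y_pred = ys[k] + h / 24 * (55 * fs[k] - 59 * fs[k-1] + 37 * fs[k-2] - 9 * fs[k-3])
        f_pred = f(ts[k+1], y_pred)         # Evaluate at the predicted point
        y_corr = ys[k] + h / 24 * (9 * f_pred + 19 * fs[k] - 5 * fs[k-1] + fs[k-2])
        ys.append(y_corr)
        fs.append(f(ts[k+1], y_corr))       # Final evaluation, reused at the next step
    return ys

# Test on y' = y, y(0) = 1; exact solution e^t.
ys = abm4(lambda t, y: y, 1.0, 0.0, 0.1, 10)
print(abs(ys[-1] - math.e))  # small: the pair is fourth-order accurate
```

Note that each step costs only two evaluations of f, regardless of order, which is the main appeal of the Adams predictor-corrector pairs over Runge-Kutta methods for expensive right-hand sides.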

The numerical integration of the equations of motion, equations (4) and (5), is straightforward, and various well-known integration methods are used. For example, the fixed step-size fourth-order Runge-Kutta and Adams-Moulton fifth-order predictor/sixth-order corrector (initiated with a fourth-order Runge-Kutta) algorithms (see, for example, Ref. 7h) are often used. [Pg.3058]

The quasiclassical trajectory method was used to study this system, and the variable step size modified Bulirsch-Stoer algorithm was specially developed for recombination problems such as this one. Comparisons were made with the fourth-order Adams-Bashforth-Moulton predictor-corrector algorithm, and the modified Bulirsch-Stoer method was always more efficient, with the relative efficiency of the Bulirsch-Stoer method increasing as the desired accuracy increased. We measure the accuracy by computing the rms relative difference between the initial coordinates and momenta and their back-integrated values. For example, for an rms relative difference of 0.01, the ratio of the CPU times for the two methods was 1.6; for an rms relative difference of 0.001 it was 2.0; and for an rms relative difference of 0.0001 it was 3.3. Another advantage of the variable step size method is that the errors in individual trajectories are more similar, e.g. a test run of ten trajectories yielded rms errors which differed by a factor of 53 when using the modified Bulirsch-Stoer... [Pg.374]







© 2024 chempedia.info