Big Chemical Encyclopedia

Iterative Convergence Methods

One of the most common problems in digital simulation is the solution of simultaneous nonlinear algebraic equations. If these equations contain transcendental functions, analytical solutions are impossible, so an iterative trial-and-error procedure of some sort must be devised. If there is only one unknown, a value for the solution is guessed and plugged into the equation or equations to see if it satisfies them. If not, a new guess is made, and the whole process is repeated until the iteration converges (we hope) to the right value. [Pg.91]
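As a concrete sketch of this guess-and-check loop, the interval-halving (bisection) scheme below solves the transcendental equation cos(x) = x, which has no analytical solution. The function, bracket, and tolerances are illustrative assumptions, not values from the text.

```python
import math

def bisection(f, a, b, tol=1e-8, max_iter=200):
    """Trial-and-error by interval halving: guess the midpoint, plug it into
    f to see if it satisfies the equation, and keep the half-interval in
    which the sign of f changes."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m          # root lies in the left half
        else:
            a, fa = m, fm  # root lies in the right half
    return 0.5 * (a + b)

# cos(x) = x is transcendental, so it must be solved iteratively.
root = bisection(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Bisection is slow but cannot diverge, which makes it a useful fallback when faster schemes become unstable.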

The key problem is to find a method for making the new guess that converges rapidly to the correct answer. There are a host of techniques. Unfortunately, there is no best method for all equations. Some methods that converge very rapidly for some equations will diverge for other equations; i.e., the series of new guesses will oscillate around the correct solution with ever-increasing deviations. This is one kind of numerical instability. [Pg.91]

We will discuss only a few of the simplest and most useful methods. Fortunately, in dynamic simulations, we start out from some converged initial steady state. At each instant in time, variables have changed very little from the values they had a short time before. Thus we are always close to the correct solution. [Pg.91]

For this reason, the simple convergence methods are usually quite adequate for dynamic simulations. [Pg.92]

The problem is best understood by considering an example. One of the most common iterative calculations is a vapor-liquid equilibrium bubblepoint calculation. [Pg.92]


THE PROGRAM USES THE ITERATIVE CONVERGENCE METHOD SUGGESTED BY I.H. OLIVER AND LATER MODIFIED BY S.T. KOSTECKE TO CALCULATE FLASH VAPORIZATION OF MULTICOMPONENT MIXTURES CONTAINING UP TO 15 COMPONENTS FROM OIL, GAS AND CHEMICAL PROCESSES. [Pg.549]

Trial-and-error procedures. This refers to the solution of equations using iterative convergence methods (e.g., Newton's method). [Pg.37]

The vapor-liquid equilibrium is assumed ideal. Column pressure P is optimized for each case. With pressure P and the tray liquid compositions x_j known at each point in time on each tray, the temperature T and the vapor compositions y_j can be calculated. This is a bubblepoint calculation and can be solved by a Newton-Raphson iterative convergence method. [Pg.46]
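A minimal sketch of such a bubblepoint calculation is shown below for an ideal binary mixture, applying Newton-Raphson to f(T) = sum_i x_i Psat_i(T) - P. The Antoine constants (roughly benzene and toluene, T in deg C, pressure in kPa) and the starting temperature are illustrative assumptions, not values from the text.

```python
import math

def bubblepoint(x, antoine, P, T0=90.0, tol=1e-6, max_iter=50):
    """Newton-Raphson on f(T) = sum_i x_i * Psat_i(T) - P for an ideal mixture.
    antoine is a list of (A, B, C) with ln Psat [kPa] = A - B / (T [deg C] + C)."""
    T = T0
    for _ in range(max_iter):
        psat = [math.exp(A - B / (T + C)) for A, B, C in antoine]
        f = sum(xi * p for xi, p in zip(x, psat)) - P
        # Analytical derivative: dPsat/dT = Psat * B / (T + C)**2
        df = sum(xi * p * B / (T + C) ** 2
                 for xi, p, (A, B, C) in zip(x, psat, antoine))
        T_new = T - f / df
        if abs(T_new - T) < tol:
            y = [xi * p / P for xi, p in zip(x, psat)]  # vapor compositions
            return T_new, y
        T = T_new
    raise RuntimeError("bubblepoint iteration did not converge")

# Equimolar "benzene/toluene" at atmospheric pressure (illustrative constants).
T_bub, y = bubblepoint([0.5, 0.5],
                       [(13.7819, 2726.81, 217.572),
                        (13.9320, 3056.96, 217.625)],
                       101.325)
```

Because f(T) is smooth and monotonic here, the Newton iteration typically converges in a handful of steps from any reasonable starting temperature.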

Use a forced convergence method, and allow the calculation an extra thousand iterations or more. The wave function obtained by these methods should be tested to make sure it is a minimum and not just a stationary point. This is called a stability test. [Pg.196]

Thus, HyperChem occasionally uses a three-point interpolation of the density matrix to accelerate the convergence of quantum mechanics calculations when the number of iterations is exactly divisible by three and certain criteria are met by the density matrices. The interpolated density matrix is then used to form the Fock matrix used by the next iteration. This method usually accelerates convergent calculations. However, interpolation with the MINDO/3, MNDO, AM1, and PM3 methods can fail on systems that have a significant charge buildup. [Pg.230]
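HyperChem's density-matrix interpolation itself is not documented here, but an analogous scalar idea is Aitken's delta-squared extrapolation, which forms an accelerated estimate from three successive iterates. The fixed-point example below is purely illustrative.

```python
import math

def aitken_accelerate(x0, x1, x2):
    """Extrapolate the limit of a convergent sequence from three successive
    iterates (Aitken's delta-squared); analogous in spirit to interpolating
    a new density matrix from three stored ones."""
    denom = x2 - 2.0 * x1 + x0
    if denom == 0.0:
        return x2  # sequence already (numerically) converged
    return x0 - (x1 - x0) ** 2 / denom

g = lambda x: math.exp(-x)      # slowly convergent fixed-point iteration x = exp(-x)
x0 = 0.5
x1, x2 = g(x0), g(g(x0))
x_acc = aitken_accelerate(x0, x1, x2)
```

After only three plain iterations, the extrapolated value is far closer to the fixed point (about 0.5671) than the third iterate itself.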

If we consider the limiting case where p = 0 and q ≠ 0, i.e., the case where there are no unknown parameters and only some of the initial states are to be estimated, the previously outlined procedure represents a quadratically convergent method for the solution of two-point boundary value problems. Obviously in this case, we need to compute only the sensitivity matrix P(t). It can be shown that under these conditions the Gauss-Newton method is a typical quadratically convergent "shooting method." As such it can be used to solve optimal control problems using the Boundary Condition Iteration approach (Kalogerakis, 1983). [Pg.96]
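A minimal sketch of such a shooting method follows. For simplicity it uses a secant update on the unknown initial slope rather than the Gauss-Newton sensitivity machinery of the text; the boundary value problem y'' = -y with y(0) = 0, y(pi/2) = 1 is an illustrative stand-in (exact solution y = sin t, so the correct slope is 1).

```python
import math

def rk4(f, y0, t0, t1, n=200):
    """Integrate the ODE system y' = f(t, y) with classical 4th-order Runge-Kutta."""
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

def shoot(f, t0, t1, ya, yb, s0, s1, tol=1e-10, max_iter=50):
    """Iterate on the unknown initial slope s until the integrated trajectory
    hits the far boundary condition y(t1) = yb (secant update on the miss)."""
    def miss(s):
        return rk4(f, [ya, s], t0, t1)[0] - yb
    m0, m1 = miss(s0), miss(s1)
    for _ in range(max_iter):
        s2 = s1 - m1 * (s1 - s0) / (m1 - m0)
        if abs(s2 - s1) < tol:
            return s2
        s0, m0, s1, m1 = s1, m1, s2, miss(s2)
    raise RuntimeError("shooting iteration did not converge")

slope = shoot(lambda t, y: [y[1], -y[0]], 0.0, math.pi / 2, 0.0, 1.0, 0.5, 2.0)
```

Because this test problem is linear, the boundary miss is linear in the guessed slope and the secant iteration converges essentially in one correction; the Gauss-Newton version achieves the analogous quadratic behavior on nonlinear problems.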

Successive iterations converge extremely fast: after the fifth step the results hardly change (Table 5.24), which, using the Newton method outlined in Section 3.1, indicates an age of T = 2.9065 Ga. [Pg.305]

The NEB is an iterative minimization method, so it requires an initial estimate for the MEP. The convergence rate of an NEB calculation will depend strongly on how close the initial estimate of the path is to a true MEP. [Pg.147]
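A common (though not the only) way to build that initial estimate is straight-line interpolation between the two endpoint geometries, sketched below; the function name and flat-coordinate representation are assumptions for illustration.

```python
def linear_interpolation_path(r_initial, r_final, n_images):
    """Initial guess for the MEP: place n_images replicas of the system on the
    straight line between the two endpoint geometries (each a flat list of
    coordinates). The endpoints themselves are images 0 and n_images - 1."""
    return [[a + (b - a) * i / (n_images - 1)
             for a, b in zip(r_initial, r_final)]
            for i in range(n_images)]

# Five images between two hypothetical 2-atom-like endpoint geometries.
path = linear_interpolation_path([0.0, 0.0, 0.0], [1.0, 2.0, 0.0], 5)
```

If the straight line passes close to a true MEP, the subsequent NEB minimization converges quickly; for strongly curved paths, a better initial guess (e.g., interpolating through an intermediate geometry) pays off.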

A perturbation expansion version of this matrix inversion method in angular momentum space has been introduced with the Reverse Scattering Perturbation (RSP) method, in which the ideas of the RFS method are used: the matrix inversion is replaced by an iterative, convergent expansion that exploits the weakness of electron backscattering by any atom and sums over significant multiple scattering paths only. [Pg.29]

As illustrated in Fig. 15.5, the initial iterate (point 0) is within the domain of convergence of Newton s method. As a result the iteration converges rapidly. However, imagine the behavior of the algorithm if the starting iterate (initial guess at the solution) were just... [Pg.630]
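The contrast can be sketched with plain Newton iteration on f(x) = arctan(x): a starting iterate near the root converges in a few steps, while one outside the convergence domain oscillates with growing amplitude. The function and starting values are illustrative, not taken from Fig. 15.5.

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=20):
    """Plain Newton iteration; returns (final iterate, converged flag)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, True
    return x, False

f = math.atan
df = lambda x: 1.0 / (1.0 + x * x)

root, ok = newton(f, df, 0.5)              # inside the domain: rapid convergence to 0
x_div, bad = newton(f, df, 2.0, max_iter=8)  # outside it: diverging oscillation
```

For this f, starting guesses with |x0| larger than about 1.39 produce successive iterates of alternating sign and growing magnitude, exactly the instability described in the text.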

Table 2.4 shows the SAS NLIN specifications and the computer output. You can choose one of four iterative methods: modified Gauss-Newton; Marquardt; gradient or steepest-descent; and multivariate secant or false position (SAS, 1985). The Gauss-Newton iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the iterations converge. You also have to specify the model and starting values of the parameters to be estimated. It is optional to provide the partial derivatives of the model with respect to each parameter. Figure 2.9 shows the reaction rate versus substrate concentration curves predicted from the Michaelis-Menten equation with parameter values obtained by four different... [Pg.26]
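The Gauss-Newton idea described here (regressing the residuals onto the partial derivatives of the model) can be sketched for the Michaelis-Menten model v = Vmax*S/(Km + S). This is not SAS NLIN itself; the function name, synthetic data, and starting values are assumptions.

```python
def gauss_newton_mm(S, v, Vmax0, Km0, tol=1e-10, max_iter=100):
    """Gauss-Newton fit of v = Vmax*S/(Km+S): at each step, regress the
    residuals onto the model's partial derivatives w.r.t. (Vmax, Km) and
    update until the parameter steps converge."""
    Vmax, Km = Vmax0, Km0
    for _ in range(max_iter):
        r  = [vi - Vmax * si / (Km + si) for si, vi in zip(S, v)]  # residuals
        j1 = [si / (Km + si) for si in S]                          # d v / d Vmax
        j2 = [-Vmax * si / (Km + si) ** 2 for si in S]             # d v / d Km
        # Normal equations (J^T J) delta = J^T r for the 2x2 system
        a11 = sum(x * x for x in j1)
        a12 = sum(x * y for x, y in zip(j1, j2))
        a22 = sum(x * x for x in j2)
        b1 = sum(x, ) if False else sum(x * ri for x, ri in zip(j1, r))
        b2 = sum(x * ri for x, ri in zip(j2, r))
        det = a11 * a22 - a12 * a12
        d_Vmax = (b1 * a22 - b2 * a12) / det
        d_Km   = (a11 * b2 - a12 * b1) / det
        Vmax, Km = Vmax + d_Vmax, Km + d_Km
        if abs(d_Vmax) < tol and abs(d_Km) < tol:
            return Vmax, Km
    raise RuntimeError("Gauss-Newton iteration did not converge")

# Noiseless synthetic data generated with Vmax = 10, Km = 2.
S_data = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
v_data = [10.0 * s / (2.0 + s) for s in S_data]
Vmax_fit, Km_fit = gauss_newton_mm(S_data, v_data, 9.0, 1.5)
```

On zero-residual data with a reasonable start, Gauss-Newton behaves like Newton's method and recovers the parameters essentially exactly; with noisy data, a Marquardt-style damping term improves robustness.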

MBPT starts with the partition of the Hamiltonian into H = H0 + V. The basic idea is to use the known eigenstates of H0 as the starting point for finding the eigenstates of H. The most advanced solutions to this problem, such as the coupled-cluster method, are iterative: well-defined classes of contributions are iterated until convergence, meaning that the perturbation is treated to all orders. Iterative MBPT methods have many advantages. First, they are economical and still capable of high accuracy. Only a few selected states are treated, and the size of a calculation thus scales modestly with the basis set used to carry out the perturbation expansion. Radial basis sets that are complete in some discretized space can be used [112, 120, 121], and the basis... [Pg.274]

Normally the scaling factors are extracted by minimizing the squared deviation (4), considered as a functional R(A) of the variable set A_i. The calculated frequency parameters now correspond to the harmonic normal frequencies computed with the scaled quantum-mechanical force field (6). The first and second derivatives of R(A) with respect to the scaling factors can be calculated analytically [17,18], which makes it possible to implement rapidly converging minimization procedures of the Newton-Gauss type. Alternative iterative minimization methods have also been proposed [19]. [Pg.345]

As shown in Figure 6, the system successfully selects a reactor diameter and length that meet all specifications in five iterations. The method is stable and converges rapidly despite the nonlinear relationships among the variables. [Pg.390]

Iterative resolution methods obtain the resolved concentration and response matrices through one-at-a-time or simultaneous refinement of the profiles in C, in ST, or in both matrices at each cycle of the optimization process. The profiles in C or ST are tailored according to the chemical properties and the mathematical features of each particular data set. The iterative process stops when a convergence criterion is fulfilled (e.g., a preset number of iterative cycles is exceeded or the lack of fit goes below a certain value) [21, 42, 47-50]. [Pg.431]

Simultaneous Convergence Methods. One drawback of some tearing methods is their relatively limited range of application. For example, the BP methods are more successful for distillation, and the SR-type methods are considered better for mixtures that exhibit a wide range of (pure-component) boiling points (see, however, our remarks above on modified BP and SR methods). Other possible drawbacks (at least in some cases) include the number of times physical properties must be evaluated (several times per outer-loop iteration) if temperature- and composition-dependent physical properties are used. It is the physical-property calculations that generally dominate the computational cost of chemical process simulation problems. Other problems can arise if any of the iteration loops are hard to converge. [Pg.33]

We should also keep in mind that the process conditions may directly influence the composition of the furnace gas mixture, so in this sense the diffusivity ratio a will vary with pressure and temperature due to variations in x_i, whether or not the individual diffusivity ratios are constant. In this case, the present analysis may be combined with a simple zero- or one-dimensional auxiliary model of reactant injection and transport within the furnace. Using the specified injection rate and assumed trial values for the optimum pressure and temperature, the results presented here can be used with such an auxiliary model to compute the composition of the furnace gas mixture. From this estimate of the composition, a value for a and new candidate values for the optimum pressure and temperature can then be calculated. This computational procedure may then be repeated, each time using the final estimates of the optimum conditions as initial guesses for the next iteration. This method should converge quickly because the value of a is a fairly weak function of the composition. [Pg.202]
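The repeated-substitution procedure described here can be sketched generically: when the updated quantity depends only weakly on itself (a small contraction factor, as claimed for a), direct iteration converges in a handful of cycles. The update function below is a hypothetical stand-in with derivative magnitude at most 0.1.

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Repeated substitution x_new = g(x_old); converges rapidly when g
    depends only weakly on x (|g'| << 1). Returns (solution, iterations)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Hypothetical weakly self-dependent update: |g'(x)| <= 0.1 everywhere,
# so each cycle shrinks the error by at least a factor of ten.
x, n_iters = fixed_point(lambda x: 2.0 + 0.1 * math.sin(x), 2.0)
```

The error contracts by roughly the magnitude of g' per cycle, which is why a weak dependence of a on composition makes the proposed pressure-temperature loop converge quickly.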

Iteration and convergence method (one entry per program): explicit equations; monotone sequences and secant method; Newton-Raphson; free ion molalities by difference; Newton-Raphson with continued fraction; Newton-Raphson; Newton-Raphson with continued fraction; continued fraction (for anions only); continued fraction; continued fraction; continued fraction; brute force. [Pg.869]

A problem common to all iterative computational methods is knowing when to quit. An iterative method rarely yields a precise root, but rather gives successive approximations that (if the method converges) approach the root more and more closely. Whether you are doing the calculation by hand or writing a program to do it, you must specify how close is close enough. [Pg.618]
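One way to make "close enough" explicit is to combine a step-size test (absolute plus relative) with a residual test, as in this illustrative helper; the function name, tolerance names, and defaults are all assumptions.

```python
def converged(x_new, x_old, f_value, abs_tol=1e-10, rel_tol=1e-8, res_tol=1e-10):
    """Declare convergence only when BOTH the change between successive
    approximations and the equation residual f(x_new) are small. The mixed
    absolute/relative step test works for roots near zero and far from it."""
    step_ok = abs(x_new - x_old) < abs_tol + rel_tol * abs(x_new)
    residual_ok = abs(f_value) < res_tol
    return step_ok and residual_ok
```

Testing only the step size can stop a slowly creeping iteration too early, while testing only the residual can be misleading for poorly scaled equations; requiring both is a common compromise.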

Use method (iv) to find p and Y. Make an initial estimate (say, using a psychrometric chart). Calculate Y_as from Eq. (12-6). Find p from Table 12-1 and from the Antoine equation (12-5). Repeat until the iteration converges (e.g., using a spreadsheet). [Pg.1335]

