
Newton-Raphson iteration procedure

Newton-Raphson iterative procedure when successive substitution fails... [Pg.113]

A number of iterative methods exist, as described in Appendix L. TK Solver uses a modified Newton-Raphson iterative procedure (see Sec. L.2), which is satisfactory for a wide variety of problems. [Pg.193]

Using the above expression for as the initial value in the Newton-Raphson iteration procedure [64], i.e.,... [Pg.439]

The boundary conditions that may be simulated with the flow-compaction module are autoclave pressure, impermeability or permeability with prescribed bag pressure, and no displacement or no normal displacement (tangent sliding condition). The governing equations (Eqs [13.3] and [13.4]) are coupled during individual time-steps of the transient solution. A Newton-Raphson iterative procedure is used to solve the resulting nonlinear system of equations. Details of the solution of the flow-compaction model for autoclave processing can be found in reference 17. [Pg.420]
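As a structural illustration of this per-time-step coupling, the minimal sketch below drives a hypothetical scalar residual to zero by Newton-Raphson inside each step of a transient march. The stand-in equation, the function names, and the tolerances are assumptions, not the flow-compaction model of Eqs [13.3] and [13.4].

```python
# Minimal sketch: Newton-Raphson solved to convergence within each time
# step of a transient solution, then the march advances to the next step.

def time_march(residual, jac, u0, dt, n_steps, tol=1e-10, max_newton=25):
    u = u0
    history = [u]
    for step in range(n_steps):
        u_old = u
        # Newton-Raphson loop for the implicit update of this time step
        for _ in range(max_newton):
            r = residual(u, u_old, dt)
            du = -r / jac(u, u_old, dt)
            u += du
            if abs(du) < tol:
                break
        history.append(u)
    return history

# Example: backward-Euler step for du/dt = -u^3 (made-up stand-in equation)
residual = lambda u, u_old, dt: u - u_old + dt * u**3
jac = lambda u, u_old, dt: 1.0 + 3.0 * dt * u**2
print(time_march(residual, jac, u0=1.0, dt=0.1, n_steps=5)[-1])
```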

The polynomial format of the Bender multiparameter EoS yields a simple solution via a Newton-Raphson iteration procedure [9]. We found three iterations provided fluid-phase density values with an associated RCSU of < 2 x 10" % between the third and fourth iteration. We took the value of the 6th iteration, which gave an RCSU of 0.0% between the 4th and 6th iterations. The form of the Bender EoS requires implicit differential expressions [10] to evaluate and propagate the uncertainty to generate the CSU in the amount of fluid change from the material balance, leading to u(n). Implicit differentials were developed for... [Pg.394]
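As a hedged sketch of this kind of density inversion, the snippet below applies Newton-Raphson to a simple virial-like polynomial p(rho) and tracks the relative change between successive iterates, in the spirit of the RCSU check. The coefficients, function names, and conditions are placeholders, not the actual Bender EoS.

```python
# Minimal sketch: Newton-Raphson inversion of a polynomial-in-density EoS,
# p(rho) = p_target solved for rho. The coefficient set is hypothetical.
def pressure(rho, T, R=8.314, B=-1.0e-4, C=2.0e-8):
    # Simple virial-like polynomial in density (placeholder for Bender's form)
    return rho * R * T * (1.0 + B * rho + C * rho**2)

def dp_drho(rho, T, R=8.314, B=-1.0e-4, C=2.0e-8):
    # Analytical derivative -- straightforward because p is polynomial in rho
    return R * T * (1.0 + 2.0 * B * rho + 3.0 * C * rho**2)

def density_from_pressure(p_target, T, rho0, max_iter=6, rtol=1e-12):
    """Invert p(rho) = p_target by Newton-Raphson, monitoring the relative
    change between successive iterates (analogous to the RCSU check)."""
    rho = rho0
    for k in range(max_iter):
        rho_new = rho - (pressure(rho, T) - p_target) / dp_drho(rho, T)
        rel_change = abs(rho_new - rho) / abs(rho_new)
        rho = rho_new
        if rel_change < rtol:          # converged: successive values agree
            break
    return rho

# Example: density at 300 K and 5 MPa, ideal-gas value as the first guess
rho = density_from_pressure(5.0e6, 300.0, rho0=5.0e6 / (8.314 * 300.0))
print(rho)
```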

For liquid-liquid separations, the basic Newton-Raphson iteration for α is converged for equilibrium ratios (K) determined at the previous composition estimate. (It helps, and costs very little, to converge this iteration quite tightly.) Then, using new compositions from this converged inner iteration loop, new values for equilibrium ratios are obtained. This procedure is applied directly for the first three iterations of composition. If convergence has not occurred after three iterations, the mole fractions of all components in both phases are accelerated linearly with the deviation function... [Pg.125]
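The inner/outer structure described here can be sketched as follows, assuming a Rachford-Rice-type objective for the phase fraction and a hypothetical composition-dependent K-value model `update_K`. The linear acceleration of the mole fractions after three iterations is omitted, and this is not the authors' code.

```python
import numpy as np

def phase_fraction_newton(z, K, beta0=0.5, tol=1e-12, max_iter=50):
    """Inner loop: converge the Rachford-Rice function for the phase
    fraction beta at FIXED equilibrium ratios K by Newton-Raphson."""
    beta = beta0
    for _ in range(max_iter):
        denom = 1.0 + beta * (K - 1.0)
        f = np.sum(z * (K - 1.0) / denom)            # residual
        df = -np.sum(z * (K - 1.0) ** 2 / denom**2)  # analytic derivative
        step = f / df
        beta -= step
        if abs(step) < tol:                          # "converge quite tightly"
            break
    return beta

def lle_flash(z, update_K, K0, n_outer=30, tol=1e-8):
    """Outer loop: re-evaluate K at the new compositions and repeat."""
    K = K0
    for it in range(n_outer):
        beta = phase_fraction_newton(z, K)
        x = z / (1.0 + beta * (K - 1.0))   # phase-1 mole fractions
        y = K * x                          # phase-2 mole fractions
        K_new = update_K(x, y)             # hypothetical composition-dependent model
        if np.max(np.abs(K_new - K)) < tol:
            return x, y, beta
        K = K_new
    return x, y, beta

# Example with a made-up feed and constant K (outer loop converges at once)
z = np.array([0.4, 0.3, 0.3])
K0 = np.array([2.5, 0.6, 0.2])
x, y, beta = lle_flash(z, update_K=lambda x, y: K0, K0=K0)
print(x, y, beta)
```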

A more efficient way of solving the DFT equations is via a Newton-Raphson (NR) procedure, as outlined here for a fluid between two surfaces. In this case one starts with an initial guess for the density profile. The self-consistent fields are then calculated and the next guess for the density profile is obtained through a single-chain simulation. The difference from the Picard iteration method is that an NR procedure is used to estimate the new guess for the density profile from the old one and the one monitored in the single-chain simulation. This requires the computation of a Jacobian matrix in the course of the simulation, as described below. [Pg.126]
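A minimal sketch of the NR bookkeeping for a discretized profile is given below; the self-consistent mapping `profile_from_fields` is a placeholder for the single-chain simulation, and the finite-difference Jacobian stands in for the Jacobian computed during the simulation.

```python
import numpy as np

# Sketch of one Newton-Raphson update for a discretized density profile rho
# (one value per grid point between the two surfaces).

def residual(rho, profile_from_fields):
    # Self-consistency condition: the profile predicted from the current
    # fields should reproduce the current profile.
    return profile_from_fields(rho) - rho

def newton_update(rho, profile_from_fields, eps=1e-6):
    n = rho.size
    r = residual(rho, profile_from_fields)
    J = np.empty((n, n))
    # Jacobian assembled column by column by finite differences, standing in
    # for the Jacobian computed in the course of the simulation.
    for j in range(n):
        drho = np.zeros(n)
        drho[j] = eps
        J[:, j] = (residual(rho + drho, profile_from_fields) - r) / eps
    return rho - np.linalg.solve(J, r)   # next guess for the density profile

# Tiny demonstration with a contraction mapping standing in for the simulation
demo = lambda rho: 0.5 * np.tanh(rho) + 0.3
rho = np.full(5, 0.1)
for _ in range(8):
    rho = newton_update(rho, demo)
print(rho)       # converged self-consistent "profile"
```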

A sequence of Newton-Raphson iterations is obtained by solving equation (4:4), redefining the zero point, p0, as the new set of parameters, recalculating g and H, and returning to equation (4:4). Such a procedure converges quadratically, that is, the error vector in iteration n is a quadratic function of the error vector in iteration n-1. This does not necessarily mean that the NR procedure will converge fast, or even at all. However, close to the stationary point we can expect quadratic behaviour. We shall return later to a more precise definition of what close means in this respect. [Pg.210]
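The sketch below illustrates such a sequence on a small test function (not the MCSCF equations): at each step the gradient g and Hessian H are recomputed at the new zero point and the error norm is printed, showing the rapid decay near the stationary point.

```python
import numpy as np

# Toy function f(x, y) = x^2 + y^2 + x^2*y^2 + x^4 with minimum at (0, 0);
# Newton-Raphson steps p <- p - H^{-1} g recomputed at each new zero point.

def gradient(p):
    x, y = p
    return np.array([2 * x + 2 * x * y**2 + 4 * x**3,
                     2 * y + 2 * x**2 * y])

def hessian(p):
    x, y = p
    return np.array([[2 + 2 * y**2 + 12 * x**2, 4 * x * y],
                     [4 * x * y,                2 + 2 * x**2]])

p = np.array([0.9, 0.8])             # starting guess (the "zero point" p0)
for n in range(6):
    g, H = gradient(p), hessian(p)
    p = p - np.linalg.solve(H, g)    # redefine the zero point and repeat
    # error norm shrinks rapidly, roughly squaring once close to (0, 0)
    print(n + 1, np.linalg.norm(p))
```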

All matrix elements in the Newton-Raphson methods may be constructed from the one- and two-particle density matrices and transition density matrices. The linear equation solutions may be found using either direct methods or iterative methods. For large CSF expansions, such micro-iterative procedures may be used to advantage. If a micro-iterative procedure is chosen that requires only matrix-vector products to be formed, expansion-vector-dependent effective Hamiltonian operators and transition density matrices may be constructed for the efficient computation of these products. Sufficient information is included in the Newton-Raphson optimization procedures, through the gradient and Hessian elements, to ensure second-order convergence in some neighborhood of the final solution. [Pg.119]

Poor initial guesses for the output variables may cause the Newton-Raphson SC procedure to fail to converge within a reasonable number of iterations. A new... [Pg.696]

The Newton-Raphson iteration works by incrementally improving an estimate of the values (nw, mi)T of the unknown variables in the reduced basis. The procedure begins with a guess at the variables' values. The first guess might be supplied by... [Pg.70]

Various procedures are available for accelerating the convergence of the modified Newton-Raphson iterations. Figure AIE.1 shows the technique of computing individual acceleration factors when δ1 and δ2 are known. Then, assuming a constant slope of the response curve, and from similar triangles, the value of δ3 is computed... [Pg.745]
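A one-variable sketch of the constant-slope argument follows, assuming the successive corrections form a geometric series so that δ3 and an Aitken-type acceleration factor can be projected from δ1 and δ2. This is an interpretation of the similar-triangles construction, not the code behind Figure AIE.1.

```python
# Constant slope => similar triangles => delta3 / delta2 = delta2 / delta1,
# and the remaining corrections sum to a geometric series.

def projected_next_correction(delta1, delta2):
    # delta3 projected from the two known successive corrections
    return delta2 * (delta2 / delta1)

def acceleration_factor(delta1, delta2):
    # Scaling delta2 by 1/(1 - r), r = delta2/delta1, adds the whole
    # remaining geometric series in one go (Aitken-type acceleration).
    r = delta2 / delta1
    return 1.0 / (1.0 - r)

delta1, delta2 = 0.10, 0.06                         # two known corrections
print(projected_next_correction(delta1, delta2))    # ~0.036
print(acceleration_factor(delta1, delta2))          # scale delta2 by ~2.5
```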

In order to solve Eq. III.49, one can try to use the formula E(k+1) = f(E(k)), which leads to a first-order iteration procedure. Starting from a trial value E(0), one obtains a series E(1), E(2), E(3), ... which may be convergent or divergent. In both cases, one can go over to a second-order iteration process, which is most easily derived by solving the equation F(E) = 0 by means of Newton-Raphson's formula... [Pg.272]
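The two schemes can be contrasted on a toy scalar equation E = f(E), with a hypothetical f standing in for Eq. III.49, as in the sketch below.

```python
import math

def f(E):
    return 1.0 + 0.5 * math.sin(E)       # contraction, so both schemes converge

def F(E):
    return f(E) - E                       # rewrite as F(E) = 0

def dF(E):
    return 0.5 * math.cos(E) - 1.0

# First-order (fixed-point) iteration: E(k+1) = f(E(k))
E = 0.0
for _ in range(25):
    E = f(E)
print("fixed point:", E)

# Second-order (Newton-Raphson) iteration on F(E) = 0
E = 0.0
for _ in range(6):
    E = E - F(E) / dF(E)
print("Newton-Raphson:", E)
```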

While it is technically erroneous to claim that the linearization method does not require any initialization (J2), it is true that the initialization procedure used appears to be quite effective. A more comprehensive discussion of initialization procedures will be given in Section III,A,5. With this initialization procedure, the linearization method appears to converge very rapidly, usually in less than 10 iterations for formulations A and B. Since the evaluation of f(x) and its partial derivatives is not required, the method is also simpler and easier to implement than the Newton-Raphson method. [Pg.156]

The Newton-Raphson procedure was used to find e satisfying F(e) = 0. Iterations began at high conversion and the derivative dF/de was found by numerical differentiation. Convergence was obtained in 5 iterations, with 10 critical point evaluations, in about 10 seconds. The computer used was the University of Calgary Honeywell HIS-Multics system. [Pg.388]
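A minimal sketch of this scheme, with a made-up F(e) in place of the critical-point calculation, might look as follows.

```python
# Newton-Raphson on F(e) = 0 with dF/de obtained by numerical
# (forward-difference) differentiation; F below is a hypothetical placeholder.

def newton_numeric(F, e0, h=1e-6, tol=1e-10, max_iter=20):
    e = e0
    for k in range(max_iter):
        Fe = F(e)
        dFde = (F(e + h) - Fe) / h        # numerical derivative: 2 F-evaluations per iteration
        e_new = e - Fe / dFde
        if abs(e_new - e) < tol:
            return e_new, k + 1
        e = e_new
    return e, max_iter

# Example with a made-up F(e); iterations begin at high conversion
F = lambda e: e**3 - 0.7 * e - 0.2
root, iters = newton_numeric(F, e0=0.95)
print(root, iters)
```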

As stated, the most commonly used procedure for temperature and composition calculations is the versatile computer program of Gordon and McBride [4], who use the minimization of the Gibbs free energy technique and a descent Newton-Raphson method to solve the equations iteratively. A similar method for solving the equations when equilibrium constants are used is shown in Ref. [7]. [Pg.22]

The iteration counter k and the argument x(k) refer to the macroiterations made in the Newton-Raphson procedure, and they are obviously constant within the context of this section. Let us drop them for convenience. Also, let us explicitly assume that the Jacobian is in fact a positive definite Hessian, and that f(x(k)) is a gradient. The equation to be solved is thus rewritten in the form... [Pg.33]

The choice of optimization scheme in practical applications is usually made by considering the convergence rate versus the time needed for one iteration. It seems today that the best convergence is achieved using a properly implemented Newton-Raphson procedure, at least towards the end of the calculation. One full iteration is, on the other hand, more time-consuming in second-order methods than it is in more approximate schemes. It is therefore not easy to make the appropriate choice of optimization method, and different research groups have different opinions on the optimal choice. We shall discuss some of the more commonly implemented methods later. [Pg.209]

Computational aspects of the Newton-Raphson procedure: When constructing methods to solve the system of linear equations (4:22) one should be aware of the dimension of the problem. It is not unusual to have CI expansions comprising 10^4 - 10^6 terms, and orbital spaces with more than two hundred orbitals. In such calculations it is obviously not possible to explicitly construct the Hessian matrix. Instead we must look for iterative algorithms... [Pg.214]
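A sketch of the matrix-free idea follows: the Newton-Raphson linear system is solved by conjugate gradients using only Hessian-vector products, so the Hessian is never stored. The small dense matrix below is only there to make the example runnable; it is not how the products would be formed in a real calculation.

```python
import numpy as np

def conjugate_gradient(hess_vec, g, tol=1e-10, max_iter=200):
    """Solve H x = -g supplying only a Hessian-vector product routine."""
    x = np.zeros_like(g)
    r = -g - hess_vec(x)          # residual of H x = -g
    p = r.copy()
    for _ in range(max_iter):
        Hp = hess_vec(p)
        alpha = r @ r / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + (r_new @ r_new) / (r @ r) * p
        r = r_new
    return x

# Demonstration with a small positive-definite H
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = A @ A.T + 50 * np.eye(50)
g = rng.standard_normal(50)
step = conjugate_gradient(lambda v: H @ v, g)
print(np.linalg.norm(H @ step + g))   # ~0: step solves H x = -g
```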

Several attempts have been made to devise simpler optimization methods than the full second-order Newton-Raphson approach. Some are approximations of the full method, like the unfolded two-step procedure mentioned in the preceding section. Others avoid the construction of the Hessian in every iteration by means of update procedures. An entirely different strategy is used in the so-called super-CI method. Here the approach is to reach the optimal MCSCF wave function by annihilating the singly excited configurations (the Brillouin states) in an iterative procedure. This method will be described below and its relation to the Newton-Raphson method will be illuminated. The method will first be described in the unfolded two-step form. The extension to a folded one-step procedure will be indicated, but not carried out in detail. We therefore assume that every MCSCF iteration starts by solving the secular problem (4:39) with the consequence that the MC reference state does not... [Pg.224]

Computationally the super-CI method is more complicated to work with than the Newton-Raphson approach. The major reason is that the matrix d is more complicated than the Hessian matrix c. Some of the matrix elements of d will contain up to fourth-order density matrix elements for a general MCSCF wave function. In the CASSCF case only third-order terms remain, since rotations between the active orbitals can be excluded. Besides, if an unfolded procedure is used, where the CI problem is solved to convergence in each iteration, the highest-order terms cancel out. In this case up to third-order density matrix elements will be present in the matrix elements of d in the general case. Thus super-CI does not represent any simplification compared to the Newton-Raphson method. [Pg.227]

Class II Methods. The methods of Class II are those that use the simultaneous Newton-Raphson approach, in which all the equations are linearized by a first order Taylor series expansion about some estimate of the primitive variables. In its most general form, this expansion includes terms arising from the dependence of the thermo-physical property models on the primitive variables. The resulting system of linear equations is solved for a set of iteration variable corrections, which are then applied to obtain a new estimate. This procedure is repeated until the magnitudes of the corrections are sufficiently small. [Pg.138]
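The structure of such a simultaneous Newton-Raphson pass can be sketched as below, with a hypothetical two-equation residual standing in for the full set of linearized process equations; the convergence test on the size of the corrections mirrors the description above.

```python
import numpy as np

def residuals(v):
    # Hypothetical stand-in for the full set of process equations
    x, y = v
    return np.array([x**2 + y - 3.0,
                     x + y**2 - 5.0])

def jacobian(v, h=1e-7):
    # First-order Taylor-series coefficients, here by finite differences
    n = v.size
    r0 = residuals(v)
    J = np.empty((n, n))
    for j in range(n):
        dv = np.zeros(n)
        dv[j] = h
        J[:, j] = (residuals(v + dv) - r0) / h
    return J

v = np.array([1.0, 1.0])                 # initial estimate of the iteration variables
for it in range(20):
    # Linearize about the current estimate and solve for the corrections
    dv = np.linalg.solve(jacobian(v), -residuals(v))
    v += dv                              # apply corrections to get a new estimate
    if np.max(np.abs(dv)) < 1e-10:       # corrections sufficiently small
        break
print(v, it + 1)
```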

The equilibrium configuration of the surface region comprising n layers is determined by solving simultaneously the 4n equations obtained by equating to zero the partial derivatives of ΔU with respect to each of the variables. The equations so obtained are nonlinear and are solved by an iterative Newton-Raphson procedure (12), which necessitates calculating the second partial derivatives of ΔU with respect to all possible pairs of variables. A Bendix G15D computer was used for all numerical computations: evaluation of the various lattice sums, calculation of the derivatives of ΔU, and solution of the linearized forms in the Newton-Raphson treatment. [Pg.32]

