Big Chemical Encyclopedia


Computational methods Newton-Raphson

Unconstrained optimization methods [W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986, Chapter 10] can use values of only the objective function, or of first derivatives of the objective function, second derivatives of the objective function, etc. HyperChem uses first-derivative information and, in the Block Diagonal Newton-Raphson case, second derivatives for one atom at a time. HyperChem does not use optimizers that compute the full set of second derivatives (the Hessian) because it is impractical to store the Hessian for macromolecules with thousands of atoms. A future release may make explicit-Hessian methods available for smaller molecules, but at this release only methods that store the first-derivative information, or the second derivatives of a single atom, are used. [Pg.303]

The full Newton-Raphson method computes the full Hessian matrix A of second derivatives and then computes a new guess at the 3N coordinate vector x, according to... [Pg.306]
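
The update formula itself is cut off in this excerpt; as a reference point (not a quotation from the source), the standard full Newton-Raphson step, with g(x_k) the gradient and A(x_k) the Hessian at the current coordinates, takes the form

```latex
x_{k+1} \;=\; x_k \;-\; A^{-1}(x_k)\, g(x_k)
```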

A⁻¹(x_k) is the inverse Hessian matrix of second derivatives, which, in the Newton-Raphson method, must therefore be inverted. This can be computationally demanding for systems with many atoms and can also require a significant amount of storage. The Newton-Raphson method is thus more suited to small molecules (usually fewer than 100 atoms or so). For a purely quadratic function the Newton-Raphson method finds the minimum in one step from any point on the surface, as we will now show for our function f(x, y) = x² + 2y². [Pg.285]
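
A minimal numerical sketch of this one-step property, assuming the quadratic test function f(x, y) = x² + 2y² quoted above (the starting point is an arbitrary choice):

```python
import numpy as np

def grad(p):
    """Analytic gradient of f(x, y) = x**2 + 2*y**2."""
    x, y = p
    return np.array([2.0 * x, 4.0 * y])

# The Hessian of a quadratic function is constant.
hessian = np.array([[2.0, 0.0],
                    [0.0, 4.0]])

p0 = np.array([3.0, -1.5])                    # arbitrary starting point
p1 = p0 - np.linalg.solve(hessian, grad(p0))  # one Newton-Raphson step

print(p1)   # [0. 0.] -- the minimum is reached in a single step
```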

Numerical Derivatives The results given above can be used to obtain numerical derivatives when solving problems on the computer, in particular for the Newton-Raphson method and homotopy methods. Suppose one has a program, subroutine, or other function evaluation device that will calculate f given x. One can estimate the value of the first derivative at x0 using... [Pg.471]
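
A sketch of how such a forward-difference estimate can drive a Newton-Raphson iteration; the test function and the step size h are illustrative choices, not taken from the original text:

```python
def derivative(f, x0, h=1e-7):
    """Forward-difference estimate of f'(x0): (f(x0 + h) - f(x0)) / h."""
    return (f(x0 + h) - f(x0)) / h

def newton_raphson(f, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 using numerically estimated derivatives."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / derivative(f, x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of f(x) = x**3 - 2*x - 5 near x = 2
root = newton_raphson(lambda x: x**3 - 2.0 * x - 5.0, 2.0)
print(root)   # approximately 2.0945514815
```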

There are several reasons that Newton-Raphson minimization is rarely used in macromolecular studies. First, the highly nonquadratic macromolecular energy surface, which is characterized by a multitude of local minima, is unsuitable for the Newton-Raphson method. In such cases it is inefficient, at times even pathological, in behavior. It is, however, sometimes used to complete the minimization of a structure that was already minimized by another method. In such cases it is assumed that the starting point is close enough to the real minimum to justify the quadratic approximation. Second, the need to recalculate the Hessian matrix at every iteration makes this algorithm computationally expensive. Third, it is necessary to invert the second derivative matrix at every step, a difficult task for large systems. [Pg.81]

Equation 13-39 is a cubic equation in terms of the larger aspect ratio R2. It can be solved by a numerical method, using the Newton-Raphson method (Appendix D) with a suitable guess value for R2. Alternatively, a trigonometric solution may be used. The algorithm for computing R2 with the trigonometric solution is as follows ... [Pg.1054]
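
A hedged sketch of the numerical route: Newton-Raphson with an analytic derivative applied to a cubic, starting from a suitable guess. The coefficients and guess below are made up for illustration; they are not the coefficients of Equation 13-39.

```python
def solve_cubic_newton(a, b, c, d, guess, tol=1e-12, max_iter=100):
    """Find a real root of a*R**3 + b*R**2 + c*R + d = 0 by Newton-Raphson."""
    R = guess
    for _ in range(max_iter):
        f  = ((a * R + b) * R + c) * R + d      # Horner evaluation of the cubic
        df = (3.0 * a * R + 2.0 * b) * R + c    # analytic derivative
        dR = -f / df
        R += dR
        if abs(dR) < tol:
            return R
    raise RuntimeError("did not converge; try a different initial guess")

# Illustrative coefficients and starting guess only
print(solve_cubic_newton(1.0, -6.0, 11.0, -6.0, guess=3.5))   # roots are 1, 2, 3
```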

Unlike the other alternative methods, analytical expressions of partial derivatives are required and the Jacobian must be evaluated in the Newton-Raphson method. These requirements sometimes prove to be its undoing when the method is applied to complicated equations. Brown (B12) has developed a modification to the Newton-Raphson method, which requires only some of the partial derivatives to be calculated. We have tested Brown's method on our sample problems but have found that it actually required more computing time than the unmodified Newton-Raphson method. [Pg.152]
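
To illustrate why analytical partial derivatives must be coded, here is a small sketch of a two-equation Newton-Raphson solver with a hand-written Jacobian. The example system is arbitrary; it is not one of the sample problems referred to above.

```python
import numpy as np

def residual(v):
    """Example system: x**2 + y**2 - 4 = 0 and x*y - 1 = 0."""
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def jacobian(v):
    """Analytic partial derivatives of the residuals."""
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [y,       x      ]])

v = np.array([2.0, 0.5])                 # initial guess
for _ in range(20):
    dv = np.linalg.solve(jacobian(v), -residual(v))
    v += dv
    if np.linalg.norm(dv) < 1e-12:
        break

print(v)   # an intersection point of the circle and the hyperbola
```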

Finally, for formulation D the flows in the tree branches can be computed sequentially assuming zero chord flows. This initialization procedure was used by Epp and Fowler (E2) who claimed that it led to fast convergence using the Newton-Raphson Method. [Pg.157]

If the Newton-Raphson method is used to solve Eq. (1), the Jacobian matrix (∂f/∂x)_u is already available. The computation of the sensitivity matrix amounts to solving the same Eq. (59) with m different right-hand-side vectors, which form the columns of −(∂f/∂u)_x. Notice that only the partial derivatives with respect to those external variables subject to actual changes in values need be included in the m right-hand sides. [Pg.174]
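
A sketch of the point being made, assuming the Jacobian is held as a dense NumPy array: factor it once and reuse the factorization for all m right-hand-side columns. The sizes n and m and the random entries below are purely illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, m = 6, 3                          # illustrative sizes, not from the text

J = rng.normal(size=(n, n))          # Jacobian (df/dx), already on hand from the NR solve
B = rng.normal(size=(n, m))          # m right-hand-side columns, i.e. -(df/du) for the
                                     # external variables that actually change

lu, piv = lu_factor(J)               # factor the Jacobian once
S = lu_solve((lu, piv), B)           # solve for all m columns of the sensitivity matrix

print(S.shape)                       # (6, 3)
```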

The computational procedure can now be explained with reference to Fig. 19. Starting from points P1 and P2, Eqs. (134) and (135) hold true along the C+ characteristic curve and Eqs. (136) and (137) hold true along the C− characteristic curve. At the intersection P3 both sets of equations apply, and hence they may be solved simultaneously to yield p and W for the new point. To determine the conditions at the boundary, Eq. (135) is applied with the downstream boundary condition, and Eq. (137) is applied with the upstream boundary condition. It goes without saying that in the numerical procedure Eqs. (135) and (137) will be replaced by finite difference equations. The Newton-Raphson method is recommended by Streeter and Wylie (S6) for solving the nonlinear simultaneous equations. In the specified-time-... [Pg.194]

This equation must be solved for y_{n+1}. The Newton-Raphson method can be used, and if convergence is not achieved within a few iterations, the time step can be reduced and the step repeated. In actuality, the higher-order backward-difference Gear methods are used in DASSL [Ascher, U. M., and L. R. Petzold, Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, SIAM, Philadelphia (1998) and Brenan, K. E., S. L. Campbell, and L. R. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, North Holland Elsevier (1989)]. [Pg.50]
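
A minimal sketch of the idea, using a first-order backward (implicit) Euler step for a single stiff ODE rather than the higher-order Gear formulas used in DASSL; the test equation, step size, and iteration limits are illustrative only.

```python
import math

def backward_euler_step(f, dfdy, y_n, t_n, dt, tol=1e-12, max_newton=20):
    """Solve y_{n+1} = y_n + dt * f(t_{n+1}, y_{n+1}) for y_{n+1} by Newton-Raphson."""
    t_next = t_n + dt
    y = y_n                                   # initial guess: previous value
    for _ in range(max_newton):
        residual = y - y_n - dt * f(t_next, y)
        jac = 1.0 - dt * dfdy(t_next, y)      # d(residual)/dy
        dy = -residual / jac
        y += dy
        if abs(dy) < tol:
            return y
    raise RuntimeError("Newton iteration failed; reduce the time step and retry")

# Example: stiff test equation y' = -50 * (y - cos(t))
f    = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0

y = 1.0
for step in range(10):
    y = backward_euler_step(f, dfdy, y, 0.1 * step, 0.1)
print(y)
```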

A more efficient way of solving the DFT equations is via a Newton-Raphson (NR) procedure, as outlined here for a fluid between two surfaces. In this case one starts with an initial guess for the density profile. The self-consistent fields are then calculated, and the next guess for the density profile is obtained through a single-chain simulation. The difference from the Picard iteration method is that an NR procedure is used to estimate the new guess for the density profile from the old one and the one monitored in the single-chain simulation. This requires the computation of a Jacobian matrix in the course of the simulation, as described below. [Pg.126]

In this section we consider how Newton-Raphson iteration can be applied to solve the governing equations listed in Section 4.1. There are three steps to setting up the iteration: (1) reducing the complexity of the problem by reserving the equations that can be solved linearly, (2) computing the residuals, and (3) calculating the Jacobian matrix. Because reserving the equations with linear solutions reduces the number of basis entries carried in the iteration, the solution technique described here is known as the "reduced basis method." [Pg.60]

Fig. 4.4. Comparison of the computing effort, expressed in thousands of floating point operations (kflop), required to factor the Jacobian matrix for a 20-component system (Nc = 20) during a Newton-Raphson iteration. For a technique that carries a nonlinear variable for each chemical component and each mineral in the system (top line), the computing effort increases as the number of minerals increases. For the reduced basis method (bottom line), however, less computing effort is required as the number of minerals increases.
Such a scheme is sometimes called a "soft" Newton-Raphson formulation because the partial derivatives in the Jacobian matrix are incomplete. We could, in principle, use a "hard" formulation in which the Jacobian accounts for the derivatives ∂γ/∂m_i and ∂a_w/∂m_i. The hard formulation sometimes converges in fewer iterations, but in tests, the advantage was more than offset by the extra effort in computing the Jacobian. The soft method also allows us to keep the method for calculating activity coefficients (see Chapter 8) separate from the Newton-Raphson formulation, which simplifies programming. [Pg.66]

Although several examples implementing the Newton-Raphson method for the computation of chemical equilibrium are developed in Chapter 6, we will now present some simple applications that illustrate its basic principles. [Pg.143]
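
As a foretaste of those applications, here is a hedged sketch for a single hypothetical dissociation A ⇌ B + C with equilibrium constant K, solving for the equilibrium extent of reaction ξ by Newton-Raphson. The reaction, numbers, and function names are illustrative; they are not taken from Chapter 6.

```python
def equilibrium_extent(K, a0, xi0=0.1, tol=1e-12, max_iter=50):
    """Solve f(xi) = xi**2 / (a0 - xi) - K = 0 for the extent of reaction xi.

    A (initial amount a0) dissociates to B + C in a fixed volume, so at
    equilibrium [B] = [C] = xi and [A] = a0 - xi (units absorbed into K).
    """
    xi = xi0
    for _ in range(max_iter):
        f  = xi**2 / (a0 - xi) - K
        df = (2.0 * xi * (a0 - xi) + xi**2) / (a0 - xi)**2   # analytic derivative
        dxi = -f / df
        xi += dxi
        if abs(dxi) < tol:
            break
    return xi

print(equilibrium_extent(K=0.05, a0=1.0))   # approximately 0.2
```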

The study described above for the water-gas shift reaction employed computational methods that could be used for other synthesis gas operations. The critical point calculation procedure of Heidemann and Khalil (14) proved to be adaptable to the mixtures involved. In the case of one reaction, it was possible to find conditions under which a critical mixture was at chemical reaction equilibrium by using a one-dimensional Newton-Raphson procedure along the critical line defined by varying reaction extents. In the case of more than one independent chemical reaction, a Newton-Raphson procedure in the several reaction extents would be a candidate as an approach to satisfying the several equilibrium constant equations (25). [Pg.391]

As stated, the most commonly used procedure for temperature and composition calculations is the versatile computer program of Gordon and McBride [4], who use the minimization of the Gibbs free energy technique and a descent Newton-Raphson method to solve the equations iteratively. A similar method for solving the equations when equilibrium constants are used is shown in Ref. [7]. [Pg.22]

These M + 1 equations in M + 1 unknowns p and X_m may be solved by the Newton-Raphson method, in which the unknowns are iteratively adjusted until the right and left sides of the equations agree. The object spectrum number-count set h_m and noise e_m are then computed by substitution of p and the X_m into Eqs. (31) and (32). [Pg.117]

There are a few points with respect to this procedure that merit discussion. First, there is the Hessian matrix. With n² elements, where n is the number of coordinates in the molecular geometry vector, it can grow somewhat expensive to construct this matrix at every step even for functions, like those used in most force fields, that have fairly simple analytical expressions for their second derivatives. Moreover, the matrix must be inverted at every step, and matrix inversion formally scales as n³, where n is the dimensionality of the matrix. Thus, for purposes of efficiency (or in cases where analytic second derivatives are simply not available) approximate Hessian matrices are often used in the optimization process - after all, the truncation of the Taylor expansion renders the Newton-Raphson method intrinsically approximate. As an optimization progresses, second derivatives can be estimated reasonably well from finite differences in the analytic first derivatives over the last few steps. For the first step, however, this is not an option, and one typically either accepts the cost of computing an initial Hessian analytically for the level of theory in use, or one employs a Hessian obtained at a less expensive level of theory, when such levels are available (which is typically not the case for force fields). To speed up slowly convergent optimizations, it is often helpful to compute an analytic Hessian every few steps and replace the approximate one in use up to that point. For really tricky cases (e.g., where the PES is fairly flat in many directions) one is occasionally forced to compute an analytic Hessian for every step. [Pg.45]
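
A sketch of the finite-difference idea mentioned above: Hessian columns estimated by differencing analytic gradients evaluated at displaced geometries (here along coordinate axes for simplicity, rather than along the last few optimization steps). The gradient function and displacement h are illustrative assumptions.

```python
import numpy as np

def approximate_hessian(grad, x, h=1e-5):
    """Estimate the Hessian of an energy function from its analytic gradient.

    Each column j is (grad(x + h*e_j) - grad(x)) / h; the result is symmetrized,
    since a finite-difference Hessian is not exactly symmetric.
    """
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        H[:, j] = (grad(xp) - g0) / h
    return 0.5 * (H + H.T)

# Example: gradient of f(x) = x0**2 + 3*x0*x1 + 2*x1**2
grad = lambda x: np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0] + 4.0 * x[1]])
print(approximate_hessian(grad, np.array([1.0, -1.0])))
# close to [[2, 3], [3, 4]]
```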

Equations (9.30) and (9.31) are solved simultaneously for h and h_s with the aid of the Newton-Raphson method as used in the computer program; the integrands are evaluated and the integrations are completed with Eq. (9.35). [Pg.281]

SC (simultaneous correction) method. The MESH equations are reduced to a set of N(2C + 1) nonlinear equations in the mass flow rates of liquid components l_ij and vapor components v_ij and the temperatures T_j. The enthalpies and equilibrium constants K_ij are determined by the primary variables l_ij, v_ij, and T_j. The nonlinear equations are solved by the Newton-Raphson method. A convergence criterion is made up of deviations from material, equilibrium, and enthalpy balances simultaneously, and corrections for the next iterations are made automatically. The method is applicable to distillation, absorption and stripping in single and multiple columns. The calculation flowsketch is in Figure 13.19. A brief description of the method also will be given. The availability of computer programs in the open literature was cited earlier in this section. [Pg.408]

Computational aspects of the Newton-Raphson procedure: When constructing methods to solve the system of linear equations (4:22) one should be aware of the dimension of the problem. It is not unusual to have CI expansions comprising 10⁴ - 10⁶ terms, and orbital spaces with more than two hundred orbitals. In such calculations it is obviously not possible to explicitly construct the Hessian matrix. Instead we must look for iterative algorithms... [Pg.214]
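
One generic illustration of such an iterative strategy is to solve the Newton-Raphson linear system with conjugate gradients, using only Hessian-vector products so the Hessian is never stored. This is a hedged sketch of the general idea, not the specific algorithm used in MCSCF codes; the diagonal "Hessian" operator below is a stand-in.

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    """Solve A x = b using only matrix-vector products A*v (A never stored)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy symmetric positive-definite operator standing in for a Hessian-vector product
n = 1000
diag = np.linspace(1.0, 10.0, n)
matvec = lambda v: diag * v
g = np.ones(n)                            # "gradient"
step = conjugate_gradient(matvec, -g)     # Newton-like step without forming the Hessian
print(np.allclose(diag * step, -g))       # True
```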

Computationally the super-CI method is more complicated to work with than the Newton-Raphson approach. The major reason is that the matrix d is more complicated than the Hessian matrix c. Some of the matrix elements of d will contain up to fourth-order density matrix elements for a general MCSCF wave function. In the CASSCF case only third-order terms remain, since rotations between the active orbitals can be excluded. Besides, if an unfolded procedure is used, where the CI problem is solved to convergence in each iteration, the highest-order terms cancel out. In this case up to third-order density matrix elements will be present in the matrix elements of d in the general case. Thus super-CI does not represent any simplification compared to the Newton-Raphson method. [Pg.227]

