Big Chemical Encyclopedia


First-order methods

These systems are solved by a step-limited Newton-Raphson iteration, which, because of its second-order convergence characteristic, avoids the problem of "creeping" often encountered with first-order methods (Law and Bailey, 1967) ... [Pg.116]
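The contrast between quadratic (Newton-Raphson) and linear first-order convergence can be shown with a small sketch. The test function, step limit, and relaxation factor below are illustrative choices, not taken from the cited work.

```python
# Contrast step-limited Newton-Raphson (quadratic convergence) with a
# first-order fixed-step relaxation that "creeps" toward the root of
# f(x) = x**3 - 2.  Hypothetical example problem.

def newton(f, df, x, tol=1e-12, max_iter=50):
    """Step-limited Newton-Raphson: cap each step to avoid overshoot."""
    for i in range(max_iter):
        step = f(x) / df(x)
        step = max(-1.0, min(1.0, step))   # step limiting
        x -= step
        if abs(f(x)) < tol:
            return x, i + 1
    return x, max_iter

def first_order(f, x, alpha=0.05, tol=1e-12, max_iter=5000):
    """First-order relaxation x <- x - alpha*f(x): linear convergence."""
    for i in range(max_iter):
        x -= alpha * f(x)
        if abs(f(x)) < tol:
            return x, i + 1
    return x, max_iter

f = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2

root_n, iters_n = newton(f, df, 1.0)
root_f, iters_f = first_order(f, 1.0)
print(iters_n, iters_f)   # Newton needs far fewer iterations
```

Both iterations reach the same root, but the first-order scheme takes roughly two orders of magnitude more steps, which is the "creeping" behavior the excerpt refers to.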

Alternative methods for nonhydrocarbon organics include the first-order method of Lydersen, with an average error of 9 K, although the method of Ambrose is considerably better for alcohols and ketones. [Pg.384]

Implicit Methods By using different interpolation formulas involving y, it is possible to derive implicit integration methods. Implicit methods result in a nonlinear equation to be solved for y so that iterative methods must be used. The backward Euler method is a first-order method. [Pg.473]

Discretization error depends on the step size, i.e., if Δx → 0, the algorithm would theoretically be exact. The error for the Euler method at step N is O(N(Δx)²) and the total accumulated error is O(Δx); that is, it is a first-order method. [Pg.84]
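This first-order behavior is easy to verify numerically: halving the step size should roughly halve the global error. A minimal check on the illustrative problem y' = y (not from the text):

```python
import math

# Verify that explicit Euler is first order: the global error at t = 1
# for y' = y, y(0) = 1 (exact solution e) roughly halves when the step
# size is halved.

def euler(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: y
err = lambda n: abs(euler(f, 1.0, 0.0, 1.0, n) - math.e)
ratio = err(100) / err(200)
print(ratio)    # close to 2 for a first-order method
```

For a second-order method the same ratio would be close to 4, i.e. close to 2^p for a method of order p.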

Rule 7 Spin systems that contain groups of chemically equivalent protons that are not magnetically equivalent cannot be analysed by first-order methods. [Pg.56]

Theoretical Number of Floating-Point Operations per Iteration (FLOPI), Maximum Number of Major Iterations, and Memory Usage for the Parallel Primal-Dual Interior-Point Method (pPDIPM) and for the First-Order Method (RRSDP) Applied to Primal and Dual SDP Formulations. [Pg.116]

From the table, we can see that the first-order method usually requires fewer floating-point operations and less memory storage than the primal-dual interior-point method. The main drawback of the former method is that convergence within a given time frame cannot be guaranteed. [Pg.117]

We shall in this chapter discuss the methods employed for the optimization of the variational parameters of the MCSCF wave function. Many different methods have been used for this optimization. They are usually divided into two different classes, depending on the rate of convergence: first- or second-order methods. First-order methods are based solely on the calculation of the energy and its first derivative (in one form or another) with respect to the variational parameters. Second-order methods are based upon an expansion of the energy to second order (first and second derivatives). Third- or even higher-order methods can be obtained by including more terms in the expansion, but they have been of rather small practical importance. [Pg.209]

Derive the detailed expression for the orbital Hessian for the special case of a closed-shell single-determinant wave function. Compare with equation (4.53) to check the result. The equation can be used to construct a second-order optimization scheme in Hartree-Fock theory. What are the advantages and disadvantages of such a scheme compared to the conventional first-order methods ... [Pg.231]

The kinetics of the addition of aniline (PhNH2) to ethyl propiolate (HC≡CCO2Et) in DMSO as solvent has been studied by spectrophotometry at 399 nm using the variable time method. The initial rate method was employed to determine the order of the reaction with respect to the reactants, and a pseudo-first-order method was used to calculate the rate constant. The Arrhenius equation log k = 6.07 - (12.96/2.303RT) was obtained; the activation parameters Ea, ΔH, ΔG, and ΔS at 300 K were found to be 12.96, 13.55, 23.31 kcal mol-1 and -32.76 cal mol-1 K-1, respectively. The results revealed a first-order reaction with respect to both aniline and ethyl propiolate. In addition, by combining the experimental results with calculations using density functional theory (DFT) at the B3LYP/6-31G level, a mechanism for this reaction was proposed. [Pg.352]
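The reported Arrhenius expression can be evaluated directly. Assuming R = 1.987×10⁻³ kcal mol⁻¹ K⁻¹ (consistent with Ea being given in kcal mol⁻¹), a quick numerical check gives the rate constant at 300 K:

```python
# Evaluate the reported Arrhenius expression
#   log k = 6.07 - Ea/(2.303*R*T)
# with Ea = 12.96 kcal/mol (value from the excerpt) and the assumed
# gas constant R = 1.987e-3 kcal mol^-1 K^-1.

R = 1.987e-3        # kcal mol^-1 K^-1 (assumed units)
Ea = 12.96          # kcal mol^-1
T = 300.0           # K

log_k = 6.07 - Ea / (2.303 * R * T)
k = 10.0 ** log_k
print(log_k, k)     # pseudo-first-order rate constant at 300 K
```

This gives log k ≈ -3.4, i.e. a rate constant on the order of 10⁻⁴ in the units implied by the pre-exponential factor.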

The direction given by -H(θs)⁻¹∇U(θs) is a descent direction only when the Hessian matrix is positive definite. For this reason, the Newton-Raphson algorithm is less robust than the steepest descent method; hence, it does not guarantee convergence toward a local minimum. On the other hand, when the Hessian matrix is positive definite, and in particular in a neighborhood of the minimum, the algorithm converges much faster than the first-order methods. [Pg.52]
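A one-dimensional sketch makes the point concrete (the function f(x) = x⁴ - x² is an illustrative choice, not from the text): near a point where the second derivative is negative, the Newton direction points uphill, while the steepest-descent direction always points downhill.

```python
# For f(x) = x**4 - x**2 the second derivative f''(x) = 12x**2 - 2 is
# negative for |x| < 1/sqrt(6).  There the Newton direction
# -f''(x)^-1 * f'(x) is an ascent direction (it heads toward the local
# maximum at x = 0), while -f'(x) remains a descent direction.

def fp(x):  return 4.0 * x**3 - 2.0 * x    # f'(x)
def fpp(x): return 12.0 * x**2 - 2.0       # f''(x)

x = 0.2                                    # f''(x) < 0 here
newton_dir = -fp(x) / fpp(x)
descent_dir = -fp(x)

# directional derivative f'(x)*d < 0 means d is a descent direction
print(fp(x) * newton_dir, fp(x) * descent_dir)
```

This is exactly why Newton-type optimizers check (or modify) the Hessian before taking the step; with a positive-definite Hessian the Newton direction is also a descent direction.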

For more complex models or for input distributions for which exact analytical methods are not applicable, approximate methods might be appropriate. Many approximation methods are based on Taylor series expansion solutions, in which the series is truncated depending on the desired amount of solution accuracy and whether one wishes to consider covariance among the input distributions (Hahn & Shapiro, 1967). These methods often go by names such as "generation of system moments", "statistical error propagation", "delta method" and "first-order methods", as discussed by Cullen & Frey (1999). [Pg.54]
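A minimal sketch of the first-order (delta-method) approximation described above: for Y = f(X) with X ~ Normal(mu, sigma), truncating the Taylor series at first order gives Var(Y) ≈ f'(mu)² σ². The choice f(x) = exp(x) and the parameter values are illustrative assumptions.

```python
import math
import random

# First-order (delta-method) error propagation for Y = exp(X),
# X ~ Normal(mu, sigma):  sd(Y) ≈ |f'(mu)| * sigma = exp(mu) * sigma.
# Compare against a Monte Carlo estimate.

mu, sigma = 1.0, 0.05
approx_sd = math.exp(mu) * sigma            # first-order propagated sd

random.seed(0)
samples = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
mc_sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(approx_sd, mc_sd)                     # agree closely for small sigma
```

The agreement degrades as sigma grows, which is precisely the truncation issue the passage mentions: higher-order terms of the expansion then matter.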

He showed that positive dX give d that reduce Gibbs free energy. This method is analogous to that of steepest descent, a first-order method for minimization of Gibbs free energy. Ma and Shipman (11) used Naphtali's method to estimate compositions at equilibrium and the Newton-Raphson method to achieve convergence. [Pg.121]

Extrapolation is an old technique invented by Richardson in 1927 [469]. Generally it makes use of known error orders to increase accuracy. In the present context, its application is based on the first-order method BI, mentioned above. One defines a notation in terms of operations L on the variable y(t), the operation being that of taking a step forward in time. Thus, the notation L y(t) or, in terms of discrete time steps where one whole interval is δt, L1 yn, means a single step of one interval (the 1 being indicated by the subscript on L). The simplest variant is then the application of operation L1, followed by two operations L1/2, that is, two consecutive steps of half δt (again starting the first from yn), and finally a linear combination of the two results ... [Pg.61]
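A sketch of this extrapolation scheme on an illustrative problem (y' = -y, not from the text): for a first-order method, the combination 2·y_half - y_full cancels the leading O(δt) error term, leaving a second-order result. BI is taken here to be the backward (implicit) Euler step, which for this linear problem has the closed form y/(1 + h).

```python
import math

def bi(y0, t_total, n):
    """n backward-Euler (BI) steps for y' = -y (closed-form solve)."""
    h = t_total / n
    y = y0
    for _ in range(n):
        y = y / (1.0 + h)
    return y

def bi_extrap(y0, t_total, n):
    """Richardson extrapolation: per interval, combine one full step
    (L1) with two half steps (L1/2 twice) as 2*y_half - y_full."""
    h = t_total / n
    y = y0
    for _ in range(n):
        y_full = y / (1.0 + h)                 # one step of h
        y_half = y / (1.0 + h / 2.0) ** 2      # two steps of h/2
        y = 2.0 * y_half - y_full              # cancels the O(h) term
    return y

exact = math.exp(-1.0)
e1 = abs(bi(1.0, 1.0, 50) - exact)
e2 = abs(bi_extrap(1.0, 1.0, 50) - exact)
print(e1, e2)    # extrapolated result is far more accurate
```

The cost is roughly three BI steps per interval instead of one, in exchange for raising the order from one to two.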

The number p is of great interest, more so than the constants, which are generally unknown and are usually unimportant (except in rare cases) when deciding on a given method. This is because high-order accuracy means that if we decrease h, we dramatically improve the accuracy. Conversely, this is not the case for a small p. So, first-order methods such as EX or BI mean that we must decrease the intervals greatly in order to achieve some... [Pg.263]

The NONMEM program implements two alternative estimation methods, the first-order conditional estimation and the Laplacian methods. The first-order conditional estimation (FOCE) method uses a first-order expansion about conditional estimates (empirical Bayes estimates) of interindividual random effects, rather than about zero. In this respect, it is like the conditional first-order method of Lindstrom and Bates. Unlike the latter, which is iterative, a single objective function is minimized, achieving a similar effect as with iteration. The Laplacian method uses second-order expansions about the conditional estimates of the random effects. [Pg.2952]

