
Newton-type method

Gill, P.E. and W. Murray, "Newton-type Methods for Unconstrained and Linearly Constrained Optimization", Mathematical Programming, 7, 311-350 (1974). [Pg.395]

Two kinds of method can be used to solve this system: Newton-type methods, in which it is necessary to compute a Jacobian matrix, and direct iterative methods, in which it is necessary to compute only the functions f1, f2, ..., fn. [Pg.289]
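As a concrete illustration of the first kind, the sketch below (a minimal Python example of our own, not code from the cited source) implements a Newton-type iteration for a nonlinear system f(x) = 0. Every step forms and factors the Jacobian, which is exactly the extra work a direct iterative method avoids.

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton's method; jac(x) returns the Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        # Forming and factoring the Jacobian here is the cost that
        # distinguishes Newton-type methods from direct iterations,
        # which need only the function values f1, ..., fn.
        x = x + np.linalg.solve(jac(x), -fx)
    raise RuntimeError("Newton iteration did not converge")

# Illustrative test system: intersect a circle with a hyperbola.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])
jac = lambda x: np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])
root = newton_system(f, jac, x0=[2.0, 0.5])
```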

One of the principal advantages of conjugate gradient methods over the Newton-type methods described below (other than their simplicity of implementation) is their ability to handle problems with a large number of parameters, possibly in the thousands. However, this is not much of a consideration in problems of fitting data to kinetic rate expressions, where there are unlikely to be more than a dozen or so parameters to be estimated. [Pg.189]

Conjugate gradient methods are fast, though not as fast as Newton-type methods near the minimum. They can handle large numbers of parameters since their storage requirement (needing only to store gradients) is a small fraction of that for Newton-type methods (which need to store Hessian matrices or their approximations). These methods are also relatively simple to implement, but this is of little concern to most practitioners who are unlikely to write their own optimization code. [Pg.193]
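Consistent with the remark that most practitioners call a library rather than write their own optimizer, a minimal SciPy sketch (the Rosenbrock test function is our illustrative choice, not from the cited text) shows the storage advantage in practice: the CG method keeps only a few gradient-sized vectors, so the parameter count can comfortably reach the thousands.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

n = 1000                            # thousands of parameters are feasible
x0 = np.full(n, 1.1)
# method="CG" stores a few length-n vectors; a Newton-type method would
# need an n x n Hessian (or an approximation), i.e. a million entries here.
res = minimize(rosen, x0, jac=rosen_der, method="CG")
print(res.success, res.fun)
```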

Newton-type methods are very fast near the minimum. However, pure Newton methods can have serious convergence difficulties if the starting point is far from the minimum, so some modified form is nearly always required in practice. For least-squares problems a Levenberg-Marquardt method is attractive: it safeguards convergence by balancing steepest descent against Newton's method, and the special structure of least-squares problems allows an easy approximation to the Hessian matrix. [Pg.193]
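A minimal sketch of the idea (variable names and the damping-update rule are illustrative choices, not taken from the cited text): the Gauss-Newton matrix J^T J approximates the Hessian using only first derivatives of the residuals, and the damping term lam*I shifts the step toward steepest descent whenever a trial step fails to reduce the cost.

```python
import numpy as np

def levenberg_marquardt(residuals, jac, p0, lam=1e-3, tol=1e-10, max_iter=100):
    """Minimize 0.5*sum(residuals(p)**2); jac(p) is the residual Jacobian."""
    p = np.asarray(p0, dtype=float)
    cost = 0.5 * np.sum(residuals(p) ** 2)
    for _ in range(max_iter):
        r, J = residuals(p), jac(p)
        g = J.T @ r                           # gradient of the cost function
        if np.linalg.norm(g) < tol:
            break
        A = J.T @ J + lam * np.eye(p.size)    # damped Gauss-Newton matrix
        p_new = p + np.linalg.solve(A, -g)
        cost_new = 0.5 * np.sum(residuals(p_new) ** 2)
        if cost_new < cost:                   # accept: behave more like Newton
            p, cost, lam = p_new, cost_new, lam / 10.0
        else:                                 # reject: lean toward steepest descent
            lam *= 10.0
    return p
```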

Codes for block waveform relaxation methods and for block Jacobi-Newton type methods have been tested on sequential machines; at present these codes simulate the parallel case. Promising results have been achieved for loosely coupled subproblems. [Pg.69]

MA28 or MA48: based on LU decomposition. BDNLSOL: block decomposition nonlinear solver. SPARSE: Newton-type method without block decomposition. DASOLV: based on a variable time step/variable order backward differentiation formula. [Pg.371]

Newton-type methods using the energy function, as well as its first and second partial derivatives. [Pg.521]

The unknowns in this equation are the local coordinates of the foot (i.e., ξ and η). After insertion of the global coordinates of the foot found at step 6 in the left-hand side, and the global coordinates of the nodal points of a given element in the right-hand side of this equation, it is solved using the Newton-Raphson method. If the foot is actually inside the selected element, then for a quadrilateral element its local coordinates must both lie between -1 and +1 (a suitable criterion should be used for other types of elements). If the search is not successful, another element is selected and the procedure is repeated. [Pg.107]
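A sketch of this search for the standard 4-node (bilinear) quadrilateral: given the global coordinates of the foot and of the four element nodes, solve x(ξ, η) = x_foot by Newton-Raphson. The shape functions are the usual isoparametric ones; function and variable names are our own.

```python
import numpy as np

def local_coords(foot, nodes, tol=1e-10, max_iter=25):
    """nodes: (4, 2) array of corner coordinates, ordered to match the
    parent-element corners (-1,-1), (1,-1), (1,1), (-1,1)."""
    foot = np.asarray(foot, dtype=float)
    corners = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)
    xi = np.zeros(2)                      # start at the element centre
    for _ in range(max_iter):
        # Bilinear shape functions N_i = 0.25*(1 + xi_i*xi)*(1 + eta_i*eta)
        N = 0.25 * (1 + corners[:, 0] * xi[0]) * (1 + corners[:, 1] * xi[1])
        r = N @ nodes - foot              # residual in global coordinates
        if np.linalg.norm(r) < tol:
            break
        dN_dxi  = 0.25 * corners[:, 0] * (1 + corners[:, 1] * xi[1])
        dN_deta = 0.25 * corners[:, 1] * (1 + corners[:, 0] * xi[0])
        J = np.column_stack((dN_dxi @ nodes, dN_deta @ nodes))  # 2x2 Jacobian
        xi = xi + np.linalg.solve(J, -r)  # Newton-Raphson update
    inside = bool(np.all(np.abs(xi) <= 1.0 + 1e-12))  # the -1..+1 criterion
    return xi, inside
```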

Newton-type algorithms have been applied to process optimization over the past decade and have been well studied. In this section we provide a concise development of these methods, as well as their extension to handle large-scale problems. First, however, we consider the following, rather general, optimization problem ... [Pg.199]

Based on the analytical expression for the derivative of det[V(p)], Bates and Watts (ref. 30) recently proposed a Gauss-Newton type procedure for minimizing the objective function (3.66). We use here, however, the simplex method of Nelder and Mead (module M34), which is certainly less efficient but does not require further programming. The determinant is evaluated by the module M14. After 95 iterations we obtain the results shown in the second row of Table 3.5, in good agreement with the estimates of Box et al. (ref. 29). [Pg.187]
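The trade-off described here, no derivatives and no extra programming at the price of efficiency, is easy to see with a library call. The objective below is a hypothetical stand-in for det[V(p)], not the criterion from the text:

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Placeholder for det[V(p)]; replace with the real determinant criterion.
    return (p[0] - 1.0) ** 2 + 10.0 * (p[1] - 2.0) ** 2 + 0.5

p0 = np.array([0.0, 0.0])
# Nelder-Mead needs only objective values: no gradients, no Jacobian code,
# but typically many more iterations than a Gauss-Newton type procedure.
res = minimize(objective, p0, method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
print(res.x, res.nit)
```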

These partial derivatives provide a lot of information (ref. 10). They show how parameter perturbations (e.g., uncertainties in parameter values) affect the solution. By identifying the unimportant parameters, the analysis may also help to simplify the model. Sensitivities are also needed by efficient parameter estimation procedures of the Gauss-Newton type. Since the solution y(t,p) is rarely available in analytic form, calculation of the coefficients Sj(t,p) is not easy. The simplest method is to perturb the parameter pj, solve the differential equation with the modified parameter set, and estimate the partial derivatives by divided differences. This "brute force" approach is not only time consuming (one has to solve np+1 sets of ny differential equations), but may also be rather unreliable due to roundoff errors. A much better approach is solving the sensitivity equations... [Pg.279]
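A sketch of the brute-force variant for a hypothetical one-state model (the model, names, and step-size rule are our own): each parameter costs one extra integration, and the divided difference is sensitive to the perturbation size, which is the unreliability noted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, p):
    return [-p[0] * y[0] + p[1]]      # dy/dt = -p1*y + p2 (illustrative only)

def brute_force_sensitivities(p, y0, t_eval, rel_step=1e-6):
    """Estimate Sj(t,p) ~ dy/dpj by forward divided differences."""
    span = (t_eval[0], t_eval[-1])
    base = solve_ivp(model, span, y0, t_eval=t_eval, args=(p,),
                     rtol=1e-10, atol=1e-12).y
    S = []
    for j in range(len(p)):           # np extra integrations beyond the base run
        p_pert = np.array(p, dtype=float)
        h = rel_step * max(abs(p_pert[j]), 1.0)
        p_pert[j] += h
        pert = solve_ivp(model, span, y0, t_eval=t_eval, args=(p_pert,),
                         rtol=1e-10, atol=1e-12).y
        S.append((pert - base) / h)   # divided-difference sensitivity
    return base, S

y, S = brute_force_sensitivities(p=[0.5, 1.0], y0=[0.0],
                                 t_eval=np.linspace(0.0, 5.0, 11))
```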

In this section we deal with estimating the parameters p in the dynamical model of the form (5.37). As we noted, the methods of Chapter 3 apply directly to this problem only if the solution of the differential equation is available in analytical form. Otherwise one can follow the same algorithms, but solve the differential equations numerically whenever the computed responses are needed. The partial derivatives required by the Gauss-Newton type algorithms can be obtained by solving the sensitivity equations. While this indirect method is... [Pg.286]

x20 = x30 = 0 assumed to be known exactly. The only observed variable is y = x1. Jennrich and Bright (ref. 31) used the indirect approach to parameter estimation and solved the equations (5.72) numerically in each iteration of a Gauss-Newton type procedure, exploiting the linearity of (5.72) only in the sensitivity calculation. They used relative weighting. Although a similar procedure is too time consuming on most personal computers, this does not mean that we are unable to solve the problem. In fact, linear differential equations can be solved by analytical methods, and the solutions of the most important linear compartmental models are listed in pharmacokinetics textbooks (see, e.g., ref. 33). For the three-compartment model of Fig. 5.7 the solution is of the form... [Pg.314]

The solution of (4.88), using Newton's method with four variables, is illustrated in Fig. 4.18 for various concentrations cL of passing bonds. At low concentrations we recover the abovementioned gap in the density of states, with a more complex structure due to the presence of two types of bond; at high concentrations the spectrum becomes smoothed and approaches that of a perfect crystal. [Pg.226]

However, in a quantum chemical context there is often one overwhelming difficulty that is common to both Newton-like and variable-metric methods: the difficulty of storing the Hessian or an approximation to its inverse. This problem is not so acute if one is using such a method to optimize orbital exponents or internuclear distances, but in optimizing linear coefficients in LCAO type calculations it can soon become impossible. In modern calculations a basis of, say, fifty AOs used to construct ten occupied molecular spin-orbitals would be considered a modest size, and that would, even in a closed-shell case, give a Hessian of side 500. In a Newton-like method the problem of inverting a matrix of such a size is a considerable... [Pg.57]
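This storage bottleneck is the motivation for limited-memory quasi-Newton methods (a remedy not discussed in this excerpt), which keep only a handful of gradient/step vector pairs instead of an n x n matrix. A minimal SciPy sketch, with the Rosenbrock test function standing in for the real objective:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

n = 500                                 # matches the "Hessian of side 500" above
x0 = np.full(n, 1.2)
# L-BFGS-B keeps maxcor gradient/step pairs (here 10 vectors of length n)
# rather than a 500 x 500 Hessian approximation and its inverse.
res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
               options={"maxcor": 10})
print(res.success, res.nit)
```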

The Newton-Raphson method requires differentiation of all data points with respect to the parameters. For fixed-geometry properties (like energy derivatives), the force field derivatives can be obtained analytically (32). For other types of properties, an approximate analytical solution can be obtained by assuming that the shift in geometry upon parameter change is small (45). However, the most general and safest method is to obtain the derivatives numerically (15). The drawback is that this method is substantially slower than calculating analytical derivatives. [Pg.25]
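A sketch of that numerical route (property_fn is a hypothetical callable and the step-size rule is our own choice) shows where the cost comes from: each parameter needs two extra evaluations of the computed property.

```python
import numpy as np

def numerical_derivatives(property_fn, params, rel_step=1e-5):
    """Central-difference derivatives of a computed property with respect
    to each parameter; 2*np extra evaluations in total."""
    params = np.asarray(params, dtype=float)
    derivs = np.empty_like(params)
    for j in range(params.size):
        h = rel_step * max(abs(params[j]), 1.0)
        up, down = params.copy(), params.copy()
        up[j] += h
        down[j] -= h
        derivs[j] = (property_fn(up) - property_fn(down)) / (2.0 * h)
    return derivs
```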

This method is referred to as Newton's method and has reliable convergence properties for the types of objective functions typically found in pharmacokinetics. For nonquadratic objective functions, however, convergence will sometimes not be achieved. To this end, Eq. (3.28) is modified to include a step length... [Pg.100]
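Eq. (3.28) itself is not reproduced in this excerpt, but the standard form of such a step-length (damped Newton) modification is

```latex
% Damped Newton update with step length \lambda_k; (3.28) refers to the
% source's numbering and is not reproduced here.
x_{k+1} = x_k - \lambda_k \, \mathbf{H}(x_k)^{-1} \, \nabla f(x_k),
\qquad 0 < \lambda_k \le 1 ,
```

where the step length is chosen (e.g., by a line search or simple step halving) so that the objective function decreases at every iteration; lambda_k = 1 recovers the pure Newton step.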

It has been well established that Newton-based methods are the most efficient type for minimization problems [9,11,12,25,72]. The starting point for these algorithms is to approximate the PES by a Taylor series expansion about the current point, x0. Truncating the expansion at second order gives... [Pg.203]
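The truncated expansion referred to here is the standard second-order Taylor model; writing g and H for the gradient and Hessian of the energy E evaluated at x0,

```latex
E(\mathbf{x}) \approx E(\mathbf{x}_0)
  + \mathbf{g}^{T}(\mathbf{x}-\mathbf{x}_0)
  + \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_0)^{T}\mathbf{H}\,(\mathbf{x}-\mathbf{x}_0).
```

Setting the gradient of this quadratic model to zero gives the Newton step x = x0 - H^{-1} g, which locates the minimum in a single iteration when the surface is truly quadratic.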

