
Newton-type

Newton-type. Finally, we come to those algorithms which depend on a knowledge of A and A⁻¹ (the Newton-type algorithms). If we are dealing with quadratic functions, then once we know A⁻¹ it follows immediately from equation (22) that we can reach the minimum in just one step, so that we need not trouble about directions of descent. However, if the function is not quadratic, then the problem of optimal directions again becomes ... [Pg.46]
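To make the one-step property concrete, here is a minimal numpy sketch (the matrix A, vector b, and starting point are illustrative, not taken from the source): for a quadratic f(x) = ½xᵀAx + bᵀx, the single step x − A⁻¹∇f(x) lands exactly on the minimizer.

```python
import numpy as np

# Quadratic model f(x) = 1/2 x^T A x + b^T x with A symmetric positive definite.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([-1.0, -2.0])

def grad(x):
    return A @ x + b                         # gradient of the quadratic

x0 = np.array([10.0, -7.0])                  # arbitrary starting point
x1 = x0 - np.linalg.solve(A, grad(x0))       # one Newton step: x - A^{-1} grad f(x)

print(np.allclose(grad(x1), 0.0))            # True: the minimum is reached in one step
```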

It is clear that if the method does descend into a quadratic region, then β may be chosen as zero in step (ii) and an exact minimum will be found at step (iv). The incorporation of step (iv) ensures that the algorithm is stable (if a minimum exists along p) so that, barring accidents, we can be sure of eventually entering a quadratic region of the function. [Pg.47]
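The stabilized descent described here can be sketched generically as a damped Newton iteration: compute the Newton direction p, then search along p until a sufficient decrease is obtained. This is a hedged illustration of the general idea, not a reproduction of steps (ii) and (iv) of the algorithm referenced above.

```python
import numpy as np

def damped_newton(f, grad, hess, x, tol=1e-8, max_iter=100):
    """Generic damped Newton iteration: a backtracking search along the
    Newton direction p keeps the method stable far from the minimum."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)      # Newton direction
        if g @ p >= 0:                        # safeguard if the Hessian is not PD
            p = -g
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x
```

In a quadratic region the full step t = 1 is accepted and the exact minimum is reached, matching the behaviour described in the excerpt.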

There are many variants of this kind of algorithm, and examples of some of them may be found in chapter 4 of ref. 8. It should also be pointed out that such methods may be combined with those variable metric methods which estimate A⁻¹, so that instead of calculating A⁻¹ at every stage, an estimate of it may be obtained merely by updating the previously calculated matrix. Some examples of studies undertaken by such a combined method may be found in the review by Yde (ref. 27). [Pg.47]
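The updating idea can be illustrated with the BFGS inverse-Hessian formula, one common variable-metric choice (the cited references may use a different update; this is only an indicative sketch):

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """One variable-metric (BFGS) update of the inverse-Hessian estimate H.

    s = x_{k+1} - x_k and y = grad_{k+1} - grad_k.  Instead of recomputing
    A^{-1}, the previous estimate is corrected using first-derivative data only.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
           + rho * np.outer(s, s)
```

Only the step s and the gradient difference y are needed, so the inverse matrix never has to be recomputed from scratch.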


From the above list of rate-based model equations, it is seen that they total 5C + 6 for each tray, compared to 2C + 1 or 2C + 3 (depending on whether mole fractions or component flow rates are used for composition variables) for each stage in the equilibrium-stage model. Therefore, more computer time is required to solve the rate-based model, which is generally converged by an SC approach of the Newton type. [Pg.1292]

Gill, P.E. and W. Murray, "Newton-type Methods for Unconstrained and Linearly Constrained Optimization", Mathematical Programming, 7, 311–350 (1974). [Pg.395]

Newton-type algorithms have been applied to process optimization over the past decade and have been well studied. In this section we provide a concise development of these methods, as well as their extension to handle large-scale problems. First, however, we consider the following, rather general, optimization problem ... [Pg.199]
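The excerpt breaks off before stating the problem. A standard form consistent with the SQP development that follows (the notation here is generic, not necessarily that of the source) is:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^n}\quad & f(x) \\
\text{subject to}\quad & h(x) = 0, \\
& g(x) \le 0 ,
\end{aligned}
```

with f, h, and g assumed twice continuously differentiable.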

This form is convenient in that the active inequality constraints can now be replaced in the QP by all of the inequalities, with the result that Sa is determined directly from the QP solution. Finally, since second derivatives may often be hard to calculate and a unique solution is desired for the QP problem, the Hessian matrix is approximated by a positive definite matrix, B, which is constructed by a quasi-Newton formula and requires only first-derivative information. Thus, the Newton-type derivation for (2) leads to a nonlinear programming algorithm based on the successive solution of the following QP subproblem ... [Pg.201]
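The excerpt truncates before the subproblem itself. The standard QP subproblem consistent with this description (generic notation, with B_k the positive definite quasi-Newton approximation of the Hessian of the Lagrangian) is:

```latex
\begin{aligned}
\min_{d}\quad & \nabla f(x_k)^{\mathsf T} d + \tfrac{1}{2}\, d^{\mathsf T} B_k\, d \\
\text{subject to}\quad & h(x_k) + \nabla h(x_k)^{\mathsf T} d = 0, \\
& g(x_k) + \nabla g(x_k)^{\mathsf T} d \le 0 ,
\end{aligned}
```

whose solution d supplies the search direction for the next major iteration.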

Unlike parameter optimization, the optimal control problem has degrees of freedom that increase linearly with the number of finite elements. Here, for problems with many finite elements, the decomposition strategy for SQP becomes less efficient. As an alternative, we discussed the application of Newton-type algorithms for unconstrained optimal control problems. Through the application of Riccati-like transformations, as well as parallel solvers for banded matrices, these problems can be solved very efficiently. However, the efficient solution of large optimal control problems with... [Pg.250]

Lorenz Biegler tackles the ambitious and comprehensive problem of modeling and optimization of complex process models, and features the simultaneous or Newton-type optimization strategies. [Pg.274]

In the restrained optimization scheme, the SCF algorithm is employed with a fixed multiplier until convergence is achieved, and subsequently the multiplier is updated; in our implementation (134) this is realized by a Newton-type optimization algorithm employing the second derivatives of C with respect to Xa as ... [Pg.215]

Based on the analytical expression for the derivative of det[V(p)], Bates and Watts (ref. 30) recently proposed a Gauss–Newton type procedure for minimizing the objective function (3.66). We use here, however, the simplex method of Nelder and Mead (module M34), which is certainly less efficient but does not require further programming. The determinant is evaluated by the module M14. After 95 iterations we obtain the results shown in the second row of Table 3.5, in good agreement with the estimates of Box et al. (ref. 29). [Pg.187]
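Modules M14 and M34 are the book's own routines. As a hedged modern equivalent, SciPy's Nelder–Mead can minimize a determinant criterion of the same shape; the residual model V(p) below is purely illustrative, not the system studied in the source.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative residual matrix for a hypothetical two-response, two-parameter
# model; V(p) plays the role of the residual matrix in a criterion like (3.66).
def V(p):
    t = np.linspace(0.0, 2.0, 20)
    y_obs = np.column_stack([np.exp(-2.0 * t), np.exp(-0.5 * t)])
    y_mod = np.column_stack([np.exp(-p[0] * t), np.exp(-p[1] * t)])
    return y_obs - y_mod

def det_criterion(p):
    R = V(p)
    return np.linalg.det(R.T @ R)    # determinant criterion to be minimized

res = minimize(det_criterion, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)                         # parameter estimates minimizing the criterion
```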

These partial derivatives provide a lot of information (ref. 10). They show how parameter perturbations (e.g., uncertainties in parameter values) affect the solution. By identifying the unimportant parameters, the analysis may help to simplify the model. Sensitivities are also needed by efficient parameter estimation procedures of the Gauss–Newton type. Since the solution y(t,p) is rarely available in analytic form, calculation of the coefficients Sj(t,p) is not easy. The simplest method is to perturb the parameter pj, solve the differential equation with the modified parameter set, and estimate the partial derivatives by divided differences. This "brute force" approach is not only time consuming (one has to solve n_p + 1 sets of n_y differential equations), but may also be rather unreliable due to roundoff errors. A much better approach is solving the sensitivity equations... [Pg.279]
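A minimal sketch of the sensitivity-equation approach, for the hypothetical one-parameter model dy/dt = −p·y with y(0) = 1 (the model and tolerances are illustrative): the state is augmented with S = ∂y/∂p, which obeys dS/dt = (∂f/∂y)S + ∂f/∂p = −pS − y with S(0) = 0, and both are integrated together.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Augmented system: z = [y, S] with S = dy/dp.
def augmented_rhs(t, z, p):
    y, S = z
    return [-p * y, -p * S - y]

p = 1.5
t = np.linspace(0.0, 2.0, 21)
sol = solve_ivp(augmented_rhs, (0.0, 2.0), [1.0, 0.0], args=(p,),
                t_eval=t, rtol=1e-8, atol=1e-10)
y, S = sol.y

# Analytical check: y = exp(-p t) gives dy/dp = -t exp(-p t).
print(np.max(np.abs(S - (-t * np.exp(-p * t)))))   # small (integration error only)
```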

In this section we deal with estimating the parameters p in the dynamical model of the form (5.37). As we noted, the methods of Chapter 3 apply directly to this problem only if the solution of the differential equation is available in analytical form. Otherwise one can follow the same algorithms, but solve the differential equations numerically whenever the computed responses are needed. The partial derivatives required by the Gauss–Newton type algorithms can be obtained by solving the sensitivity equations. While this indirect method is... [Pg.286]

x2(0) = x3(0) = 0 assumed to be known exactly. The only observed variable is y = x1. Jennrich and Bright (ref. 31) used the indirect approach to parameter estimation and solved the equations (5.72) numerically in each iteration of a Gauss–Newton type procedure, exploiting the linearity of (5.72) only in the sensitivity calculation. They used relative weighting. Although a similar procedure is too time consuming on most personal computers, this does not mean that we are unable to solve the problem. In fact, linear differential equations can be solved by analytical methods, and solutions of the most important linear compartmental models are listed in pharmacokinetics textbooks (see, e.g., ref. 33). For the three compartment model of Fig. 5.7 the solution is of the form... [Pg.314]
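For reference, the truncated solution form for a linear three-compartment model is a sum of three exponentials (generic notation, not necessarily that of ref. 33):

```latex
x_1(t) = A_1 e^{-\lambda_1 t} + A_2 e^{-\lambda_2 t} + A_3 e^{-\lambda_3 t},
```

where the exponents λ₁, λ₂, λ₃ are the eigenvalues of the rate-constant matrix and the coefficients A₁, A₂, A₃ are determined by the rate constants and the initial conditions.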

Two kinds of method can be used to solve this system: Newton type methods, in which it is necessary to compute a Jacobian matrix, and direct iterative methods, in which it is necessary to compute only the functions f1, f2, ..., fn. [Pg.289]
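A minimal sketch of the first kind of method, with the Jacobian approximated column-by-column by finite differences (the 2×2 test system is illustrative, not from the source):

```python
import numpy as np

def newton_system(F, x, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-type iteration for F(x) = 0 with a finite-difference Jacobian."""
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        n = len(x)
        J = np.empty((n, n))
        for j in range(n):               # build column j of the Jacobian
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)   # Newton correction
    return x

# Illustrative 2x2 system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
print(newton_system(F, np.array([2.0, 0.5])))   # ~[1.932, 0.518]
```

A direct iterative method, by contrast, would repeatedly evaluate only f1, ..., fn in a fixed-point scheme, with no Jacobian at all.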

One of the principal advantages of conjugate gradient methods over the Newton-type methods described below (other than their simplicity of implementation) is their ability to handle problems with a large number of parameters, possibly in the thousands. However, this is not much of a consideration in problems of fitting data to kinetic rate expressions, where there are unlikely to be more than a dozen or so parameters to be estimated. [Pg.189]

Conjugate gradient methods are fast, though not as fast as Newton-type methods near the minimum. They can handle large numbers of parameters since their storage requirement (needing only to store gradients) is a small fraction of that for Newton-type methods (which need to store Hessian matrices or their approximations). These methods are also relatively simple to implement, but this is of little concern to most practitioners who are unlikely to write their own optimization code. [Pg.193]
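A sketch of a nonlinear conjugate gradient iteration (Polak–Ribière variant with restarts; the Rosenbrock test function is illustrative) makes the low storage footprint visible: only the current point, gradient, and direction vectors are kept, never a Hessian.

```python
import numpy as np

def nonlinear_cg(f, grad, x, max_iter=1000, tol=1e-8):
    """Polak-Ribiere nonlinear conjugate gradients: vector storage only."""
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                   # safeguard: restart with steepest descent
            d = -g
        t = 1.0
        # Backtracking line search along d (Armijo condition).
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        g = g_new
    return x

# Example: Rosenbrock function in two variables.
f = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0]**2),
                           200 * (v[1] - v[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))   # tends toward [1, 1]
```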

Newton-type methods are very fast near the minimum. However, pure Newton methods can have serious convergence difficulties if the starting point is far from the minimum, so some modified form is nearly always required in practice. For least squares problems a Levenberg–Marquardt method is attractive, since it guarantees convergence by balancing steepest descent with Newton's method, and since the special structure of least squares problems allows for an easy approximation to the Hessian matrix. [Pg.193]
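A minimal Levenberg–Marquardt sketch (the exponential-decay fit is a hypothetical example; production codes such as MINPACK add scaling and trust-region control): the step solves (JᵀJ + λI)δ = −Jᵀr, with λ grown on failure (toward steepest descent) and shrunk on success (toward Gauss–Newton).

```python
import numpy as np

def levenberg_marquardt(residual, jac, x, lam=1e-3, max_iter=100, tol=1e-10):
    """Minimal Levenberg-Marquardt iteration for least squares."""
    r = residual(x)
    cost = r @ r
    for _ in range(max_iter):
        J = jac(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -g)
        r_new = residual(x + step)
        if r_new @ r_new < cost:         # accept: shift toward Gauss-Newton
            x, r, cost = x + step, r_new, r_new @ r_new
            lam *= 0.5
        else:                            # reject: shift toward steepest descent
            lam *= 10.0
    return x

# Illustrative fit of y = exp(-k t) to synthetic data with k_true = 1.3.
t = np.linspace(0.0, 2.0, 15)
y = np.exp(-1.3 * t)
residual = lambda p: np.exp(-p[0] * t) - y
jac = lambda p: (-t * np.exp(-p[0] * t)).reshape(-1, 1)
print(levenberg_marquardt(residual, jac, np.array([0.2])))   # ~[1.3]
```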

Under steady-state conditions, the reagent flux from the bulk to the electrode surface, frequently modeled by a Newton-type law, equals the reagent consumption due to the electrochemical reaction ... [Pg.461]
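Written out, such a balance commonly takes the form below (generic notation; k_m is a mass-transfer coefficient, not necessarily the symbol used in the source):

```latex
k_m \left( c_\mathrm{b} - c_\mathrm{s} \right) = \frac{j}{nF},
```

where c_b and c_s are the bulk and surface concentrations of the reagent, j is the current density, n the number of electrons transferred, and F the Faraday constant.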

By definition, the potential V has Newton-type singularities on the set ⊂ M if, in the neighbourhood of the point qi, in the coordinates z conformal with respect to the Riemannian metric T, there holds the relation ... [Pg.279]

Codes for block waveform relaxation methods and for block Jacobi–Newton type methods have been tested on sequential machines; at present the parallel case is simulated by these codes. In the case of loosely coupled subproblems, promising results have been achieved. [Pg.69]



