Big Chemical Encyclopedia


Convergence ratio

To improve the convergence ratio of the calculations, a modified pressure-correction algorithm and a changed calculation of the velocities on the boundaries were implemented. [Pg.561]

Steepest descent is simple to implement and requires modest storage, O(n); however, progress toward a minimum may be very slow, especially near a solution. The convergence rate of SD when applied to a convex quadratic function, as in Eq. [22], is only linear. The associated convergence ratio is no greater than [(κ − 1)/(κ + 1)]², where κ, the condition number, is the ratio of the largest to smallest eigenvalue of A ... [Pg.30]

As the convergence ratio measures the reduction of the error at every step (||x_{k+1} − x*|| ≤ ρ ||x_k − x*|| for a linear rate), the relevant SD value can be arbitrarily close to 1 when κ is large (Figure 12). In other words, because the n lengths of the elliptical axes belonging to the contours of the function are proportional to the eigenvalue reciprocals, the convergence rate of SD is slowed as the contours of the objective function become more eccentric. Thus, the SD search vectors may in some cases exhibit very inefficient paths toward a solution (see the final section for a numerical example). [Pg.30]
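The slowdown described above is easy to reproduce. The sketch below is a minimal illustration (not from the text; the 2×2 matrix, the starting point, and the iteration count are arbitrary choices): it runs steepest descent with exact line search on the convex quadratic f(x) = ½ xᵀAx and compares the per-step reduction of f against the worst-case factor [(κ − 1)/(κ + 1)]².

```python
import numpy as np

# Convex quadratic f(x) = 0.5 x^T A x with A = diag(1, 25), so kappa = 25
A = np.diag([1.0, 25.0])
kappa = 25.0
bound = ((kappa - 1.0) / (kappa + 1.0)) ** 2  # worst-case per-step reduction of f

f = lambda x: 0.5 * x @ A @ x

# Start chosen so the gradient has equal weight on both eigenvectors
# (the worst case for steepest descent); other starts converge faster.
x = np.array([1.0, 1.0 / kappa])

ratios = []
for _ in range(40):
    g = A @ x                      # gradient of f
    alpha = (g @ g) / (g @ A @ g)  # exact line-search step for a quadratic
    x_next = x - alpha * g
    ratios.append(f(x_next) / f(x))
    x = x_next

print(max(ratios), bound)  # the observed ratio sits at the bound
```

For this worst-case start the observed ratio equals the bound at every step, so with κ = 25 each iteration removes only about 15% of the remaining error in f, and the iterates zig-zag along the narrow valley; with κ near 1 the ratio would be near 0 and convergence nearly immediate.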

The new coefficient matrix is symmetric, as M^(-1)A can be written as M^(-1/2)AM^(-1/2). Preconditioning aims to produce a more clustered eigenvalue structure for M^(-1)A and/or a lower condition number than for A, to improve the relevant convergence ratio; however, preconditioning also adds to the computational effort by requiring that a linear system involving M (namely, Mz = r) be solved at every step. Thus, it is essential for the efficiency of the method that M be factored very rapidly in relation to the original A. This can be achieved, for example, if M is a sparse component of the dense A. Whereas the solution of an n × n dense linear system requires order n^3 operations, the work for sparse systems can be as low as order n.[13,14] ... [Pg.33]
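As a concrete illustration of this clustering effect (the 4×4 SPD matrix below is hypothetical, chosen only for the example; the text itself does not prescribe a particular M), the sketch uses the simplest choice, a diagonal (Jacobi) preconditioner M = diag(A), for which solving Mz = r costs only O(n) per step, and compares the condition numbers of A and of M^(-1/2)AM^(-1/2):

```python
import numpy as np

# Hypothetical SPD matrix with a widely spread diagonal (kappa on the order of 1e3)
A = np.diag([1.0, 10.0, 100.0, 1000.0])
A = A + 0.05 * (np.ones((4, 4)) - np.eye(4))  # weak symmetric off-diagonal coupling

# Jacobi preconditioner: M = diag(A); a solve with M is a cheap elementwise divide
d = np.diag(A)
M_inv_sqrt = np.diag(1.0 / np.sqrt(d))

# Symmetric preconditioned matrix M^{-1/2} A M^{-1/2} (unit diagonal)
A_prec = M_inv_sqrt @ A @ M_inv_sqrt

print(np.linalg.cond(A))       # large: roughly the diagonal spread
print(np.linalg.cond(A_prec))  # close to 1
```

Here the preconditioned matrix has a near-unit condition number, so the convergence ratio of the preceding excerpts drops from nearly 1 to nearly 0. In practice M ranges from this diagonal choice up to sparse incomplete factorizations of A, trading setup cost against clustering.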

Meisami E. 1989. A proposed relationship between increases in the number of olfactory receptor neurons, convergence ratio and sensitivity in the developing rat. Brain Res Dev Brain Res 46:9-19. [Pg.195]

In the specific case of an accurate one-dimensional search along the gradient direction, it is possible to relate the linear convergence ratio (Nocedal and Wright, 2000) to the maximum and minimum eigenvalues λ_max and λ_min of the Hessian. [Pg.99]

If the function has a strong minimum and the gradient method is adopted using exact one-dimensional searches, convergence is linear with an asymptotic convergence ratio ... [Pg.99]

This asymptotic convergence ratio can also be written as a function of the Hessian condition number ... [Pg.99]
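The equations themselves are elided in this excerpt. In the standard analysis of the gradient method on a quadratic (a well-known result stated here as context, not taken from this page), the asymptotic reduction factor for the function error is ρ = [(λ_max − λ_min)/(λ_max + λ_min)]², and dividing numerator and denominator by λ_min gives the condition-number form [(κ − 1)/(κ + 1)]². A quick numerical check (the eigenvalues are arbitrary illustrative numbers):

```python
# Hypothetical extreme Hessian eigenvalues
lam_max, lam_min = 400.0, 2.0
kappa = lam_max / lam_min  # Hessian condition number

# The two forms of the asymptotic convergence ratio
r_eigs = ((lam_max - lam_min) / (lam_max + lam_min)) ** 2
r_cond = ((kappa - 1.0) / (kappa + 1.0)) ** 2

print(r_eigs, r_cond)  # identical up to rounding
```

With κ = 200 the ratio is already about 0.98, i.e. each gradient step removes only about 2% of the remaining error, which is the slow behavior the next excerpt describes.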

If the Hessian condition number is large at the minimum, the function contours are ellipsoids significantly elongated in the direction of the eigenvector that corresponds to λ_min. Therefore, the convergence ratio of the gradient method approaches one and the overall convergence becomes very slow. [Pg.99]

The largest number such that a finite limit (the convergence ratio) exists for a sequence {x_k}, where ... [Pg.1142]
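This truncated entry defines the order of convergence: the largest q for which lim ||x_{k+1} − x*|| / ||x_k − x*||^q is finite, that finite limit being the convergence ratio. The order can be probed numerically; the sketch below (an illustration, not from the text; the starting point and iteration count are arbitrary) estimates q for Newton's method on f(x) = x² − 2, a sequence known to converge quadratically:

```python
import math

# Newton iteration for sqrt(2): x_{k+1} = x_k - f(x_k)/f'(x_k), with f(x) = x^2 - 2
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

root = math.sqrt(2.0)
errs = [abs(x - root) for x in xs]

# Order estimate: q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}); use mid-sequence
# errors, before rounding error floors the sequence at machine precision.
q = math.log(errs[3] / errs[2]) / math.log(errs[2] / errs[1])
print(q)  # close to 2: quadratic convergence
```

For the linearly convergent (q = 1) gradient method of the earlier excerpts, the same limit would instead return the ratio ρ < 1 discussed there.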









© 2024 chempedia.info