Big Chemical Encyclopedia


Hessian sparsity

More variables are retained in this type of NLP problem formulation, but you can take advantage of sparse matrix routines that factor the linear (and linearized) equations efficiently. Figure 15.5 illustrates the sparsity of the Hessian matrix used in the QP subproblem that arises during the optimization of a plant involving five unit operations. [Pg.528]
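To make the point concrete, here is a minimal SciPy sketch, not the flowsheet code the excerpt refers to: a sparse symmetric Hessian is assembled, factored once, and the factorization is reused to solve the QP/Newton system. The dimensions and sparsity pattern are invented for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Hypothetical diagonal-plus-coupling pattern standing in for a plant
# with a few interconnected units.
diag = sp.eye(n, format="csc") * 4.0
coupling = sp.random(n, n, density=0.001, format="csc", random_state=0)
H = (diag + coupling + coupling.T).tocsc()   # sparse symmetric Hessian

lu = spla.splu(H)                            # sparse LU factorization
g = np.ones(n)
step = lu.solve(-g)                          # Newton/QP step: H p = -g
print("nonzeros:", H.nnz, "of", n * n)
```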

The Hessian matrix for the QP subproblem, showing five units and the sparsity of the matrix. [Pg.529]

The formulae to update the Hessian in a stable way do not preserve matrix sparsity. Thus, some problems (usually memory allocation problems) may arise when the Hessian is large. [Pg.126]
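A small NumPy demonstration of this fill-in problem (toy data, not from the text): a standard BFGS update applied to a tridiagonal Hessian destroys the pattern, because the correction is built from dense rank-one outer products.

```python
import numpy as np

# A tridiagonal "Hessian": only 13 of 25 entries are nonzero.
n = 5
H = np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)

s = np.array([1.0, 0.5, 0.0, 0.5, 1.0])   # hypothetical step
y = H @ s + 0.1 * np.random.default_rng(0).standard_normal(n)  # gradient change

# BFGS update of H: the two rank-one terms are dense outer products,
# so the updated matrix loses the tridiagonal sparsity.
Hs = H @ s
H_new = H - np.outer(Hs, Hs) / (s @ Hs) + np.outer(y, y) / (y @ s)
print("nonzeros before:", np.count_nonzero(H), "after:", np.count_nonzero(H_new))
```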

When a large optimization problem is involved, it is essential to exploit the sparsity of the Hessian to reduce both memory allocation and computational time. The following alternatives can be used ... [Pg.171]

The Hessian can be evaluated numerically through the analytical gradient. In such cases, it is crucial to preserve the sparsity and symmetry of the Hessian. Considerable time can also be saved by varying several variables simultaneously (Nocedal and Wright, 2000). [Pg.171]
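The "simultaneous variation" idea can be sketched as follows; this is a Python illustration with invented interfaces, not the book's routine. Columns of the Hessian whose sparsity patterns share no row can be perturbed together, so each extra gradient evaluation resolves several columns at once (a Curtis-Powell-Reid-style grouping).

```python
import numpy as np

def fd_hessian_grouped(grad, x, pattern, h=1e-6):
    """Forward-difference Hessian from an analytical gradient, perturbing
    groups of structurally independent variables simultaneously.
    `pattern` is a boolean (n, n) array marking structural nonzeros."""
    n = x.size
    # Greedy grouping: columns j and k may share a group only if no row
    # of the pattern has nonzeros in both columns.
    groups, assigned = [], np.zeros(n, dtype=bool)
    for j in range(n):
        if assigned[j]:
            continue
        group, cover = [j], pattern[:, j].copy()
        for k in range(j + 1, n):
            if not assigned[k] and not np.any(cover & pattern[:, k]):
                group.append(k)
                cover |= pattern[:, k]
        for k in group:
            assigned[k] = True
        groups.append(group)

    g0 = grad(x)
    H = np.zeros((n, n))
    for group in groups:
        xp = x.copy()
        for j in group:
            xp[j] += h
        dg = (grad(xp) - g0) / h      # one gradient call per group
        for j in group:
            rows = pattern[:, j]
            H[rows, j] = dg[rows]     # attribute entries via the pattern
    return 0.5 * (H + H.T)            # enforce symmetry

# Usage on an invented toy function with a block sparsity pattern:
grad = lambda x: np.array([2*x[0] + x[1], x[0] + 4*x[1], 6*x[2]])
pattern = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=bool)
print(fd_hessian_grouped(grad, np.ones(3), pattern))
```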

The reason is simple: no method that updates the factorization of the Hessian (see Section 3.7) can preserve its sparsity without worsening the efficiency of the method itself. [Pg.173]

As above, an object of the BzzMatrixSparseSymmetricLocked class can use the function BuildGradientAndHessian to evaluate the function's Hessian while preserving its sparsity. [Pg.173]

Such an object can also use the function BuildGradientAndUpdateHessian to update the Hessian (not its factorization) while preserving its sparsity, as shown in Section 13.2. [Pg.173]

We have already discussed in Chapter 4 how an object of the BzzMatrixSparseSymmetricLocked class can use the function BuildGradientAndHessian to numerically evaluate the Hessian while preserving its sparsity. [Pg.447]

It is possible to update the Hessian as if it were the Jacobian of the gradient of the objective function. When the Jacobian of a system of equations is updated, its sparsity can be preserved by means of Schubert's formula (7.103). [Pg.447]
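Schubert's formula itself is not reproduced in the excerpt. The sketch below shows the idea in NumPy with dense storage for clarity: a row-wise Broyden correction masked to each row's structural nonzeros, so the pattern is untouched. Applied to the Jacobian of the gradient it updates the Hessian while preserving sparsity, though symmetry is not automatic and may need to be restored separately (e.g., by averaging with the transpose).

```python
import numpy as np

def schubert_update(J, s, y, pattern):
    """Sparse Broyden (Schubert) update: each row of J is corrected only
    in its structurally nonzero positions.  `pattern` is a boolean mask
    of the sparsity pattern; s is the step, y the change in residuals."""
    J = J.copy()
    for i in range(J.shape[0]):
        m = pattern[i]                 # nonzero positions of row i
        sm = np.where(m, s, 0.0)       # step restricted to the pattern
        denom = sm @ sm
        if denom > 1e-14:
            resid = y[i] - J[i] @ s    # secant residual for row i
            J[i, m] += resid * s[m] / denom
    return J
```

Each updated row satisfies the secant condition J_new[i] @ s = y[i] whenever its masked step is nonzero, which is the defining property of the update.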

The function BuildGradientAndUpdatePositiveHessian is used when the Hessian must be positive definite. It updates the Hessian with the aforementioned technique to preserve its sparsity and, analogously to BuildGradientAndPositiveHessian, makes the Hessian positive definite by suitably increasing the diagonal elements, similarly to Gill-Murray's method (see Section 3.6.1). [Pg.449]
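The exact BzzMath routine is not shown in the text. A common scheme in the same spirit, a simpler stand-in for Gill-Murray's factorization-time modification, adds a growing multiple of the identity until a Cholesky factorization succeeds; it modifies only the diagonal, leaving the off-diagonal sparsity intact.

```python
import numpy as np

def make_positive_definite(H, beta=1e-3, grow=10.0):
    """Add tau * I to H, increasing tau until Cholesky succeeds.
    A sketch of diagonal-modification schemes, not the BzzMath code."""
    tau = 0.0 if np.all(np.diag(H) > 0) else beta
    while True:
        try:
            np.linalg.cholesky(H + tau * np.eye(H.shape[0]))
            return H + tau * np.eye(H.shape[0])
        except np.linalg.LinAlgError:
            tau = max(grow * tau, beta)   # shift failed: increase and retry
```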

Chapter 4 has been devoted to large-scale unconstrained optimization problems, where issues related to the management of matrix sparsity and the ordering of rows and columns are addressed. Hessian evaluation and Newton and inexact Newton methods are discussed. [Pg.517]

The Hessian-vector products in each linear conjugate gradient step are more significant. For a Hessian formulated with a nonbonded cutoff radius (e.g., 8 Å), many zeros result in the Hessian (see Figure 3); when this sparsity is exploited in the multiplication routine, performance is fast compared with a dense matrix-vector product. When the Hessian is dense and large in size, the following forward-difference formula of two gradients often works faster ... [Pg.1152]
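The formula is cut off in the excerpt; presumably it is the standard forward-difference approximation of a Hessian-vector product from two gradient evaluations, Hd ≈ [∇f(x + hd) − ∇f(x)] / h. A NumPy/SciPy sketch of both routes (the step-size rule is a common choice, not taken from the source):

```python
import numpy as np
import scipy.sparse as sp

def hessvec_sparse(H, d):
    """Hessian-vector product exploiting stored sparsity (H in CSR/CSC):
    SciPy multiplies using only the nonzero entries."""
    return H @ d

def hessvec_fd(grad, x, d, h=None):
    """Forward-difference Hessian-vector product from two gradient calls:
    H d ~= (grad(x + h d) - grad(x)) / h."""
    if h is None:
        h = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(x)) \
            / max(np.linalg.norm(d), 1e-12)
    return (grad(x + h * d) - grad(x)) / h
```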

BFGS can be applied to large problems when the Hessian is sparse if the update formula (5.38) is provided with the sparsity pattern, so that only the nonzero positions are stored and updated. Alternatively, only the most recent gradient vectors may be retained and used in (5.38). Both approaches allow the construction of approximate Hessians with limited memory usage. For a more detailed discussion of memory-efficient BFGS methods, consult Nocedal and Wright (1999). [Pg.227]
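The "most recent gradient vectors" alternative is limited-memory BFGS (L-BFGS). A minimal sketch of its two-loop recursion follows (variable names invented): it applies the inverse-Hessian approximation to the gradient using only the stored (step, gradient-change) pairs, never forming a matrix.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: returns the L-BFGS search direction built from
    the m most recent step/gradient-change pairs (oldest first)."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                          # initial scaling H0 = gamma * I
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
        q *= gamma
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return -q                           # quasi-Newton descent direction
```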

The need in Newton's method to store and solve a linear system (5.50) at each iteration poses a significant challenge for large optimization problems. For large problems with Hessians that are dense or whose sparsity patterns are unknown, the tricks above cannot be used. Instead, the nonlinear conjugate gradient method, which does not require any curvature knowledge, is recommended. [Pg.227]
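The recommended matrix-free route can be tried directly with SciPy's nonlinear conjugate gradient implementation; the extended Rosenbrock function below is an invented test problem, chosen only to illustrate that no Hessian is ever formed.

```python
import numpy as np
from scipy.optimize import minimize

# Extended Rosenbrock function in many variables.
def f(x):
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

x0 = np.zeros(1000)
res = minimize(f, x0, jac=grad, method="CG")   # nonlinear CG: gradients only
print(res.success, res.nit, f(res.x))
```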

