Big Chemical Encyclopedia


Nonlinear programming function

Nonlinear Programming The most general case for optimization occurs when both the objective function and constraints are nonlinear, a case referred to as nonlinear programming. While the ideas behind the search methods used for unconstrained multivariable problems are still applicable, the presence of constraints complicates the solution procedure. [Pg.745]

One important class of nonlinear programming techniques is called quadratic programming (QP), where the objective function is quadratic and the constraints are linear. While the solution is iterative, it can be obtained quickly, as in linear programming. This is the basis for the newest type of constrained multivariable control algorithms, called model predictive control. The dominant method used in the refining industry utilizes the solution of a QP and is called dynamic matrix control. [Pg.745]
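The QP at the heart of such controllers can be illustrated on a toy case. The sketch below is not from the text; the problem data are invented for illustration. It minimizes 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1 by solving the Karush-Kuhn-Tucker (KKT) linear system directly:

```python
def solve_linear(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

# Minimize 0.5*(x1^2 + x2^2) subject to x1 + x2 = 1.
# KKT system: [Q A'; A 0] [x; lam] = [-c; b], with Q = I, c = 0, A = [1 1], b = 1.
kkt = [[1.0, 0.0, 1.0],
       [0.0, 1.0, 1.0],
       [1.0, 1.0, 0.0]]
x1, x2, lam = solve_linear(kkt, [0.0, 0.0, 1.0])
```

Because the objective is quadratic and the constraint linear, a single linear solve gives the exact optimum (x1 = x2 = 0.5); general QP solvers iterate mainly to manage inequality constraints.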

In an earlier section, we alluded to the need to stop the reasoning process at some point. The operationality criterion is the formal statement of that need. In most problems we have some understanding of which properties are easy to determine. For example, a property such as the processing time of a batch is normally given to us and hence is determined by a simple database lookup. The optimal solution to a nonlinear program, on the other hand, is not a simple property, and hence we might look for a simpler explanation of why two solutions have equal objective function values. In the case of our branch-and-bound problem, the operationality criterion imposes two requirements ... [Pg.318]

Here the B-spline Bim(zj, λj) is the ith B-spline basis function on the extended partition λj (which contains the locations of the knots in the zj direction), and the associated coefficient is the quantity to be estimated. We use cubic splines and sufficiently many uniformly spaced knots that the estimation problem is not affected by the partition. The estimation problem now involves determining the set of B-spline coefficients that minimizes Eq. (4.1.26), subject to the state equations [Eqs. (4.1.24) and (4.1.25)], for a suitable value of the regularization parameter. At this point, the minimization problem corresponds to a nonlinear programming problem. [Pg.374]

Another method for solving nonlinear programming problems is based on quadratic programming (QP). Quadratic programming is an optimization procedure that minimizes a quadratic objective function subject to linear inequality or equality constraints (or both). For example, a quadratic function of two variables x1 and x2 would be of the general form ... [Pg.46]
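The displayed general form was lost in extraction; as a hedged reconstruction in code, a quadratic in x1 and x2 is f(x) = 0.5*x'Qx + c'x, written out term by term below (the coefficient values are invented for illustration):

```python
def quad(x1, x2, q11=2.0, q12=1.0, q22=4.0, c1=-1.0, c2=0.0):
    """General quadratic in two variables: 0.5*x'Qx + c'x with symmetric Q."""
    return 0.5 * (q11 * x1**2 + 2 * q12 * x1 * x2 + q22 * x2**2) + c1 * x1 + c2 * x2

def quad_grad(x1, x2, q11=2.0, q12=1.0, q22=4.0, c1=-1.0, c2=0.0):
    """Gradient Qx + c of the quadratic above."""
    return (q11 * x1 + q12 * x2 + c1, q12 * x1 + q22 * x2 + c2)
```

With these illustrative coefficients the unconstrained minimizer solves Qx = -c, giving x = (4/7, -1/7), where the gradient vanishes.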

Figure 1.1 A simple model of gene expression. The model has been implemented and simulated in the program Gepasi.41,42 Nonlinear saturable functions were chosen for the kinetics of all six reactions, and parameter values were chosen to produce extreme behaviors (details available from the authors).
Neither of the problems illustrated in Figures 4.5 and 4.6 had more than one optimum. It is easy, however, to construct nonlinear programs in which local optima occur. For example, if the objective function f had two minima and at least one was interior to the feasible region, then the constrained problem would have two local minima. Contours of such a function are shown in Figure 4.7. Note that the minimum at the boundary point x1 = 3, x2 = 2 is the global minimum, at f = 3; the feasible local minimum in the interior of the constraints is at f = 4. [Pg.120]
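The same phenomenon appears in one dimension. In this minimal sketch (the function is invented, not the one contoured in Figure 4.7), f(x) = (x^2 - 1)^2 + 0.3x has two local minima, and a fine grid scan distinguishes the global minimum from the merely local one:

```python
def f(x):
    """Invented objective with two local minima (near x = -1.04 and x = 0.96)."""
    return (x * x - 1.0) ** 2 + 0.3 * x

# Scan [-2, 2] on a fine grid and keep points lower than both neighbors.
xs = [-2.0 + i * 0.001 for i in range(4001)]
vals = [f(x) for x in xs]
local_minima = [xs[i] for i in range(1, len(xs) - 1)
                if vals[i] < vals[i - 1] and vals[i] < vals[i + 1]]

# The global minimum is the local minimum with the smallest objective value.
global_min = min(local_minima, key=f)
```

A local search started near x = +1 would stop at the shallower minimum; only comparing all local minima (or a global method) identifies the one near x = -1.04.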

Sketch the objective function and constraints of the following nonlinear programming problems. [Pg.144]

The NLP (nonlinear programming) methods to be discussed in this chapter differ mainly in how they generate the search directions. Some nonlinear programming methods require information about derivative values, whereas others do not use derivatives and rely solely on function evaluations. Furthermore, finite difference substitutes can be used in lieu of derivatives, as explained in Section 8.10. For differentiable functions, methods that use analytical derivatives almost always use less computation time and are more accurate, even if finite difference approximations ... [Pg.182]
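A quick illustration of the trade-off: the central-difference formula recovers a derivative to roughly eight or more digits at a well-chosen step, at the cost of two extra function evaluations per variable (the test function and step size here are illustrative):

```python
import math

def central_diff(f, x, h=1e-5):
    """Central-difference approximation to f'(x); truncation error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Compare against the known analytic derivative of sin, which is cos.
approx = central_diff(math.sin, 1.0)
exact = math.cos(1.0)
```

Too large a step inflates truncation error, too small a step inflates roundoff error; the Curtis-Reid estimate mentioned later in this page balances exactly these two terms.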

The Kuhn-Tucker conditions are predicated on this fact: At any local constrained optimum, no (small) allowable change in the problem variables can improve the value of the objective function. To illustrate this statement, consider the nonlinear programming problem ... [Pg.273]
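That statement can be checked numerically on a small invented problem (not the book's example): minimize (x1 - 2)^2 + (x2 - 2)^2 subject to x1 + x2 <= 2. The optimum is x* = (1, 1); there, the gradient of f is exactly opposed by a nonnegative multiple of the constraint gradient, so no small feasible move can reduce f:

```python
def grad_f(x1, x2):
    """Gradient of the objective (x1-2)^2 + (x2-2)^2."""
    return (2.0 * (x1 - 2.0), 2.0 * (x2 - 2.0))

def g(x1, x2):
    """Inequality constraint, required to satisfy g <= 0."""
    return x1 + x2 - 2.0

grad_g = (1.0, 1.0)
x_star = (1.0, 1.0)

gf = grad_f(*x_star)
lam = -gf[0] / grad_g[0]   # stationarity grad_f + lam*grad_g = 0, first coordinate
# Stationarity must also hold in the second coordinate for x_star to satisfy KKT:
stationary = abs(gf[1] + lam * grad_g[1]) < 1e-12
```

The multiplier comes out as lam = 2 >= 0 and the constraint is active (g = 0), so all Kuhn-Tucker conditions hold at x*.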

Successive linear programming (SLP) methods solve a sequence of linear programming approximations to a nonlinear programming problem. Recall that if gi(x) is a nonlinear function and x0 is the initial value for x, then the first two terms in the Taylor series expansion of gi(x) around x0 are ... [Pg.293]
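The linearization step is easy to write down explicitly. In this sketch the constraint function and the expansion point are invented for illustration:

```python
def g(x1, x2):
    """Hypothetical nonlinear constraint function."""
    return x1**2 + x2 * x1 - 4.0

def g_lin(x1, x2, x0=(1.0, 2.0)):
    """First-order Taylor expansion of g about x0: the SLP linearization
    g(x0) + grad_g(x0) . (x - x0)."""
    gx0 = g(*x0)
    dg = (2.0 * x0[0] + x0[1], x0[0])   # analytic gradient of g
    return gx0 + dg[0] * (x1 - x0[0]) + dg[1] * (x2 - x0[1])
```

The linearization matches g exactly at x0 and deviates from it by second-order terms nearby, which is why SLP must re-linearize (and often trust-region-limit the step) at each iterate.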

Successive quadratic programming (SQP) methods solve a sequence of quadratic programming approximations to a nonlinear programming problem. Quadratic programs (QPs) have a quadratic objective function and linear constraints, and there exist efficient procedures for solving them; see Section 8.3. As in SLP, the linear constraints are linearizations of the actual constraints about the selected point. The objective is a quadratic approximation to the Lagrangian function, and the algorithm is simply Newton's method applied to the Kuhn-Tucker conditions (KTC) of the problem. [Pg.302]

Rustem, B. Algorithms for Nonlinear Programming and Multiple Objective Functions. Wiley, New York (1998). [Pg.328]

As posed here, the problem is a nonlinear programming one and involves nested loops of calculations, the outer loop of which is Equation (j) subject to Equations (a) through (i), and subject to the inequality constraints. If capital costs are to be included in the objective function, refer to Frey and colleagues (1997). [Pg.446]

The nonlinear programming problem based on the objective function, model equations (b)-(g), and the inequality constraints was solved using the generalized reduced gradient method presented in Chapter 8. See Setalvad and coworkers (1989) for details on the parameter values used in the optimization calculations, the results of which are presented here. [Pg.504]

As a consequence, the gradient of the objective function and the Jacobian matrix of the constraints in the nonlinear programming problem cannot be determined analytically. Finite difference substitutes, as discussed in Section 8.10, had to be used. To be conservative, substitutes for derivatives were computed as suggested by Curtis and Reid (1974). They estimated the ratio μ of the truncation error to the roundoff error in the central difference formula ... [Pg.535]

Fiacco, A. V. Sensitivity Analysis of Nonlinear Programming Using Penalty Function Methods. Math. Program. 10: 287-311 (1976). [Pg.547]

The sufficient conditions for a solution of the nonlinear programming problem to be global are that both the objective function and the constraint set be convex. If these conditions are not satisfied, there is no guarantee that a local optimum is also the global optimum. [Pg.102]
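For a twice-differentiable objective, convexity can be verified from the Hessian. Below is a minimal two-variable check using Sylvester's criterion; the example Hessians are illustrative:

```python
def is_positive_definite_2x2(h11, h12, h22):
    """Sylvester's criterion: the symmetric matrix [[h11, h12], [h12, h22]]
    is positive definite iff both leading principal minors are positive."""
    return h11 > 0 and h11 * h22 - h12 * h12 > 0

# Hessian of f(x1, x2) = x1^2 + x1*x2 + x2^2 is [[2, 1], [1, 2]]: positive
# definite, so f is convex and (over a convex feasible set) any local
# minimum of f is the global minimum.
convex_case = is_positive_definite_2x2(2.0, 1.0, 2.0)

# Hessian [[1, 2], [2, 1]] is indefinite: no such guarantee applies.
nonconvex_case = is_positive_definite_2x2(1.0, 2.0, 1.0)
```

The same idea extends to n variables by checking all leading principal minors, or in practice by attempting a Cholesky factorization.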

Finally in this chapter, an alternative approach for nonlinear dynamic data reconciliation, using nonlinear programming techniques, is discussed. This formulation involves the optimization of an objective function through the adjustment of estimate functions constrained by differential and algebraic equalities and inequalities and thus requires efficient and novel solution techniques. [Pg.157]
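The flavor of that formulation can be seen in its simplest steady-state, linear special case (the measurement values and the balance here are invented): adjust measured flows as little as possible, in a least-squares sense, so that a mass balance a . x = 0 holds exactly. For a single linear balance, the constrained optimum is an orthogonal projection and needs no iterative NLP solver:

```python
def reconcile(y, a):
    """Least-squares adjustment of measurements y so that a . x = 0:
    x = y - a * (a . y) / (a . a), the orthogonal projection of y
    onto the balance hyperplane."""
    resid = sum(ai * yi for ai, yi in zip(a, y))
    aa = sum(ai * ai for ai in a)
    return [yi - ai * resid / aa for ai, yi in zip(a, y)]

# Measured flows (in, in, out) violating the balance x1 + x2 - x3 = 0 by -0.5:
x_hat = reconcile([10.1, 4.7, 15.3], [1.0, 1.0, -1.0])
```

In the nonlinear dynamic case discussed in the text, the constraints are differential and algebraic equations rather than a single linear balance, which is what forces the full nonlinear programming treatment.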

On the other hand, the optimal control problem with a discretized control profile can be treated as a nonlinear program. The earliest studies come under the heading of control vector parameterization (Rosenbrock and Storey, 1966), with a representation of U(t) as a polynomial or piecewise-constant function. Here the model is solved repeatedly in an inner loop while the parameters representing U(t) are updated on the outside. While hill-climbing algorithms were used initially, recent efficient and sophisticated optimization methods require techniques for accurate gradient calculation from the DAE model. [Pg.218]
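A control-vector-parameterization loop can be sketched in a few lines. Everything here is invented for illustration: the plant is x' = -x + u on [0, 1], u(t) is piecewise constant on two intervals, the inner loop is Euler integration, and the outer loop is a coarse grid search standing in for the hill-climbing or gradient-based parameter update:

```python
def simulate(u_pieces, x0=0.0, t_final=1.0, steps=100):
    """Inner loop: Euler-integrate x' = -x + u(t) with piecewise-constant u."""
    dt = t_final / steps
    n = len(u_pieces)
    x = x0
    for k in range(steps):
        t = k * dt
        u = u_pieces[min(int(t / t_final * n), n - 1)]
        x += dt * (-x + u)
    return x

def cost(u_pieces, target=0.8):
    """Track x(t_final) to a target, with a small control-effort penalty."""
    return (simulate(u_pieces) - target) ** 2 + 0.01 * sum(u * u for u in u_pieces)

# Outer loop: coarse grid search over the two control parameters.
grid = [0.2 * i for i in range(11)]          # u values 0.0, 0.2, ..., 2.0
best = min(((u1, u2) for u1 in grid for u2 in grid),
           key=lambda p: cost(list(p)))
```

Replacing this grid search with a gradient-based NLP method requires derivatives of x(t_final) with respect to the control parameters, which is exactly the accurate-gradient-from-the-model issue the paragraph above raises.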

