Big Chemical Encyclopedia


Unconstrained nonlinear optimization

This chapter discusses the fundamentals of nonlinear optimization. Section 3.1 focuses on optimality conditions for unconstrained nonlinear optimization. Section 3.2 presents the first-order and second-order optimality conditions for constrained nonlinear optimization problems. [Pg.45]

This section presents the formulation and basic definitions of unconstrained nonlinear optimization along with the necessary, sufficient, and necessary and sufficient optimality conditions. [Pg.45]
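The conditions referred to here (a zero gradient for first-order necessity, plus a positive definite Hessian for sufficiency) can be checked numerically. A minimal sketch, assuming a hypothetical objective f(x, y) = (x - 1)^2 + 2(y + 2)^2 and finite-difference derivatives; nothing below is from the source text:

```python
def f(x, y):
    # Hypothetical smooth objective with minimizer at (1, -2)
    return (x - 1.0) ** 2 + 2.0 * (y + 2.0) ** 2

def grad(x, y, h=1e-6):
    # Central-difference approximation of the gradient
    gx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    return gx, gy

def hessian(x, y, h=1e-4):
    # Central-difference approximation of the Hessian entries
    fxx = (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2.0 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h ** 2)
    return fxx, fxy, fyy

def satisfies_sufficient_conditions(x, y, tol=1e-4):
    gx, gy = grad(x, y)
    fxx, fxy, fyy = hessian(x, y)
    stationary = abs(gx) < tol and abs(gy) < tol       # first-order necessary
    pos_def = fxx > 0 and fxx * fyy - fxy ** 2 > 0     # 2x2 positive-definiteness test
    return stationary and pos_def
```

The 2x2 positive-definiteness test (positive leading minors) is the textbook criterion specialized to two variables.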

An unconstrained nonlinear optimization problem deals with the search for a minimum of a nonlinear function f(x) of n real variables x = (x1, x2, ..., xn), and is denoted as... [Pg.45]
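The simplest algorithmic instance of this search is steepest descent with a backtracking (Armijo) line search. The sketch below is illustrative only; the objective and its gradient are assumptions, not taken from the source:

```python
def steepest_descent(f, grad_f, x0, tol=1e-8, max_iter=5000):
    # Minimize f starting from x0 by stepping along the negative gradient.
    x = list(x0)
    for _ in range(max_iter):
        g = grad_f(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 ** 0.5 < tol:
            break  # first-order optimality reached
        t, fx = 1.0, f(x)
        # Shrink the step until a sufficient (Armijo) decrease is achieved
        while t > 1e-12 and f([xi - t * gi for xi, gi in zip(x, g)]) > fx - 1e-4 * t * gnorm2:
            t *= 0.5
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical smooth objective (assumed for illustration): minimum at (3, -1)
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
grad_f = lambda x: [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]
```

Calling `steepest_descent(f, grad_f, [0.0, 0.0])` drives the gradient norm below the tolerance and returns a point near (3, -1).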

Unconstrained nonlinear optimization problems arise in several science and engineering applications ranging from simultaneous solution of nonlinear equations (e.g., chemical phase equilibrium) to parameter estimation and identification problems (e.g., nonlinear least squares). [Pg.45]

This chapter introduces the fundamentals of unconstrained and constrained nonlinear optimization. Section 3.1 presents the formulation and basic definitions of unconstrained nonlinear optimization along with the necessary, sufficient, and necessary and sufficient optimality conditions. For further reading refer to Hestenes (1975), Luenberger (1984), and Minoux (1986). [Pg.70]

BFGS Broyden-Fletcher-Goldfarb-Shanno algorithm, an iterative method for solving unconstrained nonlinear optimization problems ... [Pg.286]
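The core idea of BFGS is to maintain an approximation H of the inverse Hessian and update it from successive gradient differences. The following is a minimal two-variable sketch, with an assumed quadratic test objective; a production implementation would use matrix libraries and a Wolfe line search:

```python
def bfgs_2d(f, grad_f, x0, tol=1e-8, max_iter=500):
    # Minimal two-variable BFGS: H approximates the inverse Hessian.
    x = list(x0)
    H = [[1.0, 0.0], [0.0, 1.0]]            # start from the identity
    g = grad_f(x)
    for _ in range(max_iter):
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < tol:
            break
        # Quasi-Newton search direction p = -H g
        p = [-(H[0][0] * g[0] + H[0][1] * g[1]),
             -(H[1][0] * g[0] + H[1][1] * g[1])]
        gp = g[0] * p[0] + g[1] * p[1]       # directional derivative (< 0)
        t, fx = 1.0, f(x)
        while t > 1e-12 and f([x[0] + t * p[0], x[1] + t * p[1]]) > fx + 1e-4 * t * gp:
            t *= 0.5                         # backtracking Armijo line search
        x_new = [x[0] + t * p[0], x[1] + t * p[1]]
        g_new = grad_f(x_new)
        s = [x_new[i] - x[i] for i in range(2)]
        y = [g_new[i] - g[i] for i in range(2)]
        ys = y[0] * s[0] + y[1] * s[1]
        if ys > 1e-12:                       # curvature condition y^T s > 0
            rho = 1.0 / ys
            # BFGS update: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            A = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j] for j in range(2)]
                 for i in range(2)]
            AH = [[sum(A[i][k] * H[k][j] for k in range(2)) for j in range(2)]
                  for i in range(2)]
            H = [[sum(AH[i][k] * A[j][k] for k in range(2)) + rho * s[i] * s[j]
                  for j in range(2)] for i in range(2)]
        x, g = x_new, g_new
    return x

# Assumed test objective: mildly ill-conditioned quadratic, minimum at (1, 2)
f_quad = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - 2.0) ** 2
grad_quad = lambda x: [2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)]
```

The curvature guard `ys > 1e-12` skips the update when the step provides no usable curvature information, which keeps H positive definite.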

Extrapolation of thermal functions to 0 K using unconstrained nonlinear optimization. [Pg.180]

LANCELOT Philippe Toint (pht@math.fundp.ac.be) Various Newton methods for constrained and unconstrained nonlinear optimization, specializing in large-scale problems and including a trust-region Newton method and an algorithm for nonlinear least squares that exploits partial separability... [Pg.1153]

The area of nonlinear optimization can be subdivided further into various classes of problems. Firstly, there is constrained versus unconstrained optimization. Normally, the inclusion of constraints generates only a special case for the sibling, unconstrained... [Pg.69]

The optimality conditions discussed in the previous sections formed the theoretical basis for the development of several algorithms for unconstrained and constrained nonlinear optimization problems. In this section, we will provide a brief outline of the different classes of nonlinear multivariable optimization algorithms. [Pg.68]

Part 1, comprised of three chapters, focuses on the fundamentals of convex analysis and nonlinear optimization. Chapter 2 discusses the key elements of convex analysis (i.e., convex sets, convex and concave functions, and generalizations of convex and concave functions), which are very important in the study of nonlinear optimization problems. Chapter 3 presents the first and second order optimality conditions for unconstrained and constrained nonlinear optimization. Chapter 4 introduces the basics of duality theory (i.e., the primal problem, the perturbation function, and the dual problem) and presents the weak and strong duality theorem along with the duality gap. Part 1 outlines the basic notions of nonlinear optimization and prepares the reader for Part 2. [Pg.466]

Sargent, R.W.H., and D.J. Sebastian, "Numerical Experience with Algorithms for Unconstrained Minimization," in F.A. Lootsma (Ed.), Numerical Methods for Nonlinear Optimization, Academic Press, 1972, pp. 45-68. [Pg.53]

Problem Type: Unconstrained and constrained nonlinear optimization. Method: Various methods... [Pg.2564]

Nonlinear optimization problems in just a single decision variable frequently arise in chemical engineering applications. If the objective function is unconstrained, the optimal solution(s) can often be obtained analytically using derivatives from calculus. When constrained, numerical methods are frequently necessary. Some applications that have appeared often in chemical engineering textbooks include the following, many of which involve a balance between capital and operating costs ... [Pg.626]
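A classic instance of the capital-versus-operating-cost balance mentioned above can illustrate both routes. The cost model and its coefficients below are assumptions for the sketch, not from the source: capital cost grows with equipment size x while operating cost falls with it, and setting dC/dx = 0 gives the optimum in closed form. A derivative-free golden-section search is included for cases where no closed form is available:

```python
import math

def total_cost(x, a=2.0, b=8.0):
    # Illustrative (assumed) cost model: C(x) = a*x + b/x for x > 0
    return a * x + b / x

def analytic_optimum(a=2.0, b=8.0):
    # dC/dx = a - b/x**2 = 0  =>  x* = sqrt(b/a)
    return math.sqrt(b / a)

def golden_section(f, lo, hi, tol=1e-8):
    # Derivative-free bracketing search for a unimodal function on [lo, hi]
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2.0
```

With a = 2 and b = 8, both routes locate x* = 2, where the marginal capital and operating costs balance.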

These goals may conflict. For example, a rapidly convergent method for a large unconstrained nonlinear problem may require too much computer memory. On the other hand, a robust method may also be the slowest. Tradeoffs between convergence rate and storage requirements, and between robustness and speed, and so on, are central issues in numerical optimization. [Pg.431]

Nonlinear optimization is one of the crucial topics in the numerical treatment of chemical engineering problems. Numerical optimization deals with the problems of solving systems of nonlinear equations or minimizing nonlinear functionals (subject to side conditions). In this article we present a new method for unconstrained minimization which is suitable for large-scale as well as ill-conditioned problems. The method is based on a true multi-dimensional modeling of the objective function in each iteration step. The scheme allows the incorporation of more given or known information into the search than in common line search methods. [Pg.183]

Constrained Optimization When constraints exist and cannot be eliminated in an optimization problem, more general methods must be employed than those described above, since the unconstrained optimum may correspond to unrealistic values of the operating variables. The general form of a nonlinear programming problem allows for a nonlinear objective function and nonlinear constraints, or... [Pg.744]

Nonlinear Programming The most general case for optimization occurs when both the objective function and constraints are nonlinear, a case referred to as nonlinear programming. While the ideas behind the search methods used for unconstrained multivariable problems are applicable, the presence of constraints complicates the solution procedure. [Pg.745]
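One common bridge between the constrained and unconstrained cases is the quadratic penalty method: the constraint violation is squared, weighted, and added to the objective, and the resulting unconstrained problem is minimized. The toy problem below (minimize x^2 + y^2 subject to x + y = 1, true solution (0.5, 0.5)) and the fixed-step gradient loop are assumptions for this sketch:

```python
def penalty_objective(x, y, mu):
    # Quadratic-penalty transcription of: min x^2 + y^2  s.t.  x + y = 1
    return x ** 2 + y ** 2 + mu * (x + y - 1.0) ** 2

def minimize_penalty(mu, iters=20000):
    # Fixed-step gradient descent on the penalized (unconstrained) objective.
    x = y = 0.0
    step = 1.0 / (2.0 + 4.0 * mu)   # safe step: 1 / (largest Hessian eigenvalue)
    for _ in range(iters):
        r = x + y - 1.0             # constraint residual
        gx = 2.0 * x + 2.0 * mu * r
        gy = 2.0 * y + 2.0 * mu * r
        x -= step * gx
        y -= step * gy
    return x, y
```

For finite mu the penalized minimizer sits at x = y = mu/(1 + 2 mu), slightly inside the constraint; as mu grows it approaches the true constrained solution (0.5, 0.5), which is why penalty methods solve a sequence of unconstrained problems with increasing mu.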

Although K appears linearly in both response equations, r1 in (2.12) and r1 and r2 in (2.13) appear nonlinearly, so that nonlinear least squares must be used to estimate their values. The specific details of how to carry out the computations will be deferred until we take up numerical methods of unconstrained optimization in Chapter 6. [Pg.62]
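A standard workhorse for such parameter-estimation problems is the Gauss-Newton method, which linearizes the residuals at each iterate. The sketch below fits an assumed one-parameter first-order decay model y = exp(-k t) to synthetic noiseless data (true k = 0.5); the data and model are illustrations, not the equations cited in the snippet:

```python
import math

# Synthetic kinetics data (assumed): y = exp(-k * t) with true k = 0.5
t_data = [0.0, 1.0, 2.0, 3.0, 4.0]
y_data = [math.exp(-0.5 * t) for t in t_data]

def gauss_newton_rate(k0, iters=50):
    # One-parameter Gauss-Newton for  min sum_i (y_i - exp(-k t_i))^2
    k = k0
    for _ in range(iters):
        JtJ = Jtr = 0.0
        for t, y in zip(t_data, y_data):
            model = math.exp(-k * t)
            r = y - model            # residual
            J = -t * model           # d(model)/dk, the Jacobian entry
            JtJ += J * J
            Jtr += J * r
        if JtJ == 0.0:
            break
        k += Jtr / JtJ               # Gauss-Newton step: (J^T J)^-1 J^T r
    return k
```

Because the data are noiseless, the residuals vanish at the solution and Gauss-Newton converges quadratically near k = 0.5.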

Problem 4.1 is nonlinear if one or more of the functions f, g1, ..., gm are nonlinear. It is unconstrained if there are no constraint functions gi and no bounds on the xi, and it is bound-constrained if only the xi are bounded. In linearly constrained problems all constraint functions gi are linear, and the objective f is nonlinear. There are special NLP algorithms and software for unconstrained and bound-constrained problems, and we describe these in Chapters 6 and 8. Methods and software for solving constrained NLPs use many ideas from the unconstrained case. Most modern software can handle nonlinear constraints, and is especially efficient on linearly constrained problems. A linearly constrained problem with a quadratic objective is called a quadratic program (QP). Special methods exist for solving QPs, and these are often faster than general-purpose optimization procedures. [Pg.118]
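For the bound-constrained case, one of the simplest ideas borrowed from the unconstrained setting is projected gradient descent: take an ordinary gradient step, then clip the result back into the box. The objective and box below are assumptions for this sketch; its unconstrained minimum (3, -1) lies outside the box, so the constrained solution lands on the boundary at (1, 0):

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

def projected_gradient(x0, y0, step=0.25, iters=500):
    # Projected gradient for: min (x-3)^2 + (y+1)^2  s.t.  0 <= x, y <= 1
    x, y = x0, y0
    for _ in range(iters):
        gx = 2.0 * (x - 3.0)
        gy = 2.0 * (y + 1.0)
        x = clip(x - step * gx, 0.0, 1.0)   # gradient step, then project
        y = clip(y - step * gy, 0.0, 1.0)   # back onto the box
    return x, y
```

The projection is what makes bound constraints cheap to handle: it is a coordinate-wise clip, with no linear algebra beyond the unconstrained step.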

Although the examples thus far have involved linear constraints, the chief nonlinearity of an optimization problem often appears in the constraints. The feasible region then has curved boundaries. A problem with nonlinear constraints may have local optima, even if the objective function has only one unconstrained optimum. Consider a problem with a quadratic objective function and the feasible region shown in Figure 4.8. The problem has local optima at the two points a and b because no point of the feasible region in the immediate vicinity of either point yields a smaller value of f. [Pg.120]

Dennis, J.E., and R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1983, Chapter 2. [Pg.176]

As mentioned earlier, nonlinear objective functions are sometimes nonsmooth due to the presence of functions like abs, min, max, or if-then-else statements, which can cause derivatives, or the function itself, to be discontinuous at some points. Unconstrained optimization methods that do not use derivatives are often able to solve nonsmooth NLP problems, whereas methods that use derivatives can fail. Methods employing derivatives can get stuck at a point of discontinuity, but the function-value-only methods are less affected. For smooth functions, however, methods that use derivatives are both more accurate and faster, and their advantage grows as the number of decision variables increases. Hence, we now turn our attention to unconstrained optimization methods that use only first partial derivatives of the objective function. [Pg.189]
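A simple representative of the derivative-free methods described above is compass (coordinate) search: probe a fixed step in each coordinate direction and shrink the step when no probe improves. Applied to a nonsmooth absolute-value objective (assumed for illustration), where the gradient is undefined at the minimizer, it converges without any derivative information:

```python
def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=100000):
    # Derivative-free compass search: try +/- step along each axis;
    # halve the step whenever no trial point improves the objective.
    x = list(x0)
    fx = f(x)
    while step > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x

# Nonsmooth illustrative objective: kink (and minimizer) at (1, -2)
f_abs = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
```

A gradient-based method applied to `f_abs` can chatter across the kink, whereas the function-value-only search simply shrinks its step as it closes in, which is exactly the robustness trade-off the snippet describes.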

In nonlinear programming problems, optimal solutions need not occur at vertices and can occur at points with positive degrees of freedom. It is possible to have no active constraints at a solution, for example in unconstrained problems. We consider nonlinear problems with constraints in Chapter 8. [Pg.229]

Unconstrained optimization: nonlinear regression of VLE data (12.3); minimum work of compression (13.2). [Pg.416]








© 2024 chempedia.info