Big Chemical Encyclopedia


Nonlinear/nonlinearity optimization problem

The camera model has a large number of parameters, with high correlation between several of them. The calibration problem is therefore a difficult nonlinear optimization problem, with the well-known problems of unstable behaviour and local minima. In our work, an approach that separates the calibration of the distortion parameters from the calibration of the projection parameters is used to solve this problem. [Pg.486]

When solving such nonlinear optimization problems, it is not desirable to terminate the search at a peak that is grossly inferior to the highest peak. The solution can be checked by repeating the search but starting from a different initial point. [Pg.54]
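The restart strategy described above can be sketched in a few lines. This is a minimal illustration, not taken from the cited source: the multimodal test function, the crude finite-difference gradient descent, and the grid of starting points are all assumptions chosen for the example.

```python
import math

def f(x):
    # A multimodal test function with several local minima ("peaks" inverted).
    return math.sin(3.0 * x) + 0.1 * (x - 1.0) ** 2

def local_search(x0, step=0.01, iters=2000):
    # Crude gradient descent using a central finite-difference derivative;
    # it terminates at whichever local minimum the start point leads to.
    x = x0
    for _ in range(iters):
        g = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * g
    return x

# Repeat the search from several different initial points and keep the best,
# so the run is not terminated at a grossly inferior local optimum.
starts = [-4.0 + 0.5 * i for i in range(21)]   # -4.0, -3.5, ..., 6.0
best = min((local_search(x0) for x0 in starts), key=f)
```

A single start can easily land in a poor basin; comparing the candidates from all starts is the simple consistency check the excerpt recommends.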

Consequently, modeling of a two-phase flow system is subject to both the constraints of the hydrodynamic equations and the constraint of minimizing N. Such modeling is a nonlinear optimization problem. Numerical solution of this mathematical system on a computer yields the eight parameters ... [Pg.572]

If f(x) has a simple closed-form expression, analytical methods yield an exact solution, a closed-form expression for the optimal x, x*. If f(x) is more complex, for example, if it requires several steps to compute, then a numerical approach must be used. Software for nonlinear optimization is now so widely available that the numerical approach is almost always used. For example, the Solver in the Microsoft Excel spreadsheet solves linear and nonlinear optimization problems, and many FORTRAN and C optimizers are available as well. General optimization software is discussed in Section 8.9. [Pg.154]
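As a minimal sketch of the numerical route (the function and starting point are assumptions, not from the text): minimizing f(x) = exp(x) + x² has no closed-form solution, since the stationarity condition exp(x) + 2x = 0 is transcendental, so an iteration such as Newton's method is needed.

```python
import math

def fprime(x):
    # First derivative of f(x) = exp(x) + x**2.
    return math.exp(x) + 2.0 * x

def fsecond(x):
    # Second derivative, used as the Newton scaling.
    return math.exp(x) + 2.0

# Newton's method on f'(x) = 0; converges rapidly from x = 0
# to the minimizer near x = -0.3517.
x = 0.0
for _ in range(20):
    x -= fprime(x) / fsecond(x)
```

An analytical method would stop at the equation exp(x) + 2x = 0; the numerical iteration is what actually produces a usable value for x*.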

As explained in Chapter 9, a branch-and-bound enumeration is nothing more than a search organized so that certain portions of the possible solution set are deleted from consideration. A tree is formed of nodes and branches (arcs). Each branch in the tree represents an added or modified inequality constraint to the problem defined for the prior node. Each node of the tree itself represents a nonlinear optimization problem without integer variables. [Pg.474]
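The tree search described above can be illustrated on a one-variable toy problem (the objective and bounds are assumptions for this sketch, not from the cited chapter): minimize (x - 2.6)² over integer x in [0, 5]. Each node solves a continuous relaxation over a box; each branch adds a tightened bound constraint; nodes whose relaxation cannot beat the incumbent are pruned.

```python
import math

def f(x):
    return (x - 2.6) ** 2

def relax(lo, hi):
    # Node subproblem: the continuous relaxation on [lo, hi]. For this
    # convex 1-D objective the minimizer is the clamped unconstrained one.
    x = min(max(2.6, lo), hi)
    return x, f(x)

best_x, best_val = None, float("inf")
stack = [(0.0, 5.0)]               # each node is a box of bound constraints
while stack:
    lo, hi = stack.pop()
    x, val = relax(lo, hi)
    if val >= best_val:            # prune: relaxation cannot beat incumbent
        continue
    if abs(x - round(x)) < 1e-9:   # relaxation is already integral
        best_x, best_val = round(x), val
    else:                          # branch: add x <= floor(x) / x >= ceil(x)
        stack.append((lo, math.floor(x)))
        stack.append((math.ceil(x), hi))
```

The run visits three nodes: the root relaxation (x = 2.6, fractional), the branch x ≥ 3 (integral incumbent x = 3), and the branch x ≤ 2, which is pruned by bound.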

In general, linear functions, and correspondingly linear optimization methods, can be distinguished from nonlinear optimization problems. The former, constituting the wide field of linear programming with the predominant Simplex algorithm for routine solution [75], shall be excluded here. [Pg.69]

In a strict sense, parameter estimation is the procedure of computing the estimates by locating the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since then the estimates are found by linear regression without the iteration inherent in nonlinear optimization problems. [Pg.143]
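The linear-in-parameters case can be made concrete with a small sketch (the data are synthetic and assumed for illustration): fitting y = a + b·x by least squares reduces to the closed-form normal equations, with no iteration at all.

```python
# Synthetic data, roughly following y = 1 + 2*x with small noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]

# Normal equations for the line y = a + b*x: the estimates come out
# in one shot, unlike the iterative search needed for nonlinear models.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
```

If the response function were nonlinear in a or b, this one-step solution would no longer exist and the extremum of the objective would have to be located iteratively.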

Mixed-integer nonlinear optimization problems of the form (1.1) are encountered in a variety of applications in all branches of engineering, applied mathematics, and operations research. These currently represent very important and active research areas, and a partial list includes ... [Pg.5]

Remark 1 Farkas' theorem has been used extensively in the development of optimality conditions for linear and nonlinear optimization problems. [Pg.23]

This chapter discusses the fundamentals of nonlinear optimization. Section 3.1 focuses on optimality conditions for unconstrained nonlinear optimization. Section 3.2 presents the first-order and second-order optimality conditions for constrained nonlinear optimization problems. [Pg.45]

An unconstrained nonlinear optimization problem deals with the search for a minimum of a nonlinear function f(x) of n real variables x = (x1, x2, ..., xn), and is denoted as... [Pg.45]

Unconstrained nonlinear optimization problems arise in several science and engineering applications ranging from simultaneous solution of nonlinear equations (e.g., chemical phase equilibrium) to parameter estimation and identification problems (e.g., nonlinear least squares). [Pg.45]
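A toy instance of such an unconstrained problem can be sketched as follows (the quadratic objective, step size, and starting point are assumptions for the example): steepest descent on f(x) = x1² + 10·x2², whose minimum is at the origin.

```python
def grad(x):
    # Analytic gradient of f(x) = x[0]**2 + 10*x[1]**2.
    return [2.0 * x[0], 20.0 * x[1]]

# Steepest descent with a fixed step size from an arbitrary start.
x = [5.0, 3.0]
step = 0.05
for _ in range(200):
    g = grad(x)
    x = [xi - step * gi for xi, gi in zip(x, g)]
```

Even this simple run shows the typical behaviour: the well-scaled coordinate converges immediately, while the other contracts geometrically toward the minimizer.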

This section first presents the formulation and basic definitions of constrained nonlinear optimization problems, and introduces the Lagrange function and the Lagrange multipliers along with their interpretation. Subsequently, the Fritz John first-order necessary optimality conditions are discussed, as well as the need for first-order constraint qualifications. Finally, the necessary and sufficient Karush-Kuhn-Tucker conditions are introduced along with the saddle point necessary and sufficient optimality conditions. [Pg.49]
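A tiny numerical check of the Karush-Kuhn-Tucker conditions may help fix ideas (the problem is an assumed textbook-style example, not from the cited section): for minimize x² + y² subject to x + y ≥ 1, the optimum is (0.5, 0.5) with multiplier μ = 1.

```python
# Candidate optimum and multiplier for
#   minimize x**2 + y**2  subject to  g(x, y) = x + y - 1 >= 0.
x, y, mu = 0.5, 0.5, 1.0

grad_f = (2.0 * x, 2.0 * y)   # gradient of the objective at (x, y)
grad_g = (1.0, 1.0)           # gradient of the constraint function g
g = x + y - 1.0               # constraint value (feasible if >= 0)

# KKT ingredients: stationarity of the Lagrangian, complementary
# slackness, dual feasibility (mu >= 0), and primal feasibility.
stationarity = (grad_f[0] - mu * grad_g[0], grad_f[1] - mu * grad_g[1])
complementarity = mu * g
```

All four conditions hold at this point: the Lagrangian gradient vanishes, the constraint is active with μ·g = 0, and both μ and g are nonnegative.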

The Lagrange multipliers in a constrained nonlinear optimization problem have a similar interpretation to the dual variables or shadow prices in linear programming. To provide such an interpretation, we will consider problem (3.3) with only equality constraints that is,... [Pg.52]
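The shadow-price reading can be verified numerically on an assumed equality-constrained example (not the text's problem (3.3) itself): for minimize x² + y² subject to x + y = b, the optimum is x = y = b/2, the optimal value is v(b) = b²/2, and the multiplier λ = b equals the sensitivity dv/db.

```python
def v(b):
    # Optimal value of: minimize x**2 + y**2 subject to x + y = b
    # (attained at x = y = b/2).
    return b * b / 2.0

b = 3.0
lam = b                      # Lagrange multiplier at the optimum: lambda = b
delta = 1e-6
# Finite-difference sensitivity of the optimal value to the
# constraint right-hand side -- the "shadow price".
sensitivity = (v(b + delta) - v(b)) / delta
```

The finite-difference sensitivity matches λ to within the perturbation error, which is exactly the interpretation carried over from dual variables in linear programming.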

The optimality conditions discussed in the previous sections formed the theoretical basis for the development of several algorithms for unconstrained and constrained nonlinear optimization problems. In this section, we will provide a brief outline of the different classes of nonlinear multivariable optimization algorithms. [Pg.68]

Nonlinear optimization problems have two different representations, the primal problem and the dual problem. The relation between the primal and the dual problem is provided by an elegant duality theory. This chapter presents the basics of duality theory. Section 4.1 discusses the primal problem and the perturbation function. Section 4.2 presents the dual problem. Section 4.3 discusses the weak and strong duality theorems, while Section 4.4 discusses the duality gap. [Pg.75]
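The primal-dual relation can be tried out on a one-line convex example (assumed for illustration): for minimize x² subject to 1 - x ≤ 0, the primal optimum is p* = 1, the Lagrangian is L(x, μ) = x² + μ(1 - x), and minimizing over x gives the dual function q(μ) = μ - μ²/4.

```python
def q(mu):
    # Dual function of: minimize x**2 subject to 1 - x <= 0.
    # L(x, mu) = x**2 + mu*(1 - x) is minimized at x = mu/2,
    # giving q(mu) = mu - mu**2 / 4.
    return mu - mu * mu / 4.0

p_star = 1.0                                # primal optimal value
mus = [0.5 * i for i in range(9)]           # mu = 0.0, 0.5, ..., 4.0

weak = all(q(mu) <= p_star + 1e-12 for mu in mus)   # weak duality
d_star = max(q(mu) for mu in mus)                   # attained at mu = 2
```

Every dual value lies below the primal optimum (weak duality), and because the problem is convex the best dual value equals p*, i.e., there is no duality gap here.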

A wide range of nonlinear optimization problems involve integer or discrete variables in addition to the continuous variables. These classes of optimization problems arise from a variety of applications and are denoted as Mixed-Integer Nonlinear Programming MINLP problems. [Pg.109]

Part 1, comprising three chapters, focuses on the fundamentals of convex analysis and nonlinear optimization. Chapter 2 discusses the key elements of convex analysis (i.e., convex sets, convex and concave functions, and generalizations of convex and concave functions), which are very important in the study of nonlinear optimization problems. Chapter 3 presents the first- and second-order optimality conditions for unconstrained and constrained nonlinear optimization. Chapter 4 introduces the basics of duality theory (i.e., the primal problem, the perturbation function, and the dual problem) and presents the weak and strong duality theorems along with the duality gap. Part 1 outlines the basic notions of nonlinear optimization and prepares the reader for Part 2. [Pg.466]

Approaches based on parameter estimation assume that faults lead to detectable changes in physical system parameters. FD can therefore be pursued by comparing estimates of the system parameters with the nominal values obtained under healthy conditions. The operative procedure, originally established in [23], requires an accurate model of the process (including a reliable nominal estimate of the model parameters) and the determination of the relationship between model parameters and physical parameters. Then, an online estimation of the process parameters is performed on the basis of the available measurements. This approach, of course, may prove ineffective when the parameter estimation technique requires the solution of a nonlinear optimization problem. In such cases, reduced-order or simplified mathematical models may be used, at the expense of accuracy and robustness. Moreover, fault isolation can be difficult to achieve, since model parameters cannot always be converted back into the corresponding physical parameters, and thus the influence of each physical parameter on the residuals cannot be easily determined. [Pg.127]
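A deliberately simple, hypothetical sketch of this FD scheme (the model y = k·u, the data, the nominal gain, and the 10% tolerance are all assumptions, not from [23]): estimate a single process gain by least squares and flag a fault when it drifts too far from the healthy nominal value.

```python
# Input/output data collected from the running (possibly faulty) process,
# assumed to obey the static model y = k * u.
us = [1.0, 2.0, 3.0, 4.0]
ys = [1.55, 3.1, 4.65, 6.2]

k_nominal = 2.0   # gain estimated under healthy conditions

# Least-squares estimate of k for the model y = k*u (closed form,
# so no nonlinear optimization is needed in this simple case).
k_hat = sum(u * y for u, y in zip(us, ys)) / sum(u * u for u in us)

# Residual-style decision: relative deviation beyond a 10% tolerance
# is declared a fault.
fault = abs(k_hat - k_nominal) / k_nominal > 0.1
```

Here the estimated gain has dropped to about 1.55, a deviation of over 20% from nominal, so the fault flag is raised; with a nonlinear model the estimation step itself would become the difficulty the excerpt warns about.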

The many preexponential factors, activation energies and reaction order parameters required to describe the kinetics of chemical reactors must be determined, usually from laboratory, pilot plant, or plant experimental data. Ideally, the chemist or biologist has made extensive experiments in the laboratory at different temperatures, residence times and reactant concentrations. From these data, parameters can be estimated using a variety of mathematical methods. Some of these methods are quite simple. Others involve elegant statistical methods to attack this nonlinear optimization problem. A discussion of these methods is beyond the scope of this book. The reader is referred to the textbooks previously mentioned. [Pg.19]
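One of the simpler estimation methods alluded to above can be sketched with synthetic data (the rate constants, A, and Ea below are assumed values, not from any cited experiment): the Arrhenius law k = A·exp(-Ea/(R·T)) is linearized as ln k = ln A - (Ea/R)·(1/T), so the preexponential factor and activation energy follow from an ordinary linear regression.

```python
import math

R = 8.314                         # gas constant, J/(mol K)
A_true, Ea_true = 1.0e7, 50000.0  # assumed "true" kinetic parameters
Ts = [300.0, 320.0, 340.0, 360.0]                       # temperatures, K
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in Ts]  # rate constants

# Linearized Arrhenius fit: regress ln k against 1/T.
xs = [1.0 / T for T in Ts]
ys = [math.log(k) for k in ks]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope = -Ea/R

Ea_est = -slope * R
A_est = math.exp((sy - slope * sx) / n)             # intercept = ln A
```

With noisy data or with reaction orders entering nonlinearly, this shortcut no longer suffices and the full nonlinear optimization treated in the statistical texts becomes necessary.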

Most of the optimization techniques in use today have been developed since the end of World War II. Considerable advances in computer architecture and optimization algorithms have enabled the complexity of problems that are solvable via optimization to steadily increase. Initial work in the field centered on studying linear optimization problems (linear programming, or LP), which is still used widely today in business planning. Increasingly, nonlinear optimization problems (nonlinear programming, or NLP) have become more and more important, particularly for steady-state processes. [Pg.134]

With increased computer storage and speed, the feasible methods for solution of very large (e.g., O(10^5) or more variables) nonlinear optimization problems arising in important applications (macromolecular structure, meteorology, economics) will undoubtedly expand considerably and make possible new orders of resolution. [Pg.64]

This is a nonlinear optimization problem with eight parameters and nine constraints, called energy-minimization multi-scale (EMMS) modeling, from which the parameter vector X and various energy consumptions can be calculated. [Pg.171]

The EMMS model is a nonlinear optimization problem involving eight parameters and nine constraints consisting of both equalities and inequalities ... [Pg.171]

SCF will soon be the O(N^3) operations (e.g., diagonalization). This realization prompted investigation of both the performance of parallel eigensolvers for large processor counts and the adoption of alternative approaches to the SCF nonlinear optimization problem (conjugate gradient and full-second-order approaches) ... [Pg.252]

The solution of the nonlinear optimization problem (PIO) gives us a lower bound on the objective function for the flowsheet. However, the cross-flow model may not be sufficient for the network, and we need to check for reactor extensions that improve our objective function beyond those available from the cross-flow reactor. We have already considered nonisothermal systems in the previous section. However, for simultaneous reactor energy synthesis, the dimensionality of the problem increases with each iteration of the algorithm in Fig. 8 because the heat effects in the reactor affect the heat integration of the process streams. Here, we check for CSTR extensions from the convex hull of the cross-flow reactor model, in much the same spirit as the illustration in Fig. 5, except that all the flowsheet constraints are included in each iteration. A CSTR extension to the convex hull of the cross-flow reactor constitutes the addition of the following terms to (PIO) in order to maximize (2) instead of ... [Pg.279]


See other pages where Nonlinear/nonlinearity optimization problem is mentioned: [Pg.272]    [Pg.46]    [Pg.53]    [Pg.526]    [Pg.542]    [Pg.105]    [Pg.5]    [Pg.68]    [Pg.68]    [Pg.69]    [Pg.70]    [Pg.109]    [Pg.336]    [Pg.467]    [Pg.41]    [Pg.35]    [Pg.8]    [Pg.550]    [Pg.453]    [Pg.35]    [Pg.569]    [Pg.255]    [Pg.731]    [Pg.731]    [Pg.519]   
See also in source #XX -- [Pg.115]





© 2024 chempedia.info