Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Optimization problems function characteristics

Combinatorial. Combinatorial methods express the synthesis problem as a traditional optimization problem that can then be solved using powerful techniques that have been known for some time. These may use total network cost directly as an objective function but do not exploit the special characteristics of heat-exchange networks in obtaining a solution. Much of the early work in heat-exchange network synthesis was based on exhaustive search or combinatorial development of networks. This work has not proven useful because, even for a typical ten-process-stream example problem, the alternative sets of feasible matches number ca. 1.55 × 10 without stream splitting. [Pg.523]

In this text we discuss optimization problems, based on the behavior of physical systems, that have a complicated objective function or constraints; for these problems some optimization procedures may be inappropriate and sometimes misleading. Often optimization problems exhibit one or more of the following characteristics, causing a failure in the calculation of the desired optimal solution ... [Pg.26]

To understand the strategy of optimization procedures, certain basic concepts must be described. In this chapter we examine the properties of objective functions and constraints to establish a basis for analyzing optimization problems. We identify those features that are desirable (and also undesirable) in the formulation of an optimization problem. Both qualitative and quantitative characteristics of functions are described. In addition, we present the necessary and sufficient conditions to guarantee that a supposed extremum is indeed a minimum or a maximum. [Pg.114]

Optimization methods calculate one best future state as the optimal result. Mathematical algorithms, e.g. SIMPLEX or Branch & Bound, are used to solve optimization problems. Optimization problems have a basic structure with an objective function H(X) to be maximized or minimized by varying the decision variable vector X, with X subject to a set of defined constraints Ω, leading to max(min) H(X), X ∈ Ω (Tekin/Sabuncuoglu 2004, p. 1067). Optimization can be classified by a set of characteristics ... [Pg.69]

Linear programming is one of the most commonly applied optimization techniques. LPs are commonly used for production scheduling and resourcing problems. A linear program is a class of optimization problems in which the objective function and constraints are linear. The objective function and constraints of a linear program are convex; therefore, a local optimum is the global optimum. In addition, LPs demonstrate the characteristic wherein the optimum solutions of LPs lie on a constraint... [Pg.137]
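The vertex property mentioned above can be made concrete with a tiny sketch. The objective coefficients and resource limits below are illustrative, not from the text; the point is that enumerating the intersections of constraint boundaries is enough to find the optimum of a small LP, since the optimum always lies on a constraint.

```python
# A minimal sketch (hypothetical data): a 2-variable LP solved by enumerating
# the vertices of the feasible region, illustrating that an LP optimum always
# lies at an intersection of constraints.
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
objective = lambda x, y: 3 * x + 2 * y
# each constraint as (a, b, c), meaning a*x + b*y <= c
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Solve a1*x + b1*y = r1, a2*x + b2*y = r2 by Cramer's rule."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None                      # parallel constraint boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

feasible = lambda x, y: all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(*p)]
best = max(vertices, key=lambda p: objective(*p))
print(best, objective(*best))   # optimum at vertex x = 4, y = 0 with value 12
```

A production solver would use the simplex method or an interior-point method rather than vertex enumeration, which grows combinatorially with the number of constraints.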

An EA works with a set of candidate solutions to the optimization problem. A solution is referred to as an individual, and a set of μ solutions is called the population. Each individual has a fitness value which shows how good the solution is with respect to the objective function. λ new individuals are added to the population by recombination and mutation of existing individuals. The idea is that the new individuals inherit good characteristics from the existing individuals. The λ worst solutions are removed from the population. After several iterations, which are called generations, the algorithm provides a population that comprises good solutions. [Pg.418]
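The generational loop described above can be sketched in a few lines. The objective, population sizes, and mutation scale below are all illustrative choices, not from the text; the structure (recombine, mutate, add the new individuals, discard the worst) follows the description.

```python
# A minimal sketch of the (mu + lambda) scheme described above, minimizing a
# simple quadratic; all names and parameters are illustrative.
import random

random.seed(0)
MU, LAM, GENERATIONS = 10, 5, 60

def fitness(x):                               # objective: minimize (x - 3)^2
    return (x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(MU)]
for _ in range(GENERATIONS):
    offspring = []
    for _ in range(LAM):
        a, b = random.sample(population, 2)
        child = (a + b) / 2.0                 # recombination: midpoint crossover
        child += random.gauss(0.0, 0.5)       # mutation: Gaussian perturbation
        offspring.append(child)
    population += offspring                   # lambda new individuals join
    population.sort(key=fitness)              # rank by fitness
    population = population[:MU]              # drop the lambda worst solutions

print(population[0])                          # best individual, close to 3
```

Because the best individual is never discarded, the best fitness in the population can only improve from one generation to the next (elitist selection).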

In contrast with the use of objective functions such as observability or reliability that had been used previously, Bagajewicz (1997, 2000) formulated a mixed-integer nonlinear program to obtain sensor networks satisfying constraints on residual precision, resilience, and error detectability at minimal cost. A tree enumeration was proposed in which, at each node, the optimization problems for the different characteristics are solved. [Pg.429]

In order to optimize acceptance, subject to constraints on sensory levels, we turn the problem into a straightforward optimization problem: maximize a quadratic function (viz., liking) subject to ingredient constraints on the concentrations and subject to linear constraints (viz., sensory characteristics). [Pg.43]

In this way, a multi-criteria optimization problem is reduced to a single-criterion problem. A disadvantage of this method is that, on the one hand, a significant improvement in one fitness function may mask the deterioration of another fitness function; on the other hand, the choice of the weights already implies knowledge of the characteristics of good solutions. [Pg.1263]
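The weighted-sum reduction can be sketched as follows; the two criteria and the weights are hypothetical, chosen only to show how the choice of weights fixes the trade-off before any optimization is done.

```python
# A minimal sketch of weighted-sum scalarization (illustrative objectives and
# weights): two competing criteria are collapsed into one fitness value.
def f1(x):          # e.g. cost: grows with x
    return x ** 2

def f2(x):          # e.g. quality penalty: grows away from x = 4
    return (x - 4.0) ** 2

def scalarized(x, w1=0.5, w2=0.5):
    # choosing w1 and w2 already encodes a preference between the criteria
    return w1 * f1(x) + w2 * f2(x)

# crude grid search over the single combined criterion
candidates = [i / 100.0 for i in range(-200, 801)]
best = min(candidates, key=scalarized)
print(best)         # minimizer of 0.5*x^2 + 0.5*(x-4)^2, i.e. x = 2
```

With equal weights the compromise lands midway between the two individual minimizers (x = 0 and x = 4); shifting the weights moves the solution along the trade-off curve, which is exactly the prior knowledge the text warns about.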

The efficient and accurate solution of the optimization problem depends not only on the size of the problem, in terms of the number of constraints and design variables, but also on the characteristics of the objective function and constraints. When both the objective function and the constraints are linear functions of the design variables, the problem is known as an LP problem. Quadratic programming (QP) concerns the minimization or maximization of a quadratic objective function that is linearly constrained. For both the LP and QP problems, reliable solution procedures are readily available. More difficult to solve is the NLP problem, in which the objective function and constraints may be nonlinear functions of the design variables. A solution of the NLP problem generally requires an iterative procedure to establish a direction of search at each major iteration. This is usually achieved by the solution of an LP, a QP, or an unconstrained subproblem. [Pg.366]
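The iterative idea behind NLP solvers can be illustrated with the simplest possible search direction. The objective and step size below are hypothetical; real SQP codes solve a QP subproblem at each major iteration to pick the direction, whereas this sketch just steps along the negative gradient.

```python
# A minimal sketch of the iterative structure of NLP solution (hypothetical
# unconstrained objective): each major iteration establishes a search
# direction, here simply the negative gradient.
def f(x, y):                          # nonlinear objective
    return (x - 1.0) ** 2 + 10.0 * (y + 2.0) ** 2

def grad(x, y):
    return (2.0 * (x - 1.0), 20.0 * (y + 2.0))

x, y, step = 0.0, 0.0, 0.04
for _ in range(500):                  # major iterations
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy

print(round(x, 4), round(y, 4))       # approaches the minimizer (1, -2)
print(round(f(x, y), 8))              # objective value near zero
```

The contrast with the LP/QP case is that nothing here terminates in a finite, predictable number of algebraic steps; convergence depends on the objective's curvature and the step-size choice.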

The methods for solving an optimization task depend on the problem classification. Since the maximum of a function f is the minimum of the function −f, it suffices to deal with minimization. The optimization problem is classified according to the type of independent variables involved (real, integer, mixed), the number of variables (one, few, many), the functional characteristics (linear, least squares, nonlinear, nondifferentiable, separable, etc.), and the problem statement (unconstrained, subject to equality constraints, subject to simple bounds, linearly constrained, nonlinearly constrained, etc.). For each category, suitable algorithms exist that exploit the problem's structure and formulation. [Pg.1143]

The preceding set of characteristics and properties of the estimators makes our type of mapping procedures, f, particularly appealing for the kinds of systems that we are especially interested in studying, i.e., manufacturing systems where considerable amounts of data records are available, whose behavior is poorly understood, and for which no accurate first-principles quantitative models exist and no adequate functional-form choices for empirical models can be made a priori. In other situations and application contexts that are substantially different from the above, while much can still be gained by adopting the same problem statements, solution formats, and performance criteria, other mapping and search procedures (statistical, optimization theory) may be more efficient. [Pg.109]

In addition to the elimination of partial solutions on the basis of their lower-bound values, we can provide two mechanisms that operate directly on pairs of partial solutions. These two mechanisms are based on dominance and equivalence conditions. The utility of these conditions comes from the fact that we need not have found a feasible solution to use them, and that the lower-bound values of the eliminated solutions do not have to be higher than the objective function value of the optimal solution. This is particularly important in scheduling problems where one may have a large number of equivalent schedules due to the use of equipment with identical processing characteristics, and many batches with equivalent demands on the available resources. [Pg.282]
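The equivalence idea for identical equipment can be shown with a tiny scheduling sketch. The job data are hypothetical; the point is that two partial schedules whose machine loads form the same multiset are interchangeable, so keeping one representative of each equivalence class prunes the search without needing any lower bound.

```python
# A minimal sketch (hypothetical data): assigning jobs to two identical
# machines to minimize makespan. Partial solutions with the same multiset of
# machine loads are equivalent (the machines are interchangeable), so each
# load pair is stored sorted and duplicates collapse automatically.
jobs = [4, 3, 3, 2, 2]

frontier = {(0, 0)}                   # partial solutions as sorted load pairs
for job in jobs:
    frontier = ({tuple(sorted((a + job, b))) for a, b in frontier} |
                {tuple(sorted((a, b + job))) for a, b in frontier})

best = min(max(loads) for loads in frontier)
print(best)                           # optimal makespan: 7 (splits 7 / 7)
```

Without the sorted-tuple canonical form, the frontier would double at every job; with it, mirror-image schedules on the two identical machines are merged, which is exactly the equivalence condition described above.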

We start with continuous variable optimization and consider in the next section the solution of NLP problems with differentiable objective and constraint functions. If only local solutions are required for the NLP problem, then very efficient large-scale methods can be considered. This is followed by methods that are not based on local optimality criteria: we consider direct search optimization methods that do not require derivatives, as well as deterministic global optimization methods. Following this, we consider the solution of mixed integer problems and outline the main characteristics of algorithms for their solution. Finally, we conclude with a discussion of optimization modeling software and its implementation on engineering models. [Pg.60]







© 2024 chempedia.info