Unconstrained optimization methods [W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986, Chapter 10] can use values of the objective function only, or of its first derivatives, second derivatives, and so on. HyperChem uses first derivative information and, in the Block Diagonal Newton-Raphson case, second derivatives for one atom at a time. HyperChem does not use optimizers that compute the full set of second derivatives (the Hessian) because it is impractical to store the Hessian for macromolecules with thousands of atoms. A future release may make explicit-Hessian methods available for smaller molecules, but at this release only methods that store first derivative information, or the second derivatives of a single atom, are used. [Pg.303]
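The one-atom-at-a-time idea can be sketched as follows; this is a minimal illustration, not HyperChem's actual interface, and the gradient and per-atom 3×3 Hessian blocks are assumed to be supplied by a force field:

```python
import numpy as np

def block_diagonal_newton_step(coords, grad, hess_blocks):
    """One Block Diagonal Newton-Raphson step: each atom's 3x3 Hessian
    block is solved separately, so the full (3N x 3N) Hessian is never
    stored.  coords, grad: arrays of shape (N, 3); hess_blocks: (N, 3, 3)."""
    new_coords = coords.copy()
    for i in range(len(coords)):
        # Newton step for atom i alone: dx_i = -H_i^{-1} g_i
        new_coords[i] -= np.linalg.solve(hess_blocks[i], grad[i])
    return new_coords
```

Only N small 3×3 systems are solved per step, which is why the method scales to thousands of atoms where storing the full Hessian would not.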

Combinatorial. Combinatorial methods express the synthesis problem as a traditional optimization problem, which can then be solved using powerful techniques that have been known for some time. These may use total network cost directly as an objective function but do not exploit the special characteristics of heat-exchange networks in obtaining a solution. Much of the early work in heat-exchange network synthesis was based on exhaustive search or combinatorial development of networks. This work has not proven useful because, even for a typical ten-process-stream example problem, the number of alternative sets of feasible matches is ca. 1.55 × 10 without stream splitting. [Pg.523]

Finding the best solution when a large number of variables are involved is a fundamental engineering activity. The optimal solution is defined with respect to some critical resource, most often the cost (or profit) measured in dollars. For some problems, the optimum may be defined as, e.g., minimum solvent recovery. The calculated variable that is maximized or minimized is called the objective or the objective function. [Pg.78]

Many process simulators come with optimizers that vary an arbitrary set of stream variables and operating conditions to optimize an objective function. Such optimizers start with an initial set of values of those variables, carry out the simulation for the entire flow sheet, determine the steady-state values of all the other variables, compute the value of the objective function, and develop a new guess for the variables of the optimization so as to produce an improvement in the objective function. [Pg.78]
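The outer loop such optimizers run can be sketched in a few lines. Here `run_flowsheet` is an invented toy model standing in for a full flow-sheet simulation, and the cost coefficients are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def run_flowsheet(decision_vars):
    """Placeholder for the flow-sheet simulation: given the decision
    variables, 'converge' the flow sheet and return stream results.
    The algebra below is a toy model, not a real process."""
    reflux, feed_temp = decision_vars
    purity = 1.0 - np.exp(-reflux) * (1.0 + 0.01 * (feed_temp - 350.0) ** 2)
    energy = 2.5 * reflux + 0.02 * feed_temp
    return {"purity": purity, "energy": energy}

def objective(decision_vars):
    """Simulate, then reduce the results to one scalar objective
    (here: energy cost plus a penalty for lost purity)."""
    results = run_flowsheet(decision_vars)
    return results["energy"] + 100.0 * (1.0 - results["purity"])

# The optimizer repeatedly simulates and improves the guess.
result = minimize(objective, x0=[1.0, 360.0], method="Nelder-Mead")
```

Each call to `objective` corresponds to one full pass through the simulator, which is why the number of objective-function evaluations is the usual cost measure for these optimizers.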

There are several mathematical methods for producing new values of the variables in this iterative optimization process. The relation between a simulation and an optimization is depicted in Figure 6. Mathematical methods that provide continual improvement of the objective function in the iterative [Pg.78]

In real-life problems in the process industry, nearly always there is a nonlinear objective function. The gradients determined at any particular point in the space of the variables to be optimized can be used to approximate the objective function at that point as a linear function; similar techniques can be used to represent nonlinear constraints as linear approximations. The linear programming code can then be used to find an optimum for the linearized problem. At this optimum point, the objective can be reevaluated, the gradients can be recomputed, and a new linearized problem can be generated. The new problem can be solved and the optimum found. If the new optimum is the same as the previous one, the computations are terminated. [Pg.79]
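This linearize-solve-repeat scheme (often called successive linear programming) can be sketched on a toy two-variable problem. The objective, its gradient, and the move limit below are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def f(x):
    """Toy nonlinear objective (invented for illustration)."""
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])

x = np.zeros(2)
step = 1.0  # move limit: the linearization is only trusted locally
for _ in range(50):
    g = grad(x)
    # Linearized subproblem: minimize g . d subject to |d_i| <= step
    d = linprog(c=g, bounds=[(-step, step)] * 2).x
    if f(x + d) < f(x):   # accept the step only if the true objective improves
        x = x + d
    else:                 # otherwise shrink the move limit and relinearize
        step *= 0.5
    if step < 1e-6:
        break
```

The move limit is essential: without it the linearized subproblem is unbounded, since a linear function has no interior minimum.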

Sufficient conditions are that any local move away from the optimal point û gives rise to an increase in the objective function. Expand F in a Taylor series locally around the candidate point û up to second-order terms. [Pg.484]
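In symbols (a standard second-order expansion, with û as the candidate point, ∇F its gradient, and H the Hessian of F at û; this notation is assumed here rather than taken from the handbook):

```latex
F(u) \approx F(\hat{u})
  + \nabla F(\hat{u})^{\mathsf{T}}\,(u - \hat{u})
  + \tfrac{1}{2}\,(u - \hat{u})^{\mathsf{T}}\, H(\hat{u})\,(u - \hat{u})
```

At a stationary point the gradient term vanishes, ∇F(û) = 0, so F increases for every local move away from û exactly when H(û) is positive definite.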

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. Use the routine described above for the unconstrained problem, where a succession of quadratic fits is used to move toward the optimal point for an unconstrained problem. This approach is a form of the generalized reduced gradient (GRG) approach to optimizing, one of the better ways to carry out optimization numerically. [Pg.486]

Objective Function This is the quantity for which a minimum or maximum is sought. For a complete manufacturing plant, it is related closely to the economy of the plant. Subsidiary problems may be to optimize conversion, production, selectivity, energy consumption, and so on in terms of temperature, pressure, catalyst, or other pertinent variables. [Pg.705]

Westerterp et al. (1984; see Case Study 4, preceding) conclude, "Thanks to mathematical techniques and computing aids now available, any optimization problem can be solved, provided it is realistic and properly stated." The difficulties of optimization lie mainly in providing the pertinent data and in an adequate construction of the objective function. [Pg.706]

Determine the criterion for optimization and specify the objective function in terms of the above variables together with coefficients. This step provides the performance model (sometimes called the economic model when appropriate). [Pg.742]

FIG. 8-46 Diagram for selection of optimization techniques with algebraic constraints and objective function. [Pg.743]

Formulation of the Objective Function The formulation of objective functions is one of the crucial steps in the application of optimization to a practical problem. You must be able to translate the desired objective into mathematical terms. In the chemical process industries, the objective function often is expressed in units of currency (e.g., U.S. dollars) because the normal industrial goal is to minimize costs or maximize profits subject to a variety of constraints. [Pg.743]
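A currency-based objective typically sums competing cost terms. As an illustration (every coefficient below is invented, not real design data), an annualized-cost objective for choosing an insulation thickness might look like:

```python
def annual_cost(thickness_m):
    """Toy objective in dollars per year: thicker insulation raises the
    annualized capital charge but cuts the heat-loss (energy) cost.
    All coefficients are illustrative assumptions."""
    capital = 400.0 * thickness_m            # annualized capital charge
    energy = 60.0 / (1.0 + 10.0 * thickness_m)  # heat-loss cost falls with thickness
    return capital + energy

# Scan a grid of candidate thicknesses and take the cheapest.
candidates = [0.01 * k for k in range(1, 21)]
best = min(candidates, key=annual_cost)
```

The trade-off between the two terms is what gives the objective an interior minimum; with only one term, the "optimum" would sit at a bound.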

The steepest descent method is quite old and utilizes the intuitive concept of moving in the direction where the objective function changes the most. However, it is clearly not as efficient as the other three. Conjugate gradient utilizes only first-derivative information, as does steepest descent, but generates improved search directions. Newton's method requires second-derivative information but is very efficient, while quasi-Newton retains most of the benefits of Newton's method but utilizes only first-derivative information. All of these techniques are also used with constrained optimization. [Pg.744]
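Three of these four approaches map onto standard library routines (SciPy ships no plain steepest-descent solver, so it is omitted here). A quick comparison on the Rosenbrock function, a common test problem chosen here by assumption, might look like:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])  # standard difficult starting point

# Conjugate gradient and quasi-Newton (BFGS) use only first derivatives;
# Newton-CG additionally requires the Hessian.
cg = minimize(rosen, x0, jac=rosen_der, method="CG")
qn = minimize(rosen, x0, jac=rosen_der, method="BFGS")
nt = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess, method="Newton-CG")
```

All three drive the objective essentially to zero; the interesting comparison in practice is the count of function and derivative evaluations each method needs, reported in each result's `nfev`, `njev`, and `nhev` fields.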

Nonlinear Programming The most general case for optimization occurs when both the objective function and constraints are nonlinear, a case referred to as nonlinear programming. While the ideas behind the search methods used for unconstrained multivariable problems are applicable, the presence of constraints complicates the solution procedure. [Pg.745]
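A minimal nonlinear-programming example, with both the objective and the inequality constraint invented for illustration, can be posed with a sequential-quadratic-programming solver:

```python
from scipy.optimize import minimize

# Minimize (x - 1)^2 + (y - 2)^2 subject to x^2 + y^2 <= 2:
# the unconstrained minimum (1, 2) is infeasible, so the solution
# lies on the constraint boundary.
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
constraint = {"type": "ineq", "fun": lambda v: 2.0 - v[0] ** 2 - v[1] ** 2}

sol = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
               constraints=[constraint])
```

Because the target point lies outside the feasible circle, the optimizer ends on the boundary, which is exactly the complication the text refers to: the active constraints, not just the objective's own curvature, determine the solution.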

© 2019 chempedia.info