Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Objective function methods, optimization

Figure 5 Optimization of the objective function in Modeller. Optimization of the objective function (curve) starts with a random or distorted model structure. The iteration number is indicated below each sample structure. The first approximately 2000 iterations correspond to the variable target function method [82] relying on the conjugate gradients technique. This approach first satisfies sequentially local restraints, then slowly introduces longer-range restraints until the complete objective function is optimized. In the remaining 4750 iterations, molecular dynamics with simulated annealing is used to refine the model [83]. CPU time needed to generate one model is about 2 min for a 250-residue protein on a medium-sized workstation.
Since in this study in vitro dissolution served as the response or objective function for optimizing the level of magnesium stearate, it would appear that the authors of Ref. 8 had a high degree of confidence in this method. The dissolution test method and acceptance criterion in the selected example is fairly common. Its in vivo relevance is assumed by many with a fair degree of confidence, as exemplified by the following perspective expressed in the USP [35] ... [Pg.340]

Dielectric barrier discharge reactor for conversion of methane and CO2 into synthesis gas and C2+ hydrocarbons. Three cases: (a) maximization of methane conversion and C2+ selectivity, (b) maximization of methane conversion and C2+ yield, and (c) maximization of methane conversion and H2 selectivity. Weighted sum of squared objective functions method along with GA. An artificial neural network model of the process was developed based on experimental data, and then used for optimization. Istadi and Amin (2006)... [Pg.45]

If the problem considered has only two objective functions, methods generating a representation of the Pareto optimal set, such as EMO approaches, can be applied because it is simple to visualize the solutions on a plane. However, when the problem has more than two objectives, the visualization is no longer trivial and interactive approaches offer a viable alternative to solve the problem without artificial simplifications. Because interactive methods rely heavily on the preference information specified by the DM, it is important to select a user-friendly method in which the style of specifying preferences is convenient for the DM. In addition, the specific features of the problem to be solved must be taken into consideration. [Pg.181]
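The weighted-sum approach mentioned above can be illustrated with a minimal sketch for a two-objective problem: sweeping the weights and minimizing each scalarized objective traces out an approximation of the Pareto front. The objectives `f1`, `f2` and the brute-force grid minimizer below are illustrative assumptions, not taken from any of the cited studies.

```python
def f1(x):          # first objective, minimized at x = 0
    return x * x

def f2(x):          # second objective, minimized at x = 2
    return (x - 2.0) ** 2

def weighted_sum_minimizer(w1, w2, lo=-1.0, hi=3.0, n=4001):
    """Minimize w1*f1 + w2*f2 by brute-force grid search (kept simple on purpose)."""
    best_x, best_v = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = w1 * f1(x) + w2 * f2(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# Sweep the weights to trace an approximation of the Pareto front.
pareto = []
for k in range(1, 10):
    w1 = k / 10.0
    x_star = weighted_sum_minimizer(w1, 1.0 - w1)
    pareto.append((f1(x_star), f2(x_star)))
```

As the weight on f1 grows, the compromise solution slides along the trade-off curve: f1 improves monotonically while f2 deteriorates, which is exactly the behavior a DM inspects when only two objectives are present.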

There are many efficient methods for disrupting cells to release their intracellular contents. The problem with enzymes is that the method must be sufficiently rough to disrupt or distort the cell envelopes, but gentle enough to preserve activity. This poses a compromise, so the process can be optimized. A suitable objective function for optimization is the amount of active enzyme recovered ... [Pg.71]

Minimization of the time demand or cost of analyses might also constitute an objective function. To optimize an entire analytical procedure, methods of operational research might be needed in addition to the systematic approaches considered in this section. This is especially important in cases where risk assessment is required in connection with the analytical procedure. [Pg.100]

The optimal damper distributions in buildings are found for various objective functions. The weighted sum of amplitudes of the transfer functions of interstorey drifts and the weighted sum of amplitudes of the transfer functions of displacements evaluated at the fundamental natural frequency of the frame with the dampers are most frequently used as the objective function. The optimization problem is solved using the sequential optimization method and the particle swarm optimization method. Several numerical solutions to the considered optimization problem are presented and discussed in detail. [Pg.75]

Minimum-time joint trajectory is a constrained non-linear optimization problem with a single objective function. The optimization procedure used in this work is the non-linear optimization search method with goal programming based on the Modified Hooke and Jeeves Direct Search Method [13]. [Pg.503]
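The Hooke and Jeeves direct search named above combines exploratory moves along each coordinate with pattern moves that extrapolate along a successful direction. The following is a minimal sketch of the classic unconstrained algorithm, not of the modified goal-programming variant of Ref. [13]; the function names and parameters are illustrative.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=100000):
    """Simplified Hooke-Jeeves pattern search for unconstrained minimization."""
    def explore(base, s):
        # Exploratory move: probe each coordinate with +s and -s, keep improvements.
        x = list(base)
        fx = f(x)
        for i in range(len(x)):
            for delta in (s, -s):
                trial = list(x)
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    base = list(x0)
    fbase = f(base)
    for _ in range(max_iter):
        x_new, f_new = explore(base, step)
        if f_new < fbase:
            # Pattern move: extrapolate along the successful direction, then explore.
            pattern = [2 * xn - xb for xn, xb in zip(x_new, base)]
            base, fbase = x_new, f_new
            x_p, f_p = explore(pattern, step)
            if f_p < fbase:
                base, fbase = x_p, f_p
        else:
            # No improvement at this resolution: shrink the step size.
            step *= shrink
            if step < tol:
                break
    return base, fbase

# Example: minimize (x - 1)^2 + (y + 2)^2 starting from the origin.
sol, fval = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

Because it uses only objective function values, the method suits trajectory problems where gradients of the time objective are awkward to obtain.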

Unconstrained optimization methods [W. H. Press, et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986, Chapter 10] can use values of only the objective function, or of first derivatives of the objective function, second derivatives of the objective function, etc. HyperChem uses first derivative information and, in the Block Diagonal Newton-Raphson case, second derivatives for one atom at a time. HyperChem does not use optimizers that compute the full set of second derivatives (the Hessian) because it is impractical to store the Hessian for macromolecules with thousands of atoms. A future release may make explicit-Hessian methods available for smaller molecules, but at this release only methods that store the first derivative information, or the second derivatives of a single atom, are used. [Pg.303]

Combinatorial. Combinatorial methods express the synthesis problem as a traditional optimization problem, which can then be solved using powerful techniques that have been known for some time. These may use total network cost directly as an objective function but do not exploit the special characteristics of heat-exchange networks in obtaining a solution. Much of the early work in heat-exchange network synthesis was based on exhaustive search or combinatorial development of networks. This work has not proven useful because, for only a typical ten-process-stream example problem, the number of alternative sets of feasible matches is about 1.55 × 10 without stream splitting. [Pg.523]

There are several mathematical methods for producing new values of the variables in this iterative optimization process. The relation between a simulation and an optimization is depicted in Figure 6. Mathematical methods that provide continual improvement of the objective function in the iterative... [Pg.78]

This method of optimization is known as the generalized reduced-gradient (GRG) method. The objective function and the constraints are linearized in a piecewise fashion, so that a series of straight-line segments is used to approximate them. Many computer codes are available for these methods. Two widely used ones are the GRGA code (49) and the GRG2 code (50). [Pg.79]

Optimization should be viewed as a tool to aid in decision making. Its purpose is to aid in the selection of better values for the decisions that can be made by a person in solving a problem. To formulate an optimization problem, one must resolve three issues. First, one must have a representation of the artifact that can be used to determine how the artifact performs in response to the decisions one makes. This representation may be a mathematical model or the artifact itself. Second, one must have a way to evaluate the performance—an objective function—which is used to compare alternative solutions. Third, one must have a method to search for the improvement. This section concentrates on the third issue, the methods one might use. The first two items are difficult ones, but discussing them at length is outside the scope of this section. [Pg.483]

No single method or algorithm of optimization exists that can be applied efficiently to all problems. The method chosen for any particular case will depend primarily on (1) the character of the objective function, (2) the nature of the constraints, and (3) the number of independent and dependent variables. Table 8-6 summarizes the six general steps for the analysis and solution of optimization problems (Edgar and Himmelblau, Optimization of Chemical Processes, McGraw-Hill, New York, 1988). You do not have to follow the cited order exactly, but you should cover all of the steps eventually. Shortcuts in the procedure are allowable, and the easy steps can be performed first. Steps 1, 2, and 3 deal with the mathematical definition of the problem: identification of variables, specification of the objective function, and statement of the constraints. If the process to be optimized is very complex, it may be necessary to reformulate the problem so that it can be solved with reasonable effort. Later in this section, we discuss the development of mathematical models for the process and the objective function (the economic model). [Pg.742]

The steepest descent method is quite old and utilizes the intuitive concept of moving in the direction where the objective function changes the most. However, it is clearly not as efficient as the other three. Conjugate gradient utilizes only first-derivative information, as does steepest descent, but generates improved search directions. Newton's method requires second-derivative information but is very efficient, while quasi-Newton retains most of the benefits of Newton's method but utilizes only first-derivative information. All of these techniques are also used with constrained optimization. [Pg.744]
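Steepest descent as described above can be sketched in a few lines: repeatedly step opposite the gradient until the gradient norm is small. The fixed step size and the quadratic test function below are illustrative assumptions; practical codes pair the direction with a line search.

```python
def steepest_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10000):
    """Minimize by stepping opposite the gradient with a fixed step size lr."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # stop when gradient is tiny
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Gradient of the convex quadratic f(x, y) = 1.5x^2 + xy + y^2 - x - y,
# whose unique minimum solves 3x + y = 1, x + 2y = 1, i.e. (0.2, 0.4).
grad = lambda p: [3 * p[0] + p[1] - 1, p[0] + 2 * p[1] - 1]
x_min = steepest_descent(grad, [0.0, 0.0])
```

On ill-conditioned problems the iterates zigzag, which is precisely why conjugate gradient and (quasi-)Newton directions are preferred despite using the same first-derivative information.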

Nonlinear Programming The most general case for optimization occurs when both the objective function and constraints are nonlinear, a case referred to as nonlinear programming. While the ideas behind the search methods used for unconstrained multivariable problems remain applicable, the presence of constraints complicates the solution procedure. [Pg.745]

To overcome the limitations of the database search methods, conformational search methods were developed [95,96,109]. There are many such methods, exploiting different protein representations, objective function terms, and optimization or enumeration algorithms. The search algorithms include the minimum perturbation method [97], molecular dynamics simulations [92,110,111], genetic algorithms [112], Monte Carlo and simulated annealing [113,114], multiple copy simultaneous search [115-117], self-consistent field optimization [118], and an enumeration based on graph theory [119]. [Pg.286]

The random search technique can be applied to constrained or unconstrained optimization problems involving any number of parameters. The solution starts with an initial set of parameters that satisfies the constraints. A small random change is made in each parameter to create a new set of parameters, and the objective function is calculated. If the new set satisfies all the constraints and gives a better value for the objective function, it is accepted and becomes the starting point for another set of random changes. Otherwise, the old parameter set is retained as the starting point for the next attempt. The key to the method is the step that sets the new, trial values for the parameters ... [Pg.206]
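The steps above translate almost line for line into code: perturb the current parameters, keep the trial point only if it is feasible and improves the objective. The step size, iteration count, and example problem below are illustrative assumptions.

```python
import random

def random_search(objective, x0, constraints, step=0.1, iters=5000, seed=0):
    """Random search as described in the text: perturb, keep if feasible and better."""
    rng = random.Random(seed)
    best = list(x0)                      # x0 must already satisfy the constraints
    best_val = objective(best)
    for _ in range(iters):
        # Small random change in each parameter to create a trial set.
        trial = [x + rng.uniform(-step, step) for x in best]
        # Accept only if all constraints hold and the objective improves.
        if all(c(trial) for c in constraints):
            val = objective(trial)
            if val < best_val:
                best, best_val = trial, val
    return best, best_val

# Example: minimize (x - 1)^2 + (y - 2)^2 subject to x >= 0, y >= 0.
obj = lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2
cons = [lambda p: p[0] >= 0, lambda p: p[1] >= 0]
sol, val = random_search(obj, [0.0, 0.0], cons)
```

The method's simplicity is its appeal: it needs no derivatives and handles constraints by plain rejection, at the cost of many objective evaluations.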

Appendix B. Optimal Control Equations for Photodissociation Appendix C. Derivative of the Objective Functional Appendix D. Various Conjugate Gradient Methods... [Pg.43]

Let || · || denote the Euclidean norm and define y_k = g_{k+1} - g_k. Table I provides a chronological list of some choices for the CG update parameter. If the objective function is a strongly convex quadratic, then in theory, with an exact line search, all seven choices for the update parameter in Table I are equivalent. For a nonquadratic objective functional J (the ordinary situation in optimal control calculations), each choice for the update parameter leads to a different performance. A detailed discussion of the various CG methods is beyond the scope of this chapter. The reader is referred to Ref. [194] for a survey of CG methods. Here we only mention briefly that despite the strong convergence theory that has been developed for the Fletcher-Reeves, [195],...
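To make the update-parameter discussion concrete, here is a minimal sketch of nonlinear CG with the Fletcher-Reeves choice beta_k = ||g_{k+1}||^2 / ||g_k||^2, paired with a simple Armijo backtracking line search rather than the exact line search assumed in the quadratic theory. The restart safeguard and the test function are illustrative additions.

```python
def fletcher_reeves(f, grad, x0, tol=1e-6, max_iter=5000):
    """Nonlinear conjugate gradient with the Fletcher-Reeves update parameter."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                         # initial direction: steepest descent
    for _ in range(max_iter):
        gg = dot(g, g)
        if gg < tol * tol:                        # converged: gradient is tiny
            break
        if dot(g, d) >= 0.0:                      # safeguard: restart downhill
            d = [-gi for gi in g]
        # Armijo backtracking line search along d.
        alpha, fx, slope = 1.0, f(x), dot(g, d)
        while (f([xi + alpha * di for xi, di in zip(x, d)])
               > fx + 1e-4 * alpha * slope and alpha > 1e-12):
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = dot(g_new, g_new) / gg             # Fletcher-Reeves beta
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x

# Convex quadratic test: f(x, y) = 2x^2 + xy + 1.5y^2 - x - 2y,
# minimum where 4x + y = 1 and x + 3y = 2, i.e. (1/11, 7/11).
f = lambda p: 2 * p[0] ** 2 + p[0] * p[1] + 1.5 * p[1] ** 2 - p[0] - 2 * p[1]
grad = lambda p: [4 * p[0] + p[1] - 1, p[0] + 3 * p[1] - 2]
x_star = fletcher_reeves(f, grad, [0.0, 0.0])
```

Swapping the single `beta` line for, say, the Polak-Ribiere formula y_k·g_{k+1} / ||g_k||^2 changes only that statement, which is why the choices in Table I are so easy to compare empirically.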

The method above does not account for differences in the profitability of various products. Reinhardt and Rippin formulated an objective function that takes these into account. They proposed to use the Here and Now design with that objective function. The Here and Now design comprises three steps: (1) optimization of the design and operating variables for the worst possible realization of the uncertain parameters ... [Pg.504]

