
Gradient-based search method

When the second-stage decisions are real-valued variables, the value function Q(x) is piecewise-linear and convex in x. However, when some of the second-stage variables are integer-valued, the convexity property is lost. The value function Q(x) is in general non-convex and non-differentiable in x. The latter property prohibits the use of gradient-based search methods for solving (MASTER). [Pg.201]
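For context, a minimal sketch of the second-stage value function in its standard two-stage stochastic-programming form; the symbols q, W, h, T and y follow the usual textbook convention and are assumed here rather than taken from the excerpt above:

```latex
% Standard second-stage value function (notation assumed, not from the source):
%   x : first-stage decisions,  y : second-stage (recourse) decisions.
Q(x) = \min_{y} \left\{ \, q^{\top} y \;:\; W y = h - T x, \; y \ge 0 \, \right\}
% With continuous y, Q(x) is piecewise linear and convex in x; restricting some
% components of y to integer values makes Q(x) non-convex and non-differentiable.
```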

Many analytical optimal placement methods have been proposed, including methods based on the principles of optimal control theory, gradient-based search methods, and an analysis/redesign method. [Pg.36]

Takewaki Method The aim of the Takewaki (1997) method is to minimise an objective function given by the sum of the amplitudes of the interstory drifts of the transfer function, evaluated at the undamped natural frequency of the structure, subject to a constraint on the total amount of added viscous damping. Initially the added damping is uniformly distributed and the optimum distribution is then achieved using a gradient-based search algorithm, that is, the damping distribution... [Pg.38]
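A schematic statement of the optimisation problem described above; the symbols c_i (added damping per story), \delta_i (interstory-drift transfer function), \omega_1 (undamped fundamental frequency) and \bar{W} (total added damping) are introduced here for illustration and do not appear in the excerpt:

```latex
% Schematic form of the Takewaki (1997) damper-placement problem
% (symbols introduced for illustration only):
\min_{c_1,\dots,c_n} \; \sum_{i=1}^{n} \bigl| \delta_i(\omega_1; c) \bigr|
\qquad \text{subject to} \qquad \sum_{i=1}^{n} c_i = \bar{W}, \quad c_i \ge 0,
% where the amplitudes of the interstory-drift transfer functions are
% evaluated at the undamped fundamental natural frequency \omega_1.
```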

Direct search methods use only function evaluations. They search for the minimum of an objective function without calculating derivatives analytically or numerically. Direct methods are based upon heuristic rules which make no a priori assumptions about the objective function. They tend to have much poorer convergence rates than gradient methods when applied to smooth functions. Several authors claim that direct search methods are not as efficient and robust as the indirect or gradient search methods (Bard, 1974; Edgar and Himmelblau, 1988; Scales, 1986). However, in many instances direct search methods have proved to be robust and reliable, particularly for systems that exhibit local minima or have complex nonlinear constraints (Wang and Luus, 1978). [Pg.78]
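As a rough illustration of the convergence-rate contrast noted above, the sketch below compares a derivative-free simplex search with a gradient-based quasi-Newton method on a smooth test function. SciPy and the Rosenbrock function are used purely for illustration; neither is referenced in the excerpt.

```python
# Sketch: direct (derivative-free) search vs. gradient-based search on a
# smooth objective.  The Rosenbrock test function stands in for a generic
# smooth problem; it is not taken from the cited sources.
from scipy.optimize import minimize, rosen, rosen_der

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]

# Direct search: only function evaluations, no derivative information.
direct = minimize(rosen, x0, method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10000})

# Gradient-based search: analytical gradient supplied.
grad = minimize(rosen, x0, method="BFGS", jac=rosen_der)

print(f"Nelder-Mead: f = {direct.fun:.2e} after {direct.nfev} function calls")
print(f"BFGS:        f = {grad.fun:.2e} after {grad.nfev} function calls")
```

On a smooth problem of this kind the gradient-based run typically needs far fewer function evaluations, which is the behaviour the excerpt describes.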

The problem of local minima in function minimization or optimization problems has given rise to the development of a variety of algorithms which are able to seek global minima. Traditional gradient-based optimizers proceed by the selection of fruitful search directions and subsequent numerical one-dimensional optimization along these paths. Such methods are therefore inherently prone to the discovery of local minima in the vicinity of their starting point, as illustrated in Fig. 5.4(b). This property is in fact desirable in... [Pg.122]
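A minimal sketch of the start-point dependence described above; the one-dimensional multi-modal test function and the starting points are chosen here for illustration only.

```python
# Sketch: a gradient-based local optimizer started from different points
# converges to different local minima of a multi-modal function.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative multi-modal objective (not from the cited sources).
    return np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2

for start in (-3.0, -1.0, 0.5, 2.0):
    res = minimize(f, [start], method="BFGS")
    print(f"start {start:+.1f} -> minimum at x = {res.x[0]:+.3f}, f = {res.fun:+.3f}")
```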

As noted in the introduction, energy-only methods are generally much less efficient than gradient-based techniques. The simplex method [9] (not identical with the similarly named method used in linear programming) was used quite widely before the introduction of analytical energy gradients. The intuitively most obvious method is a sequential optimization of the variables (sequential univariate search). As the optimization of one variable affects the minimum of the others, the whole cycle has to be repeated after all variables have been optimized. A one-dimensional minimization is usually carried out by finding the... [Pg.2333]
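The sequential univariate search can be sketched as a cyclic loop of one-dimensional minimizations; the quadratic test "energy" and the use of SciPy's scalar minimizer are assumptions made for illustration, not the procedure of Ref. [9].

```python
# Sketch: sequential univariate (cyclic coordinate) search.  Each variable is
# minimized in turn by a one-dimensional search, and the whole cycle repeats
# because optimizing one variable shifts the optimum of the others.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(x):
    # Illustrative smooth "energy" surface with coupled variables.
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.5 * x[0] * x[1]

x = np.zeros(2)                               # starting point
for cycle in range(50):
    x_old = x.copy()
    for i in range(len(x)):
        def along_coordinate(t, i=i):
            trial = x.copy()
            trial[i] = t
            return energy(trial)
        x[i] = minimize_scalar(along_coordinate).x   # 1-D minimization along coordinate i
    if np.max(np.abs(x - x_old)) < 1e-10:
        break

print(f"converged after {cycle + 1} cycles: x = {x}, E = {energy(x):.6f}")
```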

As an alternative to RSM, simulation responses can be used directly to explore the sample space of the control variables. To this end, many combinatorial optimization approaches have been adapted for simulation optimization. In general, four main classes of methods have shown particular applicability in (multi-objective) simulation optimization: meta-heuristics, gradient-based procedures, random search, and sample path optimization. Meta-heuristics are of particular interest, as they have shown good performance for a wide range of combinatorial optimization problems; commercial simulation software therefore primarily uses these techniques in its simulation optimization routines. Among meta-heuristics, tabu search, scatter search, and genetic algorithms are the most widely used. Table 4.13 provides an overview of all the aforementioned techniques. [Pg.186]
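As one concrete instance of the random-search class mentioned above, the sketch below optimizes a noisy black-box "simulation" response; the simulate function, its parameters and the replication count are illustrative stand-ins, not part of the cited text.

```python
# Sketch: random search over a stochastic simulation response.
import random

def simulate(x1, x2):
    # Stand-in for an expensive simulation run returning a noisy response.
    return (x1 - 3.0) ** 2 + (x2 + 1.0) ** 2 + random.gauss(0.0, 0.1)

def random_search(n_iter=200, n_reps=5, bounds=((-5.0, 5.0), (-5.0, 5.0)), seed=42):
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(n_iter):
        x = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        # Average a few replications to damp the simulation noise.
        y = sum(simulate(*x) for _ in range(n_reps)) / n_reps
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

print(random_search())
```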

More generally, the process optimisation problem is solved using an optimisation solver, which interacts with the process simulator to minimise the objective function. The optimisation solver can be based on any one of many optimisation techniques. One group of possible optimisation techniques is the gradient-search methods. These methods rely on analytic or semi-analytic expressions for the objective function and objective function... [Pg.370]
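A hedged sketch of such a gradient-search loop around a black-box process simulator: when no analytic gradient is available, the optimizer falls back on finite-difference (numerical) gradient estimates. The run_simulator function and its design variables are invented here for illustration.

```python
# Sketch: gradient-search driver wrapped around a black-box "process simulator".
from scipy.optimize import minimize

def run_simulator(design):
    # Stand-in for a flowsheet evaluation returning the objective value
    # (e.g. a cost) for the given design variables.
    reflux_ratio, feed_stage = design
    return (reflux_ratio - 1.8) ** 2 + 0.05 * (feed_stage - 30.0) ** 2

# BFGS estimates the gradient by finite differences when no jac is supplied.
result = minimize(run_simulator, x0=[1.2, 20.0], method="BFGS")
print(result.x, result.fun)
```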

Further methods that have been suggested to optimize the weight parameters are genetic algorithms with real-valued encodings, simulated annealing and swarm searches, which can also be combined with the gradient-based methods for a refinement of the fits. [Pg.348]
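A minimal sketch of the combination mentioned above: a stochastic global stage (here SciPy's dual_annealing, playing the role of simulated annealing) followed by gradient-based refinement of the weight parameters. The two-parameter least-squares fit is a toy stand-in, not a problem from the cited work.

```python
# Sketch: global stochastic search followed by gradient-based refinement
# of "weight" parameters on a toy curve-fitting problem.
import numpy as np
from scipy.optimize import dual_annealing, minimize

rng = np.random.default_rng(0)
xdata = np.linspace(-1.0, 1.0, 50)
ydata = np.tanh(2.0 * xdata) + 0.05 * rng.standard_normal(xdata.size)  # synthetic data

def loss(w):
    # Mean-squared error of a two-parameter model y = w1 * tanh(w0 * x).
    return np.mean((w[1] * np.tanh(w[0] * xdata) - ydata) ** 2)

coarse = dual_annealing(loss, bounds=[(-5.0, 5.0), (-5.0, 5.0)], seed=1)  # global stage
refined = minimize(loss, coarse.x, method="L-BFGS-B")                     # gradient refinement
print("annealing:", coarse.x, "refined:", refined.x)
```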



Related entries: Gradient method, Search methods, Searching methods
