Local optimization problem

The camera model has a large number of parameters, with high correlation between several of them. The calibration problem is therefore a difficult nonlinear optimization problem, with the well-known problems of unstable behaviour and local minima. In our work, an approach that separates the calibration of the distortion parameters from the calibration of the projection parameters is used to solve this problem. [Pg.486]

This code is invoked for the process optimization problem once it is formulated locally as a quadratic problem. The solution from the code is used to arrive at the values of the optimization variables, at which the objective function is re-evaluated and a new quadratic expression is generated for it. The... [Pg.79]
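The loop described here can be sketched in a minimal, unconstrained form: build a local quadratic model of the objective, move to the minimizer of that model, re-evaluate the objective there, and rebuild the model. This is not the book's QP code; the finite-difference model and the test function below are illustrative assumptions.

```python
import numpy as np

def quadratic_model_step(f, x, h=1e-4):
    """Build a local quadratic model of f around x by finite differences and
    return the step that minimizes that model (a Newton-type step)."""
    n = len(x)
    g = np.zeros(n)
    H = np.zeros((n, n))
    fx = f(x)
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        g[i] = (f(x + ei) - f(x - ei)) / (2 * h)
        for j in range(i, n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = H[j, i] = (f(x + ei + ej) - f(x + ei) - f(x + ej) + fx) / h**2
    # Minimizer of the local quadratic model: solve H * dx = -g.
    return np.linalg.solve(H + 1e-8 * np.eye(n), -g)

def successive_quadratic_minimize(f, x0, tol=1e-8, max_iter=50):
    """Solve the quadratic subproblem, move to its solution, re-evaluate the
    objective there, and generate a new quadratic model -- repeated to convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = quadratic_model_step(f, x)
        if np.linalg.norm(dx) < tol:
            break
        x = x + dx
    return x

# Smooth convex test function (illustrative only); converges in a few iterations.
f = lambda x: np.exp(x[0] + x[1]) + x[0]**2 + 2 * x[1]**2
print(successive_quadratic_minimize(f, [0.5, 0.5]))
```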

Local Minimum Point for Unconstrained Problems Consider the following unconstrained optimization problem ... [Pg.484]

One of the goals of Localized Molecular Orbitals (LMO) is to derive MOs which are approximately constant between structurally similar units in different molecules. A set of LMOs may be defined by optimizing the expectation value of a two-electron operator. The expectation value depends on the parameters in eq. (9.19), i.e. this is again a function optimization problem (Chapter 14). In practice, however, the localization is normally done by performing a series of 2 x 2 orbital rotations, as described in Chapter 13. [Pg.227]
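The 2 x 2 rotation procedure can be illustrated with a toy sketch: pairs of orbitals are mixed by a rotation angle chosen to increase a localization measure, and sweeps over all pairs are repeated until no rotation improves it. The Pipek-Mezey-like measure, the orthonormal-basis assumption, and the brute-force angle scan (a real implementation uses the analytic optimal angle) are all simplifications introduced here for illustration.

```python
import numpy as np

def loc_measure(C, atoms):
    """Toy localization measure: sum over orbitals and 'atoms' of the squared
    gross population (orthonormal basis assumed, so a population is a sum of c**2)."""
    atoms = np.asarray(atoms)
    return sum((c[atoms == A] ** 2).sum() ** 2 for c in C for A in np.unique(atoms))

def jacobi_localize(C, atoms, sweeps=10, n_angles=180):
    """Localize MOs by successive 2x2 rotations: for every orbital pair, scan the
    rotation angle and apply the rotation that most increases the measure."""
    C = np.array(C, dtype=float)
    n = C.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                base = loc_measure(C[[i, j]], atoms)
                best_gain, best_t = 0.0, 0.0
                for t in np.linspace(0.0, np.pi / 2, n_angles, endpoint=False):
                    ci = np.cos(t) * C[i] + np.sin(t) * C[j]
                    cj = -np.sin(t) * C[i] + np.cos(t) * C[j]
                    gain = loc_measure([ci, cj], atoms) - base
                    if gain > best_gain:
                        best_gain, best_t = gain, t
                if best_gain > 1e-12:       # apply the best 2x2 rotation for this pair
                    ci = np.cos(best_t) * C[i] + np.sin(best_t) * C[j]
                    cj = -np.sin(best_t) * C[i] + np.cos(best_t) * C[j]
                    C[i], C[j] = ci, cj
    return C

# Two orthonormal, delocalized orbitals over a 4-function basis on two "atoms";
# after localization each orbital is concentrated on a single atom.
C = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, -0.5, -0.5]]
print(np.round(jacobi_localize(C, atoms=[0, 0, 1, 1]), 3))
```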

The space-frequency localization of wavelets has led other researchers as well (Pati, 1992; Zhang and Benveniste, 1992) to consider their use in a NN scheme. In their schemes, however, the determination of the network involves the solution of a complicated optimization problem in which not only the coefficients but also the wavelet scales and positions in the input space are unknown. Such an approach evidently defies the on-line character of the learning problem and renders any structural adaptation procedure impractical. In that case, those networks suffer from all the deficiencies of NNs for which the network structure is a static decision. [Pg.186]

The back-propagation strategy is a steepest-descent (gradient) method, a local optimization technique. Therefore, it also suffers from the major drawback of these methods, namely that it can become locked in a local optimum. Many variants have been developed to overcome this drawback [20-24]. None of these, however, really solves the problem. [Pg.677]
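A one-dimensional sketch of this drawback, assuming a toy double-well function in place of a real network error surface: plain steepest descent simply converges to whichever minimum lies downhill of its starting point.

```python
def f(x):        # toy double-well "error surface" (illustrative only)
    return x**4 - x**2 + 0.2 * x

def dfdx(x):     # its gradient
    return 4 * x**3 - 2 * x + 0.2

def steepest_descent(x, lr=0.05, n_iter=500):
    for _ in range(n_iter):
        x -= lr * dfdx(x)          # step downhill along the negative gradient
    return x

print(steepest_descent(1.0))    # ends near x ~ +0.65, a local (not global) minimum
print(steepest_descent(-1.0))   # ends near x ~ -0.75, the global minimum
```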

The nature of the optimization problem can turn out to be linear or nonlinear depending on the mass transfer model chosen [14]. If a model based on a fixed outlet concentration is chosen, the model turns out to be a linear model (assuming linear cost models are adopted). If the outlet concentration is allowed to vary, as in Figure 26.35a and Figure 26.35b, then the optimization turns out to be a nonlinear optimization with all the problems of local optima associated with such problems. In practice, the optimization is in fact not so difficult as regards the nonlinearity, because it is possible to provide a good initialization to the nonlinear model. If the outlet concentrations from each operation are initially assumed to go to their maximum values, then the problem can be solved by a linear optimization. This usually... [Pg.605]
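The initialize-with-a-linear-model tactic can be sketched generically. This is not the water-network model of the text; the toy cost function, data, and solver choices (scipy's linprog and SLSQP) below are illustrative assumptions: a linearized version of the problem is solved first, and its solution is used as the starting point for the full nonlinear solve.

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Toy nonlinear problem: minimize a cost with a smooth concave "economy of scale" term,
#   min  5*x1 + 3*x2 - 2*sqrt(x1*x2 + 1)   s.t.  x1 + x2 >= 10,  0 <= x1, x2 <= 10.
def cost(x):
    return 5 * x[0] + 3 * x[1] - 2 * np.sqrt(x[0] * x[1] + 1)

# Step 1: linearized version (nonlinear term dropped) solved as an LP.
lp = linprog(c=[5, 3], A_ub=[[-1, -1]], b_ub=[-10],
             bounds=[(0, 10), (0, 10)], method="highs")

# Step 2: nonlinear solve, initialized at the LP solution.
nlp = minimize(cost, lp.x, method="SLSQP",
               bounds=[(0, 10), (0, 10)],
               constraints=[{"type": "ineq", "fun": lambda x: x[0] + x[1] - 10}])
print(lp.x, nlp.x, nlp.fun)   # the NLP solve refines the LP point toward the nonlinear optimum
```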

As shown in Fig. 3-53, optimization problems that arise in chemical engineering can be classified in terms of continuous and discrete variables. For the former, nonlinear programming (NLP) problems form the most general case, and widely applied specializations include linear programming (LP) and quadratic programming (QP). An important distinction for NLP is whether the optimization problem is convex or nonconvex. The latter NLP problem may have multiple local optima, and an important question is whether a global solution is required for the NLP. Another important distinction is whether the problem is assumed to be differentiable or not. [Pg.60]

We start with continuous variable optimization and consider in the next section the solution of NLP problems with differentiable objective and constraint functions. If only local solutions are required for the NLP problem, then very efficient large-scale methods can be considered. This is followed by methods that are not based on local optimality criteria we consider direct search optimization methods that do not require derivatives as well as deterministic global optimization methods. Following this, we consider the solution of mixed integer problems and outline the main characteristics of algorithms for their solution. Finally, we conclude with a discussion of optimization modeling software and its implementation on engineering models. [Pg.60]

Convex Cases of NLP Problems Linear programs and quadratic programs are special cases of (3-85) that allow for more efficient solution, based on application of KKT conditions (3-88) through (3-91). Because these are convex problems, any locally optimal solution is a global solution. In particular, if the objective and constraint functions in (3-85) are linear, then the following linear program (LP)... [Pg.62]
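As a concrete illustration, a small LP can be solved directly; because the problem is convex, the solution returned by the solver is the global optimum. The data and the choice of scipy's linprog as the LP code are illustrative, not taken from the text.

```python
from scipy.optimize import linprog

# Toy LP: maximize x1 + 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x >= 0.
# linprog minimizes, so the objective coefficients are negated.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)   # x ~ [3, 1], objective -5: any local solution of an LP is global
```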

This basic concept leads to a wide variety of global algorithms, with the following features that can exploit different problem classes. Bounding strategies relate to the calculation of upper and lower bounds. For the former, any feasible point or, preferably, a locally optimal point in the subregion can be used. For the lower bound, convex relaxations of the objective and constraint functions are derived. [Pg.66]
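A minimal one-dimensional sketch of this bound-and-prune idea, using sampled points for the upper bound and a simple Lipschitz-based relaxation for the lower bound. The test function and the Lipschitz constant are illustrative assumptions; real codes derive convex relaxations of the actual objective and constraint functions.

```python
import heapq, math

def branch_and_bound_1d(f, a, b, L, tol=1e-3):
    """Toy global minimization of a 1-D function with Lipschitz constant L.
    Upper bound: best (feasible) point sampled so far.  Lower bound on [a, b]:
    min(f(a), f(b)) - L*(b - a)/2, a simple relaxation."""
    best_x, best_f = (a, f(a)) if f(a) <= f(b) else (b, f(b))
    lower = lambda lo, hi: min(f(lo), f(hi)) - L * (hi - lo) / 2
    heap = [(lower(a, b), a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:              # prune: this region cannot beat the incumbent
            continue
        mid = 0.5 * (lo + hi)              # branch at the midpoint
        if f(mid) < best_f:                # new incumbent -> tighter upper bound
            best_x, best_f = mid, f(mid)
        heapq.heappush(heap, (lower(lo, mid), lo, mid))
        heapq.heappush(heap, (lower(mid, hi), mid, hi))
    return best_x, best_f

# A function with several local minima on [-2, 2]; |f'| <= 3.4, so L = 4 is valid.
f = lambda x: math.sin(3 * x) + 0.1 * x * x
print(branch_and_bound_1d(f, -2.0, 2.0, L=4.0))   # global minimum near x = -0.52
```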

By partitioning, one can try to split the problem into a number of smaller problems that may be easier to solve, and then recombine the locally optimal solutions into a global solution. [Pg.275]

The objective function may exhibit many local optima, whereas the global optimum is sought. A solution to the optimization problem may be obtained that is less satisfactory than another solution elsewhere in the region. The better solution may be reached only by initiating the search for the optimum from a different starting point. [Pg.27]

Although the examples thus far have involved linear constraints, the chief nonlinearity of an optimization problem often appears in the constraints. The feasible region then has curved boundaries. A problem with nonlinear constraints may have local optima, even if the objective function has only one unconstrained optimum. Consider a problem with a quadratic objective function and the feasible region shown in Figure 4.8. The problem has local optima at the two points a and b because no point of the feasible region in the immediate vicinity of either point yields a smaller value of f. [Pg.120]
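A small illustration of this effect, assuming a toy quadratic objective and a single nonconvex inequality constraint (not Figure 4.8 itself): a local NLP solver started from two different points returns two different constrained optima.

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic objective with a single unconstrained minimum at (0.2, 0).
f = lambda x: (x[0] - 0.2) ** 2 + x[1] ** 2

# Nonconvex feasible region: the set above the parabola x2 >= 1 - x1**2.
cons = [{"type": "ineq", "fun": lambda x: x[1] - (1 - x[0] ** 2)}]

for x0 in ([1.0, 0.5], [-0.6, 0.7]):
    res = minimize(f, x0, method="SLSQP", constraints=cons)
    print(x0, "->", np.round(res.x, 3), round(res.fun, 3))
# Typically: the first start finds the better optimum near (0.79, 0.38) with f ~ 0.49,
# while the second is trapped at the local optimum near (-0.57, 0.68) with f ~ 1.05.
```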

So far, only techniques that start from some initial point and search locally for an optimum have been discussed. However, most optimization problems of interest will have the complication of multiple local optima. Stochastic search procedures (cf. Section 4.4.4.1) attempt to overcome this problem. Deterministic approaches have to rely on rigorous sampling techniques for the initial configuration and repeated application of the local search method to reliably provide solutions that are reasonably close to globally optimal solutions. [Pg.70]
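A minimal deterministic multi-start sketch along these lines, with a regular grid of starting configurations and a gradient-based local search from each; the test function, grid density, and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def grid_multistart(f, bounds, pts_per_dim=7):
    """Deterministic multi-start: launch a local search from every point of a
    regular grid over the box and keep the best local solution found."""
    axes = [np.linspace(lo, hi, pts_per_dim) for lo, hi in bounds]
    best = None
    for x0 in np.array(np.meshgrid(*axes)).reshape(len(bounds), -1).T:
        res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best

# A function with many local minima; its global minimum is at the origin.
f = lambda x: np.sin(3 * x[0])**2 + np.sin(3 * x[1])**2 + 0.1 * (x[0]**2 + x[1]**2)
best = grid_multistart(f, [(-2.0, 2.0), (-2.0, 2.0)])
print(best.x, best.fun)    # close to (0, 0) with an objective value near 0
```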

Figure 11.1 The basic problem of an optimization problem with a one-dimensional function having more than one minimum. The slope (gradient) can give useful information concerning which direction to seek the minimum and which point to try next. Extrema are characterized by a gradient of zero, so methods that rely solely on this information will also halt in a local minimum.

In a strict sense, parameter estimation is the procedure of computing the estimates by locating the extremum point of an objective function. A further advantage of the least squares method is that this step is well supported by efficient numerical techniques. Its use is particularly simple if the response function (3.1) is linear in the parameters, since the estimates are then found by linear regression, without the iteration inherent in nonlinear optimization problems. [Pg.143]
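For the linear-in-parameters case, the estimates indeed require no iteration; a minimal sketch with hypothetical data for a straight-line response y ≈ θ0 + θ1·x, using a single linear-algebra solve:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])          # hypothetical measurements
X = np.column_stack([np.ones_like(x), x])         # design matrix [1, x]
theta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(theta)   # intercept and slope estimates (roughly 1.1 and 2.0 for these data)
```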

In the previous section we discussed the primal and master problems for the GBD. The primal problem is a (linear or) nonlinear programming (NLP) problem that can be solved with available local NLP solvers (e.g., MINOS 5.3). The master problem, however, consists of outer and inner optimization problems, and approaches towards attaining its solution are discussed in the following. [Pg.122]

Remark 1 If no approximation is introduced in the PFR model, then the mathematical model will consist of both algebraic and differential equations with their related boundary conditions (Horn and Tsai, 1967; Jackson, 1968). If, in addition, local mixing effects are considered, then binary variables need to be introduced (Ravimohan, 1971), and as a result the mathematical model will be a mixed-integer optimization problem with both algebraic and differential equations. Note, however, that at present no algorithmic procedures exist for solving this class of problems. [Pg.413]

