
Series-optimization problems

We shall now describe a much more efficient way of analyzing the series-optimization problem. This method was conceived by Bellman, who called it dynamic programming (B2). [Pg.297]

The general constrained optimization problem can be considered as minimizing a function of n variables, F(x), subject to a series of m constraints of the form Ci(x) = 0. In the penalty function method, additional terms of the form αiCi²(x), αi > 0, are formally added to the original function, thus... [Pg.2347]
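
A minimal sketch of this quadratic penalty idea, assuming a made-up objective F and a single equality constraint C (neither comes from the source); each unconstrained subproblem is solved with scipy.optimize.minimize, and the penalty weight alpha is increased between solves:

```python
import numpy as np
from scipy.optimize import minimize  # standard unconstrained minimizer

def F(x):                      # hypothetical objective
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def C(x):                      # hypothetical constraint C(x) = 0
    return x[0] + x[1] - 1.0

def penalized(x, alpha):
    # F(x) + alpha * C(x)^2: constraint violation is penalized
    return F(x) + alpha * C(x)**2

x = np.zeros(2)
for alpha in [1.0, 10.0, 100.0, 1000.0]:
    # Re-solve the unconstrained subproblem with a growing penalty; the
    # minimizers approach the constrained solution from outside the
    # feasible region.
    x = minimize(penalized, x, args=(alpha,)).x
print(x)  # -> approaches the constrained minimum (1.0, 0.0)
```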

One of the goals of Localized Molecular Orbitals (LMO) is to derive MOs which are approximately constant between structurally similar units in different molecules. A set of LMOs may be defined by optimizing the expectation value of a two-electron operator. The expectation value depends on the parameters in eq. (9.19), i.e. this is again a function optimization problem (Chapter 14). In practice, however, the localization is normally done by performing a series of 2 x 2 orbital rotations, as described in Chapter 13. [Pg.227]
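
As an illustration of the 2 x 2 rotation strategy, the sketch below performs Jacobi-type sweeps over orbital pairs, numerically maximizing a stand-in localization measure (sum of fourth powers of the coefficients, a made-up surrogate); real schemes such as Boys or Pipek-Mezey maximize specific operator expectation values instead:

```python
import numpy as np

def loc(C):
    # Hypothetical localization measure: larger when each orbital
    # concentrates its weight on few basis functions.
    return np.sum(C**4)

def localize(C, sweeps=10, ngrid=90):
    C = C.copy()
    for _ in range(sweeps):
        n = C.shape[1]
        for i in range(n):
            for j in range(i + 1, n):
                # Scan the 2x2 rotation angle and keep the best one;
                # analytic angle formulas exist, but a grid scan keeps
                # the sketch simple. Rotations preserve orthonormality.
                best, best_t = loc(C), 0.0
                for t in np.linspace(-np.pi/4, np.pi/4, ngrid):
                    ci = np.cos(t)*C[:, i] + np.sin(t)*C[:, j]
                    cj = -np.sin(t)*C[:, i] + np.cos(t)*C[:, j]
                    trial = C.copy()
                    trial[:, i], trial[:, j] = ci, cj
                    if loc(trial) > best:
                        best, best_t = loc(trial), t
                ci = np.cos(best_t)*C[:, i] + np.sin(best_t)*C[:, j]
                cj = -np.sin(best_t)*C[:, i] + np.cos(best_t)*C[:, j]
                C[:, i], C[:, j] = ci, cj
    return C
```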

Like penalty methods, barrier methods convert a constrained optimization problem into a series of unconstrained ones. The optimal solutions to these unconstrained subproblems are in the interior of the feasible region, and they converge to the constrained solution as a positive barrier parameter approaches zero. This approach contrasts with the behavior of penalty methods, whose unconstrained subproblem solutions converge from outside the feasible region. [Pg.291]
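
A minimal log-barrier sketch under assumptions analogous to the penalty example above (hypothetical objective and one inequality constraint g(x) >= 0); here the iterates stay strictly inside the feasible region and approach the constrained solution as the barrier parameter mu shrinks:

```python
import numpy as np
from scipy.optimize import minimize

def F(x):
    return (x[0] - 2.0)**2

def g(x):                      # feasible region: x <= 1, i.e. g(x) >= 0
    return 1.0 - x[0]

def barrier_obj(x, mu):
    if g(x) <= 0:              # infinite value outside the interior
        return np.inf
    return F(x) - mu * np.log(g(x))

x = np.array([0.0])            # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 1e-4]:
    # Each unconstrained subproblem has its optimum in the interior;
    # as mu -> 0 the solutions converge to the constrained one.
    x = minimize(barrier_obj, x, args=(mu,), method="Nelder-Mead").x
print(x)  # -> approaches the constrained minimum x = 1 from inside
```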

To apply the procedure, the nonlinear constraints are linearized by a Taylor series expansion, and an optimization problem is solved to find the solution, d, that minimizes a quadratic objective function subject to the linear constraints. The QP subproblem is formulated as follows ... [Pg.104]
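
Since the QP subproblem itself is not reproduced in the excerpt, the sketch below solves a generic equality-constrained QP step of the kind an SQP method uses, directly via its KKT linear system; B, g, A, and c are placeholder data, not values from the source:

```python
import numpy as np

def qp_step(B, g, A, c):
    # Minimize 0.5 d'Bd + g'd subject to A d + c = 0.
    # KKT conditions: [B A'; A 0] [d; lam] = [-g; -c]
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, -c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]              # search direction d

B = np.eye(2)                   # approximate Hessian of the Lagrangian
g = np.array([1.0, -2.0])       # objective gradient at current point
A = np.array([[1.0, 1.0]])      # Jacobian of linearized constraints
c = np.array([0.5])             # constraint value (violation)
print(qp_step(B, g, A, c))      # -> [-1.75, 1.25]
```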

When is an experiment, or a series of experiments, optimal? The answer to this often-asked question (129) is not unambiguous because, as in most optimization problems, it involves multiple and mutually contradictory criteria, such as... [Pg.452]

Modeled relationships can take the form of a step response, impulse response, state-space representation, or a neural network (see Section 2.6.17). If a linear form is desired, the model is usually linearized around some operating point. Another option is to produce a series of linear models, each representing a specific operating condition (usually load level). The obtained model can be used for solving a static optimization problem to find the optimal operating point. The "optimal" criterion can be user-selectable. [Pg.147]
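
A small sketch of the linearization step, assuming a hypothetical nonlinear steady-state model f and a forward-difference Jacobian; one such linear model per operating condition could feed the static optimization described above:

```python
import numpy as np

def f(u):                       # hypothetical nonlinear plant model
    return np.array([u[0]**2 + u[1], np.exp(0.1 * u[1])])

def linearize(f, u0, eps=1e-6):
    y0 = f(u0)
    J = np.zeros((len(y0), len(u0)))
    for j in range(len(u0)):
        du = np.zeros_like(u0)
        du[j] = eps
        J[:, j] = (f(u0 + du) - y0) / eps   # forward difference
    return y0, J                # local model: y ~ y0 + J (u - u0)

u0 = np.array([1.0, 2.0])       # operating point (e.g. one load level)
y0, J = linearize(f, u0)
print(y0, J)
```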

The system identification step in the core-box modeling framework has two major sub-steps: parameter estimation and model quality analysis. The parameter estimation step is usually solved as an optimization problem that minimizes a cost function that depends on the model's parameters. One choice of cost function is the sum of squares of the residuals, yi(t) − ŷi(t; p). However, one usually needs to put different weights, wi(t), on the different samples, and additional information that is not part of the time series is often added as extra terms k(p). These extra terms are large if the extra information is violated by the model, and small otherwise. A general least-squares cost function, V(p), is thus of the form... [Pg.126]
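
The cost function itself is truncated in the excerpt; the sketch below shows one plausible form consistent with the description (per-sample weights on squared residuals plus extra penalty terms), with illustrative names (y_obs, y_model, w, extra_terms) that are not from the source:

```python
import numpy as np

def cost(p, t, y_obs, y_model, w, extra_terms):
    resid = y_obs - y_model(t, p)           # y_i(t) - yhat_i(t; p)
    V = np.sum(w * resid**2)                # weighted sum of squares
    V += sum(k(p) for k in extra_terms)     # large if extra info violated
    return V

# Example: fit y = p0 * exp(-p1 * t), with a soft prior that p1 ~ 0.5
t = np.linspace(0.0, 5.0, 50)
y_obs = 2.0 * np.exp(-0.5 * t)              # synthetic time series
y_model = lambda t, p: p[0] * np.exp(-p[1] * t)
w = np.ones_like(t)                         # uniform weights here
extra = [lambda p: 10.0 * (p[1] - 0.5)**2]  # one extra term k(p)
print(cost(np.array([2.0, 0.5]), t, y_obs, y_model, w, extra))  # ~0
```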

In this problem, there are 3 outer loop decision variables: N and the recovery of component 1 from each mixture (Re1,D1,B0 and Re1,D2,B0). Two time intervals for reflux ratio were used for each distillation task, giving 4 optimisation variables in each inner loop optimisation and making a total of 8 inner loop optimisation variables. A series of problems was solved using different allocation times to each mixture, to show that the optimal design and operation are indeed affected by such allocation. A simple dynamic model (Type III) was used, based on constant relative volatilities but incorporating detailed plate-to-plate calculations (Mujtaba and Macchietto, 1993; Mujtaba, 1997). The input data are given in Table 7.3. [Pg.213]

Univariate optimization is a common way of optimizing simple processes which are affected by a series of mutually independent parameters. For two parameters, such a simple problem is illustrated in figure 5.3a. In this figure a contour plot corresponding to the three-dimensional response surface is shown. The independence of the parameters leads to circular contour lines. If the value of x is first optimized at some constant value of y (line 1), and y is subsequently optimized at the optimum value observed for x, the true optimum is found in a straightforward way, regardless of the initial choice for the constant value of y. For this kind of optimization problem, univariate optimization clearly is an attractive method. [Pg.173]
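
A sketch of this one-parameter-at-a-time procedure on a hypothetical response surface with independent parameters (circular contours, maximum at (3, 2)), using scipy.optimize.minimize_scalar for each univariate step:

```python
from scipy.optimize import minimize_scalar

def response(x, y):
    # Independent parameters: circular contours, maximum at (3, 2)
    return -((x - 3.0)**2 + (y - 2.0)**2)

y_fixed = 0.0                                              # initial constant value of y
x_opt = minimize_scalar(lambda x: -response(x, y_fixed)).x # step 1: optimize x
y_opt = minimize_scalar(lambda y: -response(x_opt, y)).x   # step 2: optimize y
print(x_opt, y_opt)   # -> (3.0, 2.0): the true optimum in a single pass
```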

Partial chemical information in the form of known pure response profiles, such as pure-component reference spectra or pure-component concentration profiles for one or more species, can also be introduced in the optimization problem as additional equality constraints [5, 42, 62, 63, 64]. The known profiles can be set to be invariant along the iterative process. The known profile does not need to be complete to be used. When only selected regions of profiles are known, they can also be set to be invariant, whereas the unknown parts can be left loose. This opens up the possibility of using resolution methods for quantitative purposes, for instance. Thus, data sets analogous to those used in multivariate calibration problems, formed by signals recorded from a series of calibration and unknown samples, can be analyzed. Quantitative information is obtained by resolving the system by fixing the known concentration values of the analyte(s) in the calibration samples in the related concentration profile(s) [65]. [Pg.435]
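
A hedged sketch of this idea in an alternating least-squares resolution of D ≈ C Sᵀ, where one known concentration profile is re-imposed as an equality constraint after every iteration; the two-component system and synthetic data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
C_true = np.abs(rng.random((30, 2)))       # synthetic concentration profiles
S_true = np.abs(rng.random((15, 2)))       # synthetic pure spectra
D = C_true @ S_true.T                      # data matrix

known_c0 = C_true[:, 0]                    # the known pure profile
C = np.abs(rng.random((30, 2)))            # random initial estimate
C[:, 0] = known_c0                         # impose the equality constraint
for _ in range(50):
    # Alternate: spectra given C, then concentrations given S
    S = np.linalg.lstsq(C, D, rcond=None)[0].T
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
    C[:, 0] = known_c0                     # keep the known profile invariant
print(np.linalg.norm(D - C @ S.T))         # residual fit error
```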

The methods just described do not work when there is more than one independent variable. There is certainly a need for techniques which can be extended to problems with many operating variables, most industrial systems being quite complicated. We shall now consider methods which reduce an optimization problem involving many variables to a series of one-dimensional searches. For simplicity we shall discuss optimization of an unknown function y of only two independent variables x1 and x2, indicating later how to extend the techniques to more general problems where possible. [Pg.286]
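
As a sketch of the one-dimensional building block such methods rely on, here is a golden-section search applied to an illustrative function of x1 with x2 held constant (both the function and the bracket are assumptions for the example):

```python
import numpy as np

def golden_section(f, a, b, tol=1e-6):
    # Shrink the bracket [a, b], keeping interior points in the golden ratio
    r = (np.sqrt(5.0) - 1.0) / 2.0          # ~0.618
    c, d = b - r * (b - a), a + r * (b - a)
    while b - a > tol:
        if f(c) < f(d):                     # minimum lies in [a, d]
            b, d = d, c
            c = b - r * (b - a)
        else:                               # minimum lies in [c, b]
            a, c = c, d
            d = a + r * (b - a)
    return 0.5 * (a + b)

x2 = 1.0                                    # hold x2 constant
f = lambda x1: (x1 - 2.0)**2 + x2**2        # hypothetical objective
print(golden_section(f, 0.0, 5.0))          # -> ~2.0
```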

It is usual in series-optimization processes to seek, for a given initial state s1, the set of decisions d1, d2, ..., dM which give the highest total yield y1 + y2 + ... + yM. To show that the initial state s1 is fixed, we shall capitalize it S1. Let y be the total yield, Y its optimal value, and let the optimal decisions be denoted by capital letters D1, D2, ..., DM. The problem is illustrated schematically in the following diagram. [Pg.296]
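
A minimal dynamic-programming sketch of this problem, with a small made-up state space, stage yield y(s, d), and transition T(s, d) (all placeholders); the backward recursion is Bellman's, computing the optimal total yield Y from the fixed initial state S1 and recovering the optimal decisions D1, ..., DM:

```python
M = 3                           # number of stages in the series
states = range(5)
decisions = range(3)

def y(s, d):                    # yield earned at a stage (hypothetical)
    return (s + 1) * d - d * d

def T(s, d):                    # state passed on to the next stage
    return (s + d) % 5

# best[s] = maximal total yield from state s with k stages remaining
best = {s: 0.0 for s in states}
policy = []
for _ in range(M):              # backward recursion over the stages
    new_best, stage_policy = {}, {}
    for s in states:
        vals = {d: y(s, d) + best[T(s, d)] for d in decisions}
        stage_policy[s] = max(vals, key=vals.get)
        new_best[s] = vals[stage_policy[s]]
    best, policy = new_best, [stage_policy] + policy

S1 = 0                          # fixed initial state
print(best[S1])                 # Y: the optimal total yield from S1
s = S1
for stage in policy:            # recover D1, D2, ..., DM forward
    d = stage[s]
    print(d)
    s = T(s, d)
```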

In the last decades, and especially after 1990, several EAs have been proposed for solving multi-objective optimization problems. Surveys of MOEAs can be found in the literature (e.g., Coello Coello, 1999, 2005; Coello Coello et al., 2002; Deb, 2001; Jaimes and Coello Coello, 2008). The main motivation for using MOEAs to solve problems is the fact that they deal simultaneously with a set of possible solutions, allowing several members of the Pareto optimal set to be found in a single run of the algorithm, instead of having to perform a series of separate runs as in the case of traditional programming techniques (Coello Coello, 2005; Miettinen, 1999). In addition, they can easily deal with discontinuities and concave Pareto fronts (Coello Coello, 1999; Coello Coello et al., 2002; Coello Coello, 2005; Deb, 2001). [Pg.344]

A number of optimization problems can be expressed in terms of ordered sets of numbers. The classic example is the traveling salesman problem (TSP). In the TSP, a salesman needs to visit a series of cities, with the constraint that each city must be visited once and only once. The object is to minimize the total distance traveled. An (integer) chromosome describing a trial route among eight cities could be... [Pg.27]
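
For instance, with made-up city coordinates, a permutation chromosome and its fitness (total tour length) might look like the sketch below; the segment-reversal mutation at the end preserves the visit-each-city-once constraint:

```python
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((8, 2))            # random (x, y) positions, illustrative

def tour_length(route):
    # Sum of distances along the route, returning to the start city
    total = 0.0
    for i in range(len(route)):
        a = cities[route[i]]
        b = cities[route[(i + 1) % len(route)]]
        total += np.linalg.norm(a - b)
    return total

route = [0, 3, 1, 6, 2, 7, 4, 5]       # one trial (integer) chromosome
print(tour_length(route))

# A typical mutation for permutation chromosomes: reverse a segment
i, j = sorted(rng.choice(8, size=2, replace=False))
route[i:j+1] = reversed(route[i:j+1])  # 2-opt style segment reversal
print(route, tour_length(route))
```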

The optimal values can be obtained by minimizing the objective function, but this optimization problem is nonlinear. However, the solution can be obtained by solving a series of linear... [Pg.64]
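
One common realization of this idea, offered only as an illustration since the excerpt is truncated, is Gauss-Newton for nonlinear least squares: each iteration linearizes the residuals and solves a linear least-squares subproblem. The residual function below is a hypothetical example:

```python
import numpy as np

def resid(p):                   # residuals of fitting y = exp(-p0 t) + p1
    t = np.linspace(0.0, 2.0, 20)
    y = np.exp(-1.3 * t) + 0.2  # synthetic data with p = (1.3, 0.2)
    return np.exp(-p[0] * t) + p[1] - y

def jac(p, eps=1e-7):           # finite-difference Jacobian of resid
    r0 = resid(p)
    J = np.zeros((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (resid(p + dp) - r0) / eps
    return J

p = np.array([1.0, 0.0])        # initial guess
for _ in range(15):
    J, r = jac(p), resid(p)
    # Linear least-squares subproblem: minimize ||J dp + r||
    dp = np.linalg.lstsq(J, -r, rcond=None)[0]
    p = p + dp
print(p)                        # -> approaches (1.3, 0.2)
```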

Genetic algorithms are typically used on function optimization problems, where they search for the input that yields the maximum (or minimum) value. However, they have also been put to use on a wide variety of problems. These range from game-playing, where the genotype encodes a series of moves, to compiler optimization, in which case each gene is an optimization to be applied to a piece of code. [Pg.129]

This book deals with the solution of nonlinear systems and optimization problems in continuous space. Like its companion books, it proposes a series of robust, high-performance algorithms implemented in the BzzMath library to tackle these multifaceted and notoriously difficult problems. [Pg.516]

We use MMP approaches to analyze primary SAR, selectivities, and anti-target SARs. Results of an MMP analysis are presented in the form of HTML or PDF reports, which show the constant region of the molecular series on the left-hand side, followed by the variable region of each individual compound (cf. Figure 13.6), sorted by the property of interest. Activity can be substituted by any other property that is part of the multiparameter optimization problem, for example, solubility or ADMET endpoints. Data interpretation is facilitated by color coding. [Pg.309]

