Big Chemical Encyclopedia


Optimization problems independent variables

Since S is a function of all the intermediate coordinates, a large-scale optimization problem is to be expected. For illustration purposes consider a molecular system of 100 degrees of freedom. To account for 1000 time points we need to optimize S as a function of 100,000 independent variables. As a result, the use of a large time step is not only a computational benefit but also a necessity for the proposed approach. The use of a small time step to obtain a trajectory with accuracy comparable to that of Molecular Dynamics is not practical for systems with more than a few degrees of freedom. For small time steps, ordinary solution of classical trajectories is the method of choice. [Pg.270]
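The boundary-value flavor of this formulation can be illustrated on a toy problem. The sketch below is not the excerpt's method; it merely shows that making a discretized action stationary over all intermediate coordinates of a 1-D harmonic oscillator (an assumed potential, V = kx²/2) reduces to one equation per time point, so the number of unknowns scales as (degrees of freedom) × (time points):

```python
import numpy as np

# Stationarity of the discretized action for a 1-D harmonic oscillator
# gives one linear equation per interior time point:
#   (2 - (k/m) dt^2) x_i - x_{i-1} - x_{i+1} = 0
# with the two endpoint coordinates held fixed (a boundary-value problem).
m, k = 1.0, 1.0
t_end = 1.0
N = 1000                       # time points -> N - 1 interior unknowns
dt = t_end / N
x0, xN = 0.0, np.sin(t_end)    # endpoints placed on the known analytic path

n = N - 1                      # number of independent variables
A = np.zeros((n, n))
b = np.zeros(n)
diag = 2.0 - (k / m) * dt**2
for i in range(n):
    A[i, i] = diag
    if i > 0:
        A[i, i - 1] = -1.0
    if i < n - 1:
        A[i, i + 1] = -1.0
b[0] = x0                      # fixed endpoints enter the right-hand side
b[-1] = xN

x = np.linalg.solve(A, b)      # the stationary-action trajectory

# Compare with the exact solution x(t) = sin(t) for these endpoints
t = np.linspace(dt, t_end - dt, n)
print(np.max(np.abs(x - np.sin(t))))
```

For an anharmonic potential the stationarity conditions become nonlinear and a large-scale optimizer is needed, which is the regime the excerpt describes.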

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. Use the routine described above for the unconstrained problem, in which a succession of quadratic fits is used to move toward the optimal point. This approach is a form of the generalized reduced gradient (GRG) approach to optimization, one of the better ways to carry out optimization numerically. [Pg.486]
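The GRG machinery itself (basis partitioning and the constrained derivatives) is more than a short sketch can show, but the "succession of quadratic fits" the passage mentions is compact. A minimal version, assuming three points that bracket a minimum, fits a parabola through them and moves toward its vertex:

```python
import math

def parabolic_line_search(f, a, b, c, tol=1e-8, max_iter=50):
    """Successive quadratic fits through three points bracketing a minimum."""
    fa, fb, fc = f(a), f(b), f(c)
    for _ in range(max_iter):
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0:           # points collinear or collapsed; stop
            break
        x = b - 0.5 * num / den          # vertex of the fitted parabola
        fx = f(x)
        # Keep the three points with the lowest function values ...
        pts = sorted([(fa, a), (fb, b), (fc, c), (fx, x)])[:3]
        # ... and re-order them by abscissa for the next fit
        pts.sort(key=lambda p: p[1])
        (fa, a), (fb, b), (fc, c) = pts
        if abs(c - a) < tol:
            break
    return b, fb

xmin, fmin = parabolic_line_search(math.cos, 2.5, 3.0, 3.5)
print(xmin)   # close to pi, the minimizer of cos on this bracket
```

In a GRG code this one-dimensional search would be applied along the reduced-gradient direction rather than to a fixed function of one variable.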

No single method or algorithm of optimization exists that can be applied efficiently to all problems. The method chosen for any particular case will depend primarily on (1) the character of the objective function, (2) the nature of the constraints, and (3) the number of independent and dependent variables. Table 8-6 summarizes the six general steps for the analysis and solution of optimization problems (Edgar and Himmelblau, Optimization of Chemical Processes, McGraw-Hill, New York, 1988). You do not have to follow the cited order exactly, but you should cover all of the steps eventually. Shortcuts in the procedure are allowable, and the easy steps can be performed first. Steps 1, 2, and 3 deal with the mathematical definition of the problem: identification of variables, specification of the objective function, and statement of the constraints. If the process to be optimized is very complex, it may be necessary to reformulate the problem so that it can be solved with reasonable effort. Later in this section, we discuss the development of mathematical models for the process and the objective function (the economic model). [Pg.742]

Indeed, using the Gauss-Newton method with an initial estimate of k(0)=(450, 7), convergence to the optimum was achieved in three iterations with no need to employ Marquardt's modification. The optimal parameter estimates are k1 = 420.2 (±8.68%) and k2 = 5.705 (±24.58%). It should be noted, however, that this type of model can often lead to ill-conditioned estimation problems if the data have not been collected at both low and high values of the independent variable. The convergence to the optimum is shown in Table 17.5, starting with the initial guess k(0)=(1, 1). [Pg.326]
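The Gauss-Newton iteration itself is short. The sketch below is generic, not the book's kinetic model: the two-parameter saturation model, the synthetic data, and the starting guess are all assumptions chosen so the plain (undamped) iteration converges; real ill-conditioned problems would need the Marquardt damping the passage mentions:

```python
import numpy as np

def gauss_newton(residual, jac, k, tol=1e-8, max_iter=50):
    """Plain Gauss-Newton: linearize residuals, solve for the step, repeat."""
    for it in range(max_iter):
        r = residual(k)
        J = jac(k)
        dk, *_ = np.linalg.lstsq(J, -r, rcond=None)   # normal-equations step
        k = k + dk
        if np.linalg.norm(dk) < tol * (1 + np.linalg.norm(k)):
            return k, it + 1
    return k, max_iter

# Hypothetical two-parameter model y = k1 * (1 - exp(-k2 * x)),
# with noise-free data covering both low and high x (as the text advises).
x = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])
k_true = np.array([5.0, 0.7])
y = k_true[0] * (1 - np.exp(-k_true[1] * x))

def residual(k):
    return k[0] * (1 - np.exp(-k[1] * x)) - y

def jac(k):
    return np.column_stack([1 - np.exp(-k[1] * x),          # d r / d k1
                            k[0] * x * np.exp(-k[1] * x)])  # d r / d k2

k_hat, n_iter = gauss_newton(residual, jac, np.array([4.0, 0.6]))
print(k_hat, n_iter)
```

If the data spanned only small x, the two Jacobian columns would become nearly parallel, which is exactly the ill-conditioning the excerpt warns about.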

This optimization method, which represents the mathematical techniques, is an extension of the classic method and was the first, to our knowledge, to be applied to a pharmaceutical formulation and processing problem. Fonner et al. [15] chose to apply this method to a tablet formulation and to consider two independent variables. The active ingredient, phenylpropanolamine HCl, was kept at a constant level, and the levels of disintegrant (corn starch) and lubricant (stearic acid) were selected as the independent variables, X1 and X2. The dependent variables include tablet hardness, friability, volume, in vitro release rate, and urinary excretion rate in human subjects. [Pg.611]

Feasible region for an optimization problem involving two independent variables. The dashed lines mark the sides of the inequality constraints in the plane that belong to the infeasible region. The heavy line shows the feasible region. [Pg.15]

A third strategy can be carried out when the problem has many constraints and many variables. We assume that some variables are fixed and let the remainder of the variables represent degrees of freedom (independent variables) in the optimization procedure. For example, the optimum pressure of a distillation column might occur at the minimum pressure (as limited by condenser cooling). [Pg.20]

For each of the following six problems, formulate the objective function, the equality constraints (if any), and the inequality constraints (if any). Specify and list the independent variables, the number of degrees of freedom, and the coefficients in the optimization problem. [Pg.28]

Examine the following optimization problem. State the total number of variables, and list them. State the number of independent variables, and list a set. [Pg.29]

NF = 0: The problem is exactly determined. If NF = 0, the number of independent equations is equal to the number of process variables, and the set of equations may have a unique solution, in which case the problem is not an optimization problem. For a set of linear, independent equations, a unique solution exists. If the equations are nonlinear, there may be no real solution or there may be multiple solutions. [Pg.66]
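The three outcomes for an exactly determined system are easy to demonstrate numerically (the particular equations below are arbitrary illustrations):

```python
import numpy as np

# Linear, independent equations: exactly one solution.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)                    # unique solution
print(x)

# Nonlinear equation: may have several real solutions ...
# x^3 - 6x^2 + 11x - 6 = 0 factors as (x-1)(x-2)(x-3) = 0.
real_roots = np.roots([1.0, -6.0, 11.0, -6.0])
print(np.sort(real_roots.real))

# ... or none at all: x^2 + 1 = 0 has only complex roots.
complex_roots = np.roots([1.0, 0.0, 1.0])
print(complex_roots)
```

With NF = 0 there is nothing left to optimize; only when NF > 0 do degrees of freedom remain to be chosen by an objective function.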

The model involves four variables and three independent nonlinear algebraic equations; hence one degree of freedom exists. The equality constraints can be manipulated using direct substitution to eliminate all variables except one, say the diameter, which would then be the independent variable. The other three variables would be dependent. Of course, we could equally well select the velocity, or any of the four variables, as the single independent variable. See Example 13.1 for use of this model in an optimization problem. [Pg.69]

Note that all of these objective functions differ from one another only by a multiplicative constant; this constant has no effect on the values of the independent variables at the optimum. For simplicity, we therefore use f1 to determine the optimal values of D and L. Implicit in the problem statement is that a relation exists between volume and length, namely the constraint... [Pg.87]

Would your evaluation change if there were 20 independent variables in the optimization problem? [Pg.217]

Our interest here in posing an optimization problem is to have one or more degrees of freedom left after prespecifying the values of most of the independent variables. Frequently, values are given for the following parameters ... [Pg.445]

To this point we have isolated four variables, D, v, Δp, and f, and have introduced three equality constraints, Equations (d), (e), and (f), leaving 1 degree of freedom (one independent variable). To facilitate the solution of the optimization problem, we eliminate three of the four unknown variables (Δp, v, and f) from the objective function using the three equality constraints, leaving D as the single independent variable. Direct substitution yields the cost equation... [Pg.462]
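Once the substitutions reduce the cost to a function of D alone, any one-dimensional minimizer finishes the job. The cost form and coefficients below are assumptions (capital cost rising with D, pumping cost falling steeply with D), not the text's actual Equation:

```python
from scipy.optimize import minimize_scalar

# Hypothetical pipe-cost model after eliminating v, Δp, and f:
# capital cost grows with diameter, pumping cost falls off sharply.
a, b = 100.0, 1.0                        # assumed cost coefficients
cost = lambda D: a * D + b / D**5

res = minimize_scalar(cost, bounds=(0.1, 5.0), method="bounded")
print(res.x)                             # optimal diameter

# Analytic check: dC/dD = a - 5 b / D^6 = 0  =>  D* = (5 b / a) ** (1/6)
print((5 * b / a) ** (1 / 6))
```

The same pattern (eliminate by substitution, then search in one variable) applies whenever the equality constraints can be solved explicitly.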

The optimization problem for the case of sudden injection of SiH4 involves as independent variables the total gas flow velocities ... [Pg.506]

Although, as explained in Chapter 9, many optimization problems can be naturally formulated as mixed-integer programming problems, in this chapter we will consider only steady-state nonlinear programming problems in which the variables are continuous. In some cases it may be feasible to use binary variables (on-off) to include or exclude specific stream flows, alternative flowsheet topology, or different parameters. In the economic evaluation of processes, in design, or in control, usually only a few (5-50) variables are decision, or independent, variables amid a multitude of dependent variables (hundreds or thousands). The number of dependent variables is in principle (but not necessarily in practice) equivalent to the number of independent equality constraints plus the active inequality constraints in a process. The number of independent (decision) variables comprises the remaining set of variables whose values are unknown. Introducing into the model a specification of the value of a variable, such as T = 400°C, is equivalent to the solution of an independent equation and reduces the total number of variables whose values are unknown by one. [Pg.520]

Optimization problems are mathematical in nature. The first and perhaps the most difficult step is to determine how to mathematically model the system to be optimized (for example, paint mixing, chemical reactor, national economy, environment). This model consists of an objective function, constraints, and decision variables. The objective function is often called the merit or cost function; it is the expression to be optimized, that is, the performance measure. For example, in Fig. 3 the objective function would be the total cost. The constraints are equations that describe the model of the process (for example, mass balances) or inequality relationships (insulation thickness > 0 in the above example) among the variables. The decision variables constitute the independent variables that can be changed to optimize the system. [Pg.134]

A number of steps are involved in the solution of optimization problems, including analyzing the system to be optimized so that all variables are characterized. Next, the objective function and constraints are specified in terms of these variables, noting the independent variables (degrees of freedom). The complexity of the problem may necessitate the use of more advanced optimization techniques or problem simplification. The solution should be checked and the result examined for sensitivity to changes in the model parameters. [Pg.134]
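The final step above, checking the result's sensitivity to the model parameters, can be done by re-optimizing at perturbed parameter values. A minimal sketch with an assumed one-variable objective f(x; p) = (x − p)² + 0.1x⁴, where p plays the role of an uncertain model parameter:

```python
from scipy.optimize import minimize_scalar

def optimum(p):
    """Optimal x of the assumed objective f(x; p) = (x - p)^2 + 0.1 x^4."""
    return minimize_scalar(lambda x: (x - p) ** 2 + 0.1 * x ** 4,
                           bounds=(-10.0, 10.0), method="bounded").x

p0 = 2.0
x_star = optimum(p0)

# Finite-difference sensitivity of the optimum to the model parameter
dp = 1e-2
sens = (optimum(p0 + dp) - optimum(p0 - dp)) / (2 * dp)
print(x_star, sens)
```

A small sensitivity gives confidence in the design; a large one signals that the parameter must be known accurately before the optimum is trusted.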

Then the problem is transformed into one of optimizing a function with respect to the independent variables subjected to some constraints governed by their physical limits. Equations 8, 10 and 11 constitute a typical linear programming problem which can be readily solved by the simplex method (18). An example is the design problem where the residence time is minimized if its specification cannot be met. [Pg.382]
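A linear program of this general shape (a linear objective under linear inequality constraints and bounds) can be set up in a few lines. The numbers below are an invented illustration, not Equations 8, 10, and 11 from the paper, and SciPy's solver uses the HiGHS code rather than the classical simplex implementation the paper cites:

```python
from scipy.optimize import linprog

# Hypothetical design LP: minimize residence time t = 2 x1 + 3 x2
# subject to a throughput specification x1 + x2 >= 4
# and physical limits 0 <= x1 <= 3, 0 <= x2 <= 3.
res = linprog(c=[2.0, 3.0],
              A_ub=[[-1.0, -1.0]], b_ub=[-4.0],   # x1 + x2 >= 4, negated
              bounds=[(0.0, 3.0), (0.0, 3.0)],
              method="highs")
print(res.x, res.fun)
```

Because x1 is cheaper per unit, the solver loads it to its upper bound first (x1 = 3) and makes up the remainder with x2 = 1, for a minimum of 9.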

This problem contains 31 variables and 29 equality constraints (or governing equations), including the objective function. This leaves 2 variables as independent (or decision) variables. For practical reasons, the saturation pressure for steam, P, and the fraction of steam generated in the evaporator that is reused for heating, a.., are selected as the independent variables. A random search technique (26) is adopted to locate the optimal point for each given e. The results are tabulated in Table I, and the trade-off curve is plotted in Figure 3. The relationship between these two objectives is obtained by the least-squares method as... [Pg.314]
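A pure random search over two decision variables is only a few lines; the objective below is an arbitrary smooth stand-in (not the evaporator model), and the bounds are assumed:

```python
import random

def random_search(f, bounds, n_trials=20000, seed=0):
    """Pure random search: sample uniformly inside the bounds, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_trials):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Two decision variables, as in the excerpt; the objective here is an
# arbitrary smooth test function with its minimum at (1.2, 0.3).
f = lambda x: (x[0] - 1.2) ** 2 + 4.0 * (x[1] - 0.3) ** 2
x_best, f_best = random_search(f, [(0.0, 3.0), (0.0, 1.0)])
print(x_best, f_best)
```

Random search needs no derivatives and tolerates constraints that simply reject infeasible samples, which is why it suits black-box process models, at the price of many function evaluations.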

Sometimes, instead of an initial value problem, the mathematical model of a chemical process is a boundary value problem in which values of the dependent variables are specified at different values of the independent variable t. The shooting technique consists of solving an initial value problem, but with an initial value vector a considered as a parameter to estimate (by optimization techniques) so that boundary conditions are satisfied. In this way, a boundary value problem is transformed into an initial value problem. [Pg.294]
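The shooting idea can be shown on a textbook boundary value problem (my own example, y'' = −y with y(0) = 0 and y(π/2) = 1): the unknown initial slope is the parameter, and a root-finder drives the terminal mismatch to zero:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Boundary value problem:  y'' = -y,  y(0) = 0,  y(pi/2) = 1.
# Shooting: treat the unknown initial slope s = y'(0) as a parameter
# and root-find on the terminal mismatch y(pi/2; s) - 1.
def terminal_mismatch(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]],     # y' = v, v' = -y
                    (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - 1.0

s_star = brentq(terminal_mismatch, 0.0, 2.0)
print(s_star)      # exact answer is y'(0) = 1, since y = sin t
```

Each mismatch evaluation is one initial value solve, so the boundary value problem is indeed replaced by a sequence of initial value problems, as the excerpt describes.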

The multiple-minimum problem is a severe handicap of many large-scale optimization applications. The state of the art today is such that for reasonably small problems (30 variables or fewer) suitable algorithms exist for finding all local minima for linear and nonlinear functions. For larger problems, however, many trials are generally required to find local minima, and finding the global minimum cannot be ensured. These features have prompted research in conformational-search techniques independent of, or in combination with, minimization.26 [Pg.16]
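The "many trials" strategy is usually implemented as multistart local minimization. A minimal sketch on an assumed one-dimensional function with two local minima (a tilted double well, not a molecular energy surface):

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_starts=50, seed=0):
    """Local minimization from many random starts; collect distinct minima."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    minima = []
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, bounds=bounds)
        if res.success and not any(np.allclose(res.x, m, atol=1e-3)
                                   for m in minima):
            minima.append(res.x)
    return minima

# A tilted double well: local minima near x = -1 and x = +1
f = lambda x: (x[0] ** 2 - 1.0) ** 2 + 0.1 * x[0]
mins = multistart(f, [(-2.0, 2.0)])
print(sorted(m[0] for m in mins))   # the x < 0 basin holds the global minimum
```

Multistart finds every basin the random starts happen to land in, but, as the excerpt notes, it cannot certify that the best minimum found is the global one.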

In the next two sections we briefly detail the problem. Sections 4 and 5 describe two aspects of optimization algorithms, the line search and the search direction. These sections are somewhat lengthy as befits "art," for there is still a good deal of artistry in optimizing functions that do not explicitly show dependence on their independent variables. In section 4 we give an overview of the various suggested procedures in the context of a normal mode analysis where the "physics" of the situation is somewhat more transparent. [Pg.242]

The methods just described do not work when there is more than one independent variable. There is certainly a need for techniques which can be extended to problems with many operating variables, most industrial systems being quite complicated. We shall now consider methods which reduce an optimization problem involving many variables to a series of one-dimensional searches. For simplicity we shall discuss optimization of an unknown function y of only two independent variables x1 and x2, indicating later how to extend the techniques to more general problems where possible. [Pg.286]
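The simplest such reduction is one-variable-at-a-time (coordinate) search: hold x2 fixed, search in x1, then hold x1 fixed and search in x2, and repeat. A sketch on an assumed quadratic test function:

```python
from scipy.optimize import minimize_scalar

def coordinate_search(f, x, bounds, n_sweeps=30):
    """Reduce a multivariable problem to alternating 1-D searches."""
    x = list(x)
    for _ in range(n_sweeps):
        for i in range(len(x)):
            def f_1d(xi, i=i):          # vary coordinate i, fix the rest
                trial = x.copy()
                trial[i] = xi
                return f(trial)
            x[i] = minimize_scalar(f_1d, bounds=bounds[i],
                                   method="bounded").x
    return x

# Assumed two-variable objective with a mild cross term
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.5 * x[0] * x[1]
x_opt = coordinate_search(f, [0.0, 0.0], [(-3.0, 3.0), (-3.0, 3.0)])
print(x_opt)
```

The cross term makes the coordinate directions interact, so each sweep improves the point only partially; strong interaction between variables is the classic failure mode of one-at-a-time search.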

In recent times, stochastic methods have become frequently used for solving different types of optimization problems [4.54-4.59]. If we consider, for a steady-state process analysis, the optimization problem given schematically in Fig. 4.14, we may ask where stochastic methods fit in such a process. The answer is limited to each particular case in which we identify a normal-type distribution for some or all of the independent variables of the process. When we use a stochastic algorithm to solve an optimization problem, we note that stochastic involvement can be considered in [4.59]... [Pg.255]
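One concrete way normally distributed independent variables enter an optimization is through the expected value of the objective, estimated by Monte Carlo sampling around a candidate operating point. The objective and the standard deviations below are assumptions for illustration:

```python
import numpy as np

# The excerpt supposes a normal distribution for the independent
# variables; a simple stochastic treatment evaluates the expected
# objective by Monte Carlo sampling around each candidate point.
rng = np.random.default_rng(1)

def expected_objective(f, x_mean, x_std, n_samples=5000):
    samples = rng.normal(x_mean, x_std, size=(n_samples, len(x_mean)))
    return f(samples).mean()

# Assumed objective with its deterministic minimum at (1.0, 2.0)
f = lambda X: (X[:, 0] - 1.0) ** 2 + (X[:, 1] - 2.0) ** 2

val = expected_objective(f, np.array([1.0, 2.0]), np.array([0.1, 0.2]))
print(val)   # near 0.1**2 + 0.2**2 = 0.05, the variance contribution
```

Even at the deterministic optimum the expected objective is nonzero; the input variances propagate into the objective, which is why the stochastic and deterministic optima can differ for less symmetric functions.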





