
The Objective Function

An objective function S(θ) is presented here for use in Bayesian estimation of the parameter vector θ in a mathematical model. [Pg.96]

When Y is given instead of θ and σ, we call this function the likelihood. [Pg.97]

To apply Bayes' theorem, we need a prior probability density for the unknowns, θ and σ. Treating θ and σ as independent a priori and p(θ) as uniform over the permitted range of θ, we obtain the joint prior density [Pg.98]

An alternative argument for minimizing S(θ) is to maximize the function ℓ(θ, σ | Y) given in Eq. (6.1-10). This maximum-likelihood approach, advocated by Fisher (1925), gives the same point estimate θ as does the posterior density function in Eq. (6.1-13). The posterior density function is essential, however, for calculating posterior probabilities for regions of θ and for rival models, as we do in later sections of this chapter. [Pg.98]
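The link between the two viewpoints can be made explicit. Under the standard assumption of n independent Gaussian errors with common variance σ² (a sketch; the text's Eq. (6.1-10) may carry weights or multiple responses), the log-likelihood is

$$\ell(\theta, \sigma \mid Y) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{S(\theta)}{2\sigma^2}, \qquad S(\theta) = \sum_{i=1}^{n}\bigl[y_i - f(x_i, \theta)\bigr]^2,$$

so for any fixed σ, maximizing ℓ over θ is the same as minimizing S(θ), which is why the maximum-likelihood and posterior point estimates coincide under a uniform prior.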

The permitted region of θ can take various forms. Our package GREGPLUS uses a rectangular region [Pg.98]

As in algebraic models, the error term accounts for the measurement error as well as for all model inadequacies. In dynamic systems we have the additional complexity that the error terms may be autocorrelated, and in such cases several modifications to the objective function are required. Details are provided in Chapter 8. [Pg.13]

In dynamic systems we may have the situation where a series of runs has been conducted and we wish to estimate the parameters using all the data simultaneously. For example, in a study of isothermal decomposition kinetics, measurements are often taken over time for each run, which is carried out at a fixed temperature. [Pg.13]
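A minimal sketch of such simultaneous estimation (a hypothetical first-order decomposition model with an Arrhenius rate constant; the data values and SciPy usage are illustrative, not the text's):

```python
import numpy as np
from scipy.optimize import least_squares

# Each run: (temperature T in K, sampling times t, measured concentrations c).
runs = [
    (340.0, np.array([0.0, 10.0, 20.0, 40.0]), np.array([1.00, 0.74, 0.55, 0.30])),
    (360.0, np.array([0.0, 10.0, 20.0, 40.0]), np.array([1.00, 0.55, 0.30, 0.09])),
]

def residuals(k, runs):
    """Stack the residuals of every isothermal run so that all data enter
    the objective at once.  Model: c(t) = c0 * exp(-kT * t), with the
    Arrhenius rate constant kT = exp(lnA - E / (R * T))."""
    lnA, E = k          # estimating ln(A) instead of A improves scaling
    R = 8.314
    res = []
    for T, t, c_meas in runs:
        kT = np.exp(lnA - E / (R * T))
        res.append(c_meas - c_meas[0] * np.exp(-kT * t))
    return np.concatenate(res)

# Minimize the pooled sum of squared residuals over both runs at once.
fit = least_squares(residuals, x0=[9.0, 40_000.0], args=(runs,),
                    x_scale=[1.0, 1.0e4])
print(fit.x)  # -> [ln(A), E] for the combined data set
```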

Estimation of parameters present in partial differential equations is a very complex issue. Quite often by proper discretization of the spatial derivatives we transform the governing PDEs into a large number of ODEs. Hence, the problem can be transformed into one described by ODEs and be tackled with similar techniques. However, the fact that in such cases we have a system of high dimensionality requires particular attention. Parameter estimation for systems described by PDEs is examined in Chapter 11. [Pg.13]

What type of objective function should we minimize? This is the question we are always faced with before we can even start the search for the parameter values. In general, the unknown parameter vector k is found by minimizing a scalar function often referred to as the objective function. We shall denote this function as S(k) to indicate its dependence on the chosen parameters. [Pg.13]

The objective function is a suitable measure of the overall departure of the model-calculated values from the measurements. For an individual measurement, the departure from the model-calculated value is represented by the residual e_i. For example, the i-th residual of an explicit algebraic model is... [Pg.13]
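In generic notation (a sketch consistent with the surrounding discussion: ŷ_i is the i-th measurement, f(x_i, k) the model prediction, and Q_i a user-chosen weighting matrix), the residual and a weighted least-squares objective are

$$e_i = \hat{y}_i - f(x_i, k), \qquad S(k) = \sum_{i=1}^{N} e_i^{\mathsf T} Q_i\, e_i.$$

Choosing Q_i as the inverse covariance of the measurement errors recovers the maximum-likelihood weighting discussed above.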


As indicated in Chapter 6, and discussed in detail by Anderson et al. (1978), optimum parameters, based on the maximum-likelihood principle, are those which minimize the objective function... [Pg.67]

For binary vapor-liquid equilibrium measurements, the parameters sought are those that minimize the objective function... [Pg.98]
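A representative maximum-likelihood form in the spirit of Anderson et al. (1978) (a sketch; the source's exact equation is not reproduced here) weights the deviations of the estimated true values (superscript e) from the measured values (superscript m) by the measurement variances:

$$S = \sum_{i=1}^{N}\left[\frac{(P_i^{e}-P_i^{m})^2}{\sigma_P^2} + \frac{(T_i^{e}-T_i^{m})^2}{\sigma_T^2} + \frac{(x_{1i}^{e}-x_{1i}^{m})^2}{\sigma_x^2} + \frac{(y_{1i}^{e}-y_{1i}^{m})^2}{\sigma_y^2}\right].$$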

For liquid-liquid systems, the separations are isothermal and the objective function is one-dimensional, consisting of Equation (7-17). However, the composition dependence of the... [Pg.117]

For bubble and dew-point calculations we have, respectively, the objective functions... [Pg.118]
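In terms of the equilibrium ratios K_i = y_i/x_i, the classical bubble- and dew-point objective functions are (a sketch; the text's own equations may use a logarithmic or otherwise scaled form):

$$f_{\mathrm{bubble}} = \sum_i K_i x_i - 1 = 0, \qquad f_{\mathrm{dew}} = \sum_i \frac{y_i}{K_i} - 1 = 0.$$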

Equations (7-8) and (7-9) are then used to calculate the compositions, which are normalized and used in the thermodynamic subroutines to find new equilibrium ratios. These values are then used in the next Newton-Raphson iteration. The iterative process continues until the magnitude of the objective function, g, is less than a convergence criterion, ε. If initial estimates of x, y, and α are not provided externally (for instance from previous calculations of the same separation under slightly different conditions), they are taken to be... [Pg.121]

In the case of the adiabatic flash, application of a two-dimensional Newton-Raphson iteration to the objective functions represented by Equations (7-13) and (7-14), with Q/F = 0, is used to provide new estimates of α and T simultaneously. The derivatives with respect to α in the Jacobian matrix are found analytically, while those with respect to T are found by finite-difference approximation... [Pg.121]
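A minimal sketch of such a two-dimensional Newton-Raphson iteration (hypothetical residual and derivative callables standing in for Equations (7-13) and (7-14) and their analytic α-derivatives; the forward-difference step in T is an illustrative choice):

```python
import numpy as np

def newton_flash(f, dfda, alpha, T, tol=1.0e-8, max_iter=50, dT=0.01):
    """Two-dimensional Newton-Raphson for the residual pair f(alpha, T) = 0.

    dfda supplies the analytic derivatives of both residuals with respect
    to alpha; the T-derivatives are formed by a forward finite difference,
    mirroring the mixed analytic/numeric Jacobian described in the text."""
    for _ in range(max_iter):
        r = np.asarray(f(alpha, T))                        # [f1, f2]
        if np.linalg.norm(r) < tol:
            return alpha, T
        J = np.empty((2, 2))
        J[:, 0] = dfda(alpha, T)                           # analytic d/d(alpha)
        J[:, 1] = (np.asarray(f(alpha, T + dT)) - r) / dT  # numeric d/dT
        d_alpha, d_T = np.linalg.solve(J, -r)              # Newton correction
        alpha, T = alpha + d_alpha, T + d_T
    raise RuntimeError("Newton-Raphson did not converge")
```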

Liquid-liquid equilibrium separation calculations are superficially similar to isothermal vapor-liquid flash calculations. They also use the objective function of Equation (7-13) in a step-limited Newton-Raphson iteration for α, which is here E/F. However, because of the very strong dependence of equilibrium ratios on phase compositions, a computation as described for isothermal flash processes can converge very slowly, especially near the plait point. (Sometimes 50 or more iterations are required.)... [Pg.124]
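The "step-limited" part of that iteration can be sketched in a few lines (the maximum step size and the bounds keeping α strictly inside (0, 1) are illustrative assumptions, not the source's values):

```python
def step_limited_update(alpha, d_alpha, max_step=0.1, eps=1.0e-6):
    """Apply a Newton correction to alpha = E/F, clamping the step length
    and keeping the phase fraction strictly inside (0, 1)."""
    step = max(-max_step, min(max_step, d_alpha))
    return min(1.0 - eps, max(eps, alpha + step))
```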

Subroutine FUNDR. This subroutine calculates the required derivatives for REGRES by central difference, using EVAL to calculate the objective functions. [Pg.218]
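A minimal sketch of derivatives by central difference in this spirit (a generic stand-in, not the FUNDR source; `func` plays the role of EVAL):

```python
import numpy as np

def central_difference_gradient(func, x, h=1.0e-6):
    """Derivatives of a scalar objective by central differences:
    two objective evaluations per parameter."""
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (func(x + e) - func(x - e)) / (2.0 * h)
    return grad
```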

Bubble-point and dew-point temperatures are calculated iteratively by applying the Newton-Raphson iteration to the objective functions given by Equations (7-23) and (7-24), respectively. [Pg.326]

Secondly, the linearized inverse problem is, as is well known, ill-posed, because it involves the solution of a Fredholm integral equation of the first kind. The solution must be regularized to yield a stable and physically plausible result. In this application, the classical smoothness constraint on the solution [8] does not allow recovery of the discontinuities of the original object function. In our case, we consider notches at the surface of the half-space conductive medium, so the notch shapes involve abrupt contours. This strong local correlation between pixels in each layer of the conductive half-space suggests representing the contrast function (the object function) by a piecewise continuous function. According to previous work that we have already presented [14], we...

The object function we have to estimate is the relative conductivity μ = ... [Pg.331]

Unconstrained optimization methods [W. H. Press et al., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 1986, Chapter 10] can use values of only the objective function, or of first derivatives of the objective function, second derivatives of the objective function, etc. HyperChem uses first-derivative information and, in the Block Diagonal Newton-Raphson case, second derivatives for one atom at a time. HyperChem does not use optimizers that compute the full set of second derivatives (the Hessian) because it is impractical to store the Hessian for macromolecules with thousands of atoms. A future release may make explicit-Hessian methods available for smaller molecules, but at this release only methods that store the first-derivative information, or the second derivatives of a single atom, are used. [Pg.303]
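The per-atom idea can be sketched as follows (a generic illustration, not HyperChem's code; `grad_fn` and `atom_hessian_fn` are hypothetical callables):

```python
import numpy as np

def block_diagonal_newton_step(coords, grad_fn, atom_hessian_fn):
    """One block-diagonal Newton-Raphson step: each atom is moved using
    only its own 3x3 second-derivative block, so the full (3N x 3N)
    Hessian is never stored.

    coords          : (N, 3) array of atomic positions
    grad_fn         : returns the (N, 3) gradient of the energy
    atom_hessian_fn : returns the 3x3 Hessian block for one atom index
    """
    g = grad_fn(coords)
    new = coords.copy()
    for i in range(coords.shape[0]):
        H_i = atom_hessian_fn(coords, i)       # 3x3 block for atom i
        new[i] -= np.linalg.solve(H_i, g[i])   # Newton step for atom i only
    return new
```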

Here and below we emphasize the dependence of the objective functional on δ, because later we shall investigate the convergence of the solutions of problem (2.189) as δ → 0. [Pg.130]

We note that if the crack opening is zero on Γ, i.e. [χ] = 0, the value of the objective functional J_δ(u) is zero. We also assume that near Γ the punch does not interact with the shell. It turns out that in this case the solution χ = (W, w) of problem (2.188) is infinitely differentiable in a neighbourhood of points of the crack. This property is local, so that a zero opening of the crack near the fixed point guarantees infinite differentiability of the solution in some neighbourhood of this point. Here it is undoubtedly necessary to require appropriate regularity of the curvatures and the external forces u. The aim of the following discussion is to justify this fact. At this point the external force u is taken to be fixed. [Pg.131]

Finding the best solution when a large number of variables are involved is a fundamental engineering activity. The optimal solution is with respect to some critical resource, most often the cost (or profit) measured in dollars. For some problems, the optimum may be defined as, e.g., minimum solvent recovery. The calculated variable that is maximized or minimized is called the objective or the objective function. [Pg.78]

Many process simulators come with optimizers that vary any arbitrary set of stream variables and operating conditions and optimize an objective function. Such optimizers start with an initial set of values of those variables, carry out the simulation for the entire flow sheet, determine the steady-state values of all the other variables, compute the value of the objective function, and develop a new guess for the variables for the optimization so as to produce an improvement in the objective function. [Pg.78]

There are several mathematical methods for producing new values of the variables in this iterative optimization process. The relation between a simulation and an optimization is depicted in Figure 6. Mathematical methods that provide continual improvement of the objective function in the iterative... [Pg.78]

When an optimum is reached, the perturbation in any direction would reduce the value of the objective function. Such an optimum, however, does not guarantee a global optimum, and the mountain-climbing process stops at a local optimum. [Pg.79]

In real-life problems in the process industry, nearly always there is a nonlinear objective function. The gradients determined at any particular point in the space of the variables to be optimized can be used to approximate the objective function at that point as a linear function; similar techniques can be used to represent nonlinear constraints as linear approximations. The linear programming code can then be used to find an optimum for the linearized problem. At this optimum point, the objective can be reevaluated, the gradients can be recomputed, and a new linearized problem can be generated. The new problem can be solved and the optimum found. If the new optimum is the same as the previous one, the computations are terminated. [Pg.79]
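A minimal sketch of this successive-linearization loop (hypothetical callables for the objective, the inequality constraints g(x) <= 0, and their gradients; the fixed trust-region bound `step` on each move is an illustrative safeguard the text does not specify):

```python
import numpy as np
from scipy.optimize import linprog

def slp(f_grad, g, g_grad, x0, step=0.5, tol=1e-6, max_iter=50):
    """Successive linear programming: linearize the objective and the
    constraints at the current point, solve the LP for a bounded move,
    and repeat until the point stops changing."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        c = f_grad(x)                       # linearized objective: c @ dx
        A = g_grad(x)                       # rows: constraint gradients
        b = -np.asarray(g(x))               # g(x) + A @ dx <= 0
        bounds = [(-step, step)] * x.size   # trust region on the move dx
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
        if not res.success or np.linalg.norm(res.x) < tol:
            return x                        # same optimum as before: stop
        x = x + res.x
    return x
```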

This method of optimization is known as the generalized reduced-gradient (GRG) method. The objective function and the constraints are linearized in a piecewise fashion, so that a series of straight-line segments are used to approximate them. Many computer codes are available for these methods. Two widely used ones are the GRGA code (49) and the GRG2 code (50). [Pg.79]

Sufficient conditions are that any local move away from the optimal point gives rise to an increase in the objective function. Expand F in a Taylor series locally around the candidate point up to second-order terms: [Pg.484]
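The expansion has the standard form (a sketch in generic notation, with Δx the local move and H the Hessian of F at the candidate point x*):

$$F(x^{*}+\Delta x) \approx F(x^{*}) + \nabla F(x^{*})^{\mathsf T}\Delta x + \tfrac{1}{2}\,\Delta x^{\mathsf T} H(x^{*})\,\Delta x.$$

At a stationary point ∇F(x*) = 0, so the objective increases for every nonzero Δx exactly when H(x*) is positive definite, which is the sufficient condition sought.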

Constrained Derivatives—Equality Constrained Problems. Consider minimizing the objective function F written in terms of n variables z and subject to m equality constraints h(z) = 0, or... [Pg.484]

At any point where the functions h(z) are zero, the Lagrange function equals the objective function. [Pg.484]
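In the usual notation (a sketch; sign conventions for the multipliers λ vary between texts), the Lagrange function is

$$L(z, \lambda) = F(z) + \lambda^{\mathsf T} h(z),$$

so L(z, λ) = F(z) wherever h(z) = 0, as stated.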

As the goal is to minimize the objective function, releasing the constraint into the feasible region must not decrease the objective function. Using the shadow-price argument above, it is evident that the multiplier must be nonnegative (Ref. 177). [Pg.485]

The calculations begin with given values for the independent variables u and exit with the (constrained) derivatives of the objective function with respect to them. Use the routine described above for the unconstrained problem, where a succession of quadratic fits is used to move toward the optimal point. This approach is a form of the generalized reduced gradient (GRG) approach to optimization, one of the better ways to carry out optimization numerically. [Pg.486]

Inequality Constrained Problems. To solve inequality constrained problems, a strategy is needed that can decide which of the inequality constraints should be treated as equalities. Once that question is decided, a GRG type of approach can be used to solve the resulting equality constrained problem. Solving can be split into two phases: phase 1, where the goal is to find a point that is feasible with respect to the inequality constraints, and phase 2, where one seeks the optimum while maintaining feasibility. Phase 1 is often accomplished by ignoring the objective function and using instead... [Pg.486]
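A common phase-1 substitute objective (a sketch; the source's own expression is truncated above) is the total violation of the inequality constraints g_i(z) ≤ 0:

$$F_1(z) = \sum_i \max\{0,\; g_i(z)\},$$

which is zero exactly at feasible points, so driving F_1 to zero delivers the starting point that phase 2 requires.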

Westerterp et al. (1984; see Case Study 4, preceding) conclude, "Thanks to mathematical techniques and computing aids now available, any optimization problem can be solved, provided it is realistic and properly stated. The difficulties of optimization lie mainly in providing the pertinent data and in an adequate construction of the objective function." [Pg.706]

No single method or algorithm of optimization exists that can be applied efficiently to all problems. The method chosen for any particular case will depend primarily on (1) the character of the objective function, (2) the nature of the constraints, and (3) the number of independent and dependent variables. Table 8-6 summarizes the six general steps for the analysis and solution of optimization problems (Edgar and Himmelblau, Optimization of Chemical Processes, McGraw-Hill, New York, 1988). You do not have to follow the cited order exactly, but you should cover all of the steps eventually. Shortcuts in the procedure are allowable, and the easy steps can be performed first. Steps 1, 2, and 3 deal with the mathematical definition of the problem: identification of variables, specification of the objective function, and statement of the constraints. If the process to be optimized is very complex, it may be necessary to reformulate the problem so that it can be solved with reasonable effort. Later in this section, we discuss the development of mathematical models for the process and the objective function (the economic model). [Pg.742]

Determine the criterion for optimization and specify the objective function in terms of the above variables together with coefficients. This step provides the performance model (sometimes called the economic model when appropriate). [Pg.742]

