Big Chemical Encyclopedia

Quadratic objective

With this prediction model one can set up a suitable quadratic objective function as follows ... [Pg.863]

The constant in the numerator can always be chosen to preserve the steady-state gain of the transfer function. As suggested by Luus (1980), the five unknown parameters can be obtained by minimizing the following quadratic objective function... [Pg.301]

Another method for solving nonlinear programming problems is based on quadratic programming (QP). Quadratic programming is an optimization procedure that minimizes a quadratic objective function subject to linear inequality or equality constraints (or both types). For example, a quadratic function of two variables x1 and x2 would be of the general form... [Pg.46]
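As a minimal sketch of such a function (the coefficients below are illustrative, not taken from the text), a general quadratic in two variables x1 and x2 can be written and evaluated as:

```python
# General quadratic objective of two variables:
#   f(x1, x2) = a*x1^2 + b*x1*x2 + c*x2^2 + d*x1 + e*x2 + g
# Coefficient values here are hypothetical, chosen only for illustration.
def quadratic(x1, x2, a=1.0, b=0.0, c=1.0, d=0.0, e=0.0, g=0.0):
    return a * x1**2 + b * x1 * x2 + c * x2**2 + d * x1 + e * x2 + g

# For a = c = 1 and all other coefficients zero, f(2, 2) = 4 + 4 = 8.
print(quadratic(2.0, 2.0))  # 8.0
```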

Problem 4.1 is nonlinear if one or more of the functions f, g1, ..., gm are nonlinear. It is unconstrained if there are no constraint functions gi and no bounds on the xi, and it is bound-constrained if only the xi are bounded. In linearly constrained problems all constraint functions gi are linear, and the objective f is nonlinear. There are special NLP algorithms and software for unconstrained and bound-constrained problems, and we describe these in Chapters 6 and 8. Methods and software for solving constrained NLPs use many ideas from the unconstrained case. Most modern software can handle nonlinear constraints, and is especially efficient on linearly constrained problems. A linearly constrained problem with a quadratic objective is called a quadratic program (QP). Special methods exist for solving QPs, and these are often faster than general-purpose optimization procedures. [Pg.118]
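One reason QPs solve quickly is that an equality-constrained QP reduces to a single linear system (the KKT conditions). The sketch below, with hypothetical numbers not drawn from the text, solves minimize 1/2 x'Qx + c'x subject to Ax = b by assembling and solving that system with plain Gaussian elimination:

```python
# Equality-constrained QP via its KKT system (illustrative sketch):
#   minimize 1/2 x'Qx + c'x  subject to  Ax = b
# leads to the linear system
#   [Q  A'] [x]   [-c]
#   [A  0 ] [y] = [ b]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= f * a[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][k] * x[k] for k in range(r + 1, n))) / a[r][r]
    return x

# Example (hypothetical): minimize x1^2 + x2^2 - 2*x1 - 4*x2  s.t.  x1 + x2 = 1
# Q = [[2,0],[0,2]], c = [-2,-4], A = [1 1], b = 1
kkt = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
rhs = [2.0, 4.0, 1.0]
x1, x2, lam = solve(kkt, rhs)
print(x1, x2)  # 0.0 1.0 -- the constrained minimizer
```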

Although the examples thus far have involved linear constraints, the chief nonlinearity of an optimization problem often appears in the constraints. The feasible region then has curved boundaries. A problem with nonlinear constraints may have local optima, even if the objective function has only one unconstrained optimum. Consider a problem with a quadratic objective function and the feasible region shown in Figure 4.8. The problem has local optima at the two points a and b because no point of the feasible region in the immediate vicinity of either point yields a smaller value of f. [Pg.120]

Geometry of a quadratic objective function of two independent variables—elliptical contours. If the eigenvalues are equal, then the contours are circles. [Pg.132]

Geometry of a quadratic objective function of two independent variables—saddle point. [Pg.133]
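The contour geometries described in the two captions above follow from the eigenvalues of the Hessian: equal positive eigenvalues give circles, distinct positive eigenvalues give ellipses, and eigenvalues of mixed sign give a saddle point. A small sketch (with hypothetical Hessians) classifying a symmetric 2x2 Hessian by its closed-form eigenvalues:

```python
import math

# Classify the contour geometry of f(x) = 1/2 x'Hx for a symmetric 2x2
# Hessian H = [[a, b], [b, c]] from its eigenvalues (illustrative sketch).
def classify(a, b, c):
    mean = (a + c) / 2.0
    half = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lo, hi = mean - half, mean + half  # the two eigenvalues
    if lo > 0 and hi > 0:
        return "circles" if math.isclose(lo, hi) else "ellipses"
    if lo < 0 and hi < 0:
        return "ellipses (maximum)"
    if lo * hi < 0:
        return "saddle point"
    return "degenerate"

print(classify(2.0, 0.0, 2.0))   # circles  (equal positive eigenvalues)
print(classify(4.0, 0.0, 1.0))   # ellipses
print(classify(1.0, 0.0, -1.0))  # saddle point
```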

For well-posed quadratic objective functions the contours always form a convex region; for more general nonlinear functions, they do not (see the next section for an example). It is helpful to construct contour plots to assist in analyzing the performance of multivariable optimization techniques when applied to problems of two or three dimensions. Most computer libraries have contour plotting routines to generate the desired figures. [Pg.134]

First, let us consider the perfectly scaled quadratic objective function f(x) = x1^2 + x2^2, whose contours are concentric circles as shown in Figure 6.6. Suppose we calculate the gradient at the point x^T = [2 2]... [Pg.191]
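For this perfectly scaled function the gradient at [2, 2] is [4, 4], and because the contours are circles an exact steepest-descent step lands on the minimum immediately. A sketch consistent with the example (the step length 1/2 is exact here because the Hessian is 2I):

```python
# f(x) = x1^2 + x2^2 has gradient [2*x1, 2*x2]; at x = [2, 2] it is [4, 4].
# With circular contours, one exact line-search step of steepest descent
# reaches the minimum (illustrative sketch).
def grad(x):
    return [2.0 * x[0], 2.0 * x[1]]

x = [2.0, 2.0]
g = grad(x)
print(g)  # [4.0, 4.0]

alpha = 0.5  # exact step: g'g / (g'Hg) = 1/2 for Hessian 2I
x_new = [x[i] - alpha * g[i] for i in range(2)]
print(x_new)  # [0.0, 0.0] -- the minimum, reached in a single step
```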

LP software includes two related but fundamentally different kinds of programs. The first is solver software, which takes data specifying an LP or MILP as input, solves it, and returns the results. Solver software may contain one or more algorithms (simplex and interior point LP solvers and branch-and-bound methods for MILPs, which call an LP solver many times). Some LP solvers also include facilities for solving some types of nonlinear problems, usually quadratic programming problems (quadratic objective function, linear constraints; see Section 8.3), or separable nonlinear problems, in which the objective or some constraint functions are a sum of nonlinear functions, each of a single variable, such as... [Pg.243]
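A separable objective of the kind just described is simply a sum of one-variable nonlinear terms, each of which a solver can handle (e.g. piecewise-linearize) independently. A small sketch with hypothetical terms, not taken from the text:

```python
import math

# Separable nonlinear objective: f(x) = f1(x1) + f2(x2) + f3(x3),
# where each fi depends on a single variable (terms are illustrative).
terms = [
    lambda t: (t - 1.0) ** 2,   # f1: quadratic in x1 only
    lambda t: math.exp(t) - t,  # f2: exponential in x2 only
    lambda t: t ** 4,           # f3: quartic in x3 only
]

def separable(x):
    return sum(f(t) for f, t in zip(terms, x))

print(separable([1.0, 0.0, 0.0]))  # 1.0 (only f2 contributes: e^0 - 0 = 1)
```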

Successive quadratic programming (SQP) methods solve a sequence of quadratic programming approximations to a nonlinear programming problem. Quadratic programs (QPs) have a quadratic objective function and linear constraints, and there exist efficient procedures for solving them; see Section 8.3. As in SLP, the linear constraints are linearizations of the actual constraints about the selected point. The objective is a quadratic approximation to the Lagrangian function, and the algorithm is simply Newton's method applied to the KTC of the problem. [Pg.302]

In MPC a dynamic model is used to predict the future output over the prediction horizon based on a set of control changes. The desired output is generated as a set-point that may vary as a function of time; the prediction error is the difference between the set-point trajectory and the model prediction. A model predictive controller is based on minimizing a quadratic objective function over a specific time horizon based on the sum of the square of the prediction errors plus a penalty... [Pg.568]

We now develop a mathematical statement for model predictive control with a quadratic objective function for each sampling instant k and linear process model in Equation 16.1 ... [Pg.569]
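A minimal sketch of evaluating such an objective at one sampling instant: the sum of squared prediction errors over the horizon plus a weighted penalty on the control moves. The weight, trajectories, and horizon lengths below are hypothetical:

```python
# MPC quadratic objective at sampling instant k (illustrative sketch):
#   J = sum over horizon of (setpoint - prediction)^2
#       + weight * sum of (control move)^2
def mpc_objective(setpoint, y_pred, du, weight=1.0):
    error_term = sum((r - y) ** 2 for r, y in zip(setpoint, y_pred))
    move_term = weight * sum(d ** 2 for d in du)
    return error_term + move_term

setpoint = [1.0, 1.0, 1.0]   # desired output trajectory over the horizon
y_pred = [0.5, 0.8, 1.0]     # model prediction of the output
du = [0.5, 0.3]              # proposed control changes
print(mpc_objective(setpoint, y_pred, du, weight=0.1))
# 0.25 + 0.04 + 0.0 + 0.1*(0.25 + 0.09), i.e. approximately 0.324
```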

Now let us define a more general form of the quadratic objective function, which permits us to assign predetermined weights to the components. Consider the general quadratic objective... [Pg.33]
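With a diagonal weighting matrix the general weighted quadratic reduces to a weighted sum of squared components, J = sum_i w_i * e_i^2. A sketch with hypothetical weights and components:

```python
# Weighted quadratic objective with predetermined (diagonal) weights:
#   J = sum_i w_i * e_i^2   (numbers below are illustrative)
def weighted_quadratic(errors, weights):
    return sum(w * e * e for w, e in zip(weights, errors))

errors = [1.0, 2.0, 0.5]
weights = [1.0, 0.5, 4.0]  # heavier weight -> that component matters more
print(weighted_quadratic(errors, weights))  # 1.0 + 2.0 + 1.0 = 4.0
```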

To apply the procedure, the nonlinear constraints are approximated by a Taylor series expansion, and an optimization problem is solved to find the solution, d, that minimizes a quadratic objective function subject to linear constraints. The QP subproblem is formulated as follows... [Pg.104]

The covariance of the residuals in the estimate can be expressed as the contribution of two terms, the first corresponding to the original adjustment and the second to a correction term. Furthermore, the quadratic objective can be expressed as... [Pg.142]

V. Visweswaran and C. A. Floudas. New properties and computational improvement of the GOP algorithm for problems with quadratic objective function and constraints. J. Global Optim., 3(3):439, 1993. [Pg.450]

Depending on the form of the objective function, the final formulation obtained by replacing the nonlinear Eq. (17) by the set of linear inequalities corresponds to a MINLP (nonlinear objective), to a MIQP (quadratic objective) or to a MILP (linear objective). For the cases where the objective function is linear, convergence to the global optimal solution is guaranteed using currently available software. The same holds true for the more general case where the objective function is a convex function. [Pg.43]

Optimal Control. Optimal control is an extension of the principles of parameter optimization to dynamic systems. In this case one wishes to optimize a scalar objective function, which may be a definite integral of some function of the state and control variables, subject to a constraint, namely a dynamic equation, such as Equation (1). The solution to this problem requires the use of time-varying Lagrange multipliers; for a general objective function and state equation, an analytical solution is rarely forthcoming. However, a specific case of the optimal control problem does lend itself to analytical solution, namely a state equation described by Equation (1) and a quadratic objective function given by... [Pg.104]
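For the linear-state-equation, quadratic-objective (LQ) case just mentioned, the solution reduces to a Riccati equation. A scalar sketch (system and weights are hypothetical, not from the text) that iterates the discrete Riccati recursion to its steady state and forms the feedback gain:

```python
# LQ optimal control sketch for the scalar system x(k+1) = a*x(k) + b*u(k)
# with stage cost q*x^2 + r*u^2.  Iterating the discrete Riccati recursion
#   p <- q + a^2*p - (a*b*p)^2 / (r + b^2*p)
# to convergence gives the steady-state cost p and feedback gain k_fb,
# so that u = -k_fb * x.  All numbers are illustrative.
def lqr_scalar(a, b, q, r, iters=200):
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    k_fb = a * b * p / (r + b * b * p)
    return p, k_fb

p, k_fb = lqr_scalar(a=1.0, b=1.0, q=1.0, r=1.0)
# For these values p converges to the golden ratio (1 + sqrt(5))/2, and the
# closed loop x(k+1) = (a - b*k_fb)*x(k) is stable:
print(abs(1.0 - 1.0 * k_fb) < 1.0)  # True
```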

Edgar (22) has also discussed the quadratic objective function issue, i.e., whether it incorporates economic realities, and he concluded that it is not wholly satisfactory in this regard. At the present time the LQP appears to be a method which can usually yield results more or less equivalent to other design techniques, although possibly with larger effort. Other features of the LQP controller synthesis approach are as follows ... [Pg.105]

The minimum variance control for an SISO system finds the unrestricted minimum of the expected value of a quadratic objective function ... [Pg.106]

Summarizing: At the time point t_{k-1} the optimum of the [quadratic objective function] Z_k is sought. The resulting control [input] vector u(k) depends on x(k-1) and contains all control [input] vectors u_k, u_{k+1}, ..., u_N which control the process optimally over the interval [t_{k-1}, T]. Of these control [input] vectors, one implements the vector u_k (which depends on x(k-1)) as input vector for the next interval [t_{k-1}, t_k]. At the next time point a new input vector u_{k+1} is determined. This is calculated from the objective function Z_{k+1} and is dependent on x(k). Therefore, the vector u_k, which is implemented in the interval [t_{k-1}, t_k], is dependent on the state vector x(k-1). Hence, the sought feedback law consists of the solution of a convex optimization problem at each time point (k = 1, 2, ..., N). (Translation by the author.)... [Pg.136]

When there are no inequality constraints [Eqs. (15)-(17)], the minimization of the quadratic objective function in Eq. (4) has a simple closed-form solution, which can be expressed as follows. Equations (11) and (13) yield... [Pg.142]
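In the unconstrained case the closed form amounts to solving the stationarity condition H x = -g. A 2x2 sketch using Cramer's rule (the particular quadratic is hypothetical, not the one in Eq. (4)):

```python
# Closed-form minimizer of an unconstrained quadratic: solve H x = -g,
# here for a 2x2 system via Cramer's rule (illustrative sketch).
def solve_2x2(H, g):
    (a, b), (c, d) = H
    det = a * d - b * c
    x1 = (-g[0] * d + g[1] * b) / det
    x2 = (-g[1] * a + g[0] * c) / det
    return [x1, x2]

# f(x) = x1^2 + x2^2 - 2*x1 - 4*x2  ->  H = 2I, gradient at the origin g = [-2, -4]
H = [[2.0, 0.0], [0.0, 2.0]]
g = [-2.0, -4.0]
print(solve_2x2(H, g))  # [1.0, 2.0] -- the unconstrained minimizer
```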

G. Sigl, K. Doll, F. M. Johannes. Analytical placement: a linear or a quadratic objective function? In Proc. Design Automation Conf., 1991, pp. 427-432. [Pg.142]

Since ∇f(x*) = 0, the algorithm terminates with x* = (3, 2)^T. Note that Dj is precisely the inverse of the Hessian matrix H(x) of the convex quadratic objective function f(x). [Pg.2552]
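This one-step termination is a general property: on a convex quadratic, a Newton step with the exact inverse Hessian (or a quasi-Newton matrix that has converged to it) lands on the minimizer immediately. A sketch with a hypothetical quadratic, not the one from the cited example:

```python
# On a convex quadratic, x_new = x - H^{-1} grad f(x) is the exact minimizer,
# so Newton's method terminates in a single step (illustrative sketch).
def grad_f(x):  # gradient of f(x) = x1^2 + 2*x2^2 - 4*x1 - 4*x2
    return [2.0 * x[0] - 4.0, 4.0 * x[1] - 4.0]

H_inv = [[0.5, 0.0], [0.0, 0.25]]  # inverse of the Hessian diag(2, 4)

x = [10.0, -3.0]  # arbitrary starting point
g = grad_f(x)
x_new = [x[i] - sum(H_inv[i][j] * g[j] for j in range(2)) for i in range(2)]
print(x_new)          # [2.0, 1.0]
print(grad_f(x_new))  # [0.0, 0.0] -- gradient vanishes after one step
```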

Algorithms for the solution of quadratic programs, such as the Wolfe (1959) algorithm, are very reliable and readily available. Hence, these have been used in preference to the implementation of the Newton-Raphson method. For each iteration, the quadratic objective function is minimized subject to linearized equality and inequality constraints. Clearly, the most computationally expensive step in carrying out an iteration is the evaluation of the Laplacian of the Lagrangian, ∇²xxL(x, λ), which is also the Hessian matrix of the Lagrangian, that is, the matrix of second derivatives with respect to x. [Pg.632]

A key problem that arises in the implementation of Powell's algorithm is due to the linearization that produces a quadratic objective function and linear constraints, which often lead to infeasible solution vectors. This problem manifests itself in solu-... [Pg.632]

Figure 5.19 (a) AR for complex isola kinetics. The objective function is simply the concentration of component B. (b) A quadratic objective function in c representing operating profit from the sale of component B. [Pg.130]

An optimization problem with a quadratic objective function and linear constraints is called a quadratic program and is denoted by QP. [Pg.390]

A problem with an easier solution has the quadratic objective function... [Pg.407]

Consider the problem with the quadratic objective function... [Pg.412]

The Lagrange function is approximated with a quadratic function, whereas the nonlinear constraints are linearized (Sequential Quadratic Programming, SQP, method; see Section 13.7). Also in this case, a lower-level BzzConstrainedMinimization class object with a quadratic objective function and linear constraints is invoked. [Pg.446]

A method called PARSE (Probability Assessment via Relaxation rates of a Structural Ensemble) is described for determination of ensembles of structures from NMR data. The problem is approached in two separate steps: (1) generation of a pool of potential conformers, and (2) determination of the conformers' probabilities which best account for the experimental data. The probabilities are calculated by a global constrained optimization of a quadratic objective function measuring the agreement between observed NMR parameters and those calculated for the ensemble. The performance of the method is tested on synthetic data sets simulated for various structural ensembles of the complementary dinucleotide d(CA)·d(TG). [Pg.181]

