Big Chemical Encyclopedia

Constrained least-squares linear constraint

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included among the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustments of the measurements, weighted by the covariance matrix of the measurement errors. This matrix is therefore essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in estimating this matrix lies in the analysis of the serial and cross correlation of the data. [Pg.25]
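The idea above can be sketched for the simplest case: one linear balance a^T x = 0 and a diagonal covariance matrix Sigma. Minimizing the Sigma-weighted adjustment subject to the balance has the closed-form solution x_hat = y - Sigma a (a^T Sigma a)^(-1) (a^T y). The three-stream splitter below is a hypothetical illustration, not an example from the text.

```python
# Data reconciliation by constrained least squares (hypothetical 3-stream
# example): minimize (x - y)^T Sigma^{-1} (x - y) subject to the linear
# balance a^T x = 0.  For a single constraint and diagonal covariance the
# closed-form adjustment is x_hat = y - Sigma a * (a^T Sigma a)^{-1} (a^T y).

def reconcile_single_balance(y, variances, a):
    """Adjust measurements y (list) with measurement-error variances
    (diagonal covariance) so that sum(a[i]*x[i]) == 0 exactly."""
    residual = sum(ai * yi for ai, yi in zip(a, y))          # a^T y
    denom = sum(ai * ai * v for ai, v in zip(a, variances))  # a^T Sigma a
    lam = residual / denom                                   # Lagrange multiplier
    return [yi - v * ai * lam for yi, v, ai in zip(y, variances, a)]

# Flows: stream1 + stream2 = stream3, i.e. a = (1, 1, -1).
y = [10.1, 4.8, 15.4]            # raw measurements (do not close the balance)
variances = [0.04, 0.01, 0.09]   # larger variance -> larger adjustment
x = reconcile_single_balance(y, variances, a=[1.0, 1.0, -1.0])

print([round(v, 4) for v in x])  # [10.2429, 4.8357, 15.0786]
```

Note how the least-reliable measurement (stream 3, variance 0.09) absorbs the largest share of the 0.5 imbalance, which is exactly the role of the covariance matrix as a weight.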

The constrained least-squares method is developed in Section 5.3, where a numerical example is treated in detail. Efficient specific algorithms that take errors into account have been developed by Provost and Allegre (1979). The literature abounds with alternative methods. Wright and Doherty (1970) use linear programming methods, which are fast and allow an easy implementation of linear constraints, but the structure of the data is not easily perceived and error assessment is handled inefficiently. Principal component analysis (Section 4.4) is more efficient when the end-members are unknown. [Pg.9]
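A typical use of constrained least squares in this context is end-member mixing: find proportions that sum to one and best reproduce an observed composition. The sketch below uses a hypothetical two-end-member example (not Provost and Allegre's algorithm); eliminating the sum-to-one constraint by substitution reduces the constrained problem to a single unconstrained scalar least squares.

```python
# Two end-member mixing (hypothetical example): find the proportion p such
# that p*m1 + (1-p)*m2 best fits the sample d in the least-squares sense.
# Substituting the constraint p1 + p2 = 1 gives p*(m1 - m2) ~ (d - m2),
# an unconstrained one-parameter least-squares problem.

def mixing_proportion(m1, m2, d):
    diff = [a - b for a, b in zip(m1, m2)]   # m1 - m2
    rhs = [di - b for di, b in zip(d, m2)]   # d - m2
    # scalar least-squares solution of p * diff ~ rhs
    return sum(x * y for x, y in zip(diff, rhs)) / sum(x * x for x in diff)

m1 = [1.0, 0.0, 2.0]     # end-member 1 (three element concentrations)
m2 = [0.0, 1.0, 0.0]     # end-member 2
d = [0.31, 0.69, 0.60]   # observed sample, roughly a 30/70 mixture
p = mixing_proportion(m1, m2, d)
print(round(p, 4))       # 0.3033
```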

A quadratic programming (QP) problem minimizes a quadratic function of n variables subject to m linear inequality or equality constraints. A convex QP is the simplest form of a nonlinear programming problem with inequality constraints. A number of practical optimization problems are naturally posed as QPs, such as constrained least squares and some model predictive control problems. [Pg.380]
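The constrained least-squares-to-QP mapping can be made concrete. Expanding ||Ax - b||^2 gives the QP form (1/2) x^T Q x + c^T x with Q = 2 A^T A and c = -2 A^T b; with only equality constraints E x = f, the KKT conditions reduce to one linear system in x and the multipliers. The small numbers below are illustrative, not from the text.

```python
# Constrained least squares as an equality-constrained QP (illustrative
# sketch): minimize ||A x - b||^2 subject to E x = f.  With Q = 2 A^T A and
# c = -2 A^T b, the KKT conditions give the linear system
#   [[Q, E^T], [E, 0]] [x; lam] = [-c; f].

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented copy
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for col in range(i, n + 1):
                M[r][col] -= factor * M[i][col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Least-squares data and one equality constraint x1 + x2 = 3.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 2.0]
E = [[1.0, 1.0]]
f = [3.0]
n, m = 2, 1   # unknowns, constraints

# Build Q = 2 A^T A and c = -2 A^T b.
Q = [[2 * sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
     for i in range(n)]
c = [-2 * sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]

# Assemble and solve the (n + m) x (n + m) KKT system.
kkt = [Q[i] + [E[j][i] for j in range(m)] for i in range(n)] + \
      [E[j] + [0.0] * m for j in range(m)]
sol = solve(kkt, [-ci for ci in c] + f)
x_opt = sol[:n]
print([round(v, 6) for v in x_opt])   # [1.0, 2.0]
```

The unconstrained minimizer of this objective is (2/3, 5/3), which violates x1 + x2 = 3; the KKT solve shifts it to (1, 2), the closest point (in the objective's metric) on the constraint.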

In this section we give a brief review of the mathematics involved in solving linear least-squares problems, with or without linear constraints, as encountered in this chapter. More details about the unconstrained problem can be found in any text on linear regression, e.g. Draper and Smith (1981) or Press et al. (1986). More details about the constrained problem can be found in Lawson and Hanson (1974). [Pg.178]

The methods for solving an optimization task depend on the problem classification. Since the maximum of a function f is the minimum of the function -f, it suffices to deal with minimization. The optimization problem is classified according to the type of independent variables involved (real, integer, mixed), the number of variables (one, few, many), the functional characteristics (linear, least squares, nonlinear, nondifferentiable, separable, etc.), and the problem statement (unconstrained, subject to equality constraints, subject to simple bounds, linearly constrained, nonlinearly constrained, etc.). For each category, suitable algorithms exist that exploit the problem's structure and formulation. [Pg.1143]









