Big Chemical Encyclopedia


Ill-conditioned estimation problems

Given that in parameter estimation we normally have a relatively smooth LS objective function, we do not need to be exceptionally concerned about local optima (although this may not be the case for ill-conditioned estimation problems). This is particularly true if we have a good idea of the range in which the parameter values should lie. As a result, it may be more efficient to use a value for NR that is a function of the number of unknown parameters. For example, we may consider... [Pg.80]

Indeed, using the Gauss-Newton method with an initial estimate of k(0)=(450, 7), convergence to the optimum was achieved in three iterations with no need to employ Marquardt's modification. The optimal parameter estimates are k1 = 420.2 (±8.68%) and k2 = 5.705 (±24.58%). It should be noted, however, that this type of model can often lead to ill-conditioned estimation problems if the data have not been collected at both low and high values of the independent variable. The convergence to the optimum, starting with the initial guess k(0)=(1, 1), is shown in Table 17.5. [Pg.326]

In spite of its simplicity, the direct integral method has relatively good statistical properties and may even be superior to the traditional indirect approach in ill-conditioned estimation problems (ref. 18). Good performance, however, can be expected only if the sampling is sufficiently dense and the measurement errors are moderate; otherwise spline interpolation may lead to severely biased estimates. [Pg.289]
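The direct integral idea above can be sketched for a simple first-order model. This is a minimal illustration, not the cited implementation: the model dx/dt = -k·x, the rate constant, and the noise level are all assumptions chosen for demonstration. Noisy observations are interpolated with a cubic spline, and the rate constant is then obtained by linear least squares on the integrated form x(t) - x(0) = -k ∫₀ᵗ x dτ.

```python
# Hedged sketch of the direct integral method for an assumed model
# dx/dt = -k*x: spline-interpolate the data, then estimate k by LINEAR
# least squares on x(t_i) - x(0) = -k * integral_0^{t_i} x dt.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
k_true = 0.5                                   # illustrative "true" rate constant
t = np.linspace(0.0, 8.0, 25)                  # reasonably dense sampling
x = np.exp(-k_true * t) + 0.005 * rng.standard_normal(t.size)

spline = CubicSpline(t, x)
integrals = np.array([spline.integrate(0.0, ti) for ti in t])

# Linear LS: minimize || (x - x[0]) - (-k) * integrals ||^2
coef, *_ = np.linalg.lstsq(integrals[:, None], x - x[0], rcond=None)
k_hat = -coef[0]
print(f"k_hat = {k_hat:.3f}")
```

With dense sampling and small noise the linear estimate lands close to the true rate constant; as the excerpt warns, sparse or noisy data would bias the spline and hence the estimate.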

In fact, with simple input functions common in pharmacokinetic applications (e.g., an impulse or step function), the columns of the observation matrix X created from the integrals in (5.69) tend to be linearly dependent, resulting in ill-conditioned estimation problems. As discussed in the next section, this method is, however, excellent for input identification. [Pg.306]

Maximum likelihood methods are commonly used to estimate parameters from noisy data, and they can be applied to image restoration, possibly with additional constraints (e.g., positivity). As will be shown in this section, however, maximum likelihood methods are not appropriate for solving ill-conditioned inverse problems. [Pg.403]

In practice, the solution of Equation 3.16 for the estimation of the parameters is not done by computing the inverse of matrix A. Instead, any good linear equation solver should be employed. Our preference is to first perform an eigenvalue decomposition of the real symmetric matrix A, which provides significant additional information about potential ill-conditioning of the parameter estimation problem (see Chapter 8). [Pg.29]
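The eigenvalue-decomposition approach described above can be sketched as follows. The matrix values here are illustrative assumptions (chosen to be nearly rank-deficient); the point is that one decomposition both solves the system and exposes the condition number.

```python
# Minimal sketch: instead of inverting the real symmetric matrix A in the
# normal equations A*dk = b, perform an eigenvalue decomposition. The ratio
# of extreme eigenvalues is the condition number and flags potential
# ill-conditioning of the estimation problem.
import numpy as np

A = np.array([[4.0, 2.0,  0.9 ],
              [2.0, 1.01, 0.45],
              [0.9, 0.45, 0.21]])           # nearly singular by construction
b = np.array([1.0, 0.5, 0.22])

eigvals, eigvecs = np.linalg.eigh(A)        # eigh: for real symmetric matrices
cond = eigvals.max() / eigvals.min()
print(f"condition number ~ {cond:.2e}")

# Solve A dk = b through the decomposition: dk = V diag(1/lambda) V^T b
dk = eigvecs @ ((eigvecs.T @ b) / eigvals)
```

A large eigenvalue spread warns that the step dk is dominated by poorly determined directions, before any parameter estimates are reported.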

Generally speaking, for condition numbers less than 10^10 the parameter estimation problem is well-posed. For condition numbers greater than 10^10 the problem is relatively ill-conditioned, whereas for condition numbers of 10^30 or greater the problem is very ill-conditioned and we may encounter computer overflow problems. [Pg.142]

Thus, the error in the solution vector is expected to be large for an ill-conditioned problem and small for a well-conditioned one. In parameter estimation, vector b comprises a linear combination of the response variables (measurements), which contain the error terms. Matrix A does not depend explicitly on the response variables; it depends only on the parameter sensitivity coefficients, which in turn depend on the independent variables (assumed to be known precisely) and on the estimated parameter vector k, which incorporates the uncertainty in the data. As a result, we expect most of the uncertainty in Equation 8.29 to be present in Δb. [Pg.142]

If matrix A is ill-conditioned at the optimum (i.e., at k=k*), there is not much we can do. We are faced with a truly ill-conditioned problem, and the estimated parameters will have highly questionable values with unacceptably large estimated variances. Probably the most productive thing to do is to reexamine the structure and dependencies of the mathematical model and try to reformulate a better-posed problem. Sequential experimental design techniques can also aid us in... [Pg.142]

When the parameters differ by more than one order of magnitude, matrix A may appear to be ill-conditioned even if the parameter estimation problem is well-posed. The best way to overcome this problem is by introducing the reduced sensitivity coefficients, defined as... [Pg.145]

With this modification the conditioning of matrix A is significantly improved, and cond(AR) gives a more reliable measure of the ill-conditioning of the parameter estimation problem. This modification has been implemented in all computer programs provided with this book. [Pg.146]
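The effect of this scaling can be sketched numerically. The sensitivity matrix and parameter values below are illustrative assumptions: each sensitivity column G[:, j] is multiplied by the current parameter value k[j], which is equivalent to forming AR = K·A·K with K = diag(k), so that parameters of very different magnitude no longer inflate the apparent condition number.

```python
# Sketch of reduced sensitivity coefficients (illustrative numbers): when
# parameters differ by orders of magnitude, scale sensitivity column j by
# the parameter value k[j]; cond(A_R) then reflects the true conditioning.
import numpy as np

k = np.array([450.0, 0.007])              # parameters of very different scale
G = np.array([[1.0e-3, 2.0e2],
              [2.0e-3, 1.0e2],
              [1.5e-3, 3.0e2]])           # raw sensitivities dy_i/dk_j

A = G.T @ G                               # unscaled normal-equations matrix
G_R = G * k                               # broadcast: column j times k[j]
A_R = G_R.T @ G_R                         # equivalently diag(k) @ A @ diag(k)

print(np.linalg.cond(A), np.linalg.cond(A_R))
```

Here the unscaled matrix looks severely ill-conditioned purely because of the parameter magnitudes, while the reduced form shows the problem is actually well-posed.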

The remedies to increase the region of convergence include the use of a pseudoinverse or Marquardt's modification, which overcome the ill-conditioning of matrix A. However, if the basic sensitivity information is not there, the estimated direction Δk(j+1) cannot be obtained reliably. [Pg.152]
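Marquardt's modification mentioned above can be sketched in a few lines. The matrix, right-hand side, and damping parameter are illustrative assumptions: adding λI to a nearly singular A makes the step solvable, though, as the excerpt notes, it cannot manufacture sensitivity information that is not in the data.

```python
# Sketch of Marquardt's modification (illustrative matrix): adding lambda*I
# to an ill-conditioned normal-equations matrix A yields a well-defined
# step, biased toward the steepest-descent direction.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])   # nearly singular: cond(A) ~ 1e12
b = np.array([2.0, 2.0])

lam = 1e-6                           # Marquardt parameter (assumed value)
dk = np.linalg.solve(A + lam * np.eye(2), b)
print(dk)                            # finite step despite near-singularity
```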

In this problem it is very difficult to obtain convergence to the global optimum, as the condition number of matrix A at the above local optimum is 3×10^18. Even if this were the global optimum, a small change in the data would result in widely different parameter estimates, since this parameter estimation problem appears to be fairly ill-conditioned. [Pg.292]

The LS objective function was found to be 0.7604×10^-9. This value is almost three orders of magnitude smaller than the one found earlier at a local optimum. The estimated parameter values were A1=22.672, A2=132.4, A3=585320, E1=13899, E2=2439.6 and E3=13506, where the parameters Ai and Ei were estimated back from the reparameterized values. With this reparameterization we were able to lessen the ill-conditioning of the problem, since the condition number of matrix A was now 5.6×10^8. [Pg.293]

After 10 iterations of the Gauss-Newton method the LS objective function was reduced to 0.0147. The estimation problem, as defined, is severely ill-conditioned. Although the algorithm did not converge, the estimation part of the program provided estimates of the standard deviation in the parameter values obtained thus far. [Pg.378]

Firstly, it has been found that the amplitudes of the LI spectrum cannot all be estimated with a standard least-squares fitting scheme for this ill-conditioned problem. One solution to this problem is a numerical procedure called regularization [55]. In this method, the optimization criterion includes the misfit plus an extra term. Specifically, in our implementation the quantity to be minimized can be expressed as follows [53]... [Pg.347]
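The regularization idea above (misfit plus an extra term) can be sketched in its simplest Tikhonov/ridge form. Everything here is an illustrative assumption — the exponential kernel G, the data d, and the weight alpha — and the cited implementation uses its own specific penalty term, not necessarily this one.

```python
# Sketch of regularization in Tikhonov form (illustrative data): minimize
# ||G w - d||^2 + alpha * ||w||^2 instead of the misfit alone, which
# stabilizes amplitude estimation against a near-collinear kernel.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.05, 3.0, 40)
T2 = np.array([0.1, 0.5, 1.0])                 # hypothetical decay times
G = np.exp(-np.outer(t, 1.0 / T2))             # near-collinear exponential kernel
w_true = np.array([1.0, 0.0, 2.0])
d = G @ w_true + 1e-3 * rng.standard_normal(t.size)

alpha = 1e-3                                   # regularization weight (assumed)
w_reg = np.linalg.solve(G.T @ G + alpha * np.eye(3), G.T @ d)
print(w_reg)
```

The penalty term trades a small amount of bias for a large reduction in the variance of the recovered amplitudes.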

As seen from Fig. 5.3, the substrate concentration is most sensitive to the parameters around t = 7 hours. It is therefore advantageous to select more observation points in this region when designing identification experiments (see Section 3.10.2). The sensitivity functions, especially those with respect to Ks and Kd, seem to be proportional to each other, and the near-linear dependence of the columns of the Jacobian matrix may lead to an ill-conditioned parameter estimation problem. Principal component analysis of the matrix S^T S is a powerful aid in uncovering such parameter dependences. The approach will be discussed in Section 5.8.1. [Pg.282]
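The principal component analysis of S^T S mentioned above can be sketched with an illustrative sensitivity matrix (constructed so that two columns are almost proportional, mimicking the Ks/Kd situation): a near-zero eigenvalue flags a parameter combination the data cannot resolve, and the corresponding eigenvector names the parameters involved.

```python
# Sketch of PCA on S^T S (illustrative sensitivity matrix): a near-zero
# eigenvalue reveals near-linear dependence between sensitivity columns,
# and its eigenvector identifies the dependent parameter combination.
import numpy as np

S = np.array([[1.0, 2.00, 0.3],
              [2.0, 4.01, 0.1],
              [3.0, 6.02, 0.7],
              [4.0, 7.99, 0.2]])          # column 2 ~ 2 * column 1
eigvals, eigvecs = np.linalg.eigh(S.T @ S)

i_min = np.argmin(eigvals)
print("smallest eigenvalue:", eigvals[i_min])
print("dependent direction:", eigvecs[:, i_min])  # loads on parameters 1 and 2
```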

Third, there is the ill-conditioning of numerical problems for parameter estimation with models involving a large number of exponential terms. Wise [299] has developed a class of powers-of-time models as alternatives to the sums-of-exponentials models and has validated these alternative models on many sets of experimental data. From an empirical standpoint, Wise [244] reported 1000 or more published time-concentration curves where the alternative models fit the data as well as or better than the sums-of-exponentials models. [Pg.201]

Were it not for the ill-conditioned nature of the problem, w could simply be estimated from ŵ = [GᵀG]⁻¹Gᵀv, where the hat indicates the estimated value. Unfortunately, [GᵀG] is almost singular; therefore, such a solution is highly oscillatory with negative peaks. [Pg.205]
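A standard stabilized alternative to the near-singular normal-equations solution above is a truncated-SVD pseudoinverse. The kernel G, the weights, and the cutoff below are illustrative assumptions, not the cited problem's actual quantities.

```python
# Sketch of a truncated-SVD pseudoinverse (illustrative G and v): pinv with
# an rcond cutoff discards singular directions below the threshold, giving
# a stable estimate where the plain normal equations oscillate.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.1, 2.0, 30)
G = np.exp(-np.outer(t, [1.0, 1.1, 1.2]))     # nearly collinear columns
w_true = np.array([0.5, 1.0, 0.5])
v = G @ w_true + 1e-4 * rng.standard_normal(t.size)

w_hat = np.linalg.pinv(G, rcond=1e-3) @ v     # relative singular-value cutoff
print(w_hat, np.linalg.norm(G @ w_hat - v))
```

The cutoff sacrifices resolution along the unresolvable directions in exchange for an estimate without spurious negative peaks.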

The reliability of the parameter estimates can be checked using a nonparametric technique, the jackknife (20, 34). The nonlinearity of the statistical model and the ill-conditioning of a given problem can produce numerical difficulties and force the estimation algorithm into a false minimum. [Pg.393]

Ill-conditioned matrices can be due either to insufficient data to fit a model or to a poor model. A model-driven situation in which ill-conditioning is a problem arises when the model parameter estimates themselves are highly correlated (correlations greater than 0.95). An example of the latter situation, using logarithms of thermometer resistance (Y) as a function of temperature (x), was reported by Simonoff and Tsai (1989) and Meyer and Roth (1972) (Table 3.3). [Pg.109]




