Big Chemical Encyclopedia

Ill-conditioned

A matrix with a large condition number is commonly referred to as ill-conditioned and is particularly vulnerable to round-off errors, so special solution techniques are required. [Pg.206]

This set is said to be ill-conditioned because the second equation is almost an exact multiple of the first. The matrix of coefficients is almost singular. [Pg.55]
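A minimal numerical sketch of this situation (the 2x2 system below is illustrative, not taken from the source): the second row of the coefficient matrix is almost a multiple of the first, and a relative change of a few parts in 10^6 in the right-hand side shifts the solution by several percent.

```python
import numpy as np

# Two equations whose second row is almost a multiple of the first:
# the coefficient matrix is nearly singular, hence ill-conditioned.
A = np.array([[1.0,    2.0],
              [1.0001, 2.0]])
b = np.array([3.0, 3.0001])

x1 = np.linalg.solve(A, b)                 # exact answer: (1, 1)

# Perturb the right-hand side very slightly and solve again.
b_perturbed = np.array([3.0, 3.00011])
x2 = np.linalg.solve(A, b_perturbed)       # jumps to about (1.1, 0.95)

print("condition number:  ", np.linalg.cond(A))   # ~5e4
print("solution:          ", x1)
print("perturbed solution:", x2)           # visibly different despite a tiny change
```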

The norm is useful when doing numerical calculations. If the computer's floating-point precision is 10^-7, then κ = 10^7 indicates an ill-conditioned matrix. If the floating-point precision is 10^-16 (double precision), then a matrix with κ = 10^16 may be ill-conditioned. Two other measures are useful and are more easily calculated ... [Pg.466]
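A sketch of this rule of thumb (the Hilbert matrix is a classic ill-conditioned test family, not taken from the source): roughly log10(κ) of the available decimal digits are lost when solving a linear system.

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix H[i, j] = 1 / (i + j + 1), a classic ill-conditioned test case."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

for n in (4, 8, 12):
    kappa = np.linalg.cond(hilbert(n))
    # Rule of thumb: about log10(kappa) of the ~16 decimal digits of
    # double precision are lost when solving H x = b.
    print(f"n={n:2d}  cond={kappa:9.2e}  digits lost ~ {np.log10(kappa):.0f}")
```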

The higher the condition number, the more ill-conditioned the matrix is... [Pg.382]

If a matrix is ill-conditioned, its inverse may be inaccurate or the solution vector for its set of equations may be inaccurate. Two of the many ways to recognize possible ill-conditioning are... [Pg.74]

Instrumental transmission (convolution by the PSF) is always a smoothing process, whereas noise is usually non-negligible at high frequencies; the noise-amplification problem therefore always arises in deconvolution. This is termed ill-conditioning in inverse-problem theory. [Pg.400]
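The effect is easy to reproduce with synthetic data (the Gaussian PSF, grid, and noise level below are illustrative assumptions, not from the source): naive Fourier-domain deconvolution divides by the transfer function, which is tiny at high frequencies, so the noise there is amplified enormously.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 1, n)
signal = np.exp(-((x - 0.5) / 0.05) ** 2)        # "true" object

# Gaussian PSF: convolution smooths, so its transfer function H
# decays rapidly at high frequencies.
psf = np.exp(-((x - 0.5) / 0.02) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))

data = np.fft.ifft(np.fft.fft(signal) * H).real  # blurred observation
data += 1e-6 * rng.standard_normal(n)            # tiny measurement noise

# Naive deconvolution: divide by H in Fourier space. Where |H| is tiny,
# the noise is divided by a near-zero number and dominates the result.
naive = np.fft.ifft(np.fft.fft(data) / H).real
print("min |H|:", np.abs(H).min())               # ~1e-28
print("max |naive - signal|:", np.abs(naive - signal).max())   # huge
```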

Maximum likelihood methods are commonly used to estimate parameters from noisy data. Such methods can be applied to image restoration, possibly with additional constraints (e.g., positivity). Maximum likelihood methods are, however, not appropriate for solving ill-conditioned inverse problems, as will be shown in this section. [Pg.403]

Inverse problems are very common in experimental and observational sciences. Typically, they are encountered when a large number of parameters (as many as, or more than, the number of measurements) are to be retrieved from measured data assuming a model of the data, also called the direct model. Such problems are ill-conditioned in the sense that a simple inversion of the direct model applied directly to the data yields a solution with significant, or even dominant, features that change completely for a small change in the input data (for instance, due to a different realization of the noise). Since the objective constraints set by the data alone are not sufficient to provide a unique and... [Pg.419]

The forward shooting method seems straightforward but is troublesome to use. What we have done is to convert a two-point boundary value problem into an easier-to-solve initial value problem. Unfortunately, the conversion gives a numerical computation that is ill-conditioned. Extreme precision is needed at the inlet of the tube to get reasonable accuracy at the outlet. The phenomenon is akin to problems that arise in the numerical inversion of matrices and Laplace transforms. [Pg.338]
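A minimal sketch of this phenomenon on an illustrative linear boundary value problem (not the reactor model of the source): the growing mode of the converted initial value problem multiplies any inlet error, including integrator round-off, by roughly exp(k).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model two-point BVP (illustrative): y'' = k^2 * y on [0, 1], y(0) = 1.
# Forward shooting converts it to an IVP by guessing the inlet slope y'(0).
k = 20.0

def outlet(slope0):
    """Integrate from the inlet with a guessed slope and return y(1)."""
    sol = solve_ivp(lambda z, y: [y[1], k**2 * y[0]],
                    (0.0, 1.0), [1.0, slope0], rtol=1e-12, atol=1e-14)
    return sol.y[0, -1]

s = -k  # slope of the exact decaying solution y = exp(-k z), so y(1) ~ 2e-9
for ds in (0.0, 1e-9, 1e-6):
    print(f"inlet slope error {ds:8.1e} -> outlet y(1) = {outlet(s + ds): .4e}")
# The growing mode exp(+k z) amplifies any inlet error by ~exp(k) ~ 5e8,
# so extreme precision at the inlet is needed for modest accuracy at the outlet.
```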

Thus, we obtain a value of 0.159 for a step size of Δz = 0.03125. The ill-conditioning problem has been solved, but the solution remains inaccurate due to the simple integration scheme and the large step size. [Pg.339]

In practice, the solution of Equation 3.16 for the estimation of the parameters is not done by computing the inverse of matrix A. Instead, any good linear equation solver should be employed. Our preference is to first perform an eigenvalue decomposition of the real symmetric matrix A, which provides significant additional information about potential ill-conditioning of the parameter estimation problem (see Chapter 8). [Pg.29]
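A minimal sketch of this preference, assuming a generic real symmetric matrix A (the numbers are illustrative): the eigendecomposition solves the normal equations and exposes the condition number at the same time.

```python
import numpy as np

def solve_via_eigendecomposition(A, b):
    """Solve A x = b for real symmetric A via eigenvalue decomposition,
    reporting the condition number as a diagnostic for ill-conditioning."""
    w, V = np.linalg.eigh(A)              # A = V diag(w) V^T
    cond = np.abs(w).max() / np.abs(w).min()
    print(f"condition number = {cond:.3e}")
    return V @ ((V.T @ b) / w)

# Example: a mildly correlated 2x2 normal-equations matrix (illustrative).
A = np.array([[4.0, 3.9],
              [3.9, 4.0]])
b = np.array([1.0, 1.0])
print("x =", solve_via_eigendecomposition(A, b))
```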

Given that in parameter estimation we normally have a relatively smooth LS objective function, we do not need to be exceptionally concerned about local optima (although this may not be the case for ill-conditioned estimation problems). This is particularly true if we have a good idea of the range in which the parameter values should lie. As a result, it may be more efficient to use a value for NR that is a function of the number of unknown parameters. For example, we may consider... [Pg.80]

If two or more of the unknown parameters are highly correlated, or one of the parameters does not have a measurable effect on the response variables, matrix A may become singular or near-singular. In such a case we have a so-called ill-posed problem, and matrix A is ill-conditioned. [Pg.141]

A measure of the degree of ill-conditioning of a nonsingular square matrix is the condition number, defined as cond(A) = ||A|| · ||A^-1||. [Pg.141]
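This definition can be checked numerically; in the 2-norm it equals the ratio of the largest to the smallest singular value (a standard identity, illustrated on the near-singular matrix from the earlier sketch).

```python
import numpy as np

A = np.array([[1.0,    2.0],
              [1.0001, 2.0]])    # near-singular matrix from above

# cond(A) = ||A|| * ||A^-1||; in the 2-norm this equals sigma_max / sigma_min.
norm_based = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
sigma = np.linalg.svd(A, compute_uv=False)
print(norm_based, sigma[0] / sigma[-1], np.linalg.cond(A))   # all agree
```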

Generally speaking, for condition numbers less than 10^10 the parameter estimation problem is well-posed. For condition numbers greater than 10^10 the problem is relatively ill-conditioned, whereas for condition numbers of 10^30 or greater the problem is very ill-conditioned and we may encounter computer overflow problems. [Pg.142]

Thus, the error in the solution vector is expected to be large for an ill-conditioned problem and small for a well-conditioned one. In parameter estimation, vector b comprises a linear combination of the response variables (measurements), which contain the error terms. Matrix A does not depend explicitly on the response variables; it depends only on the parameter sensitivity coefficients, which depend on the independent variables (assumed to be known precisely) and on the estimated parameter vector k, which incorporates the uncertainty in the data. As a result, we expect most of the uncertainty in Equation 8.29 to be present in Δb. [Pg.142]
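This expectation is quantified by the standard perturbation bound ||Δx||/||x|| ≤ cond(A) · ||Δb||/||b||; a minimal numerical check follows (the matrix and perturbation are illustrative).

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0,    2.0],
              [1.0001, 2.0]])
b = np.array([3.0, 3.0001])
x = np.linalg.solve(A, b)

db = 1e-8 * rng.standard_normal(2)          # small perturbation in b
dx = np.linalg.solve(A, b + db) - x

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
print(f"relative error in x: {lhs:.2e}  <=  cond(A) * relative error in b: {rhs:.2e}")
```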

If matrix A is ill-conditioned at the optimum (i.e., at k = k*), there is not much we can do. We are faced with a truly ill-conditioned problem, and the estimated parameters will have highly questionable values with unacceptably large estimated variances. Probably the most productive thing to do is to reexamine the structure and dependencies of the mathematical model and try to reformulate a better-posed problem. Sequential experimental design techniques can also aid us in... [Pg.142]

If, however, matrix A is reasonably well-conditioned at the optimum, it could easily be ill-conditioned when the parameters are away from their optimal values. This is quite often the case in parameter estimation, and it is particularly true for highly nonlinear systems. In such cases, we would like to have the means to move the parameter estimates from the initial guess to the optimum even if the condition number of matrix A is excessively high during these initial iterations. [Pg.143]

If matrix A is well-conditioned, the above equation should be used. If, however, A is ill-conditioned, we have the option, without any additional computational effort, to use instead the pseudoinverse of A. Essentially, instead of A^-1 in Equation 8.31, we use the pseudoinverse of A, denoted A^+. ... [Pg.143]
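A minimal sketch of the pseudoinverse option via the SVD (the truncation threshold rcond and the test matrix are illustrative choices; numpy.linalg.pinv packages the same computation).

```python
import numpy as np

def pseudoinverse_step(A, b, rcond=1e-10):
    """Step using the pseudoinverse A^+ instead of A^-1.
    Singular values below rcond * sigma_max are treated as zero, which
    suppresses the directions responsible for the ill-conditioning.
    Equivalent to np.linalg.pinv(A, rcond=rcond) @ b."""
    U, s, Vt = np.linalg.svd(A)
    keep = s > rcond * s[0]
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])           # nearly rank-deficient (illustrative)
b = np.array([2.0, 2.0])
print("plain solve:  ", np.linalg.solve(A, b))    # wildly sensitive to b
print("pseudoinverse:", pseudoinverse_step(A, b)) # stable minimum-norm answer ~ (1, 1)
```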

When the parameters differ by more than one order of magnitude, matrix A may appear to be ill-conditioned even if the parameter estimation problem is well-posed. The best way to overcome this problem is by introducing the reduced sensitivity coefficients, defined as... [Pg.145]

With this modification the conditioning of matrix A is significantly improved, and cond(A_R) gives a more reliable measure of the ill-conditioning of the parameter estimation problem. This modification has been implemented in all computer programs provided with this book. [Pg.146]
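Assuming the usual definition of the reduced sensitivities, namely the ordinary sensitivity coefficients multiplied by the corresponding parameter values, the following sketch shows the improvement (the numbers are illustrative, not from the source).

```python
import numpy as np

# Sensitivity matrix G[i, j] = dy_i/dk_j for parameters of very different
# magnitudes (illustrative: k1 ~ 1e5, k2 ~ 1e-2).
k = np.array([1.0e5, 1.0e-2])
G = np.array([[2.0e-5, 3.0],
              [1.0e-5, 5.0]])

A = G.T @ G                        # normal-equations matrix
G_reduced = G * k                  # G_R[i, j] = k_j * dy_i/dk_j (column scaling)
A_reduced = G_reduced.T @ G_reduced

print(f"cond(A)   = {np.linalg.cond(A):.2e}")
print(f"cond(A_R) = {np.linalg.cond(A_reduced):.2e}")   # orders of magnitude smaller
```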

The remedies to increase the region of convergence include the use of the pseudoinverse or of Marquardt's modification, which overcome the problem of ill-conditioning of matrix A. However, if the basic sensitivity information is not there, the estimated direction Δk^(j+1) cannot be obtained reliably. [Pg.152]
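A minimal sketch of Marquardt's modification mentioned above, assuming its usual form of a diagonal shift added to A (the matrix and right-hand side are illustrative).

```python
import numpy as np

def marquardt_step(A, b, lam):
    """Marquardt's modification: solve (A + lam * I) dk = b.
    The shift lam raises the small eigenvalues of A, so the shifted
    matrix has condition number (w_max + lam) / (w_min + lam)."""
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), b)

A = np.array([[5.0,  0.11],
              [0.11, 0.0034]])     # ill-conditioned normal-equations matrix
b = np.array([1.0, 0.02])
for lam in (0.0, 1e-6, 1e-3):
    print(f"lambda = {lam:7.1e} -> step = {marquardt_step(A, b, lam)}")
```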

On certain occasions the volume criterion is not appropriate. In particular, when we have an ill-conditioned problem, use of the volume criterion results in an elongated ellipsoid (like a cucumber) for the joint confidence region that has a small volume; however, the variance of the individual parameters can be very high. We can determine the shape of the joint confidence region by examining cond(A), which is equal to the ratio of the largest to the smallest eigenvalue and represents the ratio of the principal axes of... [Pg.189]
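For a symmetric A these quantities are easy to compute; note that the semi-axis lengths of the ellipsoid Δk^T A Δk ≤ c scale as the inverse square roots of the eigenvalues, so under that geometric convention the ratio of axis lengths is the square root of cond(A) (the matrix below is illustrative).

```python
import numpy as np

A = np.array([[5.0,  0.11],
              [0.11, 0.0034]])          # ill-conditioned, illustrative
w = np.linalg.eigvalsh(A)
print("cond(A) = eigenvalue ratio:", w.max() / w.min())
print("semi-axis length ratio    :", np.sqrt(w.max() / w.min()))  # long, thin "cucumber"
```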

In this problem it is very difficult to obtain convergence to the global optimum, as the condition number of matrix A at the above local optimum is 3x10^18. Even if this were the global optimum, a small change in the data would result in widely different parameter estimates, since this parameter estimation problem appears to be fairly ill-conditioned. [Pg.292]

At this point we should always try to see whether there is anything else that could be done to reduce the ill-conditioning of the problem. Upon reexamination of the structure of the model given by Equation 16.4, we readily notice that it can be rewritten as... [Pg.292]

The LS objective function was found to be 0.7604x10^-9. This value is almost three orders of magnitude smaller than the one found earlier at a local optimum. The estimated parameter values were A1=22.672, A2=132.4, A3=585320, E1=13899, E2=2439.6 and E3=13506, where the parameters Ai and Ei were estimated back from the reparameterized quantities. With this reparameterization we were able to lessen the ill-conditioning of the problem, since the condition number of matrix A was now 5.6x10^8. [Pg.293]
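The specific reparameterization follows from Equation 16.4 of the source; as a hedged sketch of the general idea for Arrhenius-type parameters (the model form, temperatures, and parameter values below are assumptions, not the book's), referencing the rate constant to a mean temperature largely decorrelates the pre-exponential factor from the activation energy.

```python
import numpy as np

# Common reparameterization for Arrhenius-type models:
#   k(T) = A * exp(-E/(R*T)) = kref * exp(-(E/R) * (1/T - 1/Tref)),  kref = k(Tref)
# Estimating (kref, E) instead of (A, E) removes most of the strong
# correlation between the two parameters and improves the conditioning.
R = 8.314
T = np.linspace(300.0, 320.0, 20)
A_true, E_true = 1.0e5, 5.0e4
Tref = T.mean()
kT = A_true * np.exp(-E_true / (R * T))

# Reduced sensitivity columns theta_j * dk/dtheta_j for each parameterization.
G_orig  = np.column_stack([kT, -kT * E_true / (R * T)])               # (A, E)
G_repar = np.column_stack([kT, -kT * (E_true / R) * (1/T - 1/Tref)])  # (kref, E)

for name, G in (("original (A, E)", G_orig), ("reparameterized (kref, E)", G_repar)):
    print(f"{name:26s} cond(G^T G) = {np.linalg.cond(G.T @ G):.2e}")
```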

Indeed, using the Gauss-Newton method with an initial estimate of k(0) = (450, 7), convergence to the optimum was achieved in three iterations with no need to employ Marquardt's modification. The optimal parameter estimates are k1 = 420.2 ± 8.68% and k2 = 5.705 ± 24.58%. It should be noted, however, that this type of model can often lead to ill-conditioned estimation problems if the data have not been collected at both low and high values of the independent variable. The convergence to the optimum is shown in Table 17.5, starting with the initial guess k(0) = (1, 1). [Pg.326]

