Big Chemical Encyclopedia


Misfit functional

This functional is often called a misfit functional. Thus, the parametric... [Pg.43]

The functional P(m, d) on the elements of this line is a nonnegative quadratic function of β (because the misfit functional A(m) − d is a quadratic functional as well). Therefore, it cannot reach its minimum for two different values of β. [Pg.45]

To justify this approach we will examine more carefully the properties of all three functionals involved in the regularization method: the Tikhonov parametric functional and the stabilizing and misfit functionals. [Pg.52]

Figure 2-6 The L-curve represents a simple curve, for all possible α, of the misfit functional, φ(α), versus the stabilizing functional, s(α), plotted in log-log scale. The distinct corner, separating the vertical and the horizontal branches of this curve, corresponds to the quasi-optimal value of the regularization parameter α.
It is based on plotting, for all possible α, the curve of the misfit functional, φ(α), versus the stabilizing functional, s(α) (where we use notations (2.82)). The L-curve illustrates the trade-off between the best fitting (minimizing the misfit) and the most reasonable stabilization (minimizing the stabilizer). In the case where α is selected too small, the minimization of the parametric functional P^α(m) is equivalent to the minimization of the misfit functional; therefore φ(α) decreases, while s(α) increases. [Pg.55]
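The text gives no code, but the L-curve construction is easy to sketch numerically. The following NumPy illustration is not from the source: the matrix, noise level, and α-grid are all invented for the example. For each α it records the misfit φ(α) and the stabilizer s(α) of the Tikhonov solution of a small ill-conditioned linear problem.

```python
import numpy as np

# Illustrative ill-conditioned problem d = A m + noise.
rng = np.random.default_rng(0)
n = 12
A = np.vander(np.linspace(0.0, 1.0, n), n, increasing=True)  # nearly singular columns
m_true = rng.standard_normal(n)
d = A @ m_true + 1e-3 * rng.standard_normal(n)

alphas = np.logspace(-10, 2, 25)
misfit, stabilizer = [], []
for a in alphas:
    # Tikhonov solution m_a = (A^T A + a I)^{-1} A^T d
    m_a = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ d)
    misfit.append(np.linalg.norm(A @ m_a - d) ** 2)   # phi(alpha)
    stabilizer.append(np.linalg.norm(m_a) ** 2)       # s(alpha)

# Plotting log(misfit) against log(stabilizer) traces the L-curve;
# the distinct corner marks the quasi-optimal alpha.
```

As α grows, the misfit grows while the stabilizer shrinks; the log-log plot of one against the other yields the characteristic L shape.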

The functional φ is called a misfit functional. It can be written in the form... [Pg.63]

The problem of minimization of the misfit functional (3.7) can be solved using variational calculus. Let us calculate the first variation of φ(m) ... [Pg.63]
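For a linear discrete operator the calculation can be sketched as follows (a reconstruction in the notation of the surrounding text, not the source's own display):

```latex
\varphi(\mathbf{m}) = \left(A\mathbf{m}-\mathbf{d},\; A\mathbf{m}-\mathbf{d}\right),
\qquad
\delta\varphi(\mathbf{m})
  = 2\left(A\,\delta\mathbf{m},\; A\mathbf{m}-\mathbf{d}\right)
  = 2\left(\delta\mathbf{m},\; A^{T}\!\left(A\mathbf{m}-\mathbf{d}\right)\right).
```

Setting δφ(m) = 0 for an arbitrary variation δm yields the normal system AᵀAm = Aᵀd.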

Note that the normal system can be obtained formally by multiplication of the original system (3.2) by the transposed matrix Aᵀ. However, in general, the pseudosolution m₀ is not equivalent to the solution of the original system, because the new system described by equation (3.8) is not equivalent to the original system (3.2) if matrix A is not square. The main characteristic of the pseudosolution is that it provides the minimum of the misfit functional. [Pg.64]
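This property can be checked numerically. The matrix and data below are invented for illustration: for an inconsistent rectangular system Am = d, the pseudosolution obtained from the normal system coincides with the library least-squares solution and minimizes the misfit.

```python
import numpy as np

# Illustrative overdetermined system: 8 equations, 3 unknowns.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))
d = rng.standard_normal(8)          # generic d: A m = d has no exact solution

m0 = np.linalg.solve(A.T @ A, A.T @ d)            # normal-system pseudosolution
m_lstsq, *_ = np.linalg.lstsq(A, d, rcond=None)   # library least squares
assert np.allclose(m0, m_lstsq)

# Perturbing m0 in any direction can only increase the misfit.
misfit = lambda m: np.linalg.norm(A @ m - d) ** 2
assert misfit(m0) <= misfit(m0 + 0.1 * rng.standard_normal(3))
```

The pseudosolution does not satisfy Am = d (no m does), but it does satisfy the normal system exactly.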

Equation (3.13) is a natural generalization to rectangular matrices of formula (E.14) from Appendix E for square matrices. Thus, minimization of the misfit functional opens a way to construct a generalized inverse matrix for any matrix, rectangular or square, with the only limitation being that the elements of the diagonal matrix Q are not equal to zero: Qᵢᵢ ≠ 0, i = 1, 2, ..., L. [Pg.64]

Let us introduce some weighting factors wᵢ for estimation of the residuals rᵢ. The reason for the weighting is that in practice some observations are made with more accuracy than others. In this case one would like the prediction errors rᵢ of the more accurate observations to have a greater weight than those of the inaccurate observations. To accomplish this weighting we define the weighted misfit functional as follows... [Pg.68]

The problem of minimization of the weighted misfit functional can be solved by calculating the first variation of this functional and setting it equal to zero ... [Pg.68]
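The resulting weighted normal system can be verified with a minimal NumPy sketch (the weights and data are invented for illustration), assuming a diagonal weighting matrix W = diag(wᵢ): minimizing ||W(Am − d)||² leads to AᵀW²Am = AᵀW²d.

```python
import numpy as np

# Illustrative system with the first five observations trusted more.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))
d = rng.standard_normal(10)
w = np.ones(10)
w[:5] = 10.0                        # larger weight for the accurate observations
W = np.diag(w)

# Weighted normal system: A^T W^2 A m = A^T W^2 d
m_w = np.linalg.solve(A.T @ W**2 @ A, A.T @ W**2 @ d)

# Equivalent view: ordinary least squares on the row-scaled system (W A) m = W d.
m_scaled, *_ = np.linalg.lstsq(W @ A, W @ d, rcond=None)
assert np.allclose(m_w, m_scaled)
```

Row-scaling by W before an ordinary least-squares solve is the standard practical way to implement the weighted misfit.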

Different modifications of least-squares solutions of linear inverse problems have resulted from the straightforward minimization of the corresponding misfit functionals. However, all these solutions have many limitations and are very sensitive to small variations of the observed data. An obvious limitation occurs when the inverse matrices (AᵀA)⁻¹ or (AᵀWA)⁻¹ do not exist. However, even when the inverse matrices exist, they can still be ill-conditioned (become nearly singular). In this case the solution would be extremely unstable and unrealistic. To overcome these difficulties we have to apply regularization methods. [Pg.74]
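The instability, and how Tikhonov regularization suppresses it, can be demonstrated on a small ill-conditioned example. Every choice below (the Hilbert matrix, the noise level, the regularization parameter) is illustrative, not from the source.

```python
import numpy as np

# 8x8 Hilbert matrix: A^T A is nearly singular, so the plain normal-system
# solution reacts violently to a tiny data perturbation, while the Tikhonov
# solution m_alpha = (A^T A + alpha I)^{-1} A^T d does not.
rng = np.random.default_rng(3)
n = 8
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)      # Hilbert matrix
d = A @ np.ones(n)                             # exact data for m_true = (1, ..., 1)
d_noisy = d + 1e-8 * rng.standard_normal(n)    # tiny perturbation

def solve(alpha, rhs):
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ rhs)

jump_plain = np.linalg.norm(solve(0.0, d_noisy) - solve(0.0, d))
jump_reg = np.linalg.norm(solve(1e-6, d_noisy) - solve(1e-6, d))
assert jump_reg < jump_plain   # regularization suppresses the instability
```

A perturbation of order 10⁻⁸ in the data is amplified by orders of magnitude in the unregularized solution, but barely moves the regularized one.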

Equation (4.4) is called the Euler equation. The element m at which a misfit functional achieves a minimum is a solution of the corresponding Euler equation. In the case of discrete data and model parameters, the Euler equation becomes a normal equation (3.8) for the corresponding system of linear equations (3.4). [Pg.92]

Note that the Euler equation can be obtained formally by applying the adjoint operator, A*, to both sides of equation (4.1). However, the Euler equation (4.4) is not, in general, equivalent to the original inverse problem (4.1). The main characteristic of the Euler equation is that it provides the minimum of the misfit functional. The Euler equation (4.4) is equivalent to the original equation (4.1) if each of these equations has a unique solution in M. Note that, in this case, the operator A*A is always a positive self-adjoint operator, because... [Pg.92]

We have established in Chapter 2 that, in the case where A is a linear operator, D and M are Hilbert spaces, and s(m) is a quadratic functional, the solution of the minimization problem (4.99) is unique. Let us find the equation for the minimum of the functional P^α(m). We will use the same technique for solving this problem that we considered above for the misfit functional minimization. [Pg.114]

We start our discussion with the most important and clearly understandable method of steepest descent. The idea of this method can be explained using the example of misfit functional minimization ... [Pg.121]

Note that, in general, the misfit functional (5.2) may have several minima. We will distinguish three types of minima: strong local minima, weak local minima, and the global minimum. [Pg.122]

Note that there is no global minimum of the misfit functional if the solution of the original inverse problem (5.1) is nonunique. We know also that in this case we have to apply regularization theory to solve the ill-posed inverse problem. In this section, however, we will assume that the misfit functional (5.2) has a global minimum, so there is only one point at which φ(m) assumes its least value. Our main goal will be to find this point. [Pg.122]

Remark 1. It can be proved that the direction, l(m), determined above, is the steepest ascent direction. This means that, along this direction, the misfit functional increases most rapidly in the vicinity of m. [Pg.124]

The absolute value of the first variation of the misfit functional can be estimated using equation (5.8) and the Schwarz inequality (A.38) ... [Pg.124]
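For a linear operator the estimate can be sketched as follows (a reconstruction consistent with the variational formulas above, not the source's own display):

```latex
\left|\delta\varphi(\mathbf{m})\right|
  = 2\left|\left(\delta\mathbf{m},\; A^{T}\!\left(A\mathbf{m}-\mathbf{d}\right)\right)\right|
  \;\le\; 2\,\|\delta\mathbf{m}\|\;\left\|A^{T}\!\left(A\mathbf{m}-\mathbf{d}\right)\right\|,
```

with equality exactly when δm is parallel to Aᵀ(Am − d); this is why l(m) = Aᵀ(Am − d) is the direction of steepest ascent.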

The iteration process (5.14) together with the condition (5.15) gives us a numerical scheme for the steepest descent method applied to misfit functional minimization. [Pg.125]

Figure 5-2 The plot of the misfit functional value as a function of the model parameters m. The vector of the steepest ascent, l(mₙ), shows the direction of "climbing the hill" along the misfit functional surface. The intersection between the vertical plane P drawn through the direction of the steepest descent at point mₙ and the misfit functional surface is shown by a solid parabola-type curve. The steepest descent step begins at the point φ(mₙ) and ends at the point φ(mₙ₊₁) at the minimum of this curve. The second parabola-type curve (on the left) is drawn for one of the subsequent iteration points. Repeating the steepest descent iteration, we move along a set of mutually orthogonal segments, as shown by the solid arrows in the space M of the model parameters.
Figure 5-3 The top part of the figure shows the isolines of the misfit functional map and the steepest descent path of the iterative solutions in the space of model parameters. The bottom part presents a magnified element of this map with just one iteration step shown, from iteration (n − 1) to iteration number n. According to the line search principle, the direction of the steepest ascent at iteration number n must be perpendicular to the misfit isoline at the minimum point along the previous direction of the steepest descent. Therefore, many steps may be required to reach the global minimum, because every subsequent steepest descent direction is perpendicular to the previous one, similar to the path of experienced slalom skiers.
The iteration process (5.14) together with the linear line search described by formula (5.21) gives us a numerical scheme for the steepest descent method for misfit functional minimization. Thus, this algorithm for the steepest descent method can be summarized as follows ... [Pg.130]
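For a linear problem the scheme can be sketched in a few lines of NumPy (the matrix, data, and stopping tolerance are illustrative). For the quadratic misfit φ(m) = ||Am − d||², the direction is lₙ = Aᵀ(Amₙ − d) and the line search step has the closed form kₙ = (lₙ, lₙ)/(Alₙ, Alₙ).

```python
import numpy as np

# Illustrative linear problem.
rng = np.random.default_rng(4)
A = rng.standard_normal((12, 4))
d = rng.standard_normal(12)
phi = lambda m: np.linalg.norm(A @ m - d) ** 2

m = np.zeros(4)
history = [phi(m)]
for _ in range(2000):
    l = A.T @ (A @ m - d)          # steepest ascent direction l(m)
    if np.linalg.norm(l) < 1e-10:  # gradient vanishes at the minimum
        break
    Al = A @ l
    k = (l @ l) / (Al @ Al)        # exact line search along -l for a quadratic
    m = m - k * l                  # descend against the gradient
    history.append(phi(m))

# Successive descent directions are mutually orthogonal, and the misfit
# decreases monotonically toward its least-squares minimum.
```

The zig-zag convergence visible in `history` is exactly the slalom-skier behavior described for Figure 5-3.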

To determine this specific direction, Δm, let us calculate the misfit functional for this first iteration... [Pg.131]

Figure 5-4 The plot of the misfit functional value as a function of model parameters m. In the framework of the Newton method one tries to solve the problem of minimization in one step. The direction of this step is shown by the arrows in the space M of model parameters and at the misfit surface.
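For a quadratic misfit the one-step property is easy to verify numerically (illustrative data). The Hessian of φ(m) = ||Am − d||² is 2AᵀA and the gradient is 2Aᵀ(Am − d); the factor 2 cancels in the Newton step.

```python
import numpy as np

# Newton step for the quadratic misfit:
#   m1 = m0 - (A^T A)^{-1} A^T (A m0 - d)
rng = np.random.default_rng(5)
A = rng.standard_normal((10, 4))
d = rng.standard_normal(10)

m0 = rng.standard_normal(4)                # arbitrary starting model
grad = A.T @ (A @ m0 - d)                  # gradient (up to the factor 2)
m1 = m0 - np.linalg.solve(A.T @ A, grad)   # single Newton step

m_ls, *_ = np.linalg.lstsq(A, d, rcond=None)
assert np.allclose(m1, m_ls)               # minimum reached in one step
```

For non-quadratic misfits the step must be iterated, but the quadratic case shows why the Newton direction points straight at the minimum in Figure 5-4.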
In a previous section, we considered the problem of minimization of the misfit functional. However, we know this problem is ill-posed and unstable. To find the stable solution for the minimization problem we have to consider the regularized minimization problem,... [Pg.143]

Therefore, on each iteration of the re-weighted RCG method we actually minimize the parametric functional with a different stabilizer, because the weighting matrix Wₑₙ is updated on each iteration. In order to ensure the convergence of the misfit functional to the global minimum, we use adaptive regularization and decrease αₙ₊₁ if γ > 1 ... [Pg.162]

The gravity inverse problem can be formulated as the minimization of the misfit functional ... [Pg.177]





