
Iterative solutions of the linear inverse problem

The functional φ(m) is called a misfit functional. It can be written in the form... [Pg.91]

It is well known that the best way to solve an optimization problem for a conventional function is based on differentiating the function and equating the derivatives to zero. A similar approach can be applied, in principle, to functionals. However, in... [Pg.91]

The problem of minimization of the misfit functional (4.3) can be solved using variational calculus. Let us calculate the first variation of φ(m)... [Pg.92]

From the last formula we have the following operator equation  [Pg.92]

Equation (4.4) is called the Euler equation. The element m at which the misfit functional achieves its minimum is a solution of the corresponding Euler equation. In the case of discrete data and model parameters, the Euler equation becomes the normal equation (3.8) for the corresponding system of linear equations (3.4). [Pg.92]
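As an illustration of that discrete case, the minimal NumPy sketch below (the forward matrix A and data d are made-up stand-ins, not taken from the text) shows the misfit functional's Euler equation reducing to the normal equations AᵀA m = Aᵀd:

```python
import numpy as np

# Hypothetical discrete linear inverse problem d = A m (A and d are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))   # forward operator: 20 data, 5 model parameters
m_true = rng.standard_normal(5)
d = A @ m_true                     # noise-free synthetic data

# Misfit functional phi(m) = ||A m - d||^2; setting its first variation to zero
# yields the Euler (normal) equation  A^T A m = A^T d.
m_est = np.linalg.solve(A.T @ A, A.T @ d)
print(np.allclose(m_est, m_true))  # True for this overdetermined, full-rank toy case
```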


Before moving to the iterative Born inversion technique, we introduce a fast imaging technique based on a Born approximation. Let us recall formula (5.91) for an approximate regularized solution of the linear inverse problem. In the case of equation (10.16), this formula takes the form... [Pg.292]

In the second part, I describe basic methods of solution of the linear inverse problem using regularization, paying special attention to iterative inversion methods. [Pg.631]

It may appear as if this is no great improvement, since finding a solution to a linear equation system with direct methods requires about n³ operations, about half as many as the inversion. However, the solution of the linear equation system can be accomplished by iterative methods where, in each step, some product Jv is formed. Superficially, this cuts down the number of operations but still requires the Jacobian to be computed and stored. However, for a very large class of important problems, such a product can be efficiently computed without the need of precalculating or storing the Jacobian. [Pg.31]
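As a hedged sketch of this matrix-free idea, the example below feeds only a Jacobian-vector product (a purely illustrative tridiagonal operator) to SciPy's conjugate-gradient solver via a LinearOperator, so the Jacobian is never formed or stored:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 200

def jac_vec(v):
    # Illustrative matrix-free Jacobian-vector product: J acts like a
    # tridiagonal operator (2 on the diagonal, -1 on the off-diagonals)
    # but is never built or stored explicitly.
    Jv = 2.0 * v
    Jv[:-1] -= v[1:]
    Jv[1:] -= v[:-1]
    return Jv

J = LinearOperator((n, n), matvec=jac_vec, dtype=float)
b = np.ones(n)

# The Krylov iteration needs only products J @ v, never J itself.
x, info = cg(J, b)
print(info, np.linalg.norm(jac_vec(x) - b))   # info == 0 signals convergence
```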

As for the solution of the linear system, the standard approach based on the inversion of the D matrix (see equation (48)) becomes unmanageable for very large solutes due to both the computational time and the disk memory occupation it requires. To deal with these cases, an iterative procedure has been developed [112] which is able to solve equation (48) without defining and inverting the full D matrix. A specific two-step extrapolation technique proved very effective in the solution of this problem, especially for the PCM variant based on the normal... [Pg.502]
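Equation (48) and the extrapolation scheme of [112] are not reproduced here, so the sketch below is only a generic stand-in for the idea of solving such a system iteratively rather than by inverting D; the matrix D and right-hand side b are hypothetical, and a simple Jacobi-type fixed point replaces the actual procedure:

```python
import numpy as np

# Stand-in for a PCM-type linear system D q = b (equation (48) is not shown in
# the text, so D and b here are purely hypothetical).
rng = np.random.default_rng(1)
n = 200
D = np.eye(n) + rng.standard_normal((n, n)) / (10 * n)   # diagonally dominant
b = rng.standard_normal(n)

# Jacobi-type fixed-point iteration: each sweep needs only the action of the
# off-diagonal part of D on the current vector, so D is never inverted.
diag = np.diag(D)
q = b / diag                       # simple starting guess
for _ in range(200):
    q_new = (b - (D @ q - diag * q)) / diag
    if np.linalg.norm(q_new - q) < 1e-12:
        q = q_new
        break
    q = q_new

print(np.linalg.norm(D @ q - b))   # residual should be ~0
```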

Of course, it is usually not enough to use only one iteration for the solution of a nonlinear inverse problem in the framework of the Newton method (because we used the linearized approximation (5.38)). However, we can construct an iterative process based on the relationship (5.43) ... [Pg.134]
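Relationship (5.43) itself is not shown in this excerpt; the following sketch illustrates the generic Newton-type iteration it refers to, with a toy forward operator and Fréchet derivative standing in for the real ones:

```python
import numpy as np

# Toy nonlinear forward operator A(m) and its Frechet derivative (Jacobian);
# both are invented stand-ins, not the operators of the text.
def forward(m):
    return np.array([m[0] ** 2 + m[1], np.sin(m[0]) + m[1] ** 2])

def frechet(m):
    return np.array([[2.0 * m[0], 1.0],
                     [np.cos(m[0]), 2.0 * m[1]]])

d_obs = forward(np.array([1.0, 0.5]))   # synthetic data from a known model
m = np.array([0.5, 0.2])                # initial guess

# Newton-type iteration: at each step solve the linearized problem
# F_k * delta = d_obs - A(m_k) and update m_{k+1} = m_k + delta.
for _ in range(25):
    r = d_obs - forward(m)
    delta, *_ = np.linalg.lstsq(frechet(m), r, rcond=None)
    m = m + delta
    if np.linalg.norm(delta) < 1e-12:
        break

print(m, forward(m) - d_obs)            # residual should be essentially zero
```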

The classical formulation of the ASC problem presented by Drummond (1988) has some aspects of interest. The supertensor formalism he introduces is similar, although not identical, to the BEM matrix formulation shown in eqs. (51-54). The set of linear equations introduced there may be used in classical or quantum iterative solutions of the problem (for the quantum use, see Grant et al., 1990, and Coitino et al., 1995a) or, alternatively, in a direct calculation via the inversion of the D matrix (see eqs. 52-53). Drummond's formulation makes it easier to handle the equivalent supermatrix he defines. This approach has not been tested in quantum calculations. [Pg.57]

Moreover, in Section 3 we construct a Newton-type algorithm for finding the 3D velocity distribution from 3D travel time measurements for the local inverse kinematic problem. Initially, as a first approximation, we choose a sound velocity that increases linearly with depth, because it was shown in [5] that with this choice of linearization our problem reduces to a sequence of 2D Radon transforms in discs. Our case is much harder, since we consider solving a nonlinear problem and therefore need to solve a direct 3D problem at each iteration. However, we can show that, in our case, already the second iteration is often much better than the solution from the linearized approximation. [Pg.268]

If we work at a fixed nuclear conformation (clamped nuclei), the potential Ve will be a function of ρM, i.e. Ve(ρM). Schrödinger equation (36) is not linear, as the Hamiltonian depends on the eigenfunction Φ. To solve this equation three methods have been devised, i.e. a) iterative solution, b) closure solution, and c) matrix inversion. All methods are in current use. According to the type of problem, it is convenient to adopt a different procedure. a) Iterative solution. [Pg.31]
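As a schematic of option a) only, and not of the actual equation (36), the sketch below iterates a toy eigenproblem whose matrix is rebuilt from the current solution at every step until self-consistency is reached:

```python
import numpy as np

# Toy matrix "Hamiltonian" that depends on the current eigenvector, mimicking
# an equation that is nonlinear because the operator contains its own solution.
H0 = np.diag([1.0, 2.0, 3.0])

def hamiltonian(c):
    # Invented solution-dependent coupling; not a real solvation operator.
    return H0 + 0.3 * np.outer(c, c)

c = np.ones(3) / np.sqrt(3.0)            # starting guess for the eigenvector
for _ in range(100):
    w, V = np.linalg.eigh(hamiltonian(c))
    c_new = V[:, 0]                      # lowest eigenvector of the current H
    # Eigenvectors are defined only up to a sign, so compare both orientations.
    if min(np.linalg.norm(c_new - c), np.linalg.norm(c_new + c)) < 1e-10:
        c = c_new
        break
    c = c_new

print(w[0], c)                           # self-consistent lowest eigenpair
```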

However, the coefficients appear only linearly in the above set of equations, so that the solution to the problem is simply a matrix inversion; no iteration is needed to satisfy the self-consistency requirement. It is easy to see that the size of this matrix inversion may become quite large if a large basis is used: typically one is inverting a matrix of dimension n(m - n) for m basis functions and n occupied orbitals. [Pg.702]

A problem is that for a biomolecule m is very large and the dimensionality of the Jacobian matrix is correspondingly large, so the construction and inversion of the matrix become considerably time-consuming. To overcome this problem, we choose Dab such that exp[-uab(ri)/(kBT) + ηab(ri)] in the closure equation is sufficiently smaller than 1.0 for ri < Dab. Namely, the Dab chosen is smaller than, but close to, σab. As a result, the Jacobian matrix becomes almost independent of the solute molecule-water and -ion correlation functions. The matrix can then be treated as part of the input data: it is constant against changes in all the iteration variables. In other words, the construction of the matrix is required only once. At each N-R iterative step, the linear set of equations written as... [Pg.162]
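A minimal sketch of the frozen-Jacobian idea described here (the residual function and its Jacobian are illustrative, not the closure equations themselves): the Jacobian is built and LU-factorized once and then reused unchanged at every N-R step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative nonlinear residual G(x) = 0 (not the actual closure equations).
def residual(x):
    return x + 0.1 * np.tanh(x) - 1.0

n = 50
x = np.zeros(n)

# Frozen Jacobian: evaluated and factorized only once, at the starting point.
J0 = (1.0 + 0.1) * np.eye(n)        # d/dx [x + 0.1*tanh(x)] evaluated at x = 0
lu, piv = lu_factor(J0)

for _ in range(100):
    dx = lu_solve((lu, piv), -residual(x))   # reuse the stored factorization
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break

print(np.max(np.abs(residual(x))))  # should be ~0
```

The price of freezing the Jacobian is linear rather than quadratic convergence, which is the usual trade-off when its construction dominates the cost.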

In general, closed-form solutions that allow direct calculation of position from radionavigation observables do not exist. However, the inverse calculations can be done. Given one's position and clock offset, one can predict GPS or GLONASS pseudo-ranges or LORAN TDs or TOAs. These expressions are nonlinear but can be linearized about a point and the problem solved by iteration. The basic approach can be summarized... [Pg.1857]
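A hedged sketch of that linearize-and-iterate approach for pseudoranges; the satellite coordinates, receiver position, and clock offset below are invented numbers used only to show the iteration:

```python
import numpy as np

# Invented satellite positions (m), receiver position, and clock offset.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
x_true = np.array([1e6, 2e6, 3e6])        # receiver position (m)
b_true = 1e-3 * 299792458.0               # clock offset expressed in meters
rho = np.linalg.norm(sats - x_true, axis=1) + b_true   # synthetic pseudoranges

# Linearize the pseudorange model about the current estimate and iterate.
est = np.zeros(4)                          # [x, y, z, clock bias]
for _ in range(10):
    vec = est[:3] - sats
    dist = np.linalg.norm(vec, axis=1)
    pred = dist + est[3]
    H = np.hstack([vec / dist[:, None], np.ones((len(sats), 1))])  # design matrix
    delta, *_ = np.linalg.lstsq(H, rho - pred, rcond=None)
    est = est + delta
    if np.linalg.norm(delta) < 1e-4:
        break

print(est[:3] - x_true, est[3] - b_true)   # errors should be tiny
```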


© 2024 chempedia.info