Big Chemical Encyclopedia


Initial guess, improved

Equations (16.3)-(16.7) are solved with an iterative solution algorithm. Using the crossflow solution as an initial guess, improved estimates for Xij and are obtained from Equations (16.9) and (16.10), respectively, which follow from Equations (16.3) and (16.4) ... [Pg.337]

To solve the Kohn-Sham equations a self-consistent approach is taken. An initial guess of the density is fed into Equation (3.47), from which a set of orbitals can be derived, leading to an improved value for the density, which is then used in the second iteration, and so on until convergence is achieved. [Pg.149]
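The self-consistent loop described above can be sketched as a fixed-point iteration on the density. The toy `new_density` map below is an illustrative stand-in for the real Kohn-Sham machinery (solving for orbitals and rebuilding the density), and the linear mixing factor is a common stabilization choice, not part of the quoted text.

```python
# Minimal sketch of a self-consistent-field (SCF) loop: iterate the
# density map until input and output densities agree, with linear
# mixing for stability.

def scf(new_density, rho0, mixing=0.5, tol=1e-10, max_iter=200):
    """Iterate rho -> new_density(rho) until self-consistency."""
    rho = rho0
    for _ in range(max_iter):
        rho_out = new_density(rho)        # stand-in for: solve orbitals, rebuild density
        if abs(rho_out - rho) < tol:      # converged: input and output agree
            return rho_out
        rho = (1 - mixing) * rho + mixing * rho_out   # damped update
    raise RuntimeError("SCF did not converge")

# Toy density map with fixed point rho = 2, purely for illustration
rho_star = scf(lambda r: (r + 6.0) / 4.0, rho0=0.0)
```

The mixing parameter trades speed for robustness: `mixing=1` is plain fixed-point iteration, smaller values damp oscillations between iterations.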

There are some systems for which the default optimization procedure may not succeed on its own. A common problem with many difficult cases is that the force constants estimated by the optimization procedure differ substantially from the actual values. By default, a geometry optimization starts with an initial guess for the second derivative matrix derived from a simple valence force field. The approximate matrix is improved at each step of the optimization using the computed first derivatives. [Pg.47]
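One common way an optimizer improves an approximate second-derivative matrix from computed first derivatives is a BFGS-type update; the sketch below (an illustration under that assumption, not the program's actual code) shows a single update on a quadratic test surface, after which the matrix satisfies the secant condition B·s = y for the step just taken.

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of approximate Hessian B from step s and gradient change y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

# Quadratic test surface E(x) = 1/2 x^T A x, so gradients are g(x) = A x
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
grad = lambda x: A @ x

B = np.eye(2)                  # crude initial guess for the second-derivative matrix
x = np.array([1.0, 1.0])
s = np.array([0.5, -0.2])      # an optimization step
y = grad(x + s) - grad(x)      # information from computed first derivatives
B = bfgs_update(B, s, y)       # improved approximate Hessian
```

Repeated along successive optimization steps, updates of this kind pull the approximate matrix toward the true second derivatives in the directions actually explored.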

Fig. 4.1. Newton's method for solving a nonlinear equation with one unknown variable. The solution, or root, is the value of x at which the residual function R(x) crosses zero. In (a), given an initial guess x^(0), projecting the tangent to the residual curve to zero gives an improved guess x^(1). By repeating this operation (b), the iteration approaches the root.
To improve an initial guess (x^(0), y^(0)), we reach above this point and project tangent planes from the surfaces of R1 and R2. The improved guess is the point... [Pg.58]
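The one-variable scheme of Fig. 4.1 can be sketched directly: each step projects the tangent of the residual R(x) to zero, giving the improved guess x^(k+1) = x^(k) − R(x^(k))/R′(x^(k)). The example residual below is invented for illustration.

```python
# Newton's method for one nonlinear equation in one unknown.

def newton(R, dR, x0, tol=1e-12, max_iter=50):
    """Improve the initial guess x0 until the residual R(x) is near zero."""
    x = x0
    for _ in range(max_iter):
        r = R(x)
        if abs(r) < tol:
            return x
        x = x - r / dR(x)      # project the tangent to zero
    raise RuntimeError("no convergence from this initial guess")

# Example residual R(x) = x^2 - 2, whose root is sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```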

From the initial guess x^(0), we calculate the m values of f(x^(0)) and their derivatives relative to each x_j. Solving the least-squares system, we get an improved estimate of x, which we use as the initial value for the next iteration, until the values cease to change significantly. Indicating the kth estimate by the superscript k, we can write... [Pg.274]
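The scheme just described is a Gauss-Newton-style iteration: linearize f around the current estimate, solve the least-squares system J·Δx = −f for the correction, and repeat until the estimate stops changing. The model and synthetic data below are invented for illustration.

```python
import numpy as np

def gauss_newton(f, jac, x0, tol=1e-12, max_iter=50):
    """Improve the initial guess x0 by repeated linearized least-squares solves."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx, *_ = np.linalg.lstsq(jac(x), -f(x), rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < tol:   # values cease to change significantly
            return x
    return x

# Fit y = a*exp(b*t) to noise-free synthetic data generated with a=2, b=-1
t = np.linspace(0.0, 2.0, 10)
y = 2.0 * np.exp(-1.0 * t)
f = lambda p: p[0] * np.exp(p[1] * t) - y            # residual vector
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_fit = gauss_newton(f, jac, x0=[1.0, 0.0])
```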

In the second loop, the matrix of species concentrations C is computed rowwise by the Newton-Raphson function. Each solution is analysed individually. To expedite the computations, the initial guesses for the component concentrations are the result of the previous solution (apart from the first one). If the Newton-Raphson function returns an error these initial guesses need to be improved. [Pg.57]
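The warm-start idea described above (not the book's actual code) can be illustrated with a sweep of related one-variable problems: the converged answer for one case is usually an excellent initial guess for the next.

```python
# Reuse each converged solution as the initial guess for the next,
# related problem (warm starting).

def newton(R, dR, x0, tol=1e-12, max_iter=100):
    x = x0
    for i in range(max_iter):
        r = R(x)
        if abs(r) < tol:
            return x, i                 # root and iterations needed
        x = x - r / dR(x)
    raise RuntimeError("initial guess needs to be improved")

# Solve x^3 = c for a sweep of nearby c values
guess, counts = 1.0, []
for c in [1.0, 1.1, 1.2, 1.3, 1.4]:
    root, n = newton(lambda x: x**3 - c, lambda x: 3 * x**2, guess)
    counts.append(n)
    guess = root                        # warm start for the next case
```

Because consecutive problems differ only slightly, each solve after the first needs only a handful of iterations; if the solver fails anyway, the guess must be improved by other means.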

Non-linear regression calculations are extensively used in most sciences. The goals are very similar to the ones discussed in the previous chapter on Linear Regression. Now, however, the function describing the measured data is non-linear and as a consequence, instead of an explicit equation for the computation of the best parameters, we have to develop iterative procedures. Starting from initial guesses for the parameters, these are iteratively improved or fitted, i.e. those parameters are determined that result in the optimal fit, or, in other words, that result in the minimal sum of squares of the residuals. [Pg.148]

As expected in an iterative algorithm, we start from an initial guess for the parameters. This parameter vector is subsequently improved by the addition of an appropriate parameter shift vector δp, resulting in a better, but probably still not perfect, fit. From this new parameter vector the process is repeated until the optimum is reached. [Pg.148]

Obviously, we have to make sure that the initial position for the parabola is sensible. In any iterative process the choice of initial guesses is important. Fitting a parabola at x=30 does not result in an improvement; recall also Figure 3-21. [Pg.199]
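One parabola step of this kind can be sketched as follows: fit a parabola through three points bracketing the minimum and jump to its vertex. The test function below is invented for illustration; with a sensible starting triple the step improves the estimate, while a poorly placed triple (the x=30 situation above) may not.

```python
# One step of successive parabolic interpolation for minimization.

def parabola_vertex(x1, x2, x3, f):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3)."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

# Illustrative function with its minimum between 1 and 2;
# one parabola step from the sensible bracket (1, 2, 3)
f = lambda x: (x - 2.0)**2 + 0.1 * x**3
x_new = parabola_vertex(1.0, 2.0, 3.0, f)
```

For an exactly quadratic function the vertex formula lands on the minimum in a single step; otherwise it is repeated with the improved point replacing the worst one.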

However, in view of experimental errors, 2N additional equations are unlikely to be sufficient to solve the phase problem. In practice, we can only expect a statistically meaningful solution if we include many more equations and identify the solution that agrees best with all equations simultaneously. Furthermore, since Eq. 1 is non-linear in φ(h), we cannot expect to find an analytic solution. Hence, we have to make initial guesses for the unknowns and improve from there. [Pg.144]

Most optimization procedures work in an iterative manner. An initial guess is made at where the minimum may lie, and various techniques, ranging from random sampling to very sophisticated methods, are applied to improve that estimate. If the objective function f(x) is differentiable then... [Pg.158]

As a dual point with regard to efficiency, note that SCF convergence in DFT is sometimes more problematic than in HF. Because of the similarities between the KS and HF orbitals, this problem can often be very effectively alleviated by using the HF orbitals as an initial guess for the KS orbitals. Because the HF orbitals can usually be generated quite quickly, the extra step can ultimately be time-saving if it sufficiently improves the KS SCF convergence. [Pg.274]

There are three common schemes that can be used to improve the initial guess and the successive solutions. These are... [Pg.401]

Now, in the Hartree-Fock method (the Roothaan-Hall equations represent one implementation of the Hartree-Fock method) each electron moves in an average field due to all the other electrons (see the discussion in connection with Fig. 5.3, Section 5.2.3.2). As the c's are refined the MO wavefunctions improve, and so this average field that each electron feels improves (since J and K, although not explicitly calculated (Section 5.2.3.6.3), improve with the ψ's). When the c's no longer change, the field represented by this last set of c's is (practically) the same as that of the previous cycle, i.e. the two fields are consistent with one another, i.e. "self-consistent". This Roothaan-Hall-Hartree-Fock iterative process (initial guess, first F, first-cycle c's, second F, second-cycle c's, third F, etc.) is therefore a self-consistent-field procedure, or SCF procedure, like the Hartree-Fock procedure... [Pg.205]

The method uses the fact that any response F(v) always has a broader distribution than the input distribution W(y). Hence, if a distribution F_j(v) is broader than F(v), the assumed W(y) must be sharpened to give a response closer to F(v). Using F(v) as the initial guess for W(y), subsequent improved estimates are calculated by ... [Pg.251]
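The update rule itself is truncated in the excerpt above. A common iteration of this general type is the Van Cittert scheme, sketched here under that assumption: starting from the measured response as the initial guess, repeatedly add back the difference between the measurement and the re-broadened estimate, W_(k+1) = W_k + (F_meas − A·W_k), where A is the broadening (response) matrix. The toy smoothing matrix below is invented for illustration.

```python
import numpy as np

def van_cittert(A, F_meas, n_iter=200):
    """Sharpen the measured response toward the input distribution."""
    W = F_meas.copy()                 # measured response as initial guess
    for _ in range(n_iter):
        W = W + (F_meas - A @ W)      # correct by the residual broadening
    return W

# Toy broadening matrix: local 3-point smoothing
n = 21
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            A[i, j] = 0.6 if i == j else 0.2

W_true = np.zeros(n)
W_true[10] = 1.0                      # sharp input distribution
F_meas = A @ W_true                   # broadened response
W_est = van_cittert(A, F_meas)        # recovered (sharpened) estimate
```

The iteration converges when the eigenvalues of A lie between 0 and 2; with noisy data it is usually truncated after a few steps to avoid amplifying noise.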

In contrast to methods where the sum of squares, ssq, is minimized directly, the NGL/M type of algorithm requires the complete vector or matrix of residuals to drive the iterative refinement toward the minimum. As before, we start from an initial guess for the rate constants, k0. Now, the parameter vector is continuously improved by the addition of the appropriate ("best") parameter shift vector Δk. The shift vector is calculated in a more sophisticated way that is based on the derivatives of the residuals with respect to the parameters. [Pg.230]
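A hedged sketch of such a shift-vector computation (names, model, and data here are illustrative, not the book's code): the shift is obtained from the Jacobian J of the residuals r with respect to the parameters, with a Marquardt parameter `mp` damping the step, Δk = −(JᵀJ + mp·I)⁻¹ Jᵀr. With mp = 0 this reduces to the Gauss-Newton shift, which suffices for the well-behaved example below.

```python
import numpy as np

def lm_shift(J, r, mp):
    """Levenberg/Marquardt-damped parameter shift from the residual derivatives."""
    JtJ = J.T @ J
    return -np.linalg.solve(JtJ + mp * np.eye(JtJ.shape[1]), J.T @ r)

# First-order decay model c(t) = exp(-k*t); fit k to synthetic data (true k = 0.7)
t = np.linspace(0.0, 5.0, 20)
c_meas = np.exp(-0.7 * t)

k = np.array([0.2])                      # initial guess k0
mp = 0.0                                 # Marquardt parameter (no damping needed here)
for _ in range(30):
    r = np.exp(-k[0] * t) - c_meas       # complete residual vector
    J = (-t * np.exp(-k[0] * t)).reshape(-1, 1)   # derivative of r w.r.t. k
    k = k + lm_shift(J, r, mp)           # improve the parameter vector
```

In a full implementation mp is raised when a step increases ssq and lowered as the fit improves, interpolating between gradient descent and Gauss-Newton.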

