
The Newton-Raphson Algorithm

In general, non-linear problems cannot be resolved explicitly, i.e. there is no equation that allows the computation of the result in a direct way. Usually such systems are resolved numerically in an iterative process. In most instances this is done via a truncated Taylor series expansion, which downgrades the problem to a linear one that can be resolved with a stroke of the brush, e.g. with the Matlab / and \ commands; see The Pseudo-Inverse (p.117). [Pg.48]
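As a small illustration of that last point, the linearized sub-problem amounts to a single linear solve, which Matlab's backslash operator performs in one statement. The matrix and vector below are arbitrary placeholders, not taken from the text:

```matlab
% Placeholder linear system A*dx = b, standing in for the linearized sub-problem.
A  = [4 1; 2 3];
b  = [1; -2];
dx = A \ b;      % one "stroke of the brush": solves A*dx = b directly
```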

The result of each step of the iterative process is only an approximation which, hopefully, is better than the previous one. Naturally, the process is continued iteratively until some appropriate termination criterion is met. [Pg.49]

As a reminder, we write the Taylor series expansion for a function f(x) of a single variable x. [Pg.49]
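In standard notation, with x0 chosen here as the symbol for the expansion point, the expansion reads:

\[
f(x) \;=\; f(x_0) \;+\; \left.\frac{\mathrm{d}f}{\mathrm{d}x}\right|_{x_0}\!(x-x_0)
\;+\; \frac{1}{2!}\left.\frac{\mathrm{d}^2 f}{\mathrm{d}x^2}\right|_{x_0}\!(x-x_0)^2 \;+\; \cdots
\]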

This equation looks essentially the same if we deal with a vector x of variables and a function f that has several components. [Pg.49]
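With J denoting the Jacobian matrix of partial derivatives ∂f_i/∂x_j (a notation assumed here), the multivariate analogue is:

\[
\mathbf{f}(\mathbf{x}) \;=\; \mathbf{f}(\mathbf{x}_0) \;+\; \mathbf{J}(\mathbf{x}_0)\,(\mathbf{x}-\mathbf{x}_0) \;+\; \cdots
\]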

We truncate the series after the first derivative and what remains is  [Pg.49]
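Setting the truncated expansion equal to zero and solving for the shift Δx yields the familiar Newton-Raphson step, written here in the same assumed notation:

\[
\mathbf{0} \;\approx\; \mathbf{f}(\mathbf{x}_0) \;+\; \mathbf{J}(\mathbf{x}_0)\,\Delta\mathbf{x}
\qquad\Longrightarrow\qquad
\Delta\mathbf{x} \;=\; -\,\mathbf{J}(\mathbf{x}_0)^{-1}\,\mathbf{f}(\mathbf{x}_0),
\qquad
\mathbf{x}_1 \;=\; \mathbf{x}_0 \;+\; \Delta\mathbf{x}
\]

In practice the inverse is not formed explicitly; the linear system J Δx = −f is solved directly, which is exactly what the Matlab / and \ commands do.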


The Newton-Raphson algorithm is further developed into a fairly generally applicable tool for solving sets of non-linear equations. [Pg.3]

Figure 3-11. A flow diagram for the Newton-Raphson algorithm.
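As an illustration of the loop such a flow diagram describes, here is a minimal Matlab sketch of the multidimensional Newton-Raphson iteration; the test system F and its Jacobian J are hypothetical placeholders, not the equilibrium model developed in the text:

```matlab
% Minimal multidimensional Newton-Raphson sketch (illustrative placeholders only).
F = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];   % unit circle intersected with the line x1 = x2
J = @(x) [2*x(1), 2*x(2); 1, -1];              % analytical Jacobian of F
x   = [1; 0];                                  % initial guess
tol = 1e-12;                                   % termination criterion on the step size
for it = 1:50
    dx = J(x) \ (-F(x));                       % solve J*dx = -F for the Newton step
    x  = x + dx;                               % improved estimate
    if norm(dx) < tol, break; end
end
disp(x)                                        % approaches [1; 1]/sqrt(2)
```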
The Newton-Raphson algorithm we have developed can deal with any equilibrium situation. There is no limit to the number of components or species. The disadvantage is that the computations are iterative and this is clearly unsuitable for Excel applications. While it is possible to resolve complicated equilibria, it is inconvenient for complete titrations, as only one cell at a time can be evaluated. However, there are important special cases that can be solved explicitly. We deal with one here. [Pg.64]

For the 1-dimensional case it is possible to represent the basic ideas graphically. This allows a natural understanding of, and good insight into, the potential shortfalls of the Newton-Raphson algorithm. We have chosen a truly irrational function...

Figure 3-23. The Newton-Raphson algorithm with an unfortunate initial guess.
As done previously in The Newton-Raphson Algorithm (p.48), we neglect all but the first two terms in the expansion. This leaves us with an approximation that is not very accurate but, since it is a linear equation, is easy to deal with. Algorithms that include additional higher-order terms of the Taylor expansion often result in fewer iterations but require longer computation times due to the calculation of the higher-order derivatives. [Pg.149]

The central function Rcalc_EqAH2.m computes the residuals and is again very similar to the ones we developed earlier. First, the total concentrations are recalculated; this needs to be part of the calculation of the residuals, as we want to be able to fit initial concentrations (see c0) as well. Subsequently, these total concentrations are passed to the Newton-Raphson function NewtonRaphson.m in order to calculate all species concentrations; see The Newton-Raphson Algorithm (p.48). The differences between measured and calculated pH define the residuals. Note that any variable used in this function to calculate the residuals can theoretically be a parameter to be fitted to the data. [Pg.174]

The direction given by −H(θ_s)⁻¹∇U(θ_s) is a descent direction only when the Hessian matrix is positive definite. For this reason, the Newton-Raphson algorithm is less robust than the steepest descent method; hence, it does not guarantee convergence toward a local minimum. On the other hand, when the Hessian matrix is positive definite, and in particular in a neighborhood of the minimum, the algorithm converges much faster than the first-order methods. [Pg.52]
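Written out, the corresponding Newton-Raphson update of the parameter vector, in the notation assumed from this excerpt (θ_s the current estimate, H the Hessian of U), is:

\[
\boldsymbol{\theta}_{s+1} \;=\; \boldsymbol{\theta}_s \;-\; \mathbf{H}(\boldsymbol{\theta}_s)^{-1}\,\nabla U(\boldsymbol{\theta}_s)
\]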

Beste et al. [104] compared the results obtained with the SMB and the TMB models, using numerical solutions. All the models used assumed axially dispersed plug flow, the linear driving force model for the mass transfer kinetics, and non-linear competitive isotherms. The coupled partial differential equations of the SMB model were transformed with the method of lines [105] into a set of ordinary differential equations. This system of equations was solved with a conventional set of initial and boundary conditions, using the commercially available solver SPEEDUP. For the TMB model, the method of orthogonal collocation was used to transform the differential equations and the boundary conditions into a set of non-linear algebraic equations, which were solved numerically with the Newton-Raphson algorithm. [Pg.838]

Use of Eq. 38 on a nonquadratic surface in an iterative fashion forms the basis of the Newton-Raphson algorithm [160]. For the multidimensional case,... [Pg.56]

Goal Seek is based on the principle of the Newton-Raphson algorithm, discussed in more detail in section 8.1: starting from a first estimate, it determines the slope of the function there, extrapolates along that slope to find the zero, uses the result as the next estimate, and so on. [Pg.129]
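In formula form, using the generic symbols x_n for successive estimates and F for the function (both assumed here rather than taken from the text), one such step is:

\[
x_{n+1} \;=\; x_n \;-\; \frac{F(x_n)}{F'(x_n)}
\]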

The Newton-Raphson algorithm must start with a reasonably close first estimate, x0, of the desired value xA for which F(xA) = A. If the function F(x) were linear between x = x0 and x = xA, we could find xA simply from... [Pg.311]
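Under that linearity assumption, the single-step extrapolation would read (a hedged reconstruction in the same notation, not the book's own equation):

\[
x_A \;=\; x_0 \;+\; \frac{A - F(x_0)}{\left.\dfrac{\mathrm{d}F}{\mathrm{d}x}\right|_{x_0}}
\]

Since F is in general not linear, the value obtained this way only serves as the next, improved estimate, and the step is repeated.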

Fig. 8.1-1 The Newton-Raphson algorithm finds the root of an equation through iteration, of which the first three steps are shown here.
When the initial estimate is far off, the Newton-Raphson method may not converge; in fact, when the initial value is located at an x-value where F(x) goes through a minimum or maximum, the denominator in (8.1-1) will become zero, so that (8.1-1) will place the next iteration at either +∞ or −∞. Furthermore, the Newton-Raphson algorithm will find only one root at a time, regardless of how many roots there are. On the other hand, when the method works, it is usually very efficient and fast. Exercise 8.1 illustrates how the Newton-Raphson algorithm works. [Pg.312]
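A minimal one-dimensional Matlab sketch with simple safeguards against exactly these failure modes might look as follows; the polynomial is a hypothetical example with three real roots, chosen to show that a single run returns only one of them:

```matlab
% One-dimensional Newton-Raphson sketch with basic safeguards
% (hypothetical example function, not the one used in the text).
F  = @(x) x.^3 - x;          % three real roots: -1, 0, +1
dF = @(x) 3*x.^2 - 1;        % derivative; zero at the extrema x = +/-1/sqrt(3)
x  = 2;                      % initial estimate (try 0.5 to land on a different root)
for it = 1:100
    slope = dF(x);
    if abs(slope) < eps      % near an extremum the step would blow up
        error('Derivative ~ 0 at x = %g; choose a different initial estimate.', x);
    end
    dx = -F(x)/slope;        % Newton step
    x  = x + dx;
    if abs(dx) < 1e-12, break; end   % termination criterion
end
fprintf('Root found: %g after %d iterations\n', x, it)
```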

Another method for finding the minimum of a function, one that requires the function to be twice differentiable and its derivatives to be calculable, is the Newton-Raphson algorithm. The algorithm begins with an initial estimate, xi, of the location of the minimum. The goal is to find the value of x where the first derivative equals zero and the second derivative is positive. Taking a first-order Taylor series approximation to the first derivative (dY/dx) evaluated at xi... [Pg.96]
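Carrying that linearization through, with Y the function being minimized and x_i the current estimate (symbols assumed here), setting the linearized derivative to zero gives the update:

\[
\left.\frac{\mathrm{d}Y}{\mathrm{d}x}\right|_{x_i} \;+\; \left.\frac{\mathrm{d}^2Y}{\mathrm{d}x^2}\right|_{x_i}(x_{i+1}-x_i) \;=\; 0
\qquad\Longrightarrow\qquad
x_{i+1} \;=\; x_i \;-\; \frac{\left.\mathrm{d}Y/\mathrm{d}x\right|_{x_i}}{\left.\mathrm{d}^2Y/\mathrm{d}x^2\right|_{x_i}}
\]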

In the 3245 version of EQ3/6, the Newton-Raphson algorithm was modified to treat activity coefficients of aqueous species as known constants during a Newton-Raphson step. The... [Pg.110]

