
The Rosenbrock Method

The first equation describes the (controlled) current approximation, and the last equation sets the outer boundary value to unity. [Pg.167]

For the basics of this method, see Chap. 4. There it was mentioned that Bieniasz introduced this method to electrochemical simulation [100], preferring ROWDA3, a third-order variant that also has a smooth response. There exists a second-order variant with a smooth response, ROS2, due to Lang [347], which might be more appropriate if second-order spatial derivative approximations are to be used. Coefficients for some variants are given in Appendix A. The object here is to describe the way Rosenbrock methods are used in the present context. The Bieniasz paper [100] shows the way (but the standard symbols, as used in Chap. 4, are used here, rather than those used by Bieniasz). [Pg.167]

The set (9.56) (or one like it, with whatever boundary conditions we might have) is written in the compact form [Pg.168]

We are now ready to invoke the Rosenbrock method. A number s of k_i vectors must be computed, s being the order chosen. The general equation for each one is an extension of that given for a pure ode set on page 70, (4.70), to the present DAE case, introducing the selection matrix S and following Bieniasz [100] (though with the more common notation): [Pg.169]

In it there appear the Jacobian F_C, which is in fact J as defined above in (9.60); the function F itself, applied at partly augmented T and C values; and, in the case of time-dependent systems, the time derivative F_t, written in short form, as it is applied to the present T and C. This last term is often zero, since many systems do not include explicit functions of time. [Pg.169]
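The stage equations themselves are not reproduced in this excerpt. Purely for orientation, the following is a hedged reconstruction of the general shape such a stage takes for a DAE set written compactly as S dC/dT = F(T, C); the coefficient names (a_ij, c_ij, gamma, gamma_i, m_i) follow common usage in the literature and are an assumption here, not the exact form of (9.61):

\[
\left(\frac{1}{\gamma\,\delta T}\,\mathbf{S}-\mathbf{J}\right)\mathbf{k}_i
= \mathbf{F}\!\left(T+\alpha_i\,\delta T,\;
\mathbf{C}+\sum_{j=1}^{i-1} a_{ij}\,\mathbf{k}_j\right)
+\frac{1}{\delta T}\,\mathbf{S}\sum_{j=1}^{i-1} c_{ij}\,\mathbf{k}_j
+\gamma_i\,\delta T\,\mathbf{F}_t,
\qquad i=1,\dots,s,
\]

with the step then completed by

\[
\mathbf{C}(T+\delta T)\approx\mathbf{C}(T)+\sum_{i=1}^{s} m_i\,\mathbf{k}_i .
\]

In this shape, the left-hand matrix is the constant "first matrix term" referred to in the remark on (9.61) below.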

Rosenbrock (1960) did not merely highlight the difficulties that may arise with narrow valleys, but also proposed a method that should work even in such circumstances. [Pg.89]

Rosenbrock suggested rotating the axes at the end of each search cycle that involves all the dimensions, from x_0 to x_n, of the optimization problem. [Pg.90]

In its original version, the method managed the steps α_i by increasing or decreasing them according to the successful/unsuccessful results obtained using a specific α_i. Other authors modified the algorithm to perform one-dimensional searches in the axis directions. For the sake of clarity, the variant with one-dimensional searches is considered below. [Pg.90]

1) At a certain point in the search, the axes p_1, p_2, ..., p_n are available. The axes of the variables themselves are generally used as the first set of axes. Starting from x_0, we obtain x_1 by a one-dimensional search along p_1, x_2 by a one-dimensional search along p_2, and so on, up to x_n. [Pg.90]

2) As α_i is the step (not necessarily different from zero) along p_i, the axes are sorted so as to have decreasing |α_i|. Using r to denote the number of α_i ≠ 0, the largest step in the previous searches is along the vector p_1 and the smallest nonzero step is along the vector p_r. A sketch of the resulting procedure is given below. [Pg.90]
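As an illustration of steps 1) and 2) and of the subsequent axis rotation, here is a minimal, self-contained Python sketch of the one-dimensional-search variant. It is a reconstruction under stated assumptions, not the authors' code: the golden-section line search, its bracketing interval, the retention of axes with zero step, and the test function are all choices made only to keep the example runnable.

import numpy as np

def golden_line_search(f, x, p, lo=-2.0, hi=2.0, tol=1e-6):
    # Crude golden-section search for the step t minimizing f(x + t*p).
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    c = hi - invphi * (hi - lo)
    d = lo + invphi * (hi - lo)
    while hi - lo > tol:
        if f(x + c * p) < f(x + d * p):
            hi, d = d, c
            c = hi - invphi * (hi - lo)
        else:
            lo, c = c, d
            d = lo + invphi * (hi - lo)
    return 0.5 * (lo + hi)

def rosenbrock_search(f, x0, n_cycles=100, tol=1e-10):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = np.eye(n)                           # first set of axes: the variable axes
    for _ in range(n_cycles):
        x_old = x.copy()
        alpha = np.zeros(n)
        for i in range(n):                  # one-dimensional search along each p_i
            alpha[i] = golden_line_search(f, x, P[:, i])
            x = x + alpha[i] * P[:, i]
        if np.linalg.norm(x - x_old) < tol:
            break
        order = np.argsort(-np.abs(alpha))  # sort axes by decreasing |alpha_i|
        alpha, P = alpha[order], P[:, order]
        A = np.empty((n, n))                # new directions, before orthogonalization
        for i in range(n):
            if alpha[i] != 0.0:
                A[:, i] = P[:, i:] @ alpha[i:]  # sum of the remaining steps
            else:
                A[:, i] = P[:, i]               # keep axes with zero step
        P, _ = np.linalg.qr(A)              # Gram-Schmidt (via QR): the rotated axes
    return x

# Example: Rosenbrock's own "banana" valley; the result should approach (1, 1).
banana = lambda v: 100.0 * (v[1] - v[0]**2)**2 + (1.0 - v[0])**2
print(rosenbrock_search(banana, [-1.2, 1.0]))

The point of the rotation is that the first new axis points along the overall displacement of the last cycle, so it tends to align with the floor of a curved valley.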

Although (9.61) may look formidable, there are some conveniences. First of all, for linear systems, the first matrix term on the left-hand side is a constant and can be evaluated once and for all. [Pg.202]
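The practical payoff is that this constant matrix can be LU-factorized a single time, after which each stage of each step costs only a back-substitution. A minimal sketch, assuming SciPy and using an arbitrary 2x2 stand-in for the real system (gamma, the time step, J and S below are illustrative values only):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

gamma, dT = 0.5, 1e-3                     # illustrative values only
J = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])              # constant Jacobian of a linear system
S = np.eye(2)                             # selection matrix (no algebraic rows here)
A = S / (gamma * dT) - J                  # the constant left-hand matrix

lu, piv = lu_factor(A)                    # factorize once and for all ...
for step in range(1000):                  # ... then, at every step and stage,
    rhs = np.ones(2)                      # placeholder for the stage right-hand side
    k = lu_solve((lu, piv), rhs)          # only a cheap back-substitution is needed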


There are, however, implicit variants of RK, and these may have promise. There are several classes of these; see a thorough text on the subject [284,286]. One of these classes, the Rosenbrock method, has recently been examined [100,113, and see references therein] and found to be very efficient. It is described in its own Sect. 9.4, below. [Pg.159]

An obvious alternative choice of method, given the probably nonlinear form of the isotherm boundary condition, is to use a Rosenbrock method. Then, the two boundary conditions are simply the first two equations in a whole DAE set, the first of the pair (10.3) being an algebraic equation, the second an ode. The Rosenbrock method is described in Chap. 9, Sect. 9.4, starting on page 167; a sketch of how the algebraic equation is flagged follows. [Pg.191]
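A minimal sketch of that flagging, assuming the two boundary equations are stored first in the unknown vector and using an arbitrary grid size:

import numpy as np

n = 8            # 2 boundary equations + 6 grid unknowns (illustrative)
S = np.eye(n)    # selection matrix multiplying dC/dT in the DAE set
S[0, 0] = 0.0    # first boundary equation: algebraic, no time derivative
                 # S[1, 1] stays 1: the second boundary equation is an ode
                 # rows 2 .. n-1: the ordinary grid odes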

Simulations must thus handle the nonlinear boundary conditions. Some have taken the easy way out and used explicit methods [123,429]. Bieniasz [105] used the Rosenbrock method (see Chap. 9), which makes sense because it effectively deals with nonlinearities without iterations at a given time step. [Pg.194]

The theoretical and experimental transfer functions are matched, in order to estimate the effective diffusivity De, by minimizing a relative error function between the two transfer functions. The Rosenbrock method of optimization has been used. All the measurements have been made at room temperature and at approximately normal atmospheric pressure. The only parameter that changes is the carrier gas flow rate. [Pg.326]

Below, we describe four algorithms that are able to handle small- and medium-dimension problems, even in the presence of relatively narrow valleys, without using any gradient or Hessian: the Rosenbrock method (1960), the Hooke-Jeeves method (1961), the Simplex method (Spendley et al., 1962; Nelder and Mead, 1965), and the Optnov method (Buzzi-Ferraris, 1967). Note that their current structure is slightly different from the original one. [Pg.87]

The Rosenbrock method is slightly modified using this technique: the first two points are still the same, but it is necessary to continue with the following ones ... [Pg.90]

The Rosenbrock method (and its variants) has the following pros and cons. [Pg.91]

It is important to realize why even the Rosenbrock method becomes inefficient when the function valleys are particularly narrow. [Pg.91]

Figure 3.3 The Rosenbrock method fails with very narrow valleys.
In contrast with algorithms using univariate search, the Rosenbrock method is a so-called acceleration method, which makes the direction and/or the distance (in this case both) of the parameter jumps dependent on the degree of success of the previous parameter jumps. With p parameters, the algorithm proceeds as follows ... [Pg.288]
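The steps themselves are cut off in this excerpt. Purely as an illustration of the acceleration idea (direction and distance adapted to the success of previous jumps), here is a hedged Python sketch of one exploration stage of the original variant; the multipliers 3 and -0.5 are those usually quoted for Rosenbrock (1960), while the fixed number of sweeps is a simplification of the real stopping rule.

import numpy as np

def explore(f, x, P, alpha, sweeps=30):
    # One exploration stage: trial steps along the current axes P are
    # kept and enlarged on success, reversed and shrunk on failure.
    fx = f(x)
    net = np.zeros(len(x))              # net displacement along each axis
    for _ in range(sweeps):
        for i in range(P.shape[1]):
            trial = x + alpha[i] * P[:, i]
            f_trial = f(trial)
            if f_trial <= fx:           # success: keep the point, enlarge the step
                x, fx = trial, f_trial
                net[i] += alpha[i]
                alpha[i] *= 3.0
            else:                       # failure: reverse and shrink the step
                alpha[i] *= -0.5
    return x, net                       # the net steps drive the axis rotation

After such a stage, the net steps play the role of the α_i above and feed the same rotation of the axes.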

Fig. 9.2 illustrates the Rosenbrock method for a model with two parameters. Because the algorithm attempts to search along the axis of a valley of the objective function S, many evaluations of S can be omitted compared with, for example, an algorithm using univariate search. [Pg.289]

Fig. 9.2 The Rosenbrock method applied to the objective function of a regression analysis with a model consisting of two parameters. The figure shows the first stage of the optimization. [Pg.289]

Simulations must thus handle the nonlinear boundary conditions. Some have taken the easy way out and used explicit methods [15-18]; others used hopscotch [12, 19], ADI (for a two-dimensional problem) [20, 21], and other methods [4, 5, 22-26]. Bieniasz [27] used the Rosenbrock method (see Chap. 9), which makes sense because it effectively deals with nonlinearities without iterations at a given time step. Some have simulated both resistance and capacitive effects [12, 15, 16, 20-22, 25]. [Pg.242]

The Rosenbrock method is described for odes in Chap. 4 and for electrochemical simulations, that is, DAEs, in Chap. 9. Four variants are considered in these chapters: two of them second-order with respect to the time interval, and two of them third-order. Although only two variants recommend themselves, the constants for all four are given here. For the notation and the meaning of the variant names, see these chapters. The notation is in some cases not that of the (cited) sources. Constants that are left out can be taken as zero. [Pg.445]

