Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Approximations error orders

Process Hazards Analysis. Analysis of processes for unrecognized or inadequately controlled hazards (see Hazard analysis and risk assessment) is required by OSHA (36). The principal methods of analysis, in an approximate ascending order of intensity, are what-if, checklist, failure modes and effects, hazard and operability (HAZOP), and fault-tree analysis. Other complementary methods include human error prediction and cost/benefit analysis. The HAZOP method is the most popular as of 1995 because it can be used to identify hazards, pinpoint their causes and consequences, and disclose the need for protective systems. Fault-tree analysis is the method to be used if a quantitative evaluation of operational safety is needed to justify the implementation of process improvements. [Pg.102]

The expansion of the approximation error L_h v − Lv in powers of h is aimed at achieving as high an order of approximation as possible. Indeed, we might have... [Pg.59]
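As a minimal numerical sketch of this idea, the snippet below checks the order of the standard three-point second difference, assuming (this is my illustrative choice, not the source's) that L = d²/dx² and the test function is sin(x). The Taylor expansion gives L_h u − u″ = (h²/12)u⁗ + O(h⁴), so halving h should shrink the error by a factor of about 4.

```python
import math

def second_difference(u, x, h):
    """Three-point approximation L_h u = (u(x+h) - 2u(x) + u(x-h)) / h^2."""
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2

# Expansion in powers of h: L_h u - u'' = (h^2/12) u'''' + O(h^4),
# so the observed order of the error should be close to 2.
x = 1.0
u = math.sin
u2 = lambda t: -math.sin(t)        # exact second derivative of sin
err = lambda h: abs(second_difference(u, x, h) - u2(x))

e1, e2 = err(1e-2), err(5e-3)
order = math.log(e1 / e2, 2)       # observed error order
print(f"errors: {e1:.3e} {e2:.3e}, observed order ≈ {order:.2f}")
```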

In addition to stability considerations, the order of the approximation error is also a function of the system index. From the results of Brenan and Petzold (1987), systems of equations of higher index can be considered simply by choosing the appropriate IRK method with the appropriate integration error constraints. Based on error and stability considerations, Logsdon and Biegler (1989) concluded that minimum order requirements for collocation methods are the following ... [Pg.241]

Here ψ(x) = Λu + f(x) is the approximation error of scheme (44). For all sufficiently smooth functions u(x) it is well known that ψ(x) is a quantity of order O(h²), thus causing the same type of problem for the function z(x) as occurred for the function y(x). Because of this fact, estimate (48) is still valid for z(x) ... [Pg.114]

The error of approximation. In order to evaluate the accuracy of scheme (4)-(6), the solution y = y_i^j of problem (4)-(6) should be compared with the solution u = u(x, t) of problem (I). Since u = u(x, t) is the continuous solution of problem (I), we may set u_i^j = u(x_i, t_j) and then deal with the difference z_i^j = y_i^j − u_i^j. For this, the first step in the estimation of the grid function z^j on the relevant layer is connected with norms of proper form, for example,... [Pg.303]

With h_max = max(x_{i+1} − x_i), the global error order of the classical Runge-Kutta method is 4, i.e. O(h_max^4), provided that the solution function y of (1.13) is 5 times continuously differentiable. The global error order of a numerical integrator measures the maximal error committed over all approximations of the true solution values y(x_i) by the computed values y_i. Thus if we use a constant step size h = 10^-3, for example, and the classical Runge-Kutta method for an IVP that has a sufficiently often differentiable solution y, then our global error satisfies... [Pg.40]
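The fourth-order behaviour is easy to observe numerically. The sketch below applies the classical Runge-Kutta method to the model problem y′ = y, y(0) = 1 (my choice of test problem, not one from the source; its exact solution e^x is smooth, so the order-4 theory applies): halving the step size should reduce the global error at x = 1 by a factor of about 2⁴ = 16.

```python
import math

def rk4_step(f, x, y, h):
    """One classical Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(f, x0, y0, x_end, h):
    """Integrate with a constant step h from x0 to x_end."""
    x, y = x0, y0
    for _ in range(round((x_end - x0) / h)):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# Global error at x = 1 for y' = y, exact solution e.
f = lambda x, y: y
err = lambda h: abs(solve(f, 0.0, 1.0, 1.0, h) - math.e)
e1, e2 = err(0.1), err(0.05)
print(f"h=0.1: {e1:.2e}  h=0.05: {e2:.2e}  ratio ≈ {e1 / e2:.1f}")  # near 16
```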

From the above treatment, the error orders of the approximations can be determined. First, a definition of what is meant here is required. With equal intervals of length h, orders are expressed as powers of that length. Here we have arbitrarily spaced points, and thus a set of different h_k. In computations to confirm error order expectations, the following scheme can serve. Refer all h_k as displacements from point i, as above (3.45). A given derivative can then be computed. Then, all points around the reference point x_i are moved to a given fraction α of their original displacements from the reference point, so that now there is a new set of displacements,... [Pg.48]
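The scaling scheme described above can be sketched as follows, under my own illustrative assumptions: a three-point first-derivative stencil on arbitrarily spaced points (weights from the Lagrange interpolant, not the source's formula (3.45)) and sin(x) as the test function. Since a three-point stencil is exact for quadratics, shrinking all displacements by the fraction α should make the error fall off like α².

```python
import math

def deriv_weights(offsets):
    """Lagrange-based weights w_k so that sum w_k f(x_i + d_k) ≈ f'(x_i)
    for three arbitrarily spaced displacements d_k (one may be 0)."""
    d0, d1, d2 = offsets
    w0 = -(d1 + d2) / ((d0 - d1) * (d0 - d2))
    w1 = -(d0 + d2) / ((d1 - d0) * (d1 - d2))
    w2 = -(d0 + d1) / ((d2 - d0) * (d2 - d1))
    return (w0, w1, w2)

def deriv(f, xi, offsets):
    return sum(w * f(xi + d) for w, d in zip(deriv_weights(offsets), offsets))

f, df = math.sin, math.cos
xi = 1.0
base = (-0.10, 0.0, 0.15)          # arbitrarily spaced displacements h_k

def err(alpha):
    """Move every point to the fraction alpha of its original displacement."""
    scaled = tuple(alpha * d for d in base)
    return abs(deriv(f, xi, scaled) - df(xi))

p = math.log(err(1.0) / err(0.5), 2)   # observed error order
print(f"observed order ≈ {p:.2f}")     # close to 2
```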

It turns out that the order of this approximation is also the global error order of the calculation using the Euler method. An alternative way to proceed is to go from the Taylor expansion for y(t + δt), as in (3.3),... [Pg.53]
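For contrast with the fourth-order Runge-Kutta behaviour, a short check of Euler's first-order global error, again on the model problem y′ = y, y(0) = 1 (my illustrative choice): halving the step size should roughly halve the global error.

```python
import math

def euler_solve(f, x0, y0, x_end, h):
    """Explicit Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(round((x_end - x0) / h)):
        y += h * f(x, y)
        x += h
    return y

# Global error at x = 1 for y' = y; exact solution is e.
err = lambda h: abs(euler_solve(lambda x, y: y, 0.0, 1.0, 1.0, h) - math.e)
e1, e2 = err(0.01), err(0.005)
print(f"ratio ≈ {e1 / e2:.2f}")   # close to 2, consistent with O(h)
```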

In addition to this, it is often important to also have exact mathematical criteria for the truncation error incurred when an infinite continued fraction is replaced by its approximant of order n. In a number of situations these criteria do exist and are also economical. Keeping in line with the general spirit of our review, we focus on error bounds in continued fractions, describing specific (though sufficiently general) physical problems. We do not dwell on the interesting theory of error bounds in more abstract situations. [Pg.125]

There seems little doubt that in radiation-induced polymerizations the reactive entity is a free cation (vinyl ethers are not susceptible to free-radical or anionic polymerization). The dielectric constant of bulk isobutyl vinyl ether is low (<4) and very little solvation of cations is likely. Under these circumstances, therefore, the charge density of the active centre is likely to be a maximum and hence, also, the bimolecular rate coefficient for reaction with monomer. These data can, therefore, be regarded as a measure of the reactivity of a non-solvated or naked free ion and bear out the high reactivity predicted some years ago [110, 111]. The experimental results from initiation by stable carbonium ion salts are approximately one order of magnitude lower than those from γ-ray studies, but nevertheless still represent extremely high reactivity. In the latter work the dielectric constant of the solvent is much higher (CH2Cl2, ε ≈ 10, 0°C) and considerable solvation of the active centre must be anticipated. As a result the charge density of the free cation will be reduced, and hence the lower value of k_p represents the reactivity of a solvated free ion rather than a naked one. Confirmation of the apparent free-ion nature of these polymerizations is afforded by the data on the ion-pair dissociation constant of the salts used for initiation and, more importantly, the invariance, within experimental error, of k_p with the counter-ion used (SbCl6− or BF4−). Overall effects of solvent polarity will be considered shortly in more detail. [Pg.93]

For a uniform Cartesian grid, this approximation is of second-order accuracy. Even for a non-uniform grid, the error reduction with respect to grid refinement is similar to that of a second-order approximation. Higher order polynomials can be used to estimate the required gradients. For example, a fourth-order approximation for the gradient at face e on the uniform Cartesian grid can be written ... [Pg.156]
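A small sketch comparing the two accuracies on a uniform one-dimensional grid. The stencils are my own standard choices, not necessarily the source's Eq.: a two-point difference across the face e (second order) and the four-point formula (27(φ_E − φ_P) − (φ_EE − φ_WW)) / (24Δx) (fourth order), with sin(x) as the hypothetical field φ.

```python
import math

def grad_face_2(phi, xe, dx):
    """Second-order face gradient from the two adjacent cell centres."""
    return (phi(xe + dx / 2) - phi(xe - dx / 2)) / dx

def grad_face_4(phi, xe, dx):
    """Fourth-order face gradient from four surrounding cell centres."""
    return (27.0 * (phi(xe + dx / 2) - phi(xe - dx / 2))
            - (phi(xe + 3 * dx / 2) - phi(xe - 3 * dx / 2))) / (24.0 * dx)

phi, dphi = math.sin, math.cos     # test field and its exact gradient
xe = 0.7                           # location of face e

def order(scheme):
    """Observed convergence order from halving the grid spacing."""
    e1 = abs(scheme(phi, xe, 0.1) - dphi(xe))
    e2 = abs(scheme(phi, xe, 0.05) - dphi(xe))
    return math.log(e1 / e2, 2)

print(f"2-point scheme: observed order ≈ {order(grad_face_2):.2f}")
print(f"4-point scheme: observed order ≈ {order(grad_face_4):.2f}")
```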

For equally spaced points, the first derivative of function f(x) is approximated (to error order of Δx²) as follows ... [Pg.469]

In all relaxation measurement experiments, spectra are recorded as a function of t, a time during which relaxation is allowed to occur. The variation of the intensity of a particular signal in the spectrum as a function of t then constitutes the relaxation decay curve, which we must analyze to retrieve motional data. As this curve will generally be (at least approximately) exponential in form, in order to define it properly it is desirable to spread the t points of measurement non-linearly and to concentrate them near the t = 0 end of the curve, where the signal intensity varies most rapidly. A logarithmic distribution of t points is usually optimal. It is essential that the spin system is fully relaxed before acquiring the next scan or spectrum, and also that the t data measurement is continued until the complete decay curve is measured; otherwise errors will inevitably result in its analysis. [Pg.90]
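One way to generate such a logarithmic distribution of delays is sketched below. The range (1 ms to 5 s) and the number of points are hypothetical values chosen for illustration; real choices depend on the relaxation times being measured.

```python
import math

def log_spaced_delays(t_min, t_max, n):
    """n relaxation delays distributed logarithmically between t_min and
    t_max, so sampling is densest near t = 0 where the decay is fastest."""
    ratio = (t_max / t_min) ** (1.0 / (n - 1))
    return [t_min * ratio**k for k in range(n)]

# Hypothetical example: 12 delays between 1 ms and 5 s.
delays = log_spaced_delays(1e-3, 5.0, 12)
gaps = [b - a for a, b in zip(delays, delays[1:])]
print(["%.4g" % t for t in delays])
print("smallest gap %.2e s, largest gap %.2e s" % (gaps[0], gaps[-1]))
```

The gaps grow geometrically, so many points fall near the start of the decay curve and only a few near its fully relaxed tail.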

There are many ways one can try to reduce the computational burden. Ideally, one would find numerical methods which are guaranteed to retain accuracy while speeding the calculations, and it would be best if the procedure were completely automatic, i.e. it did not rely on the user to provide any special information to the numerical routine. Unfortunately, one is often driven to make physical approximations in order to make it feasible to reach a solution. Common approximations of this type are the quasi-steady-state approximation (QSSA), the use of reduced chemical kinetic models, and interpolation between tabulated solutions of the differential equations (Chen, 1988; Peters and Rogg, 1993; Pope, 1997; Tonse et al., 1999). All of these methods were used effectively in the 20th century for particular cases, but all of these approximated-chemistry methods share a serious problem: it is hard to know how much error is... [Pg.30]

As a result, interval analysis tends to overestimate error bounds. However, there are clever ways to reduce this overestimation, for example by appropriately grouping terms. One of the best and most computationally efficient ways to minimize overestimation of the bounds on the approximation error is to replace terms f(Y) in w(Y) by their Taylor models, e.g. the first-order Taylor model is given by Eq. (14) ... [Pg.35]
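The effect can be sketched with a minimal interval arithmetic. The function f(Y) = Y(1 − Y) and the interval [0.4, 0.6] are my own illustrative choices, not taken from the source, and the "Taylor model" here is the generic first-order mean-value form f(c) + f′(Y)(Y − c), which may differ in detail from the source's Eq. (14). Naive interval evaluation treats the two occurrences of Y as independent and overestimates the range; the first-order form is tighter on this narrow interval.

```python
class Interval:
    """Minimal interval arithmetic: just enough to compare a naive bound
    with a first-order Taylor (mean-value) bound."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        ps = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
        return Interval(min(ps), max(ps))
    @property
    def width(self):
        return self.hi - self.lo

# Hypothetical term f(Y) = Y(1 - Y) on Y = [0.4, 0.6]; true range [0.24, 0.25].
Y = Interval(0.4, 0.6)
one = Interval(1.0, 1.0)

naive = Y * (one - Y)                 # the two Y's are treated as independent

# First-order Taylor model about the midpoint c = 0.5:
#   f(c) + f'(Y) * (Y - c),  with f'(y) = 1 - 2y over the whole interval.
c = Interval(0.5, 0.5)
fc = Interval(0.25, 0.25)
df = one - Interval(2.0, 2.0) * Y     # f'(Y) = 1 - 2Y
taylor = fc + df * (Y - c)

print(f"naive bound  [{naive.lo:.2f}, {naive.hi:.2f}], width {naive.width:.2f}")
print(f"taylor bound [{taylor.lo:.2f}, {taylor.hi:.2f}], width {taylor.width:.2f}")
```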

