Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Error propagation nonlinear

The physical interpretation of the results presented in figs 12.2-12.4 is the following. Like any autocatalytic process, the chemical reactions (12.123)-(12.125) lead to saturation effects due to the balance between self-replication and consumption (disappearance) processes. These saturation effects are nonlinear, and as a result the experimental errors propagate nonlinearly, which explains the error distortion displayed in fig. 12.2. [Pg.195]

In this chapter, three chemometric methods of increasing importance to SEC are examined: nonlinear regression, graphics, and error propagation analysis. These three methods are briefly described with emphasis on SEC applications and on critical concerns in their correct implementation. In addition to the specific references cited, further information on these methods and others may be found in a recent book which examines chemometrics in both SEC and HPLC together (1) as well as in periodic reviews (2) ... [Pg.203]

Analytic expressions are available for assessing the propagation of errors through linear systems. Such approaches can be used as well when the variances are sufficiently small that the system can be linearized about its expectation value. Numerical techniques are generally needed to assess the propagation of errors through nonlinear systems. [Pg.46]
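The exact analytic propagation for a linear system can be sketched in a few lines. If the outputs are a linear map y = A x of the uncertain inputs, the output covariance is A Cov(x) Aᵀ. The matrix and variances below are illustrative assumptions, not values from the source:

```python
import numpy as np

# Hypothetical linear system y = A @ x with uncertain inputs x.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
cov_x = np.array([[0.04, 0.00],
                  [0.00, 0.09]])  # assumed input variances, uncorrelated

# Exact propagation through a linear map: Cov(y) = A Cov(x) A^T
cov_y = A @ cov_x @ A.T
print(cov_y)
```

The same formula applies approximately to a nonlinear system once A is replaced by the Jacobian evaluated at the expectation value, which is the linearization mentioned above.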

Analytic means do not exist to solve the problem of propagation of errors through nonlinear systems. Monte Carlo simulations can be used to assess the magnitude and distribution of propagated errors. [Pg.47]
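A minimal Monte Carlo sketch of this idea: sample the uncertain inputs, push each sample through the nonlinear model, and inspect the spread of the outputs. The model (second-order decay) and all numerical values here are illustrative assumptions, not taken from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear model: second-order decay, c(t) = c0 / (1 + k*c0*t)
def model(c0, k, t=10.0):
    return c0 / (1.0 + k * c0 * t)

# Sample the uncertain inputs (assumed Gaussian; values are illustrative)
c0 = rng.normal(1.0, 0.05, size=100_000)
k = rng.normal(0.3, 0.03, size=100_000)

out = model(c0, k)
print(out.mean(), out.std())  # magnitude and spread of the propagated error
```

A histogram of `out` would also reveal any skew that the nonlinearity introduces, something a linearized analysis cannot capture.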

In the following we analyze the influence of errors on our approach and its accuracy and compare the results with those obtained by using linearized kinetics. We consider a nonlinear kinetic example for which a detailed analytical study is possible. We compare that exact solution with the first-order response theory based on appropriate tracer measurements, and also compare it with the response of the linearized kinetic example. A particular interest here is the effect of error propagation when the analysis is applied to measurements of poor precision. [Pg.192]

An important consideration has been omitted in [3-5], which are devoted to this approach: the usually rather large experimental errors associated with microarray measurements. It is important to know how such errors propagate in the calculations and how they affect the proper identification of the connectivity matrix. A simple example worked out in detail in section 12.5 shows the possible multiplicative effects of such errors in nonlinear kinetic equations. Until this problem is addressed, the approach of linearization must be viewed with caution. [Pg.210]

The error estimation by means of the formula (4.21) is in most cases laborious. First, the calculation of the autocorrelation time is time-consuming and tedious. Second, estimating the statistical error of a nonlinear function of the mean, f(Ō), requires the consideration of error propagation. Fortunately, both problems can easily be solved approximately by the binning methods to be described now [80-82]. [Pg.88]
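The binning idea can be sketched as follows: group a correlated series into non-overlapping bins and treat the bin means as (approximately) independent samples once the bin size well exceeds the autocorrelation time. The AR(1) test series below is an illustrative assumption, not data from the source:

```python
import numpy as np

def binned_error(samples, bin_size):
    """Estimate the statistical error of the mean of a correlated series:
    average over non-overlapping bins, then apply the naive error formula
    to the bin means, which decorrelate as the bin size grows."""
    n_bins = len(samples) // bin_size
    bins = samples[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(n_bins)

# Illustrative correlated data: an AR(1) chain with built-in autocorrelation
rng = np.random.default_rng(1)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()

# The estimate grows with bin size until it plateaus near the true error.
for b in (1, 10, 100, 1000):
    print(b, binned_error(x, b))
```

The plateau value is the quantity that the laborious autocorrelation-time calculation would otherwise provide.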

Figure 16 Root-mean-squared error progression plot for Fletcher nonlinear optimization and back-propagation algorithms during training.
First-order error analysis is a method for propagating uncertainty in the random parameters of a model into the model predictions using a fixed-form equation. This method is not a simulation like Monte Carlo but uses statistical theory to develop an equation that can easily be solved on a calculator. The method works well for linear models, but its accuracy decreases as the model becomes more nonlinear. As a general rule, linear models that can be written down on a piece of paper work well with first-order error analysis. Complicated models that consist of a large number of pieced equations (like large exposure models) cannot be evaluated using first-order analysis. To use the technique, the partial derivative of the model with respect to each random parameter must be solvable. [Pg.62]
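The fixed-form equation referred to is the usual first-order sum over squared partial derivatives times parameter variances. A minimal sketch for a hypothetical first-order decay model (all numbers are illustrative assumptions):

```python
import math

# Hypothetical model: first-order decay, C = C0 * exp(-k * t).
# First-order error analysis propagates the parameter variances through
# the partial derivatives evaluated at the nominal parameter values.
C0, k, t = 100.0, 0.2, 5.0
var_C0, var_k = 4.0, 0.0004  # assumed parameter variances

C = C0 * math.exp(-k * t)
dC_dC0 = math.exp(-k * t)            # partial derivative w.r.t. C0
dC_dk = -C0 * t * math.exp(-k * t)   # partial derivative w.r.t. k

# Parameters assumed independent, so cross terms drop out
var_C = dC_dC0**2 * var_C0 + dC_dk**2 * var_k
print(C, math.sqrt(var_C))
```

As the text notes, this only works because both partial derivatives are available in closed form; a large pieced-together model would not admit this treatment.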

Thus, the additional approximations underlying the NEE are paraxiality both in the free propagator and in the nonlinear coupling, and a small error in the chromatic dispersion introduced when the background index of refraction is replaced by a constant, frequency independent value in both the spatio-temporal correction term and in the nonlinear coupling term. Note that the latter approximations are usually not serious at all. [Pg.268]

The exact form of the matrices Qi and Q2 depends on the type of partial differential equations that make up the system of equations describing the process units, i.e., parabolic, elliptic, or hyperbolic, as well as the type of applicable boundary conditions, i.e., Dirichlet, Neumann, or Robin boundary conditions. The matrix G contains the source terms as well as any nonlinear terms present in F. It may or may not be averaged over two successive times corresponding to the indices n and n + 1. The numerical scheme solves for the unknown dependent variables at time t = (n + l)At and all spatial positions on the grid in terms of the values of the dependent variables at time t = nAt and all spatial positions. Boundary conditions of the Neumann or Robin type, which involve evaluation of the flux at the boundary, require additional consideration. The approximation of the derivative at the boundary by a finite difference introduces an error into the calculation at the boundary that propagates inward from the boundary as the computation steps forward in time. This requires a modification of the algorithm to compensate for this effect. [Pg.1956]
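The boundary truncation error mentioned here can be illustrated on a toy profile. Comparing a one-sided flux approximation against a centered (ghost-point style) one shows why the naive choice at a Neumann boundary degrades the scheme; the profile u(x) = exp(x) is a hypothetical example, not from the source:

```python
import numpy as np

# Approximate the boundary flux u'(0) of a known profile u(x) = exp(x),
# for which u'(0) = 1 exactly.
u = np.exp

for h in (0.1, 0.05, 0.025):
    one_sided = (u(h) - u(0.0)) / h         # first-order: error ~ O(h)
    ghost_pt = (u(h) - u(-h)) / (2.0 * h)   # ghost-point style: error ~ O(h^2)
    print(h, abs(one_sided - 1.0), abs(ghost_pt - 1.0))
```

The one-sided error shrinks only linearly with h while the centered error shrinks quadratically, which is one common motivation for the algorithm modification the text describes.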

Standard errors and confidence intervals for functions of model parameters can be found using expectation theory, in the case of a linear function, or using the delta method (which is also sometimes called propagation of errors), in the case of a nonlinear function (Rice, 1988). Begin by assuming that θ̂ is the estimator for θ and Σ is the variance-covariance matrix for θ̂. For a linear combination of observed model parameters... [Pg.106]
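The delta method for a nonlinear function can be sketched with the gradient of the function in place of the linear coefficients: Var(f(θ̂)) ≈ gᵀ Σ g with g = ∇f evaluated at the estimate. The ratio function and all numbers below are illustrative assumptions, not from the source:

```python
import numpy as np

# Delta method for a nonlinear function of estimated parameters.
# Hypothetical example: f(theta) = theta1 / theta2 (a ratio of parameters).
theta = np.array([2.0, 4.0])              # parameter estimates
Sigma = np.array([[0.01, 0.002],          # variance-covariance matrix
                  [0.002, 0.04]])

# Gradient of f at the estimate: [1/theta2, -theta1/theta2^2]
g = np.array([1.0 / theta[1], -theta[0] / theta[1] ** 2])

f = theta[0] / theta[1]
var_f = g @ Sigma @ g                     # first-order (delta method) variance
se_f = np.sqrt(var_f)
ci = (f - 1.96 * se_f, f + 1.96 * se_f)   # approximate 95% confidence interval
print(f, se_f, ci)
```

For a linear combination, g would simply be the vector of coefficients and the same expression becomes exact, which is the expectation-theory case mentioned first.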





