
Sequential error model

Analysis of Incident Root Causes Using the Sequential Error Model... [Pg.81]

Samanta, P. K., et al., Multiple-Sequential Failure Model: Evaluation of and Procedures for Human Error Dependency, Brookhaven National Laboratory (BNL), May 1985. [Pg.470]

An important feature of the replication-mutation kinetics of Eq. (2) is its straightforward accessibility to justifiable model assumptions. As an example we discuss the uniform error model [18,19]. This refers to a molecule which is reproduced sequentially, i.e. digit by digit from one end of the (linear) polymer to the other. The basic assumption is that the accuracy of replication is independent of the particular site and the nature of the monomer at this position. Then, the frequency of mutation depends exclusively on the number of monomers that have to be exchanged in order to mutate from Ik to Ij, which is counted by the Hamming distance of the two strings, d(Ij,Ik) ... [Pg.12]
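
As a hedged illustration of this idea (not code from the cited source), the Python sketch below computes the probability of mutating from one sequence to another under a uniform error model: a per-digit copying accuracy q is assumed to be the same at every position, so the mutation frequency depends only on the Hamming distance between the two strings. The alphabet size and example sequences are assumptions for the sketch.

```python
# Sketch of the uniform error model: the probability of obtaining sequence Ij
# when copying sequence Ik depends only on the per-digit accuracy q and the
# Hamming distance d(Ij, Ik).  Alphabet size and sequences are illustrative.

def hamming_distance(seq_a: str, seq_b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

def mutation_probability(seq_k: str, seq_j: str, q: float, kappa: int = 4) -> float:
    """Q_jk = q**(n - d) * ((1 - q) / (kappa - 1))**d for a kappa-letter alphabet,
    assuming an erroneous digit is replaced uniformly by one of the kappa - 1
    alternatives (a common simplifying assumption)."""
    n = len(seq_k)
    d = hamming_distance(seq_j, seq_k)
    return q ** (n - d) * ((1.0 - q) / (kappa - 1)) ** d

print(mutation_probability("GAUC", "GACC", q=0.99))  # single-digit mutant
```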

Within NONMEM, a generalized least-squares-like (GLS-like) estimation algorithm can be developed by iterating separate, sequential models. In the first step, the model is fit using one of the estimation algorithms (FO-approximation, FOCE, etc.). The individual predicted values are saved in a data set that is formatted the same as the input data set, i.e., the output data set contains the original data set plus one additional variable: the individual predicted values. The second step then models the residual error based on the individual predicted values obtained in the previous step. So, for example, suppose the residual error was modeled as a proportional error model... [Pg.230]
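
The following rough Python sketch (it is not NONMEM code; the mono-exponential structural model, data, and parameter values are assumptions) illustrates the two-step, GLS-like idea: fit the structural model, save the individual predictions, then refit with weights from a proportional error model whose standard deviation scales with those predictions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mono-exponential structural model; the actual model in the
# cited text is not specified.
def structural_model(t, dose, k):
    return dose * np.exp(-k * t)

# --- illustrative data (assumed, not from the source) ---
t = np.linspace(0.5, 12.0, 10)
rng = np.random.default_rng(0)
true_pred = structural_model(t, 100.0, 0.3)
y = true_pred * (1.0 + 0.1 * rng.standard_normal(t.size))  # proportional noise

# Step 1: ordinary fit; save the individual predicted values.
p1, _ = curve_fit(structural_model, t, y, p0=[80.0, 0.2])
ipred = structural_model(t, *p1)

# Step 2: refit, weighting by a proportional error model,
# i.e. sigma_i proportional to the predictions from the previous step
# (the GLS-like iteration).
p2, _ = curve_fit(structural_model, t, y, p0=p1, sigma=ipred, absolute_sigma=False)
print("step 1 estimates:", p1)
print("step 2 (GLS-like) estimates:", p2)
```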

The error model used in minimization assumes that the residuals have zero mean and are normally distributed. The latter assumption can be violated, since not all of the deviations between the model and the data are of a stochastic nature. Analytical techniques, particularly the automation of off-line analysis such as gas and liquid chromatography and the development of online analytical techniques (UV, FTIR, flow and sequential injection analysis), suppress the random scattering in the data to a minimum, and beautiful experimental curves can be plotted. Still, many deviations appear between experimental and predicted data. The main reason is systematic deviations, which are easily recognized by graphical examination of the data sets, e.g. by plotting the residuals as a function of the dependent or independent variables. [Pg.446]
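
A minimal sketch of the graphical check described above: plotting residuals against the predicted variable makes systematic deviations visible as a trend rather than random scatter about zero. The data here are synthetic and purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative arrays; in practice these come from the fitted kinetic model.
predicted = np.linspace(0.1, 1.0, 50)
observed = (predicted
            + 0.05 * np.random.default_rng(1).standard_normal(50)
            + 0.1 * predicted**2)        # deliberate systematic component
residuals = observed - predicted

fig, ax = plt.subplots()
ax.scatter(predicted, residuals)
ax.axhline(0.0, color="k", linewidth=0.8)
ax.set_xlabel("predicted value")
ax.set_ylabel("residual (observed - predicted)")
ax.set_title("Random scatter about zero is expected;\n"
             "a trend indicates systematic model error")
plt.show()
```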

The second classification is the physical model. Examples are the rigorous modules found in chemical-process simulators. In sequential modular simulators, distillation and kinetic reactors are two important examples. Compared to relational models, physical models purport to represent the actual material, energy, equilibrium, and rate processes present in the unit. They rarely, however, include any equipment constraints as part of the model. Despite their complexity, adjustable parameters bearing some relation to theory (e.g., tray efficiency) are required such that the output is properly related to the input and specifications. These models provide more accurate predictions of output based on input and specifications. However, the interactions between the model parameters and database parameters compromise the relationships between input and output. The nonlinearities of equipment performance are not included and, consequently, significant extrapolations result in large errors. Despite their greater complexity, they should be considered to be approximate as well. [Pg.2555]

The applications of the SRK, GEMS, stepladder and sequential block diagram models to human error in process safety can be summarized as follows ... [Pg.81]

FIGURE 2.9. Sequential Model of Error Causation Chain (based on Rasmussen, 1982). [Pg.82]

APPENDIX 2C CASE STUDY ILLUSTRATING THE USE OF THE SEQUENTIAL MODEL OF ERROR IN INCIDENT ANALYSIS... [Pg.100]

Further links exist between the PIF concept and topics considered in previous chapters. In Chapter 2 the sequential model developed by Rasmussen to represent the error process from its initiator to its consequences was described (Figure 2.9). In this process, the PIFs were shown as being involved in both the initiating event and the internal error mechanisms. In the application example of the model in Appendix 2C, the PIF which constituted the initiating event was the distracting environment, and poor ergonomics of the panel was a PIF which influenced the internal error mechanism. [Pg.104]

The ability of the sequential design to discriminate among the rival models should be examined as a function of the standard error in the measurements (σε). For this reason, artificial data were generated by integrating the governing ODEs for Model 1 with "true" parameter values k1=0.31, k2=0.18, k3=0.55 and k4=0.03 and by adding noise to the noise-free data. The error terms are taken from independent normal distributions with zero mean and constant standard deviation (σε). [Pg.215]
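
The governing ODEs of Model 1 are not given in this excerpt, so the sketch below uses a hypothetical reaction scheme only to illustrate the data-generation procedure: integrate the model with the quoted "true" rate constants, then add zero-mean Gaussian noise with constant standard deviation σε.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quoted "true" rate constants; the kinetic scheme itself is an assumption.
k1, k2, k3, k4 = 0.31, 0.18, 0.55, 0.03

def rhs(t, y):
    ca, cb, _ = y                        # concentrations of species A, B, C
    return [-(k1 + k3) * ca,
            k1 * ca - (k2 + k4) * cb,
            k3 * ca + k2 * cb]

t_eval = np.linspace(0.0, 10.0, 21)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], t_eval=t_eval)

# Add zero-mean Gaussian noise with constant standard deviation sigma_e.
sigma_e = 0.02
rng = np.random.default_rng(42)
noisy = sol.y + rng.normal(0.0, sigma_e, size=sol.y.shape)
print(noisy[:, :3])
```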

One weakness of some multimedia models that must be considered by the user is inconsistency of time scales. For example, if monthly averaged air concentrations are used to generate rainout values as fifteen-minute interval inputs to a watershed model, large errors can obviously occur. The Air-Land-Water Simulation (ALWAS) developed by Tucker and co-workers (12) overcomes this limitation by allowing sequential air quality outputs to provide deposition data to drive a soil model, which in turn is coupled to a surface water model. [Pg.98]

Commercial process simulators mainly use a form of SQP. To use LP, you must balance the nonlinearity of the plant model (constraints) and the objective function with the error in approximation of the plant by linear models. Infeasible path, sequential modular SQP has proven particularly effective. [Pg.525]
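
For orientation only, here is a minimal example of SQP-type optimization using SciPy's SLSQP method on a toy constrained problem; commercial simulators implement far more elaborate, structure-exploiting variants, and the objective and constraint below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonlinear objective and inequality constraint, purely illustrative.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [{"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}]  # x0*x1 >= 1
bounds = [(0.0, None), (0.0, None)]

result = minimize(objective, x0=np.array([2.0, 0.5]),
                  method="SLSQP", bounds=bounds, constraints=constraints)
print(result.x, result.fun)
```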

In such statistically designed experiments one wants to exclude the random effects of a limited number of features by varying them systematically, i.e. by varying the so-called factors. At the same time, the order in which the experiments are performed should be randomized to avoid systematic errors in experimentation. In the other basic type of experiment, sequential experiments, the set-up of each experiment depends on the results obtained from the previous experiments. For help in deciding which design is preferable, see Section 3.6. In principle, statistical design is one recommendation of how to perform the experiments. The design should always be based on an exact question or on a working hypothesis, which in turn are often based on models. [Pg.71]
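
A small sketch of the first point: generating a full two-level factorial design and randomizing the run order to guard against systematic errors in experimentation. The factor names are assumptions.

```python
import itertools
import random

# Hypothetical two-level factors; names and levels are illustrative only.
factors = {"temperature": (-1, +1), "pH": (-1, +1), "catalyst": (-1, +1)}

# Full 2**3 factorial design ...
design = [dict(zip(factors, levels))
          for levels in itertools.product(*factors.values())]

# ... with the run order randomized to avoid systematic experimentation errors.
random.seed(7)
random.shuffle(design)
for run_no, settings in enumerate(design, start=1):
    print(run_no, settings)
```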

Implementation of dynamic simulators has led to interesting research issues. For example, many have been implemented in a sequential modular format. To carry out the integration correctly, that is, to assess the integration errors properly, each unit model can receive as input a current estimate for the state variables (variables x), the unit input stream variables, and any independent input variables specified versus time... [Pg.516]

The analysis of a supersaturated design is usually conducted by using some type of sequential model-fitting procedure, such as stepwise regression. Abraham et al. (1999) and Holcomb et al. (2003) have studied the performance of analysis methods for supersaturated designs. Techniques such as stepwise model fitting and all-possible-regressions type methods may not always produce consistent and reliable results. Holcomb et al. (2003) showed that the performance of an analysis technique in terms of its type I and type II error rate can depend on several factors,... [Pg.17]
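
To make the idea of sequential model fitting concrete, here is a minimal forward-stepwise selection sketch on synthetic supersaturated-design data, choosing at each step the factor that most reduces the residual sum of squares; it is not the procedure used by the cited authors, and the stopping rule is an arbitrary assumption.

```python
import numpy as np

# Minimal forward-stepwise (sequential) model-fitting sketch on synthetic data.
rng = np.random.default_rng(0)
n_runs, n_factors = 14, 24            # supersaturated: more factors than runs
X = rng.choice([-1.0, 1.0], size=(n_runs, n_factors))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + rng.normal(0.0, 0.5, n_runs)

def rss(cols):
    """Residual sum of squares of a least-squares fit on the chosen columns."""
    A = np.column_stack([np.ones(n_runs)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ beta) ** 2))

selected, remaining = [], list(range(n_factors))
for _ in range(4):                    # add at most four terms
    best = min(remaining, key=lambda c: rss(selected + [c]))
    if selected and rss(selected + [best]) > 0.95 * rss(selected):
        break                         # stop when the improvement is small
    selected.append(best)
    remaining.remove(best)
print("selected factors:", selected)
```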

Leave-one-out error rates for sequential forward selection 276 models... [Pg.264]

A second approach has been to use the regression procedure to obtain an estimate for the error structure of the data. A sequential regression is employed in which the parameters for an assumed error structure model, e.g., equations (21.19) and (21.20), are obtained directly from regression to the data. In more recent work, the error variance model was replaced by... [Pg.419]
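
A hedged sketch of such a sequential regression, assuming an error-structure model of the form sigma_i = alpha + beta*|y_pred_i| (the actual equations (21.19) and (21.20) are not reproduced here): the model fit and the error-variance fit are refined in alternation, with the variance model supplying the weights for the next regression pass. Data and model form are illustrative.

```python
import numpy as np

# Sequential regression with an assumed error-structure model
# sigma_i = alpha + beta * |y_pred_i|; data are synthetic.
rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 40)
y_true = 2.0 * x + 5.0
y = y_true + rng.normal(0.0, 0.2 + 0.05 * np.abs(y_true))

weights = np.ones_like(y)
for _ in range(5):                                    # alternate model / error fits
    A = np.column_stack([x, np.ones_like(x)])
    W = np.diag(weights)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted least squares
    y_pred = A @ coef
    resid = np.abs(y - y_pred)
    # Regress |residuals| on the assumed error-structure model (rough proxy
    # for the standard deviation), then reweight the next pass.
    B = np.column_stack([np.ones_like(y_pred), np.abs(y_pred)])
    alpha, beta = np.linalg.lstsq(B, resid, rcond=None)[0]
    sigma = np.clip(alpha + beta * np.abs(y_pred), 1e-6, None)
    weights = 1.0 / sigma**2
print("slope, intercept:", coef)
```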

A refined philosophical approach toward the use of impedance spectroscopy is outlined in Figure 23.1, where the triangle evokes the concept of an operational amplifier for which the potential of input channels must be equal. Sequential steps are taken until the model provides a statistically adequate representation of the data to within the independently obtained stochastic error structure. The different aspects that comprise the philosophy are presented in this section. [Pg.450]


