
Basic Error Model

The occurrence of measurement errors is related to the performance characteristics of the assay, primarily bias, imprecision, and specificity as defined above. The overall influence of these factors may be incorporated in an error model. [Pg.368]

If the method is a reference method without bias and nonspecificity, the target value equals the true value, i.e., Target value = True value. [Pg.368]

Given a field method, some bias or nonspecificity may be present, and the target and true values are likely to differ somewhat. For example, if we measure creatinine with a chromogenic method, which co-determines some other components with creatinine in serum, we will likely obtain a higher target value than when we use a specific isotope-dilution mass spectrometry (ID-MS) reference method (i.e., the chromogenic method has a positive bias). [Pg.368]

Figure: Distribution of target value deviations from the true value for a population of patient samples. [Pg.369]

For example, the chromogenic creatinine method may on average determine creatinine values 15% too high, which then constitutes the mean bias. For individual samples, the particular bias may be slightly higher or lower than 15% depending on the actual chromogenic content. [Pg.369]
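This two-level structure (a sample-specific bias on the target value, plus analytical imprecision around it) is easy to simulate. The sketch below is a minimal Python illustration; the numbers (15% mean bias, a 5% SD for the sample-to-sample bias variation, and an analytical SD of 4 units) are hypothetical, chosen only to mirror the creatinine example above.

```python
import random

def simulate_measurement(true_value, mean_bias=0.15, bias_sd=0.05, sd_a=4.0):
    """Simulate one measurement under the basic error model.

    target = true * (1 + sample-specific bias)   (bias + nonspecificity)
    measured = target + analytical error         (imprecision, SD = sd_a)
    All parameter values are illustrative, not taken from the source text.
    """
    sample_bias = random.gauss(mean_bias, bias_sd)   # bias varies per sample
    target = true_value * (1.0 + sample_bias)        # target deviates from truth
    return target + random.gauss(0.0, sd_a)          # add analytical imprecision

# Example: a sample with a true creatinine value of 90 (arbitrary units)
print(simulate_measurement(90.0))
```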


Figure 14-7 Outline of the basic error model for measurements by a field method. Upper part: the distribution of repeated measurements of the same sample, representing a normal distribution around the target value (vertical line) of the sample with a dispersion corresponding to the analytical standard deviation, σA. Middle part: schematic outline of the dispersion of target value deviations from the respective true values for a population of patient samples; a distribution of arbitrary form is displayed. The vertical line indicates the mean of the distribution. Lower part: the distance from zero to the mean of the target value deviations from the true values represents the mean bias of the method.
In the previous chapter, a comprehensive description was provided, from four complementary perspectives, of the process of how human errors arise during the tasks typically carried out in the chemical process industry (CPI). In other words, the primary concern was with the process of error causation. In this chapter the emphasis will be on the why of error causation. In terms of the system-induced error model presented in Chapter 1, errors can be seen as arising from the conjunction of an error-inducing environment, the intrinsic error tendencies of the human, and some initiating event which triggers the error sequence from this unstable situation (see Figure 1.5, Chapter 1). This error sequence may then go on to lead to an accident if no barrier or recovery process intervenes. Chapter 2 describes in detail the characteristics of the basic human error tendencies. Chapter 3 describes factors which combine with these tendencies to create the error-likely situation. These factors are called performance-influencing factors or PIFs. [Pg.102]

Most importantly, has the modeler conceptualized the reaction process correctly? The modeler defines a reaction process on the basis of a concept of how the process occurs in nature. Many times the apparent failure of a calculation indicates a flawed concept of how the reaction occurs rather than an error in a chemical analysis or in the thermodynamic data. The failed calculation, in this case, is more useful than a successful one because it points out a basic error in the modeler's understanding. [Pg.26]

Table 5.2 Mean unsigned errors (kcal mol−1) in predicted heats of formation from basic NDDO models...
An important feature of the replication-mutation kinetics of Eq. (2) is its straightforward accessibility to justifiable model assumptions. As an example we discuss the uniform error model [18,19]. This refers to a molecule which is reproduced sequentially, i.e., digit by digit from one end of the (linear) polymer to the other. The basic assumption is that the accuracy of replication is independent of the particular site and the nature of the monomer at this position. Then, the frequency of mutation depends exclusively on the number of monomers that have to be exchanged in order to mutate from I_k to I_j, which is counted by the Hamming distance of the two strings, d(I_j, I_k): Q_jk = q^(n − d(I_j,I_k)) (1 − q)^(d(I_j,I_k)), where n is the chain length and q the single-digit replication accuracy. [Pg.12]
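Under these assumptions the mutation frequency is straightforward to compute. A minimal Python sketch (the sequences and the single-digit accuracy q below are hypothetical inputs):

```python
def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def mutation_frequency(parent: str, child: str, q: float) -> float:
    """Uniform error model: Q_jk = q**(n - d) * (1 - q)**d,
    i.e., every digit is copied correctly with the same probability q."""
    n = len(parent)
    d = hamming(parent, child)
    return q ** (n - d) * (1.0 - q) ** d

# Example: one mismatch between a four-digit parent and child sequence
print(mutation_frequency("ACGU", "ACGA", q=0.99))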

It is pertinent to ask at this point how well the foregoing predictions work. In fact, there is a basic error in the Carothers model because an infinite network forms when there is one cross-link per weight average molecule (Section 7.9). That is... [Pg.174]

This simple example is very instructive and shows the basic key features of classical error correction. First, one has to assume a particular and physically motivated error model; one cannot fight a completely unknown enemy. Then, one applies the following generic scheme. First, one encodes information on well-chosen states of an extended and redundant system of bits. For instance, in the repetition code, the original bit of information is encoded on two particular states of a three-bit system. The idea is clear: redundancy prevents information from serious damage due to the errors and assures very likely recovery (let us emphasize that one uses the same kind of trick in everyday life when asking someone to repeat a sentence or a question to make sure of every word). Finally, after the transmission through the noisy channel, one decodes... [Pg.140]
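To make the repetition code concrete, here is a minimal sketch, assuming a binary symmetric channel in which each bit is flipped independently with a hypothetical probability p:

```python
import random

def encode(bit: int) -> list[int]:
    """Repetition code: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def channel(codeword, p=0.1):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(received) -> int:
    """Majority vote: recovery succeeds unless two or more bits were flipped."""
    return int(sum(received) >= 2)

sent = 1
print(decode(channel(encode(sent))))  # usually recovers 1
```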

Based on the numerical experiments, we suggest the following two basic assumptions to be used in the error modeling ... [Pg.101]

A residual error model should, by necessity, be part of the basic PM model. It is useful to start with a combination of additive and proportional error models. If the data do not support either of the error models, the estimate of one of the errors will tend toward zero. As a note of caution, if the base model has not been optimized, especially the structural model component, an initial estimate of an infinitely small value for the additive component of the residual error model may lead to an erroneous elimination of that component of the error model. This should be avoided. It is important to let the nature of the data determine the type of error model to be used. For instance, radioactive decay may be better characterized with a power error model. [Pg.229]
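A minimal sketch of the combined additive and proportional residual error model described above (the names f_pred, sigma_prop, and sigma_add are illustrative, not from the source):

```python
import random

def observe(f_pred: float, sigma_prop: float, sigma_add: float) -> float:
    """Combined proportional + additive residual error model:
    y = f * (1 + eps_prop) + eps_add, with eps ~ N(0, sigma**2).
    If the data carry no additive component, sigma_add estimates near zero."""
    eps_prop = random.gauss(0.0, sigma_prop)
    eps_add = random.gauss(0.0, sigma_add)
    return f_pred * (1.0 + eps_prop) + eps_add

# Example: structural-model prediction of 10.0 observed with both error terms
print(observe(10.0, sigma_prop=0.1, sigma_add=0.2))
```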

- Data entered as ASCII, SAS data set, or any ODBC-compliant database file, and can be extracted from NONMEM data files. Covariates specified as variables. Library of structural PK/PD models available; custom models may be defined based on WinNonlin syntax; error models defined via equation editor. Dose events entered separately from concentration events.
- Data entered as ASCII, SAS data set, or any ODBC-compliant database file. Covariates specified as variables. Library of structural PK/PD models available; custom models may be defined based on Visual Basic syntax; error models defined via Windows interface... [Pg.330]

The basic Domino Model is inadequate for complex systems, and other models were developed (see Safeware [115], chapter 10), but the assumption that there is a single or root cause of an accident unfortunately persists, as does the idea of dominos (or layers of Swiss cheese) and chains of failures, each directly causing or leading to the next one in the chain. It also lives on in the emphasis on human error in identifying accident causes. [Pg.17]

There are several elaborations of the basic ARX model, where different disturbance models are introduced. These include well-known model types, such as ARMAX, Output-Error, and Box-Jenkins (Knudsen, 1994; Stoica et al., 1985; Van Overschee and De Moor, 1996). [Pg.185]
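For reference, these model structures differ only in how the disturbance enters; in standard shift-operator notation (a sketch of the textbook forms, with u the input, y the output, and e white noise):

```latex
\begin{align*}
\text{ARX:}          &\quad A(q)\,y(t) = B(q)\,u(t) + e(t) \\
\text{ARMAX:}        &\quad A(q)\,y(t) = B(q)\,u(t) + C(q)\,e(t) \\
\text{Output-Error:} &\quad y(t) = \frac{B(q)}{F(q)}\,u(t) + e(t) \\
\text{Box--Jenkins:} &\quad y(t) = \frac{B(q)}{F(q)}\,u(t) + \frac{C(q)}{D(q)}\,e(t)
\end{align*}
```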

The goal of calibration methods is to provide the DUT with an electrically pure connection to the test system terminals. This means that the signal at the test ports should have zero magnitude, no phase shift, and a characteristic impedance of Z0. Mathematically, it means placing an error model between the test setup and the DUT, so that it can account for any errors due to the testing device. Calibration is a procedure that basically quantifies these errors [33-36]. [Pg.98]
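As an illustration of such an error model in the simplest (one-port) case, the classic three-term model with directivity e00, source match e11, and reflection tracking e10e01 can be inverted to de-embed the DUT reflection coefficient. This is a generic sketch of that standard model, not the specific procedure of refs. [33-36]:

```python
def deembed_one_port(gamma_m: complex, e00: complex, e11: complex,
                     e10e01: complex) -> complex:
    """Invert the one-port three-term error model
        gamma_m = e00 + e10e01 * gamma / (1 - e11 * gamma)
    to recover the DUT reflection coefficient gamma from the measured gamma_m."""
    return (gamma_m - e00) / (e11 * (gamma_m - e00) + e10e01)

# The error terms would in practice come from measuring known standards
# (open, short, load); the values below are hypothetical.
print(deembed_one_port(0.3 + 0.1j, e00=0.02, e11=0.05, e10e01=0.98))
```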

Thus, the basic tests of the model indicate reasonable fit to the data, randomly distributed errors, and a linear relationship. Therefore, this model can be used to predict the behavior of the electrochemical potential. [Pg.92]

At a later stage, the basic model was extended to comprise several organic substrates. An example of the data fitting is provided by Figure 8.11, which shows a very good description of the data. The parameter estimation statistics (errors of the parameters and correlations of the parameters) were on an acceptable level. The model gave a logical description of all the experimentally recorded phenomena. [Pg.183]

In recent years some theoretical results have seemed to defeat the basic principle of induction, namely that no mathematical proof of the validity of the model can be derived. More specifically, the universal approximation property has been proved for different sets of basis functions (Hornik et al., 1989, for sigmoids; Hartman et al., 1990, for Gaussians) in order to justify the bias of NN developers toward these types of basis functions. This property basically establishes that, for every function, there exists a NN model that exhibits arbitrarily small generalization error. This property, however, should not be erroneously interpreted as a guarantee of small generalization error. Even though there might exist a NN that could... [Pg.170]

If basic assumptions concerning the error structure are incorrect (e.g., non-Gaussian distribution) or cannot be specified, more robust estimation techniques may be necessary. In addition to the above considerations, it is often important to introduce constraints on the estimated parameters (e.g., the parameters can only be positive). Such constraints are included in the simulation and parameter estimation package SIMUSOLV. Because of numerical inaccuracy, scaling of parameters and data may be necessary if the numerical values are of greatly differing order. Plots of the residuals, the differences between model and measured values, are very useful in identifying systematic or model errors. [Pg.114]
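As a minimal sketch of such a residual plot (the data and predictions below are hypothetical, and matplotlib is assumed to be available):

```python
import matplotlib.pyplot as plt

measured = [1.1, 2.0, 2.9, 4.2, 4.8]    # hypothetical observations
predicted = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical model values
residuals = [m - p for m, p in zip(measured, predicted)]

# Residuals scattered randomly about zero suggest no systematic model error;
# a trend or curvature in this plot points to a structural (model) error.
plt.scatter(predicted, residuals)
plt.axhline(0.0, linestyle="--")
plt.xlabel("Predicted value")
plt.ylabel("Residual (measured - predicted)")
plt.show()
```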

