
Ideal data

Ideally, data bases will have been developed within a company such that predetermined PIFs are associated with particular categories of task. If this is not the case, the analyst decides on a suitable set of PIFs. In this example, it is assumed that the main PIFs which determine the likelihood of error are time stress, level of experience, level of distractions, and quality of procedures. (See Section 5.3.2.6.)... [Pg.235]

The effectiveness of the above-described computational procedure was tested by generating an analytical ("ideal") data curve by calculating the isocyanate concentration as a function of time, assuming rate constants of k1 = 1.0/min and k2 = 1.0 L/mol/min and initial concentrations of blocked isocyanate and hydroxyl of 1.0 M. The objective function F, for various values of k1 and k2, was calculated for this "ideal" data, and a contour plot for constant values of F was generated and is shown in Figure 2. [Pg.244]
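
As an illustration, here is a minimal Python sketch of generating such an "ideal" curve and mapping the objective function over a grid of rate constants. The two-step rate law (first-order deblocking with k1, then second-order reaction of the free isocyanate with hydroxyl with k2) is an assumption for illustration; the source's exact kinetic model and objective function are not reproduced here.

```python
# Sketch: generate "ideal" isocyanate data and map F(k1, k2).
# The rate law below is an assumed two-step scheme, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, k1, k2):
    b, n, oh = y                      # blocked NCO, free NCO, hydroxyl (mol/L)
    return [-k1 * b,                  # first-order deblocking
            k1 * b - k2 * n * oh,     # free NCO formed, then consumed
            -k2 * n * oh]             # hydroxyl consumption

def nco_curve(k1, k2, t):
    sol = solve_ivp(rates, (t[0], t[-1]), [1.0, 0.0, 1.0],
                    t_eval=t, args=(k1, k2), rtol=1e-8)
    return sol.y[1]

t = np.linspace(0.0, 10.0, 50)        # minutes
ideal = nco_curve(1.0, 1.0, t)        # "ideal" data: k1 = 1/min, k2 = 1 L/mol/min

def objective(k1, k2):                # sum-of-squares misfit to the ideal curve
    return np.sum((nco_curve(k1, k2, t) - ideal) ** 2)

# Grid of F values for a contour plot analogous to Figure 2
k1s = np.linspace(0.5, 1.5, 21)
k2s = np.linspace(0.5, 1.5, 21)
F = np.array([[objective(a, b) for a in k1s] for b in k2s])
```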

The effect of this normalization procedure can be seen in the contour plot of Figure 11. The minimum, rather than being a well as in the procedure based on concentration, is now more of a valley in which a wide range of values of k1 and k2 will provide reasonable solutions to the equation. Values for k1 of from 0.8 to 1.3/min and for k2 of from 0.5 to 1.5 L/mol/min can result in answers with F = 0.0057. The trajectory of the minimization procedure is shown in Figure 11. The function rapidly finds the valley floor and then travels through the valley until it reaches the minimum. A similar trajectory is shown in Figure 12, in which the search is started from a different point. In the case of "ideal" data the procedure will still find the minimum along the valley floor. [Pg.250]
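
A self-contained sketch of the minimization behavior described above; valley() is a hypothetical stand-in for the normalized objective F (any point with k1·k2 near 1 lies near the valley floor), not the source's function.

```python
# Sketch: minimization trajectories through a valley-shaped objective,
# started from two different points as in Figures 11 and 12.
import numpy as np
from scipy.optimize import minimize

def valley(k):
    k1, k2 = k
    # Long, shallow valley along k1*k2 = 1, with a weak pull toward (1, 1)
    return (k1 * k2 - 1.0) ** 2 + 0.01 * (k1 - 1.0) ** 2

def fit(start):
    path = [np.asarray(start, dtype=float)]
    res = minimize(valley, start, method="Nelder-Mead",
                   callback=lambda xk: path.append(xk.copy()))
    return res.x, np.array(path)

for start in ([0.5, 1.5], [1.4, 0.6]):
    best, path = fit(start)
    # Both runs drop to the valley floor, then travel along it to (1, 1)
    print(start, "->", np.round(best, 3), f"({len(path)} steps)")
```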

As pointed out by Liebman et al., given a perfect model, an ideal data reconciliation scheme would use all information (process measurements) from the startup of the process until the current time. Unfortunately, such a scheme would necessarily result in an optimization problem of ever-increasing dimension. For practical implementation we can use a moving time window to reduce the optimization problem to manageable dimensions. A window approach was presented by Jang et al. (1986) and extended later by Liebman et al. (1992). [Pg.170]
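
A minimal sketch of the moving-window idea, assuming a placeholder reconcile() step; the Jang et al. and Liebman et al. schemes solve a model-based optimization over each window, for which the window mean stands in here.

```python
# Sketch: a moving time window keeps the reconciliation problem at a fixed,
# manageable size as measurements accumulate.
import numpy as np

def reconcile(window_meas):
    # Placeholder for the model-based optimization solved over the window
    return np.mean(window_meas)

def moving_window_reconciliation(measurements, window=20):
    estimates = []
    for t in range(len(measurements)):
        lo = max(0, t - window + 1)          # oldest measurement kept
        estimates.append(reconcile(measurements[lo:t + 1]))
    return np.array(estimates)

noisy = 5.0 + 0.3 * np.random.default_rng(0).standard_normal(200)
estimates = moving_window_reconciliation(noisy)
print(estimates[-5:])
```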

The choice of a data transfer protocol (middle layer) is also important to the operation of a federated information system. An ideal data transfer protocol would be architecture-independent, provide reliable data transfer over existing network protocols and, ideally, already be globally disseminated. Today (in 2003), the choice seems obvious: HTTP (HyperText Transfer Protocol). [Pg.248]
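
A minimal sketch of such an HTTP transfer using only the Python standard library; the endpoint URL and the JSON payload are hypothetical.

```python
# Sketch: retrieve a record from a federated data source over plain HTTP.
# Any HTTP-speaking peer works, which is the architecture-independence
# argued for above. The URL below is hypothetical.
import json
import urllib.request

url = "http://example.org/federated/compound/42"
with urllib.request.urlopen(url) as resp:
    record = json.loads(resp.read().decode("utf-8"))
print(record)
```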

For ideal data, all the samples are classified correctly and the table only has numbers on the diagonal. [Pg.245]

For ideal data, all of the K nearest neighbors for the training set samples belong to the correct known class. [Pg.245]

For ideal data, the values are consistent with known class membership. All samples that are known to belong to a given class are classified in that class while other samples are excluded. [Pg.262]
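
The ideal-data checks described in the three excerpts above (a purely diagonal confusion matrix, every training sample's K nearest neighbors carrying its known class, and class assignments consistent with known membership) can be sketched on a perfectly separable toy set:

```python
# Sketch: ideal-data diagnostics on a toy, perfectly separable set.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],     # class 0 cluster
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.1]])    # class 1 cluster
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(confusion_matrix(y, knn.predict(X)))   # numbers only on the diagonal

# Every training sample's K nearest neighbors carry its known class
_, idx = knn.kneighbors(X)
print(all((y[idx[i]] == y[i]).all() for i in range(len(X))))   # True
```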

The proposed tiers are not obligatory but contain extrapolation tools that can be used differently in a number of situations and, in some cases, in regulatory protocols in which certain combinations of extrapolation methods are prescribed as methods that must be used for the assessment, such as in the formal registration of pesticides. The proposed tiered system is based on a scientific classification of the available extrapolation method types. With ideal data and concepts for extrapolation, this scheme may be expected to yield reduced degrees of overestimation of risk when moving up the tiers from Tier-0 to Tier-4 (i.e., risks are more precisely estimated in the higher tiers). [Pg.320]

FIGURE 12.4 Plots of log10 of singular values vs. estimated concentration for RAFA. The bold line represents the correct factor for quantitation: (a) errorless, ideal data; (b) 2.5% relative error added to each sample; (c) 5.0% relative error added to each sample; (d) only one interferent, with 5.0% relative error added to each sample. [Pg.484]

An ideal data set is generated from this expression ... [Pg.285]

Whenever possible, line and bar graphs should be constructed with the use of computer graphing programs (CricketGraph, Excel, Lotus, etc.). Aside from the fact that they produce graphs that are uniform and visually attractive, they have the added capability of fitting non-ideal data to a best-fit line or polynomial equation. This operation of fitting a set of data to a best-fit line becomes extremely im-... [Pg.11]
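
A minimal sketch of the best-fit-line operation such programs perform, using ordinary least squares on synthetic, mildly non-ideal data:

```python
# Sketch: least-squares straight line through noisy (non-ideal) data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])      # roughly linear, with scatter

slope, intercept = np.polyfit(x, y, deg=1)     # best-fit line coefficients
print(f"best fit: y = {slope:.3f} x + {intercept:.3f}")
```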

Supply required accurate/up-to-date drawings and documents to the Team Leader (Table 6 provides a listing of ideal data requirements for a facility or system review during the design phase of the project). [Pg.14]

Helliwell, J. R., Ealick, S., Doing, P., Irving, T., and Szebenyi, M. Towards the measurement of ideal data for macromolecular crystallography using synchrotron sources such as the ESRF. Acta Cryst. D49, 120-128 (1993). [Pg.278]

The conductivity of the electrolyte enters as a multiplicative parameter in the current density and was chosen as k = 1. Different types and structures of data were tested (ideal data and data subjected to errors). [Pg.180]

The recovery of the current density from data on the electric potential, which satisfies Laplace's equation, was studied. In experiments it is difficult or expensive to obtain many measurements, and therefore numerical integration cannot be performed. The recovered results showed high accuracy for the synthetic ideal function, both for ideal data and for data subjected to high errors. The method uses complex variable theory, in which one obtains a holomorphic function related to the electric potential, whose derivative is related to the current density. [Pg.183]
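
A minimal sketch of the complex-variable idea on a synthetic ideal case. Omega(z) = z**2 is an arbitrary holomorphic test potential, not the function used in the study: its real part satisfies Laplace's equation automatically, and its derivative gives the field and hence (with conductivity k = 1) the current density.

```python
# Sketch: for a holomorphic complex potential, the derivative recovers the
# field; a finite-difference check shows the high accuracy on ideal data.
def omega(z):
    return z ** 2               # arbitrary holomorphic test potential

def domega(z):
    return 2 * z                # analytic derivative (field; current density for k = 1)

z = 0.3 + 0.4j
h = 1e-6
numeric = (omega(z + h) - omega(z - h)) / (2 * h)   # central difference
print(abs(numeric - domega(z)))   # effectively zero: exact up to rounding
```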

The literature and commercial companies abound with computational solubility models. Many data sets have been studied, with many different descriptor sets, and using a multitude of statistical methods. It appears that diverse drug-like data sets are often predicted by our best methods with an RMSE of 0.8-1 log unit. This compares with an error in replicate measurements of approximately 0.5 log unit. A common view is that there is still room for improvement in the computational modeling of solubility. There are a number of suggestions that the quality control of the ideal data set is still lacking. This may be true for some literature data set compilations, but it is... [Pg.65]

But Priestley accused Lavoisier of achieving mathematical certainty in chemistry only by abandoning the method of analysis in favour of a synthetic style of inquiry and presentation in which oxygen and hydrogen combined together to form pure water. Besides dismissing the acidic solution that Priestley always obtained in this experiment as an impurity-effect due to the presence of nitrogen in the reactants, Lavoisier developed sophisticated experimental procedures and elaborate laboratory apparatus in order to eliminate all impurity-effects from his results and to produce the idealized data necessary for the formulation of "true equations in chemistry". Priestley criticized these idealized experiments for... [Pg.249]

Fig. 5.4 Competition analysis (idealized data). The labeled ligand, which is held at a constant concentration at or below its Kd, is displaced by increasing concentrations of the examined ligand. The x value of the curve's inflection point represents log IC50.
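
A minimal sketch of extracting log IC50 from such an idealized displacement curve by fitting a four-parameter logistic; the data values and starting guesses are synthetic.

```python
# Sketch: fit a four-parameter logistic to an idealized competition curve;
# the fitted inflection point is log IC50.
import numpy as np
from scipy.optimize import curve_fit

def logistic(logc, top, bottom, log_ic50, slope):
    return bottom + (top - bottom) / (1.0 + 10 ** (slope * (logc - log_ic50)))

logc = np.linspace(-10.0, -4.0, 13)                  # log [competitor]
signal = logistic(logc, 100.0, 0.0, -7.0, 1.0)       # idealized data

params, _ = curve_fit(logistic, logc, signal, p0=[100.0, 0.0, -6.0, 1.0])
print(f"log IC50 = {params[2]:.2f}")                 # recovers -7.00
```
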
Ideally, data should be taken during the course of the fermentation on gas rate, gas absorption, dissolved oxygen level, dissolved carbon dioxide level, yield of desired product, and other parameters which might influence the decision on the overall process. Figure 41 shows a typical set of data for this situation. [Pg.223]

In an ideal data-pipelining case, the first data record of a dataset has passed entirely through the branching network of tasks in a data pipeline even before the reading of... [Pg.427]
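
A minimal sketch of record-at-a-time pipelining with Python generators: the first record exits the pipeline while the rest of the dataset is still unread.

```python
# Sketch: each record flows through the whole task chain before the next
# record is pulled from the source (lazy, record-at-a-time evaluation).
def read(records):
    for r in records:
        yield r

def clean(stream):
    for r in stream:
        yield r.strip().lower()

def tag(stream):
    for i, r in enumerate(stream):
        yield (i, r)

dataset = ["  Alpha ", "BETA", " gamma"]
for item in tag(clean(read(dataset))):
    print(item)      # (0, 'alpha') is printed before 'BETA' enters clean()
```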

Verification of Uncertainty Factors. As summarized in several publications, uncertainty factors are currently recommended to estimate acceptable intakes for systemic toxicants (1,13,18). The selection of these factors in general reflects the uncertainty inherent in the use of different human or animal toxicity data (i.e., the weight of evidence plays a major role in the selection of uncertainty factors). For example, an uncertainty factor of less than 10, and perhaps even 1, may be used to estimate an ADI if sufficient data of chronic duration are available on a chemical's critical toxic effect in a known sensitive human population. That is to say, this ideal data base is sufficiently predictive of the population threshold dose; therefore, uncertainty factors are not warranted. An overall uncertainty factor of 10 might be used to estimate an acceptable intake based on chronic human toxicity data and would reflect the expected intraspecies variability in response to the adverse effects of a chemical in the absence of chemical-specific data. An overall uncertainty factor of 100 might be used to estimate ADIs with sufficient chronic animal toxicity data; this would reflect the expected intra- and interspecies variability in lieu of chemical-specific data. However, this overall factor of 100 might also be used with subchronic human data; in this case the 100-fold factor would reflect intraspecies variability and a subchronic exposure extrapolation. [Pg.457]
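
A minimal sketch of applying these uncertainty factors, assuming the conventional ADI = NOAEL / UF relation; the NOAEL value is illustrative.

```python
# Sketch: pick the overall uncertainty factor from the data basis described
# above and derive an ADI via the conventional ADI = NOAEL / UF relation.
UNCERTAINTY_FACTORS = {
    "chronic human, sensitive population": 1,   # ideal data base
    "chronic human": 10,                        # intraspecies variability
    "chronic animal": 100,                      # intra- + interspecies
    "subchronic human": 100,                    # intraspecies + duration
}

def adi(noael_mg_per_kg_day, data_basis):
    return noael_mg_per_kg_day / UNCERTAINTY_FACTORS[data_basis]

print(adi(5.0, "chronic animal"))   # 0.05 mg/kg/day (illustrative NOAEL)
```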

Fig. 12.13 Schematic representation of a variogram. The mean distances of point-pairs (h) and the corresponding variances of their measured values (γ(h)) are plotted. The relation between individual values diminishes with increasing distance, i.e. the γ(h) values increase. However, the structural dependence between the point-pairs is only valid up to a specific distance (range). From this point the variances tend to scatter around a certain value (sill), which represents the total variance of all values. The nugget effect, the apparent failure of the variogram to pass through the origin, indicates for a regionalized variable that it is highly variable over distances less than the sampling/cluster interval. A spherical model was fitted to the idealized data.
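
A minimal sketch of the spherical model referred to in the caption, with nugget, sill, and range parameters (the parameter values are illustrative):

```python
# Sketch: spherical variogram model; rises from the nugget and flattens at
# the sill once the lag distance h exceeds the range.
import numpy as np

def spherical_variogram(h, nugget, sill, vrange):
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)
    return np.where(h < vrange, g, sill)

h = np.linspace(0.0, 200.0, 9)
print(spherical_variogram(h, nugget=0.1, sill=1.0, vrange=120.0))
```
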
The minimum and maximum values obtained for laboratory 5 were decreased and increased, respectively, by approximately 20% to deliberately increase the range in the data for this sample, in order to present a situation where less-than-ideal data were present in a dataset. Raw data as reported by the laboratories and collated as recommended by ISO 5725-2 are given in Table 9.8. Laboratory averages and standard deviations for each laboratory i at level j are given in Tables 9.9 and 9.10, respectively. [Pg.314]
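
A minimal sketch of the per-laboratory statistics computed here, with laboratory 5's extremes widened by 20% as described; all replicate values are synthetic, not the Table 9.8 data.

```python
# Sketch: per-laboratory mean and standard deviation for one level of a
# collaborative study (ISO 5725-2 style), after widening laboratory 5.
import numpy as np

results = {                       # replicate results per laboratory (one level)
    1: [10.1, 10.3, 10.2],
    2: [9.8, 10.0, 9.9],
    3: [10.4, 10.2, 10.3],
    4: [9.9, 10.1, 10.0],
    5: [9.5, 10.0, 10.6],
}

lab5 = sorted(results[5])
lab5[0] *= 0.8                    # minimum decreased by ~20%
lab5[-1] *= 1.2                   # maximum increased by ~20%
results[5] = lab5

for lab, vals in results.items():
    print(lab, round(np.mean(vals), 3), round(np.std(vals, ddof=1), 3))
```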

