Big Chemical Encyclopedia


Costs of Data Generation

A lack of understanding of the cost of data generation on the part of designers and, especially, CAE program vendors. [Pg.960]

If the cost of specimen molding is added, the minimum cost of material data generation easily reaches $20,000 per material (Table 11.27). Even taking advantage of group test discounts, it is reasonable to expect the cost of data generation to be at least $15,000 per material. [Pg.961]
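How individual tests compound into a per-material figure can be sketched as a simple cost roll-up. The line items and the 25% group-test discount below are purely hypothetical illustrations, not values taken from Table 11.27:

```python
# Hypothetical per-material test costs in dollars (illustrative values,
# NOT taken from Table 11.27)
test_costs = {
    "tensile": 3000,
    "flexural": 2500,
    "creep": 6000,
    "fatigue": 5000,
    "thermal": 2000,
    "specimen_molding": 4000,  # molding cost added on top of testing
}

full_cost = sum(test_costs.values())   # undiscounted total per material
discounted = round(full_cost * 0.75)   # assumed 25% group-test discount

print(full_cost, discounted)  # → 22500 16875
```

Even a modest test matrix like this pushes past the $20,000 figure before discounts, and a sizable group discount still leaves the total above $15,000.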

There are various reasons why the cost of data generation is so high, chief among them being ... [Pg.961]

The validity of pharmacoeconomic data is invariably diminished by two important factors: a failure to account for all direct and indirect cost outcomes, and the difficulty of assigning costs to human experiences. In schizophrenia, validity is further reduced by the near-impossibility of conducting trials over several years, or even decades, so as to approach the reality of what is usually a lifelong illness. Given these observations, it would be imprudent to act on the minutiae of data generated in even the best-conducted trials, but it may well be appropriate to draw broad conclusions. [Pg.20]

Fourier transform ion-cyclotron resonance mass spectrometry (FT-ICR-MS) provides the highest mass resolution and accuracy, and enables the determination of the elemental compositions of metabolites, which facilitates annotation procedures for unknown compounds (95). Direct infusion analysis of plant extracts without prior separation and/or derivatization can be achieved; however, its use is very restricted due to the equipment cost, the difficulties in hardware handling, and the extremely large amount of data generated. Takahashi et al. applied this technique to elucidate the effects of the overexpression of the YK1 gene in stress-tolerant GM rice (96). More than 850 metabolites could be determined, and the metabolomic fingerprints of callus, leaf, and panicle differed significantly from one another. [Pg.366]

The public health implications required a rapid yet reliable assessment of these sites. A cost-effective procedure was developed to analyze for 2,3,7,8-TCDD down to 1 part per billion (ppb) in hundreds of soil samples ( ). From late 1982 through 1985, over 13,500 samples were analyzed by 50 different laboratories using procedures based on isotope dilution and high-resolution gas chromatography with low-resolution mass spectrometry. A comprehensive quality assurance program was implemented to assure the reliability of data generated during this massive monitoring effort (3). [Pg.260]
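The core isotope-dilution calculation behind such a procedure can be sketched as follows. The peak areas, spike amount, and sample mass are hypothetical, and the equal-response-factor assumption is a common simplification:

```python
def isotope_dilution_ppb(area_native, area_labeled, spike_ng, sample_g):
    """
    Estimate analyte concentration (ppb) by isotope dilution.
    Assumes the isotopically labeled internal standard has the same
    detector response factor as the native analyte (a simplification).
    1 ng analyte per g of soil = 1 ppb.
    """
    ng_native = (area_native / area_labeled) * spike_ng
    return ng_native / sample_g

# Hypothetical run: 10 g of soil spiked with 5 ng of labeled standard
print(isotope_dilution_ppb(area_native=4200, area_labeled=2100,
                           spike_ng=5.0, sample_g=10.0))  # → 1.0 (ppb)
```

Because the labeled standard is carried through extraction and cleanup with the native analyte, recovery losses largely cancel in the area ratio, which is what makes the method robust enough for a 50-laboratory program.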

Economic evaluations show that the cost of energy generation with CANDU 6 is competitive with that from other reactor types, and with that from coal plants in many areas of the world. Data supplied to the OECD by the Republic of South Korea, the only country in the world operating CANDU, PWR, and coal generating plants, indicate that CANDU 6 is competitive with both the larger PWR plants and with coal plants in that country. [Pg.179]

Oil and gas companies make a large investment in acquiring open-hole log data. Logging activities can represent between 5% and 15% of total well cost. It is therefore important to ensure that the cost of acquisition can be justified by the value of the information generated, and that thereafter the information is effectively managed. [Pg.131]

Classic parameter estimation techniques involve using experimental data to estimate all parameters at once. This yields an estimate of central tendency and a confidence interval for each parameter, and it also allows determination of a matrix of covariances between the parameters. To determine parameters and confidence intervals at a given level, the data requirements increase more than proportionally with the number of parameters in the model. Above some number of parameters, simultaneous estimation becomes impractical, and the experiments required to generate the data become impossible or unethical. For models at this level of complexity, parameters and covariances can instead be estimated for each subsection of the model. This assumes that the covariance between parameters in different subsections is zero. This assumption is unsatisfactory to some practitioners, and it (together with the complexity of such models and the difficulty and cost of building them) has been a criticism of highly parameterized PBPK and PBPD models. An alternate view assumes that decisions will be made that should be informed by as much information about the system as possible, that the assumption of zero covariance between parameters in differ-... [Pg.543]
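What simultaneous estimation delivers — central tendency, confidence intervals, and the full parameter covariance matrix — can be illustrated with an ordinary least-squares fit of a deliberately tiny two-parameter model on simulated data (the model, noise level, and sample size are all assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: y = b0 + b1*x + noise (a 2-parameter linear model)
x = np.linspace(0, 10, 50)
X = np.column_stack([np.ones_like(x), x])   # design matrix
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=x.size)

# Ordinary least squares: beta_hat = (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Residual variance and the parameter covariance matrix;
# the off-diagonal entries are the covariances between parameters
resid = y - X @ beta_hat
sigma2 = resid @ resid / (x.size - beta_hat.size)
cov_beta = sigma2 * XtX_inv

# Approximate 95% confidence half-widths (normal approximation)
ci = 1.96 * np.sqrt(np.diag(cov_beta))
print(beta_hat, ci)
```

With two parameters this is trivial, but the covariance matrix grows quadratically with the parameter count, which is why the data needed to pin down every entry grows more than proportionally and why subsection-by-subsection estimation (with its zero-cross-covariance assumption) becomes attractive for large PBPK/PBPD models.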





© 2024 chempedia.info