Big Chemical Encyclopedia


Statistical risk assessment models

When the project was started in 2002, European exposure factor data were scattered across numerous national and international institutions. ExpoFacts has created no new data, but instead compiled the existing data into one Internet database, where they can easily be found, screened, and downloaded. Data were collected from the EU countries, EU candidate countries, and EFTA countries. As a result, the ExpoFacts database contains data from 30 European countries. In addition to population time-use patterns and exposure route information, e.g., dietary statistics, the database contains socio-demographic and physiologic information to enable use of the database as a tool for population-wide exposure modeling and risk assessment. [Pg.325]

The shortcomings pertaining to the estimation of exposure which have been described are very serious, and these issues will have to be resolved before statistical risk assessment models can be utilized as the basis for regulatory decisions on the registration of pesticides. [Pg.441]

The next part of the procedure involves risk assessment. This includes a determination of the accident probability and the consequence of the accident, and is done for each of the scenarios identified in the previous step. The probability is determined using a number of statistical models generally used to represent failures. The consequence is determined using mostly fundamentally based models, called source models, to describe how material is ejected from process equipment. These source models are coupled with a suitable dispersion model and/or an explosion model to estimate the area affected and predict the damage. The consequence is thus determined. [Pg.469]
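The probability-times-consequence structure described above can be sketched in a few lines. This is a minimal illustration, not any specific regulatory method: the failure rates and affected areas are hypothetical, and a constant-failure-rate (Poisson) model is assumed for equipment failures.

```python
import math

def failure_probability(failure_rate_per_year: float, years: float) -> float:
    """Probability of at least one failure in a period, assuming a
    constant-failure-rate (Poisson) model commonly used for equipment."""
    return 1.0 - math.exp(-failure_rate_per_year * years)

def scenario_risk(failure_rate: float, years: float, consequence: float) -> float:
    """Risk for one scenario = accident probability * consequence."""
    return failure_probability(failure_rate, years) * consequence

# Hypothetical scenarios: (name, failure rate per year, affected area in m^2
# as it might come from a source + dispersion model). Values are illustrative.
scenarios = [
    ("flange leak", 1e-2, 200.0),
    ("vessel rupture", 1e-5, 5000.0),
]
for name, rate, area in scenarios:
    print(name, round(scenario_risk(rate, years=1.0, consequence=area), 3))
```

In a real study the consequence term would come from the source, dispersion, and explosion models named in the text rather than a fixed number.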

Uncertainty, on the other hand, represents lack of knowledge about factors such as adverse effects or contaminant levels, which may be reduced with additional study. Generally, risk assessments carry several categories of uncertainty, and each merits consideration. Measurement uncertainty refers to the usual error that accompanies scientific measurements; standard statistical techniques can often be used to express measurement uncertainty. A substantial amount of uncertainty is often inherent in environmental sampling, and assessments should address these uncertainties. There are likewise uncertainties associated with the use of scientific models, e.g., dose-response models, and models of environmental fate and transport. Evaluation of model uncertainty would consider the scientific basis for the model and available empirical validation. [Pg.406]

The following example is based on a risk assessment of di(2-ethylhexyl) phthalate (DEHP) performed by Arthur D. Little. The experimental dose-response data upon which the extrapolation is based are presented in Table II. DEHP was shown to produce a statistically significant increase in hepatocellular carcinoma when added to the diet of laboratory mice (14). Equivalent human doses were calculated using the methods described earlier, and the response was then extrapolated downward using each of the three models selected. The results of this extrapolation are shown in Table III for a range of human exposure levels from ten micrograms to one hundred milligrams per day. The risk is expressed as the number of excess lifetime cancers expected per million exposed population. [Pg.304]
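The downward extrapolation described above can be illustrated with one of the simplest dose-response models, the one-hit (linear no-threshold) model. The fitted point below (30% excess response at 300 mg/day) is a hypothetical stand-in, not the actual DEHP data from Table II, so the printed risks are illustrative only.

```python
import math

def one_hit_excess_risk(dose_mg_per_day: float, q: float) -> float:
    """One-hit model: P(d) = 1 - exp(-q * d), linear at low doses."""
    return 1.0 - math.exp(-q * dose_mg_per_day)

# Fit q from a single hypothetical high-dose point: 30% excess tumor
# incidence at a human-equivalent dose of 300 mg/day (illustrative values).
observed_dose, observed_risk = 300.0, 0.30
q = -math.log(1.0 - observed_risk) / observed_dose

# Extrapolate downward over the exposure range mentioned in the text
# (ten micrograms to one hundred milligrams per day), expressing risk as
# excess lifetime cancers per million exposed.
for dose in (0.01, 0.1, 1.0, 10.0, 100.0):
    per_million = one_hit_excess_risk(dose, q) * 1e6
    print(f"{dose:8.2f} mg/day -> {per_million:10.1f} excess cancers per million")
```

Other extrapolation models (e.g., multistage or probit) fitted to the same high-dose point can give low-dose risks differing by orders of magnitude, which is why several models were compared in the assessment.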

Typically extrapolations of many kinds are necessary to complete a risk assessment. The number and type of extrapolations will depend, as we have said, on the differences between condition A and condition B, and on how well these differences are understood. Once we have characterized these differences as well as we can, it becomes necessary to identify, if at all possible, a firm scientific basis for conducting each of the required extrapolations. Some, as just mentioned, might be susceptible to relatively simple statistical analysis, but in most cases we will find that statistical methods are inadequate. Often, we may find that all we can do is to apply an assumption of some sort, and then hope that most rational souls find the assumption likely to be close to the truth. Scientists like to be able to claim that the extrapolation can be described by some type of model. A model is usually a mathematical or verbal description of a natural process, which is developed through research, tested for accuracy with new and more refined research, adjusted as necessary to ensure agreement with the new research results, and then used to predict the behavior of future instances of the natural process. Models are refined as new knowledge is acquired. [Pg.212]

Data used to describe variation are ideally representative of some population of risk assessment interest. Representativeness was a focus of an earlier workshop on selection of distributions (USEPA 1998). The role of problem formulation is emphasized. Where representativeness is an issue, some adjustment of the data may be possible, perhaps based on a mechanistic or statistical model. Statistical random-effects models may be useful in situations where the model includes distributions among as well as within populations. However, depending on the assessment tier, simpler approaches may be adequate, such as attempting to characterize quantitatively the consequences of assuming the data to be representative. [Pg.39]
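The among- and within-population structure that a random-effects model captures can be sketched by direct simulation. All distributional choices and parameter values below are hypothetical; the point is only that total variance decomposes into a between-population and a within-population component.

```python
import random

random.seed(1)

# Hypothetical two-level (random-effects) structure: each population has
# its own mean exposure drawn from a between-population distribution, and
# individuals vary around their population mean.
GRAND_MEAN, BETWEEN_SD, WITHIN_SD = 2.0, 0.5, 1.0

def sample_individual() -> float:
    pop_mean = random.gauss(GRAND_MEAN, BETWEEN_SD)  # among populations
    return random.gauss(pop_mean, WITHIN_SD)         # within a population

draws = [sample_individual() for _ in range(10_000)]
mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
# Total variance should approach BETWEEN_SD**2 + WITHIN_SD**2 = 1.25
print(round(mean, 2), round(var, 2))
```

Fitting such a model to real data (rather than simulating from it) would use mixed-model or hierarchical Bayesian software, but the variance decomposition is the same.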

Bayesian statistics are applicable to analyzing uncertainty in all phases of a risk assessment. Bayesian or probabilistic induction provides a quantitative way to estimate the plausibility of a proposed causality model (Howson and Urbach 1989), including the causal (conceptual) models central to chemical risk assessment (Newman and Evans 2002). Bayesian inductive methods quantify the plausibility of a conceptual model based on existing data and can accommodate a process of data augmentation (or pooling) until sufficient belief (or disbelief) has been accumulated about the proposed cause-effect model. Once a plausible conceptual model is defined, Bayesian methods can quantify uncertainties in parameter estimation or model predictions (predictive inferences). Relevant methods can be found in numerous textbooks, e.g., Carlin and Louis (2000) and Gelman et al. (1997). [Pg.71]
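The data-pooling process described above can be made concrete with a toy Bayesian update over two competing conceptual models. The models and the per-study likelihood values are hypothetical; the mechanics (multiply prior by likelihood, renormalize, repeat as data accumulate) are the standard Bayes' rule.

```python
# Hypothetical competing causal models for an observed effect. Each batch
# of data supplies a likelihood for the observation under each model, and
# Bayes' rule accumulates belief as studies are pooled.
def bayes_update(prior: dict, likelihood: dict) -> dict:
    unnorm = {m: prior[m] * likelihood[m] for m in prior}
    total = sum(unnorm.values())
    return {m: v / total for m, v in unnorm.items()}

belief = {"chemical A causes effect": 0.5, "confounder causes effect": 0.5}
studies = [  # illustrative likelihoods from two successive studies
    {"chemical A causes effect": 0.8, "confounder causes effect": 0.3},
    {"chemical A causes effect": 0.7, "confounder causes effect": 0.4},
]
for likelihood in studies:
    belief = bayes_update(belief, likelihood)
print({m: round(p, 3) for m, p in belief.items()})
```

Each update sharpens the plausibility ranking; with enough pooled data the posterior concentrates on one conceptual model, which is the "sufficient belief or disbelief" criterion mentioned in the text.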

A probabilistic risk assessment (PRA) deals with many types of uncertainties. In addition to the uncertainties associated with the model itself and model input, there is also the meta-uncertainty about whether the entire PRA process has been performed properly. Employment of sophisticated mathematical and statistical methods may easily convey a false impression of accuracy, especially when numerical results are presented with a high number of significant figures. But those who produce PRAs, and those who evaluate them, should exert caution: there are many possible pitfalls, traps, and potential swindles that can arise. Because of the potential for generating seemingly correct results that are far from the intended model of reality, it is imperative that the PRA practitioner carefully evaluate not only model input data but also the assumptions used in the PRA, the model itself, and the calculations inherent within the model. This chapter presents information on performing PRA in a manner that will minimize the introduction of errors associated with the PRA process. [Pg.155]
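A minimal Monte Carlo propagation shows both the mechanics of a PRA input-uncertainty analysis and the significant-figures caution raised above. The risk model and every input distribution here are hypothetical placeholders.

```python
import random
import statistics

random.seed(42)

# Toy PRA: propagate input uncertainty through a simple intake-style
# risk model, risk = concentration * intake / body weight.
# All distributions and parameters are illustrative assumptions.
def one_trial() -> float:
    conc = random.lognormvariate(0.0, 0.5)  # mg/L
    intake = random.gauss(2.0, 0.3)         # L/day
    weight = random.gauss(70.0, 10.0)       # kg
    return conc * intake / weight           # mg/kg/day

trials = [one_trial() for _ in range(50_000)]
mean = statistics.mean(trials)
p95 = sorted(trials)[int(0.95 * len(trials))]

# Report with only two significant figures: quoting more digits would
# suggest precision the inputs cannot support (the "false impression of
# accuracy" warned about in the text).
print(f"mean = {mean:.2g} mg/kg/day, 95th percentile = {p95:.2g}")
```

Note that none of this guards against the meta-uncertainty the excerpt emphasizes: a wrong model propagated with perfect Monte Carlo technique still yields a confidently wrong answer.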

Finally, the diversity of extrapolation techniques relates to the diversity of technical solutions that have been defined in the face of the various extrapolation problems. Methods may range from simple to complex, or from empirical-statistical methods that describe sets of observations (but do not aim to explain them) to mechanism-based approaches (in which a hypothesized mechanism guides the derivation of the extrapolation method). In addition, they may range from those routinely accepted in formal risk assessment frameworks to unique problem-specific approaches, and from laboratory-based extrapolations consisting of one or more kinds of modeling to physical experiments that are set up to mimic the situation of concern (with the aim of reducing the need for extrapolation modeling). [Pg.283]

For food allergens, validated animal models for dose-response assessment are not available, and human studies (double-blind placebo-controlled food challenges [DBPCFCs]) are the standard way to establish thresholds. It is practically impossible to establish the real population thresholds this way. Such a population threshold can be estimated, but the estimate is associated with major statistical and other uncertainties of low-dose extrapolation and of patient recruitment and selection. In fact, the uncertainties are of such an order of magnitude that a reliable estimate of population thresholds is currently not possible. The result of the dose-response assessment can also be described as a threshold distribution rather than a single population threshold. Such a distribution can effectively be used in probabilistic modeling as a tool in quantitative risk assessment (see Section 15.2.5)... [Pg.389]
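The threshold-distribution idea can be sketched by simulation: sample an individual threshold and an exposure dose per eating event, and count how often the dose exceeds the threshold. Both lognormal distributions and all parameter values below are hypothetical, not derived from DBPCFC data.

```python
import random

random.seed(7)

# Hypothetical probabilistic allergen risk model: an individual reacts
# when the dose in one eating occasion exceeds that individual's
# threshold. Distribution parameters are illustrative assumptions.
N = 100_000
reactions = 0
for _ in range(N):
    threshold = random.lognormvariate(3.0, 1.5)  # individual threshold, mg protein
    dose = random.lognormvariate(0.0, 1.0)       # dose in one eating event, mg
    if dose > threshold:
        reactions += 1

print(f"predicted reaction probability per eating event: {reactions / N:.3%}")
```

This is exactly the use of a threshold distribution "rather than a single population threshold": no single cutoff is needed, only the overlap between the dose and threshold distributions.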


One problem encountered when assessing exposure of human populations to contaminated land is spatial heterogeneity of pollution. To overcome this problem, Gay and Korre (2006) propose a combination of spatial statistical methods for mapping soil concentrations and probabilistic human health risk assessment methods. They applied geostatistical methods to map As concentrations in soil. Subsequently, an age-stratified human population was mapped across the contaminated area, and the intake of As by individuals was calculated using a modified version of the Contaminated Land Exposure Assessment (CLEA) model. This approach allowed a... [Pg.32]
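The age-stratified intake step can be sketched with a CLEA-style soil-ingestion calculation. The ingestion rates, body weights, and soil concentration below are hypothetical illustrations, not the actual CLEA defaults or the mapped values from the study.

```python
# Illustrative age-stratified intake in the spirit of a CLEA-type
# exposure model. All parameter values are hypothetical assumptions.
def daily_intake(soil_conc_mg_kg: float, ingestion_g_day: float,
                 body_weight_kg: float) -> float:
    """Average daily As intake from soil ingestion, in mg/kg bw/day."""
    return soil_conc_mg_kg * (ingestion_g_day / 1000.0) / body_weight_kg

strata = {  # age group: (soil ingestion g/day, body weight kg) -- illustrative
    "child 1-6":  (0.10, 15.0),
    "child 7-16": (0.05, 40.0),
    "adult":      (0.02, 70.0),
}
soil_as = 45.0  # As concentration at one mapped grid cell, mg/kg (illustrative)
for group, (ingestion, weight) in strata.items():
    print(group, f"{daily_intake(soil_as, ingestion, weight):.2e} mg/kg bw/day")
```

In the combined approach, the fixed `soil_as` value would be replaced by the geostatistically mapped concentration at each location, giving a spatially resolved intake for each age stratum.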

Random error can over- or underestimate risk and is generally not as severe as bias. Moreover, the magnitude of random error can be estimated with statistical techniques. Assessment of confounding, synergism, or effect modification can be accomplished in the analytical phase (by stratification or multivariate modeling), provided sufficient data have been collected on those factors. Restriction or randomization procedures can also be used in the design phase to minimize confounders. [Pg.230]
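Assessment of confounding by stratification can be shown with a classic comparison of crude versus stratum-specific risk ratios. All counts below are hypothetical, constructed so that the stratification factor (smoking) confounds the crude estimate.

```python
# Illustrative stratified analysis: compare the crude risk ratio with
# stratum-specific risk ratios (all counts are hypothetical). A crude
# estimate far from the stratum-specific ones suggests confounding by
# the stratification factor.
def risk_ratio(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    return (exposed_cases / exposed_n) / (unexposed_cases / unexposed_n)

# counts per stratum: (cases_exposed, n_exposed, cases_unexposed, n_unexposed)
strata = {
    "smokers":     (30, 100, 15, 50),
    "non-smokers": (5, 100, 10, 200),
}
crude = tuple(sum(s[i] for s in strata.values()) for i in range(4))
print("crude RR:", round(risk_ratio(*crude), 2))      # inflated by confounding
for name, counts in strata.items():
    print(name, "RR:", round(risk_ratio(*counts), 2))  # 1.0 in each stratum
```

Here the exposure has no effect within either stratum (both stratum-specific risk ratios equal 1.0), yet the crude risk ratio is well above 1 because smoking is associated with both exposure and outcome; stratification reveals the confounding.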







