Probabilistic method

1 Statistical methods based upon empirical data [Pg.49]

Statistical methods that are based upon analysis of empirical data, without prior assumptions about the type and parameters of distributions, are typically termed frequentist methods, although sometimes the term classical is used (e.g. Morgan & Henrion, 1990; Warren-Hicks & Butcher, 1996; Cullen & Frey, 1999). However, the term classical is sometimes associated with thought experiments (e.g. what happens with a roll of a die) as opposed to inference from empirical data (DeGroot, 1986). Therefore, we use the term frequentist. [Pg.49]

Frequentist methods are fundamentally predicated upon statistical inference based on the Central Limit Theorem. For example, suppose that one wishes to estimate the mean emission factor for a specific pollutant emitted from a specific source category under specific conditions. Because of the cost of collecting measurements, it is not practical to measure each and every such emission source, which would amount to a census of the actual population distribution of emissions. With limited resources, one would instead prefer to randomly select a representative sample of such sources. Suppose 10 sources were selected. The mean emission rate would be calculated from these 10 sources, and a probability distribution model could be fitted to the random sample of data. If this process were repeated many times, with a different set of 10 random samples each time, the results would vary. The variation in estimates of a given statistic, such as the mean, arising from random sampling is quantified by a sampling distribution. From sampling distributions, confidence intervals are obtained. Thus, the commonly used 95% confidence interval for the mean is a frequentist inference. [Pg.49]
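
As a hedged illustration of this idea, the Python sketch below draws repeated random samples of 10 emission measurements from a hypothetical "true" population, collects the mean of each sample to show the sampling distribution, and then computes the usual t-based 95% confidence interval from a single sample of 10. The lognormal population and all parameter values are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical "true" population of emission factors (assumption for illustration).
population = stats.lognorm(s=0.5, scale=10.0)

n = 10            # measurements per study, as in the text
n_repeats = 5000  # number of hypothetical repeated studies

# Sampling distribution of the mean: repeat the 10-source study many times.
sample_means = population.rvs(size=(n_repeats, n), random_state=rng).mean(axis=1)
print("spread of sample means (2.5th-97.5th percentile):",
      np.percentile(sample_means, [2.5, 97.5]))

# Frequentist 95% confidence interval for the mean from ONE sample of 10 sources,
# using the t distribution.
one_sample = population.rvs(size=n, random_state=rng)
mean = one_sample.mean()
sem = one_sample.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
print("95% CI for the mean from one sample:",
      (mean - t_crit * sem, mean + t_crit * sem))
```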

The ability to make a judgement regarding representativeness of data is closely related to data quality issues. For example, representativeness is closely associated with the data quality issue of appropriateness. The data quality issues of accuracy and integrity are closely related to quantification or characterization of variability and uncertainty. The data quality issue of transparency pertains to clear documentation that, in turn, can provide enough information for an analyst to judge the representativeness and characterize variability and uncertainty. [Pg.50]

There are often data sets used to estimate distributions of model inputs in which a portion of the data is missing because attempts at measurement fell below the detection limit of the measurement instrument. These data sets are said to be censored. Commonly used methods for dealing with such data sets are statistically biased. One example is replacing non-detected values with one-half of the detection limit. Such methods produce biased estimates of the mean and provide no insight into the population distribution from which the measured data are a sample. Statistical methods can be used to make inferences regarding both the observed and unobserved (censored) portions of an empirical data set. For example, maximum likelihood estimation can be used to fit parametric distributions to censored data sets, including the portion of the distribution that lies below one or more detection limits. Asymptotically unbiased estimates of statistics, such as the mean, can then be obtained from the fitted distribution. Bootstrap simulation can be used to estimate uncertainty in the statistics of the fitted distribution (e.g. Zhao & Frey, 2004). Imputation methods, such as... [Pg.50]
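
A minimal sketch of the maximum likelihood idea for a left-censored data set is shown below. It fits a lognormal distribution by maximizing a likelihood in which detected values contribute a density term and non-detects contribute the probability of falling below the detection limit. The data, the detection limit, and the choice of a lognormal form are all assumptions for illustration; this is not the Zhao and Frey (2004) implementation.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical detected values; four additional measurement attempts
# fell below a single detection limit of 1.0 (illustrative numbers).
detection_limit = 1.0
detected = np.array([1.4, 2.2, 3.1, 1.8, 5.0, 2.7])
n_censored = 4

def neg_log_likelihood(params):
    """Censored lognormal log-likelihood: density for detects,
    CDF at the detection limit for non-detects."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # keep sigma positive
    ll_detected = stats.lognorm.logpdf(detected, s=sigma, scale=np.exp(mu)).sum()
    ll_censored = n_censored * stats.lognorm.logcdf(detection_limit,
                                                    s=sigma, scale=np.exp(mu))
    return -(ll_detected + ll_censored)

result = optimize.minimize(neg_log_likelihood,
                           x0=[np.log(2.0), np.log(0.5)],
                           method="Nelder-Mead")
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
fitted = stats.lognorm(s=sigma_hat, scale=np.exp(mu_hat))
print("fitted mean of the full (observed + censored) distribution:", fitted.mean())
```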

5 Probabilistic methods. Other QSAR-like probabilistic approaches have also been developed for compound database mining. Binary QSAR (BQ) is discussed here as an example (Labute, 1999). BQ is based on Bayes' theorem of conditional probabilities... [Pg.35]
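
As a rough, hedged illustration of the underlying conditional-probability idea (not Labute's actual binary QSAR formulation), the snippet below applies Bayes' theorem to estimate the probability that a compound is active given that a binary descriptor is present. All numbers are assumptions for illustration only.

```python
# Bayes' theorem: P(active | descriptor) =
#     P(descriptor | active) * P(active) / P(descriptor)
p_active = 0.05                 # assumed prior probability of activity in the database
p_desc_given_active = 0.60      # assumed descriptor frequency among actives
p_desc_given_inactive = 0.10    # assumed descriptor frequency among inactives

p_desc = (p_desc_given_active * p_active
          + p_desc_given_inactive * (1.0 - p_active))
p_active_given_desc = p_desc_given_active * p_active / p_desc
print(f"P(active | descriptor present) = {p_active_given_desc:.3f}")  # ~0.24
```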

A description of a population's exposure should include an estimation of the variability. Individuals within the population are likely to be exposed to different doses or concentrations because of... [Pg.339]

Toxicological Risk Assessments of Chemicals: A Practical Guide [Pg.340]

Default Reference Values for Body Weights in Dermal Toxicity Studies [Pg.340]

Source: Modified from EC, 2003. Technical guidance document in support of Commission Directive 93/67/EEC on Risk Assessment for new notified substances, Commission Regulation (EC) No 1488/94 on Risk Assessment for existing substances and Directive 98/8/EC of the European Parliament and of the Council concerning the placing of biocidal products on the market. http://ecb.jrc.it/tgd. [Pg.340]

Default Reference Values for Ventilation Rates (Volume of Inhaled Air Per Unit Time) [Pg.340]


Comer, J. and Kjerengtroen, L. 1996. Probabilistic Methods in Design: Introductory Topics. SAE Paper No. 961792. [Pg.384]

Ellingwood, B. and Galambos, T. V. 1984. General Specifications for Structural Design Loads. In Shinozuka, M. and Yao, J. (eds), Probabilistic Methods in Structural Engineering. NY: ASCE. [Pg.385]

Probable reserves are those unproved reserves which analysis of geological and engineering data suggests are more likely than not to be recoverable. In this context, when probabilistic methods are used, there should be at least a 50 percent probability that the quantities actually recovered will equal or exceed the sum of estimated proved plus probable reserves. [Pg.1010]
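
A hedged sketch of how this 50 percent criterion can be checked against a probabilistic (Monte Carlo) estimate is shown below; the distribution and volumes are invented for illustration. Taking the proved-plus-probable ("2P") estimate at the median of the simulated recoverable volume satisfies the criterion by construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical simulated distribution of ultimately recoverable volume (million barrels).
recoverable = rng.lognormal(mean=np.log(120.0), sigma=0.3, size=100_000)

proved_plus_probable = np.median(recoverable)  # 2P estimate taken at the P50 value
prob_exceed = (recoverable >= proved_plus_probable).mean()
print(f"2P estimate: {proved_plus_probable:.1f} MMbbl, "
      f"P(recovered >= 2P) = {prob_exceed:.2f}")  # ~0.50 by construction
```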

Billinton, R., and Chen, H. (1997). Determination of Load Carrying Capacity Benefits of Wind Energy Conversion Systems. Proceedings of Probabilistic Methods Applied to Power Systems, 5th International Conference, September 21-25, 1997, Vancouver, BC, Canada. [Pg.1195]

Data interpretation methods can be categorized in terms of whether the input space is separated into different classes by local or nonlocal boundaries. Nonlocal methods include those based on linear and nonlinear projection, such as PLS and BPN. The class boundary determined by these methods is unbounded in at least one direction. Local methods include probabilistic methods based on the probability distribution of the data and various clustering methods when the distribution is not known a priori. [Pg.45]

Table 3 describes the main parts of an environmental risk assessment (ERA) that are based on the two major elements: characterisation of exposure and characterisation of effects [27, 51]. ERA uses a combination of exposure and effects data as a basis for assessing the likelihood and severity of adverse effects (risks) and feeds this into the decision-making process for managing risks. The process of assessing risk ranges from the simple calculation of hazard ratios to complex utilisation of probabilistic methods based on models and/or measured data sets. Setting of thresholds such as EQS and quality norms (QN) [27] relies primarily on... [Pg.406]
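
The sketch below contrasts the two ends of that range using invented numbers: a simple deterministic hazard quotient (predicted exposure divided by a no-effect threshold), and a probabilistic estimate of the chance that a randomly drawn exposure concentration exceeds a randomly drawn effect threshold. The distributions and values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Deterministic tier: hazard quotient from point estimates (assumed values).
pec = 2.0    # predicted environmental concentration, ug/L
pnec = 5.0   # predicted no-effect concentration, ug/L
print("hazard quotient PEC/PNEC =", pec / pnec)  # < 1 suggests lower concern

# Probabilistic tier: chance that exposure exceeds the effect threshold.
exposure = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=100_000)          # ug/L
effect_threshold = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=100_000)  # ug/L
prob_adverse = (exposure > effect_threshold).mean()
print(f"probability that exposure exceeds the effect threshold: {prob_adverse:.3f}")
```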

Only within about the last ten years have probabilistic methods been accepted in evaluation of potential explosion accidents by the Department of Defense in the United States. Such methods have a much longer history of development in Europe, particularly in Switzerland, and consequently are in much wider use there. [Pg.46]

The determination of the estimated levels of exposure is obviously a critical component of the risk assessment process. Both pesticide residue levels and food consumption estimates must be considered. Methods for determining exposure are frequently classified as deterministic and probabilistic methods (Winter, 2003). [Pg.266]

While deterministic methods are still quite useful in determining long-term, chronic exposures to pesticides, they are being replaced with probabilistic methods for the analysis of acute (short-term) exposures. These probabilistic methods take advantage of improvements in computational capabilities. [Pg.268]

Finally, the concepts of hormesis (Section 4.12), the threshold of toxicological concern (Section 4.13), and probabilistic methods for effect assessment (Section 4.14) will be briefly addressed. [Pg.80]

Probabilistic methods can be applied in dose-response assessment when there is an understanding of the important parameters and their relationships, such as identification of the key determinants of human variation (e.g., metabolic polymorphisms, hormone levels, and cell replication rates), observation of the distributions of these variables, and valid models for combining these variables. With appropriate data and expert judgment, formal approaches to probabilistic risk assessment can be applied to provide insight into the overall extent and dominant sources of human variation and uncertainty. [Pg.203]

Probabilistic exposure models attempt to represent variability or uncertainty in exposure model inputs via frequency or probability distributions. Probabilistic methods can be used in the exposure assessment because the pertinent variables (e.g. concentration, intake rate, exposure duration, and body weight) have been identified, their distributions can be observed, and the formula for combining the variables to estimate the exposure is well defined. [Pg.341]
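
A minimal Monte Carlo sketch of that idea follows, combining assumed distributions for concentration, intake rate, exposure duration, and body weight in one common average-daily-dose form; all distributions, parameters, and the averaging time are illustrative assumptions rather than recommended defaults.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed input distributions (illustrative only).
conc = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)        # mg/kg food
intake = rng.normal(loc=0.3, scale=0.05, size=n).clip(min=0)     # kg food/day
duration = rng.uniform(low=1, high=30, size=n)                   # years exposed
body_weight = rng.normal(loc=70, scale=12, size=n).clip(min=30)  # kg
averaging_time_days = 70 * 365                                   # lifetime, in days

# One common average-daily-dose form: ADD = C * IR * ED / (BW * AT).
add = conc * intake * (duration * 365) / (body_weight * averaging_time_days)
print("median and 95th percentile dose (mg/kg-day):",
      np.percentile(add, [50, 95]))
```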

For further information on probabilistic methods in exposure assessment, readers are referred to the textbook by Cullen and Frey (1999). [Pg.342]

The phase problem of X-ray crystallography may be defined as the problem of determining the phases φ of the normalized structure factors E when only the magnitudes |E| are given. Since there are many more reflections in a diffraction pattern than there are independent atoms in the corresponding crystal, the phase problem is overdetermined, and the existence of relationships among the measured magnitudes is implied. Direct methods (Hauptman and Karle, 1953) are ab initio probabilistic methods that seek to exploit these relationships, and the techniques of probability theory have identified the linear combinations of three phases whose Miller indices sum to... [Pg.132]
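
As a brief reminder of the kind of relationship meant here (standard direct-methods background, not a quotation from the source), the triplet relation states that for three strong reflections whose indices sum to zero the phases approximately cancel:

```latex
% Triplet (sigma-2) phase relationship used by direct methods:
% for reflections h, k and -(h+k), so that h + k + (-h-k) = 0,
\varphi_{\mathbf{h}} + \varphi_{\mathbf{k}} + \varphi_{-\mathbf{h}-\mathbf{k}} \approx 0
\quad \text{(most reliable when } |E_{\mathbf{h}} E_{\mathbf{k}} E_{\mathbf{h}+\mathbf{k}}| \text{ is large).}
```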

The Pellston workshop in February 2002, which produced this book, aimed to develop guidance and increased consensus on the use of uncertainty analysis methods in ecological risk assessment. The workshop focused on pesticides, and used case studies on pesticides, because of the urgent need created by the rapid move to using probabilistic methods in pesticide risk assessment. However, it was anticipated that the conclusions would also be highly relevant to other stressors, especially other contaminants. [Pg.8]

What are the implications of probabilistic methods for problem formulation? [Pg.8]

Some important criticisms encountered by pioneering efforts to apply probabilistic methods to pesticides could have been avoided by appropriate structuring of the conceptual model. These criticisms have included... [Pg.16]

Optimizing the use of probabilistic methods within the regulatory assessment process, and especially within tiered assessments, was recognized as one of the key issues that were given special consideration at the Pellston workshop that developed this book. [Pg.28]

Some analysts suggest the use of 2nd-order probabilistic methods to overcome the limitations outlined above. The idea is to strictly separate variability from incertitude. Second-order Monte Carlo simulation is often offered as a way to effect this separation. Unfortunately, this approach is not without its own problems. Second-order Monte Carlo simulation... [Pg.92]
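
A hedged sketch of the nested (second-order) Monte Carlo idea is shown below: an outer loop samples uncertain distribution parameters (incertitude), and an inner loop samples inter-individual variability given those parameters. The distributions and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_outer = 200    # samples of uncertain parameters (incertitude)
n_inner = 5000   # samples of variability given those parameters

p95_estimates = []
for _ in range(n_outer):
    # Outer loop: draw uncertain parameters of the variability distribution.
    mu = rng.normal(loc=np.log(1.0), scale=0.2)   # uncertain median (log scale)
    sigma = rng.uniform(low=0.3, high=0.7)        # uncertain spread
    # Inner loop: variability across individuals given those parameters.
    exposures = rng.lognormal(mean=mu, sigma=sigma, size=n_inner)
    p95_estimates.append(np.percentile(exposures, 95))

p95_estimates = np.array(p95_estimates)
print("95th-percentile exposure, with an uncertainty band around it:",
      np.percentile(p95_estimates, [5, 50, 95]))
```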

WHAT ARE THE IMPLICATIONS OF PROBABILISTIC METHODS FOR PROBLEM FORMULATION? [Pg.166]

The use of uncertainty analysis and probabilistic methods requires systematic and detailed formulation of the assessment problem. To facilitate this, a) risk assessors and risk managers should be given training in problem formulation, b) tools to assist appropriate problem formulation should be developed, and c) efforts should be made to develop generic problem formulations (including assessment scenarios, conceptual models, and standard datasets), which can be used as a starting point for assessments of particular pesticides. [Pg.173]

Limitations on data availability are a recurrent concern in discussions about uncertainty analysis and probabilistic methods, but arguably these methods are most needed when data are limited. More work is required to develop tools, methods, and guidance for dealing with limited datasets. Specific aspects that require attention are the treatment of sampling error in probability bounds analysis, and the use of qualitative information and expert judgment. [Pg.174]

