Big Chemical Encyclopedia



Underlying distributions, extreme-value

Many distribution functions can be applied to strength data of ceramics, but the function that has been most widely applied is the Weibull function, which is based on the concept of failure at the weakest link in a body under simple tension. A normal distribution is inappropriate for ceramic strengths because extreme values of the flaw distribution, not its central tendency, determine the strength. One implication of Weibull statistics is that large bodies are weaker than small bodies, because the number of flaws a body contains is proportional to its volume. [Pg.319]
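The size effect described above can be sketched numerically. In the weakest-link model, scaling a body's volume by a factor V rescales the Weibull characteristic strength by V^(-1/m). The modulus and scale values below are hypothetical illustration parameters, not data from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma0 = 10.0, 400.0   # hypothetical Weibull modulus and scale (MPa)

def sample_strengths(volume_ratio, n=100_000):
    # Weakest-link scaling: the effective scale falls as sigma0 * V^(-1/m),
    # so a larger body (more flaws) is weaker on average.
    scale = sigma0 * volume_ratio ** (-1.0 / m)
    return scale * rng.weibull(m, size=n)

small = sample_strengths(1.0)
large = sample_strengths(10.0)   # body with 10x the volume
print(np.median(small), np.median(large))  # larger body has the lower median strength
```

With m = 10, a tenfold volume increase lowers the characteristic strength by about 21% (10^(-1/10) ≈ 0.79), consistent with the statement that large bodies are weaker than small ones.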

From a purely practical point of view, the range or a quantile can serve as an indicator. Quantiles are usually selected to encompass the central 60-90% of an ordered set; the influence of extreme values diminishes the smaller this percentage is. No assumptions about the underlying distribution are made. [Pg.69]
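A minimal sketch of this point, using invented measurement values with one extreme observation: the full range is dominated by the outlier, while a quantile interval covering the central 60% of the ordered set barely feels it.

```python
import numpy as np

# Hypothetical data set with one extreme value (35.0)
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 35.0])

full_range = data.max() - data.min()            # pulled out by the extreme value
q20, q80 = np.quantile(data, [0.20, 0.80])      # central 60% of the ordered set
print(full_range, q80 - q20)                    # trimmed interval is far narrower
```

No distributional assumption enters anywhere: `np.quantile` operates purely on the ordered sample.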

Section 1.6.2 discussed some theoretical distributions which are defined by more or less complicated mathematical formulae; they aim at modeling real empirical data distributions or are used in statistical tests. There are some reasons to believe that phenomena observed in nature indeed follow such distributions. The normal distribution is the most widely used distribution in statistics, and it is fully determined by the mean value μ and the standard deviation σ. For practical data these two parameters have to be estimated from the data at hand. This section discusses some possibilities to estimate the mean or central value, and the next section mentions different estimators for the standard deviation or spread; the described criteria are listed in Table 1.2. The choice of the estimator depends mainly on the data quality. Do the data really follow the underlying hypothetical distribution? Or are there outliers or extreme values that could influence classical estimators and call for robust counterparts? [Pg.33]
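The contrast between a classical estimator and its robust counterpart can be shown in a few lines. The numbers here are made up for illustration: a single outlier drags the mean far from the bulk of the data, while the median hardly moves.

```python
import numpy as np

clean = np.array([10.2, 9.9, 10.0, 10.1, 9.8])
with_outlier = np.append(clean, 50.0)   # one extreme value added

# Classical estimator (mean) is pulled strongly toward the outlier;
# the robust counterpart (median) barely changes.
print(np.mean(clean), np.mean(with_outlier))      # 10.0 vs ~16.7
print(np.median(clean), np.median(with_outlier))  # 10.0 vs 10.05
```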

An extreme, large positive value may sometimes be a manifestation of an underlying distribution that is heavily skewed. Transforming the data to be more symmetric may then be worth considering. [Pg.171]
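A quick sketch of such a transformation, using simulated lognormal data (an assumption chosen here because its logarithm is exactly normal): the log-transform removes almost all of the skewness.

```python
import numpy as np

def skewness(a):
    # Third standardized moment: near 0 for a symmetric distribution
    z = (a - a.mean()) / a.std()
    return (z ** 3).mean()

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # heavily right-skewed

print(skewness(x), skewness(np.log(x)))  # large positive vs near zero
```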

Extreme-Value Parameters and Corresponding Effective Superiorities for Various Underlying Distributions [Pg.185]

Readers will encounter the term Gaussian in reference to certain interest-rate models. Put simply, a Gaussian process is one that follows a normal distribution under a probability density function. Because rates are distributed in this way, Gaussian models imply that interest rates can attain negative values with positive probability, which makes them undesirable for some market practitioners. Nevertheless, such models are popular because they are relatively straightforward to implement and because the probability of the model generating negative rates is low, occurring only under certain extreme circumstances. [Pg.252]
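The negative-rate property can be illustrated with the Vasicek short-rate model, a standard example of a Gaussian interest-rate model (the source does not name a specific model; the parameter values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Vasicek short-rate model: dr = kappa*(theta - r)*dt + sigma*dW.
# Its increments are normally distributed (Gaussian), so simulated
# rates can and occasionally do go below zero.
kappa, theta, sigma = 0.5, 0.01, 0.02   # hypothetical mean reversion, level, volatility
dt, n_steps, n_paths = 1 / 252, 252 * 5, 1_000
r = np.full(n_paths, 0.01)
min_rate = r.min()
for _ in range(n_steps):
    r = r + kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    min_rate = min(min_rate, r.min())

print(min_rate)   # the minimum simulated rate is negative
```

Across many paths the minimum rate dips below zero, confirming that negative rates occur with positive (if small) probability under a Gaussian model.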

Assuming that the underlying distributions of ε are type I extreme-value-distributed, the probability of observing the ith worker in claimant status j is... [Pg.72]
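The formula itself is cut off in the excerpt, but the standard result for i.i.d. type I extreme-value (Gumbel) errors is the multinomial-logit (softmax) choice probability; the utility values below are hypothetical:

```python
import numpy as np

def choice_probabilities(utilities):
    # With i.i.d. type I extreme-value errors added to each alternative's
    # systematic utility V_j, the choice probability takes the logit form:
    #   P_j = exp(V_j) / sum_k exp(V_k)
    v = np.asarray(utilities, dtype=float)
    v = v - v.max()          # subtract max for numerical stability
    e = np.exp(v)
    return e / e.sum()

p = choice_probabilities([1.2, 0.5, -0.3])  # hypothetical systematic utilities
print(p, p.sum())                           # probabilities sum to 1
```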

In most instances, a group of ceramic or glass samples produced under nominally identical conditions will have worst flaws that vary in severity and location. As a consequence, strength values for those samples will vary, often over a rather wide range. The distribution of failure stresses is usually analyzed in terms of the extreme value statistics developed by Weibull. The most common functional form used in these statistical treatments is... [Pg.172]
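The functional form is truncated in the excerpt, but the usual two-parameter Weibull analysis linearizes P_f = 1 − exp[−(σ/σ₀)^m] so that modulus m and characteristic strength σ₀ fall out of a straight-line fit. The failure stresses below are invented for illustration:

```python
import numpy as np

# Hypothetical failure stresses (MPa) for nominally identical specimens
stress = np.sort(np.array([312., 335., 348., 361., 370., 382., 395., 407., 422., 451.]))

# Linearization: P_f = 1 - exp(-(sigma/sigma0)^m) implies
#   ln(-ln(1 - P_f)) = m*ln(sigma) - m*ln(sigma0), a straight line.
pf = (np.arange(1, len(stress) + 1) - 0.5) / len(stress)  # rank-based P_f estimator
y = np.log(-np.log(1.0 - pf))
x = np.log(stress)
m, c = np.polyfit(x, y, 1)      # slope = Weibull modulus m
sigma0 = np.exp(-c / m)         # characteristic strength (63.2% failure probability)
print(m, sigma0)
```

A high m means a narrow strength distribution (consistent flaw population); a low m reflects the wide scatter the paragraph describes.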

When block maxima are utilized, the generalized extreme value (GEV) distribution is fitted to a sample of annual maximum values of the variable under consideration over a period of time. Because of limited annual-maxima data, quantile estimates corresponding to large return periods tend to be highly uncertain. An extension of the classical extreme value analysis leads to the incorporation of a... [Pg.1045]
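The block-maxima procedure can be sketched with `scipy.stats.genextreme`. The annual maxima here are simulated rather than real, and the 100-year return period is an assumed target; the wide uncertainty mentioned above comes from fitting three parameters to only a few decades of maxima.

```python
import numpy as np
from scipy.stats import genextreme

# Simulated sample of 50 annual maxima (hypothetical variable and units)
annual_maxima = genextreme.rvs(c=-0.1, loc=30.0, scale=5.0, size=50, random_state=3)

# Fit the GEV distribution to the block maxima, then estimate the
# 100-year return level as the quantile with exceedance probability 1/100.
c, loc, scale = genextreme.fit(annual_maxima)
return_level_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(return_level_100)
```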

Risk assessment pertains to characterization of the probability of adverse health effects occurring as a result of human exposure. Recent trends in risk assessment have encouraged the use of realistic exposure scenarios, the totality of available data, and the uncertainty in the data, as well as their quality, in arriving at a best estimate of the risk to exposed populations. The use of "worst case" and even other single point values is an extremely conservative approach and does not offer realistic characterization of risk. Even the use of arithmetic mean values obtained under maximum use conditions may be considered to be conservative and not descriptive of the range of exposures experienced by workers. Use of the entirety of data is more scientific and statistically defensible and would provide a distribution of plausible values. [Pg.36]

Precision is the closeness of agreement between independent test results obtained under stipulated conditions. Precision depends only on the distribution of random errors and does not relate to the true value. It is calculated by determining the standard deviation of the test results from repeat measurements. In numerical terms, a large number for the precision indicates that the results are scattered, i.e. the precision is poor. Quantitative measures of precision depend critically on the stipulated conditions. Repeatability and reproducibility are the two extreme conditions. [Pg.57]
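The calculation described above is just the sample standard deviation of repeat measurements; the replicate values below are invented for illustration, and the relative standard deviation is added as a common companion measure:

```python
import numpy as np

# Hypothetical repeat measurements of one sample under repeatability conditions
replicates = np.array([5.02, 4.98, 5.01, 5.03, 4.97, 5.00, 4.99, 5.02])

# Precision quantified as the standard deviation of the repeats
# (ddof=1 gives the sample standard deviation).
s = np.std(replicates, ddof=1)
rsd_percent = 100 * s / np.mean(replicates)   # relative standard deviation
print(s, rsd_percent)    # small values indicate good precision
```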

This distribution, together with the numerical value of the statistic, allows an assessment of how unusual the data are, assuming that the hypothesis is valid. The p value is the probability that the observed value of the statistic (or values even more extreme) occurs under the hypothesis. The data are declared significant at a particular level α if p < α; the data are then considered sufficiently unusual relative to the hypothesis, and the hypothesis is rejected. Standard, albeit arbitrary, values of α are taken as 0.05 and 0.01. Let us suppose that a particular data set gives p = 0.02. From the frequentist vantage, this means that, if the hypothesis were true and the whole experiment were to be repeated many times under identical conditions, in only 2% of such trials would the value of the statistic be more unusual or extreme than the value actually observed. One then prefers to believe that the data are not, in fact, unusual and concludes that the assumed hypothesis is untenable. [Pg.72]
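The frequentist reading of p = 0.02 can be checked by simulation. Assuming (as an illustration, not from the source) a standard-normal test statistic under the null, an observed value of about 2.33 gives a two-sided p near 0.02, and the fraction of simulated "repeat experiments" at least that extreme recovers it:

```python
import numpy as np

rng = np.random.default_rng(4)
z_obs = 2.33          # hypothetical observed test statistic

# Under the null hypothesis the statistic is standard normal. Repeating
# the "experiment" many times, the fraction of trials at least as extreme
# as z_obs approximates the two-sided p value.
trials = rng.standard_normal(1_000_000)
p_sim = np.mean(np.abs(trials) >= z_obs)
print(p_sim)          # close to the analytical two-sided p of about 0.02
```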





© 2024 chempedia.info