Big Chemical Encyclopedia


Normal distribution application

The population of differences is normally distributed with a mean [...] sample size is 10 or greater in most situations. [Pg.497]

Another consideration when using the approach is the assumption that stress and strength are statistically independent; in practical applications this is usually expected to be the case (Disney et al., 1968). The random variables in the design are assumed to be independent, linear and near-Normal if they are to be used effectively in the variance equation. A high correlation between the random variables, or the use of non-Normal distributions in the stress governing function, is often a source of non-linearity, and transformation methods should then be considered. [Pg.191]

Lognormal distribution Similar to a normal distribution. However, the logarithms of the values of the random variables are normally distributed. Typical applications are metal fatigue, electrical insulation life, time-to-repair data, continuous process (i.e., chemical processes) failure and repair data. [Pg.230]
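As an illustrative sketch (not from the cited source), the defining property above can be checked numerically: the logarithms of log-normally distributed values are themselves normally distributed and recover the parameters of the underlying normal. The parameter values below are arbitrary.

```python
import numpy as np

# Hypothetical illustration: draws from a log-normal distribution become
# normally distributed after taking logarithms.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5          # mean and std dev of the underlying normal
samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

log_samples = np.log(samples)
# The log-transformed data recover the parameters of the underlying normal.
est_mu = log_samples.mean()
est_sigma = log_samples.std(ddof=1)
```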

One of the principal applications of the normal distribution in reliability calculations and hazard and risk analysis is the distribution of time to failure due to wearout. Suppose, for example, that a production lot of a certain electronic device especially designed to withstand high temperatures and intense vibrations has just come off the assembly line. A sample of 25 devices from the lot is tested under the specified heat and vibration conditions. Time to failure, in hours, is recorded for each of the 25 devices. Application of Eqs. (19.10.1) and... [Pg.588]

When n < 0.7, plots of ln[-ln(1 - a)] against ln t show curvature, and linearity is improved if the latter parameter is replaced by t. This reduces the Weibull distribution to a log-normal distribution. Since both exponential and normal distributions are special cases of the more general gamma distribution, Kolar-Anic and Veljkovic [441] compared the applicability of the Weibull and the gamma distributions. The shape parameter of the latter (e) was shown to depend exclusively on the shape parameter of the former (n). [Pg.56]
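The linearization mentioned above can be sketched numerically: for a Weibull CDF a(t) = 1 - exp(-(t/eta)**n), the quantity ln[-ln(1 - a)] is exactly linear in ln t with slope n. The shape and scale values below are illustrative, not taken from the cited study.

```python
import numpy as np

# Sketch: evaluate a Weibull CDF on a grid, apply the double-log
# transform, and verify that a straight-line fit recovers the shape
# parameter n as the slope (eta and the grid are illustrative).
n_shape, eta = 1.5, 10.0
t = np.linspace(1.0, 50.0, 200)
alpha = 1.0 - np.exp(-(t / eta) ** n_shape)

y = np.log(-np.log(1.0 - alpha))   # ln[-ln(1 - a)]
x = np.log(t)
slope, intercept = np.polyfit(x, y, 1)   # slope = n, intercept = -n*ln(eta)
```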

The optimization of empirical correlations developed from the ASPEN-PLUS model yielded operating conditions which reduced the steam-to-slurry ratio by 33%, increased throughput by 20% while maintaining the solvent residual at the desired level. While very successful in this industrial application the approach is not without shortcomings. The main disadvantage is the inherent assumption that the data are normally distributed, which may or may not be valid. However, previous experience had shown the efficacy of the assumption in other similar situations. [Pg.106]

There are two statistical assumptions made regarding the valid application of mathematical models used to describe data. The first assumption is that row and column effects are additive. The first assumption is met by the nature of the study design, since the regression is a series of X, Y pairs distributed through time. The second assumption is that residuals are independent, random variables, and that they are normally distributed about the mean. Based on the literature, the second assumption is typically ignored when researchers apply equations to describe data. Rather, the correlation coefficient (r) is typically used to determine goodness of fit. However, this approach is not valid for determining whether the function or model properly describes the data. [Pg.880]
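The pitfall described above can be illustrated with a hypothetical sketch: a straight line fitted to truly quadratic data still yields a high correlation coefficient, while the residuals, which average exactly zero for any least-squares fit, reveal the lack of fit through their systematic pattern.

```python
import numpy as np

# Illustrative sketch (not from the cited study): a high r alone does not
# validate a model -- the residuals must also be examined.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 0.3 * x**2 + rng.normal(0.0, 0.5, size=x.size)   # truly quadratic data

coef = np.polyfit(x, y, 1)            # force a straight-line model
resid = y - np.polyval(coef, x)

r = np.corrcoef(x, y)[0, 1]           # r is high despite the wrong model
# OLS residuals always average to zero; it is their systematic pattern
# (negative in the middle, positive at the ends) that exposes the misfit.
```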

The values of the elements of the weighting matrices R depend on the type of estimation method being used. When the residuals in the above equations can be assumed to be independent and normally distributed with zero mean and the same constant variance, Least Squares (LS) estimation should be performed. In this case, the weighting matrices in Equation 14.35 are replaced by the identity matrix I. Maximum likelihood (ML) estimation should be applied when the EoS is capable of calculating the correct phase behavior of the system within the experimental error. Its application requires the knowledge of the measurement... [Pg.256]

Characterization of Chance Occurrences To deal with a broad area of statistical applications, it is necessary to characterize the way in which random variables will vary by chance alone. The basic foundation for this characterization is laid through a density called the Gaussian, or normal, distribution. [Pg.72]

Also, in many applications involving count data, the normal distribution can be used as a close approximation. In particular, the approximation is quite close for the binomial distribution within certain guidelines. [Pg.72]
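A minimal numerical check of the approximation described above (parameter values are illustrative): the exact binomial CDF near its mean is compared with a normal CDF of mean n*p and variance n*p*(1-p), using the usual continuity correction of 0.5.

```python
from math import erf, sqrt

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def binom_cdf(k, n, p):
    # exact cumulative binomial probability by direct summation,
    # using the recurrence P(X=i+1)/P(X=i) = (n-i)/(i+1) * p/(1-p)
    total, term = 0.0, (1.0 - p) ** n          # term starts at P(X = 0)
    for i in range(k + 1):
        total += term
        term *= (n - i) / (i + 1) * p / (1.0 - p)
    return total

n, p, k = 100, 0.5, 55
exact = binom_cdf(k, n, p)
approx = norm_cdf((k + 0.5 - n * p) / sqrt(n * p * (1 - p)))
```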

One must note that probability alone can only detect alikeness in special cases, thus cause-effect cannot be directly determined - only estimated. If linear regression is to be used for comparison of X and Y, one must assess whether the five assumptions for use of regression apply. As a refresher, recall that the assumptions required for the application of linear regression for comparisons of X and Y include the following (1) the errors (variations) are independent of the magnitudes of X or Y, (2) the error distributions for both X and Y are known to be normally distributed (Gaussian), (3) the mean and variance of Y depend solely upon the absolute value of X, (4) the mean of each Y distribution is a straight-line function of X, and (5) the variance of X is zero, while the variance of Y is exactly the same for all values of X. [Pg.380]

The stochastic tools used here differ considerably from those used in other fields of application, e.g., the investigation of measurements of physical data. For example, in this article normal distributions do not appear. On the other hand, random sums, invented in actuarial theory, are important. In the first theoretical part we start with random demand and end with conditional random service, which is the basic quantity that should be used to decide how much of a product one should produce in a given period of time. [Pg.111]

The hypothesis of a normal distribution is a strong limitation that should always be kept in mind when PCA is used. In electronic nose experiments, samples are usually extracted from more than one class, and the totality of measurements does not always result in a normally distributed data set. Nonetheless, PCA is frequently used to analyze electronic nose data. Due to the high correlation normally shown by electronic nose sensors, PCA allows a visual display of electronic nose data in either 2D or 3D plots. Higher-order methods have been proposed and studied to solve pattern recognition problems in other application fields. It is worth mentioning here Independent Component Analysis (ICA), which has been applied successfully in image and sound analysis problems [18]. Recently ICA was also applied to process electronic nose data as a powerful pre-processor of data [19]. [Pg.156]
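A minimal PCA projection of the kind described (a 2D score plot) can be sketched in plain NumPy via the singular value decomposition. The synthetic "sensor array" data below are hypothetical, constructed only to mimic the strong inter-sensor correlation mentioned above.

```python
import numpy as np

# Sketch: project synthetic multi-sensor data onto its first two
# principal components, as is common for electronic nose displays.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))            # 60 samples, 8 sensors
X[:, 1:] += 0.9 * X[:, [0]]             # induce strong inter-sensor correlation

Xc = X - X.mean(axis=0)                 # mean-center each sensor column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                  # 2D coordinates for plotting
explained = s**2 / np.sum(s**2)         # variance fraction per component
```

Because the sensors are highly correlated, the first component captures most of the variance, which is why a 2D plot is often informative.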

An application of the confidence interval concept central to most statistical assessment is the t-test for small normal samples. Let us consider a normally distributed variable X with mean μ and variance σ². It will be demonstrated below that for m observations with sample mean x̄ and variance s², the variable U defined as... [Pg.196]
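A hedged sketch of the statistic described above (the data and hypothesized mean are illustrative): for m observations with sample mean x̄ and sample variance s², the quantity U = (x̄ - μ)/√(s²/m) follows a Student's t distribution with m - 1 degrees of freedom, so |U| is compared with a t critical value.

```python
import math

# Illustrative small-sample t statistic.
data = [9.8, 10.2, 10.1, 9.9, 10.0]     # hypothetical measurements
mu0 = 10.0                               # hypothesized population mean

m = len(data)
xbar = sum(data) / m
s2 = sum((x - xbar) ** 2 for x in data) / (m - 1)   # sample variance
U = (xbar - mu0) / math.sqrt(s2 / m)

# Two-sided 95% critical value for df = 4 is t(0.975, 4) = 2.776
critical = 2.776
consistent_with_mu0 = abs(U) <= critical
```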

This criterion, and others, can be derived using maximum likelihood arguments (H8). It has been shown that Eq. (64) is applicable (a) when each of the responses has normally distributed error (b) when the data on each response are equally precise, and (c) when there is no correlation between the measurements of the three responses. These assumptions are rather restrictive. [Pg.130]

Note that z can be larger than the number of objects, n, if, for instance, repeated CV or bootstrap has been applied. The bias is the arithmetic mean of the prediction errors and should be near zero; however, a systematic error (a nonzero bias) may appear if, for instance, a calibration model is applied to data that have been produced by another instrument. In the case of a normal distribution, about 95% of the prediction errors are within the tolerance interval ±2 SEP. The measure SEP and the tolerance interval are given in the units of v, and are therefore most useful for model applications. [Pg.127]
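The quantities above can be sketched with a handful of hypothetical prediction errors: the bias is their arithmetic mean, SEP is their standard deviation about the bias, and under normality about 95% of errors should fall within bias ± 2·SEP.

```python
import math

# Illustrative prediction errors e_i = predicted - observed.
errors = [1.0, -1.0, 2.0, -2.0, 0.5, -0.5]

z = len(errors)
bias = sum(errors) / z                                   # mean error
sep = math.sqrt(sum((e - bias) ** 2 for e in errors) / (z - 1))

lower, upper = bias - 2 * sep, bias + 2 * sep            # tolerance interval
within = sum(lower <= e <= upper for e in errors)        # errors inside it
```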

Transformations of the data may be used to extend the applicability of a particular standard distribution, in practice usually the normal distribution. For example, a log-normal random variable is a random variable that is normal after logarithmic transformation. Power transformations are also widely used, e.g., with Box-Cox transformations. [Pg.34]
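A small sketch of the transformation family mentioned above (pure NumPy; the data are synthetic): the Box-Cox transform is y = (x**lam - 1)/lam for lam ≠ 0 and y = log(x) for lam = 0, so for log-normal data the lam = 0 member restores normality, here judged by the sample skewness.

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox power transformation; lam = 0 is the log transform
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def skewness(y):
    # third standardized sample moment
    d = y - y.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

rng = np.random.default_rng(3)
x = rng.lognormal(mean=1.0, sigma=0.4, size=50_000)   # right-skewed data

skew_raw = skewness(x)             # clearly positive for log-normal data
skew_log = skewness(boxcox(x, 0))  # near zero after the log transform
```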

An approach that is sometimes helpful, particularly for recent pesticide risk assessments, is to use the parameter values that result in best fit (in the sense of LS), comparing the fitted cdf to the cdf of the empirical distribution. In some cases, such as when fitting a log-normal distribution, formulae from linear regression can be used after transformations are applied to linearize the cdf. In other cases, the residual SS is minimized using numerical optimization, i.e., one uses nonlinear regression. This approach seems reasonable for point estimation. However, the statistical assumptions that would often be invoked to justify LS regression will not be met in this application. Therefore the use of any additional regression results (beyond the point estimates) is questionable. If there is a need to provide standard errors or confidence intervals for the estimates, bootstrap procedures are recommended. [Pg.43]
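The log-normal case described above can be sketched as follows (assuming SciPy is available; the data are simulated, not from any cited assessment). Since norm.ppf(F(x)) = (ln x - μ)/σ for a log-normal distribution, regressing the probit of the empirical cdf on ln x linearizes the fit, and a bootstrap then supplies standard errors for the point estimates.

```python
import numpy as np
from scipy.stats import norm

def fit_lognormal(sample):
    # LS fit of a log-normal cdf via probit linearization:
    # slope = 1/sigma, intercept = -mu/sigma
    x = np.sort(sample)
    pp = (np.arange(1, x.size + 1) - 0.5) / x.size   # plotting positions
    slope, intercept = np.polyfit(np.log(x), norm.ppf(pp), 1)
    sigma = 1.0 / slope
    mu = -intercept * sigma
    return mu, sigma

rng = np.random.default_rng(4)
data = rng.lognormal(mean=1.0, sigma=0.5, size=2000)
mu_hat, sigma_hat = fit_lognormal(data)

# Bootstrap standard errors: resample, refit, take the std of the estimates.
boot = np.array([fit_lognormal(rng.choice(data, data.size, replace=True))
                 for _ in range(200)])
se_mu, se_sigma = boot.std(axis=0, ddof=1)
```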

Ideally, one would like to describe various size distributions by some relatively simple mathematical function. Because there is no single theoretical basis for a particular function to describe atmospheric aerosols, various empirical matches have been carried out to the experimentally observed size distributions; some of these are discussed in detail elsewhere (e.g., see Hinds, 1982). Out of the various mathematical distribution functions for fitting aerosol data, the log-normal distribution (Aitchison and Brown, 1957; Patel et al., 1976) has emerged as the mathematical function that most frequently provides a sufficiently good fit, and hence we briefly discuss its application to the size distribution of atmospheric aerosols. [Pg.358]

The GUM approach described here has the advantage that each uncertainty component is designed to have the properties of a standard deviation, and so the rules for combining standard deviations of the normal distribution can be followed. The complete equation will be given, but it may be simplified to usable equations for the majority of applications. [Pg.187]
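A minimal sketch of the combination rule this relies on (the component values are illustrative): for independent inputs, standard uncertainties combine in quadrature, exactly like standard deviations of independent normal variables.

```python
import math

# Hypothetical standard uncertainties u_i of independent inputs.
u_components = [0.30, 0.40, 0.12]
u_combined = math.sqrt(sum(u * u for u in u_components))   # root-sum-square

# An expanded uncertainty multiplies u_combined by a coverage factor,
# commonly k = 2 for roughly 95% coverage under normality.
U_expanded = 2.0 * u_combined
```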

As we see in Chapter 2, the normal distribution comes about when a large number of purely random factors are responsible for the distribution. It is mainly applicable to particles... [Pg.635]

It does, however, indicate the way in which the normal distribution is attained. The absolute skewness m3/m2^(3/2) and the kurtosis m4/m2^2 both tend to zero as x^-1. The difficulty in the practical application lies in the truncation error and the fact that the higher terms decrease in a somewhat irregular manner. It may often be of value, however, to estimate the skewness from the third moment, and even in such a complicated example as that of 4 this is not impracticable. [Pg.146]

In developing a procedure for bacteriological testing of milk, samples were tested in an apparatus that includes two components: bottles and cuvettes. All six combinations of two bottle types and three cuvette types were tested ten times for each sample. The table contains data on the number of positive tests in each of ten testings. If we recall Section 1.1.1, the obtained values of positive tests are a random variable with the binomial distribution. For a correct application of the analysis of variance procedure, the results should be normally distributed. It is therefore possible to transform the obtained results by means of an arcsine transformation; for the purposes of this example of three-way analysis of variance with no replications, however, no such transformation is necessary. The experimental results are given in the table ... [Pg.103]

A third and often neglected reason for the need for careful application of chemometric methods is the problem of the type of distribution of environmental data. Most basic and advanced statistical methods are based on the assumption of normally distributed data. But in the case of environmental data, this assumption is often not valid. Figs. 1-7 and 1-8 demonstrate two different types of experimentally found empirical data distribution. Particularly for trace amounts in the environment, a log-normal distribution, as demonstrated for the frequency distribution of NO2 in ambient air (Fig. 1-7), is typical. [Pg.13]

To demonstrate the accuracy, two dust and two soil reference materials were analyzed with the described method. The mean value of the correlation coefficients between the certified and the analyzed amounts of the 16 elements in the samples is r = 0.94. By application of factor analysis (see Section 5.4) the square root of the mean value of the communalities of these elements was computed to be approximately 0.84. As frequently happens in the analytical chemistry of dusts, several types of distribution occur [KOMMISSION FUR UMWELTSCHUTZ, 1985]; these can change considerably in proportion to the observed sample size. In the example described, the major components are distributed normally and most of the trace components are distributed log-normally. The relative ruggedness of multivariate statistical methods against deviations from the normal distribution is known [WEBER, 1986; AHRENS and LAUTER, 1981] and will be tested using this example by application of factor analysis. [Pg.253]

Quantitative studies by means of parametric statistical methods are, however, often very unreliable because of high environment-related variations, very often amounting to several orders of magnitude [FORSTNER and WITTMANN, 1983; EINAX, 1990]. In other words, environmental data sets often contain values which are extremely high or low, i.e. they are outliers in the statistical sense. Also, because environmental data are often not normally distributed, the application of parametric statistical methods results in distorted reflections of reality. [Pg.341]

The underlying assumptions of the Student's t-test include simple random and systematic sampling and a normal distribution of the sample mean. The upper limit of the confidence interval for the mean concentration is compared to the action level to determine whether solid waste contains a contaminant of concern at a hazardous level. (The calculation is conducted according to Equation 10, Appendix 1.) A contaminant of concern is not considered to be present at a hazardous level if the upper limit of the confidence interval is below the action level. Otherwise, the opposite conclusion is reached. Example 5.13 demonstrates the application of this test for deciding whether the waste is hazardous or not. [Pg.293]
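The comparison described above can be sketched as follows (the concentrations, action level, and critical value are illustrative, not taken from the cited example): the one-sided upper confidence limit x̄ + t·s/√n is computed and compared with the action level.

```python
import math

# Hypothetical measured concentrations and regulatory action level (mg/kg).
data = [3.0, 4.0, 5.0]
action_level = 10.0

n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
t_95 = 2.920                    # one-sided t(0.95) for df = n - 1 = 2

ucl = xbar + t_95 * s / math.sqrt(n)    # upper 95% confidence limit
hazardous = ucl >= action_level         # hazardous only if UCL >= action level
```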

In the applications of gas-solid flows, there are three typical distributions in particle size, namely, Gaussian distribution or normal distribution, log-normal distribution, and Rosin-Rammler distribution. These three size distribution functions are mostly used in the curve fitting of experimental data. [Pg.19]
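Of the three distributions listed, the Rosin-Rammler form can be sketched briefly (parameter values are hypothetical): the mass fraction of particles coarser than size d is R(d) = exp(-(d/d_m)**n), where d_m is the size at which R = 1/e and n controls the spread.

```python
import math

def rosin_rammler_oversize(d, d_m, n):
    # cumulative mass fraction of particles larger than d
    return math.exp(-((d / d_m) ** n))

d_m, n = 50.0, 1.8              # microns; illustrative fit parameters
r_at_dm = rosin_rammler_oversize(d_m, d_m, n)        # equals 1/e by definition
fraction_under_100 = 1.0 - rosin_rammler_oversize(100.0, d_m, n)
```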

Section 4.3 Applications of the Normal Distribution in Chemistry and Physics... [Pg.67]

