Common Statistical Distributions

A probability plot is a graph that compares the data set against some expected statistical distribution by comparing the actual quantiles against the theoretical quantiles. Such probability plots are also often called Q-Q or P-P plots. The most common statistical distribution for comparison is the normal distribution. The exact values plotted on each of the axes depend on the desired graph and the software used. In general, the theoretical values are plotted on the x-axis, while the actual values are plotted on the y-axis. Occasionally, the actual values are modified in... [Pg.14]
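
As a minimal sketch, a normal Q-Q plot of this kind can be generated with scipy.stats.probplot, which follows the convention described above (theoretical quantiles on the x-axis, ordered sample values on the y-axis); the data here are hypothetical:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical measurements

# probplot follows the convention above: theoretical (normal) quantiles on
# the x-axis, ordered sample values on the y-axis.
stats.probplot(data, dist="norm", plot=plt)
plt.show()
```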

Table 8.2 lists some common statistical functions in Excel. Most of these functions, as written, only work in newer versions of Excel (2010 or later). A detailed explanation of the functions and their differences can be found in Sect. 2.4, Common Statistical Distributions. [Pg.365]

Discrete distributions and continuous distributions are the two broad classes of statistical distributions. The binomial distribution is a typical example of a discrete distribution. The normal (also known as Gaussian) distribution and the Weibull distribution are two examples of continuous distributions. If a coin is tossed forty times and the number of heads-up occurrences is recorded, and this exercise is repeated seventy-five times, then the distribution of the observed heads-up frequencies is binomial. The total number of events in this exercise is seventy-five. The highest possible number of heads in a single trial is forty, and the lowest is zero. The frequencies of numbers... [Pg.211]
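
A short simulation of the coin-tossing exercise described above (a sketch only; the seed and the use of NumPy/SciPy are incidental choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_flips, n_trials = 40, 75

# Each trial: count heads in 40 fair coin tosses; repeat 75 times.
heads = rng.binomial(n=n_flips, p=0.5, size=n_trials)

values, counts = np.unique(heads, return_counts=True)
print("observed frequencies:", dict(zip(values.tolist(), counts.tolist())))

# Expected number of trials with exactly k heads under Binomial(40, 0.5):
k = 20
expected = n_trials * stats.binom.pmf(k, n_flips, 0.5)
print(f"expected trials with {k} heads: {expected:.1f}")
```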

The more common approach is the actual positioning of random lines on a surface to create a statistical distribution of fragment sizes. One example of this, suggested by Mott and Linfoot (1943), is a construction of randomly positioned and oriented infinite lines as illustrated in Fig. 8.23. If the random lines are restricted to horizontal or vertical orientation an analytic solution can be obtained for the cumulative fragment number (Mott and Linfoot,... [Pg.302]
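
A minimal Monte Carlo sketch of the horizontal/vertical special case mentioned above: random vertical and horizontal cuts fragment a unit square, and the fragment areas follow directly (the line counts are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(2)

def fragment_areas(n_h=30, n_v=30):
    """Fragment the unit square with randomly positioned horizontal
    and vertical lines; return the areas of the resulting fragments."""
    xs = np.sort(np.concatenate(([0.0], rng.random(n_v), [1.0])))
    ys = np.sort(np.concatenate(([0.0], rng.random(n_h), [1.0])))
    widths, heights = np.diff(xs), np.diff(ys)
    return np.outer(widths, heights).ravel()

areas = np.sort(fragment_areas())[::-1]
# Cumulative fragment number: how many fragments exceed a given area.
print("total fragments:", areas.size)
print("largest fragments:", areas[:5])
```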

A microscopic description characterizes the structure of the pores. The objective of a pore-structure analysis is to provide a description that relates to the macroscopic or bulk flow properties. The major bulk properties that need to be correlated with the pore description or characterization are the four basic parameters: porosity, permeability, tortuosity, and connectivity. In studying different samples of the same medium, it becomes apparent that the number of pore sizes, shapes, orientations, and interconnections is enormous. Due to this complexity, a pore-structure description is most often a statistical distribution of apparent pore sizes. The distribution is termed apparent because, to convert measurements to pore sizes, one must resort to models that provide average or model pore sizes. A common approach to defining a characteristic pore size distribution is to model the porous medium as a bundle of straight cylindrical or rectangular capillaries (refer to Figure 2). The diameters of the model capillaries are defined on the basis of a convenient distribution function. [Pg.65]
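
A minimal sketch of the bundle-of-capillaries idealization; the log-normal diameter distribution and its parameters are assumptions chosen purely for illustration, standing in for whatever "convenient distribution function" is adopted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Model the pore space as a bundle of straight cylindrical capillaries
# whose diameters follow an assumed log-normal distribution.
n_capillaries = 10_000
diameters = rng.lognormal(mean=np.log(10e-6), sigma=0.5, size=n_capillaries)  # metres

# Hagen-Poiseuille: volumetric flow through each capillary scales as d**4,
# so the largest pores dominate the flow behavior of the bundle.
flow_weight = diameters**4
print("mean diameter:          %.2e m" % diameters.mean())
print("flow-weighted diameter: %.2e m" %
      (np.sum(diameters * flow_weight) / np.sum(flow_weight)))
```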

In analytical chemistry one of the most common statistical terms employed is the standard deviation of a population of observations. This is also called the root mean square deviation, as it is the square root of the mean of the squared differences between the values and the mean of those values (this is expressed mathematically below), and is of particular value in connection with the normal distribution. [Pg.134]
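
For a population of N values x_i, the standard expression referred to above is:

```latex
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2},
\qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i .
```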

In addition to the statistical distributions inherent in an individual polymer, distributions are further broadened by the commercial practice of blending. We commonly blend two, three, four, or even more polymers of similar or dissimilar types in order to achieve the specific properties required. [Pg.31]

Modern guidelines demand that particular sub-groups (such as children) are considered, including intakes at the upper end of the intake range (commonly the 90th or 97.5th percentile). This means that the statistical distributions of food additive concentrations in food and of food consumption patterns must be taken into account. [Pg.65]
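
A minimal sketch of extracting such upper percentiles from consumption data; the intake values are hypothetical:

```python
import numpy as np

# Hypothetical daily intakes of an additive (mg/kg body weight) for a sub-group.
intakes = np.array([0.2, 0.5, 0.1, 0.8, 0.3, 1.1, 0.4, 0.9, 0.6, 2.0,
                    0.7, 0.3, 1.4, 0.2, 0.5])

# Upper tail of the intake distribution, as used in exposure assessment:
print("90th percentile:   %.2f mg/kg" % np.percentile(intakes, 90))
print("97.5th percentile: %.2f mg/kg" % np.percentile(intakes, 97.5))
```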

SuPAES). In common with the previously mentioned PEMs, initial SuPAES materials (see Section 3.3.2.1 for later work on block copolymer derivatives of SuPAES) had a statistical distribution of sulfonic acid groups along the polymer backbone. However, instead of using postsulfonation techniques, sulfonic acid groups were introduced via direct copolymerization; that is, suitable sulfonic acid precursor groups were introduced into one of the monomers (13). The advantages of this method are threefold ... [Pg.144]

The Fermi-Dirac and Maxwell-Boltzmann statistical distribution functions are widely used in semiconductor physics, with the latter commonly used as an approximation to the former. The point of this problem is to make you familiar with these distribution functions: their forms, their temperature dependencies, and the conditions under which they become interchangeable. Throughout this problem, use the energy of silicon's valence band (Evb) as the zero of your energy scale. [Pg.82]
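
A sketch of the two distribution functions side by side; the mid-gap Fermi level used here is a hypothetical illustration value, not part of the original problem:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def fermi_dirac(E, E_F, T):
    """Occupation probability from Fermi-Dirac statistics."""
    return 1.0 / (np.exp((E - E_F) / (k_B * T)) + 1.0)

def maxwell_boltzmann(E, E_F, T):
    """Classical approximation, valid when E - E_F >> k_B * T."""
    return np.exp(-(E - E_F) / (k_B * T))

# Energies measured from the valence band edge (Evb = 0), as in the problem.
E_F = 0.56  # hypothetical mid-gap Fermi level for Si, eV
for E in (0.6, 0.8, 1.1):
    for T in (150.0, 300.0, 450.0):
        fd = fermi_dirac(E, E_F, T)
        mb = maxwell_boltzmann(E, E_F, T)
        print(f"E={E:.2f} eV  T={T:4.0f} K  FD={fd:.3e}  MB={mb:.3e}")
# Far above E_F the two agree; near E_F the MB form overestimates occupation.
```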

(Note: The branch of statistics concerned with measurements that follow the normal distribution is known as parametric statistics. Because many types of measurements follow the normal distribution, these are the most commonly used statistics. Another branch of statistics, designed for measurements that do not follow the normal distribution, is known as nonparametric statistics.)... [Pg.15]

As a general rule, 1000 particles are not required to obtain a statistical distribution unless the range of sizes varies over wide limits. For most measurements 200 particles are sufficient. The use of the filar micrometer is tedious and incurs serious eye-strain. To overcome this difficulty, particles may be projected onto a large ruled screen or grid. The particles will then be sufficiently large to be measured by eye. The use of photographs is also quite common. [Pg.69]

This common measure is the variance of the residence-time distribution. In the absence of reaction, a sudden change in inlet conditions will be followed by a spread-out change in outlet conditions. The spreading can be described by common statistical parameters, the mean, variance, skewness, and so on. [Pg.345]
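
A sketch of computing these statistical parameters numerically from a residence-time distribution; the exponential E(t) of a single ideal stirred tank is used as a stand-in tracer response:

```python
import numpy as np

# Hypothetical residence-time distribution E(t), sampled in time.
t = np.linspace(0.0, 20.0, 401)   # minutes
tau = 5.0
E = np.exp(-t / tau) / tau        # E(t) for a single ideal CSTR

# Normalize numerically, then take the usual statistical moments.
E = E / np.trapz(E, t)
mean = np.trapz(t * E, t)                        # first moment
variance = np.trapz((t - mean) ** 2 * E, t)      # second central moment
skewness = np.trapz((t - mean) ** 3 * E, t) / variance ** 1.5

print(f"mean residence time: {mean:.2f} min")
print(f"variance:            {variance:.2f} min^2")
print(f"skewness:            {skewness:.2f}")
```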

The first moment of a force or weight about an axis is defined as the product of the force and the distance from the axis to the line of action of the force. In this case it is commonly known as the torque. The concept has been extended to more abstract applications such as the moment of an area with respect to a plane and moments of statistical distributions. It is then referred to as the appropriate first moment (the term torque is not used). [Pg.47]
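
In standard notation, the two uses of "first moment" mentioned above are:

```latex
\text{torque (first moment of a force):}\quad M = F\,d
\qquad
\text{first moment of a distribution:}\quad \mu_1 = \int_{-\infty}^{\infty} x\,f(x)\,\mathrm{d}x
```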

A commonly encountered statistical distribution is the binomial distribution. This distribution deals with the behavior of binary outcomes such as the flip of a coin (heads/tails), the gender of a child (boy/girl), or the determination of whether a tablet has acceptable potency (pass/fail). When dealing with a sequence of independent binary outcomes, such as multiple flips of a coin or determining whether the potencies of 20 tablets are individually acceptable, the binomial distribution can be used. The probability of observing x successes in n outcomes is C(n,x) p^x q^(n-x), where q = 1 - p. The binomial expansion for x = 0 to n is C(0,n)p^0 q^n + C(1,n)p^1 q^(n-1) + ... + C(n,n)p^n q^0 = (p + q)^n = 1. [Pg.3490]
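
A minimal numeric sketch using scipy.stats.binom; the tablet-potency pass probability p = 0.95 is a hypothetical value:

```python
from scipy import stats

# Probability that exactly x of n = 20 tablets pass an individual potency
# test, assuming (hypothetically) each tablet passes with p = 0.95.
n, p = 20, 0.95
for x in (18, 19, 20):
    print(f"P({x} pass) = {stats.binom.pmf(x, n, p):.4f}")

# The pmf terms C(n,x) p^x q^(n-x) sum to (p + q)^n = 1 over x = 0..n:
print("sum over all x:", sum(stats.binom.pmf(x, n, p) for x in range(n + 1)))
```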

Another method is to model the statistical distribution of the price (or margin), as illustrated in Figure 6.4e. At its simplest, this method involves taking the average price, adjusted for inflation, over a recent period. This method can miss long-term trends in the data, and few prices follow any of the more commonly used distributions. It is useful, however, in combination with sensitivity analysis methods such as Monte Carlo Simulation (see Section 6.8). [Pg.340]
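
A sketch of the approach, assuming (for illustration only) hypothetical historical prices and a normal form for the fitted distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical inflation-adjusted historical prices ($/tonne) over recent years.
prices = np.array([410.0, 395.0, 430.0, 388.0, 402.0, 415.0, 420.0, 398.0])

mu, sigma = prices.mean(), prices.std(ddof=1)
print(f"fitted mean {mu:.0f}, std {sigma:.0f}")

# Sample future prices from the fitted distribution for use in a Monte Carlo
# sensitivity analysis (a normal form is assumed here, though as noted above
# real prices rarely follow the commonly used distributions).
simulated = rng.normal(mu, sigma, size=10_000)
print("5th-95th percentile range: %.0f-%.0f $/tonne"
      % (np.percentile(simulated, 5), np.percentile(simulated, 95)))
```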

Having introduced the normal distribution and discussed its basic properties, we can move on to the common statistical tests for comparing sets of data. These methods and the calculations performed are referred to as significance tests. An important feature and use of the normal distribution function is that it enables areas under the curve, within any specified range, to be accurately calculated. The function in Equation (1) is integrated numerically and the results presented in statistical tables as areas under the normal curve. From these tables, approximately 68% of observations can be expected to lie within μ ± σ (one standard deviation from the mean), 95% within μ ± 2σ, and more than 99% within μ ± 3σ. [Pg.6]
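
The tabulated areas can be reproduced numerically from the cumulative normal distribution, e.g.:

```python
from scipy import stats

# Area under the standard normal curve within k standard deviations of the mean.
for k in (1, 2, 3):
    area = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(f"mu +/- {k} sigma: {100 * area:.2f}% of observations")
# -> roughly 68.27%, 95.45%, 99.73%
```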

A common feature of these errors is that they are likely to occur independently of each other. As we shall see, this simplifies the analysis of experiments, since it is often possible to use known statistical distributions to assess the significance of the results obtained, e.g. the normal distribution, or other distributions related to the normal distribution, such as the t distribution or the F distribution. Significance tests based on these distributions are discussed later in this chapter. [Pg.45]

Regulatory agencies have traditionally accepted only two-sided hypotheses because, theoretically, one could not rule out harm (as opposed to simply no effect) associated with the test treatment. If the value of a test statistic (for example, the Z-test statistic) is in the critical region at the extreme left or extreme right of the distribution (that is, < -1.96 or > 1.96), the probability of such an outcome by chance alone under the null hypothesis of no difference is 0.05. However, the probability of such an outcome in the direction indicative of a treatment benefit is half of 0.05, that is, 0.025. This led to a common statistical definition of "firm" or "substantial" evidence: the effect was unlikely to have occurred by chance alone and could therefore be attributed to the test treatment. Assuming that two studies of the test treatment had two-sided p values < 0.05 with the direction of the treatment effect in favor of a benefit, the probability of the two results occurring by chance alone would be 0.025 x 0.025, that is, 0.000625 (which can also be expressed as 1/1600). [Pg.129]
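
A short sketch reproducing the arithmetic above with the standard normal distribution:

```python
from scipy import stats

# Two-sided 5% critical values of the standard normal distribution:
z_crit = stats.norm.ppf(0.975)
print(f"critical value: +/-{z_crit:.2f}")     # -> 1.96

# One-sided probability of a result in the beneficial direction:
p_one_sided = 0.05 / 2                        # 0.025

# Two independent studies both favourable by chance alone:
print("joint probability:", p_one_sided ** 2)  # 0.000625
print("as a ratio: 1 in", round(1 / p_one_sided ** 2))  # 1 in 1600
```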

The split-pool protocol is normally carried out on resin beads. There are limitations when generating mixtures of compounds. Due to the statistical distribution of the solid support at each splitting step, the synthesis will lead to over- and under-representation within the library. In order to ensure that 95% of all possible compound members of the library are included with a probability greater than 99% [48,49], the split-pool synthesis should be carried out with an approximately threefold amount of resin beads (termed 3-fold redundancy). For the commonly used resins (about 100 μm bead diameter), 1 g of the support material corresponds to several million resin beads, so that from a statistical point of view, libraries of the order of >10^5 different compounds are possible in practice [48-50]. Depending on the loading capacity of the resin bead, quantities of about 200 pmol (0.1 μg of compound, Mr = 500) can be obtained per resin bead. [Pg.6]
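
A minimal sketch of why roughly 3-fold redundancy suffices: with r beads per library member on average, the chance that a given compound is missed is about exp(-r) in the Poisson limit (the library size and seed below are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(5)

# Assign r * L beads at random to L library members and count coverage.
library_size = 10_000
for r in (1, 2, 3):
    beads = rng.integers(0, library_size, size=r * library_size)
    coverage = np.unique(beads).size / library_size
    print(f"{r}-fold redundancy: {100 * coverage:.1f}% represented "
          f"(Poisson estimate {100 * (1 - np.exp(-r)):.1f}%)")
# 3-fold redundancy covers about 95% of the library, consistent with the
# requirement quoted above.
```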

