
Simple Statistics

In the previous three chapters we considered how to estimate the uncertainty in a derived quantity, starting from an estimate of the uncertainties in the original measurements. We now turn our attention to how this can be done when we have several repeat measurements of the same quantity. To do this we need to apply some simple statistical techniques. [Pg.26]

Both have an average of 25, found by adding up the five numbers in the set and dividing by 5. However, if these were a set of repeat measurements of a particular quantity then we would probably consider the first set to be more reliable, as they are all closer to the average than those in the second set. They also have a smaller spread, as given by the difference between the maximum and minimum values. [Pg.26]

This spread can be quantified by calculating the standard deviation, s, for each set of data. This can be found from the equation

$$ s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}} $$

[Pg.26]

This equation may seem quite complicated at first, but it is easier to apply once we understand the meaning of all the symbols. First of all, xᵢ stands for the individual measurements made. There are 5 values of x, so i takes the values 1, 2, 3, 4, and 5. We thus have, in the first set of data, x₁ = 21, x₂ = 24, x₃ = 25, x₄ = 26, and x₅ = 29. The quantity x̄ is the average of these, which we have already established as 25. The number of data points is n, in this case 5. The symbol Σ (capital Greek sigma) represents a summation, so we simply add all the terms which appear to the right of it. It is easiest to do this using a table, as shown below.

xᵢ      xᵢ − x̄     (xᵢ − x̄)²
21      −4         16
24      −1          1
25       0          0
26       1          1
29       4         16

[Pg.26]

The summation sign in the equation tells us to add up all the terms in the far right column, which gives 34. Note that the individual terms are all positive, since the square of a negative number is positive. Substituting into the equation now gives

$$ s = \sqrt{\frac{34}{5 - 1}} = \sqrt{8.5} \approx 2.9 $$

[Pg.26]
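A minimal Python sketch of the same calculation, using the n − 1 "sample" form of the equation above:

```python
import math

# The first data set from the text: five repeat measurements with mean 25.
data = [21, 24, 25, 26, 29]

n = len(data)
mean = sum(data) / n                             # x-bar = 125 / 5 = 25
squared_devs = [(x - mean) ** 2 for x in data]   # 16, 1, 0, 1, 16
total = sum(squared_devs)                        # 34

s = math.sqrt(total / (n - 1))                   # sqrt(34 / 4) ~ 2.92
print(f"standard deviation s = {s:.2f}")
```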


Molecules are usually represented as 2D formulas or 3D molecular models. While the 3D coordinates of atoms in a molecule are sufficient to describe the spatial arrangement of atoms, they exhibit two major disadvantages as molecular descriptors: they depend on the size of a molecule, and they do not describe additional properties (e.g., atomic properties). The first feature is most important for computational analysis of data. Even a simple statistical function, e.g., a correlation, requires the information to be represented in equally sized vectors of a fixed dimension. The solution to this problem is a mathematical transformation of the Cartesian coordinates of a molecule into a vector of fixed length. The second point can... [Pg.515]
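The excerpt does not specify a particular transformation; as one illustrative sketch (the function and its parameters are assumptions, not from the source), all interatomic distances can be histogrammed into a fixed number of bins, so molecules of any size map onto vectors of the same dimension:

```python
import numpy as np

def distance_histogram(coords, n_bins=32, r_max=10.0):
    """Map an (n_atoms, 3) array of Cartesian coordinates onto a
    fixed-length descriptor: a normalized histogram of all pairwise
    interatomic distances. The output length (n_bins) is independent
    of the number of atoms."""
    coords = np.asarray(coords, dtype=float)
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(coords), k=1)      # count each atom pair once
    hist, _ = np.histogram(dists[iu], bins=n_bins, range=(0.0, r_max))
    return hist / max(hist.sum(), 1)            # normalize away molecule size

# Molecules of different sizes yield descriptors of identical dimension.
small = np.random.rand(5, 3) * 3.0
large = np.random.rand(40, 3) * 3.0
print(distance_histogram(small).shape, distance_histogram(large).shape)  # (32,) (32,)
```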

But decision making in the real world isn't that simple. Statistical decisions are not absolute. No matter which choice we make, there is a probability of being wrong. The converse probability, that we are right, is called the confidence level. If the probability for error is expressed as a percentage, 100 − (% probability for error) = % confidence level; for example, a 5% probability of error corresponds to a 95% confidence level. [Pg.17]

The data used to generate the maps are taken from a simple statistical analysis of the manufacturing process and are based on an assumption that the result will follow a Normal distribution. A number of component characteristics (for example, a length or diameter) are measured and the achievable tolerance at different conformance levels is calculated. This is repeated at different characteristic sizes to build up a relationship between the characteristic dimension and achievable tolerance for the manufacturing process. Both the material and geometry of the component to be manufactured are considered to be ideal, that is, the material properties are in specification, and there are no geometric features that create excessive variability or which are on the limit of processing feasibility. Standard practices should be used when manufacturing the test components and it is recommended that a number of different operators contribute to the results. [Pg.54]
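A sketch of the kind of calculation implied, assuming a Normal distribution and treating the conformance level as a two-sided coverage probability (the function name and sample data are illustrative, not from the cited work):

```python
import numpy as np
from scipy import stats

def achievable_tolerance(measurements, conformance=0.9973):
    """Estimate the +/- tolerance within which the stated fraction of
    components falls, assuming the measurements are Normally distributed.
    conformance=0.9973 corresponds to the familiar +/- 3 sigma band."""
    mu = np.mean(measurements)
    sigma = np.std(measurements, ddof=1)
    z = stats.norm.ppf(0.5 + conformance / 2.0)   # two-sided coverage factor
    return mu, z * sigma

# e.g. fifty measured diameters (mm) from test components
diameters = np.random.normal(loc=25.0, scale=0.02, size=50)
mean, tol = achievable_tolerance(diameters)
print(f"mean = {mean:.3f} mm, achievable tolerance = +/- {tol:.3f} mm")
```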

Zhang JH, Chung TD, Oldenburg KR (1999) A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J Biomol Screen 4(2):67-73 [Pg.587]

It has been suggested that the discrepancies between the value of k_td/k_tc observed and that predicted on the basis of simple statistics may reflect the greater sensitivity of combination to steric factors. Beckhaus and Rüchardt [164] reported a correlation between log(k_td/k_tc) (after statistical correction) and Taft steric parameters for a series of alkyl radicals. [Pg.40]

The simple statistical treatment of radical polymerization can be traced back to Schulz [27]. Texts by Flory [28] and Bamford et al. [29] are useful references. [Pg.240]

In favor of this hypothesis is the fact that in both coupling variants the observed diastereomer distribution is roughly in accord with a simple statistical model, excluding any direct interaction between the monomer units, which are made relatively bulky by the trimethylsilyl groups and the Cp rings. [Pg.154]

Results from the analysis of the RM and the certified value and their uncertainties are compared using simple statistical tests (Ihnat 1993, 1998a). If the measured concentration value agrees with the certified value, the analyst can deduce with some confidence that the method is applicable to the analysis of materials of similar composition. If there is disagreement, the method as applied exhibits a bias, and the underlying causes of error should be sought and corrected, or their effects minimized. [Pg.217]
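One simple test of this kind compares the difference between measured and certified values to their combined uncertainty (an En- or zeta-type score); the sketch below is an illustration under that assumption, not the specific procedure of Ihnat:

```python
import math

def agrees_with_certified(x_meas, u_meas, x_cert, u_cert, k=2.0):
    """Return True if the measured value agrees with the certified value
    within the expanded combined uncertainty (coverage factor k)."""
    combined = math.sqrt(u_meas ** 2 + u_cert ** 2)
    return abs(x_meas - x_cert) <= k * combined

# e.g. measured 10.3 +/- 0.2 mg/kg against a certified 10.0 +/- 0.15 mg/kg
print(agrees_with_certified(10.3, 0.2, 10.0, 0.15))   # True: no bias indicated
```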

They include simple statistics (e.g., sums, means, standard deviations, coefficients of variation), error analysis terms (e.g., average error, relative error, standard error of estimate), linear regression analysis, and correlation coefficients. [Pg.169]
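Most of these quantities are one-liners in any numerical environment; a minimal sketch in Python (the data are illustrative):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])   # measured response

mean = y.mean()
std = y.std(ddof=1)                        # sample standard deviation
cv = 100.0 * std / mean                    # coefficient of variation (%)

fit = stats.linregress(x, y)               # slope, intercept, r, ...
residuals = y - (fit.slope * x + fit.intercept)
see = np.sqrt((residuals ** 2).sum() / (len(y) - 2))  # standard error of estimate

print(f"mean={mean:.2f}  std={std:.2f}  CV={cv:.1f}%")
print(f"slope={fit.slope:.3f}  intercept={fit.intercept:.3f}  "
      f"r={fit.rvalue:.4f}  SEE={see:.3f}")
```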

A simple statistical test for the presence of systematic errors can be performed using data collected as in the experimental design shown in Figure 34-2. (This method is demonstrated in the Measuring Precision without Duplicates sections of the MathCad Worksheets Collabor GM and Collabor TV found in Chapter 39.) The results of this test are shown in Tables 34-9 and 34-10. A systematic error is indicated by the test using... [Pg.176]

A relatively simple statistical downscaling technique which may be applied quickly to a large number of models is the use of correction factors based on monthly relationships between observed data collected at a particular weather station and the relevant RCM control data set for the appropriate grid cell [40]. These monthly differences (for temperature) and ratios (for precipitation) between the control and the point observations (i.e. not the gridded interpolated CRU data set) can then be used to correct the daily RCM control and scenario data. This gives bias-corrected scenarios of temperature and precipitation, which can then be used as input to hydrological models for the exploration of various management and policy formulations. [Pg.308]
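A minimal sketch of this monthly correction-factor approach (variable names and the synthetic series are illustrative; in practice the inputs are daily station observations and RCM output):

```python
import numpy as np
import pandas as pd

def monthly_bias_correction(obs, control, scenario, variable="temperature"):
    """Bias-correct a daily RCM series against station observations using
    monthly correction factors: additive differences for temperature,
    multiplicative ratios for precipitation. Inputs are pandas Series
    indexed by date."""
    corrected = scenario.copy()
    for month in range(1, 13):
        o = obs[obs.index.month == month].mean()
        c = control[control.index.month == month].mean()
        mask = scenario.index.month == month
        if variable == "temperature":
            corrected[mask] = scenario[mask] + (o - c)   # monthly difference
        else:  # precipitation
            corrected[mask] = scenario[mask] * (o / c)   # monthly ratio
    return corrected

# Synthetic one-year daily series, just to show the mechanics.
dates = pd.date_range("2000-01-01", "2000-12-31", freq="D")
obs = pd.Series(np.random.normal(10, 5, len(dates)), index=dates)
ctrl = pd.Series(np.random.normal(12, 5, len(dates)), index=dates)
scen = pd.Series(np.random.normal(14, 5, len(dates)), index=dates)
print(monthly_bias_correction(obs, ctrl, scen).head())
```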

Many attempts have been made to deduce thermodynamics from statistical mechanics, and no one can doubt the intimate relationship or even the complete identity of the two sciences. Nevertheless, it has not hitherto been found possible to proceed by a single path unambiguously from simple statistical assumptions to the laws of thermodynamics. This we hope to have accomplished in this paper and the following. [Pg.6]

If this type of dose and response (in this case, the response is death) information is available, a simple statistical technique is applied to estimate the LD50 - the dose that will on average cause death in 5 of 10 animals, or 50% of the animals in any similar group were the test to be repeated. [Pg.70]
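The passage does not name the technique; one common choice is to fit a dose-mortality curve and read off the dose giving a 50% response. The sketch below uses a logistic fit on log dose (classical probit analysis is the usual alternative); all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_dose, log_ld50, slope):
    """Fraction of animals responding as a function of log10(dose)."""
    return 1.0 / (1.0 + np.exp(-slope * (log_dose - log_ld50)))

# Illustrative data: dose (mg/kg) and fraction of 10 animals that died.
doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
mortality = np.array([0.0, 0.1, 0.4, 0.8, 1.0])

params, _ = curve_fit(logistic, np.log10(doses), mortality, p0=[1.0, 1.0])
ld50 = 10 ** params[0]                     # back-transform from log10 scale
print(f"estimated LD50 ~ {ld50:.1f} mg/kg")
```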

Typically extrapolations of many kinds are necessary to complete a risk assessment. The number and type of extrapolations will depend, as we have said, on the differences between condition A and condition B, and on how well these differences are understood. Once we have characterized these differences as well as we can, it becomes necessary to identify, if at all possible, a firm scientific basis for conducting each of the required extrapolations. Some, as just mentioned, might be susceptible to relatively simple statistical analysis, but in most cases we will find that statistical methods are inadequate. Often, we may find that all we can do is to apply an assumption of some sort, and then hope that most rational souls find the assumption likely to be close to the truth. Scientists like to be able to claim that the extrapolation can be described by some type of model. A model is usually a mathematical or verbal description of a natural process, which is developed through research, tested for accuracy with new and more refined research, adjusted as necessary to ensure agreement with the new research results, and then used to predict the behavior of future instances of the natural process. Models are refined as new knowledge is acquired. [Pg.212]



Copolymer Statistics Within the Framework of Simple Models

Modifications to Simple Statistical Theory – Non-Gaussian Statistics

Simple Statistical Descriptions of Long-chain Molecules

Simple Statistical Model Isotherm

Simple Statistical Treatment of Liquids and Gases

The Quantum Statistical Mechanics of a Simple Model System
