
Elementary Statistical Measures

Statistics is a tool for characterizing a large amount of data by a few key quantities, and it may therefore also be considered as information compression. Consider a data... [Pg.549]

The average is the middle point, or the "centre of gravity", of the data set, but it does not tell how wide the data point distribution is. The data sets 3.0, 4.0, 5.0, 6.0, 7.0 and 4.8, 4.9, 5.0, 5.1, 5.2 have the same average value of 5.0. [Pg.549]
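A quick numerical check of this point, sketched in plain Python using the two data sets quoted above:

```python
# Two data sets with the same average but very different spread.
wide   = [3.0, 4.0, 5.0, 6.0, 7.0]
narrow = [4.8, 4.9, 5.0, 5.1, 5.2]

def mean(xs):
    """Arithmetic mean: the 'centre of gravity' of the data."""
    return sum(xs) / len(xs)

print(mean(wide), mean(narrow))   # both print 5.0
```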

In computational chemistry, the mean may depend on an external parameter, such as time. In a molecular dynamics simulation, for example, the average energy (NVT ensemble) or temperature (NVE ensemble) will depend on the simulation time. Indeed, a plot of the average energy or temperature against time can be used as a measure of whether the system is sufficiently (thermally) equilibrated to provide a realistic model. [Pg.549]
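As an illustration of this kind of equilibration check, the sketch below builds a synthetic temperature-versus-time trace (an exponential relaxation plus noise; the functional form and all numbers are assumptions made for illustration, not taken from the source) and follows its running average, which levels off once the system is effectively equilibrated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic temperature trace: relaxation from 250 K toward 300 K plus thermal noise.
t = np.arange(10_000)                                            # time steps
temp = 300.0 - 50.0 * np.exp(-t / 1500.0) + rng.normal(0.0, 5.0, t.size)

# Running (cumulative) average of the temperature as a function of simulation time.
running_avg = np.cumsum(temp) / (t + 1)

# Compare block averages from the first and second half of the "production" region;
# agreement within the noise suggests that part of the trajectory is equilibrated.
prod = temp[5000:]
first, second = prod[: prod.size // 2], prod[prod.size // 2:]
print(f"running average at end : {running_avg[-1]:.1f} K")
print(f"block averages         : {first.mean():.1f} K vs {second.mean():.1f} K")
```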

The width or spread of the data set can be characterized by the second moment, the variance,

$$\sigma^{2} = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_{i}-\langle x\rangle\right)^{2}$$

[Pg.549]

The normalization factor is N − 1 when the average is calculated from eq. (17.1); if the exact average is known from another source, the normalization is just N. For large samples the difference between the two is minute, and the normalization is often taken as N. The square root of the variance (i.e. σ) is called the standard deviation. The above two data sets have standard deviations of 1.6 and 0.2, clearly showing that the first set contains elements further from the mean than the second. [Pg.549]
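A minimal numerical check of these numbers, sketched with NumPy (its `ddof` argument switches between the N and N − 1 normalizations just described):

```python
import numpy as np

wide   = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
narrow = np.array([4.8, 4.9, 5.0, 5.1, 5.2])

for data in (wide, narrow):
    # ddof=1 divides by N-1 (mean estimated from the same data);
    # ddof=0 divides by N (exact mean known from another source).
    print(f"std (N-1) = {data.std(ddof=1):.2f}, std (N) = {data.std(ddof=0):.2f}")
# The N-1 values are about 1.58 and 0.16, i.e. the 1.6 and 0.2 quoted above.
```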


An essential component of calculations is to calibrate new methods, and to use the results of calculations to predict or rationalize the outcome of experiments. Both of these types of investigation compare two sets of data, and the interest is in characterizing how well one set of data can represent or predict the other. Unfortunately, one or both sets of data usually contain noise, and it may be difficult to decide whether a poor correlation is due to noisy data or to a fundamental lack of connection. Statistics is a tool for quantifying such relationships. We will start with some philosophical considerations and move into elementary statistical measures, before embarking on more advanced tools. [Pg.547]
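One common way to quantify such a relationship is sketched below, under the assumption that the "experimental" values are a noisy linear function of the calculated ones (the synthetic data and the noise level are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(1)

calculated   = np.linspace(0.0, 10.0, 25)                       # e.g. computed quantities
experimental = 1.05 * calculated - 0.3 + rng.normal(0.0, 0.5, calculated.size)

# Pearson correlation coefficient: how well one data set tracks the other.
r = np.corrcoef(calculated, experimental)[0, 1]

# Root-mean-square deviation: the typical size of the disagreement.
rmsd = np.sqrt(np.mean((experimental - calculated) ** 2))

print(f"r = {r:.3f}, RMSD = {rmsd:.2f}")
```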

This section gives some of the more elementary statistical parameters that may be used to characterize and analyze data and discover the underlying relationships among variables that may be hidden by the overlaid variation or noise. Although everything discussed in this section is available in standard texts, a review of the more elementary statistical principles is presented to address the basic problems of measurement quality. This is given... [Pg.20]

The following tables are presented in a format that is compatible with the needs of analytical chemists: the significance level P = 0.05 has been used in most cases, and it has been assumed that the number of measurements available is fairly small. Except where stated otherwise, these abbreviated tables have been reproduced from Elementary Statistics Tables by Henry R. Neave, published by George Allen & Unwin Ltd. (Tables 1-3 and 6-12). The reader requiring statistical data corresponding to significance levels and/or numbers of measurements not covered in the tables is referred to these sources. [Pg.5264]

For example, the measured pressure exerted by an enclosed gas can be thought of as a time-averaged manifestation of the individual molecules' random motions. When one considers an individual molecule, however, statistical thermodynamics would propose that its random motion or pressure could be quite different from that measured by even the most sensitive gauge, which acts to average a distribution of individual molecule pressures. The particulate nature of matter is fundamental to statistical thermodynamics, as opposed to classical thermodynamics, which assumes matter is continuous. Further, these elementary particles and their complex substructures exhibit wave properties even though intra- and interparticle energy transfers are quantized, i.e., not continuous. Statistical thermodynamics holds that the impression of continuity of properties, and even the solidity of matter, is an effect of scale. [Pg.248]
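A rough numerical illustration of this averaging effect is sketched below; the "per-molecule pressure contributions" are simply random numbers drawn from an arbitrary, strongly fluctuating distribution (an assumption for illustration only), and the point is merely that the fluctuation of the average shrinks as roughly 1/√N:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend each value is one molecule's instantaneous contribution to the pressure.
for n in (1, 100, 10_000):
    block_means = rng.exponential(scale=1.0, size=(1_000, n)).mean(axis=1)
    print(f"averaging over N = {n:>6} molecules: "
          f"mean ≈ {block_means.mean():.3f}, spread ≈ {block_means.std():.4f}")
# The spread of the averaged "pressure" drops roughly as 1/sqrt(N):
# a single molecule fluctuates wildly, a macroscopic number hardly at all.
```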

Polymer identification starts with a series of preliminary tests. In contrast to low molecular weight organic compounds, which are frequently satisfactorily identified simply by their melting or boiling point, molecular weight and elementary composition, precise identification of polymers is made difficult by the presence of copolymers, the statistical character of the composition, macromolecular properties and potential polymer-analogous reactions. Exact classification of polymers is not usually possible from a few preliminary tests. Further physical data must be measured and specific reactions must be carried out in order to make a reliable classification. The efficiency of physical methods such as IR spectroscopy and NMR spectroscopy, as well as pyrolysis gas chromatography, makes them particularly important. [Pg.102]

With regard to the molecular origin of these hindrances, it should be mentioned that the linear dimension of Aerosil (300 m² g⁻¹) particles (about 7 nm) is comparable with the mean distance between primary filler particles in the PDMS matrix. Since Gaussian chain statistics might be applied for PDMS chains in the filled rubbers [18], it is easy to show that the chain portions between the primary filler particles should contain about 30-80 elementary units. This value is in the same range as the apparent number of elementary chain units between topological hindrances as measured by NMR... [Pg.798]
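A back-of-envelope version of that estimate, assuming an ideal (Gaussian) chain whose mean-square end-to-end distance is r² = n·b², with an effective statistical segment length b of roughly 0.8-1.3 nm (the segment length is an illustrative assumption, not a value given in the source):

```python
# Gaussian-chain estimate: how many statistical segments span the ~7 nm
# gap between neighbouring filler particles?   r^2 = n * b^2  =>  n = (r/b)^2
r_nm = 7.0                       # mean distance between primary filler particles
for b_nm in (0.8, 1.0, 1.3):     # assumed effective segment length (illustrative)
    n = (r_nm / b_nm) ** 2
    print(f"b = {b_nm} nm  ->  n ≈ {n:.0f} segments")
# This crude estimate lands in the few-tens range, consistent with the
# 30-80 elementary units quoted above.
```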

S. A. Schmitt, Measuring Uncertainty: An Elementary Introduction to Bayesian Statistics, Addison-Wesley, Reading, Mass., 1969. [Pg.561]

The meaning of the law of transformation is that every atom in a certain measure has the same explosion probability; clearly the law is a purely statistical one. This has been confirmed in two different ways. First, it has been found quite impossible by ordinary physical means (say high temperatures) to accelerate or retard the process of disintegration, or to affect it in any way. Secondly, it has been found possible to determine not only the mean number of particles emitted per second, but also the fluctuations about the mean, and it turns out that these obey the regular statistical laws (Appendix IV, p. 266). Radioactive disintegration is the prototype of an elementary process which the ideas of classical physics are powerless to explain, but with which modern quantum theory is quite capable of dealing. [Pg.24]
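A minimal simulation of this statistical character (all numbers are invented for illustration): each of N atoms decays independently with the same small probability per counting interval, and the counts per interval then show the Poisson-like behaviour, variance ≈ mean, referred to above.

```python
import numpy as np

rng = np.random.default_rng(3)

n_atoms = 1_000_000      # atoms in the sample (illustrative)
p_decay = 5e-5           # decay probability per atom per interval (illustrative)

# Number of decays registered in each of 10,000 equal counting intervals.
counts = rng.binomial(n_atoms, p_decay, size=10_000)

print(f"mean counts per interval : {counts.mean():.1f}")
print(f"variance of counts       : {counts.var():.1f}")
# For a purely statistical (Poisson-like) process, variance ≈ mean.
```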

A collection of several thousand such chains may be thought of as constituting a numerical model of the chain reaction; it is analyzed, also by the computing machine, by statistical methods identical to the ones used for analyzing experimental observations of physical processes. In the language of probability, each chain is a point in a sample space. The probability measure in this space is necessarily very complicated, but from our knowledge of the elementary processes in a chain, it follows that such a probability measure exists and that our simulation procedure actually samples the distribution described by that measure. [Pg.194]
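A toy version of that idea is sketched below (the offspring probabilities are invented for illustration): each simulated chain is one sample point, and the collection of chains is then analyzed with exactly the elementary measures described above.

```python
import random

random.seed(4)

def chain_size(p_offspring=(0.6, 0.3, 0.1)):
    """Simulate one (sub-critical) chain: each particle produces 0, 1 or 2
    secondaries with the given probabilities; return the total chain size."""
    active, total = 1, 1
    while active:
        active -= 1
        offspring = random.choices((0, 1, 2), weights=p_offspring)[0]
        active += offspring
        total += offspring
    return total

# Several thousand chains, analyzed like experimental observations.
sizes = [chain_size() for _ in range(10_000)]
mean = sum(sizes) / len(sizes)
var = sum((s - mean) ** 2 for s in sizes) / (len(sizes) - 1)
print(f"mean chain size = {mean:.2f} ± {(var / len(sizes)) ** 0.5:.2f} (standard error)")
```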

The most popular and most important inverse problem is the estimation of reaction rate constants; see, for example, Deuflhard et al. (1981), Hosten (1979), or Vajda et al. (1987). Using the terminology introduced above, the model is the function that gives the solution of the kinetic differential equation as a function of the reaction, while the observation operator provides the values of the solution at discrete time points together with a certain error. In this case a subset of V with the same mechanism is delineated, and the aim is to select a reaction from this set in such a way that the solution of the kinetic differential equation is as close to the measurements as possible in a prescribed, usually quadratic, norm. As the solution is a nonlinear function of the parameters, a final solution to the general problem seems to be unobtainable, both because a global optimum usually cannot be determined and because the estimates cannot be well characterised from the statistical point of view. In addition to these problems, reaction rate constants only have a physicochemical meaning if they are universal, i.e. the reaction rate constant of a concrete elementary reaction must be the same whenever it is estimated from any complex chemical reaction. [Pg.74]
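A minimal sketch of such an estimation for the simplest possible case, a first-order decay A → B with analytic solution c(t) = c₀·exp(−kt); the synthetic "measurements", the noise level and the true rate constant are all assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

def model(t, k, c0):
    """Solution of the kinetic ODE dc/dt = -k*c for a first-order reaction A -> B."""
    return c0 * np.exp(-k * t)

# Synthetic "measurements": true k = 0.35 1/s, c0 = 1.0 M, plus Gaussian noise.
t_obs = np.linspace(0.0, 10.0, 15)
c_obs = model(t_obs, 0.35, 1.0) + rng.normal(0.0, 0.02, t_obs.size)

# Least-squares (quadratic-norm) estimate of k and c0.
popt, pcov = curve_fit(model, t_obs, c_obs, p0=(0.1, 0.5))
k_hat, c0_hat = popt
k_err = np.sqrt(pcov[0, 0])
print(f"estimated k = {k_hat:.3f} ± {k_err:.3f} 1/s (true value 0.35)")
```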

Comparison of quantities observable in electron-positron collisions at energies near the Z mass with the predictions of the Standard Model of elementary particles (Amsler et al. 2008). Note the high precision of the calculations and the good agreement between theory and experiment. At present the largest deviation is in the electron-positron forward-backward asymmetry and the anomalous magnetic moment of the muon. If a measured quantity has two uncertainties, the first one is the statistical and the second the systematic error... [Pg.471]
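When a single overall uncertainty is needed, independent statistical and systematic errors are commonly combined in quadrature; a one-line sketch (the numbers are invented for illustration):

```python
import math

stat, syst = 0.021, 0.013          # statistical and systematic uncertainties (illustrative)
total = math.hypot(stat, syst)     # quadrature sum: sqrt(stat**2 + syst**2)
print(f"total uncertainty = {total:.3f}")
```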



