Big Chemical Encyclopedia


Statistical Mean Value

J. Manz: Let me add a comment on Professor W. H. Miller's remark that he would never make himself, but which I can express as the chairman of this session. In fact, Professor Miller's extension of the standard RRKM theory allows one to predict not only the statistical mean values of the rate coefficients but also their fluctuations. This is an important achievement in chemical reaction theory over the past couple of years, and it would be appropriate to call it the RRKMM theory (Ramsperger-Rice-Kassel-Marcus-Miller) [1]. [Pg.812]

Since Un = 0, the non-perturbed statistical mean values (95) in the expansions (100a—c) reduce to isotropic mean values over the molecular orientations (101), and by (104) and (121) we have ... [Pg.145]

Statistical Perturbation Calculus.—Statistical Mean Value. Let Q denote some physical quantity (electric polarization, magnetic polarization, etc.), in general a function of the continuously varying canonical variables Γ (generalized momenta, generalized co-ordinates). In classical statistical mechanics, the mean value of Q is defined as ... [Pg.341]
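The defining equation is cut off in the excerpt above; as a minimal sketch, the standard classical canonical-ensemble average it refers to has the form (assuming Boltzmann weighting with Hamiltonian H, temperature T, and phase-space element dΓ)

\[ \langle Q \rangle = \frac{\int Q(\Gamma)\, e^{-H(\Gamma)/kT}\, d\Gamma}{\int e^{-H(\Gamma)/kT}\, d\Gamma} \]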

Above, one still has to compute the statistical mean values ... [Pg.380]

Synthesis of weather conditions in a given area, characterized by long-term statistics (mean values, variances, probabilities of extreme values, etc.) of the meteorological elements in that area (WMO 1992, p. 112). [Pg.327]

According to the exact definition (22,IV), the collision diameter is related to the partition function; therefore, it has the meaning of a statistical mean value of the distance of closest approach of molecules A and B in a very fast "non-adiabatic" collision. [Pg.249]

Reference material sets which are certified by the International Confederation for Thermal Analysis and Calorimetry (ICTAC) are available through the US National Institute of Standards and Technology (NIST), and are listed in Appendix 2.2. High-purity metals and organic compounds, including polymers, have been certified. If the standard reference material must be dispensed with a syringe into the sample vessel (for example, cyclohexane), care must be taken to ensure that only one droplet is formed in the sample vessel. Multiple transition peaks will be observed if there is more than one droplet present. The transition temperatures listed in Appendix 2.2 are the statistical mean values of measurements made in a number of laboratories and institutes. The ICTAC reference materials are certified for temperature calibration only and not for enthalpy calibration. The reference temperatures in Appendix 2.1 should be used if very accurate calibration of the instrument is required. In order to determine the heat capacity (Cp) of a sample, sapphire (α-alumina, Al2O3) is used as a standard reference material. The Cp of... [Pg.29]
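The excerpt breaks off before the working equation; one common form of the sapphire ratio method (sample run, sapphire run, and empty-pan baseline run at the same scan rate) is sketched below, with all signal and mass names hypothetical:

def cp_ratio_method(dsc_sample, dsc_sapphire, dsc_baseline,
                    m_sample, m_sapphire, cp_sapphire):
    # Heat-flow signals at the same temperature, corrected against the empty-pan baseline
    return (cp_sapphire * (m_sapphire / m_sample)
            * (dsc_sample - dsc_baseline) / (dsc_sapphire - dsc_baseline))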

Diffusion may be considered as random motion. If we know where the particles are located at different times, as well as their velocities, we may follow their motion by solving Newton's equations and using molecular dynamics. A statistical mean value is obtained for varied boundary conditions, and the results are sampled. [Pg.170]
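A minimal sketch (with a hypothetical trajectory array) of how such a statistical mean is taken in practice: the mean-squared displacement averaged over all particles, from which a diffusion coefficient follows via the Einstein relation.

import numpy as np

def mean_squared_displacement(positions):
    # positions: array of shape (n_frames, n_particles, 3) from an MD trajectory (hypothetical input)
    displacements = positions - positions[0]               # displacement of each particle from frame 0
    return (displacements ** 2).sum(axis=2).mean(axis=1)   # statistical mean over particles, per frame

# In three dimensions the Einstein relation gives D ≈ MSD(t) / (6 t) at long times,
# usually estimated from the slope of MSD versus time.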

Degree of Substitution and DS Distribution. For cellulose esters, the substitution level is usually expressed in terms of degree of substitution (DS), that is, the average number of substituents per anhydroglucose unit (AGU). Cellulose contains three hydroxyl groups in each AGU that can be substituted; therefore, DS can have a value between 0 and 3. Because DS is a statistical mean value, a value of 1 does not assure that every AGU has a single substituent. Any given cellulose ester is a mixture of tri-, di-, mono-, and unsubstituted monomers. The physical properties commonly associated with commercial cellulose acetates. [Pg.1112]
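As an illustration of DS as a statistical mean, the following sketch computes DS from a hypothetical distribution of un-, mono-, di-, and tri-substituted AGUs:

fractions = {0: 0.05, 1: 0.15, 2: 0.40, 3: 0.40}   # hypothetical mole fractions of AGU substitution levels
ds = sum(n_subst * frac for n_subst, frac in fractions.items())
print(ds)   # 2.15: an average; no individual AGU carries 2.15 substituents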

Also, since a polymer is a mixture of molecules differing in molecular weight, the value of a measurement obtained by such methods is nothing but a statistical mean value assuming... [Pg.265]
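The two most common such statistical means for a polydisperse polymer are the number-average and weight-average molecular weights; a short sketch with hypothetical chain counts:

import numpy as np

masses = np.array([10_000.0, 20_000.0, 50_000.0])   # molar masses of the species present (hypothetical)
counts = np.array([500.0, 300.0, 200.0])            # number of chains of each mass (hypothetical)

mn = (counts * masses).sum() / counts.sum()                   # number-average molecular weight, Mn
mw = (counts * masses ** 2).sum() / (counts * masses).sum()   # weight-average molecular weight, Mw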

The prior knowledge is assumed to be the discrete structure of the image, the statistical independence of the noise values, their stationarity and zero mean value. For this case, the image reconstruction problem can be represented as an adaptive stochastic estimation process [9] with the structure shown in Fig. 1. [Pg.122]

A statistical measure of the average deviation of data from the data's mean value (s). [Pg.56]
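For reference, the sample standard deviation described by this glossary entry is conventionally computed as

\[ s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}} \]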

Statistical test for comparing two mean values to see if their difference is too large to be explained by indeterminate error. [Pg.85]

A statistical analysis allows us to determine whether our results are significantly different from known values, or from values obtained by other analysts, by other methods of analysis, or for other samples. A t-test is used to compare mean values, and an F-test to compare precisions. Comparisons between two sets of data require an initial evaluation of whether the data... [Pg.97]
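A minimal sketch of both comparisons using SciPy, with hypothetical replicate results for two analysts:

import numpy as np
from scipy import stats

a = np.array([3.080, 3.094, 3.107, 3.056, 3.112])   # analyst 1 (hypothetical data)
b = np.array([3.052, 3.141, 3.083, 3.083, 3.048])   # analyst 2 (hypothetical data)

# F-test on precisions: ratio of the larger sample variance to the smaller
f_exp = max(a.var(ddof=1), b.var(ddof=1)) / min(a.var(ddof=1), b.var(ddof=1))

# t-test on mean values (pooled, equal-variance form; use equal_var=False if the F-test fails)
t_exp, p_value = stats.ttest_ind(a, b, equal_var=True)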

In this problem you will collect and analyze data in a simulation of the sampling process. Obtain a pack of M&M's or other similar candy. Obtain a sample of five candies, and count the number that are red. Report the result of your analysis as % red. Return the candies to the bag, mix thoroughly, and repeat the analysis for a total of 20 determinations. Calculate the mean and standard deviation for your data. Remove all candies, and determine the true % red for the population. Sampling in this exercise should follow binomial statistics. Calculate the expected mean value and expected standard deviation, and compare to your experimental results. [Pg.228]
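A numerical stand-in for the exercise (the true fraction of red candies is hypothetical), comparing the simulated results against the binomial expectations:

import numpy as np

rng = np.random.default_rng(1)
p_red, n_candy, n_trials = 0.13, 5, 20   # hypothetical true fraction of red candies; 20 samples of 5

pct_red = 100 * rng.binomial(n_candy, p_red, size=n_trials) / n_candy   # % red found in each sample
print(pct_red.mean(), pct_red.std(ddof=1))

expected_mean = 100 * p_red                                    # binomial expected mean
expected_std = 100 * np.sqrt(p_red * (1 - p_red) / n_candy)    # binomial sampling standard deviation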

Once a significant difference has been demonstrated by an analysis of variance, a modified version of the t-test, known as Fisher's least significant difference, can be used to determine which analyst or analysts are responsible for the difference. The test statistic for comparing the mean values X1 and X2 is the t-test described in Chapter 4, except that s_pool is replaced by the square root of the within-sample variance obtained from the analysis of variance. [Pg.696]
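A sketch of this modified t statistic (names are illustrative; ms_within is the within-sample variance from the ANOVA):

import numpy as np

def fisher_lsd_t(mean1, n1, mean2, n2, ms_within):
    # Two-sample t statistic with s_pool replaced by sqrt(MS_within) from the ANOVA
    standard_error = np.sqrt(ms_within * (1.0 / n1 + 1.0 / n2))
    return (mean1 - mean2) / standard_error

# Compare |t| with the critical t value at the within-sample degrees of freedom.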

The principal tool for performance-based quality assessment is the control chart. In a control chart the results from the analysis of quality assessment samples are plotted in the order in which they are collected, providing a continuous record of the statistical state of the analytical system. Quality assessment data collected over time can be summarized by a mean value and a standard deviation. The fundamental assumption behind the use of a control chart is that quality assessment data will show only random variations around the mean value when the analytical system is in statistical control. When an analytical system moves out of statistical control, the quality assessment data is influenced by additional sources of error, increasing the standard deviation or changing the mean value. [Pg.714]

Control charts were originally developed in the 1920s as a quality assurance tool for the control of manufactured products. Two types of control charts are commonly used in quality assurance: a property control chart, in which results for single measurements, or the means for several replicate measurements, are plotted sequentially; and a precision control chart, in which ranges or standard deviations are plotted sequentially. In either case, the control chart consists of a line representing the mean value for the measured property or the precision, and two or more boundary lines whose positions are determined by the precision of the measurement process. The position of the data points about the boundary lines determines whether the system is in statistical control. [Pg.714]

Construction of Property Control Charts The simplest form for a property control chart is a sequence of points, each of which represents a single determination of the property being monitored. To construct the control chart, it is first necessary to determine the mean value of the property and the standard deviation for its measurement. These statistical values are determined using a minimum of 7 to 15 samples (although 30 or more samples are desirable), obtained while the system is known to be under statistical control. The center line (CL) of the control chart is determined by the average of these n points... [Pg.715]
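A minimal sketch of this construction (warning limits at ±2s are a common convention not spelled out in the excerpt; the ±3s control limits match the passage that follows):

import numpy as np

def property_control_limits(baseline_results):
    # baseline_results: 7-15 (ideally 30 or more) results obtained while the system is in statistical control
    x = np.asarray(baseline_results, dtype=float)
    cl, s = x.mean(), x.std(ddof=1)
    return {
        "CL": cl,                                  # center line: mean of the baseline results
        "UWL": cl + 2 * s, "LWL": cl - 2 * s,      # upper/lower warning limits (±2s)
        "UCL": cl + 3 * s, "LCL": cl - 3 * s,      # upper/lower control limits (±3s)
    }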

Interpreting Control Charts The purpose of a control chart is to determine if a system is in statistical control. This determination is made by examining the location of individual points in relation to the warning limits and the control limits, and the distribution of the points around the central line. If we assume that the data are normally distributed, then the probability of finding a point at any distance from the mean value can be determined from the normal distribution curve. The upper and lower control limits for a property control chart, for example, are set to ±3S, which, if S is a good approximation for σ, includes 99.74% of the data. The probability that a point will fall outside the UCL or LCL, therefore, is only 0.26%. The... [Pg.718]
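These coverage figures follow directly from the normal distribution; a short check (reproducing the roughly 99.7% coverage quoted above):

from math import erf, sqrt

for k in (1, 2, 3):
    inside = erf(k / sqrt(2))     # fraction of a normal population within ±k standard deviations
    print(f"±{k} sigma: {inside:.4%} inside, {1 - inside:.4%} outside")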

Usually, 10 to 20 measurements are made of the isotope ratio for one substance. Sometimes, one or more of these measurements appears to be sufficiently different from the mean value that the question arises as to whether or not it should be included in the set at all. Several statistical criteria are available for reaching an objective assessment of the reliability of the apparently rogue result (Figure 48.10). Such odd results are often called outliers, and ignoring them gives a more precise mean value (lower standard deviation). It is not advisable to remove such data more than once in any one set of measurements. [Pg.361]
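One such criterion (not necessarily the one used in the excerpt) is Grubbs' test, sketched here; the critical value must be taken from tables for the chosen significance level and number of measurements:

import numpy as np

def grubbs_statistic(values):
    # G = |x_i - mean| / s for the measurement farthest from the mean
    x = np.asarray(values, dtype=float)
    return np.abs(x - x.mean()).max() / x.std(ddof=1)

# Reject the suspect value only if G exceeds the tabulated Grubbs critical value.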

The degree of data spread around the mean value may be quantified using the concept of standard deviation, σ. If the distribution of data points for a certain parameter has a Gaussian or normal distribution, the probability that a normally distributed data point lies within ±σ of the mean value is 0.6826, or 68.26%. There is a 68.26% probability of obtaining a certain parameter within X ± σ, where X is the mean value. In other words, the standard deviation, σ, represents a distance from the mean value, in both positive and negative directions, such that the number of data points between X - σ and X + σ is 68.26% of the total data points. Detailed descriptions of the statistical analysis using the Gaussian distribution can be found in standard statistics reference books (11). [Pg.489]

The physics and modeling of turbulent flows are affected by combustion through the production of density variations, buoyancy effects, dilation due to heat release, molecular transport, and instability (1,2,3,5,8). Consequently, the conservation equations need to be modified to take these effects into account. This modification is achieved by the use of statistical quantities in the conservation equations. For example, because of the variations and fluctuations in the density that occur in turbulent combustion flows, density-weighted mean values, or Favre mean values, are used for velocity components, mass fractions, enthalpy, and temperature. The turbulent diffusion flame can also be treated in terms of a probability distribution function (pdf), the shape of which is assumed to be known a priori (1). [Pg.520]
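For reference (the definition is not given in the excerpt), the density-weighted (Favre) mean of a quantity φ is conventionally written in terms of the ordinary (Reynolds) mean and the density ρ as

\[ \tilde{\phi} = \frac{\overline{\rho \phi}}{\overline{\rho}} \]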

The mean value x̄ of a property x is a statistic based on a sample of n items, defined by [Pg.821]
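The defining equation is cut off in the excerpt; the standard arithmetic-mean formula being referred to is presumably

\[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \]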

Computer simulation is an experimental science to the extent that calculated dynamic properties are subject to systematic and statistical errors. Sources of systematic error consist of size dependence, poor equilibration, non-bond interaction cutoff, etc. These should, of course, be estimated and eliminated where possible. It is also essential to obtain an estimate of the statistical significance of the results. Simulation averages are taken over runs of finite length, and this is the main cause of statistical imprecision in the mean values so obtained. [Pg.56]
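A minimal sketch (hypothetical data and block count) of one common way to estimate the statistical uncertainty of a simulation average, block averaging over a possibly correlated time series:

import numpy as np

def block_average(series, n_blocks=10):
    # Split the run into contiguous blocks, average each block, and report the
    # run mean together with the standard error of the block means.
    x = np.asarray(series, dtype=float)
    block_means = np.array([block.mean() for block in np.array_split(x, n_blocks)])
    return x.mean(), block_means.std(ddof=1) / np.sqrt(n_blocks)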

Further discussion of the significance of mean values and standard deviations can be found in Chapters 14 and 28 and in any textbook on statistics. [Pg.54]

Individuals differ in their sensitivity to odor. Figure 14-7 shows a typical distribution of sensitivities to ethylsulfide vapor (17). There are currently no guidelines on inclusion or exclusion of individuals with abnormally high or low sensitivity. This variability of response complicates the data treatment procedure. In many instances, the goal is to determine some mean value for the threshold representative of the panel as a whole. The small size of panels (generally fewer than 10 people) and the distribution of individual sensitivities require sophisticated statistical procedures to find the threshold from the responses. [Pg.207]

The inference from the statistical calculations is that the true mean value of the carbon monoxide from the idling automobile has a 66.7% chance of being between 1.664% and 1.870%. The best single number for the carbon monoxide emission would be 1.767% (the mean value). [Pg.535]
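For reference, an interval of this form typically comes from the confidence interval for a mean (the excerpt does not report the standard deviation s or the number of measurements n):

\[ \bar{x} \pm t_{\alpha,\,n-1} \frac{s}{\sqrt{n}} \]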

Statistics on the data fields: summary statistics (mean, std dev, min, max), percentile values at desired intervals, and linear regression on two numerical data fields. [Pg.372]


See other pages where Statistical Mean Value is mentioned: [Pg.257]    [Pg.111]    [Pg.49]    [Pg.61]    [Pg.148]    [Pg.186]    [Pg.199]    [Pg.342]    [Pg.112]    [Pg.136]    [Pg.198]    [Pg.86]    [Pg.300]    [Pg.695]    [Pg.721]    [Pg.779]    [Pg.780]    [Pg.360]    [Pg.375]    [Pg.1123]

