Common Statistical Terms

The object of any quantitative chemical analysis is to determine the amount of a particular constituent, X, in the analytical sample. In addition to getting a numerical answer for the amount or percentage of X, the analyst must be confident that the results are sufficiently accurate and can be repeated. Simple statistical concepts are very useful in this regard. [Pg.341]

Suppose a series of measurements of a given substance is repeated several times, giving the values X1, X2, X3, ..., Xn. The following statistical terms are commonly used. [Pg.341]

When applying statistical concepts to a chromatographic determination, X is considered to be a random variable. We assume that there is no systematic error (either positive or negative); X simply varies in a random manner. Repeated measurements of X are considered to follow a Gaussian, or normal, distribution. [Pg.341]

There is often some confusion regarding the symbol used for standard deviation. The symbol σ is often used in place of s when the measured value is the average of 10 or more values; under these conditions s is a very good estimator for σ. The symbol s is used when the standard deviation is not already known. [Pg.342]

A normal distribution curve can be used to determine how good X̄ and s are as estimates of the true mean and standard deviation. The density function f(x) of the normal random variable X, with mean μ and variance σ², is given by the equation [Pg.342]

f(x) = (1/(σ√(2π))) exp[−(x − μ)²/(2σ²)]
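As a quick numerical illustration of the density function above, the short sketch below (Python; the mean and standard deviation are invented values, not taken from the text) evaluates f(x) at a few points around the mean.

    import numpy as np

    def normal_density(x, mu, sigma):
        # Gaussian density f(x) with mean mu and standard deviation sigma
        return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

    # Illustrative values only: mean 24.73 and standard deviation 0.02
    x = np.linspace(24.67, 24.79, 7)
    print(np.round(normal_density(x, mu=24.73, sigma=0.02), 2))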
In analytical chemistry one of the most common statistical terms employed is the standard deviation of a population of observations. This is also called the root mean square deviation, as it is the square root of the mean of the squares of the differences between the values and the mean of those values (this is expressed mathematically below), and it is of particular value in connection with the normal distribution. [Pg.134]
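For readers who prefer a worked example, the sketch below (plain Python, with invented replicate results) computes the root mean square deviation about the mean and, for comparison, the sample standard deviation with the N − 1 divisor.

    import math

    # Hypothetical replicate results (invented for illustration)
    values = [10.08, 10.11, 10.09, 10.10, 10.12]

    mean = sum(values) / len(values)
    squared_devs = [(v - mean) ** 2 for v in values]

    # Population form: divide by N (the "root mean square deviation" described above)
    sigma = math.sqrt(sum(squared_devs) / len(values))

    # Sample form: divide by N - 1 (the usual estimator s)
    s = math.sqrt(sum(squared_devs) / (len(values) - 1))

    print(f"mean = {mean:.3f}, rms deviation = {sigma:.4f}, s = {s:.4f}")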

Table 2.1 summarizes possible conclusions of the decision-making process and common statistical terms used for describing decision errors in hypothesis testing. [Pg.28]

Several terms have been used to define LOD and LOQ. Before we proceed to develop a uniform definition, it would be useful to define each of these terms. The most commonly used terms are limit of detection (LOD) and limit of quantification (LOQ). The 1975 International Union of Pure and Applied Chemistry (IUPAC) definition for LOD can be stated as "a number expressed in units of concentration (or amount) that describes the lowest concentration level (or amount) of the element that an analyst can determine to be statistically different from an analytical blank" [1]. This term, although appearing to be straightforward, is overly simplified. It leaves several questions unanswered, such as: what does the term "statistically different" mean, and what factors has the analyst considered in defining the blank? Leaving these to the analyst's discretion may result in values varying between analysts to such an extent that the numbers would be meaningless for comparison purposes. [Pg.62]
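One widely used working convention, broadly in the spirit of the IUPAC wording (though not the only possible reading of it), takes the LOD as the mean blank signal plus three standard deviations of the blank, and the LOQ as the mean blank plus ten. The sketch below assumes that convention and uses invented blank readings.

    import statistics

    # Hypothetical replicate blank readings (instrument response units; invented)
    blank = [0.012, 0.015, 0.011, 0.014, 0.013, 0.016, 0.012, 0.014, 0.013, 0.015]

    blank_mean = statistics.mean(blank)
    blank_sd = statistics.stdev(blank)   # sample standard deviation of the blank

    lod = blank_mean + 3 * blank_sd      # limit of detection (3-sigma convention)
    loq = blank_mean + 10 * blank_sd     # limit of quantification (10-sigma convention)

    print(f"blank mean = {blank_mean:.4f}, s(blank) = {blank_sd:.4f}")
    print(f"LOD = {lod:.4f}, LOQ = {loq:.4f} (same response units as the blank)")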

Some statistical terms are commonly used when describing genetic conditions and other disorders. These terms include ... [Pg.26]

While the word "confidence" in the previous sentence is used in its everyday sense, the term is also used in Statistics in a precise manner, analogous to the statistical terms "Normal" and "significant". Confidence intervals constitute a range of values that are defined by the lower limit and the upper limit of the interval. These limits are placed symmetrically on either side of the sample mean. A commonly used CI is the 95% CI. A commonly expressed view of a 95% CI is that one can be 95% certain that... [Pg.121]
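A minimal sketch of how such an interval is usually calculated for a sample mean, using the t-distribution (the replicate values are invented for illustration):

    import statistics
    from scipy import stats   # t-distribution quantiles

    # Hypothetical replicate measurements (illustrative only)
    data = [99.8, 100.2, 100.1, 99.9, 100.3, 100.0]

    n = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)

    # 95% confidence interval for the mean: mean +/- t * s / sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    half_width = t_crit * s / n ** 0.5

    print(f"95% CI: {mean:.2f} +/- {half_width:.2f} "
          f"({mean - half_width:.2f} to {mean + half_width:.2f})")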

A common statistical comparison is between the test material(s) and the control material(s), to detect any differences beyond those that would occur as a consequence of random probability. In general, the smaller the panel, the lower the power of the test, i.e. it will be less likely to identify genuine differences should they exist. Whether this is an issue hinges on the size of difference that the investigator would like to detect, with the optimum panel size determined by the anticipated variability of the results, which may not be known. A pragmatic approach should be taken toward panel size selection: a panel large enough to allow meaningful analysis, but not so large that running the study becomes unwieldy or prohibitively costly. [Pg.511]
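The link between panel size and power can be made concrete by simulation. The sketch below assumes a particular true difference and variability (invented numbers) and estimates, for two panel sizes, the probability that a two-sample t-test would detect the difference.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def estimated_power(n_per_group, true_diff, sd, alpha=0.05, n_sim=2000):
        # Fraction of simulated panels in which the t-test flags the difference
        hits = 0
        for _ in range(n_sim):
            control = rng.normal(0.0, sd, n_per_group)
            test = rng.normal(true_diff, sd, n_per_group)
            _, p = stats.ttest_ind(test, control)
            hits += p < alpha
        return hits / n_sim

    # Assumed true difference of 0.5 units with a standard deviation of 1.0
    for n in (8, 30):
        print(f"n = {n:2d} per group -> estimated power ~ {estimated_power(n, 0.5, 1.0):.2f}")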

The general occurrence of an event (its prevalence), usually expressed as a percentage of some population. Another common statistic in survey studies is incidence, or the number of first-time occurrences of an event during some time period. [Pg.21]

In this book all research questions are addressed and then answered via the construction of two research hypotheses, commonly called the null hypothesis and the alternate hypothesis. (Although another name for the alternate hypothesis, the research hypothesis, has its own appeal, we employ the commonly used term "alternate hypothesis" in this book.) Both of these hypotheses are key components of the procedure of hypothesis testing. This procedure is a statistical way of doing business. It is described and discussed in detail in Chapter 6, but it is beneficial to introduce the main concept here. [Pg.26]

Step 1 reduces infinitely variable analogue numbers to distinct incremental status values that are more readily understood and interpreted. This process uses current sample, historical and statistical data to completely define the meaning of each oil test parameter in simple common language terms. For example ... [Pg.489]

Notice that here we use the concept of distribution in a non-rigorous statistical sense. In rigorous statistical terms, "distribution" usually alludes to the cumulative distribution function. Here, as in common language, by distribution we mean what in rigorous statistical terms is denoted as "density function" or "probability function". [Pg.6]

Analysts commonly perform several replicate determinations in the course of a single experiment. (The value and significance of such replicates are discussed in detail in the next chapter.) Suppose an analyst performs a titrimetric experiment four times and obtains values of 24.69, 24.73, 24.77 and 25.39 ml. (Note that titration values are reported to the nearest 0.01 ml; this point is also discussed further in Chapter 2.) All four values are different, because of the variations inherent in the measurements, and the fourth value (25.39 ml) is substantially different from the other three. So can this fourth value be safely rejected, so that (for example) the mean titre is reported as 24.73 ml, the average of the other three readings? In statistical terms, is the value 25.39 ml an outlier? The important topic of outlier rejection is discussed in detail in Chapters 3 and 6. [Pg.2]
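One common screening test for a single suspect value in a small data set is Dixon's Q test (whether it is the appropriate choice depends on the situation, and the critical value quoted below, roughly 0.83 for n = 4 at the 95% level, should be checked against the table actually in use). Applied to the four titration values above:

    titres = [24.69, 24.73, 24.77, 25.39]     # ml, from the example above

    values = sorted(titres)
    suspect = values[-1]                      # 25.39 ml is the suspect high value

    gap = suspect - values[-2]                # distance to its nearest neighbour
    spread = values[-1] - values[0]           # overall range
    q_exp = gap / spread

    Q_CRIT_N4_95 = 0.83                       # approximate tabulated value, n = 4, 95% confidence
    print(f"Q = {q_exp:.3f} (critical ~ {Q_CRIT_N4_95})")
    print("reject as an outlier" if q_exp > Q_CRIT_N4_95 else "retain the value")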

Weighing procedures are normally associated with very small random errors. In routine laboratory work a four-place balance is commonly used, and the random error involved should not be greater than ca. 0.0002 g (the next chapter describes in detail the statistical terms used to express random errors). If the quantity being weighed is normally ca. 1 g or more, it is evident that the random error, expressed as a percentage of the weight involved, is not more than 0.02%. A good standard... [Pg.7]

Although it is an elegant approach to the common problem of matrix interference effects, the method of standard additions has a number of disadvantages. The principal one is that each test sample requires its own calibration graph, in contrast to conventional calibration experiments, where one graph can provide concentration values for many test samples. The standard-additions method may also use larger quantities of sample than other methods. In statistical terms it is an extrapolation method, and in principle less precise than interpolation techniques. In practice, the loss of precision is not very serious. [Pg.126]
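A minimal sketch of the extrapolation step, assuming a linear response and invented signal values: the unknown concentration is recovered from the x-intercept of the signal versus added-concentration line.

    import numpy as np

    # Added standard concentrations (e.g. ug/ml) and measured signals -- invented data
    added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    signal = np.array([0.32, 0.41, 0.52, 0.60, 0.70])

    # Least-squares straight line: signal = slope * added + intercept
    slope, intercept = np.polyfit(added, signal, 1)

    # Extrapolating to zero signal gives an x-intercept of -intercept/slope;
    # its magnitude estimates the concentration in the test sample.
    c_sample = intercept / slope
    print(f"estimated sample concentration ~ {c_sample:.1f} ug/ml")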

In statistical circles, a commonly used term is the myth of small numbers. Assume that an employer had 100 employees who worked 200,000 hours in a year. For the employer's industry, the average OSHA-recordable incident rate is 8, and the employer's OSHA rate was right on that average. If the incident distribution is random, more than one incident could have occurred in more than one month. For some months in the year, no recordable incidents would have occurred. Statistically, the exposure sample is not large enough to be credible as a measure of the quality of the safety management system in place. [Pg.540]
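The point can be illustrated with a simple Poisson model of the monthly counts (the Poisson assumption is ours, added for illustration): with 8 expected recordables per 200,000 hours, zero-incident months and multi-incident months are both common even though nothing about the underlying system has changed.

    from scipy import stats

    incidents_per_year = 8            # expected recordables for 200,000 hours (from the example)
    mu_month = incidents_per_year / 12.0

    p_zero = stats.poisson.pmf(0, mu_month)
    p_two_plus = 1.0 - stats.poisson.cdf(1, mu_month)

    print(f"expected incidents per month ~ {mu_month:.2f}")
    print(f"P(no incidents in a month)   ~ {p_zero:.2f}")
    print(f"P(2+ incidents in a month)   ~ {p_two_plus:.2f}")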

Indexation is the most commonly used term to describe this inflation correction or cost escalation. The word indexation comes from the method used to define the size of the escalation. The amount of escalation is calculated from the change in price level between two points in time, as laid down in the price-level index that is published periodically by independent organisations such as the Central Bureau of Statistics (CBS). [Pg.1412]
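A minimal sketch of the escalation arithmetic, with invented index values: the corrected cost is the base cost scaled by the ratio of the price-level index at the two dates.

    def escalate(base_cost, index_old, index_new):
        # Scale a cost by the change in a published price-level index between two dates
        return base_cost * index_new / index_old

    # Invented illustrative figures: an estimate of 1,000,000 made when the index stood
    # at 104.2, escalated to a date when the index has risen to 109.5
    print(f"escalated cost ~ {escalate(1_000_000, 104.2, 109.5):,.0f}")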

The most common statistic obtained from animal toxicity testing is the median dose. To have meaning, the median dose needs to be reported in the context of the toxicity test from which it is derived. For example, if the toxicity test was for acute lethality, then the median dose is reported as the 50% lethal dose, or LD50; the species and route of exposure are also specified, e.g., LD50 (oral, rat). If it is a long-term or chronic toxicity test with an endpoint other than death, e.g., liver disease, the median dose is reported as the 50% toxic dose, or TD50 (oral, rat). [Pg.78]
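In practice the median dose is read from a fitted dose-response curve. The sketch below fits a two-parameter log-logistic curve to invented acute-lethality data and reports the dose giving a 50% response; the model choice and all numbers are assumptions for illustration, not values from the text.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented data: dose (mg/kg) and fraction of animals responding
    dose = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])
    frac = np.array([0.0, 0.1, 0.3, 0.6, 0.9, 1.0])

    def log_logistic(d, ld50, slope):
        # Two-parameter log-logistic dose-response curve
        return 1.0 / (1.0 + (ld50 / d) ** slope)

    (ld50, slope), _ = curve_fit(log_logistic, dose, frac, p0=[60.0, 2.0])
    print(f"estimated LD50 ~ {ld50:.0f} mg/kg (fitted slope {slope:.1f})")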

In this design method, the objective is to reduce the variability of the controlled variable y when the set point is constant and the process is subject to unknown, random disturbances. In statistical terms, the objective is to minimize the variance of y. This approach is especially relevant for processes where the disturbances are stochastic (that is, random) rather than deterministic (for example, steps or drifts). Sheet-making processes for producing paper and plastic film or sheets are common examples (Featherstone et al., 2000). [Pg.335]

Equation 11.9 uses the explicit expressions for the A numbers, but with the common n! term removed. Consider the left inequality for the case s = 8 and n = 2: 8(7) < 8² < 9(8), i.e. 56 < 64 < 72. This illustrates that the indistinguishable boltzon result may serve as an approximation to fermion or boson statistics. [Pg.349]

Statistical mechanics provides physical significance to the virial coefficients (18). For the expansion in 1/V, the term B/V arises because of interactions between pairs of molecules (eq. 11), the term C/V² because of three-molecule interactions, etc. Because two-body interactions are much more common than higher-order interactions, truncated forms of the virial expansion are typically used. If no interactions existed, the virial coefficients would be zero and the virial expansion would reduce to the ideal gas law (Z = 1). [Pg.234]
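As a small numerical illustration of the truncated expansion, the sketch below evaluates Z = 1 + B/V + C/V² for assumed virial coefficients; the coefficient values are invented for illustration and are in reality substance- and temperature-dependent.

    def compressibility(v_molar, b, c):
        # Virial expansion in 1/V truncated after the third coefficient: Z = 1 + B/V + C/V^2
        return 1.0 + b / v_molar + c / v_molar ** 2

    # Illustrative coefficients only (cm^3/mol and cm^6/mol^2)
    B, C = -160.0, 9000.0
    for v in (500.0, 1000.0, 5000.0, 22414.0):
        print(f"V = {v:8.0f} cm3/mol  ->  Z = {compressibility(v, B, C):.4f}")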

In this chapter we provide an introductory overview of the implicit solvent models commonly used in biomolecular simulations. A number of questions concerning the formulation and development of implicit solvent models are addressed. In Section II, we begin by providing a rigorous formulation of implicit solvent from statistical mechanics. In addition, the fundamental concept of the potential of mean force (PMF) is introduced. In Section III, a decomposition of the PMF in terms of nonpolar and electrostatic contributions is elaborated. Owing to its importance in biophysics, Section IV is devoted entirely to classical continuum electrostatics. For the sake of completeness, other computational... [Pg.134]

In general, tolerance stack models are based on either worst-case or statistical approaches, including those given in the references above. The worst-case model (see equation 3.1) assumes that each component dimension is at its maximum or minimum limit and that the sum of these equals the assembly tolerance (this model was initially presented in Chapter 2). The tolerance stack equations are given in terms of bilateral tolerances on each component dimension, which is a common format when analysing tolerances in practice. The worst-case model is ... [Pg.113]
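The worst-case sum can be compared directly with the common statistical (root-sum-square) alternative; the component tolerances below are invented for illustration.

    import math

    # Bilateral (+/-) tolerances on the component dimensions in a stack, mm -- invented
    tolerances = [0.05, 0.10, 0.02, 0.08]

    worst_case = sum(tolerances)                               # every part at its limit
    statistical = math.sqrt(sum(t ** 2 for t in tolerances))   # root-sum-square (RSS) model

    print(f"worst-case assembly tolerance: +/- {worst_case:.3f} mm")
    print(f"statistical (RSS) tolerance:   +/- {statistical:.3f} mm")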

Objective: Provide a basis to judge the relative likelihood (probability) and severity of various possible events. Risks can be expressed in qualitative terms (high, medium, low) based on subjective, common-sense evaluations, or in quantitative terms (numerical and statistical calculations). [Pg.275]

