Big Chemical Encyclopedia


Significance test

Having introduced the normal distribution and discussed its basic properties, we can move on to the common statistical tests for comparing sets of data. These methods and the calculations performed are referred to as significance tests. An important feature and use of the normal distribution function is that it enables areas under the curve, within any specified range, to be accurately calculated. The function in Equation (1) is integrated numerically and the results presented in statistical tables as areas under the normal curve. From these tables, approximately 68% of observations can be expected to lie in the region bounded by one standard deviation from the mean (μ ± σ), 95% within μ ± 2σ, and more than 99% within μ ± 3σ. [Pg.6]
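These tabulated areas can also be computed directly. A minimal sketch, using the standard identity that the area of a normal distribution within k standard deviations of the mean equals erf(k/√2) (the function and values below are illustrative, not taken from the text's tables):

```python
import math

def area_within(k: float) -> float:
    """Fraction of a normal distribution lying within k standard
    deviations of the mean: P(mu - k*sigma < x < mu + k*sigma)."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {area_within(k):.4f}")
    # prints 0.6827, 0.9545, 0.9973 -- the 68%, 95%, 99% figures above
```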

We can return to the data presented in Table 1 for the analysis of the mineral water. If the parent population parameters, σ and μ₀, are known to be 0.82 mg kg⁻¹ and 10.8 mg kg⁻¹ respectively, can we answer the question of whether the analytical results given in Table 1 are likely to have come from a water sample with a mean sodium level similar to that providing the parent data? In statistical terminology, we wish to test the null hypothesis that the means of the sample and the suggested parent population are similar. This is generally written as H₀: μ = μ₀. [Pg.6]

The test statistic for such an analysis is denoted by z and is given by

z = (x̄ − μ₀)/(σ/√n)   (8)

[Pg.7]

x̄ is 11.04 mg kg⁻¹ as determined above; substituting this, together with the values for μ₀ and σ, into Equation (8) gives the test statistic. [Pg.7]
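The substitution into Equation (8) can be sketched as follows. The sample size n = 5 is an assumption for illustration (the text does not restate the number of replicates here):

```python
import math

def z_statistic(xbar: float, mu0: float, sigma: float, n: int) -> float:
    """Equation (8): z = (xbar - mu0) / (sigma / sqrt(n))."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Values from the mineral-water example; n = 5 is an assumed replicate count.
z = z_statistic(xbar=11.04, mu0=10.8, sigma=0.82, n=5)
# For a two-tailed test at the 95% level, compare |z| with 1.96.
print(round(z, 3))
```

Here |z| is well below 1.96, so under these assumed values the null hypothesis would be retained.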

In the above example it was assumed that the mean value and standard deviation of the sodium concentration in the parent sample were known. In practice this is rarely possible, as all the mineral water from the source would not have been analysed; the best that can be achieved is to obtain recorded estimates of μ and σ from repetitive sampling. Both the recorded mean value and the standard deviation will undoubtedly vary, and there will be a degree of uncertainty in the precise shape of the parent normal distribution curve. This uncertainty, arising from the use of sampled data, can be compensated for by using a probability distribution with a wider spread than the normal curve. The most common such distribution used in practice is Student's t-distribution. The t-distribution curve is of a similar form to the normal function, and as the number of samples selected and analysed increases the two functions become increasingly similar. Using the t-distribution, the well-known t-test can be performed to establish the likelihood that a given sample is a member of a specified parent population. [Pg.7]

Returning to the data presented in Table 1.1 for the analysis of the mineral water: if the parent population parameters, σ and μ₀, are 0.82 and 10.8 mg kg⁻¹... [Pg.6]

Assuming the samples were randomly collected, the t-statistic is computed from [Pg.8]
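A one-sample t calculation can be sketched as below. Unlike the z-test, the standard deviation s is estimated from the data themselves; the data values shown are hypothetical (chosen so that their mean matches the 11.04 mg kg⁻¹ quoted earlier), not the actual Table 1 results:

```python
import math
import statistics

def t_statistic(data: list[float], mu0: float) -> float:
    """One-sample t: (xbar - mu0) / (s / sqrt(n)),
    with s the sample standard deviation (n - 1 denominator)."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical sodium results (mg kg^-1); mean = 11.04
data = [10.9, 11.3, 10.7, 11.2, 11.1]
t = t_statistic(data, mu0=10.8)
# Compare |t| with the tabulated t(0.05, n - 1); for n = 5, t(0.05, 4) = 2.776.
```

Since |t| ≈ 2.23 is below 2.776, the null hypothesis would be retained for these illustrative data.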

One of the most important properties of an analytical method is that it should be free from systematic error. This means that the value which it gives for the amount of the analyte should be the true value. This property of an analytical method may be tested by applying the method to a standard test portion containing a known amount of analyte (Chapter 1). However, as we saw in the last chapter, even if there were no systematic error, random errors make it most unlikely that the measured amount would exactly equal the standard amount. In order to decide whether the difference between the measured and standard amounts can be accounted for by random error, a statistical test known as a significance test can be employed. As its name implies, this approach tests whether the difference between the two results is significant, or whether it can be accounted for merely by random variations. Significance tests are widely used in the evaluation of experimental results. This chapter considers several tests which are particularly useful to analytical chemists. [Pg.39]


Analytical chemists make a distinction between error and uncertainty. Error is the difference between a single measurement or result and its true value. In other words, error is a measure of bias. As discussed earlier, error can be divided into determinate and indeterminate sources. Although we can correct for determinate error, the indeterminate portion of the error remains. Statistical significance testing, which is discussed later in this chapter, provides a way to determine whether a bias resulting from determinate error might be present. [Pg.64]

Next, an equation for a test statistic is written, and the test statistic's critical value is found from an appropriate table. This critical value defines the breakpoint between values of the test statistic for which the null hypothesis will be retained or rejected. The test statistic is calculated from the data, compared with the critical value, and the null hypothesis is either rejected or retained. Finally, the result of the significance test is used to answer the original question. [Pg.83]

Null hypothesis (H₀): a statement that the difference between two values can be explained by indeterminate error; it is retained if the significance test does not reject it. [Pg.83]

Examples of (a) two-tailed and (b, c) one-tailed significance tests. The shaded areas in each curve represent the values for which the null hypothesis is rejected. [Pg.84]

Two-tailed significance test: significance test in which the null hypothesis is rejected for values at either end of the normal distribution. [Pg.84]

If the significance test is conducted at the 95% confidence level (α = 0.05), then the null hypothesis will be retained if a 95% confidence interval around x̄ contains μ. If the alternative hypothesis is... [Pg.84]

Since significance tests are based on probabilities, their interpretation is naturally subject to error. As we have already seen, significance tests are carried out at a significance level, α, that defines the probability of rejecting a null hypothesis that is true. For example, when a significance test is conducted at α = 0.05, there is a 5% probability that the null hypothesis will be incorrectly rejected. This is known as a type 1 error, and its risk is always equivalent to α. Type 1 errors in two-tailed and one-tailed significance tests are represented by the shaded areas under the probability distribution curves in Figure 4.10. [Pg.84]

The probability of a type 1 error is inversely related to the probability of a type 2 error. Minimizing a type 1 error by decreasing α, for example, increases the likelihood of a type 2 error. The value of α chosen for a particular significance test therefore represents a compromise between these two types of error. Most of the examples in this text use a 95% confidence level (α = 0.05), since this is the most frequently used confidence level for the majority of analytical work. It is not unusual, however, for more stringent (e.g. α = 0.01) or more lenient (e.g. α = 0.10) confidence levels to be used. [Pg.85]
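The meaning of a type 1 error rate can be illustrated by simulation: if samples are drawn from the null population itself and tested repeatedly, the fraction of (incorrect) rejections should approach α. This is a Monte Carlo sketch; the parameter values echo the mineral-water example, and n = 5 is an assumption:

```python
import random
import statistics

# Draw samples from the null population and count how often a two-tailed
# z-test (z_crit = 1.96, i.e. alpha = 0.05) incorrectly rejects H0.
random.seed(1)
mu0, sigma, n, z_crit = 10.8, 0.82, 5, 1.96
trials = 20_000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    z = (statistics.mean(sample) - mu0) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        rejections += 1
rate = rejections / trials
print(rate)  # close to alpha = 0.05
```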

The most commonly encountered probability distribution is the normal, or Gaussian, distribution. A normal distribution is characterized by a true mean, μ, and variance, σ², which are estimated using x̄ and s². Since the area between any two limits of a normal distribution is well defined, the construction and evaluation of significance tests are straightforward. [Pg.85]

A typical application of this significance test, which is known as a t-test of x̄ to μ, is outlined in the following example. [Pg.85]

Relationship between confidence intervals and results of a significance test. (a) The shaded area under the normal distribution curves shows the apparent confidence intervals for the sample based on texp. The solid bars in (b) and (c) show the actual confidence intervals that can be explained by indeterminate error using the critical value of t(α, ν). In part (b) the null hypothesis is rejected and the alternative hypothesis is accepted. In part (c) the null hypothesis is retained. [Pg.85]

Since there is no reason to believe that x̄ must be either larger or smaller than μ, the use of a two-tailed significance test is appropriate. The null and alternative hypotheses are... [Pg.86]

The variance for the sample of ten tablets is 4.3. A two-tailed significance test is used since the measurement process is considered out of statistical control if the sample's variance is either too good or too poor. The null and alternative hypotheses are... [Pg.87]

Significance testing for comparing two mean values is divided into two categories depending on the source of the data. Data are said to be unpaired when each mean is derived from the analysis of several samples drawn from the same source. Paired data are encountered when analyzing a series of samples drawn from different sources. [Pg.88]

The value of texp is compared with a critical value, t(α, ν), as determined by the chosen significance level, α, the degrees of freedom for the sample, ν, and whether the significance test is one-tailed or two-tailed. [Pg.89]

The value of texp is then compared with a critical value, t(α, ν), which is determined by the chosen significance level, α, the degrees of freedom for the sample, ν, and whether the significance test is one-tailed or two-tailed. For paired data, the degrees of freedom is n − 1. If texp is greater than t(α, ν), then the null hypothesis is rejected and the alternative hypothesis is accepted. If texp is less than or equal to t(α, ν), then the null hypothesis is retained, and a significant difference has not been demonstrated at the stated significance level. This is known as the paired t-test. [Pg.92]
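The paired t-test works on the per-sample differences. A minimal sketch, using hypothetical results for two methods applied to the same six samples (the data and the tabulated critical value t(0.05, 5) = 2.571 for a two-tailed test are illustrative assumptions):

```python
import math
import statistics

def paired_t(x: list[float], y: list[float]) -> float:
    """Paired t: dbar / (s_d / sqrt(n)), where d[i] = x[i] - y[i]."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Hypothetical paired results from two methods on the same six samples
method_a = [22.6, 31.1, 29.8, 45.0, 37.2, 28.4]
method_b = [21.9, 30.5, 30.2, 44.1, 36.5, 27.8]
t_exp = paired_t(method_a, method_b)
# Two-tailed critical value for n - 1 = 5 degrees of freedom at alpha = 0.05
t_crit = 2.571
reject_h0 = abs(t_exp) > t_crit
```

For these illustrative data t_exp exceeds t_crit, so the null hypothesis of no difference between the methods would be rejected.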

On occasion, a data set appears to be skewed by the presence of one or more data points that are not consistent with the remaining data points. Such values are called outliers. The most commonly used significance test for identifying outliers is Dixon's Q-test. The null hypothesis is that the apparent outlier is taken from the same population as the remaining data. The alternative hypothesis is that the outlier comes from a different population and, therefore, should be excluded from consideration. [Pg.93]
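Dixon's Q-test compares the gap between the suspect value and its nearest neighbour to the overall range of the data. A sketch of the basic (r10) form is below; the critical values are the commonly tabulated 95%-confidence entries for small n, and the data set is hypothetical:

```python
def dixon_q(data: list[float]) -> float:
    """Q = gap / range for the more extreme of the two end values."""
    s = sorted(data)
    rng = s[-1] - s[0]
    q_low = (s[1] - s[0]) / rng     # test the smallest value
    q_high = (s[-1] - s[-2]) / rng  # test the largest value
    return max(q_low, q_high)

# Commonly tabulated critical Q values at 95% confidence for n = 3..7
Q_CRIT = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568}

data = [10.9, 11.3, 10.7, 11.2, 12.9]  # hypothetical; 12.9 is the suspect point
q = dixon_q(data)
is_outlier = q > Q_CRIT[len(data)]
```

Here Q ≈ 0.727 exceeds the n = 5 critical value of 0.710, so the suspect point would be rejected as an outlier at the 95% confidence level.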

Significance tests, however, also are subject to type 2 errors in which the null hypothesis is falsely retained. Consider, for example, the situation shown in Figure 4.12b, where S is exactly equal to (S_A)_DL. In this case the probability of a type 2 error is 50%, since half of the signals arising from the sample's population fall below the detection limit. Thus, there is only a 50:50 probability that an analyte at the IUPAC detection limit will be detected. As defined, the IUPAC detection limit only indicates the smallest signal for which we can say, at a significance level of α, that an analyte is present in the sample. Failing to detect the analyte, however, does not imply that it is not present. [Pg.95]

This value of texp is compared with the critical value t(α, ν), where the significance level is the same as that used in the ANOVA calculation, and the degrees of freedom is the same as that for the within-sample variance. Because we are interested in whether the larger of the two means is significantly greater than the other mean, the value of t(α, ν) is that for a one-tailed significance test. [Pg.697]

Ohm's law: the statement that the current moving through a circuit is proportional to the applied potential and inversely proportional to the circuit's resistance (E = iR). (p. 463) on-column injection: the direct injection of thermally unstable samples onto a capillary column. (p. 568) one-tailed significance test: significance test in which the null hypothesis is rejected for values at only one end of the normal distribution. (p. 84)... [Pg.776]




