
The F-Test

The F-test is based on the normal distribution and serves to compare either [Pg.69]

- an experimental result in the form of a standard deviation with a fixed limit on the distribution width, or [Pg.69]
- such an experimental result with a second one, in order to detect a difference. [Pg.69]

Both cases are amenable to the same test, the distinction being a matter of the number of degrees of freedom f. The F-test is used in connection with the t-test. (See program TTEST.) [Pg.70]

The critical value F is taken from an F-table or is approximated (cf. Section 5.1.3) if [Pg.70]

The previous section suggests a method for testing the adequacy of the model. [Pg.109]

The test of this hypothesis makes use of the calculated Fisher variance ratio, F. [Pg.109]

An alternative method of obtaining SSf provides the same result and demonstrates the additivity of sums of squares and degrees of freedom. [Pg.110]

At first glance, this ratio might appear to be highly significant. However, the critical value of F at the 95% level of confidence for one degree of freedom in the numerator and one degree of freedom in the denominator is 161 (see Appendix C). Thus, the critical value is not exceeded and the null hypothesis is not rejected. We must retain as adequate the model y₁ᵢ = β₀ + r₁ᵢ. [Pg.111]
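The decision rule in this passage can be sketched in a few lines of Python. The variance values below are hypothetical placeholders, not the book's data; the critical value F(0.95; 1, 1) = 161.4 is taken from a standard F-table:

```python
def f_ratio(var_a: float, var_b: float) -> float:
    """Return the F statistic with the larger variance in the numerator."""
    return max(var_a, var_b) / min(var_a, var_b)

F_CRIT_95_1_1 = 161.4  # tabulated critical value, alpha = 0.05, df = (1, 1)

f_calc = f_ratio(40.0, 2.5)  # a "large-looking" ratio of 16 (hypothetical)
reject_null = f_calc >= F_CRIT_95_1_1
# With one degree of freedom in each estimate, even F = 16 falls far
# short of 161.4, so the model is retained as adequate.
```

The point the excerpt makes is visible here: a seemingly large variance ratio can be insignificant when each variance estimate carries only one degree of freedom.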

In many situations it is appropriate to decide, before a test is made, the risk one is willing to take that the null hypothesis will be disproved when it is actually true. If an experimenter wishes to be wrong no more than one time in twenty, the risk α is set at 0.05 and the test has 95% confidence. The calculated value of t or F is compared to the critical 95% threshold value found in tables; if the calculated value is equal to or greater than the tabular value, the null hypothesis can be rejected with a confidence equal to or greater than 95%. [Pg.111]


It is possible to compare the means of two relatively small sets of observations when the variances within the sets can be regarded as the same, as indicated by the F test. One can consider the distribution involving estimates of the true variance. With s₁² determined from a group of N₁ observations and s₂² from a second group of N₂ observations, the distribution of the ratio of the sample variances is given by the F statistic F = s₁²/s₂². [Pg.204]

The larger variance is placed in the numerator. For example, the F test allows judgment regarding the existence of a significant difference in the precision between two sets of data or between two analysts. The hypothesis assumed is that both variances are indeed alike and a measure of the same a. [Pg.204]
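A minimal pure-Python sketch of this comparison of precisions; the replicate data for the two analysts are hypothetical, and the critical value quoted in the comment is an assumption taken from a standard F-table:

```python
from statistics import variance

def f_test_statistic(sample1, sample2):
    """F statistic for comparing two sample variances.
    The larger variance goes in the numerator, so F >= 1."""
    v1, v2 = variance(sample1), variance(sample2)
    if v1 >= v2:
        return v1 / v2, len(sample1) - 1, len(sample2) - 1
    return v2 / v1, len(sample2) - 1, len(sample1) - 1

# Hypothetical replicate results from two analysts:
analyst_a = [10.1, 10.3, 10.2, 10.4, 10.0]
analyst_b = [10.2, 10.6, 9.9, 10.8, 9.7]
f_calc, df_num, df_den = f_test_statistic(analyst_a, analyst_b)
# Compare f_calc with the tabulated F(0.95; 4, 4) = 6.39 to judge
# whether the two precisions differ significantly.
```

Placing the larger variance in the numerator, as the text prescribes, means only the upper tail of the F-distribution needs to be consulted.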

As applied in Example 12, the F test was one-tailed. The F test may also be applied as a two-tailed test, in which the alternative to the null hypothesis is σ₁ ≠ σ₂. This doubles the probability that the null hypothesis is invalid and has the effect of changing the confidence level, in the above example, from 95% to 90%. [Pg.204]

The absorbance of solutions of food dyes is used to explore the treatment of outliers and the application of the t-test for comparing means. [Pg.98]

Once a significant difference has been demonstrated by an analysis of variance, a modified version of the t-test, known as Fisher's least significant difference, can be used to determine which analyst or analysts are responsible for the difference. The test statistic for comparing the mean values X̄₁ and X̄₂ is the t-test described in Chapter 4, except that s_pool is replaced by the square root of the within-sample variance obtained from the analysis of variance. [Pg.696]
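A sketch of the LSD comparison just described; the group means, group sizes, and within-sample (error) variance below are hypothetical, not taken from the text:

```python
from math import sqrt

def lsd_t_statistic(mean1, n1, mean2, n2, ms_within):
    """t statistic for Fisher's least significant difference:
    the usual two-sample t-test, but with s_pool replaced by the
    square root of the within-sample variance from the ANOVA."""
    return (mean1 - mean2) / (sqrt(ms_within) * sqrt(1 / n1 + 1 / n2))

# Hypothetical means for two analysts and a hypothetical ANOVA error variance:
t_calc = lsd_t_statistic(10.50, 5, 10.10, 5, ms_within=0.040)
# Compare |t_calc| with the two-tailed critical t for the ANOVA's
# within-sample degrees of freedom.
```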

Fisher's least significant difference: a modified form of the t-test for comparing several sets of data. (p. 696) flame ionization detector: a nearly universal GC detector in which the solutes are combusted in an H2/air flame, producing a measurable current. (p. 570) [Pg.772]

It should be stressed that there must not be a significant difference between the precisions of the methods. Hence the F-test (Section 4.12) is applied prior to using the t-test in equation (5). [Pg.141]

The F-test must be applied to establish that there is no significant difference between the precisions of the two methods. [Pg.141]

This lack of sharpness of the one-way F-test on REVs is sometimes seen when there is information spanned by some eigenvectors that is at or below the level of the noise spanned by those eigenvectors. Our data sets are a good example of such data. Here we have a four-component system that contains some nonlinearities. This means that, to span the information in our data, we should expect to need at least 4 eigenvectors: one for each of the components, plus at least one additional eigenvector to span the additional variance in the data caused by the nonlinearity. But the F-test on the reduced eigenvalues only... [Pg.114]

So, cross-validation and PRESS both indicate that we should use 5 factors for our calibrations. This indication is sufficiently consistent with the F-test on the REVs and with our "eyeball" inspection of the EVs and REVs themselves. It can also be worthwhile to look at the eigenvectors themselves. [Pg.117]

In Figure 58 the fifth factors appear quite noisy. Nonetheless, we can imagine that there are still some systematic features in these factors. The fact that these apparent features are not much stronger than the noise is consistent with the results of the F-tests on the REVs. It can be dangerous to decide... [Pg.118]

Applying the F-test it was impossible to discriminate between the two models at a 95% confidence level. [Pg.164]
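Model discrimination of this kind is commonly carried out with an extra-sum-of-squares F-test for nested models. The following sketch uses hypothetical residual sums of squares and degrees of freedom, not values from the text:

```python
def extra_ss_f(sse_reduced, df_reduced, sse_full, df_full):
    """Extra-sum-of-squares F statistic for two nested models.
    df_* are the residual degrees of freedom of each fit."""
    numerator = (sse_reduced - sse_full) / (df_reduced - df_full)
    denominator = sse_full / df_full
    return numerator / denominator

# Hypothetical residual sums of squares for two nested kinetic models:
f_calc = extra_ss_f(sse_reduced=12.0, df_reduced=18, sse_full=10.0, df_full=16)
# If f_calc < F(0.95; 2, 16) = 3.63 (tabulated), the extra parameters of
# the larger model are not justified at the 95% confidence level.
```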

Alternatively, the experimental error can be given a particular value for each reaction of the series, or for each temperature, based on statistical evaluation of the respective kinetic experiment. The rate constants are then taken with different weights in further calculations (205,206). Although this procedure seems to be more exact and more profoundly based, it cannot be quite generally recommended. It should first be statistically proven by the F test (204) that the standard errors in fact differ; because of the small number of measurements, this can seldom be done at a significant level. In addition, all reactions of the series are a priori of the same importance, and it is always a... [Pg.431]

Of course, S₀₀ ≤ S₀; if the difference is significant, the hypothesis of a common point of intersection is to be rejected. Quite rigorously, the F test must not be used to judge this significance, but a semiquantitative comparison may be sufficient when the estimated experimental error δ is taken into consideration. We can then decide whether the Arrhenius law holds within experimental error by comparing S₀₀/(ml − 2l) with δ, and whether the isokinetic relationship holds by comparing S₀/(ml − l − 2) with δ. [Pg.441]

Does the found standard deviation, s_x, correspond to expectations? The expected value E(s_x) is σ (Greek sigma), again either a theoretical value or an experimental average. This question is answered by the F-test explained in Section 1.7.1. Proving s_x to be different from σ is not easily accomplished, especially if n is small. [Pg.27]

Obviously, the t-test also involves its own set of null and alternative hypotheses; these are also designated H₀ and H₁, but must not be confused with the hypotheses associated with the F-test. [Pg.47]

For standard deviations, an analogous confidence interval CI(s_x) can be derived via the F-test. In contrast to CI(x̄), CI(s_x) is not symmetrical around the most probable value because s_x by definition can only be positive. The concept is as follows: an upper limit on σ is sought that has the quality of a very precise measurement, that is, its uncertainty must be very small and therefore its number of degrees of freedom f must be very large. The same logic applies to the lower limit. [Pg.72]
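The text derives CI(s_x) via the F-test; an equivalent and more familiar route uses χ² quantiles, which is sketched below. The sample values are hypothetical, and the quantiles are taken from a standard χ² table:

```python
from math import sqrt

def sd_confidence_interval(s, n, chi2_lo, chi2_hi):
    """Confidence interval for sigma from a sample standard deviation s
    based on n observations. chi2_lo and chi2_hi are the lower and upper
    chi-square quantiles (e.g. 0.025 and 0.975) for f = n - 1 degrees of
    freedom, taken from a table."""
    f = n - 1
    return sqrt(f * s**2 / chi2_hi), sqrt(f * s**2 / chi2_lo)

# Tabulated quantiles for f = 9: chi2(0.025; 9) = 2.70, chi2(0.975; 9) = 19.02
lo, hi = sd_confidence_interval(s=1.0, n=10, chi2_lo=2.70, chi2_hi=19.02)
# The interval (lo, hi) is asymmetric around s, as the text notes.
```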

ANOVA: if the standard deviations are indistinguishable, an ANOVA test can be carried out (simple ANOVA, one-parameter additivity model) to detect the presence of significant differences in the data set means. The interpretation of the F-test is given (the critical F-value for p = 0.05, one-sided test, is calculated using the algorithm from Section 5.1.3). [Pg.377]
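A from-first-principles sketch of the one-way ANOVA F statistic described here; the three data sets are hypothetical:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    all_values = [x for g in groups for x in g]
    grand_mean = mean(all_values)
    k, n = len(groups), len(all_values)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within, k - 1, n - k

# Three hypothetical data sets with similar spreads but shifted means:
f_calc, df1, df2 = one_way_anova_f(
    [[5.0, 5.2, 5.1], [5.5, 5.7, 5.6], [5.0, 5.1, 5.2]])
# Compare f_calc with the tabulated F(0.95; 2, 6) = 5.14.
```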

Ludden TM, Beal SL, Sheiner LB. Comparison of the Akaike Information Criterion, the Schwarz criterion and the F test as guides to model selection. J Pharmacokinet Biopharm 1994;22:431-45. [Pg.525]

The F-test indicates the dependence of the dependent variables on the independent variables; the P level indicates the statistical significance of the correlation (Table 4). The F-test results for the relation of the amount of ortho methylol phenols with the F/P molar ratio and the reaction temperature were low; however, for the OH/P wt%, the F-test result was very significant, indicating a clear dependence of ortho methylol phenols on the OH/P wt%. It can also be seen that the P level values for the relation between the amount of ortho methylol phenols and both the F/P molar ratio and the reaction temperature are above the set P value of 0.05, while for the OH/P wt%, the P value is under the set value. These data indicate that the relation of the dependent variable ortho methylol phenols with the independent variable OH/P wt% is statistically significant at the 0.05 significance level, while the relations of the dependent variable ortho methylol phenols with the F/P molar ratio and the reaction temperature are not statistically significant. [Pg.871]

The significance of the overall regression can be tested by means of the F-test. The quantity... [Pg.550]
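The quantity itself is cut off in this excerpt; for a simple linear fit the usual overall-regression F is F = (SS_reg / 1) / (SS_res / (n − 2)). A pure-Python sketch with hypothetical calibration data:

```python
from statistics import mean

def regression_f(x, y):
    """Overall-significance F for a simple linear fit y = a + b*x:
    F = (SS_regression / 1) / (SS_residual / (n - 2))."""
    n = len(x)
    xm, ym = mean(x), mean(y)
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ym - b * xm
    ss_reg = sum((a + b * xi - ym) ** 2 for xi in x)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return (ss_reg / 1) / (ss_res / (n - 2))

# Hypothetical calibration data (5 points, strongly linear):
f_calc = regression_f([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.0, 9.9])
```

A strongly linear data set gives an F far above any tabulated critical value, so the overall regression is judged significant.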

Although all the underlying assumptions (local linearity, statistical independence, etc.) are rarely satisfied, Bartlett's χ²-test procedure has been found adequate in both simulated and experimental applications (Dumez et al., 1977; Froment, 1975). However, it should be emphasized that only the χ²-test and the F-test are true model adequacy tests. Consequently, they may eliminate all rival models if none of them is truly adequate. On the other hand, Bartlett's χ²-test does not guarantee that the retained model is truly adequate. It simply suggests that it is the best one among a set of inadequate models. [Pg.195]

A central concept of statistical analysis is variance,105 which is simply the average squared deviation from the mean, or the square of the standard deviation. Since the analyst can only take a limited number n of samples, the variance is estimated as the sum of squared deviations from the mean divided by n - 1. Analysis of variance asks the question whether groups of samples are drawn from the same overall population or from different populations.105 The simplest example of analysis of variance is the F-test (and the closely related t-test), in which one takes the ratio of two variances and compares the result with tabular values to decide whether it is probable that the two samples came from the same population. Linear regression is also a form of analysis of variance, since one is asking the question whether the variance around the mean is equivalent to the variance around the least-squares fit. [Pg.34]
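The n − 1 divisor mentioned above is exactly what Python's statistics.variance implements, as a quick check with a small hypothetical data set shows:

```python
from statistics import pvariance, variance

data = [4.0, 6.0, 5.0, 7.0, 3.0]  # hypothetical measurements, mean = 5.0
s2 = variance(data)   # sum of squared deviations / (n - 1): the estimate
p2 = pvariance(data)  # divides by n: the population definition
# Here the squared deviations sum to 10, so s2 = 10/4 and p2 = 10/5.
```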

The p-value for the t-test can be found in the Probt variable in the pvalue data set. [Pg.255]

The t-test in this form can only be applied under the condition that the variances of the two sample subsets, s₁² and s₂², do not differ significantly. This has to be checked by the F-test beforehand. The test statistic t has to be compared to the related quantile of the t-distribution, t₁₋α;ν, where ν = n₁ + n₂ − 2. [Pg.109]
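A sketch of the pooled t statistic with the quoted degrees of freedom ν = n₁ + n₂ − 2; the sample values are hypothetical, and the prior F-test on the variances is assumed to have passed:

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(sample1, sample2):
    """Two-sample t statistic with pooled variance; valid only after an
    F-test has shown s1^2 and s2^2 to be indistinguishable."""
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = variance(sample1), variance(sample2)
    s_pool2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (mean(sample1) - mean(sample2)) / sqrt(s_pool2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical results from two methods:
t_calc, dof = pooled_t([10.1, 10.3, 10.2], [10.5, 10.7, 10.6])
# Compare |t_calc| with the tabulated t(1 - alpha; dof).
```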

The reference value is outside the (upper) confidence limit and therefore the result is classified as incorrect. This can also be shown by the t-test. [Pg.210]

