In the example, the test statistic value is 3.38, and Figure 4.3 illustrates the calculation of the p-value, which for these data turns out to be p = 0.336. This is a non-significant result. [Pg.74]
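The excerpt does not say which test produced this statistic, so the following is only an illustrative sketch: one assumed combination that yields a p-value of roughly this size is a chi-squared statistic of 3.38 on 3 degrees of freedom, with the p-value taken as the upper-tail area beyond the statistic.

```python
from scipy import stats

# Assumption for illustration only: a chi-squared test with 3 degrees of
# freedom; the source does not state the actual test behind p = 0.336.
test_statistic = 3.38
df = 3
p_value = stats.chi2.sf(test_statistic, df)  # upper-tail area beyond 3.38
print(round(p_value, 3))
```

Because p is well above 0.05, the result is declared non-significant, matching the text.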

Figure 6.7 shows the influence of the size of the experimental effect. If the mean clearances differ only to a very small extent (as in the two lower cases in Figure 6.7), then the 95 per cent confidence interval will probably overlap zero, giving a non-significant result. However, with a large effect (as in the two upper cases), the confidence interval is displaced well away from zero and will not overlap it. [Pg.77]
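The link between effect size and whether the confidence interval overlaps zero can be sketched numerically. The data below are invented for illustration (the excerpt's clearance values are not given); the interval is a standard Welch confidence interval for a difference in means.

```python
import numpy as np
from scipy import stats

def welch_ci(a, b, conf=0.95):
    """Confidence interval for the difference in means (Welch's t)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1), b.var(ddof=1)
    se = np.sqrt(va / na + vb / nb)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    d = a.mean() - b.mean()
    t = stats.t.ppf(0.5 + conf / 2, df)
    return d - t * se, d + t * se

# Hypothetical clearance data (invented numbers, for illustration only)
control      = [5.0, 5.1, 4.9, 5.2, 4.8, 5.0]
small_effect = [5.1, 4.9, 5.2, 5.0, 4.8, 5.3]   # barely differs from control
large_effect = [7.1, 6.9, 7.2, 7.0, 6.8, 7.3]   # clearly differs

lo1, hi1 = welch_ci(small_effect, control)  # straddles zero: non-significant
lo2, hi2 = welch_ci(large_effect, control)  # displaced from zero: significant
print((lo1, hi1), (lo2, hi2))
```

With the tiny effect the interval runs from below zero to above it, so the difference is non-significant; with the large effect the whole interval sits above zero.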

A common misapplication of statistics is to test for equivalence by performing a test for difference, obtaining a non-significant result and then claiming that, because... [Pg.110]

Demonstrating equivalence It is impossible to demonstrate that there is absolutely no difference between two procedures or treatments, but we may be able to show that there is no difference large enough to matter ("equivalence testing"). We need to demonstrate that the whole of the 95 per cent CI for the size of any difference lies within the equivalence zone. A non-significant result arising from a test for difference is not an adequate demonstration of equivalence. [Pg.116]
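The rule in the excerpt reduces to a simple check: the whole confidence interval must fall inside the equivalence zone. A minimal sketch, with a hypothetical margin of ±0.5 (the excerpt gives no numbers):

```python
def demonstrates_equivalence(ci_low, ci_high, margin):
    """Equivalence is shown only when the WHOLE CI sits inside the zone
    (-margin, +margin); a non-significant difference test is not enough."""
    return -margin < ci_low and ci_high < margin

# Hypothetical 95% CIs for a treatment difference; the 0.5 margin is assumed
print(demonstrates_equivalence(-0.3, 0.2, 0.5))  # whole CI inside the zone
print(demonstrates_equivalence(-0.8, 0.4, 0.5))  # CI spans zero (so the
                                                 # difference test would be
                                                 # non-significant), yet the
                                                 # CI leaves the zone: NOT
                                                 # demonstrated equivalent
```

The second case is exactly the misapplication warned about earlier: non-significance alone does not establish equivalence.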

This obviously raises the possibility of abuse. If we initially performed the experiment intending to carry out a two-sided test and obtained a non-significant result, we might then be tempted to change to the one-sided test in order to force it to be significant. [Pg.123]
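The temptation described above rests on a simple arithmetic fact: for a symmetric test statistic, the one-sided p-value is half the two-sided one. A sketch with a hypothetical borderline result (t = 1.80 on 24 degrees of freedom, numbers invented):

```python
from scipy import stats

# Hypothetical borderline result: t = 1.80 on 24 degrees of freedom
t_stat, df = 1.80, 24
p_two = 2 * stats.t.sf(abs(t_stat), df)  # two-sided: both tails counted
p_one = stats.t.sf(t_stat, df)           # one-sided: upper tail only

# Halving the p-value after seeing the data "converts" a non-significant
# two-sided result into a significant one-sided one: exactly the abuse
# the text warns about. The sidedness must be fixed before the experiment.
print(p_two, p_one)
```

Here the two-sided p exceeds 0.05 while the one-sided p falls below it, which is why switching after the fact is illegitimate.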

Care is needed in deciding what practical action should be taken on the basis of this result. Remember that a non-significant result does not preclude a difference. Leaflet B has quite a strong lead over its competitors, and our experiment may simply have inadequate power to detect a genuine superiority. This is another demonstration of how frustrating this type of data can be: 90 patients recruited and interviewed, and we still are not sure if it matters which leaflet we use... [Pg.207]
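The power problem described here can be made concrete by simulation. The success rates and per-arm sample sizes below are assumptions (the excerpt does not give the leaflet study's actual rates or design); the sketch estimates the power of a two-proportion z-test by Monte Carlo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(p1, p2, n, sims=2000, alpha=0.05):
    """Monte Carlo power of a two-proportion z-test with n patients per arm.
    A sketch only: rates p1, p2 and the design are invented assumptions."""
    hits = 0
    for _ in range(sims):
        x1, x2 = rng.binomial(n, p1), rng.binomial(n, p2)
        pool = (x1 + x2) / (2 * n)
        se = np.sqrt(pool * (1 - pool) * 2 / n)
        if se == 0:
            continue  # degenerate draw: all successes or all failures
        z = (x1 / n - x2 / n) / se
        hits += 2 * stats.norm.sf(abs(z)) < alpha
    return hits / sims

# Assumed rates of 70% vs 50%: with ~30 patients per leaflet the study
# usually misses a genuine difference; with 300 per arm it rarely does.
print(simulated_power(0.7, 0.5, 30))
print(simulated_power(0.7, 0.5, 300))
```

With around 30 patients per group the power is well under a half, so a non-significant result says little; this is the sense in which the 90-patient study may simply be underpowered.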

Wonder of wonders! Data that were non-significant are now revealed as significant (P = 0.034). It is usually at about this point that the cynics cry "cheat!" How dare we use this statistical fiddle to convert non-significant results into significant ones? Essentially, we need have no qualms about this approach. It is entirely respectable, and is definitely superior to the analysis of the original data, because the transformed data are much closer to a normal distribution. The only caveat would be that, if we are... [Pg.226]
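The transformation being defended here is a log transform of skewed data before a t-test. A sketch with invented lognormal "toxin" data (the excerpt's actual values are not given) shows why it is respectable: taking logs removes the skew, so the t-test's normality assumption is far better met.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical right-skewed "toxin" measurements (lognormal, invented)
group_a = rng.lognormal(mean=0.0, sigma=1.0, size=50)
group_b = rng.lognormal(mean=0.9, sigma=1.0, size=50)

p_raw = stats.ttest_ind(group_a, group_b).pvalue                    # skewed
p_log = stats.ttest_ind(np.log(group_a), np.log(group_b)).pvalue    # near-normal

# Logging removes the skew: the transformed samples are close to normal,
# so the t-test regains its validity and power.
print("skew raw:", stats.skew(group_a), "skew logged:", stats.skew(np.log(group_a)))
print("p raw:", p_raw, "p logged:", p_log)
```

The point is not that the transform manufactures significance, but that the analysis on the transformed scale is the more trustworthy one.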

However, if the data are severely non-normal we can lose a huge amount of power by using a parametric test. We saw this loss of power when we obtained a non-significant result by applying a t-test to the untransformed (and highly skewed) toxin data. [Pg.232]
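The usual alternative to transforming is a rank-based test, which makes no normality assumption at all. A sketch with invented skewed samples, comparing the parametric t-test with the Mann-Whitney U test (the excerpt's toxin data are not available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical highly skewed samples (lognormal, invented for illustration)
a = rng.lognormal(0.0, 1.5, 25)
b = rng.lognormal(1.0, 1.5, 25)

p_t = stats.ttest_ind(a, b).pvalue     # parametric: assumes normality,
                                       # so its power suffers under skew
p_u = stats.mannwhitneyu(a, b).pvalue  # rank-based: no normality assumption
print("t-test p:", p_t, "Mann-Whitney p:", p_u)
```

On heavily skewed data the rank test typically retains more power than the t-test applied to the raw values, which is the loss the excerpt describes.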

We now consider the actual width of these limits (at the 95%, 99.9%, or whatever level we have chosen). They fix the accuracy with which we have determined the mean: if the limits are narrow, our determination is accurate; if they are wide, it is not very accurate. These considerations should indicate whether the non-significant result we obtained was due to the real equality of the mean with the expected value or to the inadequacy of the data. Here, where the mean has a 95% chance of lying anywhere between 0.18 and −2.38, it would seem that our determination is rather inaccurate... [Pg.29]
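The width check described above is easy to compute. The values below are invented (the excerpt's raw determinations are not given); the sketch builds a one-sample t interval and reports its width, a wide interval signalling an imprecise estimate.

```python
import numpy as np
from scipy import stats

def mean_ci(x, conf=0.95):
    """Confidence interval for a single mean (one-sample t interval)."""
    x = np.asarray(x, float)
    n = len(x)
    se = x.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.5 + conf / 2, n - 1)
    return x.mean() - t * se, x.mean() + t * se

# Invented determinations: few, scattered values give a wide interval
lo, hi = mean_ci([-1.8, 0.4, -2.5, -1.1, 0.6])
print(lo, hi, "width:", hi - lo)  # a wide interval = an imprecise estimate
```

An interval several units wide, like this one, cannot distinguish "the mean really equals the expected value" from "the data were simply inadequate", which is the excerpt's point.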

Comparison of reflex responses between the two sides of the body was done as shown in Fig. 2. Non-significant results were found for all pairs of comparisons at all tapping angles and for the Jendrassik maneuver. Such observations agree with the clinical fact that reflexes must be symmetric in... [Pg.198]


© 2019 chempedia.info