What else could possibly go wrong

In the last two chapters we established that, in a situation where there is actually no experimental effect, we will correctly declare that the evidence is unconvincing 95 per cent of the time. This leaves us with a small (5 per cent) proportion of cases where the sample misleads us and we declare the evidence significant. We referred to these cases as false positives or type I errors. In this chapter we consider a new and quite different type of error. [Pg.90]

One of the factors that feeds into the calculation of a two-sample t-test is the sample size. If we investigate a case where there is a real difference, but use too small a sample size, this may widen the 95 per cent confidence interval to the point where it overlaps zero. In that case, the results would be declared non-significant. This is a different kind of error. We are now failing to detect an effect that actually is present. This is a false negative or type II error. [Pg.90]
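A small simulation can make this concrete. The sketch below is only illustrative and assumes Python with numpy and scipy (neither is mentioned in the text); the group sizes, means and standard deviation are invented for the example. It draws two small samples from populations whose means genuinely differ and runs a two-sample t-test, which with so few observations will often return p ≥ 0.05, i.e. a type II error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Two populations that genuinely differ: means 10.0 and 11.0, SD 2.0.
# With only n = 5 per group, the test frequently misses this difference.
group_a = rng.normal(loc=10.0, scale=2.0, size=5)
group_b = rng.normal(loc=11.0, scale=2.0, size=5)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("Declared non-significant despite a real difference: a type II error.")
```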

Type II error: failure to detect a difference that genuinely is present. [Pg.90]

If a real difference of a stated size is present, then beta defines the risk that we might fail to detect it. [Pg.90]
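Beta can be estimated by repeating such an experiment many times and counting how often the real difference goes undetected. The following sketch (again assuming Python with numpy and scipy; the sample size, true difference, standard deviation and number of repetitions are illustrative choices, not values from the text) does exactly that.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)

n_per_group = 5          # deliberately small sample size
true_difference = 1.0    # real difference between the population means
sd = 2.0
n_trials = 10_000

misses = 0
for _ in range(n_trials):
    a = rng.normal(10.0, sd, n_per_group)
    b = rng.normal(10.0 + true_difference, sd, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p >= 0.05:        # failed to detect the real difference
        misses += 1

beta = misses / n_trials
print(f"Estimated beta (type II error risk): {beta:.2f}")
print(f"Estimated power (1 - beta): {1 - beta:.2f}")
```

Increasing n_per_group in this sketch shows the familiar trade-off: a larger sample size shrinks the confidence interval, reduces beta and raises the power of the test.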

