
Clinically relevant difference with sample size

The definition of the minimally clinically relevant difference of interest involves clinical, medical, and regulatory experience and judgments. The appropriate sample size formula depends on the test of interest and should take into account the need for multiple comparisons (either among treatments or with respect to multiple examinations of the data). The project statistician provides critical guidance in this area. [Pg.181]

Now, whereas (1 − β)/α is a monotonically increasing function of the sample size n, L₁(β)/L₀(β) is not. It may increase at first, but eventually it will decline. The situation is illustrated in Figure 13.2, which takes the particular example of the trial in asthma considered in Section 13.1 above and shows the likelihood for the null hypothesis (scaled to equal 1 in the case where the observed treatment difference is zero) for all possible observed treatment differences, and also for the alternative hypothesis where the true treatment effect is equal to the clinically relevant difference. The situations for trials with 10 and 200 patients per group are illustrated. The critical value of the observed treatment difference for a two-sided test at the 5% level is marked in each case. For the smaller trial, a larger difference is required for significance; in the larger trial, a smaller difference is adequate. [Pg.205]
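This behaviour can be sketched numerically. The following is a minimal illustration, not the book's own calculation: the clinically relevant difference of 200 ml and standard deviation of 450 ml are assumed stand-ins for the asthma example, and each likelihood is scaled to 1 at its own maximum, matching the scaling described for the null.

```python
# Sketch of the comparison behind Figure 13.2 (illustrative values only).
from scipy.stats import norm

delta, sigma = 200.0, 450.0  # assumed: clinically relevant difference and SD, in ml
for n in (10, 200):          # patients per group
    se = sigma * (2.0 / n) ** 0.5   # standard error of the observed difference
    crit = norm.ppf(0.975) * se     # critical value, two-sided test at the 5% level
    # Likelihoods at the critical value, each scaled to 1 at its own maximum
    L0 = norm.pdf(crit, 0.0, se) / norm.pdf(0.0, 0.0, se)        # null
    L1 = norm.pdf(crit, delta, se) / norm.pdf(delta, delta, se)  # alternative
    print(f"n = {n:3d}: critical difference = {crit:5.1f} ml, L1/L0 = {L1 / L0:4.2f}")
```

Under these assumed values, the trial with 10 patients per group needs an observed difference of about 394 ml for significance, at which point the likelihood ratio favours the alternative (about 4.3); with 200 per group the critical difference falls to about 88 ml, but the likelihood ratio there drops to about 0.3, showing why L₁/L₀ is not monotone in n.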

A very unsatisfactory feature of conventional approaches to sample size calculation is that there is no mention of cost. This means that for any two quite different indications with the same effect size, that is to say the same ratio of clinically relevant difference to standard deviation, the sample size would be the same whatever the cost or difficulty of recruiting and treating patients. This is clearly illogical and trialists probably manage this issue informally by manipulating the clinically relevant difference in the way discussed in Section 13.2.3. Clearly, it would be better to include the cost explicitly, and this suggests decision-analytic approaches to sample size determination. There are various Bayesian suggestions and these will be discussed in the next section. [Pg.210]

Even if the clinically irrelevant difference is the same as the clinically relevant difference, and not, as will usually be the case, considerably smaller, it is generally true that equivalence trials require a larger sample size than a conventional trial for the same power. The general position is illustrated by Figure 15.2. For purposes of sample size determination it will generally be unwise to assume that the treatments are exactly equal. This would be the best possible case, and this already shows an important difference from conventional trials. We plan such trials to have adequate power to detect the difference we should not like to miss, but we may of course be fortunate and be faced with an even better drug. In that case the power is better than hoped for. If we are truly interested in showing equality (and not merely that an experimental treatment... [Pg.241]

As an example to help understand the effect of equivalence on sample size, consider a case where we wish to show that the difference in FEV₁ between two treatments is not greater than 200 ml, where the standard deviation is 350 ml, and where the conventional type I and type II error rates of 0.05 and 0.2 apply. If we assume that the drugs are in fact exactly identical, the sample size needed (using a Normal approximation) is 53. If we allow for a true difference of 50 ml, this rises to 69. On the other hand, if we wished to demonstrate superiority of one treatment over another for a clinically relevant difference of 200 ml with the other values as before, a sample size of 48 would suffice. Thus, in the best possible case a rise of about 10% in the sample size is needed (from 48 to 53). This difference arises because we have two one-sided tests, each of which must be significant in order to prove efficacy. To have 80% power overall, each must (in the case of exact equality) have approximately 90% power (because 0.9 × 0.9 ≈ 0.8). The relevant sum of z-values for the power calculation is thus 1.2816 + 1.6449 ≈ 2.93, as opposed to 0.8416 + 1.9600 ≈ 2.80 for a conventional trial. The ratio of the square of 2.93 to the square of 2.80 is about 1.1, explaining the 10% increase in sample size. [Pg.242]
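These figures can be checked with a short calculation under the normal approximation. The sketch below uses the two-one-sided-tests power formula; tost_power and n_equivalence are illustrative helper names, not functions from any standard library.

```python
# Reproducing the paragraph's arithmetic: equivalence margin 200 ml,
# SD 350 ml, alpha = 0.05 per one-sided test, 80% power.
from scipy.stats import norm

alpha, target_power = 0.05, 0.80
margin, sigma = 200.0, 350.0
z_a = norm.ppf(1 - alpha)

def tost_power(n, true_diff):
    """Power of the two one-sided tests with n patients per group."""
    se = sigma * (2.0 / n) ** 0.5
    return (norm.cdf((margin - true_diff) / se - z_a)
            + norm.cdf((margin + true_diff) / se - z_a) - 1)

def n_equivalence(true_diff):
    """Smallest n per group reaching the target power."""
    n = 2
    while tost_power(n, true_diff) < target_power:
        n += 1
    return n

print(n_equivalence(0.0))    # 53: drugs assumed exactly identical
print(n_equivalence(50.0))   # 69: true difference of 50 ml
# Conventional superiority trial for the same 200 ml difference:
n_sup = 2 * (sigma / margin) ** 2 * (norm.ppf(0.975) + norm.ppf(0.80)) ** 2
print(round(n_sup))          # ~48 (normal-approximation value 48.1)
```

Running this reproduces the 53, 69 and 48 quoted above, and the 10% rise appears directly as (2.93/2.80)² ≈ 1.1.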

A common aim of dose-finding studies is to establish the so-called minimum effective dose. The conceptual difficulty associated with this is that it is not well defined. If we simply use hypothesis tests of contrasts of dose against placebo to establish this, then the minimum effective dose depends on the sample size (Filloon, 1995). Larger trials will have more power to detect smaller differences and so, other things being equal, larger trials will tend to conclude that lower doses are effective. This problem is related to one which was discussed in Chapter 13 where we considered whether trials should be powered to prove that a clinically relevant difference obtains. [Pg.331]
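The dependence on sample size can be made concrete. In the sketch below, an assumed standard deviation of 1 (so differences are in SD units) shows how the smallest difference detectable with 80% power at the two-sided 5% level shrinks as the trial grows; the values of n are arbitrary illustrations.

```python
# Minimum detectable difference at 80% power, two-sided 5% test.
from scipy.stats import norm

sigma = 1.0                                 # differences measured in SD units
k = norm.ppf(0.975) + norm.ppf(0.80)        # 1.96 + 0.84 = 2.80
for n in (25, 100, 400):                    # patients per group
    d_min = k * sigma * (2.0 / n) ** 0.5    # smallest detectable difference
    print(f"n = {n:3d}: detectable difference = {d_min:.2f} sigma")
```

The detectable difference falls from about 0.79σ at 25 per group to about 0.20σ at 400 per group, which is why, other things being equal, larger trials will declare lower doses effective.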

Suppose now that the trial in the example were a trial in which a difference of 0.5 mmol/l was viewed as an important difference. Maybe this reflects the clinical relevance of such a difference, or perhaps from a commercial standpoint it would be a worthwhile difference to have. Under such circumstances, only having 62.3 per cent power to detect such a difference would be unacceptable; this corresponds to a 37.7 per cent type II error, an almost 40 per cent chance of failing to declare the difference significant. Well, there is only one thing you can do, and that is to increase the sample size. The recalculated values for power are given in Table 8.3 with a doubling of the sample size to 100 patients per group. [Pg.130]
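Table 8.3 is not reproduced here, but the effect of doubling can be sketched under a normal approximation. The text implies 50 patients per group before doubling; the standard deviation of about 1.1 mmol/l below is an assumption, chosen because it reproduces the quoted 62.3 per cent power, not a figure taken from the table.

```python
# Power before and after doubling the sample size (normal approximation).
from scipy.stats import norm

delta, sigma = 0.5, 1.1   # mmol/l; sigma assumed to match the 62.3% figure

def power(n):
    """Two-sided 5% test with n patients per group."""
    return norm.cdf(delta * (n / 2.0) ** 0.5 / sigma - norm.ppf(0.975))

print(f"{power(50):.3f}")    # ~0.623, as quoted
print(f"{power(100):.3f}")   # ~0.895 after doubling to 100 per group
```

Under these assumptions, doubling the sample size lifts the power from about 62 per cent to about 90 per cent.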

We commonly refer to the level of effect to be detected as the clinically relevant difference (crd): what level of effect is an important effect from a clinical standpoint? Note also that crd can stand for the commercially relevant difference; it could well be that the decision is based on commercial interests. Finally, crd can stand for the cynically relevant difference. It does happen from time to time that a statistician is asked to do a sample size calculation, "oh, and by the way, we want 200 patients". The issue here, of course, is budget, and the question really is: what level of effect are we able to detect with a sample size of 200? [Pg.132]
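The cynical version of the question can be answered by running the usual calculation in reverse: fix the sample size and solve for the detectable effect. A minimal sketch, assuming 200 patients means 100 per group, a two-sided 5% test, 80% power, and effects measured in standard-deviation units:

```python
# Effect size detectable with a budget-fixed sample of 100 per group.
from scipy.stats import norm

n_per_group = 100
k = norm.ppf(0.975) + norm.ppf(0.80)        # z-values for alpha and power
d = k * (2.0 / n_per_group) ** 0.5          # detectable effect in SD units
print(f"detectable effect = {d:.2f} standard deviations")   # ~0.40
```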

