Big Chemical Encyclopedia


Clinically relevant difference power

The optimal myeloablative preparative regimen is challenging to study because several indications for HCT (e.g., SCID and thalassemia) are rare enough that it is not feasible or is cost-prohibitive to conduct clinical trials that are powered adequately to detect clinically relevant differences. The long-term outcomes of busulfan-cyclophosphamide (BU-CY) and... [Pg.1453]

We see that, in contrast to the type I error, the type II error is defined as occurring when the null hypothesis is accepted although it is false. The power of a test is defined to be the probability of detecting a true difference and is equal to 1 − probability(type II error). The type II error and power depend upon the type I error, the sample size, the clinically relevant difference (CRD) that we are interested in detecting and the expected variability. Where do these values come from ... [Pg.303]

In a placebo-controlled hypertension trial, the primary endpoint is the fall in diastolic blood pressure. It is required to detect a clinically relevant difference of 8 mmHg in a 5 per cent level test. Historical data suggest that σ = 10 mmHg. Table 8.4 provides sample sizes for various levels of power and differences around 8 mmHg; the sample sizes are per group. [Pg.132]
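Sample sizes of this kind follow from the standard normal-approximation formula n = 2(σ/δ)²(z₁₋α/₂ + z₁₋β)² per group. A minimal sketch (the function name and the round-up convention are ours, not from the text):

```python
import math
from statistics import NormalDist  # Python 3.8+

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-arm
    parallel-group comparison of means with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.8416 for 80% power
    return math.ceil(2 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2)

# Hypertension example: delta = 8 mmHg, sigma = 10 mmHg, 5% two-sided test
print(n_per_group(delta=8, sigma=10, power=0.80))  # -> 25 per group
print(n_per_group(delta=8, sigma=10, power=0.90))  # -> 33 per group
```

Varying `delta` around 8 mmHg and `power` over 0.8–0.9 reproduces the shape of a table like Table 8.4.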

The sample size calculation should be detailed in the trial publication, indicating the estimated outcomes in each of the treatment groups (and this will define, in particular, the clinically relevant difference to be detected), the type I error, the type II error or power and, for a continuous primary outcome variable in a parallel group trial, the within-group standard deviation of that measure. For time-to-event data, details on the clinically relevant difference would usually be specified in terms of either the median event times or the proportions event-free at a certain time point. [Pg.258]

There are a number of values of the treatment effect (delta, Δ) that could lead to rejection of the null hypothesis of no difference between the two means. For purposes of estimating a sample size, the power of the study (that is, the probability that the null hypothesis of no difference is rejected given that the alternative hypothesis is true) is calculated for a specific value of Δ. In the case of a superiority trial, this specific value represents the minimally clinically relevant difference between groups that, if found to be plausible on the basis of the sample data through construction of a confidence interval, would be viewed as evidence of a definitive and clinically important treatment effect. [Pg.174]

It is desired to run a placebo-controlled parallel group trial in asthma. The target variable is forced expiratory volume in one second (FEV1). The clinically relevant difference is presumed to be 200 ml and the standard deviation 450 ml. A two-sided significance level of 0.05 (or 5%) is to be used and the power should be 0.8 (or 80%). What should the sample size be? ... [Pg.197]
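Applying the same normal approximation, n = 2(σ/δ)²(z₀.₉₇₅ + z₀.₈₀)², to these figures gives roughly 80 patients per group. An illustrative check:

```python
import math
from statistics import NormalDist  # Python 3.8+

# Asthma trial: CRD 200 ml in FEV1, SD 450 ml, two-sided alpha 0.05, power 0.8
delta, sigma = 200.0, 450.0
z = NormalDist().inv_cdf
n = 2 * (sigma / delta) ** 2 * (z(0.975) + z(0.80)) ** 2
print(math.ceil(n))  # -> 80 patients per group
```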

It may be a requirement that the results be robust to a number of alternative analyses. The problem that this raises is frequently ignored. However, where this requirement applies, unless the sample size is increased to take account of it, the power will be reduced (if power, in this context, is taken to be the probability that all required tests will be significant when the clinically relevant difference applies). This issue is discussed in section 13.2.12 below. [Pg.199]

Figure 13.1 Power as a function of clinically relevant difference for a two-parallel-group trial in asthma. The outcome variable is FEV1, the standard deviation is assumed to be 450 ml, and n is the number of patients per group. If the clinically relevant difference is 200 ml, 80 patients per group are needed for 80% power.
We should power trials so as to be able to prove that a clinically relevant difference obtains... [Pg.202]

The surprising thing is that this statement can also be shown to be true given suitable assumptions (Royall, 1986). We talked above about the power of the alternative hypothesis, but typically this hypothesis includes all sorts of values of the treatment effect. One argument is that if the sample size is increased, not only is the power of finding a clinically relevant difference, Δ, increased, but so is the power of finding lesser differences. Another argument is as follows. [Pg.204]

If we are required to prove significance in two trials, as is usually believed to be necessary in phase III for a successful NDA, then from the practical point of view it may be the power of the combined requirement which is important, rather than that for individual trials. On the assumption that the trials are competent, that the background planning has been carried out appropriately and that trial-by-treatment interactions may be dismissed, the power to detect the clinically relevant difference in both of two trials, each with power 80%, is 64%, since 0.8 × 0.8 = 0.64. This means that 1 − 0.64 = 0.36, or more than one-third, of such drug development programmes would fail in phase III for failure of one or both clinical trials on this basis. (This does not mean that one-third of drug developments will fail in phase III, since many drugs which survive that far may, indeed, have an effect which is superior to the clinically relevant difference. On the other hand, there may be other reasons for failure.) If it is desired to have 80% power overall, then it is necessary to run each trial with 90% power, since 0.9 × 0.9 = 0.81 ≈ 0.80. [Pg.208]
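The conjunctive-power arithmetic for two independent trials is simple enough to check directly; the per-trial power needed for a given overall power is its square root:

```python
# Two independent pivotal trials, each planned at 80% power.
p_each = 0.80
print(round(p_each ** 2, 2))       # 0.64 overall: over a third of programmes fail
print(round(1 - p_each ** 2, 2))   # 0.36 probability of at least one failure

# Per-trial power required for 80% overall power across both trials
print(round(0.80 ** 0.5, 3))       # 0.894, i.e. roughly 90% per trial
```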

Retrospective power calculations are sometimes encountered for so-called failed trials, but this seems particularly pointless. The clinically relevant difference does not change as a result of having run the trial, so the retrospective power is just a function of the observed variance. It says nothing about the effect of treatment. [Pg.209]

The precision ψ (if we take this to be the ratio of clinically relevant difference to standard error) of a trial designed to have a power of 1 − β for a two-sided test of size α will be... [Pg.234]

Even if the clinically irrelevant difference is the same as the clinically relevant difference and not, as will usually be the case, considerably smaller, it is generally the case that equivalence trials require a larger sample size for the same power as a conventional trial. The general position is illustrated by Figure 15.2. For purposes of sample size determination it will generally be unwise to assume that the treatments are exactly equal. This would be the best possible case and this already shows an important difference from conventional trials. We plan such trials to have adequate power to detect the difference we should not like to miss, but we may of course be fortunate and be faced with an even better drug. In that case the power is better than hoped for. If we are truly interested in showing equality (and not merely that an experimental treatment... [Pg.241]

As an example to help understand the effect of equivalence on sample size, consider a case where we wish to show that the difference in FEV1 between two treatments is not greater than 200 ml and where the standard deviation is 350 ml, for conventional type I and type II error rates of 0.05 and 0.2. If we assume that the drugs are in fact exactly identical, the sample size needed (using a Normal approximation) is 53. If we allow for a true difference of 50 ml this rises to 69. On the other hand, if we wished to demonstrate superiority of one treatment over another for a clinically relevant difference of 200 ml with the other values as before, a sample size of 48 would suffice. Thus, in the best possible case a rise of about 10% in the sample size is needed (from 48 to 53). This difference arises because we have two one-sided tests, each of which must be significant in order to prove efficacy. To have 80% power overall, each must (in the case of exact equality) have approximately 90% power (because 0.9 × 0.9 ≈ 0.8). The relevant sum of z-values for the power calculation is thus 1.2816 + 1.6449 ≈ 2.93, as opposed to 0.8416 + 1.9600 ≈ 2.80 for a conventional trial. The ratio of the square of 2.93 to the square of 2.80 is about 1.1, explaining the 10% increase in sample size. [Pg.242]
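This arithmetic can be checked with the z-values quoted in the passage. The sketch below is illustrative only: for the true-difference case it uses the common approximation in which only the nearer equivalence bound (200 − 50 = 150 ml away) matters, which lands close to, but not exactly on, the quoted figure:

```python
sigma, margin = 350.0, 200.0
z_90, z_80 = 1.2816, 0.8416    # power quantiles (90%, 80%)
z_95, z_975 = 1.6449, 1.9600   # one-sided 5% and two-sided 5% test quantiles

# Equivalence, drugs exactly identical: each one-sided test needs ~90% power
n_eq = 2 * (sigma / margin) ** 2 * (z_95 + z_90) ** 2
# Equivalence with a true difference of 50 ml: nearer bound is 150 ml away
n_50 = 2 * (sigma / (margin - 50)) ** 2 * (z_95 + z_80) ** 2
# Conventional superiority trial for a CRD of 200 ml
n_sup = 2 * (sigma / margin) ** 2 * (z_975 + z_80) ** 2

print(round(n_eq, 1), round(n_50, 1), round(n_sup, 1))
# -> 52.5 67.3 48.1, close to the 53, 69 and 48 quoted in the text
print(round((z_95 + z_90) ** 2 / (z_975 + z_80) ** 2, 2))
# -> 1.09, the roughly 10% rise in sample size
```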

A common aim of dose-finding studies is to establish the so-called minimum effective dose. The conceptual difficulty associated with this is that it is not well defined. If we simply use hypothesis tests of contrasts of dose against placebo to establish this, then the minimum effective dose depends on the sample size (Filloon, 1995). Larger trials will have more power to detect smaller differences and so, other things being equal, larger trials will tend to conclude that lower doses are effective. This problem is related to one which was discussed in Chapter 13 where we considered whether trials should be powered to prove that a clinically relevant difference obtains. [Pg.331]

Assurance. A sort of chimeric probability in which a frequentist power is averaged using a Bayesian prior distribution. It is thus the unconditional expected probability of a significant result as opposed to the conditional probability given a particular clinically relevant difference. [Pg.455]

Conjunctive power. The probability that a number of endpoints will be jointly significant as a function of a set of presumed clinically relevant differences. [Pg.460]

Power. The power is the probability of concluding that the alternative hypothesis is true given that it is in fact true. It depends on the statistical test being employed, the size of that test, the nature and variability of the observations obtained and the size of the trial. It also depends on the alternative hypothesis. In practice there is no single alternative hypothesis, so a reference alternative based on a clinically relevant difference is usually employed. The power of a trial is a useful concept when planning the trial but has little relevance to the interpretation of its results. (Caution: not all statisticians agree with this last statement.)... [Pg.472]

Suppose now that the trial in the example were a trial in which a difference of 0.5 mmol/l was viewed as an important difference. Maybe this reflects the clinical relevance of such a difference, or perhaps from a commercial standpoint it would be a worthwhile difference to have. Under such circumstances, only having 62.3 per cent power to detect such a difference would be unacceptable; this corresponds to a 37.7 per cent type II error, an almost 40 per cent chance of failing to declare significant differences. Well, there is only one thing you can do, and that is to increase the sample size. The recalculated values for power are given in Table 8.3 with a doubling of the sample size to 100 patients per group. [Pg.130]
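Power as a function of group size can be sketched with the normal approximation power ≈ Φ(δ√(n/2)/σ − z₁₋α/₂). The standard deviation is not given in this excerpt; σ = 1.1 mmol/l is our assumed illustrative value, chosen because it reproduces the 62.3% figure at 50 patients per group:

```python
from statistics import NormalDist  # Python 3.8+

def power(n, delta, sigma, alpha=0.05):
    """Approximate power of a two-group comparison of means with
    n patients per group and a two-sided test (far tail ignored)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(delta * (n / 2) ** 0.5 / sigma - z_alpha)

# delta = 0.5 mmol/l from the text; sigma = 1.1 mmol/l is assumed
print(round(power(50, 0.5, 1.1), 3))   # -> 0.623: the 62.3% in the text
print(round(power(100, 0.5, 1.1), 3))  # -> 0.895 after doubling to 100 per group
```

Doubling the sample size thus lifts power from about 62% to about 90% under these assumptions.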

The clinically/commercially relevant difference (CRD). If the expected difference is larger than this, then it could be worth considering powering for the expected effect; the required sample size will then be smaller. [Pg.139]

The investigator should consider whether the study design is likely to disclose critical differences of clinical relevance. Important factors in this context are the range of measurements, the SDs of the random errors of the involved methods, and the number of samples. These factors determine the statistical power of a method comparison study (i.e., the ability of the data analysis procedure to verify the presence of a given systematic difference). [Pg.391]

Additional TMDSC study of other vinyl polysiloxane, polyether and polysulfide impression materials is important to verify if the polymer transitions shown in Figures 16 to 19 generally exist in different products and to investigate the effects of other temperature modulation conditions. Complementary research on correlations with clinically relevant mechanical properties of the elastomeric impression materials is needed to verify if these thermal analyses have useful predictive power. Interestingly, when compared at apparently similar viscosities, the reported values of the elastic modulus [3] are highest for the vinyl polysiloxane silicone impression materials, intermediate for the polyether impression materials, and lowest for the polysulfide impression materials, in reverse order to the relative values of Tg found in our thermal analyses [45]. Our X-ray diffraction and scanning electron microscopic study [47] of these impression materials has shown that they contain substantial amounts of crystalline filler particles in the micron size range, which are incorporated by manufacturers to achieve the clinically desired viscosity levels. These filler particles should have considerable influence on the mechanical properties of the impression materials. [Pg.654]

