
Treatment effects/differences power

Many trials combine several events into a composite primary outcome measure. This can produce a useful measure of the overall effect of treatment on all the relevant outcomes, and it usually affords greater statistical power, but the outcome that matters most to a particular patient may be affected differently by treatment than the combined outcome is. Composite outcomes also sometimes combine events of very different severity, and treatment effects can then be driven by the least important outcome, which is often the most frequent. Equally problematic is a composite of definite clinical events and episodes of hospitalization. The fact that a patient is in a trial will probably affect the likelihood of hospitalization, and that likelihood will in any case vary between different healthcare systems. [Pg.235]

There are a number of values of the treatment effect (delta, Δ) that could lead to rejection of the null hypothesis of no difference between the two means. For purposes of estimating a sample size, the power of the study (that is, the probability that the null hypothesis of no difference is rejected given that the alternative hypothesis is true) is calculated for a specific value of Δ. In the case of a superiority trial, this specific value represents the minimally clinically relevant difference between groups that, if found to be plausible on the basis of the sample data through construction of a confidence interval, would be viewed as evidence of a definitive and clinically important treatment effect. [Pg.174]
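As an illustration (not from the source), the standard normal-approximation formula for the sample size per group when comparing two means with common standard deviation σ, two-sided significance level α, and power 1 − β is n = 2σ²(z₁₋α/₂ + z₁₋β)²/Δ². A minimal Python sketch; the values of Δ and σ are hypothetical:

```python
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided
    two-sample comparison of means (common, known sigma)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to target power
    return 2 * (sigma * (z_alpha + z_beta) / delta) ** 2

# Illustrative values (not from the source): a minimally clinically
# relevant difference of 5 units with sigma = 10.
print(n_per_group(delta=5, sigma=10))   # approx 62.8 -> 63 patients per group
```

Note that n scales with 1/Δ², so halving the minimally clinically relevant difference quadruples the required sample size; the choice of Δ dominates the calculation.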

Even a type III trial, however, will not provide adequate power to test differences in response to treatment between men and women. There are two reasons. First, we may expect that the difference in the treatment effect between one sex and the other will be smaller than the larger of the two effects. (If this were not the case, it would imply that the treatment was harmful for one sex and beneficial for the other, a so-called qualitative interaction.) Second, such interactive contrasts have, in any case (other things being equal), higher variances, as the sketch below illustrates. Thus a type IV trial will require even greater numbers of patients and longer recruitment times. [Pg.136]
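To make the variance point concrete: with n patients per treatment arm within each sex and common variance σ², the within-sex treatment effect estimate has variance 2σ²/n, while the sex-by-treatment interaction contrast (the difference of the two within-sex effects) has variance 4σ²/n. A small sketch under these assumptions; the numbers are illustrative:

```python
# Variance of contrasts with n patients per treatment arm within each sex,
# common variance sigma^2 (illustrative values, not from the source).
sigma2, n = 100.0, 50

var_effect_one_sex = 2 * sigma2 / n          # (treated mean - control mean) in one sex
var_interaction = 4 * sigma2 / n             # difference of the two sex-specific effects
print(var_effect_one_sex, var_interaction)   # 4.0 vs 8.0: twice the variance

# Required n scales with variance / effect^2, so detecting an interaction of
# the SAME size as the main effect needs twice the patients per cell; a smaller
# interaction (the usual case) inflates this further, quadratically.
```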

If we wish to say something about the difference which obtains, then it is better to quote a so-called point estimate of the true treatment effect, together with associated confidence limits. The point estimate (which in the simplest case would be the difference between the two sample means) gives a value of the treatment effect supported by the observed data in the absence of any other information. It does not, of course, have to obtain. The upper and lower 1 − α confidence limits define an interval of values which, were we to adopt any of them as the null hypothesis for the treatment effect, would not be rejected by a hypothesis test of size α. If we accept the general Neyman-Pearson framework and if we wish to claim any single value as the proven treatment effect, then it is the lower confidence limit, rather than any value used in the power calculation, which fulfills this role. (See Chapter 4.)... [Pg.201]
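The duality described here can be checked numerically: a value lies inside the 1 − α confidence interval exactly when a size-α test taking it as the null hypothesis does not reject. A minimal sketch using the normal approximation; the data are simulated and purely illustrative:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(5.0, 10.0, size=63)   # treatment arm (illustrative data)
y = rng.normal(0.0, 10.0, size=63)   # control arm

diff = x.mean() - y.mean()                          # point estimate of the effect
se = np.sqrt(x.var(ddof=1)/len(x) + y.var(ddof=1)/len(y))
z = norm.ppf(0.975)                                 # alpha = 0.05, two-sided
lower, upper = diff - z * se, diff + z * se         # 95% confidence limits

# Duality check: delta0 inside the interval <=> |diff - delta0| / se < z,
# i.e. the size-alpha test of H0: effect = delta0 does not reject.
delta0 = lower + 0.01
assert abs(diff - delta0) / se < z
print(f"estimate {diff:.2f}, 95% CI ({lower:.2f}, {upper:.2f})")
```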

The surprising thing is that this statement can also be shown to be true given suitable assumptions (Royall, 1986). We talked above about power under the alternative hypothesis, but typically this hypothesis includes all sorts of values of the treatment effect. One argument is that if the sample size is increased, not only is the power of finding a clinically relevant difference, Δ, increased, but so too is the power of finding lesser differences (see the sketch following this excerpt). Another argument is as follows. [Pg.204]
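The first argument can be illustrated under the normal approximation, where the power at a true effect δ is approximately Φ(δ/SE − z₁₋α/₂) with SE = σ√(2/n), the lower rejection tail being negligible for positive δ. A hedged sketch with illustrative values (none taken from the source):

```python
import numpy as np
from scipy.stats import norm

def power(delta, n, sigma=10.0, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test at true effect delta
    (upper-tail term only; the lower tail is negligible for positive delta)."""
    se = sigma * np.sqrt(2.0 / n)
    return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

# Power rises at the clinically relevant difference (5.0) AND at lesser ones (2.5).
for n in (63, 126, 252):
    print(n, round(power(5.0, n), 3), round(power(2.5, n), 3))
```

Each doubling of n raises the power at the lesser difference as well as at the clinically relevant one, which is the point being made.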

This test has very low power, however, for three reasons. (1) It is a between-patient test in a trial that has been designed to use within-patient differences to detect treatment effects. (2) The carry-over effect, where it occurs, is likely to be somewhat smaller than the pure effect of treatment. (3) The carry-over is in any case only manifested in the second period. Therefore, although it is necessary to use the totals to compare sequences, in order to account for other effects that might bias the test of carry-over, the direct information about carry-over comes only from the second period, and its effect is diluted. In short, although a test of carry-over is available, it is too weak to be of much use. A sketch of the test follows. [Pg.278]
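For the standard two-period, two-sequence (AB/BA) crossover design, the carry-over test referred to here compares per-subject totals (period 1 + period 2) between the two sequence groups with a between-patient two-sample test. A minimal sketch under those assumptions; the data arrays are invented for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

# Illustrative responses: rows are subjects, columns are period 1 and period 2.
seq_AB = np.array([[8.1, 6.4], [7.9, 5.8], [9.0, 6.9], [8.4, 6.1]])
seq_BA = np.array([[6.2, 8.8], [5.9, 8.0], [6.7, 9.1], [6.0, 8.3]])

# Carry-over test: compare per-subject totals between sequences.
# Equal totals are consistent with no (differential) carry-over.
totals_AB = seq_AB.sum(axis=1)
totals_BA = seq_BA.sum(axis=1)
t, p = ttest_ind(totals_AB, totals_BA)
print(f"t = {t:.2f}, p = {p:.3f}")   # a between-patient test, hence low power
```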

Exposure of E. coli to microwave treatment results in a reduction of the microbial population in apple juice. Canumir et al. (2002) determined the effect of pasteurization at different power levels (270-900 W) on the microbial quality of apple juice, using a domestic 2450 MHz microwave. The data obtained were compared with conventional pasteurization (83 °C for 30 s). Apple juice pasteurization at 720-900 W for 60-90 s resulted in a 2- to 4-log population reduction. Using a linear model, the D-values ranged from 0.42 ± 0.03 minutes at 900 W to 3.88 ± 0.26 minutes at 270 W. The value for z was 652.5 ± 2.16 W (58.5 ± 0.4 °C). These observations indicate that inactivation of E. coli is due to heat. [Pg.130]
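The quoted figures can be checked with the usual first-order inactivation relations: the log₁₀ reduction equals exposure time divided by the D-value, and the z-value is the change in the controlling variable (here power, in watts) needed for a tenfold change in D. A small sketch using the means quoted above (uncertainties omitted):

```python
import math

# Log reduction implied by a D-value: log10 reduction = exposure time / D.
def log_reduction(t_min, d_min):
    return t_min / d_min

# Means quoted in the text.
print(log_reduction(90 / 60, 0.42))   # 900 W for 90 s -> ~3.6 log reduction
print(log_reduction(60 / 60, 3.88))   # 270 W for 60 s -> ~0.26 log reduction

# z-value: increase in power required for a tenfold drop in D.
z_watts = (900 - 270) / math.log10(3.88 / 0.42)
print(z_watts)                        # ~652 W, consistent with the quoted 652.5 W
```

The ~3.6-log figure at 900 W for 90 s is consistent with the 2- to 4-log reduction reported in the text.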

Yasuda et al. [198-200] studied the effect of plasma treatment on different fibers and fabrics. They used four nonpolymerizing gases: helium, air, nitrogen, and tetrafluoromethane. It was found that in some cases etching of the fiber was accompanied by implantation of specific atoms into its surface. Model studies performed with nylon 6 showed that plasma treatment, like plasma polymerization, may be carried out in the power-deficient range as well as in the gas-deficient range. [Pg.102]

In discussions of meta-analysis, much attention is often given to the random effects model versus the fixed effects model. Random effects models assume that the true treatment effects of the individual trials represent a random sample from some population. The random effects model estimates the population mean of the treatment effects and accounts for the variation in the observed effects. It is sometimes stated that the fixed effects model assumes that the individual trial effects are constant; however, this is not a necessary assumption. An alternative view is that the fixed effects model estimates the mean of the true treatment effects of the trials in the meta-analysis. Senn (2000) discussed the analogy with center effects in multicenter trials. In safety analyses, random effects models may be problematic because of the need to estimate between-trial variation from sparse data. Additionally, the random effects model is less statistically powerful than the fixed effects model, although the hypotheses tested are different. In the fixed effects model, the variance estimate should account for trial effect differences, either through stratification, conditioning, or modeling of fixed effects. A sketch of both estimators follows. [Pg.242]
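As a hedged illustration of the two approaches, the sketch below computes the inverse-variance fixed effect estimate and the DerSimonian-Laird random effects estimate, which inflates the trial variances by an estimated between-trial component τ². The per-trial effects and variances are invented for illustration:

```python
import numpy as np

# Illustrative per-trial treatment effects and their variances (not real data).
effects = np.array([0.30, 0.10, 0.45, 0.20])
variances = np.array([0.02, 0.05, 0.04, 0.03])

# Fixed effect: inverse-variance weighted mean of the trial effects.
w = 1.0 / variances
fixed = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird: estimate between-trial variance tau^2 from Cochran's Q.
Q = np.sum(w * (effects - fixed) ** 2)
k = len(effects)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (variances + tau2)       # random-effects weights
random = np.sum(w_star * effects) / np.sum(w_star)

print(f"fixed effect {fixed:.3f}, tau^2 {tau2:.3f}, random effects {random:.3f}")
```

With sparse safety data, τ² is estimated from very little information, which is one reason the text flags random effects models as problematic in that setting.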



