
Statistics randomisation

How the baseline measurement will be used in relation to the critical evaluable endpoints must be decided before analysis begins. Comparison of two or more treatments usually takes into account differences in baseline values between the treatment groups at the point of randomisation. The way in which the analysis will influence the report and publications also needs to be decided, as some regulatory authorities have their own statistical criteria that must be observed (e.g. for bioequivalence studies). [Pg.229]

A second reason for randomisation is that, from a statistical perspective, it ensures the validity of the standard approaches to statistical inference: t-tests, analysis of variance (ANOVA) and so on. [Pg.294]

Section 1 of the guidelines establishes the context of the submission. It asks for a description of the drug, its use on the PBS and the therapies that will be co-administered or substituted. Section 2 asks for the best available evidence on the clinical performance of the drug, including the scientific and statistical rigour of randomised trials, and a preliminary economic evaluation based on evidence from the randomised trials. Section 3 describes when extrapolation beyond the preliminary economic evaluation may be made and how adjustments can be made in a modelled economic evaluation. Section 4 requests a financial analysis from the perspective of the PBS and government health budgets. [Pg.670]

So if you are planning a trial then stick with stratification and avoid dynamic allocation. If you have an ongoing trial which is using dynamic allocation then continue, but be prepared at the statistical analysis stage to supplement the standard methods of calculating p-values with more complex methods which take account of the dynamic allocation scheme. These methods go under the name of randomisation tests. [Pg.10]
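As a minimal sketch of the idea behind such tests (not the scheme-aware procedures the text refers to, which must mirror the actual dynamic allocation), a randomisation test recomputes the test statistic under repeated re-allocations of patients to arms; the version below assumes simple complete randomisation and entirely hypothetical outcome data:

```python
import random

def randomisation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided randomisation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one re-allocation of patients to the two arms
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_perm  # proportion of re-allocations at least as extreme

# hypothetical outcome data for two treatment arms
a = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
b = [4.2, 4.5, 4.0, 4.8, 4.3, 4.1]
p = randomisation_test(a, b)
```

The p-value is simply the proportion of re-randomisations giving a treatment difference at least as large as the one observed, which is why the test's validity rests directly on the allocation scheme actually used.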

Firstly, if the randomisation has been stratified for baseline variables then from a theoretical statistical point of view these variables should be taken into account in the analysis. Secondly, the efficiency of the statistical analysis can be improved in several ways if baseline prognostic factors (factors which influence outcome) are included in the analysis. Finally, it provides a framework for the investigation of the consistency of the treatment effect according to different values for those factors. [Pg.91]
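The second point, the efficiency gain from adjusting for a prognostic baseline factor, can be illustrated with a small simulation (all data and effect sizes here are hypothetical; the adjusted estimate comes from an ordinary least-squares fit with the baseline as a covariate):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
treatment = rng.integers(0, 2, n)      # 0 = control, 1 = active
baseline = rng.normal(50, 10, n)       # prognostic baseline score
# outcome depends on the baseline and on a true treatment effect of 3.0
outcome = 0.8 * baseline + 3.0 * treatment + rng.normal(0, 5, n)

# unadjusted estimate: simple difference in group means
unadjusted = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# adjusted estimate: regress outcome on intercept, treatment and baseline
X = np.column_stack([np.ones(n), treatment, baseline])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = coef[1]  # treatment coefficient, baseline variation removed
```

Both estimates are unbiased under randomisation, but the adjusted one has a smaller standard error because variation explained by the baseline no longer contributes to the residual noise.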

Statistical testing for baseline imbalance has no role in a trial where the handling of randomisation and blinding has been fully satisfactory. ... [Pg.109]

It is nonetheless appropriate to produce baseline tables of summary statistics for each of the treatment groups. These should be looked at from a clinical perspective and imbalances in variables that are potentially prognostic noted. Good practice hopefully will have ensured that the randomisation has been stratified for important baseline prognostic factors and/or the important prognostic factors... [Pg.109]

The groups were compared overall: the five-year death rate among the 1103 patients randomised to clofibrate was 20.0 per cent, compared with 20.9 per cent among the 2789 placebo patients. This difference was not statistically significant (p = 0.55). [Pg.114]
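As a rough check on the quoted figures, a two-proportion z-test on the death counts implied by the reported rates (221/1103 and 583/2789, rounded reconstructions rather than the published counts) gives a p-value close to the quoted 0.55:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# death counts implied by the quoted five-year rates (rounded)
p_value = two_proportion_ztest(221, 1103, 583, 2789)  # clofibrate vs placebo
```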

The principle of intention-to-treat (ITT) tells us to compare the patients according to the treatments to which they were randomised. Randomisation gives us comparable groups; removing patients at the analysis stage destroys the randomisation and introduces bias. Randomisation also underpins the validity of the statistical comparisons. If we depart from the randomisation scheme then the statistical properties of our tests are compromised. [Pg.115]

This was a multi-centre, pan-European, randomised double-blind placebo-controlled clinical trial in acute stroke to evaluate the effect of ancrod, a natural defibrinogenating agent (Hennerici et al. (2006)). The primary endpoint was based on the Barthel Index: a favourable score of 95 or 100, or a return to the pre-stroke level at three months, was viewed as a success. The primary method of statistical analysis was based on a logistic model including terms for treatment, age category, baseline Scandinavian Stroke Scale and centre. [Pg.223]
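An adjusted logistic analysis of this kind can be sketched as follows, on simulated data and with the centre term omitted for brevity (all variable names, effect sizes and the Newton-Raphson fit below are illustrative assumptions, not the trial's actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
treatment = rng.integers(0, 2, n)      # placebo vs ancrod (hypothetical)
age_cat = rng.integers(0, 3, n)        # hypothetical age categories
baseline_sss = rng.normal(40, 10, n)   # baseline Scandinavian Stroke Scale

# simulate a binary "success" outcome (favourable Barthel Index)
lin = -3.0 + 0.4 * treatment - 0.2 * age_cat + 0.06 * baseline_sss
y = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(float)

# design matrix: intercept, treatment, age category, baseline score
X = np.column_stack([np.ones(n), treatment, age_cat, baseline_sss])

# fit the logistic model by Newton-Raphson (IRLS)
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))       # fitted success probabilities
    W = mu * (1 - mu)                      # IRLS weights
    grad = X.T @ (y - mu)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

treatment_log_odds = beta[1]  # adjusted treatment effect, log-odds scale
```

The treatment coefficient is the log-odds ratio for success, adjusted for the other terms, which is what such a model's primary comparison reports.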

Arani RB, Soong S-J, Weiss HL, Wood MJ et al. (2001) Phase-specific analysis of herpes zoster associated pain data: a new statistical approach. Statistics in Medicine, 20, 2429-2439
Bedikian AY, Millward M, Pehamberger H, Conry R et al. (2006) Bcl-2 antisense (oblimersen sodium) plus dacarbazine in patients with advanced melanoma: the Oblimersen Melanoma Study Group. Journal of Clinical Oncology, 24, 4738-4745
Bland M (2004) Cluster randomised trials in the medical literature: two bibliometric surveys. BMC Medical Research Methodology, 4, 21... [Pg.261]

Byar DP (1980) Why data bases should not replace randomised trials. Biometrics, 36, 337-342
Campbell MJ, Donner A and Klar N (2007) Developments in cluster randomised trials and Statistics in Medicine. Statistics in Medicine, 26, 2-19
Coronary Drug Project Research Group (1980) Influence of adherence to treatment and response of cholesterol on mortality in the coronary drug project. New England Journal of Medicine, 303, 1038-1041... [Pg.261]

Senn S (1997) Statistical Issues in Drug Development. Chichester: John Wiley & Sons, Ltd
Senn S (2002) Cross-over Trials in Clinical Research (2nd edn). Chichester: John Wiley & Sons
Senn S (2003) Disappointing dichotomies. Pharmaceutical Statistics, 2, 239-240
Sherman DG, Atkinson RP, Chippendale T et al. (2000) Intravenous ancrod for treatment of acute ischemic stroke: the STAT study, a randomised controlled trial. Journal of the American Medical Association, 283, 2395-2403 [Pg.264]

Other problems pointed out by Box et al. [20] are serially correlated errors, dynamic relations and feedback. All the above problems can be overcome by the use of properly designed statistical experiments that employ features such as randomisation, blocking and other suitable controls. [Pg.203]
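A randomised block allocation of the kind mentioned can be sketched as follows (treatment and block labels are hypothetical): each treatment appears once per block, and the order within each block is independently randomised, so block-to-block differences cancel out of the treatment comparison.

```python
import random

def randomised_block_design(treatments, blocks, seed=1):
    """Assign each treatment once per block, in random order within blocks."""
    rng = random.Random(seed)
    plan = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)  # independent randomisation within each block
        plan[block] = order
    return plan

# hypothetical experiment: four treatments run on three days (blocks)
plan = randomised_block_design(["A", "B", "C", "D"], ["day1", "day2", "day3"])
```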

The randomisation test proposed by Wiklund et al. [34] assesses the statistical significance of each individual component that enters the model. This had been studied previously, e.g. using a t- or F-test (for instance, Wold's criterion seen above), but those approaches are all based on unrealistic assumptions about the data, e.g. the absence of spectral noise; see [34] for more advanced explanations and examples. A pragmatic data-driven approach is therefore called for, and it has been studied in some detail recently [34,40]. We have included it here because it is simple, fairly intuitive and fast, and it seems promising for many applications. [Pg.208]
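In simplified form (this is a generic illustration of the idea, not the exact procedure of Wiklund et al.), the approach permutes the response and asks how often a component's score-response association is matched by chance alone; the score and response vectors below are hypothetical:

```python
import random

def component_significance(scores, y, n_perm=2000, seed=0):
    """Permutation p-value for the correlation between a component's
    scores and the response y (simplified randomisation test)."""
    rng = random.Random(seed)

    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = sum((a - mu) ** 2 for a in u) ** 0.5
        sv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (su * sv)

    observed = abs(corr(scores, y))
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)  # break any real score-response relationship
        if abs(corr(scores, y_perm)) >= observed:
            hits += 1
    return hits / n_perm

# hypothetical component scores and a strongly related response
t = [0.1, 0.4, 0.2, 0.8, 0.6, 0.9, 0.3, 0.7]
y = [1.0, 1.9, 1.4, 3.1, 2.6, 3.4, 1.6, 2.8]
p = component_significance(t, y)
```

A component whose scores carry real predictive information yields a small p-value; a component fitting only noise yields a p-value spread uniformly over (0, 1).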

The tetrafluoromethane ion has also been found to decay before electronic randomisation has occurred [129, 769] (see Sect. 5.3 and 5.9 for other perfluorinated molecules). The breakdown diagrams for CF3X molecules (X = a halogen atom other than F) have been reported [690]. Translational energy release distributions have also been measured for these molecules and shown to be in agreement with the predictions of statistical theory (phase space theory) [691]. Carbonyl chloride and fluoride have been studied [451] (see Sect. 8). [Pg.97]

C2H4 [877], C2H6 [877], C2H5OH [130], (CH3)2Hg [154] and C2F6 [766, 767, 768, 769] have been studied at fixed wavelength. Decompositions of (C2F6)+ have been shown to be non-statistical, in that electronic energy is not randomised (see Sect. 8.2). [Pg.98]

A second important event was the development by Hosemann (1950) of a theory by which the X-ray patterns are explained in a completely different way, namely, in terms of statistical disorder. In this concept, the paracrystallinity model (Fig. 2.11), the so-called amorphous regions appear to be the same as small defect sites. A randomised amorphous phase is not required to explain polymer behaviour. Several phenomena, such as creep, recrystallisation and fracture, are better explained by motions of dislocations (as in solid state physics) than by the traditional fringed micelle model. [Pg.31]

In August 2005, a Phase IIa clinical study of bevirimat was completed successfully. In this randomised, double-blind Phase IIa study, bevirimat monotherapy for ten days resulted in statistically significant reductions in viral load compared with placebo, with individual decreases of up to 1.7 log10 at the 100 and 200 mg doses. Genetic analysis of HIV in patients pre- and post-treatment showed no evidence of the development of resistance to the drug. [Pg.387]

If we take the averages of the four results for each treatment they will be statistically sound if the randomisation was properly carried out, but the error in each mean (average) will be inflated because it includes in itself the differences between blocks. The experiment will then not be as accurate as it might be. [Pg.11]

Randomisation introduces a deliberate element of chance into the assignment of treatments to the subjects in a clinical trial. It provides a sound statistical basis for the evaluation of the evidence... [Pg.61]

