Big Chemical Encyclopedia


Error types sample size

We see that, in contrast to the type-I error, the type-II error is defined as occurring when we accept the null hypothesis when it is false. The power of a test is defined to be the probability of detecting a true difference and is equal to 1 − probability (type-II error). The type-II error and power depend upon the type-I error, the sample size, the clinically relevant difference (CRD) that we are interested in detecting and the expected variability. Where do these values come from ... [Pg.303]

Freiman JA, Chalmers TC, Smith HJ, et al. The importance of beta, the type II error and sample size in the design and interpretation of the randomized control trial. Survey of 71 negative trials. N Engl J Med 1978;299:690-4. [Pg.309]

In the next several sections, the theoretical distributions and tests of significance will be examined, beginning with Student's distribution, or t test. If the data contained only random (or chance) errors, the cumulative estimates x̄ and s would gradually approach the limits μ and σ. The distribution of results would be normally distributed with mean μ and standard deviation σ. Were the true mean of the infinite population known, it would also have some symmetrical type of distribution centered around μ. However, it would be expected that the dispersion or spread of this distribution about the mean would depend on the sample size. [Pg.197]

A problem long appreciated in economic evaluations, but whose seriousness has perhaps been underestimated (Sturm et al., 1999), is that a sample size sufficient to power a clinical evaluation may be too small for an economic evaluation. This is mainly because the economic criterion variable (cost or cost-effectiveness) shows a tendency to be highly skewed. (One common source of such a skew is that a small proportion of people in a sample make high use of costly in-patient services.) This often means that a trade-off has to be made between a sample large enough for a fully powered economic evaluation, and an affordable research study. Questions also need to be asked about what constitutes a meaningful cost or cost-effectiveness difference, and whether the precision (type I error) of a cost test could be lower than with an effectiveness test (O'Brien et al., 1994). [Pg.16]

Growing experience with complex disease genetics has made clear the need to minimize type I error in genetic studies [41, 109]. Power is especially an issue for SNP-based association studies of susceptibility loci for phenomena such as response to pharmacological therapy, which are extremely heterogeneous and which are likely to involve genes of small individual effect. Table 10.2 shows some simple estimates of the required sample sizes of cases needed to detect a true odds ratio (OR) of 1.5 with 80% power and a type I error probability (α) of either 0.05 or 0.005. [Pg.226]
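As a rough illustration of the kind of calculation behind such a table, the sketch below uses the standard normal-approximation formula for comparing two proportions in a case-control design. The assumed control exposure frequency of 0.3 is hypothetical, not a value taken from Table 10.2.

```python
from scipy.stats import norm

def cases_needed(odds_ratio, p0, alpha, power):
    """Approximate number of cases (with equal controls) to detect an OR.

    p0 is the exposure frequency in controls; the OR implies the
    exposure frequency p1 in cases.
    """
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    p_bar = (p0 + p1) / 2
    z_a = norm.ppf(1 - alpha / 2)   # two-sided type I error
    z_b = norm.ppf(power)           # power = 1 - beta
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2
         / (p1 - p0) ** 2)
    return int(n) + 1

for alpha in (0.05, 0.005):
    print(alpha, cases_needed(1.5, p0=0.3, alpha=alpha, power=0.80))
```

With these assumptions roughly 425 cases are needed at α = 0.05, and considerably more at α = 0.005, which is the qualitative message of the text: stricter type I error control drives the sample size up.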

In anticipation of the development to operational status of the ion or direct counting systems, it would be helpful if we could compare these values with projected counting errors for the two types of direct counting systems being developed. Table 4 lists projections for the Rochester Van de Graaff facility [49] and the University of California Lawrence Berkeley cyclotron system employing an external ion source [31,50]. Table 4 also lists the sample sizes and approximate measurement periods for both systems. These data illustrate the potential extension in dating... [Pg.456]

Increasing the sample size would reduce both alpha and beta, but samples, and especially their analyses, cost money. Intuitively, the minimal total loss should occur when the expected losses are equal, so the relative alpha and beta should be found by equating the expected loss from a type I error with the expected loss from a type II error.
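A minimal sketch of that balancing idea for a one-sided normal test; the loss values, means, standard deviation and sample size are all hypothetical, chosen only to make the example run.

```python
from scipy.stats import norm
from scipy.optimize import brentq

# Hypothetical setup: H0: mu = 0 vs H1: mu = 1, known sigma, n observations.
sigma, n = 2.0, 25
se = sigma / n ** 0.5
loss_I, loss_II = 10.0, 4.0   # assumed costs of type I and type II errors

def loss_gap(c):
    alpha = 1 - norm.cdf(c / se)          # reject H0 when xbar > c
    beta = norm.cdf((c - 1.0) / se)       # accept H0 when H1 is true
    return loss_I * alpha - loss_II * beta

c = brentq(loss_gap, 0.0, 1.0)  # critical value equating the expected losses
print(c, 1 - norm.cdf(c / se), norm.cdf((c - 1.0) / se))
```

The critical value that zeroes the gap fixes the relative alpha and beta at which the two expected losses are equal, which is the criterion the text proposes.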

The aim of any clinical trial is to have low risk of Type I and II errors and sufficient power to detect a difference between treatments, if it exists. Of the three factors in determining sample size, the power (probability of detecting a true difference) is arbitrarily chosen. The magnitude of the drug's effect can be estimated with more or less accuracy from previous experience with drugs of the same or similar action, and the variability of the measurements is often known from published experiments on the primary endpoint, with or without the drug. These data will, however, not be available for novel substances in a new class, and frequently the sample size in the early phase of development has to be chosen on an arbitrary basis. [Pg.228]

In Section 5.4.2.2 we encountered the concept of the type-I error, which was defined as rejecting the null hypothesis when it is true. When considering how to sample size a study we need to consider a second type of error - the type-II error. The relationship between this second error and the null hypothesis is illustrated in the table below. [Pg.303]

In most cases we will be interested in determining the sample size for a given type-II error, which is typically fixed at values of 0.1 or 0.2. [Pg.303]

Sample size will increase if the type-I and type-II errors, α and β, are stricter in the sense that they are smaller... [Pg.303]

To illustrate the use of the formula, suppose we are designing a trial to compare treatments for the reduction of blood pressure. We determine that a clinically relevant difference is 5 mmHg and that the between-patient standard deviation σ is 10 mmHg. A type-I error is set at 0.05 and the type-II error at 0.20. Then the required sample size, per group, is ... [Pg.303]
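The excerpt elides the formula itself; the standard two-group version is n = 2·σ²·(z(1−α/2) + z(1−β))² / d² per group, which the sketch below evaluates for the numbers quoted in the example.

```python
from scipy.stats import norm

# Standard two-group sample size formula:
#   n per group = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
d, sigma = 5.0, 10.0          # clinically relevant difference and SD (mmHg)
alpha, beta = 0.05, 0.20      # type-I and type-II error rates
z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
n = 2 * sigma ** 2 * z ** 2 / d ** 2
print(n)  # ~62.8, so 63 patients per group
```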

It is generally the case that when more complex statistical analysis strategies and designs are under consideration, standard sample size calculations are inadequate to cover them. In such circumstances simulation is often used to determine the type-I and type-II errors of the proposed studies for a given sample size. [Pg.304]
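A minimal sketch of that simulation idea for a two-group t-test; the effect size, SD and sample size reuse the hypothetical blood-pressure numbers from the example above rather than anything from this excerpt.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def rejection_rate(true_diff, n_per_group, sigma=10.0, alpha=0.05,
                   n_sims=10_000):
    """Fraction of simulated trials whose two-sided t-test rejects H0."""
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sigma, n_per_group)
        b = rng.normal(true_diff, sigma, n_per_group)
        if ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

print("type I error:", rejection_rate(true_diff=0.0, n_per_group=63))  # ~0.05
print("power       :", rejection_rate(true_diff=5.0, n_per_group=63))  # ~0.80
```

Simulating under the null gives the empirical type-I error; simulating under the alternative gives the empirical power, and hence the type-II error as one minus that value.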

Sample sizes for clinical trials are discussed more fully elsewhere in this book and should be established in discussion with a statistician. Sample sizes should, however, be sufficient to be 90% certain of detecting a statistically significant difference between treatments, based on a set of predetermined primary variables. This means that trials utilising an active control will generally be considerably larger than placebo-controlled studies, in order to exclude a Type II statistical error (i.e. the failure to demonstrate a difference where one exists). Thus, in areas where a substantial safety database is required, for example, hypertension, it may be appropriate to have in the programme a preponderance of studies using a positive control. [Pg.320]

Table 8.1 Type I and type II errors

Suppose now that the trial in the example were a trial in which a difference of 0.5 mmol/l was viewed as an important difference. Maybe this reflects the clinical relevance of such a difference, or perhaps from a commercial standpoint it would be a worthwhile difference to have. Under such circumstances, only having 62.3 per cent power to detect such a difference would be unacceptable; this corresponds to a 37.7 per cent type II error, an almost 40 per cent chance of failing to declare a significant difference. Well, there is only one thing you can do, and that is to increase the sample size. The recalculated values for power are given in Table 8.3 with a doubling of the sample size to 100 patients per group. [Pg.130]
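To see what doubling the sample size buys, the sketch below computes power as a function of group size. The within-group SD of 1.1 mmol/l is a hypothetical value chosen because it roughly reproduces the 62.3 per cent power quoted at 50 patients per group; it is not a figure from the text.

```python
from scipy.stats import norm

delta, sigma, alpha = 0.5, 1.1, 0.05  # mmol/l; sigma is a hypothetical SD

def power(n_per_group):
    """Approximate power of a two-sided, two-group z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * (2 / n_per_group) ** 0.5)
    return norm.cdf(ncp - z_alpha)

print(power(50))   # ~0.623, as quoted
print(power(100))  # ~0.90 after doubling the sample size
```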

Although conventional p-values have no role to play in equivalence or non-inferiority trials, there is a p-value counterpart to the confidence intervals approach. The confidence interval methodology was developed by Westlake (1981) in the context of bioequivalence, and Schuirmann (1987) developed a p-value approach that was mathematically connected to these confidence intervals, although much more difficult to understand. It nonetheless provides a useful way of thinking, particularly when we come later to consider type I and type II errors in this context and also the sample size calculation. We will start by looking at equivalence and use Δ to denote the equivalence margins. [Pg.178]
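Schuirmann's approach is usually described as two one-sided tests (TOST): equivalence is concluded only if the difference is significantly above −Δ and significantly below +Δ. A minimal sketch, with all numbers hypothetical and a deliberately simple degrees-of-freedom choice:

```python
import numpy as np
from scipy import stats

def tost(a, b, margin):
    """Two one-sided t-tests for equivalence (Schuirmann-style).

    Returns the larger of the two one-sided p-values; equivalence is
    concluded at level alpha if this is below alpha.
    """
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    df = len(a) + len(b) - 2  # simple approximation to the degrees of freedom
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 100)
b = rng.normal(0.1, 1.0, 100)
print(tost(a, b, margin=0.5))  # small p-value -> conclude equivalence
```

Rejecting both one-sided nulls at level α corresponds to the (1 − 2α) confidence interval for the difference lying entirely inside (−Δ, +Δ), which is the mathematical connection the text refers to.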

We will focus our attention on the situation of non-inferiority. Within the testing framework the type I error in this case is, as before, the false positive (rejecting the null hypothesis when it is true), which now translates into concluding non-inferiority when the new treatment is in fact inferior. The type II error is the false negative (failing to reject the null hypothesis when it is false), and this translates into failing to conclude non-inferiority when the new treatment truly is non-inferior. The sample size calculations below relate to the evaluation of non-inferiority when using either the confidence interval method or the alternative p-value approach; recall these are mathematically the same. [Pg.187]
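The calculation itself is not reproduced in the excerpt; a common form, assuming the true difference is zero and using a one-sided α, is n = 2·σ²·(z(1−α) + z(1−β))² / Δ² per group, sketched below with hypothetical numbers.

```python
from scipy.stats import norm

def n_noninferiority(margin, sigma, alpha=0.025, beta=0.20, true_diff=0.0):
    """Per-group sample size for a non-inferiority z-test (normal outcome)."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)  # one-sided alpha
    return 2 * sigma ** 2 * z ** 2 / (margin - true_diff) ** 2

print(n_noninferiority(margin=5.0, sigma=10.0))  # ~62.8 -> 63 per group
```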

Finally, it is, in principle, possible to increase the sample size based on the observed treatment difference at an interim stage without affecting the type I error, but great care needs to be taken with regard to dealing with this statistically. Evidence to date suggests that such procedures offer no real advantages over and above a standard interim analysis plan. [Pg.225]

The sample size calculation should be detailed in the trial publication, indicating the estimated outcomes in each of the treatment groups (this will define, in particular, the clinically relevant difference to be detected), the type I error, the type II error or power and, for a continuous primary outcome variable in a parallel group trial, the within-group standard deviation of that measure. For time-to-event data, details of the clinically relevant difference would usually be specified in terms of either the median event times or the proportions event-free at a certain time point. [Pg.258]

A troubling aspect of probabilistic error is that, for a given sample size, reducing the likelihood of one type of inference error increases the likelihood of the other type. If false-positive errors are minimized, false-negative errors are dramatically increased, and vice versa. As we shall see, only by increasing sample size can we decrease the... [Pg.6]

Here we have omitted the continuity correction. This is permissible for large sample sizes. In addition, the method used here is only approximate because it assumes the variance is constant regardless of H1. If H1: μ = 50 were true and H0 thereby false, the curve for H1 in Fig. 1.7 would be the correct one, but the limits 34.7 and 61.3 still define the region of acceptance of H0. Because H0 would be accepted when H1 is true if X falls between 34.7 and 61.3, the area under the curve for H1 between these limits is β, which is the probability of a type II error. The area is labeled β in Fig. 1.7. To determine this area, we use the original limits with the curve for H1 to determine the standardized limits of the β region. [Pg.28]
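A minimal sketch of that area calculation. The standard error of about 6.79 is back-calculated on the assumption that 34.7 and 61.3 form a two-sided 95% acceptance region centred on an H0 mean of 48; neither of those values is stated in the excerpt, so treat the result as illustrative only.

```python
from scipy.stats import norm

lo, hi = 34.7, 61.3   # acceptance region for H0 (from the text)
mu1 = 50.0            # mean under H1 (from the text)
se = (hi - lo) / 2 / norm.ppf(0.975)  # back-calculated SE, ~6.79 (assumption)

# beta = P(lo < X < hi) when H1 is true
beta = norm.cdf((hi - mu1) / se) - norm.cdf((lo - mu1) / se)
print(beta)  # ~0.94 under these assumptions
```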

One technique employed to arrive at an appropriate value has been postulated by L. Torbeck [16], who has taken a statistical and practical approach that, in the absence of any other retest rationale, can be judged as a technically sound plan. According to Torbeck, the question to be answered ("how big should the sample be?") is not easily resolved. One answer is that we first need a prior estimate of the inherent variability, the variance, under the same conditions to be used in the investigation. What is needed is an estimate of the α risk level (defined as the probability of concluding that a significant difference exists between samples when there is none, what statisticians call a type 1 error), the β risk level (β is the probability of concluding that there is no difference between two means when there is one, also known as a type 2 error) and the size of the difference (between the OOS result and the limit value) to be detected. The formula for the sample size for a difference from the mean is expressed as ... [Pg.410]
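The formula itself is elided in the excerpt. For a one-sample comparison of a mean against a limit, the standard textbook form is n = ((z(1−α) + z(1−β))·σ / δ)², sketched below with hypothetical numbers; this is the generic formula, not necessarily Torbeck's exact expression.

```python
from scipy.stats import norm

def n_for_difference(delta, sigma, alpha=0.05, beta=0.10):
    """Standard one-sample size formula: n = ((z_a + z_b) * sigma / delta)^2."""
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)  # one-sided alpha
    return (z * sigma / delta) ** 2

# Hypothetical: detect a 1.0-unit difference with SD 1.5
print(n_for_difference(delta=1.0, sigma=1.5))  # ~19.3 -> 20 replicates
```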

