Big Chemical Encyclopedia


Multiple testing adjustment

Likewise, the method was able to measure changes in abundance as low as 1.5-fold with confidence (p < 0.05), following multiple testing adjustment using the Benjamini-Hochberg method (Fig. 20.7). [Pg.354]
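The Benjamini-Hochberg step-up procedure mentioned above controls the false discovery rate rather than the familywise error rate. A minimal sketch of the adjustment is shown below; the function name and implementation are my own, not taken from the cited source:

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg (FDR) adjusted p-values in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Step up from the largest p-value, enforcing monotonicity of the
    # adjusted values: adjusted p = min over larger ranks of p * m / rank.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```

A probe set is then called significant when its adjusted p-value falls below the chosen FDR level (e.g., 0.05).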

It is essential to control error rates strictly, through careful significance evaluation, in order to save the time and effort of the experiments needed to confirm the selected biomarker candidates. Error rates can be controlled more carefully and strictly by using multiple-testing-adjusted p-values rather than raw p-values. [Pg.78]

In summary, GC adjusts for population stratification without the assumption or estimation of parameters such as the number of subpopulations involved in the study. It provides control of false-positive results caused by population structure as well as by multiple testing. One possible drawback of this method is that the correction of the test statistic is constant across the genome. As a result, GC may have less power in certain situations. [Pg.38]
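Genomic control (GC) deflates every association statistic by a single genome-wide inflation factor, usually estimated as the median of the observed chi-square statistics divided by the median of the chi-square distribution with 1 degree of freedom (about 0.455). A minimal sketch, assuming chi-square statistics as input and the common convention of never inflating (lambda capped at 1):

```python
import statistics

CHI2_1_MEDIAN = 0.4549  # median of the chi-square distribution with 1 df

def genomic_control(chi2_stats):
    """Deflate association statistics by the genome-wide inflation factor.

    The same constant correction is applied to every statistic, which is
    exactly the drawback noted above: the adjustment cannot vary across
    the genome, so GC may lose power in some situations.
    """
    lam = max(1.0, statistics.median(chi2_stats) / CHI2_1_MEDIAN)
    return [x / lam for x in chi2_stats]
```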

As mentioned in the previous section, multiplicity can lead to adjustment of the significance level. There are, however, some situations in which adjustment is not needed, although these situations tend to carry restrictions of other kinds. We will focus this discussion on multiple primary endpoints and, in subsequent sections, use similar arguments to deal with other aspects of multiple testing. [Pg.149]

CPMP (2002) Points to Consider on Multiplicity Issues in Clinical Trials. General aspects of multiple testing were considered in this guideline, together with discussion of adjustment of significance levels and of specific circumstances where adjustment is not needed (see Chapter 10). [Pg.247]

Westfall PH, Young SS. Resampling-Based Multiple Testing: Examples and Methods for P-Value Adjustment. John Wiley & Sons, New York, 1993. [Pg.369]

Previous discussions of multiplicity-adjusted testing of gene expressions, by Dudoit et al. (2002, 2003), for example, generally took a nonmodeling approach. Because the joint distribution of the test statistics is generally not available with this approach, multiplicity adjustments in these papers tend to be calculated based on conservative inequalities (for example, the Bonferroni inequality or Šidák's inequality) or on a joint distribution of independent test statistics. In contrast, here we describe multiplicity adjustment based on the actual joint distribution of the test statistics. However, before describing such adjustments, we first address the construction principles to which all multiple tests should adhere, regardless of the approach taken. These principles do not appear to be as well known in the field of bioinformatics as they are in clinical trials. [Pg.146]
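The two conservative inequalities named above translate directly into single-step p-value adjustments. A sketch of both (standard formulas, not code from the cited papers):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values: p * m, capped at 1.

    Valid under any dependence structure among the tests."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def sidak(pvals):
    """Sidak-adjusted p-values: 1 - (1 - p)^m.

    Exact under independence of the test statistics; slightly less
    conservative than Bonferroni."""
    m = len(pvals)
    return [1.0 - (1.0 - p) ** m for p in pvals]
```

For any input, each Šidák-adjusted p-value is at most the corresponding Bonferroni-adjusted one, which is why Bonferroni is the more conservative of the two.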

The interpretation of the pharmacokinetic variables Cmax, AUCs, and MRT of insulin glulisine was based on 95% confidence intervals, after ln-transformation of the data. These 95% confidence intervals were calculated for the respective mean ratios of pair-wise treatment comparisons. In addition, the test treatment was compared to the reference treatment with respect to the pharmacokinetic variables using an ANOVA with subject, treatment, and period effects, after ln-transformation of the data. The subject sum of squares was partitioned to give a term for sequence (treatment-by-period interaction) and a term for subject within sequence (a residual term). Due to the exploratory nature of the study, no adjustment of the α-level was made for the multiple testing procedure. [Pg.687]

For each tumor found to be statistically significant at the P = 0.05 level (one sided) by use of a statistical test of dose response over the entire study that is adjusted for mortality as appropriate and that is not adjusted for multiple comparisons or multiple testing, the following information should be included ... [Pg.122]

However, circumstances arise where one may desire to use modeling for confirmatory analyses, as will be discussed later. Analyses of this type are hypothesis confirming and inferential in nature. If multiple tests are conducted, adjustment usually must be made to prevent inflation of the Type I error rate. Therefore, for modeling to be used in confirmatory analyses, special care must be taken to protect against Type I error. Our purpose is to draw attention to this issue through discussion of two separate application areas in bioequivalence. [Pg.421]

If this is the case, a series of Student's t-tests (adjusted for multiple comparisons) are conducted using the 0.05 level of significance for Type I (α) error. [Pg.300]

In practical terms, this means that if we perform multiple tests and make multiple inferences, each one at a reasonably low error probability, the likelihood that some of these inferences will be erroneous could be appreciable. To correct for this, one must conduct each individual test at a decreased significance level, with the result that either the power of the tests will be reduced as well, or the sample size must be increased to accommodate the desired power. This could make the trial prohibitively expensive. Statisticians sometimes refer to the need to adjust the significance level so that the experimentwise error rate is controlled as the statistical penalty for multiplicity. [Pg.251]
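The inflation described above is easy to quantify for independent tests: the experimentwise (familywise) error rate is 1 − (1 − α)^m for m tests each run at level α. A small sketch (standard formula; function name is my own):

```python
def familywise_error_rate(alpha, m):
    """Probability of at least one false positive among m independent
    null hypotheses, each tested at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** m
```

At α = 0.05 with 20 independent tests the chance of at least one spurious "significant" result is about 0.64; running each test at the Bonferroni-reduced level α/m = 0.0025 brings it back below 0.05, illustrating the loss of per-test power that the excerpt calls the statistical penalty.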

They must be specified in the study protocol and the appropriate adjustments to the error probabilities must be made. Similarly, one should remember that when multiple tests are performed without adjustment, as would be the case in an exploratory testing situation, one should expect to see spurious statistically significant results that may or may not be meaningful. This last comment applies particularly to statistical tests performed on adverse events and laboratory data. Adverse events reported in a study are often summarized by reporting their incidences, summarized by body system. Often, dozens of categories are listed. When formal statistical tests are applied to these data, some of these tests will result in p values less than the customary 0.05. The researcher should be cognizant of this issue and not jump to conclusions. It is strongly advis-... [Pg.252]

For example, the gene expression values are 12.79, 12.53, and 12.46 for the naive condition and 11.12, 10.77, and 11.38 for the 48-h activated condition from the T-cell immune response data. The sample sizes are n1 = n2 = 3. The sample means are 12.60 and 11.09 and the sample variances are 0.0299 and 0.0937, resulting in a standard error of 0.2029 for the difference in means. The t-statistic is (12.60 − 11.09)/0.2029 = 7.44 and the degrees of freedom are n1 + n2 − 2 = 3 + 3 − 2 = 4. Then we can find a p-value of 0.003. If using Welch's t-test, the t-statistic is still 7.44 since n1 = n2, but we find a p-value of 0.0055 since the degrees of freedom are 3.364 rather than 4. We claim that the probe set is differentially expressed under the two conditions because its p-value is less than a predetermined significance level (e.g., 0.05). In this manner, p-values for the other probe sets can be calculated and interpreted. In Section 4.4, the overall interpretation for p-values of all of the probe sets is described with adjustments for multiple testing. The Student's t-test and Welch's t-test are used for samples drawn independently from two conditions. When samples from the two conditions are paired, a different version called the paired t-test is more appropriate than independent t-tests ... [Pg.74]
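The pooled-variance computation above can be reproduced directly. A sketch of the two-sample Student t-test with the excerpt's data (carrying full precision gives a t-statistic near 7.40 rather than the 7.44 obtained from the rounded intermediate values):

```python
import math

def pooled_t_test(x, y):
    """Two-sample Student t-test assuming equal variances.

    Returns (t_statistic, degrees_of_freedom); the p-value would then be
    looked up in the t distribution with those degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Unbiased sample variances (divide by n - 1).
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled variance, then the standard error of the mean difference.
    pooled_var = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    se = math.sqrt(pooled_var * (1.0 / nx + 1.0 / ny))
    return (mx - my) / se, nx + ny - 2

t, df = pooled_t_test([12.79, 12.53, 12.46], [11.12, 10.77, 11.38])
```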

We also asked questions about 11 common health problems that could appear with aging [respiratory, cardiac, digestive, urinary, muscular, ear, nose, or throat (ENT), diabetes, cholesterol, thyroid, insomnia, and nervous breakdown problems]. The distribution of LOG and NOG subjects was compared between the group who reported an illness and the group who did not. No difference appeared between the two distributions for subjects who declared they had nervous breakdown, respiratory, digestive, cholesterol, thyroid, or, quite surprisingly, ENT problems (test, p < 0.005; Bonferroni correction for multiple tests, adjusted α = 0.005). [Pg.76]

In the framework of a generalized linear model, linear model decompositions (contrasts) are done in the usual traditional way. For example, if one compares controls against each of the test compounds, this is an a priori linear contrast and sufficient degrees of freedom exist to avoid a multiple comparisons adjustment. However, if one also compares each compound to every other one, or against a DEET positive control, then one is making more comparisons than allowed for with the degrees of freedom (the multiple comparisons scenario) and an adjustment, either on the test statistic or the p-value, is needed. A Bonferroni adjustment is an example of an adjustment on the p-value; better methods exist, and a contemporary one is to adjust for the false discovery rate. [Pg.277]



© 2024 chempedia.info