
Null comparison methods

In a null comparison measurement of resistance, the effect of an unknown resistance must be compared with the effect of a variable standard resistance under conditions as identical as possible. Therefore, the unknown and standard resistances are placed in identical circuits in such a way that the resulting voltage or current in each circuit can be compared. Then the standard is varied until the difference in voltage or current between the two circuits is zero. Several methods for performing this comparison have been devised, of which the Wheatstone bridge is by far the most common. Comparison methods for resis-... [Pg.247]
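The balance condition behind the Wheatstone bridge can be illustrated with a short calculation. The sketch below is a minimal illustration, not taken from the source: it assumes one common arm configuration, in which the null condition gives R_unknown = R_standard × (R2/R1), and all resistance values are hypothetical.

```python
# Minimal sketch of the Wheatstone-bridge null condition (hypothetical values).
# At balance the detector reads zero and, for this arm configuration,
# R_unknown / R_standard = R2 / R1.

def bridge_unknown_resistance(r1_ohm: float, r2_ohm: float, r_standard_ohm: float) -> float:
    """Unknown resistance implied by a balanced Wheatstone bridge."""
    return r_standard_ohm * (r2_ohm / r1_ohm)

if __name__ == "__main__":
    # Ratio arms of 100 and 250 ohm; variable standard adjusted to 47.0 ohm at null.
    print(bridge_unknown_resistance(100.0, 250.0, 47.0))  # -> 117.5 ohm
```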

Some analytical procedures compare a property of the analyte (or the product of a reaction with the analyte) with a standard such that the property being tested matches or nearly matches that of the standard. For example, in early colorimeters, the color produced as the result of a chemical reaction of the analyte was compared with the color produced by reaction of standards. If the concentration of the standard was varied by dilution, it was possible to obtain a fairly exact color match. The concentration of the analyte was then equal to the concentration of the standard after dilution. Such a procedure is called a null comparison or isomation method. ... [Pg.192]
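The dilution arithmetic behind the isomation idea is simple. The sketch below is not from the source and uses hypothetical concentrations and volumes; it shows how the analyte concentration is read off as the concentration of the standard after the dilution that produced a colour match.

```python
# Minimal sketch of the null-comparison (isomation) calculation: once the diluted
# standard matches the analyte, C_analyte = C_standard * V_standard / V_total.
# All numbers are hypothetical.

def diluted_standard_concentration(c_standard: float, v_standard: float, v_total: float) -> float:
    """Concentration of the standard after dilution to v_total."""
    return c_standard * v_standard / v_total

# A 10.0 mg/L standard, diluted from 5.0 mL to a total of 8.0 mL to match the analyte colour:
print(diluted_standard_concentration(10.0, 5.0, 8.0))  # -> 6.25 mg/L, the estimated analyte concentration
```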

Like the difference paired comparison method, the A-non-A test has four possible serving sequences (AA, BB, AB, BA) that are randomized across panelists, with each sequence appearing an equal number of times. As in paired comparison, the null hypothesis is of no distinction between the samples and the alternative hypothesis is Pa >0.5. [Pg.4422]
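A hedged sketch of the significance test that goes with such a design is shown below: under the null hypothesis the probability of a correct response is 0.5 and the alternative is one-sided (Pa > 0.5), so a one-sided binomial test applies. The panel size and response count are hypothetical.

```python
# Minimal sketch of the one-sided binomial test behind an A-not-A / paired-comparison
# experiment: H0 is Pa = 0.5, the alternative is Pa > 0.5. Counts are hypothetical.
from scipy.stats import binomtest

n_panelists = 40
n_correct = 28  # hypothetical number of correct ("A") responses

result = binomtest(n_correct, n_panelists, p=0.5, alternative="greater")
print(f"one-sided p-value = {result.pvalue:.4f}")  # reject H0 if below the chosen alpha
```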

This is an example of a paired data set, since the acquisition of samples over an extended period introduces a substantial time-dependent change in the concentration of monensin. The comparison of the two methods must be done with the paired t-test, using the following null and two-tailed alternative hypotheses... [Pg.93]
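The paired t-test itself is readily carried out in software. The sketch below is a minimal illustration, not the book's worked example; the monensin concentrations are hypothetical.

```python
# Minimal sketch of the paired t-test for comparing two methods on the same samples.
# H0: mean difference = 0; two-tailed alternative: mean difference != 0.
import numpy as np
from scipy.stats import ttest_rel

method_1 = np.array([1.02, 0.98, 1.10, 1.05, 0.95, 1.00])  # hypothetical monensin results
method_2 = np.array([1.00, 0.97, 1.08, 1.07, 0.93, 0.99])

t_stat, p_value = ttest_rel(method_1, method_2)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.3f}")
```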

The results of such multiple paired comparison tests are usually analyzed with Friedman's rank sum test [4] or with more sophisticated methods, e.g. the one using the Bradley-Terry model [5]. A good introduction to the theory and applications of paired comparison tests is given by David [6]. Since Friedman's rank sum test is based on less restrictive ordering assumptions, it is a robust alternative to two-way analysis of variance, which rests upon the normality assumption. For each panellist (and presentation) the three products are scored, i.e. a product gets a score of 1, 2 or 3 when it is preferred twice, once or not at all, respectively. The rank scores are summed for each product i. One then tests the hypothesis that this result could be obtained under the null hypothesis that there is no difference between the three products and that the ranks were assigned randomly. Friedman's test statistic for this reads... [Pg.425]
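A minimal sketch of the test is given below, using a hypothetical rank matrix for six panellists and three products; it computes the Friedman statistic numerically rather than reproducing the formula referred to in the text.

```python
# Minimal sketch of Friedman's rank sum test: rows are panellists, columns are the
# three products, entries are the rank scores (1-3). The rank matrix is hypothetical.
import numpy as np
from scipy.stats import friedmanchisquare

ranks = np.array([
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
    [1, 2, 3],
    [1, 3, 2],
    [2, 1, 3],
])

# H0: ranks were assigned at random (no difference between the products)
stat, p_value = friedmanchisquare(ranks[:, 0], ranks[:, 1], ranks[:, 2])
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.3f}")
```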

The test to determine whether the bias is significant incorporates the Student's t-test. The method for calculating the t-test statistic is shown in equation 38-10 using MathCad symbolic notation. Equations 38-8 and 38-9 are used to calculate the standard deviation of the differences between the sums of X and Y for both analytical methods A and B, whereas equation 38-10 is used to calculate the standard deviation of the mean. The t-table statistic for comparison with the test statistic is given in equations 38-11 and 38-12. The F-statistic and t-statistic tables can be found in standard statistical texts such as references [1-3]. The null hypothesis (H0) states that there is no systematic difference between the two methods, whereas the alternative hypothesis (H1) states that there is a significant systematic difference between the methods. It can be seen from these results that the bias between these two methods is significant and that METHOD B gives results biased by 0.084 above those obtained by METHOD A. The estimated bias is given by the Mean Difference calculation. [Pg.189]
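Equations 38-8 to 38-12 are not reproduced here; the sketch below is a by-hand version of the same kind of bias t-test on hypothetical paired results, computing the standard deviation of the differences, the standard deviation of the mean difference, and the t statistic.

```python
# Minimal by-hand sketch of the bias t-test described above (hypothetical data).
import numpy as np
from scipy.stats import t as t_dist

method_a = np.array([10.10, 9.80, 10.40, 10.00, 9.90])
method_b = np.array([10.25, 9.85, 10.52, 10.08, 9.97])

d = method_b - method_a
n = d.size
mean_diff = d.mean()                   # estimated bias (Mean Difference)
sd_diff = d.std(ddof=1)                # standard deviation of the differences
sd_mean = sd_diff / np.sqrt(n)         # standard deviation of the mean difference
t_calc = mean_diff / sd_mean
t_crit = t_dist.ppf(0.975, df=n - 1)   # two-tailed table value at alpha = 0.05

print(f"bias = {mean_diff:.3f}, t = {t_calc:.2f}, t_crit = {t_crit:.2f}")
# H0 (no systematic difference) is rejected when |t| > t_crit.
```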

Hypothesis testing In classical statistics, a formal procedure for testing the long-term expected truth of a stated hypothesis. The statistical method involves comparison of two or more sets of sample data. On the basis of an expected distribution of the data, the test leads to a decision on whether to accept the null hypothesis (usually that there is no difference between the samples) or to reject that hypothesis and accept an alternative one (usually that there is some difference between the samples). [Pg.180]

From empirical comparisons of various proposed methods of analysis of orthogonal saturated designs (Hamada and Balakrishnan, 1998; Wang and Voss, 2003), Lenth's method can be shown to have competitive power over a variety of parameter configurations. It remains an open problem to prove that the null case is the least favourable parameter configuration. [Pg.274]
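As an illustration of the method being assessed, the following is a minimal sketch of Lenth's pseudo standard error calculation for a saturated design; the effect estimates are hypothetical and the code is not taken from the cited studies.

```python
# Minimal sketch of Lenth's method: the pseudo standard error (PSE) is formed from
# trimmed median absolute effects, and each effect is judged by its Lenth t-ratio.
import numpy as np

def lenth_pse(effects: np.ndarray) -> float:
    """Pseudo standard error (Lenth, 1989)."""
    s0 = 1.5 * np.median(np.abs(effects))
    trimmed = np.abs(effects)[np.abs(effects) < 2.5 * s0]
    return 1.5 * np.median(trimmed)

effects = np.array([21.5, 1.2, -0.8, 14.3, 0.5, -1.1, 0.9])  # hypothetical contrasts
pse = lenth_pse(effects)
print(np.round(effects / pse, 2))  # large |t-ratio| flags potentially active effects
```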

The predictive techniques are rather accurate. However, significant errors have been observed in a few cases (4, 13, 27, 40). No direct comparison between the three predictive methods is available. The authors of the parachor method (27) claim that their method yields equal or better results than the PDD method for the cases considered in their study; it is believed (42), however, that the latter is more reliable, and it is recommended. The Weimer-Prausnitz method is probably less accurate than the PDD method, but it is more general. For example, Hanson and Van Winkle (40) report that their data on the hexane-hexene pair were not successfully correlated by the WP method. The Helpinstill-Van Winkle modification is recommended over the WP method. Recently, Null and Palmer (43) have presented a modification of the WP method which provides better accuracy but is less general. The PDD method should be used cautiously when extrapolating with respect to temperature (27). When the GLC method is used, reliable results are expected. Evaluation of infinite dilution relative volatilities is recommended (36). [Pg.71]

Table 2. Comparison of clustering methods and distance functions. The agreement between the sets of clusters resulting from the four clustering methods was measured using the k test. The standard deviations of the statistic under the null hypothesis were estimated to range between 0.014 and 0.023 from multiple simulations. From Chen and Murphy (2005).
The relation between the null hypothesis situation of no difference (difference = 0) and the alternative hypothesis of the presence of a real difference (difference = Δ) is shown schematically in Figure 14-37. The figure outlines the hypothetical situation corresponding to a set of repeated method comparison studies that results in observed differences D distributed around the true difference, which is zero under the null hypothesis of no difference and equal to Δ under the alternative hypothesis. The larger the sample size, the narrower the dispersion of the observed differences around the true values. Thus, for a given Δ and Type I error, the power increases with the sample size. [Pg.391]

Figure 14-37 Schematic illustration of the distributions of differences D under the null hypothesis (H0) of no real difference and the alternative hypothesis (HA) of the presence of a real difference Δ. The vertical dotted line indicates the limit of statistical significance; α is the Type I error (5%), and 1 − β is the power (90%). (From Linnet K. Necessary sample size for method comparison studies based on regression analysis. Clin Chem 1999;45:882-94.)
In conclusion, planning a method comparison study to achieve a given power for detection of medically notable differences should be considered. In this way, a method comparison study is likely to be conclusive either the null hypothesis of no difference is accepted, or the presence of a relevant difference is established. Otherwise, a statistically nonsignificant slope deviation from unity or intercept deviation from zero or both may either imply that the null hypothesis is true, or be an example of a Type II error (i.e., an overlooked real difference of medical importance). [Pg.395]
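A rough feel for the sample-size planning described above can be obtained with a normal approximation: the observed mean difference D is compared with the significance limit, and power is the probability of exceeding that limit when the true difference is Δ. The sketch below is a simplified illustration with hypothetical values, not the regression-based calculation of Linnet (1999).

```python
# Minimal normal-approximation sketch of power for detecting a mean difference delta
# (cf. Figure 14-37): D ~ N(0, se) under H0 and D ~ N(delta, se) under HA.
import numpy as np
from scipy.stats import norm

def power_for_mean_difference(delta, sd_diff, n, alpha=0.05):
    se = sd_diff / np.sqrt(n)            # dispersion of D narrows as n grows
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided significance limit
    # probability that D falls beyond the limit when the true difference is delta
    return norm.sf(z_crit - delta / se) + norm.cdf(-z_crit - delta / se)

for n in (20, 40, 80):
    print(n, round(power_for_mean_difference(delta=0.5, sd_diff=1.0, n=n), 3))
# power increases with sample size for a fixed delta and Type I error
```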

In the case of large numbers of measurements in both sets, the z test, discussed in the previous section, can be modified to take into account a comparison of two sets of data. More often, both sets contain only a few results, and we must use the t test. To illustrate, let us assume that N1 replicate analyses by analyst 1 yielded a mean value of x̄1 and that N2 analyses by analyst 2 obtained by the same method gave x̄2. The null hypothesis states that the two means are identical and that any difference is the result of random errors. Thus, we can write H0: μ1 = μ2. Most often when testing differences in means, the alternative hypothesis is μ1 ≠ μ2, and the test is a two-tailed test. In some situations, however, we could test μ1 > μ2 or μ1 < μ2 and use a one-tailed test. We will assume here that a two-tailed test is used. [Pg.154]
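A minimal sketch of this two-sample comparison is shown below, with hypothetical replicate results for the two analysts and the classical pooled-variance t-test.

```python
# Minimal sketch of the two-sample t-test: H0: mu1 = mu2, two-tailed alternative.
# Replicate results for the two analysts are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

analyst_1 = np.array([14.3, 14.1, 14.5, 14.2, 14.4])  # N1 replicates
analyst_2 = np.array([14.6, 14.8, 14.7, 14.5])        # N2 replicates

t_stat, p_value = ttest_ind(analyst_1, analyst_2, equal_var=True)  # pooled-variance form
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}")
```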

Bonferroni's test is the most straightforward of several statistical methodologies that can appropriately be used in the context of multiple comparisons. That is, Bonferroni's test can appropriately be used to compare pairs of means after rejection of the null hypothesis following a significant omnibus F test. Imagine that we have c groups in total. Bonferroni's method makes use of the following inequality ... [Pg.160]

When using Bonferroni's method, the null hypothesis associated with a pairwise comparison is rejected if the calculated test statistic, that is, ... [Pg.161]

Bonferroni's test is overly conservative, in that the critical values required for rejection need not be as large as they are. In other words, using a less conservative method may result in more null hypotheses being rejected. The reason that Bonferroni's method is so conservative is that it does not in any way account for the extent of correlation among the various hypotheses being tested. If a method could take into account the overlap, or lack thereof, of the various hypotheses, the critical values would not need to be defined as conservatively as with Bonferroni's. In this section, we therefore discuss another analytical strategy for multiple comparisons, Tukey's honestly significant difference (HSD) test. [Pg.163]
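The contrast between the two strategies can be sketched as follows. This is a simplified illustration with hypothetical data: the Bonferroni step uses plain pairwise t-tests judged against α/c rather than the pooled-error form implied by the text's equations, and Tukey's HSD is taken from SciPy.

```python
# Minimal sketch: Bonferroni-adjusted pairwise t-tests vs Tukey's HSD on three
# hypothetical groups.
from itertools import combinations
import numpy as np
from scipy.stats import ttest_ind, tukey_hsd

groups = {
    "A": np.array([5.1, 5.3, 4.9, 5.2, 5.0]),
    "B": np.array([5.6, 5.8, 5.7, 5.5, 5.9]),
    "C": np.array([5.2, 5.4, 5.1, 5.3, 5.2]),
}

alpha = 0.05
pairs = list(combinations(groups.items(), 2))
bonferroni_level = alpha / len(pairs)  # each pairwise test judged against alpha / c

for (name1, data1), (name2, data2) in pairs:
    _, p = ttest_ind(data1, data2)
    print(f"{name1} vs {name2}: p = {p:.4f}, "
          f"reject at {bonferroni_level:.4f}: {p < bonferroni_level}")

# Tukey's HSD treats the family of comparisons jointly and is less conservative
print(tukey_hsd(*groups.values()))
```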

Based on model population analysis, we propose here to perform model comparison by deriving an empirical distribution of the difference in RMSEP or RMSECV between two models (variable sets), followed by testing the null hypothesis that this difference is zero. Without loss of generality, we describe the proposed method by taking the distribution of the difference in RMSEP as an example. We assume that the data X consist of m samples in rows and p variables in columns, and that the target value Y is an m-dimensional column vector. Two variable sets, say V1 and V2, selected from the p variables can then be compared using the MPA-based method described below. [Pg.9]
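The following is a hedged sketch of the general idea, not the authors' implementation: ordinary least squares stands in for whatever regression model is actually used, the data are simulated, and the variable sets V1 and V2 are chosen arbitrarily. A population of random sub-datasets yields an empirical distribution of RMSEP differences, which is then tested against zero.

```python
# Minimal sketch of MPA-style model comparison via the empirical distribution of
# RMSEP differences between two variable sets (simulated data, OLS as the model).
import numpy as np
from scipy.stats import ttest_1samp
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
m, p = 100, 10
X = rng.normal(size=(m, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.2, size=m)

v1 = [0, 1]      # hypothetical variable set V1
v2 = [0, 2, 3]   # hypothetical variable set V2

def rmsep(train, test, cols):
    model = LinearRegression().fit(X[train][:, cols], y[train])
    pred = model.predict(X[test][:, cols])
    return float(np.sqrt(np.mean((y[test] - pred) ** 2)))

diffs = []
for _ in range(500):  # population of random sub-datasets
    train, test = train_test_split(np.arange(m), test_size=0.3)
    diffs.append(rmsep(train, test, v1) - rmsep(train, test, v2))

# H0: the mean RMSEP difference between the two variable sets is zero
t_stat, p_value = ttest_1samp(diffs, 0.0)
print(f"mean RMSEP difference = {np.mean(diffs):.4f}, p = {p_value:.3g}")
```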

The Friedman test could alternatively be used in the reverse form: assuming that the three analytical methods give indistinguishable results, the same procedure could be used to test for differences between the four plant extracts. In this case k and n are 4 and 3 respectively, and the reader may care to verify that R is 270 and that the resulting χ² value is 9.0. This is higher than the critical value for P = 0.05, n = 3, k = 4, which is 7.4. So in this second application of the test we can reject the null hypothesis, and state that the four samples do differ in their pesticide levels. Further tests, which would allow selected comparisons between pairs of samples, are then available. [Pg.167]
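Assuming that R here denotes the sum of the squared rank totals, ΣRi², the quoted value of 9.0 can be checked directly from the usual form of the Friedman statistic; the sketch below is only that arithmetic.

```python
# Check of the quoted Friedman statistic, assuming R = sum of squared rank totals:
# chi2 = 12 / (n * k * (k + 1)) * sum(R_i**2) - 3 * n * (k + 1)
n, k = 3, 4            # n blocks (methods), k treatments (plant extracts)
sum_R_squared = 270
chi2 = 12.0 / (n * k * (k + 1)) * sum_R_squared - 3 * n * (k + 1)
print(chi2)  # -> 9.0, which exceeds the critical value of 7.4 at P = 0.05
```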

