
Inferences about Means

Depending on the circumstances at hand, several different types of mean comparisons can be made. In this section we review the method for the comparison of two means with independent samples. Other applications, such as a comparison of means with matched samples, can be found in statistical texts. Suppose, for example, we have two methods for the determination of lead (Pb) in orchard leaves. The first method is based on the electrochemical method of potentiometric stripping analysis [1], and the second is based on the method of atomic absorption spectroscopy [2]. We perform replicate analyses of homogeneous aliquots prepared by dissolving the orchard leaves into one homogeneous solution and obtain the data listed in Table 3.1. [Pg.49]

We wish to perform a test to determine whether the difference between the two methods is statistically significant. In other words, can the difference between the two means be attributed to random chance alone, or are other significant experimental factors at work? The hypothesis test is performed by formulating an appropriate null hypothesis and an alternative hypothesis. [Pg.49]

In developing the hypothesis, note that a difference of zero between the two means is equivalent to a hypothesis stating that the two means are equal. [Pg.49]

To make the test, we compute a test statistic based on small sample measurements such as those described in Table 3.1 and compare it with tabulated values. The result [Pg.49]

To make the test for the comparison of means described in Equation 3.13, we compute the test statistic, t_calc, [Pg.50]
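To make the elided computation concrete, here is a minimal Python sketch of a pooled two-sample t test of the kind described above. The replicate values are invented placeholders, not the Table 3.1 data, and the pooled-variance form assumes the two methods have similar precision.

```python
# Minimal sketch: pooled two-sample t test for comparing two method means.
# The replicate values are hypothetical placeholders, not the Table 3.1 data.
import numpy as np
from scipy import stats

psa = np.array([404.0, 410.0, 401.0, 398.0, 407.0])  # hypothetical method 1 (stripping analysis) replicates
aas = np.array([397.0, 392.0, 400.0, 395.0, 389.0])  # hypothetical method 2 (atomic absorption) replicates

n1, n2 = len(psa), len(aas)
s2_pooled = ((n1 - 1) * psa.var(ddof=1) + (n2 - 1) * aas.var(ddof=1)) / (n1 + n2 - 2)
t_calc = (psa.mean() - aas.mean()) / np.sqrt(s2_pooled * (1.0 / n1 + 1.0 / n2))

t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)  # two-sided critical value at the 95% level
print(f"t_calc = {t_calc:.3f}, t_crit = {t_crit:.3f}")
# H0 (equal means) is rejected when |t_calc| > t_crit; stats.ttest_ind(psa, aas) reproduces t_calc.
```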


If this is so, it implies that the expected value of the ratio of means will depend on the subjects recruited. This is, however, a quite undesirable property and shows that such a ratio is in fact meaningless. The only sensible thing to do is to measure on the additive scale. There are good reasons for supposing that this is more likely to be log-AUC and not AUC. To sum up, we are not interested in making inferences about ratios of means and medians but in making inferences about means (and medians) of ratios. [Pg.369]
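As a toy illustration of the distinction drawn here (all numbers invented), the ratio of the mean AUCs, the mean of the per-subject ratios, and the geometric mean ratio obtained by averaging on the log-AUC scale generally differ:

```python
# Toy illustration with invented AUC values: the ratio of means, the mean of per-subject
# ratios and the geometric mean ratio (averaging on the log-AUC scale) are all different.
import numpy as np

auc_test = np.array([12.0, 30.0, 8.0, 22.0])   # hypothetical AUCs, test formulation
auc_ref = np.array([10.0, 40.0, 10.0, 20.0])   # hypothetical AUCs, reference formulation

print(auc_test.mean() / auc_ref.mean())            # ratio of the two means
print((auc_test / auc_ref).mean())                 # mean of the per-subject ratios
print(np.exp(np.log(auc_test / auc_ref).mean()))   # geometric mean ratio via log-AUC
```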

There are many other distributions used in statistics besides the normal distribution. Common ones are the χ²- and F-distributions (see later) and the binomial distribution. The binomial distribution involves binomial events, i.e. events for which there are only two possible outcomes (yes/no, success/failure). The binomial distribution is skewed (to the right when π < 0.5), and is characterised by two parameters: n, the number of individuals in the sample (or repetitions of a trial), and π, the true probability of success for each individual or trial. The mean is nπ and the variance is nπ(1 − π). The binomial test, based on the binomial distribution, can be used to make inferences about probabilities. If we toss a true coin a large number of times we expect the coin to fall heads up on 50% of the tosses. Suppose we toss the coin 10 times and get 7 heads; does this mean that the coin is biased? From a binomial table we can find that P(x=7)=0.117 for n=10 and π=0.5. Since 0.117>0.05 (P=0.05 is the commonly... [Pg.299]
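A quick check of the quoted probability, and the corresponding exact test, can be done in Python (the binomial test call assumes a reasonably recent SciPy):

```python
# Coin-toss example from the passage: 7 heads in 10 tosses of a supposedly fair coin.
from scipy import stats

n, k, p = 10, 7, 0.5
print(stats.binom.pmf(k, n, p))           # P(X = 7) = 0.117..., as quoted from the binomial table
print(stats.binomtest(k, n, p).pvalue)    # two-sided exact test p-value (about 0.34): no evidence of bias
```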

These predictions, as stated, are valid whenever the two sources emit at the same time. This is an extremely tough experimental requirement to achieve. Usually, the two independent sources emit particles in a random way. This means that sometimes the two independent waves arrive precisely at the same time, corresponding to complete overlapping, while at other times they do not even partially mix. If the independent waves do not overlap at the mixing region, no inference about the reality of the quantum waves can be drawn. Between these two extreme cases there are, of course, all the intermediate cases of partial superposition. [Pg.525]

The foregoing statement is not to be construed as meaning that chemical methods, as well as physical methods, cannot be used as the basis for inference about the nature of resonating structures. This inference is based on the resultant bond type, and not on the direct identification of individual structures. [Pg.569]

Increasing numbers of protein sequences are available as a result of the growing number of genome sequences now accessible in public databanks. It is estimated that there were experimentally determined structures for 1% of the protein sequences known in 1999, meaning that inferences about structure had to rely on information from models for the remaining 99%. [Pg.430]

The relationship between kerogens and asphaltenes has taken on new meaning in the last few years, especially for relatively low-maturity oils. Asphaltene composition can give inferences about the source kerogen composition when kerogen data are unavailable (53, 58-59). Asphaltenes are defined as materials soluble or peptized in oil or bitumen that precipitate when... [Pg.20]

Data are not random but are representative in other ways. This may mean, for example, that the data are a stratified sample applicable to the real-world situation for the assessment scenario of interest. In this case, frequentist methods can be used to make inferences for the strata that are represented by the data (e.g. particular exposed subpopulations), but not necessarily for all aspects of the scenario. However, for the components of the scenario for which the data cannot be applied, there is a lack of representative data. For example, if the available data represent one subpopulation, but not another, frequentist methods can be applied to make inferences about the former, but could lead to biased estimation of the latter. Bias correction methods, such as comparison with benchmarks, use of surrogate (analogous) data or more formal application of expert judgement, may be required for the latter. [Pg.51]

Consider now robustness. If the estimators βi are computed from independent response variables then, as noted in Section 1, the estimators have equal variances and are usually at least approximately normal. Thus the usual assumptions, that estimators are normally distributed with equal variances, are approximately valid, and we say that there is inherent robustness to these assumptions. However, the notion of robust methods of analysis for orthogonal saturated designs refers to something more. When making inferences about any effect βi, all of the other effects βk (k ≠ i) are regarded as nuisance parameters, and robust means that the inference procedures work well even when several of the effects βk are large in absolute value. Lenth's method is robust because the pseudo standard error is based on the median absolute estimate and hence is not affected by a few large absolute effect estimates. The method would still be robust even if one used the initial estimate s0 of σ, rather than the adaptive estimator sL, for the same reason. [Pg.275]
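A minimal sketch of the pseudo standard error just described (Lenth's construction) is given below; the effect estimates are invented, and the 1.5 and 2.5 constants are the ones conventionally used in Lenth's method.

```python
# Sketch of Lenth's pseudo standard error (PSE) for effect estimates from a saturated
# two-level design; the effect values are invented, and 1.5 / 2.5 are Lenth's usual constants.
import numpy as np

effects = np.array([21.9, 1.2, -0.8, 9.7, 0.4, -1.5, 0.9])  # hypothetical effect estimates

s0 = 1.5 * np.median(np.abs(effects))                 # initial (non-adaptive) estimate of sigma
kept = np.abs(effects)[np.abs(effects) < 2.5 * s0]    # discard estimates that look like real effects
pse = 1.5 * np.median(kept)                           # adaptive pseudo standard error

print(f"s0 = {s0:.3f}, PSE = {pse:.3f}")
# Ratios effect / PSE are then compared with Lenth's tabulated critical values.
```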

One of the most useful quantities for providing a measure of the location of data, making inferences about the true mean, μ, and comparing sets of data by statistical methods is the sample mean, x̄, defined as the sum of all the values of a set of data divided by the number of observations, n, making up the data sample. Mathematically, the mean can be symbolized by... [Pg.741]
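Written out in standard notation, the definition just given is

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$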

Studies of cellulose anhydro derivatives containing 2,3- and 3,6-anhydro rings by means of infrared spectroscopy (14,15) have made it possible to draw inferences about the nature of the conformational... [Pg.92]

Most practical exercises are based on a limited number of individual data values (a sample) which are used to make inferences about the population from which they were drawn. For example, the lead content might be measured in blood samples from 100 adult females and used as an estimate of the adult female lead content, with the sample mean (x̄) and sample standard deviation (s) providing estimates of the true values of the underlying population mean (μ) and the population standard deviation (σ). The reliability of the sample mean as an estimate of the true (population) mean can be assessed by calculating the standard error of the sample mean (often abbreviated to standard error or SE), from ... [Pg.268]
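The expression truncated above is almost certainly the standard one, with s the sample standard deviation and n the number of observations:

$$\mathrm{SE}(\bar{x}) = \frac{s}{\sqrt{n}}$$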

Estimation is the use of the sample data to make inferences about the population that the sample represents. With qualitative data, we would usually be interested in estimating the proportion or percentage of individuals in the population having some outcome or characteristic; with ordinal data we would probably wish to estimate the population median, and with quantitative data the population mean. Although percentages, medians and means are most often of interest, it is possible to use any sample statistic to estimate the corresponding population value; thus in Sections 7.3.1.3.3 and 7.3.1.3.4 we were interested in whether a sample g1 or g2 was consistent with the true or population values being zero. [Pg.373]

There is often more than one correct way to write condensed structural formulas. You must often make inferences about what a condensed formula means according to valence rules, especially in structures with C=O as shown in parts (a) and (d). [Pg.6]

The sample mean is a poor measure of central tendency when the distribution is heavily skewed. Despite our best efforts at designing well-controlled clinical trials, the data that are generated do not always compare with the (deliberately chosen) tidy examples featured in this book. When we wish to make an inference about the difference in typical values among two or more independent populations, but the distributions of the random variables or outcomes are not reasonably symmetric, nonparametric methods are more appropriate. Unlike parametric methods such as the two-sample t test, nonparametric methods do not require any assumption about the shape of a distribution for them to be used in a valid manner. As the next analysis method illustrates, nonparametric methods do not rely directly on the value of the random variable. Rather, they make use of the rank order of the value of the random variable. [Pg.150]
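The "next analysis method" is not included in this excerpt; a rank-based procedure in the spirit described, the Wilcoxon rank-sum (Mann-Whitney U) test, is sketched below with invented, skewed data.

```python
# Rank-based comparison of two independent samples (Wilcoxon rank-sum / Mann-Whitney U test);
# the skewed response values are invented placeholders.
from scipy import stats

group_a = [1.2, 0.8, 5.4, 0.9, 1.1, 7.3, 0.7]
group_b = [2.5, 3.1, 9.8, 2.2, 4.0, 2.8, 11.4]

res = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(res.statistic, res.pvalue)  # the test uses only the rank order of the observations
```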

A number of studies have used xenoliths from Archaean subcontinental lithosphere to make inferences about the oxidation state of the early mantle. For example Woodland and Koch (2003) showed that the subcontinental lithosphere beneath the Kaapvaal Craton displays a systematic decrease in oxygen fugacity with depth (Fig. 5.11). However, it should be remembered that the highly depleted nature of the Archaean subcontinental lithosphere means that it is atypical of the mantle as a whole and may not therefore be useful as an indicator of mantle redox conditions (Chapter 3, Section 3.1.3.2). [Pg.198]

For most target sequences there will be no structures found with a 100% sequence match. However, in about half of all cases there will be several protein structures that are significantly similar in sequence, so some structural information can be inferred about the target sequence [4]. Usually, this means that the overall fold of one or more domains can be inferred, together with the location of binding site residues and other special sites. [Pg.291]

In order to make inferences about the parameter estimates, an estimate of the error variance (the mean square error), σ², is needed. The estimator for σ² is the same as for linear regression. [Pg.104]
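In the usual linear-regression form (notation assumed here rather than taken from the source), with n observations, p estimated parameters and fitted values ŷᵢ, this estimator is

$$\hat{\sigma}^2 = \mathrm{MSE} = \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{n - p}$$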

It is instructive to think about the meaning of the prior distribution in order to understand why this happens. Having a Normal prior distribution for a true population mean and then observing a sample from that population is formally analogous to the following. One has observed an infinity of population means (these are what make up the prior); one has drawn a population at random from this infinity (but one does not know which one), observed a sample from it, and now proceeds to make inferences about the mean. Thus, implicitly, one is comparing the sample mean with an infinity of known population means. The comparison with some locally observed sample means therefore adds no further useful information, and it is also irrelevant whether or not the sample mean is the largest in this sample. [Pg.160]

This is an issue where consensus now seems to have been achieved. An ingenious theorem due to Fieller (Fieller, 1940, 1944) enables one to calculate a confidence interval for the ratio of two means. The approach does not require transformation of the original data. (Edgar C. Fieller, 1907-1960, is an early example of a statistician employed in the pharmaceutical industry. He worked for the Boots company in the late 1930s and 1940s.) For many years this was a common approach to making inferences about the ratio of the two mean AUCs in the standard bioequivalence experiment (Locke, 1984). [Pg.368]
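Below is a hedged sketch of a Fieller-type interval for the ratio of two independent sample means, in the simplest case with no covariance term; the data, the nominal 95% level and the degrees of freedom are illustrative assumptions, not taken from Fieller's or Locke's papers.

```python
# Hedged sketch of a Fieller-type 95% confidence interval for the ratio of two independent
# sample means (simplest case: no covariance term; data and degrees of freedom are illustrative).
import numpy as np
from scipy import stats

auc_test = np.array([95.0, 110.0, 88.0, 102.0, 99.0, 121.0])   # hypothetical AUCs
auc_ref = np.array([100.0, 104.0, 92.0, 115.0, 108.0, 98.0])   # hypothetical AUCs

m1, m2 = auc_test.mean(), auc_ref.mean()
v1 = auc_test.var(ddof=1) / len(auc_test)   # squared standard error of m1
v2 = auc_ref.var(ddof=1) / len(auc_ref)     # squared standard error of m2
t = stats.t.ppf(0.975, df=len(auc_test) + len(auc_ref) - 2)

# The interval is the set of rho satisfying (m1 - rho*m2)^2 <= t^2 * (v1 + rho^2 * v2),
# i.e. the region between the roots of a*rho^2 - 2*m1*m2*rho + c = 0 when a > 0.
a, c = m2**2 - t**2 * v2, m1**2 - t**2 * v1
disc = (m1 * m2) ** 2 - a * c
if a > 0 and disc >= 0:
    lo, hi = (m1 * m2 - np.sqrt(disc)) / a, (m1 * m2 + np.sqrt(disc)) / a
    print(f"Fieller 95% CI for mu1/mu2: ({lo:.3f}, {hi:.3f})")
else:
    print("Unbounded interval: the denominator mean is not clearly different from zero.")
```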

