Big Chemical Encyclopedia


Random variable sample size

In contrast to variable testing (comparison of measured values or analytical values), attribute testing means testing product or process quality (nonconformity test, good-bad test) by samples. Important parameters are the sample size n (the number of units within the random sample) and the acceptance criterion n_accept, both of which are determined from the lot size, N, and the proportion of defective items, p, within the lot, namely via the related distribution function or the operating characteristic. [Pg.118]
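As a hedged illustration (not from the cited source), the probability of accepting a lot under a given plan (n, n_accept) can be computed from the hypergeometric distribution, which describes sampling without replacement from a lot of size N containing N·p defective items; the lot size, defect proportions, and acceptance number below are assumptions chosen only for the example.

```python
from scipy.stats import hypergeom

def acceptance_probability(lot_size, p_defective, sample_size, n_accept):
    """P(accept lot) = P(number of defectives in the sample <= n_accept),
    for sampling without replacement from a lot containing a fixed
    number of defective items (hypergeometric model)."""
    defectives_in_lot = round(lot_size * p_defective)
    # hypergeom.cdf(k, M, n, N): M = lot size, n = defectives in lot, N = sample size
    return hypergeom.cdf(n_accept, lot_size, defectives_in_lot, sample_size)

# Acceptance probability for several candidate defect proportions
for p in (0.01, 0.02, 0.05, 0.10):
    print(p, acceptance_probability(lot_size=1000, p_defective=p,
                                    sample_size=50, n_accept=1))
```

Tabulating this acceptance probability against the true proportion defective p traces out the operating characteristic (OC) curve of the plan (n, n_accept).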

The principle of sequential analysis is that, when comparing two different populations A and B with pre-set probabilities of the risks of error, α and β, only as many items (individual samples) are examined as are needed to reach a decision. Thus the sample size n itself becomes a random variable. [Pg.119]
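Wald's sequential probability ratio test (SPRT) is the classical formalization of this idea. The sketch below is not taken from the cited source; it assumes Bernoulli (good/bad) observations and illustrative values of p0, p1, α, and β, and it shows that the number of items examined before a decision differs from run to run.

```python
import math
import random

def sprt_bernoulli(p0, p1, alpha, beta, true_p, rng):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli observations.
    Returns (decision, n); n, the number of items examined, is random."""
    upper = math.log((1 - beta) / alpha)   # crossing upward  -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing downward -> accept H0
    llr, n = 0.0, 0
    while lower < llr < upper:
        x = 1 if rng.random() < true_p else 0          # inspect one more item
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        n += 1
    return ("H1" if llr >= upper else "H0"), n

rng = random.Random(1)
# Five replicate runs under the same true defect rate: the stopping n varies
print([sprt_bernoulli(0.05, 0.15, alpha=0.05, beta=0.10, true_p=0.05, rng=rng)
       for _ in range(5)])
```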

Collection of a sample of size N from the random variable X. [Pg.281]

It is well recognised that the faecal bile acid content of random stool samples is highly variable, with marked daily variation. Therefore, studies testing the association between luminal bile acid exposure and the presence of colorectal neoplasia have usually measured serum bile acid levels, which show less variability and are believed to reflect the total bile acid pool more accurately. Serum DCA levels have been shown to be higher in individuals with a colorectal adenoma than in individuals without a neoplasm. Only one study has assessed future risk of CRC in a prospective study of serum bile acid levels. The study was hampered by its small sample size (46 CRC cases). There were no significant differences in the absolute concentrations of primary and secondary bile acids or in the DCA/CA ratio between cases and controls, although there was a trend towards increased CRC risk for those with a DCA/CA ratio in the top third of values (relative risk 3.9 [95% confidence interval 0.9-17.0], p = 0.1). It will be important to test the possible utility of the DCA/CA ratio as a CRC risk biomarker in larger, adequately powered studies. A recent study has demonstrated increased levels of allo-DCA and allo-LCA metabolites in the stool of CRC patients compared with healthy controls. ... [Pg.88]

This example shows that the standard deviation of the sampling distribution is less than that of the population. In fact, this reduction in variability is related to the sample size used to calculate the sample means. For example, if we repeat the sampling experiment, but this time based on 15 rather than 10 random samples, the resulting standard deviation of the sampling distribution is 0.159, and with 25 random samples it is 0.081. The precise relationship between the population standard deviation σ and the standard error of the mean is ... [Pg.284]
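The quoted relationship is the familiar standard error of the mean, σ_x̄ = σ/√n. The brief simulation below (not part of the cited source; the population and the sample sizes are illustrative) shows the standard deviation of sample means shrinking roughly as 1/√n.

```python
import numpy as np

rng = np.random.default_rng(0)
population_sd = 1.0

for n in (10, 15, 25):
    # Draw many samples of size n and record each sample mean
    means = rng.normal(loc=5.0, scale=population_sd, size=(20000, n)).mean(axis=1)
    print(f"n = {n:2d}  sd of sample means = {means.std(ddof=1):.3f}"
          f"  sigma/sqrt(n) = {population_sd / np.sqrt(n):.3f}")
```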

Owing to the complexity of numerical integration and the exponential increase in sample size with the number of random variables, we employ an approximation scheme known as the sample average approximation (SAA) method, also known as the stochastic counterpart. The SAA problem can be written as ... [Pg.184]
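In general form, SAA replaces an expectation E[f(x, ξ)] by the average (1/N) Σᵢ f(x, ξᵢ) over N sampled scenarios and optimizes that average instead. A minimal sketch follows; the quadratic objective, the scenario distribution, and the sample size are assumptions for illustration, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
scenarios = rng.normal(loc=2.0, scale=0.5, size=1000)   # sampled realizations of xi

def saa_objective(x):
    """Sample average approximation of E[(x - xi)^2]."""
    return np.mean((x - scenarios) ** 2)

# Crude grid search over the decision variable; for this quadratic objective
# the SAA minimizer is approximately the mean of the sampled scenarios.
grid = np.linspace(0.0, 4.0, 401)
x_star = grid[np.argmin([saa_objective(x) for x in grid])]
print(x_star, scenarios.mean())
```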

A randomized crossover design has theoretical appeal because it eliminates the largest source of experimental variance: interindividual variability. This could significantly enhance statistical power and permit much smaller sample sizes to detect a treatment effect. Unfortunately, a crossover design is appropriate only in rare cases in psychopharmacology, namely in studies ... [Pg.178]

The sum of independent, identically distributed random variables is approximately normally distributed for large sample sizes, regardless of the probability distribution of the population of the random variable. ... [Pg.36]

In an experiment to determine the effects of sample size and amount of liquid phase on the height equivalent to a theoretical plate (HETP) in gas chromatography, it was necessary to use solid support material from different batches. It was therefore imperative that the resulting data be checked for homogeneity before attempting to develop any quantitative expressions for the effects of these variables on HETP. Several sets of data points were selected at random and examined using Bartlett's test. [Pg.112]
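Bartlett's test assesses whether several groups share a common variance (homogeneity of variances). A minimal sketch using SciPy follows; the batch values are invented solely to demonstrate the call.

```python
from scipy.stats import bartlett

# Three illustrative batches of HETP-style measurements (invented numbers)
batch_a = [0.52, 0.55, 0.49, 0.53, 0.51]
batch_b = [0.48, 0.50, 0.47, 0.52, 0.49]
batch_c = [0.54, 0.58, 0.51, 0.56, 0.55]

statistic, p_value = bartlett(batch_a, batch_b, batch_c)
print(f"Bartlett statistic = {statistic:.3f}, p = {p_value:.3f}")
```

A small p-value would indicate that the batch variances are not homogeneous, so the data should not be pooled without further examination.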

Probability distribution models can be used to represent frequency distributions of variability or uncertainty distributions. When the data set represents variability in a model parameter, there can be uncertainty in any non-parametric statistic associated with the empirical data. For situations in which the data are a random, representative sample from an unbiased measurement or estimation technique, the uncertainty in a statistic can arise from random sampling error (and thus depend on factors such as the sample size and the range of variability within the data) and from random measurement or estimation errors. The observed data can be corrected to remove the effect of known random measurement error to produce an error-free data set (Zheng & Frey, 2005). [Pg.27]
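One common way to quantify the random-sampling-error component of uncertainty in a statistic is the bootstrap. The sketch below is not part of the cited source; the data values are illustrative, and the percentile interval is only one of several bootstrap interval constructions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0, 1.6, 1.2])  # illustrative

# Bootstrap the sample mean: resample the data with replacement many times
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(5000)])
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.3f}, 95% bootstrap interval = ({lower:.3f}, {upper:.3f})")
```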

Consider the situation in which a chemist randomly samples a bin of pharmaceutical granules by taking n aliquots of equal, convenient size. Chemical analysis is then performed on each aliquot to determine the concentration (percent by weight) of pseudoephedrine hydrochloride. In this example, the measured concentration is referred to as a continuous random variable, as opposed to a discrete random variable. Discrete random variables include counted or enumerated items such as the roll of a pair of dice. In chemistry we are interested primarily in the measurement of continuous properties, and we limit our discussion to continuous random variables. [Pg.43]

The extent of random sampling error is governed by the sample size and the SD of the data. Small samples are subject to greater random error than large ones. Data with a high SD are subject to greater sampling error than data with low variability. [Pg.46]

Sample sizes for randomized controlled trials in neurology may need to be large, either because treatment effects are relatively small or because the progression of disease is slow. Table 18.3 shows the effect of sample size on the reliability of the result of a trial of a hypothetical neurological treatment that is assumed to reduce the risk of a poor outcome by 20%, from 10% to 8%. The risk of getting the wrong result when a trial has an inadequate sample size is illustrated in Fig. 18.1. In this trial, there was considerable variability in the apparent effect of treatment until several hundred patients had been randomized. If the trial had been small, misleading trends in treatment effect could easily have been reported. [Pg.225]

The total sampling error is made up of errors due to the primary sampling, subsequent sample dividing, and errors in the analysis itself. Sampling is said to be accurate when it is free from bias, that is, when the error of sampling is a random variable about the true mean. Sampling is precise when the error variation is small, irrespective of whether the mean is the true mean or not. The ultimate that may be obtained by representative sampling may be called the perfect sample; the difference between this sample and the bulk may be ascribed wholly to the difference expected on a statistical basis. Errors in particle size analysis may be due to ... [Pg.2]

The Central-Limit Theorem states that the sampling distribution of the mean, for any set of independent and identically distributed random variables, will tend toward the normal distribution, equation (3.17), as the sample size becomes large. ... [Pg.42]
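A short simulation (not from the cited source) illustrates the theorem: means of samples drawn from a strongly skewed exponential population lose their skewness, and hence approach normality, as the sample size n grows. The population and sample sizes are arbitrary choices.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

for n in (2, 10, 50, 200):
    # Means of samples of size n drawn from a skewed exponential population
    means = rng.exponential(scale=1.0, size=(20000, n)).mean(axis=1)
    print(f"n = {n:3d}  skewness of sample means = {skew(means):+.3f}")
```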

For any continuous random variable X which has a distribution with population mean μ and variance σ², the sampling distribution of the mean for samples of size n has a distribution with population mean μ and variance σ²/n. [Pg.70]

The assumption of a normal distribution for the random variable X is somewhat restrictive. However, for any random variable, as the sample size increases, the sampling distribution of the sample mean becomes approximately normally distributed, according to a mathematical result called the central limit theorem. For a random variable X that has a population mean, μ, and variance, σ², the sampling distribution of the mean of samples of size n (where n is large, that is, > 200) will have an approximately normal distribution with population mean, μ, and variance, σ²/n. Using the notation described earlier, this result can be summarized as ... [Pg.71]
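In the usual notation, the summarized result is presumably of the form (a plausible rendering for reference, not copied from the source):

\[ \bar{X} \;\sim\; N\!\left(\mu,\; \sigma^{2}/n\right) \quad \text{(approximately, for large } n\text{)} \]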

Therefore, the expression written above for the confidence interval for the population mean also applies to any continuous random variable as long as the sample size is large (as just noted, of the order of 200 or more). The other rather restrictive assumption required for this confidence interval is that the population variance be known. Such a scenario is neither common nor realistic. [Pg.71]

At this point, it is worth emphasizing the difference between the terms "standard error" and "standard deviation," which, despite sharing the same first word, represent very different aspects of a data set. The standard error is a measure of how certain we are that the sample mean represents the population mean. The standard deviation is a measure of the dispersion of the original random variable. There is a standard error associated with any statistical estimator, including a sample proportion, the difference between two means, the difference between two proportions, and the ratio of two proportions. When presented with the term "standard error" in these applications, the concept is the same: the standard error quantifies the extent to which an estimator varies over samples of the same size. As the sample size increases (for the same standard deviation), there is a corresponding decrease in the standard error. [Pg.73]

For a sample size of n observations of a random variable, the sample mean, an estimator of the population mean, is calculated as ... [Pg.119]
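The elided expression is presumably the usual sample mean, shown here for reference rather than copied from the source:

\[ \bar{x} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_{i} \]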

Finally, assuming that the random variable is normally distributed (or at least symmetrically distributed with a sample size > 30), a 100(1 - α)% confidence interval is ... [Pg.119]
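A typical two-sided interval of this kind, based on the sample mean and sample standard deviation, is shown below for reference; this is an assumed standard form rather than necessarily the book's exact expression, and z_{1-α/2} replaces the t quantile when the population variance is known:

\[ \bar{x} \;\pm\; t_{1-\alpha/2,\,n-1}\,\frac{s}{\sqrt{n}} \]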

For two independent groups 1 and 2, with a sample of n_1 observations of a random variable from group 1 and n_2 observations of a random variable from group 2, the sample means for each group are ... [Pg.120]
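The elided expressions are presumably the group-wise sample means, shown here for reference rather than copied from the source:

\[ \bar{x}_{1} = \frac{1}{n_{1}}\sum_{i=1}^{n_{1}} x_{1i}, \qquad \bar{x}_{2} = \frac{1}{n_{2}}\sum_{j=1}^{n_{2}} x_{2j} \]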

In Chapter 10 we saw that there are various methods for the analysis of categorical (and mostly binary) efficacy data. The same is true here. There are different methods that are appropriate for continuous data in certain circumstances, and not every method that we discuss is appropriate for every situation. A careful assessment of the data type, the shape of the distribution (which can be examined through a relative frequency histogram or a stem-and-leaf plot), and the sample size can help justify the most appropriate analysis approach. For example, if the shape of the distribution of the random variable is symmetric or the sample size is large (> 30) the sample mean would be considered a "reasonable" estimate of the population mean. Parametric analysis approaches such as the two-sample t test or an analysis of variance (ANOVA) would then be appropriate. However, when the distribution is severely asymmetric, or skewed, the sample mean is a poor estimate of the population mean. In such cases a nonparametric approach would be more appropriate. [Pg.147]
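As a hedged illustration of the two routes, the sketch below runs a parametric two-sample t test and a common nonparametric counterpart (the Mann-Whitney U test) on invented data in which one group is deliberately skewed; none of the values come from the source.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(0)
# Illustrative "placebo" and "treatment" responses; the treatment data are skewed
placebo = rng.normal(loc=10.0, scale=2.0, size=40)
treatment = rng.lognormal(mean=2.5, sigma=0.4, size=40)

t_stat, t_p = ttest_ind(treatment, placebo)                          # parametric
u_stat, u_p = mannwhitneyu(treatment, placebo, alternative="two-sided")  # nonparametric
print(f"t test: p = {t_p:.4f};  Mann-Whitney U: p = {u_p:.4f}")
```

For markedly skewed data the two approaches can lead to different conclusions, which is why the assessment of the distribution's shape matters.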

The sample size formula required to test (two-sided) the equality of two means from random variables with normal distributions is ... [Pg.174]
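A commonly quoted form of this formula, for detecting a difference δ between the two means with common standard deviation σ at two-sided significance level α and power 1 - β, is shown below; this is an assumed standard expression and may differ in detail from the book's own version:

\[ n \;=\; \frac{2\,\sigma^{2}\left(z_{1-\alpha/2} + z_{1-\beta}\right)^{2}}{\delta^{2}} \quad \text{per group} \]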

A clinical endpoint is a characteristic or variable that reflects how a patient feels, functions, or survives. It is a distinct measurement or analysis of a disease characteristic observed in a study or a clinical trial that reflects the effect of a therapeutic intervention. Clinical endpoints are the most credible characteristics used in assessing the benefits and risks of a therapeutic intervention in randomized clinical trials. There can be problems with using clinical endpoints as the final measure of patient response, because a large patient sample size may be needed to determine the drug effect, or because the change in the clinical endpoint produced by a drug may not be detectable for several years after the initiation of therapy. [Pg.5]

