Big Chemical Encyclopedia


Sampling probability

A useful qualitative guide to the principles underlying probability sampling is provided by an ASTM standard [3]. [Pg.73]

In many cases, sampling schemes are regulated by statutes, such as those published by individual governments, the European Community, the Codex Alimentarius Commission, and so on. Such schemes are particularly common in the analysis of foodstuffs, and may specify the mass and number of the sample increments to be taken as well as the statutory limits for the analyte(s) under study. [Pg.73]

When sampling a bulk material, it is clearly important to come to decisions about (1) how many increments should be taken and (2) how large they should be. The minimum number of increments n necessary to obtain a given level of confidence is [Pg.73]
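The formula itself is truncated in this excerpt. A common textbook form for this kind of calculation, assumed here purely for illustration, is n = (z·s/E)², where s is the between-increment standard deviation, E the tolerable error in the mean, and z the standard-normal quantile for the chosen confidence level:

```python
# Hedged sketch of the usual confidence-based formula for the minimum
# number of sample increments: n = (z * s / E)**2. The exact formula is
# truncated in the source, so this form is an assumption.
import math

def min_increments(s: float, E: float, z: float = 1.96) -> int:
    """Minimum number of increments so that the mean of n increments has
    (approximately) a z-level confidence half-width of E.
    s: standard deviation between increments; E: tolerable error."""
    return math.ceil((z * s / E) ** 2)

print(min_increments(s=2.0, E=0.5))  # 62 increments for ~95% confidence
```

Note how the requirement grows quadratically: halving the tolerable error E quadruples the number of increments needed.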


Systematic grid with random start; two-phase unequal-probability sampling... [Pg.160]

To permit inferences about the target population, probability sampling methods will be used in designing the survey. Sample collection and interview procedures being considered include face-to-face interviews using field study staff, random-digit-dialing telephone interviews, or some combination of these procedures. The face-to-face procedure is the most likely method at the present time and will be described here to illustrate the manner in which the probability sample will be selected. [Pg.71]

This is used when a representative sample cannot be collected or is not appropriate. It is the correct sampling approach to use to produce a selective sample (see Section 3.2.2). There are three main non-probability sampling strategies. [Pg.34]


The proper setting for variographic analysis is a set of 60 representative increments (this is a minimum requirement; 100 samples is always preferable if possible; the minimum number of increments ever successfully supporting an interpretable variogram is 42), in order to cover well the specific process variations to be characterized. It is important to select a θmin that is smaller than the most probable sampling frequency likely to be used in routine process monitoring and QC. It will be the objective of the data analysis to... [Pg.68]
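The empirical variogram underlying such an analysis can be computed directly from a series of increment measurements. A minimal sketch follows; the function and variable names are illustrative, and the toy series stands in for real process data:

```python
# Minimal empirical variogram of a 1-D process series, in the spirit of
# the variographic analysis described above. Illustrative names only.
def variogram(h, max_lag):
    """Return V(j) = sum((h[m+j] - h[m])**2) / (2 * (N - j)) for j = 1..max_lag."""
    N = len(h)
    return [sum((h[m + j] - h[m]) ** 2 for m in range(N - j)) / (2 * (N - j))
            for j in range(1, max_lag + 1)]

# 60 increments is the stated minimum for an interpretable variogram;
# here a drifting noisy series stands in for real process measurements.
import random
random.seed(1)
series = [10 + 0.05 * t + random.gauss(0, 1) for t in range(60)]
v = variogram(series, max_lag=20)
print(len(v))  # 20 lag points
```

A rising V(j) with lag indicates trends or cycles in the process; a flat variogram indicates that increments are effectively uncorrelated at those lags.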

The classical, frequentist approach in statistics requires the concept of the sampling distribution of an estimator. In classical statistics, a data set is commonly treated as a random sample from a population. Of course, in some situations the data actually have been collected according to a probability-sampling scheme. Whether that is the case or not, processes generating the data will be subject to stochasticity and variation, which is a source of uncertainty in use of the data. Therefore, sampling concepts may be invoked in order to provide a model that accounts for the random processes, and that will lead to confidence intervals or standard errors. The population may or may not be conceived as a finite set of individuals. In some situations, such as when forecasting a future value, a continuous probability distribution plays the role of the population. [Pg.37]
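The sampling-distribution concept can be made concrete by simulation; the population, sample size, and replication count below are illustrative choices, not from the source:

```python
# Simulating the sampling distribution of the sample mean, the central
# object of the frequentist machinery described above. Illustrative only.
import random
import statistics

random.seed(0)
# A skewed finite population (exponential-like values, true mean ~1.0):
population = [random.expovariate(1.0) for _ in range(100_000)]

n = 30  # size of each probability sample
# Repeatedly draw samples and record the estimator (the sample mean):
means = [statistics.mean(random.sample(population, n)) for _ in range(2_000)]

# The spread of the estimator across repeated samples is its standard
# error -- the quantity that underlies confidence intervals:
se = statistics.stdev(means)
print(round(statistics.mean(means), 1))
```

Even though each individual sample of 30 is small and the population is skewed, the distribution of the 2,000 sample means is approximately normal and centered on the population mean, which is exactly what the confidence-interval machinery relies on.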

Johnson and co-workers (41) found that the lifetime rate of suicide attempts with uncomplicated panic disorder was about 7%, which is consistently higher than that of the general population without a psychiatric disorder (i.e., about 1%). The researchers concluded that panic disorder, either uncomplicated or as a co-morbid illness, led to a risk of suicide attempts comparable with those of major depression (co-morbid or uncomplicated). Their data were derived from the Epidemiologic Catchment Area Study, with a probability sample of more than 18,000 adults living in five United States communities. [Pg.108]

P3 and P6). Rather than being products of specific nucleosynthetic processes in specific stars, these components may represent mixtures of many stellar nucleosynthetic contributions, as might be found in the ISM (and our own solar system). It is worth recalling that presolar diamonds are too small to be analyzed separately, so the assortment of diamonds in a chondrite probably samples multiple stellar sources. [Pg.375]

Epidemiological data are often based on broad populations such as a community, a nationwide probability sample, registries or disease surveillance programmes (Savitz & Harlow, 1991; Scialli et al., 1997). Potential toxicants are also monitored in outdoor air, food, water and soil. These measurements can be used to calculate estimated exposure of humans through contact with their contaminated environment. However, such environmental measurements are difficult to link to... [Pg.122]

The practices described by the method provide instructions for sampling coal from beneath the exposed surface of the coal at a depth (approximately 24 in., 61 cm) where drying and oxidation have not occurred. The purpose is to avoid collecting increments that are significantly different from the majority of the lot of coal being sampled due to environmental effects. However, samples of this type do not satisfy the minimum requirements for probability sampling and, as such, cannot be used to draw statistical inferences such as precision, standard error, or bias. Furthermore, this method is intended for use only when sampling by more reliable methods that provide a probability sample is not possible. [Pg.28]

Designed to study a probability sample of the noninstitutionalized civilian population of the United States, NHANES also conducted nutritional assessment of three high-risk populations: preschool children (6 months to 5 years old), those 60-74 years old, and the poor (persons below the poverty level) (NRC 1991). Each year, the current NHANES samples 5,000 persons, representative of the U.S. civilian household population, in 15 geographic locations. There is also an effort to oversample some demographic groups, including blacks and Mexican Americans (Schober 2005). [Pg.73]

The reports are based on a probability sampling of the U.S. population and cannot target hot spots of exposure. [Pg.75]

Glueck et al. assessed hypocholesterolemia in 203 patients hospitalized with affective disorders (depression, bipolar disorder, and schizoaffective disorder), 1595 self-referred subjects in an urban supermarket screening, and 11,864 subjects in the National Health and Nutrition Examination Survey II (a national probability sample) [34]. Low plasma cholesterol concentration (<160 mg/dL) was much more common in patients with affective disorders than in urban supermarket screening subjects or in National Health and Nutrition Examination Survey II subjects. When paired with supermarket screening subjects by age and sex, patients with affective disorders had much lower TC, LDL, and HDL concentrations, and higher TG concentrations. However, there was no evidence that low plasma cholesterol could cause or worsen affective disorders [34]. [Pg.84]

In general, three broad methods are available for planning a sampling procedure [GARFIELD, 1989] probability sampling, nonprobability sampling, and bulk sampling. [Pg.122]

Firstly, the numbers studied in most investigations have been small and do not represent formal probability samples from a known population. Thus, the true representative character of the results is questionable, and the robustness of this model remains to be established through a demonstration that the results obtained with it can be replicated in another sample group drawn from the same larger population. The seriousness of this reservation arises from the multiple environmental factors that could exert some effect and the difficulty in obtaining a sufficiently large group of subjects to represent adequately most pertinent environmental factors. In fact, as indicated below, replicability of results has not been obtained with this model. [Pg.76]

Description of the target or study population or a representative (probability) sample of it. [Pg.72]

Description of the referent population or representative (probability) sample ... [Pg.227]

As implied by the name, distributional risk assessments use the entire data distribution to calculate exposure and risk. As described for the EPA Tier 1 assessment [5], a distributional analysis is not necessarily a probabilistic analysis. The EPA Tier 1 acute dietary assessment produces a dietary exposure distribution using the entire food consumption distribution and point estimates of residue concentration. There is no probability sampling in the Tier 1 assessment. [Pg.361]

One difference between the US and the EU approach to probabilistic dietary risk assessment is in the method of sampling. In the US approach, probabilistic sampling is done for the residue distribution. However, Exponent's Dietary Exposure Evaluation Model (DEEM [10]), currently used by the EPA for dietary risk assessment, uses all of the food consumption data. In the EU, it appears that probability sampling may be performed from both the residue and consumption distributions [11]. In the US, the CARES... [Pg.361]
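The contrast between a Tier-1-style calculation (full consumption distribution times a point-estimate residue) and a fully probabilistic one can be sketched with simulated data; every distribution and number below is a made-up illustration, not regulatory data:

```python
# Contrast of the two approaches described above, with invented numbers:
# Tier-1-style: full consumption distribution x a point-estimate residue;
# fully probabilistic: sample both consumption and residue distributions.
import random
random.seed(42)

consumption = [random.lognormvariate(0, 0.5) for _ in range(10_000)]  # kg food/day
residues = [random.lognormvariate(-3, 1.0) for _ in range(10_000)]    # mg/kg

# Conservative point estimate of residue (e.g., the maximum observed):
point_residue = max(residues)

tier1_exposures = [c * point_residue for c in consumption]
probabilistic_exposures = [c * random.choice(residues) for c in consumption]

mean = lambda xs: sum(xs) / len(xs)
# The point-estimate approach is systematically more conservative:
print(mean(tier1_exposures) > mean(probabilistic_exposures))  # True
```

Pairing every consumer with the maximum residue inflates the whole exposure distribution, which is why a distributional analysis with point-estimate residues is still not a probabilistic analysis.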

Step 3. Sample u ~ U(0,1); compute the acceptance probability; sample parameters from the pri... [Pg.142]
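The fragment above appears to describe a Metropolis-style accept/reject step. A generic sketch follows, assuming a symmetric-proposal Metropolis sampler and an illustrative target density; neither is taken from the source:

```python
# A generic Metropolis accept/reject step matching the fragment above
# ("sample u ~ U(0,1) ... acceptance probability ..."). The target
# density and proposal are illustrative assumptions only.
import math
import random
random.seed(7)

def target(x):
    # Unnormalized standard-normal density (illustrative target).
    return math.exp(-0.5 * x * x)

x = 0.0
samples = []
for _ in range(5_000):
    proposal = x + random.uniform(-1, 1)             # symmetric proposal
    alpha = min(1.0, target(proposal) / target(x))   # acceptance probability
    u = random.uniform(0, 1)                         # Step 3: u ~ U(0,1)
    if u < alpha:
        x = proposal                                 # accept; else keep x
    samples.append(x)

print(round(sum(samples) / len(samples), 2))
```

Because the proposal is symmetric, the acceptance probability reduces to the simple density ratio; an asymmetric proposal would require the full Metropolis-Hastings correction.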

Singh BB, Berman BM, Simpson RL, et al. Incidence of premenstrual syndrome and remedy usage: a national probability sample study. Altern Ther Health Med 1998;4:75-79. [Pg.1482]

Hansen examined a probability sample of about 67 NCEs originated by U.S.-based pharmaceutical companies first entering human clinical trials from 1963 through 1975. DiMasi and colleagues studied a sample of 93 such NCEs first entering human trials from 1970 through 1982. [Pg.11]

Both studies selected a sample of NCEs that originated within the company's U.S. research organizations. NCEs were selected from a database maintained by CSDD of new products under development. Probability samples were drawn from the universe of NCEs in the CSDD database, but some nonresponding companies could have biased the sample. Furthermore, neither study reported the within-firm response rate. If firms failed to provide data on some NCEs for which data were poor, or if they selectively reported on NCEs for some other reason, the sample of NCEs could be biased. Again, the effect of such potential biases on cost estimates cannot be judged. [Pg.55]

This discussion has emphasized the theory and basic principles of probability sampling of wells in a large area. The next two sections address studies of areas of a few acres or less. The guidelines for well sampling in those sections are appropriate for this section as well. [Pg.178]

What are the differences between convenience, purposive, and probability sampling? Explain. [Pg.598]

Probability sampling: a method of sampling whereby each subject in a defined population theoretically has an equal chance of being included in the sample. [Pg.1364]
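In code, equal-chance selection from a defined population is simple random sampling without replacement; the population and sample size below are illustrative:

```python
# Equal-chance (simple random) probability sampling from a defined
# population, as in the definition above. Names are illustrative.
import random
random.seed(3)

population = [f"subject_{i}" for i in range(500)]
sample = random.sample(population, k=25)  # each subject equally likely

print(len(sample), len(set(sample)))  # 25 25 (no repeats: without replacement)
```

`random.sample` draws without replacement, so every subject has the same inclusion probability (here 25/500 = 5%), which is the defining property of this simplest probability-sampling design.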

