Big Chemical Encyclopedia


Subject: statistical theory

This book is divided into five parts: the problem, accidents, health risk, hazard risk, and hazard risk analysis. Part I, an introduction to HS&AM, presents legal considerations, emergency planning, and emergency response. This part basically serves as an overview of the more technical topics covered in the remainder of the book. Part II treats the broad subject of accidents, discussing fires, explosions, and other accidents. The chapters in Parts III and IV provide introductory material on health and hazard risk assessment, respectively. Part V examines hazard risk analysis in significant detail. The three chapters in this final part include material on the fundamentals of applicable statistical theory, and the applications and calculations of risk analysis for real systems. [Pg.661]

The subjects of statistical research are the population (universe, statistical mass, basic universe, completeness) and samples taken from that population. The population must be representative of the output of a continuous chemical process with respect to some features, i.e., properties of the given products. If we are to determine a property of a product, we have to take a sample from a population that, by the theory of mathematical statistics, is usually an infinite collection of elements (units). [Pg.3]

In clinical research it is of particular interest to estimate a population mean on the basis of data collected from a sample of subjects employed in a randomized clinical trial. Sampling and statistical procedures facilitate the estimation of the population mean based on the sample mean and sample SD that are precisely calculated from the data collected in the trial. If we take a sample of 100 numbers from a population of 100,000 numbers and calculate the mean of those 100 numbers, this sample mean, which is precisely known, provides an estimate of the unknown population mean. If we then took another sample of 100 numbers, or indeed many samples, it is extremely unlikely that the numbers in any subsequent sample would be identical to those in the first sample, and it is unlikely that the calculated sample means would be identical to that of the first sample. Therefore, in a randomized clinical trial, a situation in which only one sample is taken from a population, a question that arises is: what degree of certainty is there that the mean of that sample represents the mean of the population? This question can be answered using statistical theory in conjunction with knowledge of the number of subjects participating in the trial, i.e., the sample size. [Pg.92]
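
The sampling behavior described above is easy to make concrete in a short simulation. The sketch below is illustrative only (the population, seed, and parameter values are invented for the example): it draws many samples of n = 100 from a synthetic population of 100,000 values and shows that the sample means scatter around the population mean with a spread close to the theoretical standard error s/√n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 100,000 values (mean and SD chosen arbitrarily).
population = rng.normal(loc=50.0, scale=10.0, size=100_000)

n = 100
# Draw 1,000 independent samples of size n and record each sample mean.
sample_means = np.array([
    rng.choice(population, size=n, replace=False).mean()
    for _ in range(1_000)
])

print(f"population mean      : {population.mean():.3f}")
print(f"mean of sample means : {sample_means.mean():.3f}")
print(f"SD of sample means   : {sample_means.std():.3f}")             # empirical spread
print(f"theoretical SE       : {population.std() / np.sqrt(n):.3f}")  # sigma / sqrt(n)
```

The larger the sample size n, the tighter the sample means cluster around the population mean, which is the quantitative content of the "degree of certainty" question posed above.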

Recent years have also witnessed exciting developments in the active control of unimolecular reactions [30,31]. Reactants can be prepared and their evolution interfered with on very short time scales, and coherent light sources can be used to imprint information on molecular systems so as to produce more or less of specified products. Because a well-controlled unimolecular reaction is highly nonstatistical and presents an excellent example in which any statistical theory of the reaction dynamics would fail badly, it is instructive to comment on how to view the vast control possibilities, on the one hand, and various statistical theories of reaction rate, on the other. Note first that a controlled unimolecular reaction, most often subject to one or more external fields and manipulated within a very short time scale, undergoes nonequilibrium processes and is therefore not expected to be describable by any unimolecular reaction rate theory that assumes the existence of an equilibrium distribution of the internal energy of the molecule. Second, strong deviations from statistical behavior in an uncontrolled unimolecular reaction can imply the existence of order in chaos and thus more possibilities for inexpensive active control of product formation. Third, most control scenarios rely on quantum interference effects that are neglected in classical reaction rate theory. Clearly, then, studies of controlled reaction dynamics and studies of statistical reaction rate theory complement each other. [Pg.8]

There are several reasons for going first to this level of generality for the n-compartment model. First, it points out clearly that the theories of noncompartmental and compartmental models are very different. While the theory underlying noncompartmental models relies more on statistical theory, especially in developing residence time concepts [see, e.g., Weiss (11)], the theory underlying compartmental models is really the theory of ordinary first-order differential equations in which, because of the nature of the compartmental model applied to biological applications, there are special features in the theory. These are reviewed in detail in Jacquez and Simon (5), who also refer to the many texts and research articles on the subject. [Pg.98]
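
As an illustration of this ODE view (a minimal sketch, not taken from Jacquez and Simon; the two-compartment model and its rate constants are hypothetical), a compartmental model with first-order exchange and elimination is just the linear system dx/dt = Kx:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/h): elimination from compartment 1
# (k10) and exchange between compartments 1 and 2 (k12, k21).
k10, k12, k21 = 0.3, 0.5, 0.2

K = np.array([[-(k10 + k12), k21],
              [k12,          -k21]])

def rhs(t, x):
    # Linear, first-order compartmental system: dx/dt = K x
    return K @ x

# Bolus dose of 100 units into compartment 1 at t = 0.
sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0], t_eval=np.linspace(0.0, 24.0, 7))
for t, x1, x2 in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.1f} h  compartment 1 = {x1:8.3f}  compartment 2 = {x2:8.3f}")
```

The "special features" mentioned above arise because the rate matrix K of a compartmental model is constrained (non-negative off-diagonal entries, non-positive column sums), which guarantees non-negative solutions.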

We use the variable m to represent the total number of components in the synthetic chromatogram instead of m̄. The former value is known in our computer-generated chromatograms but not in a complex mixture subjected to chromatography. In either case, only the mean component number m̄ may be estimated by the statistical theory. [Pg.14]

Statistical theory must be regarded not only with respect but also with healthy skepticism (although this is really the subject of Chapter 19). It should be remembered that the development of statistics, as they have come to be applied to clinical trials, has... [Pg.120]

Having said all of this, it is important to remember, however (Popper, 1976: Appendix IX), "... that non-statistical theories have as a rule a form totally different from that of the h here described, that is, they are of the form of a universal proposition." The question thus becomes whether systematics, or phylogeny reconstruction, can be construed in terms of a statistical theory that satisfies the rejection criteria formulated by Popper (see footnote 1) and that, in the case of favorable evidence, allows the comparison of degree of corroboration versus Fisher's likelihood function. As far as phylogenetic analysis is concerned, I found no indication in Popper's writings that history is subject to the same logic as the test of random samples of statistical data. As far as a metric for degree of corroboration relative to a nonstatistical hypothesis is concerned, Popper (1973: 58-59; see also footnote 1) clarified. [Pg.85]

In the setting up of the statistical theory there are several alternative procedures, and the newcomer to the subject often finds it difficult to see the connexion between them. The essential problem, of course, is that of averaging, and the various methods differ principally in the following respects... [Pg.339]

The role of IVR in explaining the selection rules within the products in the dissociation of such complexes is the subject of extensive discussion and needs further experimental and theoretical work. It is not yet clear when and how statistical theory should be used. [Pg.322]

S. B. Batdorf and J. G. Crose, "A Statistical Theory for the Fracture of Brittle Structures Subjected to Polyaxial Stress States," J. Appl. Mech. 41, 459–465 (1974). [Pg.132]

In mathematical statistical theory it is well known that the Bayesian method allows the combination of two kinds of information: prior information (for instance, generic statistical data or the subjective opinion of experts) and measurements or observations (Bernardo et al., 2003; Berthold et al., 2003). The Bayesian method allows the estimates of all parameters in the model to be updated with each single new observation, i.e., the Bayesian method does not require new information on the values of all factors involved in the model. [Pg.394]
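
A minimal sketch of this update mechanism (the Beta-Bernoulli model and all numbers below are illustrative assumptions, not from the cited sources): with a conjugate prior, each single new observation updates the parameter estimate immediately, without requiring new data on anything else in the model.

```python
from scipy.stats import beta

# Prior on a failure probability p, standing in for generic data or expert opinion:
# Beta(2, 8) corresponds roughly to "2 failures in 10 trials" worth of prior belief.
a, b = 2.0, 8.0
print(f"prior mean of p: {beta(a, b).mean():.3f}")

# Each observation (1 = failure, 0 = no failure) updates the posterior on its own.
for outcome in [1, 0, 0, 1, 0]:
    a += outcome
    b += 1 - outcome
    print(f"after observing {outcome}: posterior mean = {beta(a, b).mean():.3f}")
```

The posterior after each step serves as the prior for the next, which is exactly why a single new observation suffices to refresh the estimate.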

We have so far introduced a novel nonequilibrium statistical theory [eq. (6)]. Although this subject might appear quite different from the objective of this paper, the possible interrelationship between the Tg and T transitions, it is very pertinent. We will now apply the principle of the Energetic Kinetic Theory (EKT), presented above to determine the structure of the dual split, to another situation. We assume that the free energy still remains equal to the equilibrium value at the same temperature (the EKT principle), but that the system is allowed to subdivide into N(t) identical systems to render the energetic constraint feasible. Hence the total number of units, B0, can be divided into N(t) systems of B0/N units each; defining the per-system number of units in this way, we numerically solve the following system of differential equations on a computer. [Pg.385]

The presence of adsorption hysteresis is a special feature of all adsorbents with a mesopore structure. The adsorption and desorption isotherms differ appreciably from one another and form a closed hysteresis loop. According to the IUPAC classification, four main types of hysteresis loops can be distinguished: H1, H2, H3 and H4 (ref. 1). Experimental adsorption and desorption isotherms in the hysteresis region provide information for calculating the structural characteristics of porous materials: porosity, surface area and pore size distribution. Traditional methods for such calculations are based on the assumption of a system of unconnected pores of simple form, as a rule cylindrical capillaries. The calculations are based on either the adsorption or the desorption isotherm, ignoring the existence of hysteresis in the adsorption process. This leads to two different pore size distributions. The question of which of these is to be preferred has been the subject of unending discussion. In this report a statistical theory of capillary hysteresis phenomena in porous media has been developed. The analysis is based on percolation theory and pore-space network models, which are widely used for the modeling of such processes by many authors (refs. 2-10). New percolation methods for the computation of porous-structure parameters are also proposed. [Pg.67]
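
To show what the percolation picture adds, here is a minimal sketch (my illustration, not the authors' algorithm; the lattice size, radii, and square-grid topology are all assumptions). On the adsorption branch a pore fills as soon as its radius falls below the critical radius r_c, independently of its neighbors; on the desorption branch a pore may empty only if it is both above r_c and connected to the outer boundary through already-empty pores. Pore blocking traps extra liquid and opens a hysteresis loop.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
L = 60
radii = rng.uniform(1.0, 10.0, size=(L, L))  # hypothetical pore radii on a square network

def filled_fraction_desorption(r_c):
    """Fraction still filled when pores can empty only via boundary-connected paths."""
    can_empty = radii >= r_c           # thermodynamically unstable pores at this r_c
    empty = np.zeros((L, L), dtype=bool)
    q = deque()
    # Seed with unstable pores on the outer boundary (in contact with the vapor).
    for i in range(L):
        for j in range(L):
            if can_empty[i, j] and (i in (0, L - 1) or j in (0, L - 1)):
                empty[i, j] = True
                q.append((i, j))
    # Breadth-first growth: a pore empties only through an already-empty neighbor.
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and can_empty[ni, nj] and not empty[ni, nj]:
                empty[ni, nj] = True
                q.append((ni, nj))
    return 1.0 - empty.mean()

for r_c in (3.0, 5.0, 7.0):
    ads = (radii < r_c).mean()             # adsorption branch: pores fill independently
    des = filled_fraction_desorption(r_c)  # desorption branch: percolation-limited
    print(f"r_c = {r_c:.0f}: filled(adsorption) = {ads:.2f}, filled(desorption) = {des:.2f}")
```

When the fraction of unstable pores is below the percolation threshold of the network, almost nothing can empty and the desorption branch stays high, which is why the two branches yield different apparent pore size distributions.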

The most important statistical subjects relevant to reverse engineering are the statistical average and statistical reliability. Most statistical averages of material properties such as tensile strength or hardness can be calculated based on their respective normal distributions. However, Weibull analysis is the most suitable statistical theory for reliability analyses such as fatigue lifing calculations and part life prediction. This chapter will introduce the basic concepts of statistics based on the normal distribution, such as probability, confidence level, and confidence interval. It will also discuss Weibull analysis and reliability prediction. [Pg.211]
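
As a concrete taste of the Weibull tool named above (a minimal sketch; the shape and scale parameters and sample size are invented for illustration), the two-parameter Weibull reliability at life t is R(t) = exp[−(t/η)^β], and the parameters can be recovered from failure data by fitting:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical Weibull parameters for a fatigue-life population (cycles).
beta_shape, eta = 2.5, 1000.0

t = 500.0
reliability = np.exp(-((t / eta) ** beta_shape))  # R(t): probability of surviving t cycles
print(f"R({t:.0f} cycles) = {reliability:.3f}")

# Generate synthetic failure times, then fit the Weibull parameters back
# (location fixed at zero for the two-parameter form).
failures = weibull_min.rvs(beta_shape, scale=eta, size=200, random_state=0)
shape_hat, loc_hat, scale_hat = weibull_min.fit(failures, floc=0.0)
print(f"fitted shape = {shape_hat:.2f}, fitted scale = {scale_hat:.0f}")
```

The shape parameter β distinguishes infant mortality (β < 1), random failure (β ≈ 1), and wear-out (β > 1), which is what makes the Weibull form so useful for part life prediction.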

