Big Chemical Encyclopedia


Inference universe

We define the inference universe of the problem to be the Cartesian product of the parameter space and the sample space. It is the p + n dimensional space where the first p dimensions are the parameter space and the remaining n dimensions are the sample space. We never observe the parameter, so the position in those coordinates is always unknown. However, we do observe the sample, so we know the last n coordinates. [Pg.6]
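In symbols (the notation here is assumed for illustration, not taken from the source), with parameter space Θ ⊆ ℝ^p and sample space 𝒳 ⊆ ℝ^n, the inference universe is the product space:

```latex
% Inference universe: parameter space times sample space
\mathcal{U} \;=\; \Theta \times \mathcal{X}
\;\subseteq\; \mathbb{R}^{p} \times \mathbb{R}^{n}
\;=\; \mathbb{R}^{p+n},
\qquad (\theta_1,\dots,\theta_p,\; x_1,\dots,x_n) \in \mathcal{U}.
```

The first p coordinates (the parameters) are never observed; the last n (the data) are.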

We will let the dimensions be p = 1 and n = 1 for illustrative purposes. This is the case when we have a single parameter and a single observation (or we have a random sample of observations from a one-dimensional exponential family). The inference universe has two dimensions. The vertical dimension is the parameter space and is unobservable. The horizontal dimension is the sample space and is observable. We wish to make inference about where we are in the vertical dimension given that we know where we are in the horizontal dimension. [Pg.6]

If we decide to use a flat prior density that gives equal weight to all values of the parameter, the joint density on the inference universe will be the same as the observation density surface. This is shown in Figure 1.6. Note that this prior density will be improper (the integral over the whole range will be infinite) unless the parameter values have finite lower and upper bounds. When the prior is improper, we do not have a joint probability density for the full Bayesian model. However,... [Pg.9]
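The flat-prior case can be sketched numerically. This is a hypothetical one-parameter normal example (the observation value, grid, and densities are my own assumptions, not from the text); it shows that with a flat prior the slice of the joint density at the observed data is just the likelihood rescaled:

```python
import numpy as np

# Hypothetical illustration: with a flat prior g(theta) = 1, the joint
# density g(theta) * f(x | theta) has the same shape as the observation
# density, so the normalized posterior equals the normalized likelihood.
x_obs = 1.3                        # single observation, n = 1 (assumed value)
theta = np.linspace(-4, 6, 2001)   # grid over the parameter dimension
d = theta[1] - theta[0]

def norm_pdf(x, mu, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

likelihood = norm_pdf(x_obs, theta)     # cut through the surface at x = x_obs
flat_prior = np.ones_like(theta)        # improper: integral over R is infinite
joint_slice = flat_prior * likelihood   # same shape as the likelihood

posterior = joint_slice / (joint_slice.sum() * d)   # normalize numerically

# The posterior is the likelihood rescaled to integrate to 1.
assert np.allclose(posterior, likelihood / (likelihood.sum() * d))
```

The improper prior causes no trouble here because the slice at the observed data still has a finite integral over the parameter dimension.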

Figure 1.5 Posterior density of the parameter in the inference universe. The prior density is shown in the margin.
Figure 1.6 Posterior density in the inference universe using a flat prior. It has the same shape as the likelihood function.
When we have p ≥ 2 the same ideas hold. However, we cannot project the surface defined on the inference universe down to a two-dimensional graph. With multiple parameters, Figures 1.1, 1.2, 1.3, 1.4, 1.5, and 1.6 can be considered schematic diagrams that represent the ideas rather than exact representations. [Pg.12]

We will use the two-parameter case to show what happens when there are multiple parameters. The inference universe has at least four dimensions, so we cannot graph the surface on it. The likelihood function is still found by cutting through the surface with a hyperplane parallel to the parameter space passing through the observed values. The likelihood function will be defined on the two parameter dimensions, as the observations are fixed at the observed values and do not vary. We show the bivariate likelihood function in 3D perspective in Figure 1.8. In this example, we have the likelihood function where θ1 is the mean and θ2 is the variance for a random sample from a normal distribution. We will also use this same curve to illustrate the Bayesian posterior, since it would be the joint posterior if we use independent flat priors for the two parameters. [Pg.12]
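As a numerical companion (the simulated sample and grid ranges below are my own assumptions, not from the text), the bivariate likelihood over (θ1, θ2) = (mean, variance) can be evaluated on a grid with the data held fixed:

```python
import numpy as np

# Hypothetical sketch: bivariate likelihood of (theta1, theta2) = (mean,
# variance) for a normal random sample, evaluated over the two parameter
# dimensions with the observations fixed at their observed values.
rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=25)   # assumed "observed" sample

theta1 = np.linspace(0.0, 4.0, 161)              # mean axis
theta2 = np.linspace(0.3, 8.0, 155)              # variance axis
M, V = np.meshgrid(theta1, theta2)               # parameter grid

n = data.size
ss = ((data[None, None, :] - M[..., None]) ** 2).sum(axis=-1)
loglik = -0.5 * n * np.log(2.0 * np.pi * V) - ss / (2.0 * V)
lik = np.exp(loglik - loglik.max())              # rescaled for display

# The surface peaks near the sample mean and the (ddof=0) sample variance.
i, j = np.unravel_index(np.argmax(lik), lik.shape)
print(theta1[j], theta2[i])
```

Plotting `lik` over `(M, V)` gives the kind of 3D perspective surface the text attributes to Figure 1.8; with independent flat priors, the same surface serves as the joint posterior.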

We have shown that both the likelihood and Bayesian approach arise from surfaces defined on the inference universe, the observation density surface and the joint probability density respectively. The sampling surface is a probability density only in the observation dimensions, while the joint probability density is a probability density in the parameter dimensions as well (when proper priors are used). Cutting these two surfaces with a vertical hyperplane that goes through the observed value of the data yields the likelihood function and the posterior density that are used for likelihood inference and Bayesian inference, respectively. [Pg.16]
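A numerical sketch of this slicing (the densities below are illustrative choices of mine, not from the text): with a proper normal prior, the vertical cut through the joint density at x = x_obs is proportional to the posterior, which here can be checked against the conjugate normal-normal result:

```python
import numpy as np

# Illustration: cutting the joint density surface g(theta) * f(x | theta)
# with the vertical hyperplane x = x_obs yields a slice proportional to
# the posterior density of theta.
theta = np.linspace(-5, 5, 2001)
d = theta[1] - theta[0]
x_obs = 0.8                                # assumed observed value

def norm_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

prior = norm_pdf(theta, 0.0, 2.0)          # proper prior: N(0, 2^2)
likelihood = norm_pdf(x_obs, theta, 1.0)   # observation density at x = x_obs
joint_slice = prior * likelihood           # cut through the joint surface

posterior = joint_slice / (joint_slice.sum() * d)

# Conjugate normal-normal check: posterior is N(mu_n, sigma_n^2)
sigma_n2 = 1.0 / (1.0 / 2.0**2 + 1.0 / 1.0**2)
mu_n = sigma_n2 * (x_obs / 1.0**2)
assert abs((posterior * theta).sum() * d - mu_n) < 1e-4
```

Replacing `prior` with a constant array reproduces the flat-prior case, where the same cut gives the likelihood function instead.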

Fisher, R. A., 1990. Statistical Methods, Experimental Design, and Scientific Inference. Oxford University Press. [Pg.306]

In the so-called "wrinkled flame regime," the "turbulent flame speed" was expected to be controlled by a characteristic value of the turbulent velocity fluctuations u′ rather than by chemistry and molecular diffusivities. Shchelkin [2] was the first to propose the law S_t/S_L = [1 + A(u′/S_L)²]^(1/2), where A is a universal constant and S_L the laminar flame velocity of propagation. For the other limiting regime, called "distributed combustion," Summerfield [4] inferred that if the turbulent diffusivity simply replaces the molecular one, then the turbulent flame speed is proportional to the laminar flame speed multiplied by the square root of the turbulence Reynolds number Re. ... [Pg.138]
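Read this way, Shchelkin's correlation is easy to evaluate; the square-root form and the value A = 1 below are assumptions for illustration, not fixed by the text:

```python
import math

# Sketch of Shchelkin's wrinkled-flame correlation (A = 1 assumed):
#   S_t / S_L = sqrt(1 + A * (u' / S_L)**2)
def turbulent_flame_speed(s_l, u_prime, a=1.0):
    """Turbulent flame speed from the laminar speed s_l and the rms
    velocity fluctuation u_prime, using the assumed wrinkled-flame law."""
    return s_l * math.sqrt(1.0 + a * (u_prime / s_l) ** 2)

# Weak turbulence: S_t approaches S_L; strong turbulence: S_t -> sqrt(A) * u'
print(turbulent_flame_speed(0.4, 0.04))  # ~0.402 m/s
print(turbulent_flame_speed(0.4, 4.0))   # ~4.02 m/s
```

The two printed cases show the limiting behaviour the regime classification relies on: chemistry-controlled at small u′/S_L, turbulence-controlled at large u′/S_L.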

The authors [35] emphasize that their result regarding the first HgS monolayer, which involves reversible underpotential adsorption, suggests that nucleation cannot be considered a universal mechanism for the formation of anodic films. Analogous conclusions have been inferred for cathodic HgSe films electrodeposited on a mercury electrode by the reduction of selenous acid [37]: the first monolayer appeared to be reversibly adsorbed, while formation of the following two layers was preceded by nucleation. [Pg.90]

Perhaps the most revolutionary development has been the application of on-line mass spectroscopic detection for compositional analysis. Polymer composition can be inferred from column retention time or from viscometric and other indirect detection methods, but mass spectroscopy has reduced much of the ambiguity associated with that process. Quantitation of end groups and of co-polymer composition can now be accomplished directly through mass spectroscopy. Mass spectroscopy is particularly well suited as an on-line GPC technique, since common GPC solvents interfere with other on-line detectors, including UV-VIS absorbance, nuclear magnetic resonance and infrared spectroscopic detectors. By contrast, common GPC solvents are readily adaptable to mass spectroscopic interfaces. No detection technique offers a combination of universality of analyte detection, specificity of information, and ease of use comparable to that of mass spectroscopy. [Pg.375]

Josephson, J., and Josephson, S., eds., Abductive Inference: Computation, Philosophy, Technology. Cambridge University Press, 1994. [Pg.99]

Pearl, J. (2000), Causality: Models, Reasoning, and Inference, Cambridge University Press, New York. [Pg.346]

The primordial Li abundance was sought primarily because of its ability to constrain the baryon-to-photon ratio in the Universe, or equivalently the baryon contribution to the critical density. In this way, Li was able to complement estimates from 4He, the primordial abundance of which varied only slightly with baryon density. Li also made up for the fact that the other primordial isotopes, 2H (i.e. D) and 3He, were at that time difficult to observe and/or interpret. During the late 1990s, however, measurements of D in damped Lyman alpha systems (high column-density gas believed to be related to galaxy discs) provided more reliable constraints on the baryon density than Li could do (e.g. [19]). Even more recently, the baryon density has been inferred from the angular power spectrum of the cosmic microwave background radiation, for example from the WMAP measurements [26]. We consider the role of Li plateau observations post-WMAP. [Pg.185]

In the next section each light nuclide is considered in turn, its post-BBN evolution briefly reviewed along with identification of a few of the potential challenges to accurately inferring the primordial abundances from the observational data. Then, having established that the current data - taken at face value - are not entirely consistent with SBBN, I investigate whether changes in the early universe expansion rate can reconcile them. [Pg.333]

Detection of hydrogen is a particularly important problem for astrochemists because, to a first approximation, all visible matter is hydrogen. The hydrogen molecule is the most abundant molecule in the Universe but it presents considerable detection problems due to its structure and hence its spectroscopy. Hydrogen does not possess a permanent dipole moment, so there is no allowed rotation or vibration spectrum, and all electronic transitions are in the UV and blocked by the atmosphere. The launch of the far-UV telescope will allow the detection of H2 directly, but up to now its concentration has been inferred from other measurements. The problem of detecting the H atom, however, has been solved using a transition buried deep in the hyperfine structure of the atom (the 21 cm line). [Pg.79]

When the kinetics of a sorption process do appear to separate according to very small and very large time scales, the almost universal inference made is that pure adsorption is reflected by the rapid kinetics (16,21,22,26). The slow kinetics are interpreted either in terms of surface precipitation (20) or diffusion of the adsorbate into the adsorbent (16,24). With respect to metal cation sorption, "rapid kinetics" refers to time scales of minutes (16,26), whereas for anion sorption it refers to time scales up to hours (17,21). The interpretation of these time scales as characteristic of adsorption rests almost entirely on the premise that surface phenomena involve little in the way of molecular rearrangement and steric hindrance effects (16,21). [Pg.224]

Hacking, Ian. The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference. Cambridge: Cambridge University Press, 1975. [Pg.316]

The low torsion constant at σ = −0.025 is very similar to that observed in a supercoiled pBR322 that was partially relaxed by saturation binding of Escherichia coli single-strand binding (ssb) protein, and which persisted for over a month.(56) It is also similar to that recently inferred from an in vivo assay based on variation in repression efficiency with size of a putative DNA loop.(234) Indeed, it appears that anomalously low torsion constants may be universally encountered in the course of either partial or complete relaxation of supercoiled DNAs, regardless of whether the superhelix density is reduced by action of topoisomerase I, binding of ssb protein, binding of intercalated... [Pg.210]

Dolata, D. P., QED: Automated Inference in Planning Organic Synthesis, PhD dissertation, University of California, Santa Cruz, 1984. [Pg.208]

My thanks to Dr. Juanita Anders of the Uniformed Services University of the Health Sciences, Bethesda, MD, for two careful readings of the manuscript, which resulted in numerous clarifications and improvements. This chapter was written in my private capacity. No official support or endorsement by the Food and Drug Administration is intended or should be inferred. [Pg.172]

