Marginal distributions

In this expression, p(H) is referred to as the prior probability of the hypothesis H. It is used to express any information we may have about the probability that the hypothesis H is true before we consider the new data D. p(D|H) is the likelihood of the data given that the hypothesis H is true. It describes our view of how the data arise from whatever H says about the state of nature, including uncertainties in measurement and any physical theory we might have that relates the data to the hypothesis. p(D) is the marginal distribution of the data D, and because it is a constant with respect to the parameters it is frequently treated only as a normalization factor in Eq. (2), so that p(H|D) ∝ p(D|H)p(H), i.e., equal up to a proportionality constant. If we have a set of hypotheses that are exclusive and exhaustive, i.e., one and only one must be true, then... [Pg.315]
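The normalization step is easy to make concrete. Below is a minimal Python sketch (not from the cited text; the hypotheses and probabilities are invented for illustration) of computing the marginal p(D) and the posterior for a finite, exclusive and exhaustive set of hypotheses:

```python
# Minimal sketch of Bayes' rule for a finite, exclusive and exhaustive set of
# hypotheses: the marginal p(D) is the sum of p(D|H_i) p(H_i), so the
# posterior is obtained by normalizing. All numbers here are illustrative.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # p(H_i), must sum to 1
likelihood = np.array([0.10, 0.40, 0.25])  # p(D | H_i) for the observed data D

marginal = np.dot(likelihood, prior)       # p(D) = sum_i p(D|H_i) p(H_i)
posterior = likelihood * prior / marginal  # p(H_i | D)

print(marginal)   # the normalization constant
print(posterior)  # sums to 1
```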

V 1s n) is the normalized thermal distribution of configurations of the distinguished molecule in isolation [10], i.e., the required marginal distribution. The remaining set of brackets here indicates the average over solvent coordinates. The second set of brackets is not written on the right here because the averaging over solute coordinates is explicitly written out. This last formula is... [Pg.328]

Green and Pimblott (1991) criticize the truncated distributions of Mozumder (1971) and of Dodelet and Freeman (1975) used to calculate the free-ion yield in a multiple ion-pair case. In place of the truncated distribution used by the earlier authors, Green and Pimblott introduce the marginal distribution for all ordered pairs, which is statistically the correct one (see Sect. 9.3 for a description of this distribution). [Pg.239]

(Limited Information Maximum Likelihood Estimation). Consider a bivariate distribution for x and y that is a function of two parameters, α and β. The joint density is f(x, y | α, β). We consider maximum likelihood estimation of the two parameters. The full information maximum likelihood estimator is the now familiar maximum likelihood estimator of the two parameters. Now, suppose that we can factor the joint distribution as done in Exercise 3, but in this case, we have f(x, y | α, β) = f(y | x, α, β) f(x | α). That is, the conditional density for y is a function of both parameters, but the marginal distribution for x involves only... [Pg.88]
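The factorization in this exercise can be illustrated numerically. The sketch below assumes a made-up model, x ~ N(α, 1) and y|x ~ N(α + βx, 1), chosen only so that the marginal of x involves α alone; it compares the full-information MLE with a limited-information two-step estimate:

```python
# Hedged illustration of f(x,y|a,b) = f(y|x,a,b) f(x|a). The specific model
# (x ~ N(a,1), y|x ~ N(a + b x, 1)) is an assumption for the demo, chosen so
# the marginal of x depends only on a, as in the exercise.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
a_true, b_true = 1.0, 0.5
x = rng.normal(a_true, 1.0, size=500)
y = rng.normal(a_true + b_true * x, 1.0)

def neg_full_loglik(theta):
    a, b = theta
    return -(norm.logpdf(x, a, 1).sum() + norm.logpdf(y, a + b * x, 1).sum())

# Full-information MLE: maximize the joint likelihood over (a, b) at once.
fiml = minimize(neg_full_loglik, x0=[0.0, 0.0]).x

# Limited-information MLE: estimate a from the marginal of x alone, then
# estimate b from the conditional of y given x with a held fixed.
a_liml = x.mean()  # MLE of a under x ~ N(a, 1)
b_liml = minimize(lambda b: -norm.logpdf(y, a_liml + b * x, 1).sum(),
                  x0=[0.0]).x[0]
print(fiml, (a_liml, b_liml))
```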

Exercise. Generalize this ring distribution to r variables evenly distributed on a hypersphere in r dimensions, i.e., the microcanonical distribution of an ideal gas. Find the marginal distribution for x₁. Show that it becomes Gaussian in the limit r → ∞, provided that the radius of the sphere also grows, proportionally to √r. [Pg.11]
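A quick numerical check of this exercise (assuming the standard construction of uniform points on a sphere by normalizing a Gaussian vector):

```python
# With radius sqrt(r), the marginal of x_1 should approach N(0, 1) as r grows;
# we check the variance and fourth moment (which is 3 for a Gaussian).
import numpy as np

rng = np.random.default_rng(1)
for r in (3, 30, 300):
    g = rng.normal(size=(100_000, r))
    # uniform points on the sphere of radius sqrt(r) in r dimensions
    pts = np.sqrt(r) * g / np.linalg.norm(g, axis=1, keepdims=True)
    x1 = pts[:, 0]
    print(r, x1.var(), np.mean(x1**4))  # variance -> 1, fourth moment -> 3
```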

Exercise. Consider the marginal distribution of a subset of all variables. Express its moments in terms of the moments of the total distribution, and its characteristic function in terms of the total one. [Pg.12]

Show that these marginal distribution functions define again a process X(t). [Pg.64]

Since n_B no longer occurs as a variable one may sum over it to obtain the marginal distribution for n_X alone,... [Pg.177]
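Summing a joint distribution over one variable to obtain the marginal of the other is a one-line operation; a numerical version with an invented joint distribution:

```python
# Numerical version of the statement: given a joint distribution P(n_X, n_B)
# on a grid, summing over n_B gives the marginal of n_X alone. The joint
# table here is random, purely for illustration.
import numpy as np

P_joint = np.random.default_rng(4).dirichlet(np.ones(12)).reshape(4, 3)
P_nX = P_joint.sum(axis=1)   # marginal distribution of n_X
print(P_nX, P_nX.sum())      # sums to 1
```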

Exercise. It is possible to solve (7.4) explicitly for the special case of a harmonic potential, F(x) = −ω²x. The result is contained in (6.12), but the explicit evaluation requires a number of elementary integrations. Having found P(x, v, t) one can determine the marginal distribution P(x, t) and verify that it obeys (7.1) when γ is large. (More directly one can first determine the marginal distribution from the general expression (6.12), which reduces the number of integrations needed.)... [Pg.218]

Its solution with initial distribution δ(y − y₀) or 1/2π establishes a marginal distribution P(y, t) for y alone. Show that it obeys, if F = 2y,... [Pg.243]

This quantity is also a function of the data y, and the optimal sample size is chosen by minimizing the expected Bayes risk E[R(n, y, a, c)], where this expectation is with respect to the marginal distribution of the data. [Pg.126]

Finally, we adopt a notation involving conditional averages to express several of the important results. This notation is standard in other fields (Resnick, 2001), not without precedent in statistical mechanics (Lebowitz et al., 1967), and particularly useful here. The joint probability P(A, B) of events A and B may be expressed as P(A, B) = P(A|B)P(B), where P(B) is the marginal distribution and P(A|B) is the distribution of A conditional on B, provided that P(B) > 0. The expectation of A conditional on B is ⟨A|B⟩, the expectation of A evaluated with the distribution P(A|B) for specified B. In many texts (Resnick, 2001), that object is denoted as E(A|B), but the bracket notation for averages is firmly established in the present subject, so we follow that precedent despite the widespread recognition of the notation ⟨A|B⟩ for a different object in quantum mechanics texts. [Pg.18]
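A small numerical illustration of P(A, B) = P(A|B)P(B) and of the conditional average ⟨A|B⟩; the joint table below is invented for the demo:

```python
# P(A,B) factored into the marginal P(B) and the conditional P(A|B), and the
# conditional average <A|B> computed from the conditional distribution.
import numpy as np

# joint probabilities P(A=a, B=b) for a in {0,1,2}, b in {0,1} (illustrative)
P = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.05, 0.25]])
a_vals = np.array([0.0, 1.0, 2.0])

P_B = P.sum(axis=0)              # marginal distribution P(B)
P_A_given_B = P / P_B            # P(A|B); columns sum to 1 (requires P(B) > 0)
cond_avg = a_vals @ P_A_given_B  # <A|B> for each value of B
print(P_B, cond_avg)
```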

Generalized linear mixed models (GLMMs) provide another type of extension of LME models aimed at non-Gaussian responses, such as binary and count data. In these models, conditional on the random effects, the responses are assumed independent and with distribution in the exponential family (e.g., binomial and Poisson) (8). As with NLME models, exact likelihood methods are not available for GLMMs because they do not allow closed-form expressions for the marginal distribution of the responses. Quasi-likelihood (9) and approximate likelihood methods have been proposed instead for these models. [Pg.104]
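To see why the marginal has no closed form, consider a hypothetical random-intercept logistic model (an assumption, not a model from the text): the subject's marginal likelihood is an integral over the random effect, which can be approximated numerically, e.g., by Gauss-Hermite quadrature, as in this sketch:

```python
# Marginal likelihood of one subject's binary responses in a random-intercept
# logistic model, integrating the N(0, sigma_b^2) intercept out by
# Gauss-Hermite quadrature. Model and numbers are illustrative.
import numpy as np
from scipy.special import expit

def marginal_loglik_subject(y, eta_fixed, sigma_b, n_quad=30):
    nodes, weights = np.polynomial.hermite.hermgauss(n_quad)
    b = np.sqrt(2.0) * sigma_b * nodes           # change of variables for N(0, s^2)
    # conditional likelihood p(y | b) evaluated at each quadrature node
    p = expit(eta_fixed[:, None] + b[None, :])   # shape (n_obs, n_quad)
    cond = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.dot(cond, weights) / np.sqrt(np.pi))

y = np.array([1, 0, 1, 1])
eta = np.array([0.2, -0.1, 0.4, 0.0])  # fixed-effect linear predictor (invented)
print(marginal_loglik_subject(y, eta, sigma_b=1.0))
```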

The joint distribution for a first-order Markov chain depends only on the one-step transition probabilities and on the marginal distribution for the initial state of the process. This is because of the Markov property. A first-order Markov chain can be fit to a sample of realizations from the chain by fitting the log-linear (or a nonlinear mixed effects) model to [Y0, Y1, ..., YT-1, YT] for T realizations, because association is only present between pairs of adjacent, or consecutive, states. This model states, for instance, that the odds ratios describing the association between Y0 and Y1 are the same at any combination of states at the time points 2, ..., T. [Pg.691]
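Because the joint factors as p(Y0) Π p(Yt | Yt-1), fitting comes down to estimating the initial-state distribution and the one-step transition probabilities. The sketch below uses direct counting of adjacent pairs on toy data, a simpler stand-in for the log-linear formulation described above:

```python
# Estimate the initial-state distribution and the one-step transition matrix
# of a first-order Markov chain by counting adjacent pairs. Toy realizations.
import numpy as np

chains = [[0, 1, 1, 2, 1], [1, 1, 0, 0, 2], [2, 1, 1, 1, 0]]
n_states = 3

counts = np.zeros((n_states, n_states))
for y in chains:
    for prev, curr in zip(y[:-1], y[1:]):
        counts[prev, curr] += 1

transition = counts / counts.sum(axis=1, keepdims=True)  # rows sum to 1
initial = np.bincount([y[0] for y in chains], minlength=n_states) / len(chains)
print(initial)
print(transition)
```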

Performing principal component analysis on the ranks can help to assess the dimensionality of the ordering context. Since the marginal distributions of the ranks are the same except for ties, the difference between covariance matrix and correlation matrix is not critical. If there are subsets of indicators that segregate strongly in their loadings, then complexity is confirmed and it may be prudent to consider partitioning of the prioritization process. [Pg.323]
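A short sketch of this procedure on synthetic data: rank-transform each indicator column, then eigendecompose the correlation matrix of the ranks to inspect the loadings (the data and dimensions are invented):

```python
# PCA on column-wise ranks. Because the marginal distributions of the ranks
# coincide up to ties, using the correlation matrix here is essentially
# equivalent to using the covariance matrix.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
X = rng.lognormal(size=(200, 5))          # 5 indicators on skewed scales
R = np.apply_along_axis(rankdata, 0, X)   # rank-transform each column

corr = np.corrcoef(R, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
print(eigvals[order] / eigvals.sum())     # variance explained per component
print(eigvecs[:, order[:2]])              # loadings of the first two components
```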

Equations (6.18) and (6.19) are referred to as the conditional model and imply that Y ~ N(Xβ + ZU, R). However, inference is not done on the conditional model but instead on the marginal distribution, which is obtained by integrating out the random effects. Under the marginal distribution, the expected value for the ith subject is... [Pg.185]
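For the linear case the integration can be written down directly: with U ~ N(0, G), the marginal is Y ~ N(Xβ, ZGZ' + R). A small numerical sketch with invented matrices:

```python
# Marginal mean and covariance of a linear mixed-effects model obtained by
# integrating out the random effects: Y ~ N(X b, Z G Z' + R).
# All matrices below are small illustrative examples.
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])  # fixed-effects design
Z = np.ones((3, 1))                                  # random-intercept design
beta = np.array([2.0, 0.5])
G = np.array([[1.5]])          # random-effect variance
R = 0.25 * np.eye(3)           # residual covariance

marginal_mean = X @ beta
marginal_cov = Z @ G @ Z.T + R
print(marginal_mean)
print(marginal_cov)
```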

Figure A.4 Joint pdf of the bivariate normal distribution and the marginal distributions for X and Y. In this example, both X and Y have a standard normal distribution with a correlation between variables of 0.6.
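A numerical companion to the figure, using the stated correlation of 0.6: the marginals of a bivariate normal are normal regardless of the correlation, which is easy to verify by simulation:

```python
# Draw from a bivariate normal with standard normal marginals and correlation
# 0.6, then check that each marginal is N(0, 1).
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)

print(xy.mean(axis=0))                      # both near 0
print(xy.std(axis=0))                       # both near 1
print(np.corrcoef(xy, rowvar=False)[0, 1])  # near 0.6
```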
