Big Chemical Encyclopedia


Exponential family

Wu and Cinar (1996) use a polynomial approximation (ARMENSI) of the error density function f, based on a generalized exponential family, such as... [Pg.227]

ML is the approach most commonly used to fit a distribution of a given type (Madgett 1998; Vose 2000). An advantage of ML estimation is that it is part of the broad statistical framework of likelihood-based methodology, which provides statistical hypothesis tests (likelihood-ratio tests) and confidence intervals (Wald and profile likelihood intervals) as well as point estimates (Meeker and Escobar 1995). MLEs are invariant under parameter transformations: the MLE of a 1-to-1 function of a parameter is obtained by applying the function to the MLE of the parameter. In most situations of interest to risk assessors, MLEs are consistent and sufficient (one distribution for which sufficient statistics of dimension smaller than n do not exist, MLEs or otherwise, is the Weibull, which is not an exponential family). When MLEs are biased, the bias ordinarily disappears asymptotically (as data accumulate). ML may or may not require numerical optimization (of the likelihood function), depending on the distributional model. [Pg.42]
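The invariance property can be illustrated with a minimal NumPy sketch. The data are simulated, and the exponential distribution is used because its MLE has a closed form (no numerical optimization needed); all names and values here are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=1000)  # true scale 1/lambda = 2.0

# For the exponential distribution the MLE has a closed form:
# lambda_hat = 1 / (sample mean).
lam_hat = 1.0 / data.mean()

# Invariance: the MLE of the scale (a 1-to-1 function of lambda)
# is obtained by applying that function to lambda_hat.
scale_hat = 1.0 / lam_hat
print(lam_hat, scale_hat)  # scale_hat is close to the true scale, 2.0
```

Here no iterative optimizer is needed; for models without closed-form MLEs, the same invariance still holds once the likelihood has been maximized numerically.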

(Exponential Families of Distributions) For each of the following distributions, determine whether it is an exponential family by examining the log likelihood function. Then identify the sufficient statistics. [Pg.94]

The log likelihood is found by summing these functions. The third term does not factor in the fashion needed to produce an exponential family. There are no sufficient statistics for this distribution. [Pg.94]
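By contrast, when a distribution is an exponential family, the log likelihood does factor and a sufficient statistic emerges. A short numerical check for the Poisson case (a sketch with hypothetical samples): up to a constant not involving μ, the log likelihood depends on the data only through Σyᵢ, so two samples with equal n and equal totals give log likelihood curves differing only by a constant.

```python
import numpy as np
from math import lgamma

def poisson_loglik(mu, y):
    """Poisson log likelihood: sum(y)*log(mu) - n*mu - sum(log(y_i!)).
    As a function of mu, it depends on the data only through sum(y)."""
    y = np.asarray(y)
    return y.sum() * np.log(mu) - len(y) * mu - sum(lgamma(v + 1) for v in y)

# Two samples with the same n and the same total sum(y) = 6:
mus = np.linspace(0.5, 5.0, 10)
d1 = [poisson_loglik(m, [1, 2, 3]) for m in mus]
d2 = [poisson_loglik(m, [0, 2, 4]) for m in mus]

diffs = np.subtract(d1, d2)
print(np.allclose(diffs, diffs[0]))  # constant difference: sum(y) is sufficient
```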

Generalized linear mixed models (GLMMs) provide another type of extension of LME models aimed at non-Gaussian responses, such as binary and count data. In these models, conditional on the random effects, the responses are assumed independent, with distributions in the exponential family (e.g., binomial and Poisson) (8). As with NLME models, exact likelihood methods are not available for GLMMs because they do not allow closed-form expressions for the marginal distribution of the responses. Quasi-likelihood (9) and approximate likelihood methods have been proposed instead for these models. [Pg.104]

Efron, B. More accurate confidence intervals in exponential families. Biometrika 1992; 79: 231-245. [Pg.369]

This is an exponential family; the sufficient statistics are Σᵢyᵢ and Σᵢxᵢ. [Pg.94]

After obtaining an estimator of a population parameter (μ, σ) or a parameter associated with a particular family of distributions, such as λ in an exponential family, the sampling distribution of the estimator is the distribution of the estimator as its possible realizations vary across all possible samples that may arise from a given population. For example, let θ be a parameter for a population or for a family of distributions. Let Y1, ..., Yn be i.i.d. r.v.'s with c.d.f. F completely unspecified or where θ... [Pg.46]
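The sampling-distribution idea can be made concrete by simulation: draw many independent samples from one population and record the estimator from each; the spread of those recorded values approximates the sampling distribution. A minimal sketch (the population values and sample sizes are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 independent samples of size n = 25 from a normal(10, 3^2)
# population; record the sample mean of each.
n, reps = 25, 10_000
estimates = rng.normal(loc=10.0, scale=3.0, size=(reps, n)).mean(axis=1)

print(estimates.mean())       # close to the population mean, 10
print(estimates.std(ddof=1))  # close to sigma/sqrt(n) = 3/5 = 0.6
```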

Stehlik, M. 2003. Distributions of exact tests in the exponential family. Metrika 57 145-164. [Pg.854]

The latter equation defines the forecast evolution in the system. For example, Karlin (1960) considers densities of the exponential family... [Pg.405]

We refer the reader to Karlin (1960), Scarf (1959) and Iglehart (1964) for further statistical results related to the exponential family. [Pg.405]

Nevertheless, what currently passes for frequentist parametric statistics includes a collection of techniques, concepts, and methods from each of these two schools, despite the disagreements between the founders. Perhaps this is because, for the very important cases of the normal distribution and the binomial distribution, the MLE and the UMVUE coincided. Efron (1986) suggested that the emotionally loaded terms (unbiased, most powerful, admissible, etc.) contributed by Neyman, Pearson, and Wald reinforced the view that inference should be based on likelihood and this reinforced the frequentist dominance. Frequentist methods work well in the situations for which they were developed, namely for exponential families where there are minimal sufficient statistics. Nevertheless, they have fundamental drawbacks including ... [Pg.3]

Where the observation distribution comes from an exponential family, and the prior comes from the family that is conjugate to the observation distribution. [Pg.5]

The observation(s) come from the observation density f(y|θ), where θ is the fixed parameter value. It gives the probability density over all possible observation values for the given value of the parameter. The parameter space, Θ, is the set of all possible parameter values. The parameter space ordinarily has the same dimension as the total number of parameters, p. The sample space, S, is the set of all possible values of the observation(s). The dimension of the sample space is the number of observations, n. Many of the commonly used observation distributions come from the one-dimensional exponential family of distributions. When we are in the one-dimensional exponential family, the sample space may be reduced to a single dimension due to the single sufficient statistic. [Pg.6]

We will let the dimensions be p = 1 and n = 1 for illustrative purposes. This is the case when we have a single parameter and a single observation (or we have a random sample of observations from a one-dimensional exponential family). The inference universe has two dimensions. The vertical dimension is the parameter space and is unobservable. The horizontal dimension is the sample space and is observable. We wish to make inference about where we are in the vertical dimension given that we know where we are in the horizontal dimension. [Pg.6]

Suppose that θ1 is the parameter of interest, and θ2 is a nuisance parameter. If there is an ancillary sufficient statistic, conditioning on it will give a likelihood that only depends on θ1, the parameter of interest, and inference can be based on that conditional likelihood. This can only be true in certain exponential families, so it is of limited general use when nuisance parameters are present. Instead, likelihood... [Pg.13]

A function of the data that is independent of the parameter of interest. Fisher developed ancillary statistics as a way to make inferences when nuisance parameters are present. However, this approach only works in the exponential family of densities, so it cannot be used in the general case. See Cox and Hinkley (1974). [Pg.13]

In this chapter we go over the cases where the posterior distribution can be found easily, without having to do any numerical integration. In these cases, the observations come from a distribution that is a member of the exponential family of distributions, and a conjugate prior is used. The methods developed in this chapter will be used in later chapters as steps to help us draw samples from the posterior for more complicated models having many parameters. [Pg.61]

The density f(y|θ) is a member of the one-dimensional exponential family of densities if and only if the observation density function can be written... [Pg.62]

The conjugate family. The likelihood function has the same formula as the observation density, only with the observation held fixed and the parameter varying over all possible values. For a one-dimensional exponential family likelihood, we can absorb the factor B(y) into the constant of proportionality since it is only a scale factor and does not affect the shape. The conjugate family of priors for a member of the one-dimensional exponential family of densities has the same form as the likelihood. It is given by... [Pg.62]

Thus when the observations are from a one-dimensional exponential family, and the prior is from the conjugate family, the posterior is easily found without any need for integration. All that is needed is the simple updating formulas for the constants. We will now look at some common distributions that are members of the one-dimensional exponential family of densities. [Pg.62]
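As a sketch of these "simple updating formulas", take the best-known case: a binomial observation with its conjugate beta prior. The posterior is again a beta, with the constants updated by the counts. The function name below is hypothetical, not from the source.

```python
def beta_binomial_update(a, b, y, n):
    """Posterior beta parameters after observing y successes in n trials,
    starting from a beta(a, b) prior: beta(a + y, b + n - y)."""
    return a + y, b + (n - y)

# Uniform beta(1, 1) prior, then 7 successes in 10 trials:
a_post, b_post = beta_binomial_update(1, 1, y=7, n=10)
print(a_post, b_post)              # 8 4
print(a_post / (a_post + b_post))  # posterior mean, 2/3
```

No integration is performed anywhere; the normalizing constant is known because the posterior is recognized as a member of the beta family.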

Let us look at the case where we have a random sample of normally distributed observations. We will look at the two parameters, the mean μ and the variance σ², separately. In this section we will find the posterior distribution of the mean μ given that the variance σ² is a known constant. That way the observations come from a one-dimensional exponential family, making the analysis relatively simple. [Pg.75]
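The resulting update can be sketched as follows (the function name is hypothetical; this is the standard normal-mean update with known variance: precisions, i.e. reciprocal variances, add, and the posterior mean is a precision-weighted average of the prior mean and the sample mean):

```python
def normal_known_var_update(m0, s0_sq, ybar, n, sigma_sq):
    """Posterior mean and variance of mu for a normal(m0, s0_sq) prior,
    n observations with sample mean ybar, and known variance sigma_sq."""
    prec = 1.0 / s0_sq + n / sigma_sq                 # precisions add
    m1 = (m0 / s0_sq + n * ybar / sigma_sq) / prec    # weighted average
    return m1, 1.0 / prec

# Vague normal(0, 100) prior, 10 observations averaging 5, sigma^2 = 4:
m1, v1 = normal_known_var_update(m0=0.0, s0_sq=100.0, ybar=5.0, n=10, sigma_sq=4.0)
print(round(m1, 3), round(v1, 3))  # 4.98 0.398
```

With a vague prior, the posterior mean sits very close to the sample mean, as expected.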

More realistically, both parameters are unknown, and then the observations would be from a two-dimensional exponential family. Usually we want to do our inference on the mean μ and regard the variance as a nuisance parameter. Using a joint prior,... [Pg.77]

Suppose y is a random draw from a normal(μ, σ²) distribution. [Pg.78]

We see that this is a member of the one-dimensional exponential family with... [Pg.78]

The posterior can be found easily when the observations come from a one-dimensional exponential family. The observation density has the form... [Pg.89]

Many commonly used distributions are members of the one-dimensional exponential family. These include the binomial(n, π) and Poisson(μ) distributions for count data, the geometric(π) and negative binomial(r, π) distributions for waiting times in a Bernoulli process, the exponential(λ) and gamma(n, λ) distributions for waiting times in a Poisson process, and the normal(μ, σ²) where... [Pg.89]
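For one of these members, the Poisson(μ) with its conjugate gamma prior, the update again reduces to adding counts to the constants. A sketch (the helper name is hypothetical; the gamma is in the rate parameterization, where the prior mean of μ is alpha/beta):

```python
def poisson_gamma_update(alpha, beta, counts):
    """Gamma(alpha, beta) prior with i.i.d. Poisson counts:
    posterior is gamma(alpha + sum(counts), beta + len(counts))."""
    return alpha + sum(counts), beta + len(counts)

# Gamma(2, 1) prior, then observed counts 3, 0, 2, 4:
a, b = poisson_gamma_update(2.0, 1.0, [3, 0, 2, 4])
print(a, b)   # 11.0 5.0
print(a / b)  # posterior mean of the Poisson rate, 2.2
```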

The normal(μ, σ²) where both parameters are unknown is a member of a two-dimensional exponential family. Surprisingly, the product of independent conjugate priors for μ and σ² is not the conjugate prior for the case where μ and σ² are both unknown. [Pg.91]

Nelder and Wedderburn (1972) extended the general linear model in two ways. First, they relaxed the assumption that the observations have the normal distribution to allow the observations to come from some one-dimensional exponential family, not necessarily normal. Second, instead of requiring the mean of the observations to equal a linear function of the predictor, they allowed a function of the mean to be linked to (set equal to) the linear predictor. They named this the generalized linear model and called the function set equal to the linear predictor the link function. The logistic regression model satisfies the assumptions of the generalized linear model. They are ... [Pg.182]

The observations yi come from a one-dimensional exponential family of distributions. In the logistic regression case this is the binomial distribution. [Pg.182]

The logistic regression model is an example of a generalized linear model. The observations come from a member of the one-dimensional exponential family, in this case the binomial. Each observation has its own parameter value that is linked to the linear predictor by a link function, in this case the logit link. The observations are all independent. [Pg.199]
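A minimal sketch of fitting such a model by Newton-Raphson (equivalently, iteratively reweighted least squares) in NumPy. The data are simulated and every name is illustrative; a production fit would use a dedicated GLM routine.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, iters=25):
    """Newton-Raphson (IRLS) fit of logistic regression, the canonical
    GLM for binomial data with the logit link."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        W = p * (1 - p)                  # binomial variance function
        H = X.T @ (W[:, None] * X)       # Fisher information
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

# Toy data: success probability rises with x, true coefficients (-0.5, 1.5).
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=500)
X = np.column_stack([np.ones_like(x), x])
y = (rng.uniform(size=500) < sigmoid(-0.5 + 1.5 * x)).astype(float)

b_hat = fit_logistic(X, y)
print(b_hat)  # roughly [-0.5, 1.5] for this simulated data
```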



© 2024 chempedia.info