
Gaussian distribution maximum-likelihood estimates

We assume that the errors in the measurements are statistically independent and scaled by the weights w in such a way that they have equal variance (σ²) and come from a Gaussian distribution. Under these reasonable assumptions, weighted least squares coincides with the maximum likelihood estimate. The (weighted) experimental errors of the measurements are given as in (6.2). This means that the covariance matrix of the experimental errors is given by ... [Pg.232]
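To make the equivalence concrete, here is a minimal sketch (with an assumed straight-line model and synthetic data, none of it from the cited text) that fits the same weighted measurements once by the closed-form weighted least squares solution and once by numerically maximizing the Gaussian likelihood; the two estimates agree.

```python
# Minimal sketch: straight-line fit y = b0 + b1*x with independent
# Gaussian errors whose standard deviations sigma_i are known, so the
# weights are w_i = 1/sigma_i.  Under these assumptions weighted least
# squares and the maximum likelihood estimate coincide.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5 + 0.1 * x                       # known, unequal error bars
y = 2.0 + 0.7 * x + rng.normal(0.0, sigma)  # synthetic measurements

# Weighted least squares via the normal equations.
W = np.diag(1.0 / sigma**2)
X = np.column_stack([np.ones_like(x), x])
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Maximum likelihood: minimize the Gaussian negative log-likelihood
# (additive constants dropped; sigma is fixed, so only the weighted
# sum of squared residuals matters).
def neg_log_lik(beta):
    resid = y - X @ beta
    return 0.5 * np.sum((resid / sigma) ** 2)

beta_ml = minimize(neg_log_lik, x0=np.zeros(2)).x
print(beta_wls, beta_ml)  # the two estimates agree to solver tolerance
```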

These considerations raise a question: how can we determine the optimal value of n and the coefficients in (2.54) and (2.56)? Clearly, if the expansion is truncated too early, some terms that contribute importantly to P0(ΔU) will be lost. On the other hand, terms above some threshold carry no information and, instead, only add statistical noise to the probability distribution. One solution to this problem is to use physical intuition [40]. Perhaps a better approach is the one based on the maximum likelihood (ML) method, in which we determine the maximum number of terms supported by the provided information. For the expansion in (2.54), calculating the number of Gaussian functions and their mean values and variances using ML is a standard problem solved in many textbooks on Bayesian inference [43]. For the expansion in (2.56), the ML solution for n and the coefficients also exists. Just as in the case of the multistate Gaussian model, this approach appears to improve the free energy estimates considerably when P0(ΔU) is a broad function. [Pg.65]
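As an illustration of ML-based selection of the number of Gaussian terms, the sketch below fits mixtures of increasing size to hypothetical ΔU samples. In place of the full ML analysis of the cited text it uses the Bayesian information criterion, a standard penalized-likelihood surrogate, to pick the largest n the data actually support.

```python
# Sketch of choosing the number of Gaussian terms in an expansion of
# P0(dU).  BIC penalizes the maximized likelihood for extra terms, so
# components that would only fit statistical noise are rejected.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical dU samples drawn from two overlapping Gaussians.
dU = np.concatenate([rng.normal(-2.0, 1.0, 500),
                     rng.normal(3.0, 1.5, 500)]).reshape(-1, 1)

fits = [GaussianMixture(n_components=n, random_state=0).fit(dU)
        for n in range(1, 6)]
bics = [f.bic(dU) for f in fits]
n_best = int(np.argmin(bics)) + 1
print(n_best)  # terms beyond this only add statistical noise
```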

The problem considered here is the estimation of the state vector Xk (which contains the unknown parameters) from the observations of the vectors Yk = [y0, y1, ..., yk]. Because the collection of variables Yk = (y0, y1, ..., yk) is jointly Gaussian, we can estimate Xk by maximizing the likelihood of the conditional probability distribution p(Xk | Yk), evaluated at the observed values of the conditioning variables. Alternatively, we can seek the estimate X̂k that minimizes the mean square error ek = Xk − X̂k. In both cases (maximum likelihood or least squares), the optimal estimate for jointly Gaussian variables is the conditional mean, and the error in the estimate is the conditional covariance. [Pg.179]
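The quoted result is the standard Gaussian conditioning formula. A minimal sketch, with illustrative one-dimensional covariance blocks that are not taken from the source:

```python
# Gaussian conditioning: when (X, Y) are jointly Gaussian, the optimal
# estimate of X given Y = y is the conditional mean, and the estimation
# error has the conditional covariance.  All numbers are illustrative.
import numpy as np

mu_x, mu_y = np.array([1.0]), np.array([0.0])
Sxx = np.array([[2.0]])   # Cov(X, X)
Sxy = np.array([[1.0]])   # Cov(X, Y)
Syy = np.array([[1.5]])   # Cov(Y, Y)

def conditional_mean_cov(y):
    """X | Y=y ~ N(x_hat, P): both the ML and the least-squares answer."""
    gain = Sxy @ np.linalg.inv(Syy)    # "Kalman-like" gain
    x_hat = mu_x + gain @ (y - mu_y)   # conditional mean
    P = Sxx - gain @ Sxy.T             # conditional (error) covariance
    return x_hat, P

print(conditional_mean_cov(np.array([0.8])))
```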

Unlike maximum likelihood rectification, Bayesian rectification can remove errors even in the absence of process models. Another useful feature of the Bayesian approach is that, if the probability distributions of the prior and the noise are Gaussian, the error of approximation between the noise-free and rectified measurements can be estimated before rectifying the data as ... [Pg.425]
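A minimal sketch of this property, assuming a direct-measurement model y = x + v with Gaussian prior N(mu0, P0) on the noise-free value and Gaussian noise N(0, R) (all matrices illustrative): the posterior error covariance depends only on P0 and R, so it is known before any measurement is processed.

```python
# Gaussian prior + Gaussian noise: the rectified (posterior mean)
# estimate is linear in y, and its error covariance P is independent
# of the data -- computable before rectification, as stated above.
import numpy as np

P0 = np.diag([1.0, 4.0])   # prior covariance of the noise-free signal
R = np.diag([0.5, 0.5])    # measurement-noise covariance
mu0 = np.zeros(2)

K = P0 @ np.linalg.inv(P0 + R)   # posterior gain
P = (np.eye(2) - K) @ P0         # error covariance, data-independent

def rectify(y):
    return mu0 + K @ (y - mu0)   # posterior mean given measurement y

print(P)                          # known before any data arrive
print(rectify(np.array([0.4, -1.2])))
```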

Assume that an appropriate response surface model has been chosen to represent the experimental data. Then, for estimating the values of the parameters θ in the model, the method of maximum likelihood can be utilized. Under the assumption of a Gaussian distribution of the random error terms ε, the method of maximum likelihood can be replaced by the more common method of least squares (Box and Draper 1987). In the latter case, the parameters θ are determined in such a way that the sum of squares of the differences between the value of ... [Pg.3620]
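As a sketch of the least-squares step, the code below fits an assumed first-order-plus-interaction response surface to synthetic data; the model form and data are illustrative, not from the handbook entry.

```python
# Least-squares estimation of response surface parameters theta, which
# coincides with maximum likelihood under Gaussian error terms.
import numpy as np

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, (2, 30))       # two coded factors
y = 1.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + rng.normal(0, 0.1, 30)

# Design matrix for the assumed response surface model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

# theta minimizes the sum of squared differences between model and data.
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)
```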


See other pages where Gaussian distribution maximum-likelihood estimates is mentioned: [Pg.648], [Pg.298], [Pg.692], [Pg.4], [Pg.228], [Pg.192], [Pg.533], [Pg.429]

