
Maximum likelihood estimation parameter estimates

MAXIMUM LIKELIHOOD ESTIMATION OF PARAMETERS FROM VLE DATA... [Pg.278]

MAXIMUM LIKELIHOOD ESTIMATION OF PARAMETERS FROM VLE DATA. CONTROL PARAMETERS WERE SET AS FOLLOWS -... [Pg.284]

Figure 7 shows that, for the maximum likelihood estimator, the variance of the slope estimate decreases as the telescope aperture size increases. For the centroid estimator the variance of the slope estimate also decreases with increasing aperture size while the telescope aperture is smaller than the Fried parameter, r0 (Fried, 1966), but saturates once the aperture exceeds this value. [Pg.391]

Weighted regression of ²³⁸U-²³⁴U-²³⁰Th-²³²Th isotope data on three or more coeval samples provides robust estimates of the isotopic information required for age calculation. Ludwig (2003) details the use of maximum likelihood estimation of the regression parameters in either coupled XY-XZ isochrons or a single three-dimensional XYZ isochron, where X, Y and Z correspond to either (1) ²³⁸U/²³²Th, ²³⁰Th/²³²Th and... [Pg.414]

The above implicit formulation of maximum likelihood estimation is valid only under the assumption that the residuals are normally distributed and the model is adequate. From our own experience we have found that implicit estimation provides the easiest and computationally the most efficient solution to many parameter estimation problems. [Pg.21]

This choice of Qi yields maximum likelihood estimates of the parameters if the error terms in each response variable and for each experiment (e_ij, i=1,...,N; j=1,...,m) are all identically and independently distributed (i.i.d.) normally with zero mean and variance σ². Namely, E(e_i) = 0 and COV(e_i) = σ²I, where I is the m×m identity matrix. [Pg.26]
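
Under these assumptions the link between maximum likelihood and least squares can be made explicit. A minimal sketch in the notation above, with e_i denoting the m-vector of residuals for experiment i:

\[
L(\mathbf{k},\sigma^2)=\prod_{i=1}^{N}(2\pi\sigma^2)^{-m/2}\exp\!\left(-\frac{\mathbf{e}_i^\top\mathbf{e}_i}{2\sigma^2}\right),
\qquad
\ln L=-\frac{Nm}{2}\ln(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{i=1}^{N}\mathbf{e}_i^\top\mathbf{e}_i,
\]

so, for fixed σ², maximizing L over the model parameters k is equivalent to minimizing the sum of squared residuals.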

An alternative approach to these methods is to obtain the influence function directly from the error distribution. In this case, for the maximum likelihood estimation of the parameters, the ψ function can be chosen as follows ... [Pg.227]
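
In M-estimation terms, the maximum likelihood choice of ψ is the negative derivative of the log error density; a sketch, assuming an error density f:

\[
\psi(e)=-\frac{\partial}{\partial e}\ln f(e)=-\frac{f'(e)}{f(e)},
\]

which for Gaussian errors reduces to ψ(e) = e/σ², recovering least squares.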

Maximum likelihood estimators also have another desirable property: invariance. Let us denote the maximum likelihood estimator of the parameter θ by θ̂. If f(θ) is a single-valued function of θ, the maximum likelihood estimator of f(θ) is f(θ̂). [Pg.904]

The optimal parameter p can be found by maximum-likelihood estimation, but even the optimal p will not guarantee that the Box-Cox transformed values are symmetric. Note that all these transformations are only defined for positive data values; in case of negative values, a constant has to be added to make them positive. Within R, the Box-Cox transformation can be applied to the data of a vector x as follows ... [Pg.48]
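
The R command itself is truncated in the excerpt; as an illustration of the same idea, here is a minimal Python sketch using scipy.stats.boxcox, which returns the ML estimate of the power parameter (the data vector is a made-up placeholder):

    import numpy as np
    from scipy import stats

    # positive, right-skewed placeholder data
    x = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=500)

    # with lmbda=None, boxcox returns the transformed data and the
    # maximum likelihood estimate of the power parameter
    xt, p_opt = stats.boxcox(x)
    print(f"ML-optimal Box-Cox parameter: {p_opt:.3f}")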

The parameter estimation for the mixture model (Equation 5.25) is based on maximum likelihood estimation. The likelihood function L is defined as the product of the densities for the objects, i.e.,... [Pg.227]
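
As an illustration, a minimal sketch of such a likelihood for a two-component Gaussian mixture (the weights, means, and standard deviations below are hypothetical placeholders, not the parameters of Equation 5.25):

    import numpy as np
    from scipy import stats

    def mixture_log_likelihood(x, weights, means, sds):
        # mixture density evaluated at every object, then the log of the
        # product over objects (computed as a sum of logs)
        dens = sum(w * stats.norm.pdf(x, m, s)
                   for w, m, s in zip(weights, means, sds))
        return np.sum(np.log(dens))

    x = np.array([1.1, 0.9, 5.2, 4.8, 5.0])
    print(mixture_log_likelihood(x, weights=[0.4, 0.6],
                                 means=[1.0, 5.0], sds=[0.5, 0.5]))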

This model assumes that any dosage effect has the same mechanism as that which causes the background incidence. Low-dose linearity follows directly from this additive assumption, provided that some fraction of the background effect, however small, is additive. A best-fit curve is fitted to the data obtained from a long-term rodent cancer bioassay using computer programs. The estimates of the parameters in the polynomial are called Maximum Likelihood Estimates (MLEs), after the statistical procedure used for fitting the curve, and can be considered best-fit estimates. Provided the fit of the model is satisfactory, the estimates of these parameters are used to extrapolate to low-dose exposures. [Pg.303]
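
The polynomial in question is typically that of the linearized multistage model; a commonly used form (an assumption here, since the source does not spell the model out) is

\[
P(d)=1-\exp\!\left[-\left(q_0+q_1 d+q_2 d^2+\cdots+q_k d^k\right)\right],\qquad q_i\ge 0,
\]

where P(d) is the lifetime tumor probability at dose d, the fitted q_i are the maximum likelihood estimates, and the q_1 term is what produces low-dose linearity.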

Maximum likelihood (ML) is the approach most commonly used to fit a parametric distribution (Madgett 1998; Vose 2000). The idea is to choose the parameter values that maximize the probability of the data actually observed (for fitting discrete distributions) or the joint density of the data observed (for continuous distributions). Estimates or estimators based on the ML approach are termed maximum-likelihood estimates or estimators (MLEs). [Pg.35]
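
A minimal sketch of this recipe for a continuous distribution, assuming gamma-distributed data (both the distribution choice and the data are placeholders):

    import numpy as np
    from scipy import stats, optimize

    data = stats.gamma.rvs(a=2.0, scale=3.0, size=1000, random_state=0)

    def neg_log_likelihood(params):
        a, scale = params
        if a <= 0 or scale <= 0:
            return np.inf  # keep the search inside the valid region
        # negative log of the joint density of the observed data
        return -np.sum(stats.gamma.logpdf(data, a=a, scale=scale))

    mle = optimize.minimize(neg_log_likelihood, x0=[1.0, 1.0],
                            method="Nelder-Mead")
    print(mle.x)  # MLEs of the shape and scale parameters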

The system of three equations (cost and two shares) can be estimated as discussed in the text. Invariance is achieved by using a maximum likelihood estimator. The five parameters eliminated by the restrictions can be estimated after the others are obtained just by using the restrictions. The restrictions are linear, so the standard errors are also straightforward to obtain. [Pg.70]

Therefore, the maximum likelihood estimator is 1/ȳ and its asymptotic variance is θ²/n. Since we found f(y) by factoring f(x,y) into f(y)f(x|y) (apparently, given our result), the answer follows immediately. Just divide the expression used in part e. by f(y). This is a Poisson distribution with parameter βy. The log-likelihood function and its first derivative are... [Pg.86]
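
A short derivation of that result, assuming the exponential density f(y) = θe^{-θy} (consistent with the estimator 1/ȳ and variance θ²/n quoted above):

\[
\ln L(\theta)=n\ln\theta-\theta\sum_{i=1}^{n}y_i,\qquad
\frac{\partial\ln L}{\partial\theta}=\frac{n}{\theta}-\sum_{i=1}^{n}y_i=0
\;\Rightarrow\;\hat\theta=\frac{1}{\bar y},
\]
\[
-E\!\left[\frac{\partial^2\ln L}{\partial\theta^2}\right]=\frac{n}{\theta^2}
\;\Rightarrow\;\operatorname{Asy.\,Var}(\hat\theta)=\frac{\theta^2}{n}.
\]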

(Limited Information Maximum Likelihood Estimation.) Consider a bivariate distribution for x and y that is a function of two parameters, α and β. The joint density is f(x,y|α,β). We consider maximum likelihood estimation of the two parameters. The full information maximum likelihood estimator is the now familiar maximum likelihood estimator of the two parameters. Now, suppose that we can factor the joint distribution as done in Exercise 3, but in this case, we have f(x,y|α,β) = f(y|x,α,β)f(x|α). That is, the conditional density for y is a function of both parameters, but the marginal distribution for x involves only... [Pg.88]
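
With that factorization the log-likelihood separates into two pieces (sketch):

\[
\ln L(\alpha,\beta)=\sum_{i=1}^{n}\ln f(y_i\mid x_i,\alpha,\beta)+\sum_{i=1}^{n}\ln f(x_i\mid\alpha),
\]

so a limited information estimator may maximize only one of the terms, trading some efficiency against full information maximum likelihood.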

For random sampling from the classical regression model in (17-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters. [Pg.90]

Show that the maximum likelihood estimates of the parameters are... [Pg.92]

We have considered maximum likelihood estimation of the parameters of this model at several points. Consider, instead, a GMM estimator based on the result that... [Pg.95]

Using the data above, obtain maximum likelihood estimates of the unknown parameters of the model. [Hint Consider the probabilities as the unknown parameters.]... [Pg.108]

The parameters A, k and b must be estimated from sr. The general problem of parameter estimation is to estimate a parameter, θ, given a number of samples, x_i, drawn from a population that has a probability distribution P(x; θ). It can be shown that there is a minimum variance bound (MVB), known as the Cramér-Rao inequality, that limits the accuracy of any method of estimating θ [55]. There are a number of methods that approach the MVB and give unbiased estimates of θ for large sample sizes [55]. Among the more popular of these methods are maximum likelihood estimators (MLE) and least-squares estimation (LS). The MLE... [Pg.34]
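
For reference, the Cramér-Rao inequality for an unbiased estimator θ̂ built from n i.i.d. samples reads

\[
\operatorname{Var}(\hat\theta)\;\ge\;\frac{1}{n\,I(\theta)},\qquad
I(\theta)=E\!\left[\left(\frac{\partial\ln P(x;\theta)}{\partial\theta}\right)^{2}\right]
=-E\!\left[\frac{\partial^{2}\ln P(x;\theta)}{\partial\theta^{2}}\right],
\]

and the MLE attains this bound asymptotically, which is what makes it a natural benchmark here.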

One powerful technique is Maximum Likelihood Estimation (MLE), which requires the derivation of the joint conditional probability density function (PDF) of the output sequence, conditional on the model parameters. The input e[n] to the system shown in Figure 4.25 is assumed to be a white Gaussian noise (WGN) process with zero mean and a variance of σ². The probability density of the noise input is ... [Pg.110]
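
The density referred to is presumably the zero-mean Gaussian; for a length-N i.i.d. sequence the joint density then factorizes:

\[
p(e)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{e^2}{2\sigma^2}\right),\qquad
p\bigl(e[0],\ldots,e[N-1]\bigr)=\prod_{n=0}^{N-1}p\bigl(e[n]\bigr).
\]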

The aim of parameter estimation is to adapt the model function to the observations so as to obtain model parameters that describe the observed data best. In NONMEM this is done by minimizing the extended least squares objective function O_ELS, which provides maximum likelihood estimates under Gaussian conditions [13]. The equation for calculating the O_ELS function is given in the following ... [Pg.459]
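
The extended least squares objective is commonly written as follows (a sketch of the standard form; the source's exact notation may differ):

\[
O_{\mathrm{ELS}}(\theta)=\sum_{i=1}^{n}\left[\frac{\bigl(y_i-\hat y_i(\theta)\bigr)^{2}}{\xi_i(\theta)}+\ln\xi_i(\theta)\right],
\]

where ŷ_i is the model prediction and ξ_i the modeled variance of observation i; the ln ξ_i term is what distinguishes it from ordinary weighted least squares and makes the minimum a maximum likelihood estimate under Gaussian errors.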

Both Astrom (53) and Box and Jenkins (54) have developed modeling approaches for equation (13), which involve obtaining maximum likelihood estimates of the parameters in the postulated model followed by diagnostic checking of the sum of the residuals. The Box and Jenkins method also develops a detailed model for the process disturbance. Both of the above references include derivations of the minimum variance control. [Pg.106]

A comparison of the various fitting techniques is given in Table 5. Most of these techniques depend either explicitly or implicitly on a least-squares minimization. This is appropriate provided the noise present is normally distributed; in this case, least-squares estimation is equivalent to maximum-likelihood estimation [147]. If the noise is not normally distributed, a least-squares estimation is inappropriate. Table 5 includes an indication of how each technique scales with N, the number of data points, for the case in which N is large. A detailed discussion of how different techniques scale with N, and also with the number of parameters, is given in the PhD thesis of Vanhamme [148]. [Pg.112]

Another optimization approach was followed by Wagner [68], who developed a methodology for performing simultaneous model parameter estimation and source characterization, in which the inverse model is posed as a non-linear maximum likelihood estimation problem. The hydrogeologic and source parameters were estimated based on hydraulic head and contaminant concentration measurements. In essence, this method minimizes the following ... [Pg.77]

All parameter combinations enclosed by the ellipsoidal surface do not deviate significantly from the maximum likelihood estimates b at the probability level (1 - α). Examples for a two-dimensional case are given in Fig. 11, to be discussed later. [Pg.315]

In such an analysis, one selects a suitable objective function and then varies the parameters so as to maximize or minimize the function. Theoretically, the objective function should be derived using the statistical principles of maximum-likelihood estimation. In practice, however, it is satisfactory to use a weighted-least-squares analysis, as follows ... [Pg.115]
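
A weighted least squares objective of the kind meant here typically has the form (generic notation, not the source's):

\[
S(\theta)=\sum_{i=1}^{n}w_i\bigl(y_i-f(x_i;\theta)\bigr)^{2},\qquad w_i=\frac{1}{\sigma_i^{2}},
\]

which coincides with the maximum likelihood objective when the measurement errors are independent Gaussians with known variances σ_i².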

Maximum likelihood estimation Criterion under which the best estimate is the one which maximizes the likelihood of the observed event. Maximum likelihood estimation is the classical statistical criterion for estimating unknown parameter values from observed data (Sielken, Ch. 8). [Pg.398]

The estimates are the maximum likelihood estimates determined by NONMEM. %RSE is the percent relative standard error, calculated by dividing the asymptotic standard error by the parameter estimate. Statistical significance is the significance level as determined by the log likelihood difference. NT = not tested. [Pg.712]

Haines et al. (47) suggested the criterion of Bayesian D-optimality, which maximizes a concave function of the information matrix and in essence minimizes the generalized variance of the maximum likelihood estimators of the two parameters of the logistic regression. The authors underline that toxicity is recorded as an ordinal variable and not a simple binary variable, and that the present design needs to be extended to proportional odds models. [Pg.792]

