
Maximum likelihood equation

Thus, when the attention of the mathematicians of the time turned to the description of overdetermined systems, such as we are dealing with here, it was natural for them to seek the desired solution in terms of probabilistic descriptions. They then defined the best-fitting equation for an overdetermined set of data as being the most probable equation or, in more formal terminology, the maximum likelihood equation. [Pg.33]

The basis upon which this concept rests is the very fact that not all the data follow the same equation. Another way to express this is to note that an equation describes a line (or, more generally, a plane or hyperplane if more than two dimensions are involved; in fact, anywhere in this discussion, when we talk about a calibration line, you should mentally add the phrase "... or plane, or hyperplane ..."). Thus any point that fits the equation will fall exactly on the line. On the other hand, since the data points themselves do not fall on the line (recall that, by definition, the line is generated by applying some sort of [at this point undefined] averaging process), any given data point will not fall on the line described by the equation. The difference between these two points, the one on the line described by the equation and the one described by the data, is the error in the estimate of that data point by the equation. For each of the data points there is a corresponding point described by the equation, and therefore a corresponding error. The least-squares principle states that the sum of the squares of all these errors should have a minimum value; as we stated above, this also yields the maximum likelihood equation. [Pg.34]
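As a concrete illustration of the least-squares principle, the short sketch below fits a straight calibration line to a handful of made-up (x, y) pairs and reports the sum of squared errors that the fit minimizes; under the usual assumption of independent Gaussian errors, this least-squares line is also the maximum likelihood equation.

```python
# A minimal sketch: fitting a calibration line by least squares, which is the
# maximum-likelihood solution when the errors are independent Gaussians.
# The data arrays are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # e.g. concentrations
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])     # e.g. instrument responses

# Least-squares slope and intercept for y = a*x + b
a, b = np.polyfit(x, y, deg=1)

residuals = y - (a * x + b)    # error of each point relative to the line
sse = np.sum(residuals ** 2)   # the quantity the least-squares principle minimizes

print(f"slope={a:.3f}, intercept={b:.3f}, sum of squared errors={sse:.4f}")
```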

Example 5.10 Exact Solution of the Maximum-Likelihood Equation... [Pg.247]

Pawitan (2001) shows that for logistic regression the solution of the maximum likelihood equations can be found by iterating the following steps until convergence. Let β^(n-1) be the parameter vector at step n - 1. [Pg.183]
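A minimal sketch of such an iteration is given below, assuming the standard iteratively reweighted least-squares form of the logistic-regression likelihood equations; the variable names (X, y, beta) are illustrative and not Pawitan's notation.

```python
# Iteratively reweighted least squares for the logistic-regression MLE.
import numpy as np

def logistic_mle(X, y, tol=1e-8, max_iter=100):
    """Solve the logistic-regression likelihood equations by iteration."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))   # fitted probabilities at current beta
        w = p * (1.0 - p)                # weights at the current estimate
        z = eta + (y - p) / w            # working (adjusted) response
        beta_new = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# Toy usage: an intercept column plus one predictor, overlapping binary outcomes.
X = np.column_stack([np.ones(6), [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]])
y = np.array([0, 0, 1, 0, 1, 1])
print(logistic_mle(X, y))
```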

In the maximum-likelihood method used here, the "true" value of each measured variable is also found in the course of parameter estimation. The differences between these "true" values and the corresponding experimentally measured values are the residuals (also called deviations). When there are many data points, the residuals can be analyzed by standard statistical methods (Draper and Smith, 1966). If, however, there are only a few data points, examination of the residuals for trends, when plotted versus other system variables, may provide valuable information. Often these plots can indicate at a glance excessive experimental error, systematic error, or "lack of fit." Data points which are obviously bad can also be readily detected. If the model is suitable and if there are no systematic errors, such a plot shows the residuals randomly distributed with zero means. This behavior is shown in Figure 3 for the ethyl-acetate-n-propanol data of Murti and Van Winkle (1958), fitted with the van Laar equation. [Pg.105]
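The sketch below illustrates this kind of residual check on placeholder data (not the Murti and Van Winkle measurements): fit a simple model, then plot the residuals against a system variable and look for trends or structure.

```python
# Residual diagnostics: after a fit, residuals should scatter randomly about
# zero when the model is suitable and there are no systematic errors.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 1.0, 20)   # placeholder system variable, e.g. composition
y_measured = 1.5 * x + 0.2 + np.random.normal(0, 0.05, x.size)
y_model = np.polyval(np.polyfit(x, y_measured, 1), x)

residuals = y_measured - y_model
plt.axhline(0.0, color="gray")
plt.scatter(x, residuals)
plt.xlabel("system variable (e.g. composition)")
plt.ylabel("residual")
plt.title("Residuals should scatter randomly about zero")
plt.show()
```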

VLE data are correlated by any one of thirteen equations representing the excess Gibbs energy in the liquid phase. These equations contain from two to five adjustable binary parameters; these are estimated by a nonlinear regression method based on the maximum-likelihood principle (Anderson et al., 1978). [Pg.211]

Parameter Estimation. Weibull parameters can be estimated using the usual statistical procedures; however, a computer is needed to solve the equations readily. A computer program based on the maximum likelihood method is presented in Reference 22. Graphical estimation can be made on Weibull paper without the aid of a computer; however, the results cannot be expected to be as accurate and consistent. [Pg.13]
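As a generic illustration (not the program of Reference 22), SciPy's weibull_min.fit performs this maximum-likelihood estimation; fixing the location at zero leaves the usual two-parameter Weibull with shape and scale.

```python
# Maximum-likelihood fit of a two-parameter Weibull to simulated lifetimes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lifetimes = stats.weibull_min.rvs(c=1.8, scale=100.0, size=200, random_state=rng)

# floc=0 fixes the location, so only shape and scale are ML-estimated.
shape, loc, scale = stats.weibull_min.fit(lifetimes, floc=0)
print(f"ML estimates: shape={shape:.2f}, scale={scale:.1f}")
```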

Table 2.3 classifies the differing systems of equations encountered in chemical reactor applications, together with the normal method of parameter identification. As shown, the optimal values of the system parameters can be estimated using a suitable error criterion, such as the methods of least squares, maximum likelihood, or probability density function. [Pg.112]

If the covariance matrices of the response variables are unknown, the maximum likelihood parameter estimates are obtained by maximizing the log-likelihood function (Equation 2.20) over k and the unknown variances. Following the distributional assumptions of Box and Draper (1965), i.e., assuming that Σ1 = Σ2 = ... = ΣN = Σ, it can be shown that the ML parameter estimates can be obtained by minimizing the determinant (Bard, 1974)... [Pg.19]
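A minimal sketch of this determinant criterion follows, with a made-up two-response model standing in for the real one: the ML estimates of k minimize the determinant of the matrix of summed residual cross-products.

```python
# Box-Draper determinant criterion: minimize det( sum_i e_i e_i^T ),
# where e_i is the residual vector of experiment i.
import numpy as np
from scipy.optimize import minimize

def model(k, x):
    """Placeholder two-response model; replace with the real one."""
    return np.array([k[0] * x, k[1] * x ** 2])

def determinant_objective(k, x_data, y_data):
    residuals = np.array([y - model(k, x) for x, y in zip(x_data, y_data)])
    return np.linalg.det(residuals.T @ residuals)  # det of cross-product matrix

x_data = np.array([1.0, 2.0, 3.0, 4.0])
y_data = np.array([[1.1, 0.9], [2.0, 4.2], [2.9, 9.1], [4.1, 15.8]])
result = minimize(determinant_objective, x0=[1.0, 1.0],
                  args=(x_data, y_data), method="Nelder-Mead")
print(result.x)
```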

The values of the elements of the weighting matrices Ri depend on the type of estimation method being used. When the residuals in the above equations can be assumed to be independent and normally distributed with zero mean and the same constant variance, least squares (LS) estimation should be performed. In this case, the weighting matrices in Equation 14.35 are replaced by the identity matrix I. Maximum likelihood (ML) estimation should be applied when the EoS is capable of calculating the correct phase behavior of the system within the experimental error. Its application requires knowledge of the measurement... [Pg.256]
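The contrast between the two weighting choices can be sketched as below; the residual vector and measurement covariance are illustrative values, not data from the text.

```python
# LS vs. ML weighting of a residual vector from one experiment.
import numpy as np

residuals = np.array([0.02, -0.15, 0.07])   # illustrative residual vector

# Least squares: the weighting matrix is the identity.
ls_contribution = residuals @ np.eye(3) @ residuals

# Maximum likelihood: weight by the inverse of the measurement covariance.
cov = np.diag([0.01 ** 2, 0.1 ** 2, 0.05 ** 2])  # assumed measurement variances
ml_contribution = residuals @ np.linalg.inv(cov) @ residuals

print(ls_contribution, ml_contribution)
```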

When experimental data are to be fit with a mathematical model, it is necessary to allow for the fact that the data have errors. The engineer is interested in finding the parameters in the model as well as the uncertainty in their determination. In the simplest case, the model is a linear equation with only two parameters, and they are found by a least-squares minimization of the errors in fitting the data. Multiple regression is just linear least squares applied with more terms. Nonlinear regression allows the parameters of the model to enter in a nonlinear fashion. See Press et al. (1986) for a description of maximum likelihood as it applies to both linear and nonlinear least squares. [Pg.84]
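A short sketch of nonlinear regression in this spirit, using scipy.optimize.curve_fit on synthetic data; the returned covariance matrix supplies the parameter uncertainties the engineer is after.

```python
# Nonlinear least squares: the parameter b enters the model nonlinearly.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 5, 30)
y = 2.0 * np.exp(-0.7 * x) + np.random.normal(0, 0.02, x.size)  # synthetic data

def nonlinear_model(x, a, b):
    return a * np.exp(-b * x)

params, pcov = curve_fit(nonlinear_model, x, y, p0=[1.0, 1.0])
uncertainties = np.sqrt(np.diag(pcov))   # standard errors of the parameters
print("parameters:", params, "standard errors:", uncertainties)
```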

These considerations raise a question: how can we determine the optimal value of n and the coefficients for i < n in (2.54) and (2.56)? Clearly, if the expansion is truncated too early, some terms that contribute importantly to P0(ΔU) will be lost. On the other hand, terms above some threshold carry no information and, instead, only add statistical noise to the probability distribution. One solution to this problem is to use physical intuition [40]. Perhaps a better approach is one based on the maximum likelihood (ML) method, in which we determine the maximum number of terms supported by the provided information. For the expansion in (2.54), determining the number of Gaussian functions, their mean values, and variances using ML is a standard problem solved in many textbooks on Bayesian inference [43]. For the expansion in (2.56), the ML solution for n and the coefficients also exists. Just like in the case of the multistate Gaussian model, this approach appears to improve the free energy estimates considerably when P0(ΔU) is a broad function. [Pg.65]
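One generic way to realize this ML-based selection is sketched below with scikit-learn (a stand-in for the procedure of [43], not the authors' exact method): fit Gaussian mixtures of increasing size and keep the number of terms favored by a likelihood-based criterion such as BIC.

```python
# Choosing the number of Gaussian terms supported by the data via BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 0.5, 500)])
X = samples.reshape(-1, 1)

best_n, best_bic = None, np.inf
for n in range(1, 6):
    gm = GaussianMixture(n_components=n, random_state=0).fit(X)
    bic = gm.bic(X)          # likelihood-based criterion; lower is better
    if bic < best_bic:
        best_n, best_bic = n, bic
print("number of terms supported by the data:", best_n)
```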

Optimization pervades the fields of science, engineering, and business. In physics, many different optimal principles have been enunciated, describing natural phenomena in the fields of optics and classical mechanics. The field of statistics treats various principles termed maximum likelihood, minimum loss, and least squares, and business makes use of maximum profit, minimum cost, maximum use of resources, and minimum effort in its efforts to increase profits. A typical engineering problem can be posed as follows: a process can be represented by some equations or perhaps solely by experimental data. You have a single performance criterion in mind, such as minimum cost. The goal of optimization is to find the values of the variables in the process that yield the best value of the performance criterion. A trade-off usually exists between capital and operating costs. The described factors, the process or model and the performance criterion, constitute the optimization problem. [Pg.4]

(Note that a scalar behaves as a symmetric matrix.) Because of finite sampling, α and β cannot be evaluated exactly. Instead, we will search for unbiased estimates â and β̂ of α and β, together with unbiased estimates ŷᵢ and x̂ᵢⱼ of yᵢ and xᵢⱼ, that satisfy the linear model given by equation (5.4.37) and minimize the maximum-likelihood expression in x̂ᵢ and ŷᵢ. Introducing m Lagrange multipliers λᵢ, one for each linear... [Pg.295]
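In the same spirit, the sketch below fits a straight line while adjusting both the x and y values for their stated uncertainties, using SciPy's orthogonal distance regression in place of the explicit Lagrange-multiplier construction described above; the data and uncertainties are illustrative.

```python
# Errors-in-variables straight-line fit via orthogonal distance regression.
import numpy as np
from scipy import odr

def linear(beta, x):
    return beta[0] + beta[1] * x   # beta[0]: intercept, beta[1]: slope

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.2, 2.9, 4.1, 4.8, 6.1])
data = odr.RealData(x, y, sx=0.1, sy=0.2)   # uncertainties in both variables
fit = odr.ODR(data, odr.Model(linear), beta0=[1.0, 1.0]).run()
print("intercept, slope:", fit.beta)
```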

The parameter estimation for the mixture model (Equation 5.25) is based on maximum likelihood estimation. The likelihood function L is defined as the product of the densities for the objects, i.e.,... [Pg.227]
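A minimal numeric sketch of this likelihood: with fixed, illustrative component weights, means, and standard deviations, L is the product of the mixture density over the objects, so in practice one sums the log-densities.

```python
# Log-likelihood of data under a fixed two-component Gaussian mixture.
import numpy as np
from scipy.stats import norm

weights = [0.4, 0.6]
means = [0.0, 5.0]
sds = [1.0, 2.0]
x = np.array([0.3, 4.1, 5.5, -0.7])   # the observed objects

density = sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, sds))
log_likelihood = np.sum(np.log(density))   # log of the product of densities
print(log_likelihood)
```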

To use the likelihood ratio method to test the hypothesis, we will require the restricted maximum likelihood estimate. Under the hypothesis, the model is the one in Section 15.2.2. The restricted estimate is given in (15-12) and the equations which follow. To obtain them, we make a small modification in our algorithm above. We replace step (3) with... [Pg.66]

The system of three equations (cost and two shares) can be estimated as discussed in the text. Invariance is achieved by using a maximum likelihood estimator. The five parameters eliminated by the restrictions can be estimated after the others are obtained just by using the restrictions. The restrictions are linear, so the standard errors are also straightforward to obtain. [Pg.70]

Since the first likelihood equation implies that at the maximum, α = n / Σᵢ xᵢ, one approach would be to... [Pg.86]
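A quick numerical check of this closed-form first-order condition (hedged reconstruction of the garbled expression above as α = n / Σᵢ xᵢ):

```python
# The first likelihood equation at the maximum: alpha = n / sum(x).
import numpy as np

x = np.array([0.5, 1.2, 0.8, 2.0, 1.5])
alpha_hat = x.size / x.sum()   # n divided by the sum of the observations
print(alpha_hat)               # 5 / 6.0 = 0.8333...
```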

There are a few different ways one might solve these two equations. A grid search over the values of γ and θ is a possibility. A direct maximum likelihood estimator for the tobit model is the simpler choice if one is available. The model with only a constant term is otherwise the same as the usual model. Using the data... [Pg.112]
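The grid-search option can be sketched as below; loglik is a placeholder objective, not the tobit likelihood itself, and the data are made up.

```python
# Grid search: evaluate the log-likelihood over (gamma, theta) and keep the best.
import numpy as np

def loglik(gamma, theta, data):
    """Placeholder: substitute the model's actual log-likelihood here."""
    return -np.sum((data - gamma) ** 2) / (2 * theta ** 2) - data.size * np.log(theta)

data = np.array([0.0, 0.4, 1.1, 0.0, 2.3])
grid_g = np.linspace(-1, 3, 41)
grid_t = np.linspace(0.1, 3, 30)
best = max(((g, t) for g in grid_g for t in grid_t),
           key=lambda p: loglik(p[0], p[1], data))
print("grid-search estimates:", best)
```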

The MLE approach involves maximising the likelihood function with respect to a, k and σ. The values of a, k and σ which maximise this equation are the maximum likelihood estimates of the model. [Pg.395]
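In practice this maximisation is usually done numerically by minimising the negative log-likelihood. The sketch below uses a generic two-parameter Gaussian log-likelihood as a stand-in for the model's likelihood in a, k and σ.

```python
# Numerical MLE: minimize the negative log-likelihood over the parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

data = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.4])

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf                 # keep the scale parameter positive
    return -np.sum(norm.logpdf(data, mu, sigma))

result = minimize(neg_log_likelihood, x0=[1.0, 0.5], method="Nelder-Mead")
print("ML estimates:", result.x)
```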

The aim of parameter estimation is to adapt the model function to the observations so as to obtain the model parameters which describe the observed data best. In NONMEM this is done by minimization of the extended least squares objective function O_ELS, which provides maximum likelihood estimates under Gaussian conditions [13]. The equation for calculating the O_ELS function is given in the following: [Pg.459]
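The equation itself is not reproduced in this excerpt; the sketch below computes the objective in the form commonly quoted for extended least squares, assuming for illustration a proportional residual-error model so that each observation's variance depends on its prediction.

```python
# Extended least squares objective: sum of (y_i - f_i)^2 / var_i + ln(var_i).
import numpy as np

def o_els(y_obs, y_pred, sigma_prop):
    variance = (sigma_prop * y_pred) ** 2   # assumed proportional-error variance
    return np.sum((y_obs - y_pred) ** 2 / variance + np.log(variance))

y_obs = np.array([10.2, 7.9, 6.3, 4.8])
y_pred = np.array([10.0, 8.1, 6.1, 5.0])
print(o_els(y_obs, y_pred, sigma_prop=0.1))
```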

Both Åström (53) and Box and Jenkins (54) have developed modeling approaches for equation (13), which involve obtaining maximum likelihood estimates of the parameters in the postulated model, followed by diagnostic checking of the sum of the residuals. The Box and Jenkins method also develops a detailed model for the process disturbance. Both of the above references include derivations of the minimum variance control. [Pg.106]

Assuming a log-linear dispersion model as in equation (8), Nair and Pregibon (1988) showed that D^M is the maximum likelihood estimator of the dispersion effect, whereas D is the maximum likelihood estimator under a fully saturated dispersion model with effects for all possible factors. Nair and Pregibon concluded from this result that D^f would be a better statistic to use for initial analyses aimed at identifying possible dispersion effects. [Pg.35]

