Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Likelihood maximum

Another class of methods, such as Maximum Entropy, Maximum Likelihood and Least Squares Estimation, does not attempt to undo damage which is already in the data. The data themselves remain untouched. Instead, information in the data is reconstructed by repeatedly taking revised trial data f(x) (e.g. a spectrum or chromatogram), which are damaged as they would have been measured by the original instrument. This requires that the damaging process which causes the broadening of the measured peaks is known. Thus an estimate ĝ(x) is calculated from a trial spectrum f(x) which is convoluted with a supposedly known point-spread function h(x). The residuals e(x) = g(x) - ĝ(x) are inspected and compared with the noise n(x). Criteria to evaluate these residuals are Maximum Entropy (see Section 40.7.2) and Maximum Likelihood (Section 40.7.1). [Pg.557]

The principle of Maximum Likelihood is that the spectrum f(x) is calculated which has the highest probability to yield the observed spectrum g(x) after convolution with h(x). Therefore, assumptions about the noise n(x) are made. For instance, the noise in each data point i is random and additive with a normal or any other distribution (e.g. Poisson, skewed, exponential) and a standard deviation s_i. In case of a normal distribution the residual e_i = g_i - ĝ_i = g_i - (f*h)_i in each data point should be normally distributed with a standard deviation s_i. The probability that (f*h)_i represents the measurement g_i is then given by the conditional probability density function P(g_i|f): [Pg.557]

Under the assumption that the noise in point i is uncorrelated with the noise in point j, the likelihood that (f*h)_i, for all measurements i, represents the measured set g_1, g_2, ..., g_n is the product of all probabilities: [Pg.557]
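Under these assumptions the likelihood of a trial spectrum can be written down directly. The following is a minimal sketch; the peak, point-spread function, and noise level are illustrative choices, not values from the text:

```python
import numpy as np

def log_likelihood(f, h, g, sigma):
    """Gaussian log-likelihood that trial spectrum f, convolved with the
    point-spread function h, produced the measured spectrum g."""
    g_hat = np.convolve(f, h, mode="same")        # estimate (f*h)_i
    resid = g - g_hat                             # residuals e_i = g_i - (f*h)_i
    # independent normal noise in each point: L = prod_i N(e_i; 0, sigma_i)
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

# toy example: a single Gaussian peak blurred by a known PSF
x = np.linspace(-5.0, 5.0, 101)
f_true = np.exp(-x ** 2)                          # "true" spectrum
h = np.exp(-x ** 2 / 0.1)
h /= h.sum()                                      # normalised point-spread function
g = np.convolve(f_true, h, mode="same")           # noise-free measurement
sigma = np.full_like(g, 0.01)

# the true spectrum is at least as likely as a distorted trial spectrum
assert log_likelihood(f_true, h, g, sigma) >= log_likelihood(0.8 * f_true, h, g, sigma)
```

Maximizing this function over the trial spectrum f (under the constraints discussed below) yields the Maximum Likelihood estimate.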

This likelihood function has to be maximized for the parameters in f. The maximization is to be done under a set of constraints. An important constraint is the knowledge of the peak-shapes. We assume that f is composed of many individual [Pg.557]


For all calculations reported here, binary parameters from VLE data were obtained using the principle of maximum likelihood as discussed in Chapter 6. Binary parameters for partially miscible pairs were obtained from mutual-solubility data alone. [Pg.64]

As indicated in Chapter 6, and discussed in detail by Anderson et al. (1978), optimum parameters, based on the maximum-likelihood principle, are those which minimize the objective function... [Pg.67]

The method used here is based on a general application of the maximum-likelihood principle. A rigorous discussion is given by Bard (1974) on nonlinear-parameter estimation based on the maximum-likelihood principle. The most important feature of this method is that it attempts properly to account for all measurement errors. A discussion of the background of this method and details of its implementation are given by Anderson et al. (1978). [Pg.97]

If this criterion is based on the maximum-likelihood principle, it leads to those parameter values that make the experimental observations appear most likely when taken as a whole. The likelihood function is defined as the joint probability of the observed values of the variables for any set of true values of the variables, model parameters, and error variances. The best estimates of the model parameters and of the true values of the measured variables are those which maximize this likelihood function with a normal distribution assumed for the experimental errors. [Pg.98]
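As a sketch of this idea: for normal experimental errors, maximizing the likelihood is equivalent to minimizing the variance-weighted sum of squared residuals. The exponential model, parameter values, and noise level below are hypothetical, chosen only for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# synthetic data from a hypothetical two-parameter model y = a * exp(b * x)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
a_true, b_true = 2.0, 1.5
sigma = 0.05                                     # assumed error standard deviation
y = a_true * np.exp(b_true * x) + rng.normal(0.0, sigma, x.size)

def neg_log_likelihood(theta):
    a, b = theta
    resid = y - a * np.exp(b * x)
    # for normal errors, maximizing the likelihood is equivalent to
    # minimizing the variance-weighted sum of squared residuals
    return 0.5 * np.sum((resid / sigma) ** 2)

result = minimize(neg_log_likelihood, x0=[1.0, 1.0])
a_hat, b_hat = result.x                          # ML estimates, close to (2.0, 1.5)
```

In the full maximum-likelihood treatment described in the text, the "true" values of all measured variables are estimated jointly with the model parameters; the sketch above treats only y as error-prone for brevity.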

When there is significant random error in all the variables, as in this example, the maximum-likelihood method can lead to better parameter estimates than those obtained by other methods. When Barker's method was used to estimate the van Laar parameters for the acetone-methanol system from these data, it was estimated that A12 = 0.960 and A21 = 0.633, compared with A12 = 0.857 and A21 = 0.681 using the method of maximum likelihood. Barker's method uses only the P-T-x data and assumes that the T and x measurements are error free. [Pg.100]

In the maximum-likelihood method used here, the "true" value of each measured variable is also found in the course of parameter estimation. The differences between these "true" values and the corresponding experimentally measured values are the residuals (also called deviations). When there are many data points, the residuals can be analyzed by standard statistical methods (Draper and Smith, 1966). If, however, there are only a few data points, examination of the residuals for trends, when plotted versus other system variables, may provide valuable information. Often these plots can indicate at a glance excessive experimental error, systematic error, or "lack of fit." Data points which are obviously bad can also be readily detected. If the model is suitable and if there are no systematic errors, such a plot shows the residuals randomly distributed with zero means. This behavior is shown in Figure 3 for the ethyl-acetate-n-propanol data of Murti and Van Winkle (1958), fitted with the van Laar equation. [Pg.105]

The maximum-likelihood method is not limited to phase equilibrium data. It is applicable to any type of data for which a model can be postulated and for which there are known random measurement errors in the variables. P-V-T data, enthalpy data, solid-liquid adsorption data, etc., can all be reduced by this method. The advantages indicated here for vapor-liquid equilibrium data apply also to other data. [Pg.108]

The maximum-likelihood method, like any statistical tool, is useful for correlating and critically examining experimental information. However, it can never be a substitute for that information. While a statistical tool is useful for minimizing the required experimental effort, reliable calculated phase equilibria can only be obtained if at least some pertinent and reliable experimental data are at hand. [Pg.108]

MAXIMUM LIKELIHOOD ESTIMATION OF PARAMETERS FROM VLE DATA... [Pg.278]

MAXIMUM LIKELIHOOD ESTIMATION OF PARAMETERS FROM VLE DATA. CONTROL PARAMETERS WERE SET AS FOLLOWS -... [Pg.284]

The importance of accounting for distinct a priori knowledge becomes more perceptible if noisy data are under restoration. The noise shifts the solution of (1) from the Maximum Likelihood (ML) solution to the so-called Default Model, for which the function of the image constraint becomes more significant. [Pg.117]

Enderlein J, Goodwin P M, Van Orden A, Ambrose W P, Erdmann R and Keller R A 1997 A maximum likelihood estimator to distinguish single molecules by their fluorescence decays Chem. Phys. Lett. 270 464-70 [Pg.2506]

One limitation of clique detection is that it needs to be run repeatedly with different reference conformations, and the run-time scales with the number of conformations per molecule. The maximum likelihood method [Barnum et al. 1996] eliminates the need for a reference conformation, effectively enabling every conformation of every molecule to act as the reference. Despite this, the algorithm scales linearly with the number of conformations per molecule, so enabling a larger number of conformations (up to a few hundred) to be handled. In addition, the method scores each of the possible pharmacophores based upon the extent to which it fits the set of input molecules and an estimate of its rarity. It is not required that every molecule has to be able to match every feature for the pharmacophore to be considered. [Pg.673]

Parameter Estimation. Weibull parameters can be estimated using the usual statistical procedures; however, a computer is needed to solve the equations readily. A computer program based on the maximum likelihood method is presented in Reference 22. Graphical estimation can be made on Weibull paper without the aid of a computer; however, the results cannot be expected to be as accurate and consistent. [Pg.13]
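For instance, with SciPy the two-parameter Weibull maximum-likelihood fit is a one-liner. The sample below is synthetic, and the shape and scale values are illustrative, not taken from the text:

```python
import numpy as np
from scipy.stats import weibull_min

# synthetic failure times from an assumed "true" Weibull distribution
rng = np.random.default_rng(1)
shape_true, scale_true = 1.8, 100.0
failures = weibull_min.rvs(shape_true, scale=scale_true, size=500, random_state=rng)

# maximum-likelihood fit; location fixed at zero for a two-parameter Weibull
shape_hat, loc, scale_hat = weibull_min.fit(failures, floc=0)
# shape_hat and scale_hat recover the generating parameters to within
# sampling error for a sample of this size
```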

We thus get the values of a and b with maximum likelihood, as well as the variances of a and b. Using the value of χ² for this a and b, we can also calculate the goodness of fit, P. In addition, the linear correlation coefficient r is related by... [Pg.502]
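For a straight-line fit y = a + b x with normal errors, the maximum-likelihood values of a and b, their variances, and the χ² goodness-of-fit statistic all have closed forms. A sketch, using exact data chosen purely for illustration:

```python
import numpy as np

def linear_ml_fit(x, y, sigma):
    """ML (weighted least-squares) fit of y = a + b*x with per-point
    standard deviations sigma; returns a, b, their variances, and chi^2."""
    w = 1.0 / sigma ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx ** 2
    a = (Sxx * Sy - Sx * Sxy) / delta
    b = (S * Sxy - Sx * Sy) / delta
    var_a = Sxx / delta                           # variance of intercept
    var_b = S / delta                             # variance of slope
    chi2 = (w * (y - a - b * x) ** 2).sum()       # goodness-of-fit statistic
    return a, b, var_a, var_b, chi2

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])                # exactly y = 1 + 2x
a, b, var_a, var_b, chi2 = linear_ml_fit(x, y, np.full(4, 0.1))
# exact data: a = 1, b = 2, chi2 = 0 (to rounding)
```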

Maximum likelihood methods used in classical statistics are not valid to estimate the θ's or the q's. Bayesian methods have only become possible with the development of Gibbs sampling methods described above, because forming the likelihood for a full data set entails the product of many sums of the form of Eq. (24): [Pg.327]

Sanderson, A. C. 1997. Assemblability Based on Maximum Likelihood Configuration of Tolerances. In Proceedings IEEE International Symposium on Assembly and Task Planning, Marina del Rey, CA, Wiley, 96-102. [Pg.391]

A leading utility in the northeast had requested out-of-compliance information to better schedule outages (plant shutdowns). The time to failure, T, of a bus section was assumed to have a Weibull distribution, the parameters of which were estimated by the method of maximum likelihood on the basis of observed bus section failures shown in Table 21.6.1 for the utility's 5x8... [Pg.626]

Although it sounds reasonable to use the maximum likelihood to define our estimate of the displacement, there are two questions that remain. Firstly, what is the variance of the error associated with this estimate? This defines N, which was used in Eq. 22 to determine the error in the wavefront reconstruction. Secondly, is it possible to do better than the centroid? In other words, is it optimal? [Pg.387]

Figure 7 shows that for the maximum likelihood estimator the variance in the slope estimate decreases as the telescope aperture size increases. For the centroid estimator the variance of the slope estimate also decreases with increasing aperture size when the telescope aperture is less than the Fried parameter, ro (Fried, 1966), but saturates when the aperture size is greater than this value. [Pg.391]

Figure 7. The variance in the slope estimate versus aperture size for the centroid and maximum likelihood estimators for turbulence defined by r0 = 0.25.
Maximum likelihood methods are commonly used to estimate parameters from noisy data. Such methods can be applied to image restoration, possibly with additional constraints (e.g., positivity). Maximum likelihood methods are however not appropriate for solving ill-conditioned inverse problems as will be shown in this section. [Pg.403]

The maximum likelihood (ML) solution is the one which maximizes the probability of the data y given the model among all possible x ... [Pg.404]

In the hope that additional constraints such as positivity (which must hold for the restored brightness distribution) may avoid noise amplification, we can seek the constrained maximum likelihood (CML) solution: [Pg.405]

Image Space Reconstruction Algorithm. ISRA (Daube-Witherspoon and Muehllehner, 1986) is a multiplicative and iterative method which yields the constrained maximum likelihood in the case of Gaussian noise. The ISRA solution is obtained using the recursion: [Pg.407]
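A minimal 1-D sketch of the ISRA recursion x ← x · (Hᵀy)/(HᵀHx) follows; the FFT-based circular convolution and the small clipping threshold are implementation choices for this sketch, not part of the original method:

```python
import numpy as np

def isra(y, h, n_iter=100):
    """ISRA multiplicative update for non-negative ML deconvolution under
    Gaussian noise: x <- x * (H^T y) / (H^T H x), with H = convolution by h."""
    Hf = np.fft.fft(np.fft.ifftshift(h))                       # transfer function
    Hx = lambda v: np.fft.ifft(np.fft.fft(v) * Hf).real        # H v
    Ht = lambda v: np.fft.ifft(np.fft.fft(v) * np.conj(Hf)).real  # H^T v
    x = np.full_like(y, max(y.mean(), 1e-6))                   # positive start
    num = np.clip(Ht(y), 1e-12, None)                          # H^T y, kept > 0
    for _ in range(n_iter):
        x = x * num / np.clip(Ht(Hx(x)), 1e-12, None)          # multiplicative step
    return x                                                   # non-negative by construction

# demo: a blurred box, noise-free for simplicity
n = 64
grid = np.arange(n)
psf = np.exp(-0.5 * ((grid - n // 2) / 2.0) ** 2)
psf /= psf.sum()
x_true = np.zeros(n)
x_true[20:30] = 1.0
data = np.fft.ifft(np.fft.fft(x_true) * np.fft.fft(np.fft.ifftshift(psf))).real
x_rec = isra(data, psf, n_iter=200)
```

Because every update is a product of positive factors, the iterates stay non-negative without any explicit projection, which is the appeal of the multiplicative form.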

Figures 4b and 4c show that neither unconstrained nor non-negative maximum likelihood approaches are able to recover a usable image. Deconvolution by unconstrained/constrained maximum likelihood yields noise amplification; in other words, the maximum likelihood solution remains ill-conditioned (i.e. a small change in the data due to noise can produce arbitrarily large changes in the solution) and regularization is needed.
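This ill-conditioning can be demonstrated in a few lines: for Gaussian noise the unconstrained ML deconvolution amounts to inverse filtering, which divides by near-zero values of the transfer function. The 1-D object, PSF, and noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
x_true = np.zeros(n)
x_true[100:130] = 1.0                                # simple 1-D object
grid = np.arange(n)
h = np.exp(-0.5 * ((grid - n // 2) / 2.0) ** 2)
h /= h.sum()                                         # Gaussian PSF
H = np.fft.fft(np.fft.ifftshift(h))                  # transfer function
y = np.fft.ifft(np.fft.fft(x_true) * H).real         # blurred object
y += rng.normal(0.0, 1e-3, n)                        # small measurement noise

# for Gaussian noise the unconstrained ML solution is inverse filtering
x_ml = np.fft.ifft(np.fft.fft(y) / H).real

# at frequencies where |H| is tiny the noise is amplified enormously:
# x_ml ends up orders of magnitude noisier than either y or x_true
```

Even a noise level a thousand times smaller than the signal is enough to swamp the restored image, which is why the text concludes that explicit regularization is needed.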
When started with a smooth image, iterative maximum likelihood algorithms can achieve some level of regularization by early stopping of the iterations before convergence (see e.g. Lanteri et al., 1999). In this case, the regularized solution is not the maximum likelihood one and it also depends on the initial solution and the number of performed iterations. A better solution is to explicitly account for additional regularization constraints in the penalty criterion. This is explained in the next section. [Pg.408]

We have seen that the maximum likelihood solution ... [Pg.409]


See other pages where Likelihood maximum is mentioned: [Pg.67]    [Pg.79]    [Pg.97]    [Pg.97]    [Pg.241]    [Pg.114]    [Pg.673]    [Pg.62]    [Pg.22]    [Pg.203]    [Pg.227]    [Pg.385]    [Pg.392]    [Pg.392]    [Pg.403]    [Pg.404]    [Pg.405]    [Pg.405]    [Pg.406]    [Pg.408]    [Pg.409]   




SEARCH



Amino acids maximum likelihood

Bayes- and Maximum Likelihood Classifiers

Bayes- and Maximum Likelihood Classifiers for Binary Encoded Patterns

Evolution maximum likelihood

Gaussian distribution maximum-likelihood estimates

Implicit Maximum Likelihood Parameter Estimation

Likelihood

Logistic regression model maximum likelihood estimation

MATLAB maximum likelihood

Maximum Likelihood (ML)

Maximum Likelihood (ML) Estimation

Maximum Likelihood Fits

Maximum Likelihood Parameter and State Estimation

Maximum Likelihood and Least Squares Criteria

Maximum likelihood algorithms

Maximum likelihood classifier

Maximum likelihood equation

Maximum likelihood estimate risk assessment

Maximum likelihood estimates

Maximum likelihood estimation

Maximum likelihood estimation linear model

Maximum likelihood estimation parameter estimates

Maximum likelihood estimation, defined

Maximum likelihood estimation, optimal

Maximum likelihood estimator

Maximum likelihood estimators, table

Maximum likelihood expectation

Maximum likelihood expectation maximization

Maximum likelihood method

Maximum likelihood point

Maximum likelihood principal components

Maximum likelihood principal components analysis

Maximum likelihood principle

Maximum likelihood rectification

Maximum likelihood solution

Maximum likelihood technique

Maximum-Likelihood Parameter Estimates for ARMA Models

Maximum-Likelihood State-Space Estimates

Maximum-likelihood estimation, numerical

Maximum-likelihood inference

Maximum-likelihood method processing data

Maximum-likelihood phase reconstruction

Maximum-likelihood regression

Maximum-likelihood trees

Restricted maximum likelihood

Restricted maximum likelihood (REML)

Restricted maximum likelihood estimate

Statistical distributions maximum likelihood

The Maximum Likelihood Classification

Threshold maximum likelihood

© 2024 chempedia.info