Maximum likelihood algorithms

When started with a smooth image, iterative maximum likelihood algorithms can achieve some level of regularization by stopping the iterations early, before convergence (see e.g. Lanteri et al., 1999). In this case the regularized solution is not the maximum likelihood one, and it also depends on the initial solution and on the number of iterations performed. A better approach is to account explicitly for additional regularization constraints in the penalty criterion. This is explained in the next section. [Pg.408]
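
As a hedged illustration of the behaviour described above, the sketch below implements one well-known iterative maximum likelihood scheme for image restoration, the Richardson-Lucy update for Poisson noise, with the iteration count acting as the implicit regularizer. The function and variable names are illustrative, the point-spread function is assumed known and normalized, and this is a minimal sketch rather than the specific algorithm of Lanteri et al.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=20, start=None):
    """Iterative maximum likelihood restoration (Richardson-Lucy, Poisson noise).

    Stopping after a modest number of iterations acts as an implicit
    regularizer; the result then depends on `start` and on `n_iter`.
    """
    psf_mirror = np.flip(psf)
    # A smooth (here flat) starting image is typical when early stopping is used.
    estimate = np.full(image.shape, float(image.mean())) if start is None else start.astype(float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Stopping the loop after, say, 10-50 iterations typically yields a smoother, partially regularized estimate, whereas running it to convergence tends to amplify noise.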

Prior knowledge and various hypotheses are condensed into models. NONMEM determines the parameter vector, including the fixed and random effects of each model, using a maximum likelihood algorithm. NONMEM uses each model to predict the observed data set and selects the best PPK parameter vector by minimising the deviation between model prediction and observed data. Comparing model fits using the criteria discussed in the section Evaluation should decide which hypothesis is the most likely. As a general rule, the model should be as simple as possible and the number of parameters should be kept to a minimum. [Pg.748]
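
The following is a deliberately simplified, hypothetical illustration of the underlying idea only: it fits the fixed-effect parameters of a one-compartment bolus model for a single subject by minimizing a Gaussian negative log-likelihood. It does not reproduce NONMEM, which fits population (mixed-effects) models with dedicated approximations; all names and data below are invented.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, conc, dose):
    """Gaussian negative log-likelihood of a one-compartment IV bolus model."""
    V, CL, sigma = params          # volume, clearance, residual SD (illustrative)
    pred = (dose / V) * np.exp(-(CL / V) * t)
    resid = conc - pred
    return 0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

# Hypothetical concentration-time data for one subject (dose = 100 units).
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
conc = np.array([9.1, 8.0, 6.3, 3.9, 1.5])
fit = minimize(neg_log_likelihood, x0=[10.0, 1.0, 0.5],
               args=(t, conc, 100.0), method="Nelder-Mead")
V_hat, CL_hat, sigma_hat = fit.x   # maximum likelihood estimates
```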

Successful and economical applications of maximum likelihood classifiers have been reported for binary encoded infrared spectra [356] and nuclear magnetic resonance spectra [357]. An extensive examination of a maximum likelihood algorithm for the interpretation of mass spectra was made by Franzen et al. [86, 87, 108] and others [200, 248]. Approximation of the probability density by a mathematical function has so far attracted little interest [317]. [Pg.87]
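
One simple way to realize a maximum likelihood classifier for binary encoded spectra is to estimate, per class, the probability that each bit is set and then assign an unknown spectrum to the class under which it is most likely. The independence-of-bits assumption and all names below are illustrative choices, not necessarily those of the cited works; this is only a sketch of the general approach.

```python
import numpy as np

def fit_bit_probabilities(X, y, n_classes, alpha=1.0):
    """Estimate P(bit_j = 1 | class c) from binary encoded spectra.

    X: (n_samples, n_bits) array of 0/1 values; alpha is Laplace smoothing
    so that no estimated probability is exactly 0 or 1.
    """
    probs = np.empty((n_classes, X.shape[1]))
    for c in range(n_classes):
        Xc = X[y == c]
        probs[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2.0 * alpha)
    return probs

def classify(x, probs):
    """Assign the class whose bit-probability model makes spectrum x most likely."""
    log_like = (x * np.log(probs) + (1 - x) * np.log(1 - probs)).sum(axis=1)
    return int(np.argmax(log_like))
```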

Other methods are available for deconvolution. One of the most successful of these is maximum likelihood deconvolution. Although a full description of this subject is outside the scope of this text, maximum likelihood can give results that are generally superior to Fourier self-deconvolution. The maximum likelihood algorithm works by searching a response surface of all selected variables until it finds the most likely deconvolution based on the information content of the data. [Pg.262]
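
As a toy, heavily simplified illustration of "searching a response surface for the most likely deconvolution", the sketch below searches a single variable, a hypothetical Lorentzian broadening half-width, for the value that maximizes a Gaussian likelihood of the observed spectrum. A practical maximum likelihood deconvolution searches over many more variables; the names and the one-parameter model are assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.signal import fftconvolve

def most_likely_width(observed, sharp, x, noise_sigma=1.0):
    """Find the Lorentzian half-width that makes the broadened model most likely.

    observed, sharp: spectra on the grid x; x is assumed symmetric about 0,
    e.g. np.linspace(-10, 10, len(sharp)).
    """
    def neg_log_like(gamma):
        kernel = (gamma / np.pi) / (x**2 + gamma**2)
        kernel = kernel / kernel.sum()
        model = fftconvolve(sharp, kernel, mode="same")
        return 0.5 * np.sum((observed - model) ** 2) / noise_sigma**2

    result = minimize_scalar(neg_log_like, bounds=(1e-3, 5.0), method="bounded")
    return result.x
```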

L(i, o) is the likelihood function of the object, Q(o) is a smoothing function, and γ is a regularization parameter. The likelihood function depends on the noise model (Poisson or Gaussian); it measures how well the estimate fits the acquired data. The smoothing function, also called the penalty function, attenuates the artifacts generated by noise amplification. γ balances the need to fit the data against the need to stabilize the estimate. Setting γ to 0 yields the maximum likelihood algorithm. [Pg.229]
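
The sketch below writes the penalized criterion described above as code, J(o) = L(i, o) + γ Q(o), with the data-fit term chosen according to the noise model and a simple squared-difference roughness penalty standing in for Q(o). The particular penalty, the 1-D layout, and all names are illustrative assumptions.

```python
import numpy as np

def penalized_criterion(estimate, data, forward, gamma, noise="poisson"):
    """J(o) = L(i, o) + gamma * Q(o) for a 1-D object estimate o.

    `forward` maps the object estimate to model data (e.g. convolution with a PSF).
    Setting gamma to 0 recovers the plain maximum likelihood criterion.
    """
    model = forward(estimate)
    if noise == "poisson":
        # Negative Poisson log-likelihood (up to constants).
        data_fit = np.sum(model - data * np.log(np.maximum(model, 1e-12)))
    else:
        # Negative Gaussian log-likelihood (unit variance, up to constants).
        data_fit = 0.5 * np.sum((data - model) ** 2)
    roughness = np.sum(np.diff(estimate) ** 2)   # a simple smoothing penalty Q(o)
    return data_fit + gamma * roughness
```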

The ambient, shaker, and drop weight data from scenario 8 of the progressive damage test have been employed as benchmark data for system identification methods for operational modal analysis. Peeters and Ventura (2003) compare the modal parameter estimates obtained by seven different research teams in the framework of this benchmark. In addition, new modal parameter estimation techniques have been validated on the benchmark data. The best reported result was obtained by applying a subspace identification algorithm (Reynders and De Roeck 2008) and a maximum likelihood algorithm... [Pg.3874]

One limitation of clique detection is that it needs to be run repeatedly with different reference conformations, and the run-time scales with the number of conformations per molecule. The maximum likelihood method [Barnum et al. 1996] eliminates the need for a reference conformation, effectively enabling every conformation of every molecule to act as the reference. Despite this, the algorithm scales linearly with the number of conformations per molecule, enabling a larger number of conformations (up to a few hundred) to be handled. In addition, the method scores each of the possible pharmacophores based upon the extent to which it fits the set of input molecules and an estimate of its rarity. It is not required that every molecule match every feature for the pharmacophore to be considered. [Pg.673]

Image Space Reconstruction Algorithm. ISRA (Daube-Witherspoon and Muehllehner, 1986) is a multiplicative, iterative method which yields the constrained maximum likelihood solution in the case of Gaussian noise. The ISRA solution is obtained using the recursion ... [Pg.407]
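
The recursion is truncated in the excerpt above; the sketch below implements the ISRA multiplicative update as it is commonly stated in the literature, o_{k+1} = o_k · (Hᵀ i) / (Hᵀ H o_k), using convolution with a point-spread function as the linear model H. The function names and the flat starting image are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def isra(data, psf, n_iter=50):
    """ISRA: multiplicative update o_{k+1} = o_k * (H^T i) / (H^T H o_k).

    A strictly positive starting image keeps every iterate positive, giving a
    constrained (non-negative) maximum likelihood estimate for Gaussian noise.
    """
    psf_mirror = np.flip(psf)
    numerator = fftconvolve(data, psf_mirror, mode="same")        # H^T i
    estimate = np.full(data.shape, max(float(data.mean()), 1e-6))
    for _ in range(n_iter):
        denom = fftconvolve(fftconvolve(estimate, psf, mode="same"),
                            psf_mirror, mode="same")              # H^T H o_k
        estimate = estimate * numerator / np.maximum(denom, 1e-12)
    return estimate
```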

Lanteri, H., Roche, M., and Aime, C., 2002, Penalized maximum likelihood image restoration with positivity constraints: multiplicative algorithms, Inverse Problems 18, 1397

Thiebaut, E. and Conan, J.-M., 1995, Strict a priori constraints for maximum likelihood blind deconvolution, JOSA A 12, 485
Thiebaut, E., 2002, Optimization issues in blind deconvolution algorithms, SPIE 4847, 174

Note that there is a strong similarity to LDA (Section 5.2.1), because it can be shown that for LDA, too, the log-ratio of the posterior probabilities is modeled by a linear function of the x-variables. For LR, however, no assumption is made about the data distribution, and the parameters are estimated differently. The estimation of the coefficients b0, b1, ..., bm is done by the maximum likelihood method, which leads to an iteratively reweighted least squares (IRLS) algorithm (Hastie et al. 2001). [Pg.222]
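
A minimal sketch of the IRLS iteration for logistic regression follows; it fits the coefficients b0, b1, ..., bm by maximum likelihood via repeated weighted least squares solves. Variable names are illustrative and no safeguards for separation or rank deficiency are included.

```python
import numpy as np

def logistic_regression_irls(X, y, n_iter=25, tol=1e-8):
    """Maximum likelihood fit of logistic regression coefficients by IRLS.

    X: (n, m) matrix of x-variables; y: 0/1 responses. An intercept is added.
    Returns the coefficient vector (b0, b1, ..., bm).
    """
    Xb = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        eta = Xb @ beta
        p = 1.0 / (1.0 + np.exp(-eta))          # fitted probabilities
        w = np.maximum(p * (1.0 - p), 1e-10)    # weights of the reweighted LS step
        z = eta + (y - p) / w                   # working response
        # Weighted least squares: solve (X^T W X) beta = X^T W z
        beta_new = np.linalg.solve(Xb.T @ (Xb * w[:, None]), Xb.T @ (w * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```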

To use the likelihood ratio method to test the hypothesis, we will require the restricted maximum likelihood estimate. Under the hypothesis, the model is the one in Section 15.2.2. The restricted estimate is given in (15-12) and the equations which follow. To obtain them, we make a small modification in our algorithm above: we replace step (3) with... [Pg.66]
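
Only the comparison step is shown below; how the restricted estimate itself is obtained depends on the algorithm and equations referred to in the excerpt, which are not reproduced here. This is a minimal sketch of the likelihood ratio statistic, assuming both log-likelihoods have already been maximized.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_unrestricted, loglik_restricted, n_restrictions):
    """LR = 2 (ln L_U - ln L_R), referred to a chi-squared distribution whose
    degrees of freedom equal the number of restrictions imposed by the hypothesis."""
    lr = 2.0 * (loglik_unrestricted - loglik_restricted)
    p_value = chi2.sf(lr, df=n_restrictions)
    return lr, p_value
```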

Other methods, such as genetic algorithms based on evolution principles, the maximum entropy method based on Bayesian theory, and maximum likelihood methods, have also been developed. ... [Pg.6434]

