Big Chemical Encyclopedia


Maximum-likelihood inference

Adachi, J., and Hasegawa, M. (1992). MOLPHY: programs for molecular phylogenetics, I. PROTML: maximum likelihood inference of protein phylogeny. Computer Science Monograph no. 27, Institute of Statistical Mathematics, Tokyo. [Pg.133]

Lynn, H.S., Maximum likelihood inference for left-censored HIV RNA data. Stat Med 20, 33-45 (2001). [Pg.261]

Fit of data to a hypothesis invokes methods of maximum-likelihood inference. Indeed, Popper's philosophy of science has recently been appealed to in support of likelihood approaches to phylogeny reconstruction (de Queiroz and Poe, 2001), but this is possible only if Popper is taken out of context (Kluge, 2001a). [Pg.84]

Goldman, N., Maximum likelihood inference of phylogenetic trees, with special reference to a Poisson process model of DNA substitution and to parsimony analyses, Syst. Zool., 39, 345-361, 1990. [Pg.189]

Kishino, H., Miyata, T., and Hasegawa, M. (1990) Maximum likelihood inference of protein phylogeny and the origin of chloroplasts. Journal of Molecular Evolution, 31, 151-160. [Pg.137]

These considerations raise a question: how can we determine the optimal value of n and the coefficients in (2.54) and (2.56)? Clearly, if the expansion is truncated too early, some terms that contribute importantly to P0(ΔU) will be lost. On the other hand, terms above some threshold carry no information and, instead, only add statistical noise to the probability distribution. One solution to this problem is to use physical intuition [40]. Perhaps a better approach is one based on the maximum likelihood (ML) method, in which we determine the maximum number of terms supported by the provided information. For the expansion in (2.54), calculating the number of Gaussian functions, their mean values and variances using ML is a standard problem solved in many textbooks on Bayesian inference [43]. For the expansion in (2.56), the ML solution for n and the coefficients also exists. Just like in the case of the multistate Gaussian model, this approach appears to improve the free energy estimates considerably when P0(ΔU) is a broad function. [Pg.65]
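The core ML step described in this excerpt — estimating the mean and variance of a Gaussian from samples and scoring candidate models by their likelihood — can be sketched as follows. This is a minimal illustration on synthetic data; the sample size, true parameters, and the use of AIC as the model-selection penalty are assumptions for the example, not details from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "energy difference" samples drawn from a single Gaussian.
du = rng.normal(loc=2.0, scale=1.5, size=5000)

def gaussian_loglik(x, mu, sigma):
    """Log-likelihood of i.i.d. samples under N(mu, sigma^2)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# ML estimates for a single Gaussian have a closed form:
mu_hat = du.mean()
sigma_hat = du.std()      # ML estimator (divides by N, slightly biased)

ll = gaussian_loglik(du, mu_hat, sigma_hat)

# An information criterion such as AIC penalizes extra parameters;
# comparing it across candidate numbers of components is one simple
# proxy for "the maximum number of terms supported by the data".
aic = 2 * 2 - 2 * ll      # 2 free parameters here (mu, sigma)
```

For a multi-component expansion, the same likelihood would be evaluated for mixtures with increasing numbers of Gaussians, and the model whose penalized likelihood stops improving would be retained.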

In addition to the three major methods mentioned, several other computational approaches can also be used to deal with population stratification. For example, ADMIXMAP (22-26) is a model-based method that estimates the individual history of admixture. It can be applied to an admixed population with two or more ancestral populations, and it also tests the association of a trait with ancestry at a marker locus while controlling for population structure. Wu et al. developed a software package in R (PSMIX) for the inference of population stratification and admixture (27). PSMIX is based on the maximum likelihood method; it performs as well as model-based methods such as STRUCTURE and is more computationally efficient. [Pg.39]

Wu, B., Liu, N., and Zhao, H. (2006) PSMIX: an R package for population structure inference via maximum likelihood method. BMC Bioinformatics 7, 317. Available at http://bioinformatics.med.yale.edu/PSMIX/. [Pg.40]

Arisue, N., Hasegawa, M., and Hashimoto, T. (2005) Root of the Eukaryota tree as inferred from combined maximum likelihood analyses of multiple molecular sequence data. Mol Biol Evol 22, 409-420. [Pg.279]

Fig. 5.2. Phylogeny of monopisthocotylean Monogenea based on SSU rDNA. The tree topology is from a Bayesian analysis, with nodal support indicated, from top to bottom, for maximum likelihood (bootstrap %, n = 100), maximum parsimony (bootstrap %, n = 1000) and Bayesian inference (posterior probabilities). Figure from Matejusova et al. (2003).
There are often data sets used to estimate distributions of model inputs for which a portion of data are missing because attempts at measurement were below the detection limit of the measurement instrument. These data sets are said to be censored. Commonly used methods for dealing with such data sets are statistically biased. One example is replacing non-detected values with one half of the detection limit. Such methods cause biased estimates of the mean and do not provide insight regarding the population distribution from which the measured data are a sample. Statistical methods can be used to make inferences regarding both the observed and unobserved (censored) portions of an empirical data set. For example, maximum likelihood estimation can be used to fit parametric distributions to censored data sets, including the portion of the distribution that is below one or more detection limits. Asymptotically unbiased estimates of statistics, such as the mean, can be estimated based upon the fitted distribution. Bootstrap simulation can be used to estimate uncertainty in the statistics of the fitted distribution (e.g. Zhao and Frey, 2004). Imputation methods, such as... [Pg.50]
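The censored-data fit described above can be sketched in a few lines: detected values contribute their density to the likelihood, while non-detects contribute the probability mass below the detection limit. This is a minimal illustration with synthetic data; the normal distribution, detection limit, and sample size are assumptions chosen for the example, not taken from the cited works.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
true_mu, true_sigma, dl = 1.0, 0.5, 0.8   # dl = detection limit
x = rng.normal(true_mu, true_sigma, size=2000)

observed = x[x >= dl]          # detected measurements
n_cens = np.sum(x < dl)        # non-detects: only their count is known

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # reparameterize to keep sigma positive
    ll_obs = stats.norm.logpdf(observed, mu, sigma).sum()
    # Each non-detect contributes log P(X < dl) to the likelihood:
    ll_cens = n_cens * stats.norm.logcdf(dl, mu, sigma)
    return -(ll_obs + ll_cens)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

Unlike substituting half the detection limit for every non-detect, this estimator recovers the full population distribution, so statistics such as the mean come from the fitted curve rather than from artificially imputed values.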

Fig. 1 Maximum-likelihood phytogeny (fastD-NAml) of 17 Phaeocystis species/strains and other prymnesiophytes inferred from 18S rDNA. The class Pavlovophyceae was used as outgroup. Bootstrap values are placed on the nodes that are identical from ML/NJ/MP analyses. The scale bar corresponds to two base changes per 100 nucleotides. Redrawn from Lange et al. (2002)... Fig. 1 Maximum-likelihood phytogeny (fastD-NAml) of 17 Phaeocystis species/strains and other prymnesiophytes inferred from 18S rDNA. The class Pavlovophyceae was used as outgroup. Bootstrap values are placed on the nodes that are identical from ML/NJ/MP analyses. The scale bar corresponds to two base changes per 100 nucleotides. Redrawn from Lange et al. (2002)...
In order to avoid the disadvantages, seen or inferred, of the simple addition of q1 values, various analysts have either calculated or assumed a distribution (for each tumor type) representing the likelihood for the plausible range of estimates of the linear term (q1) of the multistage model, and then used a Monte Carlo procedure to add the distributions rather than merely adding specific points on the distributions, such as the maximum likelihood estimate (MLE) or the 95% confidence limit. A combined potency estimate (q1 for all sites) is then obtained as the 95% confidence limit on the summed distribution. This resembles the approach for multiple carcinogens by Kodell et al. (1995) noted above. [Pg.719]
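The distinction drawn above — adding distributions by Monte Carlo versus adding fixed upper-bound points — can be sketched as follows. The lognormal shapes, parameters, and sample count are illustrative assumptions, not values from the cited analyses.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
# Hypothetical per-tumor-site uncertainty distributions for the
# linear potency term (lognormal is a common illustrative choice).
q_site1 = rng.lognormal(mean=-1.0, sigma=0.5, size=n)
q_site2 = rng.lognormal(mean=-1.5, sigma=0.7, size=n)

# Monte Carlo addition: sum sample-by-sample, then take the
# 95th percentile of the summed distribution.
q_total = q_site1 + q_site2
ub_combined = np.percentile(q_total, 95)

# Naive alternative: add the per-site 95th percentiles. This
# overstates the bound, because independent sites rarely sit at
# their upper tails simultaneously.
ub_naive = np.percentile(q_site1, 95) + np.percentile(q_site2, 95)
```

The gap between `ub_naive` and `ub_combined` is exactly the conservatism the Monte Carlo summation is meant to avoid.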

There are several concepts for the description of phase in quantum theory at present. Some of them accent the theoretical aspects, others the experimental ones. Quantization based on the correspondence principle leads to the formulation of operational quantum phase concepts. For example, the well-known operational approach formulated by Noh et al. [63,64] is motivated by the correspondence principle in classical wave theory. Further generalization may be given in the framework of quantum estimation theory. The prediction may be improved using maximum-likelihood estimation. The optimization of phase inference will be pursued in the following. [Pg.528]

Maximum likelihood was first presented by R.A. Fisher (1921) (when he was 22 years old!) and is the backbone of statistical estimation. The object of maximum likelihood is to make inferences about the parameters θ of a distribution given a set of observed data. Maximum likelihood is an estimation procedure that finds an estimate of θ (an estimator called θ̂) such that the likelihood of actually observing the data is maximal. The Likelihood Principle holds that all the information contained in the data can be summarized by a likelihood function. The standard approach (when a closed-form solution can be obtained) is to derive the likelihood function, differentiate it with respect to the model parameters, set the resulting equations equal to zero, and then solve for the model parameters. Often, however, a closed-form solution cannot be obtained, in which case numerical optimization is done to find the set of parameter values that maximize the likelihood (hence the name). [Pg.351]
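Both routes described above — the closed-form solution from setting the derivative to zero, and numerical optimization when no closed form exists — can be demonstrated on the exponential distribution, where they must agree. The sample size and true rate are illustrative assumptions.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=5000)   # true rate lam = 0.5

# Log-likelihood of an exponential with rate lam:
#   l(lam) = n*log(lam) - lam * sum(x)
# Setting dl/dlam = n/lam - sum(x) = 0 gives the closed form
# lam_hat = n / sum(x) = 1 / mean(x).
lam_closed = 1.0 / x.mean()

# The same estimate obtained by numerically maximizing the likelihood
# (minimizing its negative):
neg_ll = lambda lam: -(len(x) * np.log(lam) - lam * x.sum())
res = optimize.minimize_scalar(neg_ll, bounds=(1e-6, 10.0),
                               method="bounded")
lam_numeric = res.x
```

The two estimates coincide to optimizer precision, which is the point: numerical maximization is a fallback that targets the same quantity the calculus-based derivation delivers in closed form.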

This is the primary means of obtaining information about the canopy source distribution of a scalar from atmospheric concentration measurements. A formal discrete solution is found by matrix inversion of Eq. (17), choosing the number of source layers (m) to be equal to the number of concentration measurements (n) so that Dij is a square matrix. However, this solution provides no redundancy in concentration information, and therefore no possibility for smoothing measurement errors in the concentration profile, which can cause large errors in the inferred source profile. A simple means of overcoming this problem is to include redundant concentration information, and then find the sources φj which produce the best fit to the measured concentrations ci by maximum-likelihood estimation. By minimizing the squared error between measured values and concentrations predicted by Eq. (17), φj is found (Raupach, 1989b) to be the solution of m linear equations... [Pg.50]
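The overdetermined inversion described above can be sketched with a least-squares solve, which is the Gaussian maximum-likelihood estimate when measurement errors are independent with equal variance. The dispersion matrix, layer count, and noise level below are invented for illustration, not taken from Eq. (17) of the cited work.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 8                   # 3 source layers, 8 measurement heights
# Hypothetical dispersion matrix relating layer sources to
# concentrations (assumed known, as in the inverse problem).
D = rng.uniform(0.5, 2.0, size=(n, m))
phi_true = np.array([1.0, 0.5, 2.0])   # true layer source strengths

# Noisy measured concentration profile; n > m gives the redundancy
# needed to smooth measurement errors.
c = D @ phi_true + rng.normal(0.0, 0.02, size=n)

# Least squares minimizes ||D phi - c||^2 and solves the m normal
# equations (D^T D) phi = D^T c in one call.
phi_hat, *_ = np.linalg.lstsq(D, c, rcond=None)
```

With n = m the same call reduces to an exact inversion and any measurement error propagates directly into the source profile; the extra heights are what make the smoothing possible.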

In the previous section, we discussed various modes of inference about the parameters in the models that we specify for the data. We now wish to discuss the actual numerical computations necessary to obtain the parameter estimates. It should be noted that while maximum likelihood and Bayesian inference are modes of estimation, the choice of the numerical algorithm is a separate decision. [Pg.191]

In high-dimensional data, analytic estimation is frequently infeasible or extremely inefficient; as such, Bayesian methods provide an extremely effective alternative to maximum-likelihood estimation. The Bayesian approach to modeling and inference entails initially specifying a sampling probability model and a prior distribution for the unknown parameters. The prior distribution for the parameters reflects our knowledge about the parameters before the data are observed. Next, the posterior distribution is calculated. This distribution includes what we learn about the unknown parameters once we have seen the data. Additionally, it is employed to predict future observations. [Pg.243]
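The prior-to-posterior workflow described above can be sketched with the simplest conjugate case, a Beta prior on a Bernoulli success probability; the prior parameters and data counts are illustrative assumptions.

```python
from scipy import stats

# Sampling model: Bernoulli trials with unknown success probability p.
# Prior: Beta(2, 2), a mild belief that p is near 0.5.
a0, b0 = 2.0, 2.0

# Observed data: 30 successes in 100 trials.
k, n = 30, 100

# Conjugacy makes the posterior another Beta distribution, updated
# by the observed counts.
a_post, b_post = a0 + k, b0 + (n - k)
posterior = stats.beta(a_post, b_post)

post_mean = posterior.mean()            # (a0 + k) / (a0 + b0 + n)
ci = posterior.ppf([0.025, 0.975])      # 95% credible interval

# Under the Bernoulli model, the posterior predictive probability
# that the next trial succeeds equals the posterior mean.
p_next = post_mean
```

The same two-step structure (specify prior and sampling model, compute the posterior, predict from it) carries over to high-dimensional models, where the posterior is obtained by sampling rather than in closed form.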

Brooks, J., M.J. van der Laan, D.E. Singer, and A.S. Go. Targeted maximum likelihood estimation of causal effects in right-censored survival data with time-dependent covariates: warfarin, stroke, and death in atrial fibrillation. J Causal Infer, 1, 235-254, 2013. [Pg.190]

Gruber, S. and M.J. van der Laan. An application of collaborative targeted maximum likelihood estimation in causal inference and genomics. Int J Biostat, 6(1), Article 18, 2010. [Pg.190]

