Big Chemical Encyclopedia

Maximum likelihood technique

Once the background is subtracted, the component of the spectrum due to the annihilation of ortho-positronium is usually visible (see Figure 6.5(a), curve (ii) and the fitted line (iv)). The analysis of the spectrum can now proceed, and a number of different methods have been applied to derive annihilation rates and the amplitudes of the various components. One method, introduced by Orth, Falk and Jones (1968), applies a maximum-likelihood technique to fit a double exponential function to the free-positron and ortho-positronium components (where applicable). Alternatively, the fits to the components can be made individually, if their decay rates are sufficiently well separated, by fitting to the longest component (usually ortho-positronium) first and then subtracting this from the... [Pg.275]
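
The double-exponential maximum-likelihood fit described above can be sketched as follows. This is a minimal illustration, not the Orth, Falk and Jones procedure itself: the rates, amplitudes, and sample size are invented, and the fit simply minimizes the negative log-likelihood of a two-component exponential mixture.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a two-component lifetime spectrum: a fast "free positron"
# component and a slower "ortho-positronium" component (rates in ns^-1;
# all numbers are invented for illustration).
lam_fast, lam_slow, frac_fast = 2.5, 0.35, 0.7
n = 20000
is_fast = rng.random(n) < frac_fast
t = np.where(is_fast,
             rng.exponential(1 / lam_fast, n),
             rng.exponential(1 / lam_slow, n))

def neg_log_like(params):
    lf, ls, f = params
    pdf = f * lf * np.exp(-lf * t) + (1 - f) * ls * np.exp(-ls * t)
    return -np.sum(np.log(pdf))

res = minimize(neg_log_like, x0=[1.0, 0.1, 0.5],
               bounds=[(1e-3, None), (1e-3, None), (1e-3, 1 - 1e-3)])
lf_hat, ls_hat, f_hat = res.x
```

With well-separated rates, the fitted values recover the simulated decay constants and the fraction of the fast component.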

Verdonck FAM, Jaworska J, Thas O, Vanrolleghem PA. 2001. Determining environmental standards using bootstrapping, Bayesian and maximum likelihood techniques: a comparative study. Anal Chim Acta 446:429-438. [Pg.366]

There seem to be three problems here. First, maximum likelihood techniques are rarely applied to morphological data because (unlike molecular data) there is little empirical justification for assuming particular patterns or models of evolution; the arguments over bone fusion or fragmentation in the skulls of crossopterygian fishes, for instance, were basically sterile. Second, likelihood estimates of the fossil record only measure how well we have sampled the available... [Pg.173]

In the first instance, when the results were analyzed by simple mean and standard deviation analysis, Amico et al. [16-18] obtained large relative standard deviations, indicating the limitation of this method for the proper characterization of the diameter. They then used Weibull probability density and cumulative distribution functions [20,56,58] to estimate two parameters, the characteristic life and a dimensionless positive pure number, which were supposed to determine the shape and scale of the distribution curve. For this, they adopted two methods: the maximum likelihood technique, which requires the solution of two nonlinear equations, and the analytical method using the probability plot, as mentioned earlier for coir fibers. [Pg.229]
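
A two-parameter Weibull maximum-likelihood fit of the kind described can be sketched with SciPy, which solves the coupled nonlinear likelihood equations numerically. The fiber-diameter data here are simulated, and the shape and scale values are invented for illustration; this is not the analysis of Amico et al.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Simulated "fiber diameter" data from a two-parameter Weibull
# (shape and scale values are hypothetical).
true_shape, true_scale = 2.0, 10.0
diam = weibull_min.rvs(true_shape, scale=true_scale, size=5000,
                       random_state=rng)

# Maximum-likelihood fit; fixing the location at zero leaves the usual
# two-parameter Weibull, whose MLE requires solving two nonlinear equations.
shape_hat, loc, scale_hat = weibull_min.fit(diam, floc=0)
```

The fitted shape and scale parameters should closely match the values used to generate the sample.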

Estimates for the parameters aj and Aji are determined by using the maximum likelihood technique, i.e., by minimizing the negative logarithm of the likelihood function L. [Pg.53]

Lanteri, H., Roche, M., Cuevas, O., Aime, C., 2001, A general method to devise maximum-likelihood signal restoration multiplicative algorithms with nonnegativity constraints, Signal Processing 81, 945.
Lucy, L.B., 1974, An iterative technique for the rectification of observed distributions, AJ 79, 745... [Pg.421]

While it is perfectly permissible to estimate a and b on this basis, the calculation can only be done in an iterative fashion: both a and b are varied in increasingly smaller steps (see Optimization Techniques, Section 3.5), and each time the squared residuals are calculated and summed. The combination of a and b that yields the smallest of such sums represents the solution. Despite digital computers, Adcock's solution, a special case of the maximum likelihood method, is not widely used: the additional computational effort and the more complicated software are not justified by the improved (a debatable notion) results, and the process is not at all transparent, i.e., not amenable to manual verification. [Pg.96]
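
The iterative shrinking-step search described above can be sketched as follows. This is a toy illustration with invented data; in the spirit of Adcock's solution it minimizes the sum of squared perpendicular (orthogonal) residuals, varying a and b in steps that are halved whenever no move improves the fit.

```python
import numpy as np

# Invented straight-line data y = a + b*x with a little noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 + 0.8 * x + rng.normal(0.0, 0.2, x.size)

def ssq_perp(a, b):
    # Sum of squared perpendicular residuals (orthogonal regression).
    return np.sum((y - a - b * x) ** 2) / (1.0 + b * b)

# Vary a and b in steps; halve the step whenever no move improves the sum.
a, b, step = 0.0, 0.0, 1.0
while step > 1e-6:
    improved = False
    for da, db in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
        if ssq_perp(a + da, b + db) < ssq_perp(a, b):
            a, b = a + da, b + db
            improved = True
    if not improved:
        step /= 2.0
```

The search converges to the intercept and slope used to generate the data, at the cost of many more function evaluations than the closed-form ordinary least-squares fit.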

As discussed before, in the conventional data reconciliation approach, auxiliary gross error detection techniques are required to remove any gross error before applying reconciliation techniques. Furthermore, the reconciled states are only the maximum likelihood states of the plant if feasible plant states are equally likely. That is, P(x) = 1 if the constraints are satisfied and P(x) = 0 otherwise. This is the so-called binary assumption (Johnston and Kramer, 1995) or flat distribution. [Pg.219]

To alleviate these assumptions, the maximum likelihood rectification (MLR) technique was proposed by Johnston and Kramer. This approach incorporates the prior distribution of the plant states, P(x), into the data reconciliation process to obtain the most likely rectified states given the measurements. Mathematically the problem can be stated (Johnston and Kramer, 1995) as... [Pg.219]
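
The idea of weighting a prior P(x) against the measurements can be sketched in one dimension. This is a toy illustration with invented numbers, not the Johnston and Kramer formulation: a Gaussian prior on a single plant state is combined with a Gaussian measurement model, and the posterior is maximized numerically, which reduces to a precision-weighted average.

```python
from scipy.optimize import minimize

# Hypothetical numbers: prior knowledge of one plant state x and one
# noisy measurement of it.
x_prior, s_prior = 10.0, 2.0    # prior mean and standard deviation
y_meas, s_meas = 12.0, 1.0      # measurement and its standard deviation

def neg_log_posterior(v):
    xv = v[0]
    return ((xv - x_prior) ** 2 / (2 * s_prior ** 2)
            + (xv - y_meas) ** 2 / (2 * s_meas ** 2))

x_map = minimize(neg_log_posterior, x0=[x_prior]).x[0]

# Closed form for comparison: the precision-weighted average of prior
# mean and measurement.
x_check = ((x_prior / s_prior ** 2 + y_meas / s_meas ** 2)
           / (1 / s_prior ** 2 + 1 / s_meas ** 2))
```

The numerically rectified state agrees with the closed-form precision-weighted average, lying between the prior mean and the measurement.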

The most commonly used techniques for estimating trees for sequences may be grouped into three categories: (1) distance methods, (2) maximum parsimony, and (3) maximum-likelihood-based methods. There are other methods, but they are not widely used. Further, each of these categories covers many variations and even distinct methods with different properties and assumptions. These methods have often been divided in different ways (different from the three categories here), such as cladistic versus phenetic, character-based versus non-character-based, method-based versus criterion-based, and others. These divisions may merely reflect particular prejudices of the person making them and can be artificial. [Pg.121]

Mathematically, all this information is used to calculate the best fit of the model to the experimental data. Two techniques are currently used: least squares and maximum likelihood. Least-squares refinement is the same mathematical approach that is used to fit the best line through a number of points, so that the sum of the squares of the deviations from the line is at a minimum. Maximum likelihood is a more general approach and is now the more commonly used of the two. This method is based on the probability function that a certain model is correct for a given set of observations. This is done for each reflection, and the probabilities are then combined into a joint probability for the entire set of reflections. Both approaches are iterated over a number of cycles until the changes in the parameters become small. The refinement has then converged to a final set of parameters. [Pg.465]

In this article, we will not focus on recent developments in refinement techniques, which benefited recently from better statistical treatments such as maximum likelihood targets for refinement (Adams et al., 1999), but rather will describe in detail some of the newest developments in MR to get the best... [Pg.97]

Two adjustable parameters of the equations can be found by an optimization technique using Marquardt's or Rosenbrock's maximum likelihood method of minimization... [Pg.25]

One powerful technique is Maximum Likelihood Estimation (MLE), which requires the derivation of the joint conditional probability density function (PDF) of the output sequence, conditional on the model parameters. The input e[n] to the system shown in Figure 4.25 is assumed to be a white Gaussian noise (WGN) process with zero mean and a variance of σ². The probability density of the noise input is ... [Pg.110]
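
The joint density of a white Gaussian noise sequence factorizes into a product of identical Gaussians, so its log-likelihood has a simple closed form. The sketch below (simulated data; all numbers invented) writes out that joint log-density and checks that it is maximized by the well-known closed-form variance estimate, the mean of the squared samples.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2_true = 0.5
e = rng.normal(0.0, np.sqrt(sigma2_true), 10000)   # white Gaussian noise e[n]

def gauss_loglike(x, s2):
    # Joint log-density of i.i.d. zero-mean Gaussian samples with variance s2.
    return (-0.5 * x.size * np.log(2.0 * np.pi * s2)
            - np.sum(x * x) / (2.0 * s2))

# Maximizing the log-likelihood over s2 gives the closed-form MLE mean(e^2).
s2_hat = np.mean(e * e)
```

Evaluating `gauss_loglike` at variances above or below `s2_hat` gives a strictly lower value, confirming that the closed form is the maximizer.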

This comparison is performed on the basis of an optimality criterion, which allows one to adapt the model to the data by changing the values of the adjustable parameters. Thus, the optimality criteria and the objective functions of maximum likelihood and of weighted least squares are derived from the concept of conditional probability. Then, optimization techniques are discussed in the cases of both linear and nonlinear explicit models and of nonlinear implicit models, which are very often encountered in chemical kinetics. Finally, a short account of the methods of statistical analysis of the results is given. [Pg.4]

A comparison of the various fitting techniques is given in Table 5. Most of these techniques depend either explicitly or implicitly on a least-squares minimization. This is appropriate provided the noise present is normally distributed; in this case, least-squares estimation is equivalent to maximum-likelihood estimation.147 If the noise is not normally distributed, a least-squares estimation is inappropriate. Table 5 includes an indication of how each technique scales with N, the number of data points, for the case in which N is large. A detailed discussion on how different techniques scale with N, and also with the number of parameters, is given in the PhD thesis of Vanhamme.148 [Pg.112]
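
The equivalence of least squares and maximum likelihood under normally distributed noise can be checked numerically. In the sketch below (invented data), the Gaussian negative log-likelihood of a straight-line model is, up to constants, the sum of squared residuals, so minimizing it reproduces the least-squares fit.

```python
import numpy as np
from scipy.optimize import minimize

# Invented data with normally distributed noise of known variance.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)

# Least-squares estimates of slope and intercept.
b_ls, a_ls = np.polyfit(x, y, 1)

# Gaussian negative log-likelihood: up to additive constants it is the
# (scaled) sum of squared residuals, so its minimizer is the LS solution.
def nll(p):
    r = y - (p[0] + p[1] * x)
    return 0.5 * np.sum(r * r) / 0.1 ** 2

a_ml, b_ml = minimize(nll, x0=[0.0, 0.0]).x
```

Both routes yield the same intercept and slope to numerical precision, which is exactly the equivalence the text describes for Gaussian noise.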

The whole procedure, with or without salts, may not be based upon sound statistical principles. Rather than using various objective functions, it appears better to use a reliable statistical technique such as the method of maximum likelihood (24) or the Bayesian approach (25), both of which take into account the errors in all experimental observations in a logically justifiable fashion. The various discrepancies and anomalies noted in the present work would be moderated by using either... [Pg.174]

It is worth mentioning that statistical and probabilistic techniques have had a significant impact on how heavy atoms are found and models are refined (e.g., SHARP, SOLVE, REFMAC). Bayesian statistics and maximum likelihood methods are now used instead of least-squares methods. Keeping this in mind, one may want to consider how various data collection strategies affect the later steps in the process, e.g., high redundancy in the data makes for better statistics. [Pg.478]

Fariss, R. H., and V. J. Law, An efficient computational technique for generalized application of maximum likelihood to improve correlation of experimental data, Comput. Chem. Eng., 3, 95-104 (1979). [Pg.135]

Therefore, novel techniques potentially applicable to solving crystal structures are under continuous testing and development. A recent collective monograph on structure determination from powder diffraction data provides an excellent discussion of the problem and introduces different approaches that may be used in its solution. In this chapter, unconventional structure solution methods are only briefly reviewed; most of them are still controversial and do not always work well with different kinds of compounds and data, although solutions of several complex structures have been demonstrated. Summarized below are the genetic algorithm, maximum entropy, maximum likelihood, and simulated annealing methods. [Pg.497]

The maximum likelihood method, like maximum entropy, works in reciprocal space; rather than producing the final structure, it yields a raw model that has the best chance of being improved, through small steps, toward full agreement between the observed and calculated structure amplitudes. Maximum likelihood and maximum entropy techniques are often combined. [Pg.498]

Cancer potency is taken as the upper 95% confidence limit (UCL) on the linear term q1, which is called the cancer slope factor (CSF) or q1*, in units of (mg/kg-day)-1, derived using maximum likelihood estimation techniques. The cancer risk at low doses equals dose × q1*, and the dose associated with a specific risk equals risk/q1*. The calculated risks are upper-bound estimates and the calculated doses are lower-bound estimates. These are further summarized below. [Pg.402]
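
The low-dose arithmetic above is a one-line linear model. The sketch below uses a hypothetical slope factor value (invented; not from the cited source) simply to show the two directions of the calculation.

```python
# Hypothetical cancer slope factor q1* in (mg/kg-day)^-1 (value invented
# for illustration only).
q1_star = 0.05

def risk_at_dose(dose_mg_per_kg_day):
    # Low-dose linear model: risk = dose * q1* (an upper-bound estimate,
    # since q1* is itself an upper confidence limit).
    return dose_mg_per_kg_day * q1_star

def dose_at_risk(target_risk):
    # Dose associated with a specified risk: dose = risk / q1*
    # (correspondingly a lower-bound estimate).
    return target_risk / q1_star

risk = risk_at_dose(2e-4)   # risk at a dose of 2e-4 mg/kg-day
dose = dose_at_risk(1e-6)   # dose giving a one-in-a-million risk
```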

