Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Likelihood function, analysis

The likelihood function is an expression for p(a | t, n, C), the probability of the sequence a (of length n) given a particular alignment t to a fold C. The expression for the likelihood is where most threading algorithms differ from one another. Since this probability can be expressed in terms of a pseudo free energy, p(a | t, n, C) ∝ exp[−f(a, t, C)], any energy function that satisfies this equation can be used in the Bayesian analysis described above. The normalization constant required is akin to a partition function, such that... [Pg.337]
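Although the excerpt cuts off before the partition function itself, the Boltzmann-like relation it describes is easy to sketch. A minimal illustration (not from the cited text; the energies are invented) of turning pseudo free energies for candidate alignments into normalized probabilities via a partition-function-like sum:

```python
import math

def likelihood_from_energy(energies):
    """Convert pseudo free energies f(a, t, C) for candidate alignments
    into normalized probabilities p ∝ exp(-f).  The normalizing sum
    plays the role of a partition function Z."""
    fmin = min(energies)                      # shift for numerical stability
    weights = [math.exp(-(f - fmin)) for f in energies]
    Z = sum(weights)                          # partition-function-like constant
    return [w / Z for w in weights]

# lower pseudo-energy -> higher probability
probs = likelihood_from_energy([1.0, 2.0, 4.0])
```

The energy shift by the minimum changes nothing after normalization but avoids overflow for large energy scales.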

If this procedure is followed, then a reaction order will be obtained which is not masked by the effects of the error distribution of the dependent variables. If the transformation achieves the four qualities (a–d) listed at the beginning of this section, an unweighted linear least-squares analysis may be used rigorously. The reaction order, α = λ + 1, and the transformed forward rate constant, B, possess all of the desirable properties of maximum likelihood estimates. Finally, the equivalent of the likelihood function can be represented by the plot of the transformed sum of squares versus the reaction order. This provides not only a reliable confidence interval on the reaction order, but also the entire sum-of-squares curve as a function of the reaction order. Then, for example, one could readily determine whether any previously postulated reaction order can be reconciled with the available data. [Pg.160]
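The sum-of-squares-versus-reaction-order curve described above can be sketched as follows. This is a toy illustration with invented concentration and rate data (not the cited source's procedure or transformation): each trial order is fitted by linear least squares and its residual sum of squares recorded, so the minimum of the curve locates the best-supported order.

```python
def ssq_for_order(alpha, conc, rate):
    """Least-squares fit of rate ≈ k * conc**alpha at a fixed trial
    reaction order alpha (slope through the origin); returns the
    residual sum of squares and the fitted rate constant."""
    x = [c ** alpha for c in conc]
    k = sum(xi * ri for xi, ri in zip(x, rate)) / sum(xi * xi for xi in x)
    ssq = sum((ri - k * xi) ** 2 for xi, ri in zip(x, rate))
    return ssq, k

conc = [0.5, 1.0, 2.0, 4.0]
rate = [0.25, 1.0, 4.0, 16.0]        # exactly second order with k = 1
# scan trial reaction orders and record the sum-of-squares curve
curve = {a: ssq_for_order(a, conc, rate)[0] for a in (1.0, 1.5, 2.0, 2.5)}
```

In practice the curve would be scanned finely and a confidence interval read off from the region where the sum of squares stays below a threshold.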

In a regular application of Bayes's rule, a prior estimate of probability and a likelihood function are combined to produce a posterior estimate of probability, which may then be used as an input in a risk analysis. Bayes's rule is... [Pg.93]
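The prior-times-likelihood combination just described can be illustrated over a discrete set of hypotheses; this is a generic sketch with invented numbers, not the cited text's formulation:

```python
def bayes_update(prior, likelihood):
    """Posterior ∝ prior × likelihood over a discrete grid of hypotheses;
    the normalizing constant is the total probability of the data."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    evidence = sum(unnorm)
    return [u / evidence for u in unnorm]

# two hypotheses for a failure probability, uniform prior
prior = [0.5, 0.5]
likelihood = [0.2, 0.8]      # p(observed data | hypothesis)
posterior = bayes_update(prior, likelihood)
```

With a uniform prior the posterior simply mirrors the normalized likelihood, which is the expected behaviour.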

In general, the advantage of these response surface models is that they enable the description of nonlinear concentration-response relationships, and that differences in slopes and functional form of the individual concentration-response curves can be accounted for. The complete n + 1 dimensional concentration-response surface is fitted to the complete data set, which takes into account that the parameters of the individual concentration-response relationships are actually predictors for the complete mixture data set. Different likelihood functions can be used to adjust the analysis for different types of endpoints. Each approach has its own specific advantages, and response surface models for IA have also been developed (Haas et al. 1997; Jonker et al. 2005). The user needs to have some programming skills and statistical knowledge to judge the result. Specifically, the user needs to know how to... [Pg.140]

The construction of a noninformative prior is a nontrivial task, requiring analysis of likelihood functions for prospective data. The construction is simplest when the likelihood l(θ) for a single parameter is data-translated in some coordinate φ(θ); then the noninformative prior density takes the form p(φ) = const over the permitted range of φ. This prior is a special form of a more general one derived by Jeffreys (1961) (see Section 5.4); we illustrate it here by two examples. [Pg.84]
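As a standard worked case (not necessarily one of the two examples the cited text goes on to give), Jeffreys' rule can be illustrated for a Poisson rate λ, where the data-translating coordinate is the square root:

```latex
% For a Poisson likelihood with rate \lambda, the Fisher information is
\[
  I(\lambda) = \frac{1}{\lambda},
\]
% so Jeffreys' rule p(\theta) \propto \sqrt{I(\theta)} gives the prior
\[
  p(\lambda) \propto \lambda^{-1/2},
\]
% which is uniform in the transformed coordinate
\[
  \phi(\lambda) = \sqrt{\lambda},
\]
% the coordinate in which the Poisson likelihood is approximately
% data-translated.
```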

As mentioned earlier, incorporating prior information does not in itself constitute a Bayesian approach. Priors have been used in non-Bayesian settings in population PK analysis and other analyses. Applications using the PRIOR subroutine in NONMEM have been described previously (3,16). In this setting the prior information can be viewed as a penalty on the likelihood function, and its implementation is similar in spirit to the maximum a posteriori (MAP) procedures used commonly... [Pg.144]
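The penalized-likelihood view of a prior described above can be sketched generically. The quadratic penalty below is what a normal prior contributes to the log-posterior; the numbers and the toy log-likelihood are invented for illustration, and this is not the NONMEM PRIOR implementation:

```python
def map_objective(theta, data_loglik, prior_mean, prior_var):
    """Log-posterior up to a constant: the log-likelihood plus the
    quadratic penalty contributed by a normal prior."""
    penalty = -((theta - prior_mean) ** 2) / (2.0 * prior_var)
    return data_loglik(theta) + penalty

# toy log-likelihood peaked at theta = 2.0
loglik = lambda t: -(t - 2.0) ** 2
# with a fairly tight prior centred at 0, the MAP estimate is pulled
# away from the likelihood peak toward the prior mean
grid = [i / 100.0 for i in range(301)]
theta_map = max(grid, key=lambda t: map_objective(t, loglik, 0.0, 0.5))
```

Here the maximum of the penalized objective sits at θ = 1.0, between the likelihood peak (2.0) and the prior mean (0.0), which is the shrinkage behaviour MAP procedures exhibit.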

However, for a large number of observed data points, repeated evaluation of the factor p(V | θ, C) for different values of θ becomes computationally prohibitive. It is obvious from Equation (4.9) that it requires the computation of the solution X of the algebraic equation F(θ)X = Y and of the determinant of the N × N matrix F(θ). This task is computationally very expensive for large N, even though the former can be done efficiently with pre-conditioners [43,49,124]. Repeated evaluation of the likelihood function thousands of times in the optimization process is computationally prohibitive for large N. Therefore, the exact Bayesian approach described above, based on direct use of the measured data V, becomes practically infeasible. In the next section, the model updating problem will be formulated with a nonstationary response measurement. Standard random vibration analysis will be reviewed. Then, an approximate approach is introduced that overcomes the computational obstacles and renders the problem practically feasible. [Pg.164]

In Eq. (9.43) f(λ) is the prior probability density function. It reflects the—subjective—assessment of component behaviour which the analyst had before the lifetime observations were carried out. L(E|λ) is the likelihood function. It is the conditional probability describing the observed failures under the condition that f(λ) applies to the component under analysis. For failure rates L(E|λ) is usually represented by the Poisson distribution of Eq. (9.30) and for unavailabilities by the binomial distribution of Eq. (9.35). The denominator in Eq. (9.43) serves for normalizing, so that the result lies in the domain of probabilities [0, 1]. f(λ|E), finally, is the new probability density function, which is called the posterior probability density function. It represents a synthesis of the notion of component failure behaviour before the observation and the observation itself. Thus it is the mathematical expression of a learning process. [Pg.340]
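For the Poisson-likelihood case mentioned above, choosing a conjugate gamma prior for the failure rate makes the Bayesian update a one-line computation. This sketch uses invented prior parameters and failure counts and is a generic illustration of the learning process the excerpt describes, not the cited book's equations:

```python
def gamma_poisson_update(alpha, beta, failures, exposure_time):
    """Conjugate Bayesian update for a failure rate:
    gamma(alpha, beta) prior + Poisson likelihood for `failures` events
    observed over `exposure_time` -> gamma posterior parameters."""
    return alpha + failures, beta + exposure_time

a0, b0 = 2.0, 1000.0          # prior: mean rate 2/1000 per hour (invented)
a1, b1 = gamma_poisson_update(a0, b0, failures=3, exposure_time=5000.0)
posterior_mean = a1 / b1      # (2 + 3) / (1000 + 5000)
```

The posterior mean lies between the prior mean (0.002) and the observed rate (3/5000 = 0.0006), weighted by the relative amounts of prior and observed "exposure".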

Having said all of this, it is important to remember, however (Popper, 1976, Appendix IX), ... that non-statistical theories have as a rule a form totally different from that of the h here described, that is, they are of the form of a universal proposition. The question thus becomes whether systematics, or phylogeny reconstruction, can be construed in terms of a statistical theory that satisfies the rejection criteria formulated by Popper (see footnote 1) and that, in case of favorable evidence, allows the comparison of degree of corroboration versus Fisher's likelihood function. As far as phylogenetic analysis is concerned, I found no indication in Popper's writing that history is subject to the same logic as the test of random samples of statistical data. As far as a metric for degree of corroboration relative to a nonstatistical hypothesis is concerned, Popper (1973: 58-59; see also footnote 1) clarified. [Pg.85]

Statistical data analysis of operation time till failure shows that the operation time till failure T, as a random variable, follows a Weibull distribution (according to the performed goodness-of-fit tests). The parameters k and p are assumed to be independent random variables with prior probability density functions: p1(x), a gamma pdf with mean value equal to the prior (DPSIA) estimate of k and variance equal to 10% of the estimate value, and p2(y), an inverse gamma pdf (as conjugate prior (Bernardo et al., 2003; Berthold et al., 2003)) with mean value equal to the prior (DPSIA) estimate of p and variance equal to 10% of the estimate value. Failure data: tj, j = 1, 2, ..., 28. Thus, the likelihood function is... [Pg.421]
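Although the excerpt cuts off before the formula, a Weibull likelihood of the kind it refers to can be written out directly. This is a minimal sketch for complete (uncensored) failure times with generic shape and scale names `k` and `lam` and invented data, not the cited paper's parameterization:

```python
import math

def weibull_loglik(k, lam, times):
    """Log-likelihood of complete failure times under a Weibull
    distribution with shape k and scale lam:
    log f(t) = log k - log lam + (k-1) log(t/lam) - (t/lam)**k."""
    ll = 0.0
    for t in times:
        ll += (math.log(k) - math.log(lam)
               + (k - 1.0) * (math.log(t) - math.log(lam))
               - (t / lam) ** k)
    return ll

times = [120.0, 340.0, 560.0, 900.0]     # invented failure times (hours)
# the likelihood should prefer a scale near the data over a far-off one
better = weibull_loglik(1.5, 500.0, times)
worse = weibull_loglik(1.5, 50.0, times)
```

In the Bayesian setting of the excerpt, this log-likelihood would be added to the log of the gamma and inverse-gamma priors to form the log-posterior over (shape, scale).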

Marcoulaki et al. (2012) assumed that all workers have the same accident and recovery rates. Thus, the authors pooled the data from all workers and treated them as homogeneous in order to formulate the likelihood function of the Bayesian model. However, owing to subjective and individual characteristics, it is to be expected that employees of the same class have unique accident and recovery rates, even though they perform similar functions in the workplace or are allocated to the same occupational environment as other employees. Therefore, a population variability assessment over the rates is more appropriate for accident analysis. [Pg.1301]

In this section, we illustrate and discuss the use of the Bayesian Population Variability analysis and Markov-based models by means of an example. We start by supposing that we are interested in assessing the average distribution of work time loss due to occupational accidents for workers of a hydroelectric power company in Brazil. Runtime data were collected from the timeline of operation employees between 01/01/2005 and 09/31/2012 in order to construct the likelihood function. [Pg.1307]

Even though the likelihood function has a significant influence on the model updating results, its construction has received much less attention than the construction of the prior PDF. This is mainly because most often very little or no information is at hand regarding the characteristics of the error(s); only in selected cases can a realistic estimate be made of the probabilistic model representing the prediction error, for instance based on the analysis of measurement results. Most often, it is simply assumed that the probabilistic model of the prediction error is known and fixed, so that the parameter set reduces to θ = {θ_M}. [Pg.1525]

Clearly, the choice of the prior is crucial, as it influences our analysis. It is this subjective nature of the Bayesian approach that is the cause of controversy, since in statistics we would like to think that different people who look at the data will come to the same conclusions. We hope that the data will be sufficiently informative that the likelihood function is sharply peaked around specific values of θ and σ, i.e., that the inference problem is data-dominated. In this case, the estimates are rather insensitive to the choice of prior, as long as it is nonzero near the peak of the likelihood function. When this is not the case, the prior influences the results of the analysis. [Pg.386]
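The data-dominated case described here is easy to demonstrate: with a sharply peaked likelihood, two quite different nonzero priors yield nearly identical posteriors. A toy grid example (all numbers invented):

```python
def posterior(prior, likelihood):
    """Normalized posterior over a discrete parameter grid."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# sharply peaked likelihood over a 3-point grid of parameter values
lik = [1e-6, 1.0, 1e-6]
flat = [1 / 3, 1 / 3, 1 / 3]
skew = [0.6, 0.3, 0.1]        # a quite different, but nonzero, prior
p_flat = posterior(flat, lik)
p_skew = posterior(skew, lik)
```

Both posteriors concentrate essentially all their mass on the middle grid point, illustrating the insensitivity to the prior when inference is data-dominated.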

Mixture models have come up frequently in Bayesian statistical analysis in molecular and structural biology [16,28] as described below, so a description is useful here. Mixture models can be used when simple forms such as the exponential or Dirichlet function alone do not describe the data well. This is usually the case for a multimodal data distribution (as might be evident from a histogram of the data), when clearly a single Gaussian function will not suffice. A mixture is a sum of simple forms for the likelihood... [Pg.327]
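A minimal sketch of such a mixture likelihood, here with two Gaussian components and invented parameters (the cited text may use other component densities):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return (math.exp(-0.5 * ((x - mu) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def mixture_likelihood(x, weights, params):
    """Likelihood of x under a mixture: a weighted sum of simple
    densities.  `params` is a list of (mu, sigma) pairs and the
    weights must sum to 1."""
    return sum(w * gaussian_pdf(x, mu, s)
               for w, (mu, s) in zip(weights, params))

# bimodal mixture: two modes, at -2 and +2
w = [0.5, 0.5]
p = [(-2.0, 0.5), (2.0, 0.5)]
at_mode = mixture_likelihood(2.0, w, p)
between = mixture_likelihood(0.0, w, p)
```

The density is high at either mode and low between them, which is exactly the multimodal behaviour a single Gaussian cannot capture.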

In the above analysis, y was considered to be a reaction rate. Clearly, any dependent variable can be used. Note, however, that if the dependent variable, y, is distributed with constant error variance, then the function z will also have constant error variance and the unweighted linear least-squares analysis is rigorous. If, in addition, y has error that is normal and independent, the least-squares analysis would provide a maximum likelihood estimate of A. On the other hand, if any transformation of the reaction rate is felt to fulfill these characteristics more nearly, the transformation may be made on y, r1, r2, and the same analysis may be applied. One common transformation is the logarithmic one. [Pg.143]

In the panel data models estimated in Example 21.5.1, neither the logit nor the probit model provides a framework for applying a Hausman test to determine whether fixed or random effects is preferred. Explain. (Hint: Unlike our application in the linear model, the incidental parameters problem persists here.) Look at the two cases. Neither case has an estimator which is consistent in both cases. In both cases, the unconditional fixed effects estimator is inconsistent, so the rest of the analysis falls apart. This is the incidental parameters problem at work. Note that the fixed effects estimator is inconsistent because in both models the estimator of the constant terms is a function of 1/T. Certainly in both cases, if the fixed effects model is appropriate, then the random effects estimator is inconsistent, whereas if the random effects model is appropriate, the maximum likelihood random effects estimator is both consistent and efficient. Thus, in this instance, the random effects estimator satisfies the requirements of the test. In fact, there does exist a consistent estimator for the logit model with fixed effects - see the text. However, this estimator must be based on a restricted sample: observations with the sum of the y's equal to zero or T must be discarded, so the mechanics of the Hausman test are problematic. This does not fall into the template of computations for the Hausman test. [Pg.111]

Linear discriminant analysis (LDA) is also a probabilistic classifier in the mold of Bayes algorithms but can be related closely to both regression and PCA techniques. A discriminant function is simply a function of the observed vector of variables (K) that leads to a classification rule. The likelihood ratio (above), for example, is an optimal discriminant for the two-class case. Hence, the classification rule can be stated as... [Pg.196]
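Although the excerpt truncates before stating the rule, the two-class likelihood-ratio discriminant it refers to can be sketched for equal-variance Gaussian classes, where the rule reduces to a linear threshold. All parameters below are invented for illustration:

```python
def log_likelihood_ratio(x, mu1, mu2, sigma):
    """log[p(x | class 1) / p(x | class 2)] for two univariate Gaussian
    classes with equal variance; the Gaussian normalizers cancel, so
    the expression is linear in x (as in LDA)."""
    return ((x - mu2) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

def classify(x, mu1=0.0, mu2=4.0, sigma=1.0):
    """Assign class 1 when the log likelihood ratio is positive."""
    return 1 if log_likelihood_ratio(x, mu1, mu2, sigma) > 0 else 2

labels = [classify(x) for x in (-1.0, 1.0, 3.0, 5.0)]
```

With equal class priors the decision boundary falls at the midpoint between the class means (here x = 2), which is the standard LDA result for this special case.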

Once the background is subtracted, the component of the spectrum due to the annihilation of ortho-positronium is usually visible (see Figure 6.5(a), curve (ii) and the fitted line (iv)). The analysis of the spectrum can now proceed, and a number of different methods have been applied to derive annihilation rates and the amplitudes of the various components. One method, introduced by Orth, Falk and Jones (1968), applies a maximum-likelihood technique to fit a double exponential function to the free-positron and ortho-positronium components (where applicable). Alternatively, the fits to the components can be made individually, if their decay rates are sufficiently well separated, by fitting to the longest component (usually ortho-positronium) first and then subtracting this from the... [Pg.275]
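The tail-first strategy mentioned last (fit the longest-lived component, then subtract it to expose the faster one) can be sketched on synthetic data. This is a generic illustration with invented decay rates, not the Orth, Falk and Jones maximum-likelihood implementation:

```python
import math

def subtract_long_component(counts, times, lam_long, amp_long):
    """Tail-first decomposition: subtract a fitted long-lived exponential
    amp * exp(-lam * t) from the spectrum, leaving the fast component."""
    return [c - amp_long * math.exp(-lam_long * t)
            for c, t in zip(counts, times)]

times = [0.0, 1.0, 2.0, 3.0]
# synthetic two-component spectrum: fast (rate 2) + slow (rate 0.2)
counts = [100.0 * math.exp(-2.0 * t) + 10.0 * math.exp(-0.2 * t)
          for t in times]
# assume the slow component was already fitted from the late-time tail
fast_only = subtract_long_component(counts, times, 0.2, 10.0)
```

In a real spectrum the subtraction propagates the statistical error of the tail fit into the fast component, which is why the maximum-likelihood joint fit is preferred when the decay rates are not well separated.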

This comparison is performed on the basis of an optimality criterion, which allows one to adapt the model to the data by changing the values of the adjustable parameters. Thus, the optimality criteria and the objective functions of maximum likelihood and of weighted least squares are derived from the concept of conditioned probability. Then, optimization techniques are discussed in the cases of both linear and nonlinear explicit models and of nonlinear implicit models, which are very often encountered in chemical kinetics. Finally, a short account of the methods of statistical analysis of the results is given. [Pg.4]



