Bayesian priors

The use of Bayesian priors, coupled with efficient stochastic search algorithms, provides one approach that solves both problems. The stochastic search significantly improves the chances of finding good models, whereas Bayesian priors focus the search on models obeying effect heredity. [Pg.238]
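As a concrete, purely illustrative sketch of how a prior can encode effect heredity, the snippet below assigns an interaction term a prior inclusion probability that depends on whether its parent main effects are in the model; the specific probabilities and function are assumptions, not values or code from the source.

```python
# Illustrative sketch only: a heredity prior on whether a two-factor
# interaction enters the model.  The inclusion probabilities are assumptions.
def interaction_prior(parent_i_active, parent_j_active,
                      p_both=0.25, p_one=0.05, p_neither=0.01):
    """Prior probability that the i*j interaction is active, given its parents."""
    n_active = int(parent_i_active) + int(parent_j_active)
    return {2: p_both, 1: p_one, 0: p_neither}[n_active]

print(interaction_prior(True, True))    # both parent main effects in the model -> 0.25
print(interaction_prior(True, False))   # weak-heredity case                    -> 0.05
print(interaction_prior(False, False))  # strong heredity would force this to 0
```

Under such a prior, the stochastic search rarely visits models containing an interaction whose parent main effects are absent, which is exactly the focusing effect described above.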

Integrated biomarker models may provide models for the Bayesian individualization of treatments. Within this context the models provide the Bayesian priors when only sparse data are available on a single patient. From the Bayesian priors and the sparse data, individualized patient parameters can be estimated. This approach leads to individualized dosing while taking into consideration the impact of patient characteristics such as demographics or disease classification. [Pg.467]
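A minimal sketch of the idea, assuming (purely for illustration) a normal population prior on log-clearance and a single noisy individual estimate; the numbers and the conjugate normal-normal form are assumptions, not the integrated biomarker models described in the source.

```python
import numpy as np

# Toy conjugate normal-normal update on the log scale (all values assumed).
prior_mean, prior_sd = np.log(4500.0), 0.30   # population (Bayesian) prior for CL, mL/h
obs_mean, obs_sd = np.log(5200.0), 0.50       # sparse-data estimate for one patient

prior_prec, obs_prec = prior_sd ** -2, obs_sd ** -2
post_prec = prior_prec + obs_prec
post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec

print(np.exp(post_mean))     # individualized clearance, shrunk toward the prior
print(post_prec ** -0.5)     # posterior standard deviation on the log scale
```

The individualized estimate is a precision-weighted compromise between the population prior and the patient's own sparse data, which is the mechanism behind the individualized dosing described above.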

Figure 3.13 Model parameter estimates as a function of the prior standard deviation for clearance. A 1-compartment model with absorption was fit to the data in Table 3.5 using a proportional error model and the SAAM II software system. Starting values were 5000 mL/h, 110 L, and 1.0 per hour for clearance (CL), volume of distribution (Vd), and absorption rate constant (ka), respectively. The Bayesian prior mean for clearance was fixed at 4500 mL/h while the standard deviation was systematically varied. The error bars represent the standard error of the parameter estimate. The open symbols are the parameter estimates when prior information is not included in the model.
Assurance. A sort of chimeric probability in which a frequentist power is averaged using a Bayesian prior distribution. It is thus the unconditional expected probability of a significant result as opposed to the conditional probability given a particular clinically relevant difference. [Pg.455]
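A small sketch of that averaging, assuming a two-sided two-sample z-test and a normal prior on the treatment difference (both choices are assumptions made for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(delta, n_per_arm=50, sd=1.0, alpha=0.05):
    """Frequentist power of a two-sided two-sample z-test at true difference delta."""
    se = sd * np.sqrt(2.0 / n_per_arm)
    z = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z - delta / se) + stats.norm.cdf(-z - delta / se)

# Bayesian prior on the true difference (assumed Normal(0.3, 0.2^2) for illustration)
deltas = rng.normal(0.3, 0.2, size=100_000)

assurance = power(deltas).mean()   # unconditional expected probability of significance
print(assurance)
print(power(0.3))                  # conditional power at one clinically relevant difference
```

The last two numbers illustrate the distinction in the definition: assurance averages the power curve over the prior, whereas conventional power conditions on a single assumed difference.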

Limited Projections and Views Bayesian 3D Reconstruction Using Gibbs Priors. [Pg.113]

O. Venard. Eddy current tomography: a Bayesian approach with a compound weak membrane-beta prior model. In Advances in Signal Processing for Non Destructive Evaluation of Materials, 1997. [Pg.333]

In the next subsection, I describe how the basic elements of Bayesian analysis are formulated mathematically. I also describe the methods for deriving posterior distributions from the model, either in terms of conjugate prior likelihood forms or in terms of simulation using Markov chain Monte Carlo (MCMC) methods. The utility of Bayesian methods has expanded greatly in recent years because of the development of MCMC methods and fast computers. I also describe the basics of hierarchical and mixture models. [Pg.322]
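The contrast between the two routes can be sketched on a toy normal-mean problem (everything below is an illustrative assumption, not an example from the source): the conjugate normal prior gives the posterior in closed form, while a random-walk Metropolis sampler, the simplest MCMC method, recovers the same answer by simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=20)   # simulated data, known sd = 1

def log_posterior(mu):
    log_prior = -0.5 * (mu / 10.0) ** 2          # prior: mu ~ Normal(0, 10^2)
    log_lik = -0.5 * np.sum((data - mu) ** 2)    # likelihood: data ~ Normal(mu, 1)
    return log_prior + log_lik

# Conjugate (closed-form) posterior for the mean
n, ybar = len(data), data.mean()
post_var = 1.0 / (1.0 / 10.0 ** 2 + n)
post_mean = post_var * n * ybar

# Random-walk Metropolis (MCMC) approximation of the same posterior
samples, mu = [], 0.0
for _ in range(20_000):
    prop = mu + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(mu):
        mu = prop
    samples.append(mu)

print(post_mean, post_var ** 0.5)                          # conjugate answer
print(np.mean(samples[2000:]), np.std(samples[2000:]))     # MCMC estimate after burn-in
```

Conjugate forms are exact but available only for special prior-likelihood pairs; MCMC trades exactness for generality, which is why its development has so greatly expanded the reach of Bayesian methods.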

The Bayesian alternative to fixed parameters is to define a probability distribution for the parameters and simulate the joint posterior distribution of the sequence alignment and the parameters with a suitable prior distribution. How can varying the similarity matrix... [Pg.332]

Zhu et al. [15] and Liu and Lawrence [61] formalized this argument with a Bayesian analysis. They are seeking a joint posterior probability for an alignment A, a choice of distance matrix Θ, and a vector of gap parameters Λ, given the data, i.e., the sequences to be aligned: p(A, Θ, Λ | R1, R2). The Bayesian likelihood and prior for this posterior distribution is... [Pg.335]
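The truncated expression is not reproduced here; generically, by Bayes' rule the posterior factors into a likelihood and a prior as below (the papers' specific factorization of p(A, Θ, Λ) is not shown and is left unstated):

```latex
p(A,\Theta,\Lambda \mid R_1,R_2)
  \;=\; \frac{p(R_1,R_2 \mid A,\Theta,\Lambda)\,p(A,\Theta,\Lambda)}{p(R_1,R_2)}
  \;\propto\; p(R_1,R_2 \mid A,\Theta,\Lambda)\,p(A,\Theta,\Lambda).
```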

Unfortunately, some authors describing their work as Bayesian inference or Bayesian statistics have not, in fact, used Bayesian statistics; rather, they used Bayes' rule to calculate various probabilities of one observed variable conditional upon another. Their work turns out to comprise derivations of informative prior distributions, usually of the form p(θ1, θ2, ..., θn) = ..., which is interpreted as the posterior distribution... [Pg.338]

A similar formalism is used by Thompson and Goldstein [90] to predict residue accessibilities. What they derive would be a very useful prior distribution based on multiplying out independent probabilities, to which data could be added to form a Bayesian posterior distribution. The work of Arnold et al. [87] is also not Bayesian statistics but rather the calculation of conditional distributions based on the simple counting argument that p(a | r) = p(a, r)/p(r), where a is some property of interest (secondary structure, accessibility) and r is the amino acid type or some property of the amino acid type (hydrophobicity) or of an amino acid segment (helical moment, etc.). [Pg.339]

The Reverend Thomas Bayes, in a posthumously published paper (1763), provided a systematic framework for the introduction of prior knowledge into probability estimates (Crellin, 1972). Indeed, Bayesian methods may be viewed as nothing more than convolving two distributions. If it were this simple, why the controversy? [Pg.50]

The controversy (for a lucid discussion refer to Mann, Shefer and Singpurwala, 1976) between "Bayesians" and "classicists" has nothing to do with precedence, for Bayes preceded much of classical statistics. The argument hinges on a) what prior knowledge is acceptable, and b) the treatment of probabilities as random variables themselves. [Pg.50]

A number of issues arise in using the available data to estimate the rates of location-dependent fire occurrence. These include the possible reduction in the frequency of fires due to increased awareness. Apostolakis and Kazarians (1980) use the data of Table 5.2-1 and Bayesian analysis to obtain the results in Table 5.2-2 using conjugate priors (Section 2.6.2). Since the data of Table 5.2-1 are binomially distributed, a gamma prior is used, with α and β being the parameters of the gamma prior as presented in Section 2.6.3.2. For example, in the cable-spreading room from Table 5.2-2, the prior values of α and β (0.182 and 0.96) yield a mean frequency of 0.21, while the posterior distribution's α and β (2.182 and 302.26) yield a mean frequency of 0.0072. [Pg.198]
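A minimal sketch of such a conjugate update is shown below, assuming the usual Poisson/gamma pairing for occurrence rates with α as shape and β as rate; the event count and exposure used here are illustrative assumptions, not the data of Table 5.2-1.

```python
# Conjugate gamma update for an occurrence rate (shape alpha, rate beta assumed).
def gamma_posterior(alpha0, beta0, n_events, exposure):
    """Gamma(alpha0, beta0) prior updated with n_events observed over 'exposure'."""
    return alpha0 + n_events, beta0 + exposure

alpha0, beta0 = 0.182, 0.96            # prior parameters quoted above
print(alpha0 / beta0)                  # prior mean rate

alpha1, beta1 = gamma_posterior(alpha0, beta0, n_events=2, exposure=300.0)
print(alpha1 / beta1)                  # posterior mean rate (illustrative inputs)
```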

Mosleh, Kazarians, and Gekler obtained a Bayesian estimate of the failure rate, Z, of a coolant recycle pump in the hazard/risk study of a chemical plant. The estimate was based on evidence of no failures in 10 years of operation. Nuclear industry experience with pumps of similar types was used to establish the prior distribution of Z. This experience indicated that the 5th and 95th percentiles of the failure rate distribution developed for this category were 2.0 x 10^-6 per hour (about one failure per 57 years of operation) and 98.3 x 10^-6 per hour (about one failure per year). Extensive experience in other industries suggested the use of a lognormal distribution with the 5th and 95th percentile values as the prior distribution of Z, the failure rate of the coolant recycle pump. [Pg.614]
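One common way to turn the quoted 5th and 95th percentiles into a lognormal prior is sketched below; treating the two percentiles as symmetric on the log scale is an assumption about the construction, not a statement of the study's actual procedure.

```python
import numpy as np
from scipy import stats

p05, p95 = 2.0e-6, 98.3e-6            # failures per hour, as quoted above
z95 = stats.norm.ppf(0.95)            # ~1.645

mu = 0.5 * (np.log(p05) + np.log(p95))            # log-scale mean
sigma = (np.log(p95) - np.log(p05)) / (2 * z95)   # log-scale standard deviation

print(np.exp(mu))                      # prior median of the failure rate Z
print(np.exp(mu + 0.5 * sigma ** 2))   # prior mean of the failure rate Z
```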

The number of defective items in a sample of size n produced by a certain machine has a binomial distribution with parameters n and p, where p is the probability that an item produced is defective. For the case of 2 observed defectives in a sample of size 20, obtain the Bayesian estimate of p if the prior distribution of p is specified by the pdf... [Pg.636]
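The prior pdf is truncated in the excerpt above. Purely as an illustration, if the prior were a conjugate Beta(a, b) density, the posterior after 2 defectives in 20 items would be Beta(a + 2, b + 18), and a natural Bayes estimate of p is its mean:

```python
# Hypothetical conjugate Beta prior (the excerpt's actual prior pdf is not shown).
def beta_posterior_mean(a, b, defectives, n):
    """Posterior mean of p under a Beta(a, b) prior and binomial data."""
    return (a + defectives) / (a + b + n)

print(beta_posterior_mean(a=1.0, b=1.0, defectives=2, n=20))   # uniform prior -> 3/22
```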

We take a Bayesian approach to research process modeling, which encourages explicit statements about the prior degree of uncertainty, expressed as a probability distribution over possible outcomes. Simulation that builds in such uncertainty will be of a what-if nature, helping managers to explore different scenarios, to understand problem structure, and to see where the future is likely to be most sensitive to current choices, or indeed where outcomes are relatively indifferent to such choices. This determines where better information could best help improve decisions and how much to invest in internal research (research about process performance, and in particular, prediction reliability) that yields such information. [Pg.267]

One challenge in applying this approach, which relies on prior estimates of method prediction reliability, is how to deal with differences between future compounds to be tested and the universe of all compounds on which the collected experience of R&D process effectiveness has been based. If new active compounds fall within the space previously sampled, then knowledge of chemical properties is just another kind of conditioning within a Bayesian network; if they fall outside this space, then the initial model of both outcomes and predictions has an unpredictable error. The use of sampling theory and models of diversity [16] are therefore promising extensions of the above approach. [Pg.271]

Friedman [12] introduced a Bayesian approach; the Bayes equation is given in Chapter 16. In the present context, a Bayesian approach can be described as finding a classification rule that minimizes the risk of misclassification, given the prior probabilities of belonging to a given class. These prior probabilities are estimated from the fraction of each class in the pooled sample ... [Pg.221]
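A minimal sketch of such a rule, with the prior class probabilities taken as the class fractions in the pooled sample; the Gaussian class-conditional densities below are an assumption made for illustration, not Friedman's method.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit(X, y):
    """Estimate priors from class fractions and Gaussian class-conditional densities."""
    classes = np.unique(y)
    priors = {c: float(np.mean(y == c)) for c in classes}
    densities = {c: multivariate_normal(X[y == c].mean(axis=0),
                                        np.cov(X[y == c], rowvar=False))
                 for c in classes}
    return classes, priors, densities

def classify(x, classes, priors, densities):
    """Assign x to the class maximizing log prior + log likelihood, i.e. the
    minimum-risk rule under equal misclassification costs."""
    scores = [np.log(priors[c]) + densities[c].logpdf(x) for c in classes]
    return classes[int(np.argmax(scores))]
```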

It is worthwhile noting that Box and Draper (1965) arrived at the same determinant criterion following a Bayesian argument and assuming that the error covariance matrix is unknown and that the prior distribution of the parameters is noninformative. [Pg.19]
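For reference, the determinant criterion is usually written as minimizing the determinant of the matrix of residual cross-products; the notation below is a generic choice, not necessarily the book's exact symbols.

```latex
\min_{\mathbf{k}} \;
\det\!\left( \sum_{i=1}^{N}
\left[\hat{\mathbf{y}}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\right]
\left[\hat{\mathbf{y}}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\right]^{\mathsf T} \right)
```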

Under certain conditions we may have some prior information about the parameter values. This information is often summarized by assuming that each parameter is distributed normally with a given mean and a small or large variance depending on how trustworthy our prior estimate is. The Bayesian objective function, SB(k), that should be minimized for algebraic equation models is... [Pg.146]
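In a common form (the symbols and weighting matrices below are generic assumptions, not necessarily the book's exact notation), the Bayesian objective adds a quadratic prior penalty to the weighted least-squares term:

```latex
S_B(\mathbf{k}) \;=\;
\sum_{i=1}^{N}
\left[\hat{\mathbf{y}}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\right]^{\mathsf T}
\mathbf{Q}_i
\left[\hat{\mathbf{y}}_i - \mathbf{f}(\mathbf{x}_i,\mathbf{k})\right]
\;+\;
\left(\mathbf{k}-\bar{\mathbf{k}}\right)^{\mathsf T}
\mathbf{V}^{-1}
\left(\mathbf{k}-\bar{\mathbf{k}}\right)
```

Here k̄ and V stand for the prior mean and prior covariance of the parameters; a small prior variance pulls the estimates strongly toward the prior mean, a large one leaves them essentially data-driven.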

For simplicity and in order to avoid potential misrepresentation of the experimental equilibrium surface, we recommend the use of 2-D interpolation. Extrapolation of the experimental data should generally be avoided. It should be kept in mind that, if prediction of complete miscibility is demanded from the EoS at conditions where no data points are available, a strong prior is imposed on the parameter estimation from a Bayesian point of view. [Pg.238]

An alternative method, which uses the concept of maximum entropy (MaxEnt), appeared to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one that has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution, with the entropy defined as... [Pg.48]
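The excerpt cuts off before the definition; in the standard maximum-entropy form (stated here as an assumption, since the source's own formula is not shown), the entropy of a density ρ relative to a flat prior m is

```latex
S[\rho] \;=\; -\sum_{i} \rho_i \,\ln\!\frac{\rho_i}{m_i}
```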

