Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Sampling Bayesian approach

Friedman [12] introduced a Bayesian approach; the Bayes equation is given in Chapter 16. In the present context, a Bayesian approach can be described as finding a classification rule that minimizes the risk of misclassification, given the prior probabilities of belonging to each class. These prior probabilities are estimated from the fraction of each class in the pooled sample ... [Pg.221]
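The rule just described can be sketched in a few lines: priors come from the class fractions in the pooled sample, and each observation is assigned to the class with the highest posterior probability. The Gaussian class-conditional densities and all numerical values below are illustrative assumptions, not taken from Friedman [12].

```python
import math

def gaussian_pdf(x, mean, std):
    # univariate normal density, used here as an assumed class-conditional model
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def bayes_classify(x, classes):
    # classes: dict mapping name -> (prior, mean, std); pick the class whose
    # prior * likelihood (proportional to the posterior) is largest
    posts = {name: prior * gaussian_pdf(x, m, s)
             for name, (prior, m, s) in classes.items()}
    return max(posts, key=posts.get)

# Priors estimated from the pooled sample: e.g. 60 observations of class A, 40 of B.
classes = {"A": (0.6, 0.0, 1.0), "B": (0.4, 3.0, 1.0)}
print(bayes_classify(0.2, classes))  # near the A mean -> "A"
print(bayes_classify(2.9, classes))  # near the B mean -> "B"
```

Minimizing the risk of misclassification under 0-1 loss reduces to exactly this maximum-posterior assignment.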

Monte Carlo data for y were generated, with mean x, to simulate process sampling data. A window size of 25 was used to demonstrate the performance of the Bayesian approach. [Pg.222]

The Bayesian approach reverses the roles of the sample and the model: the sample is fixed and unique, and the model itself is uncertain. This viewpoint corresponds more closely to the practical situation facing the individual researcher: there is only one sample, and there are doubts either about which model to use or, for a specified model, about which parameter values to assign. The model uncertainty is addressed by treating the model parameters as distributed. In other words, the Bayesian interpretation of a confidence interval is that it indicates the level of belief warranted by the data the... [Pg.82]

Apart from this pedagogical aspect (cf. Lee 1989, preface), there is a more technical reason to prefer the Bayesian approach to the confidence approach. The Bayesian approach is ultimately the more powerful one for extending a model in the directions necessary to deal with its weaknesses, namely various relaxations of distributional assumptions. The conceptual device of an infinite repetition of samples, as in the frequentist viewpoint, does not yield enough power to accomplish these extensions. [Pg.83]

A partially Bayesian approach was suggested by Chipman et al. (1997). They used independent prior distributions for each main effect being active. The prior distribution selected for βj was a mixture of normals, namely, N(0, τj²) with prior probability 1 − πj and N(0, cj²τj²) with prior probability πj, where cj greatly exceeds 1. The prior distribution for σ² was a scaled inverse-χ². They then used the Gibbs-sampling-based stochastic search variable selection method of George and McCulloch (1993) to obtain approximate posterior probabilities for βj, that is, for each factor they obtained the posterior probability that βj is from N(0, cj²τj²) rather than from N(0, τj²). They treated this as a posterior probability that the corresponding factor is active and used these probabilities to evaluate the posterior probability of each model. [Pg.182]
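The full Gibbs-based stochastic search of George and McCulloch is beyond a short sketch, but the core spike-and-slab calculation can be illustrated for a single coefficient: given the mixture prior N(0, τ²) with probability 1 − π and N(0, c²τ²) with probability π, the posterior probability that a coefficient value came from the wide "slab" component (factor active) is a one-line Bayes update. The hyperparameter values below are illustrative assumptions.

```python
import math

def normal_pdf(x, sd):
    # zero-mean normal density with standard deviation sd
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def prob_active(beta_j, tau=0.1, c=10.0, pi_j=0.25):
    # posterior probability that beta_j was drawn from the slab N(0, c^2 tau^2)
    # rather than the spike N(0, tau^2), under the stated mixture prior
    slab = pi_j * normal_pdf(beta_j, c * tau)
    spike = (1.0 - pi_j) * normal_pdf(beta_j, tau)
    return slab / (slab + spike)

print(round(prob_active(0.05), 3))  # small effect: activity unlikely
print(round(prob_active(1.50), 3))  # large effect: almost certainly active
```

In the actual method this update is performed inside a Gibbs sampler, conditioning on the other coefficients and on σ² at each sweep, rather than on a fixed point estimate as here.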

Obtaining parametric maps necessarily requires estimating the parameter vector θ from K noisy samples. The general theory of estimation59,60 provides solutions that can be applied in the domain of quantitative MRI. In practice, the ML approach is the most commonly used, because it concerns the estimation of non-random parameters, unlike the Bayesian approach, which is mostly applied to segment the images.61 The LS approaches defined by... [Pg.226]

The analysis of clinical pharmacokinetic data offers additional challenges. Typically, the number of samples available from an individual patient can be limited. In some cases, only one or two samples may be available. If population-based pharmacokinetic values are available, it may still be possible to analyze this limited clinical information using a Bayesian approach. Using patient and population information, the objective function becomes a function of both the residual between the observed and calculated data (as in weighted least squares) and the residual between the population and the calculated values of the parameters, as shown in Eq. (23) ... [Pg.2766]
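The two-part objective described above can be sketched as a penalized weighted-least-squares function: one term for the residual between observed and model-predicted concentrations, and one for the deviation of the individual's parameters from the population values. The one-compartment bolus model and all numerical values are illustrative assumptions, not Eq. (23) from the source.

```python
import math

def predict_conc(dose, cl, v, t):
    # one-compartment IV bolus model: C(t) = (dose / V) * exp(-(CL/V) * t)
    return (dose / v) * math.exp(-(cl / v) * t)

def bayesian_objective(params, data, pop_mean, pop_sd, sigma):
    # first term: weighted residual between observed and predicted concentrations
    cl, v = params
    data_term = sum(((c_obs - predict_conc(100.0, cl, v, t)) / sigma) ** 2
                    for t, c_obs in data)
    # second term: residual between individual parameters and population values
    prior_term = sum(((p - m) / s) ** 2
                     for p, m, s in zip(params, pop_mean, pop_sd))
    return data_term + prior_term

# a single sparse observation, simulated from an individual with CL = 5, V = 50
obs = [(1.0, predict_conc(100.0, 5.0, 50.0, 1.0))]
print(bayesian_objective((5.0, 50.0), obs, (5.0, 50.0), (1.0, 10.0), 0.1))  # -> 0.0
```

With only one or two samples the data term alone cannot identify both parameters; the population term anchors the estimate, which is exactly why this Bayesian objective is useful for sparse clinical data.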

Probabilistic models have been developed for characterizing compliance. The most commonly cited probabilistic approach is the hierarchical Markov model. Other more recently developed approaches range from a random sampling probabilistic model approach, to likelihood approaches, Bayesian approaches, and a missing dosing history approach. It is up to the pharmacometrician to choose the method that would best characterize his/her nonadherence data. The application example reinforces the importance of compliance to prescribed drug therapy, and how steady-state pharmacokinetics can be disrupted in the presence of noncompliance. [Pg.178]

This ability is available in many software programs. NONMEM (Icon, Ellicott City, MD) has been widely used to estimate population models arising from both sparse and intensely sampled data. Other programs include WinNonMix (Pharsight Corp., Palo Alto, CA), Kinetica 2000 (InnaPhase Corp., Philadelphia, PA), and Pop-Kinetics (SAAM Institute, Seattle, WA). ADAPT II and WinNonlin have focused on PK/PD models and have been combined with Bayesian approaches to estimate population models. [Pg.467]

Estimation of organophosphorus (OP) pesticide exposure to children in an agricultural community
Examination of the quantitative relation between exposure to isocyanates and occupational asthma
Explanation of a new framework to obtain exposure estimates through the Bayesian approach
Combined direct (personal air samples) and indirect (activity pattern model) approaches used in human air pollution exposure assessment
Reconstruction of contaminant doses to the public from operations at the Rocky Flats nuclear weapons facility
Estimation of historical exposures to machining fluids in the automotive industry... [Pg.763]

Pezeshk H, Gittins J (2002) A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 36: 143–150. [Pg.212]

In high-dimensional data, analytic estimation is frequently infeasible or extremely inefficient; as such, Bayesian methods provide an extremely effective alternative to maximum-likelihood estimation. The Bayesian approach to modeling and inference entails first specifying a sampling probability model and a prior distribution for the unknown parameters. The prior distribution reflects our knowledge about the parameters before the data are observed. Next, the posterior distribution is calculated. This distribution captures what we learn about the unknown parameters once we have seen the data. Additionally, it is employed to predict future observations. [Pg.243]
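The prior-to-posterior-to-prediction workflow just described has a minimal closed-form instance: a Binomial sampling model with a Beta prior. The posterior is again a Beta distribution, and the posterior predictive probability of a future success follows directly. The prior and data values are illustrative.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    # conjugate update: Beta(alpha, beta) prior + Binomial data
    # -> Beta(alpha + successes, beta + failures) posterior
    return alpha + successes, beta + failures

a0, b0 = 2.0, 2.0          # prior belief, before the data are observed
a1, b1 = beta_binomial_update(a0, b0, successes=7, failures=3)
pred = a1 / (a1 + b1)      # posterior predictive P(next observation is a success)
print((a1, b1), round(pred, 3))  # (9.0, 5.0) 0.643
```

In high dimensions the posterior rarely has such a closed form, which is precisely where the Monte Carlo methods discussed in the surrounding excerpts take over.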

Figure 2.20 shows a typical result of the problem. The dashed line is the true relationship and the crosses are the measurements. Even though the samples are quite scattered in the low range of X, the Bayesian approach reflects the correct weighting for different measurements. On the other hand, the traditional least-squares method simply minimizes the 2-norm of the difference... [Pg.42]
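The weighting contrast can be made concrete in the simplest possible setting: estimating a constant from measurements with unequal noise levels. With a flat prior, the Bayesian (maximum a posteriori) estimate reduces to an inverse-variance weighted mean, whereas the plain 2-norm fit weights every measurement equally. The values below are illustrative, not the data of Figure 2.20.

```python
def ols_mean(ys):
    # ordinary least squares: minimizes the unweighted 2-norm, i.e. the plain mean
    return sum(ys) / len(ys)

def weighted_mean(ys, sigmas):
    # inverse-variance weighting: noisy measurements contribute less
    w = [1.0 / s ** 2 for s in sigmas]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

ys = [1.0, 1.1, 4.0]       # the last measurement is very noisy
sigmas = [0.1, 0.1, 2.0]
print(round(ols_mean(ys), 3))               # 2.033 -- dragged up by the noisy point
print(round(weighted_mean(ys, sigmas), 3))  # close to 1.05
```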

The present paper will put forward the point that a Bayesian framework may be viewed as rather natural for tackling issues (a) and (b) together. Indeed, beyond the forceful epistemological and decision-theoretic features of a Bayesian approach, it includes by definition a two-level probabilistic model separating epistemic and aleatory components, and it offers a traceable process for mixing the encoding of engineering expertise inside priors with the observations inside an updated epistemic layer, one that proves mathematically consistent even when dealing with very small samples. [Pg.1700]

Uncertainties owing to the limited size of the sample, or of the equivalent sufficient statistic (k, T), can be quantified following the Bayesian approach proposed by Lindley (1965) and Bedford & Cooke (2001). [Pg.1350]

We have shown that both the likelihood and Bayesian approaches arise from surfaces defined on the inference universe: the observation density surface and the joint probability density, respectively. The sampling surface is a probability density only in the observation dimensions, while the joint probability density is a probability density in the parameter dimensions as well (when proper priors are used). Cutting these two surfaces with a vertical hyperplane that passes through the observed value of the data yields the likelihood function and the posterior density used for likelihood inference and Bayesian inference, respectively. [Pg.16]

The development and implementation of computational methods for drawing random samples from the incompletely known posterior has revolutionized Bayesian statistics. Computational Bayesian statistics breaks free from the limited class of models where the posterior can be found analytically. Statisticians can use observation models and prior distributions that are more realistic, and calculate estimates of the parameters from Monte Carlo samples drawn from the posterior. Computational Bayesian methods can easily deal with complicated models that have many parameters. This makes the advantages of the Bayesian approach accessible to a much wider class of models. [Pg.20]

A more realistic model, when we have a random sample of observations from a normal(μ, σ²) distribution, is one in which both parameters are unknown. The parameter μ is usually the only parameter of interest, and σ² is a nuisance parameter. We want to do inference on μ while taking into account the additional uncertainty caused by the unknown value of σ². The Bayesian approach allows us to do this by marginalizing the nuisance parameter out of the joint posterior of the parameters given the data. The joint posterior is proportional to the joint prior times the joint likelihood. [Pg.80]
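Marginalizing out the nuisance parameter can be done by simulation: with the standard noninformative prior p(μ, σ²) ∝ 1/σ², draw σ² from its scaled inverse-χ² posterior and then μ | σ² from a normal; discarding the σ² draws leaves a sample from the marginal posterior of μ (a scaled, shifted t distribution). The data values below are illustrative.

```python
import random
import statistics

def marginal_mu_draws(data, n_draws=20000, seed=1):
    random.seed(seed)
    n = len(data)
    ybar = statistics.mean(data)
    s2 = statistics.variance(data)   # sample variance, divisor n - 1
    draws = []
    for _ in range(n_draws):
        # sigma^2 | data  ~  (n-1) s^2 / chi^2_{n-1}
        chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n - 1))
        sigma2 = (n - 1) * s2 / chi2
        # mu | sigma^2, data  ~  N(ybar, sigma^2 / n)
        draws.append(random.gauss(ybar, (sigma2 / n) ** 0.5))
    return draws

mus = marginal_mu_draws([4.8, 5.1, 5.0, 4.9, 5.2, 5.0])
print(round(statistics.mean(mus), 2))  # posterior mean of mu, ~= sample mean 5.0
```

Keeping only the μ column of the joint draws is exactly the integration over σ² described in the text, performed by Monte Carlo rather than analytically.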

In the computational Bayesian approach, we want to draw a sample from the actual posterior, not its approximation. As we noted before, we know its shape. Our approach will be to use the Metropolis-Hastings algorithm with an independent candidate density. We want a candidate density that is as close as possible to the posterior, so that many candidates will be accepted. We also want the candidate density to have heavier tails than the posterior, so that we move around the parameter space quickly. That will let us have a shorter burn-in and use less thinning. We use the maximum likelihood vector θML and the matched curvature covariance matrix V as the... [Pg.207]
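The independence-chain idea can be sketched in one dimension: every candidate is drawn from a single fixed density resembling the posterior but with heavier tails, and is accepted with probability min(1, p(cand) q(current) / (p(current) q(cand))). The standard-normal "posterior" and the wide normal candidate below are illustrative stand-ins for the matched-curvature candidate described in the text.

```python
import math
import random

def log_post(theta):
    # unnormalized log posterior: standard normal target (illustrative)
    return -0.5 * theta ** 2

def log_cand(theta):
    # fixed independent candidate: N(0, 2^2), heavier-tailed than the target
    return -0.5 * (theta / 2.0) ** 2

def independence_mh(n_steps=20000, seed=7):
    random.seed(seed)
    theta, draws = 0.0, []
    for _ in range(n_steps):
        cand = random.gauss(0.0, 2.0)
        # Metropolis-Hastings log acceptance ratio for an independence chain
        log_alpha = (log_post(cand) - log_post(theta)) \
                    + (log_cand(theta) - log_cand(cand))
        if random.random() < math.exp(min(0.0, log_alpha)):
            theta = cand
        draws.append(theta)
    return draws

draws = independence_mh()
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))  # close to the target's 0 and 1
```

Because the candidate's tails dominate the target's, the chain is uniformly ergodic, which is the formal version of the burn-in and thinning advantages claimed above.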

In the computational Bayesian approach, we want to draw a sample from the actual posterior, not its approximation. The proportional likelihood is given in Equation 9.7, and multiplying by the prior will give the true proportional posterior. Our approach... [Pg.218]

