Bayesian framework

If we consider the relative merits of the two forms of the optimal reconstructor, Eqs. 16 and 17, we note that both require a matrix inversion. Computationally, the size of the matrix to be inverted is important: Eq. 16 inverts an M x M (measurements) matrix and Eq. 17 a P x P (parameters) matrix. In a traditional least-squares system fewer parameters are estimated than there are measurements, i.e., M > P, indicating Eq. 16 should be used. In a Bayesian framework we are trying to reconstruct more modes than we have measurements, i.e., P > M, so Eq. 17 is more convenient. [Pg.380]
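
The excerpt's Eqs. 16 and 17 are not reproduced here, but the trade-off can be illustrated with a generic pair of algebraically equivalent forms of a linear minimum-variance (MAP) reconstructor; the matrices H, Ca, and Cn below are illustrative stand-ins for a measurement matrix, prior covariance, and noise covariance, not the chapter's notation.

```python
import numpy as np

# Illustrative stand-ins (not the chapter's Eqs. 16/17 verbatim):
# H  : M x P measurement matrix
# Cn : M x M measurement-noise covariance
# Ca : P x P prior covariance of the parameters (modes)
rng = np.random.default_rng(0)
M, P = 6, 10                      # Bayesian case: more modes than measurements (P > M)
H = rng.standard_normal((M, P))
Cn = 0.1 * np.eye(M)
Ca = np.eye(P)

# Form that inverts an M x M (measurement-space) matrix
R_meas = Ca @ H.T @ np.linalg.inv(H @ Ca @ H.T + Cn)

# Equivalent form that inverts a P x P (parameter-space) matrix
R_par = np.linalg.inv(H.T @ np.linalg.inv(Cn) @ H + np.linalg.inv(Ca)) @ H.T @ np.linalg.inv(Cn)

print(np.allclose(R_meas, R_par))   # True: the two forms agree
```

The matrix inversion lemma guarantees the two forms coincide, so whichever inversion is smaller (M x M or P x P) can be chosen depending on whether M > P or P > M.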

Baldi P, Long AD. 2001. A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inference of gene changes. Bioinformatics 17:509. [Pg.405]

Li Z, Chan C. Inferring pathways and networks with a Bayesian framework. FASEB J 2004;18:746-748. Li Z, Chan C. Integrating gene expression and metabolic profiles. J Biol Chem 2004;279:27124-27137. [Pg.71]

Click detection within a Bayesian framework has introduced the concept of an explicit model for the corrupting noise process through the noise density p_v. Effective... [Pg.377]

Ukidwe, N. W. and Bakshi, B. R., A multiscale Bayesian framework for designing efficient and sustainable industrial systems, AIChE Sustainability Engineering Conference Proceedings, Austin, TX, November, pp. 179-187, 2004. [Pg.267]

Mixed effects models under a Bayesian framework have been widely studied and applied using Markov chain Monte Carlo methods (10). These methods have gained particular popularity as complex problems became easy to formulate using the WinBUGS software (11). See Congdon (12) for extensive coverage of topics, examples, and implementation in WinBUGS. [Pg.104]

Similar to the non-Bayesian framework for analysis of repeated measures data, the Bayesian setting shares the same format for describing stages 1 and 2 of the hierarchical model, but adds a third stage devoted to the specification of the priors (see Ref. 5 for an in-depth discussion of the hierarchical framework for analysis, and for a comparison between MCMC and maximum likelihood methods... [Pg.138]
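
As a concrete, purely illustrative sketch of the three stages, the following simulates data from such a hierarchy: stage 1 is the within-subject residual model, stage 2 the between-subject distribution of the individual parameters, and stage 3 the priors on the population quantities. The exponential-decay mean function and all numbers are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 3: priors on the population-level quantities (illustrative choices)
mu    = rng.normal(1.0, 0.5)        # population mean of the individual parameter
omega = rng.uniform(0.1, 0.5)       # between-subject standard deviation
sigma = rng.uniform(0.05, 0.2)      # residual (within-subject) standard deviation

# Stage 2: individual parameters drawn around the population mean
n_subjects = 5
theta = rng.normal(mu, omega, size=n_subjects)

# Stage 1: repeated measurements per subject around an assumed mean function
times = np.arange(1, 6)
y = np.array([rng.normal(th * np.exp(-0.1 * times), sigma) for th in theta])
print(y.shape)   # (5, 5): 5 subjects x 5 repeated measures
```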

Sensitivity analysis is about asking how sensitive your model is to perturbations of assumptions in the underlying variables and structure. Models developed under any platform should be subject to some form of sensitivity analysis. Those constructed under a Bayesian framework may be subject to further sensitivity analysis associated with assumptions made in the specification of the prior information. In general, therefore, a sensitivity analysis will involve some form of perturbation of the priors. There are generally two scenarios where this may be important. First, the choice of a noninformative prior could lead to an improper posterior distribution that may be more informative than desired (see Gelman (18) for some discussion of this). Second, the use of informative priors for PK/PD analysis raises the issue of introducing bias into the posterior parameter estimates for a specified subject group; that is, the prior information may not have been exchangeable with the current data. [Pg.152]
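
A minimal sketch of such a prior perturbation, using a conjugate normal model for a single mean with known residual variance (all values invented for illustration): the posterior is recomputed under a near-noninformative prior and under a strongly informative prior centred away from the data.

```python
import numpy as np

y = np.array([1.8, 2.1, 2.4, 1.9, 2.2])   # invented observations
sigma2 = 0.1                               # residual variance, assumed known
n, ybar = len(y), y.mean()

def posterior(mu0, tau2):
    """Posterior mean and variance of the mean under a N(mu0, tau2) prior."""
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (mu0 / tau2 + n * ybar / sigma2)
    return post_mean, post_var

# Near-noninformative prior: posterior essentially driven by the data
print(posterior(0.0, 1e6))
# Informative prior centred away from the data (possibly non-exchangeable):
# the posterior is pulled toward the prior, i.e. biased relative to the data alone
print(posterior(5.0, 0.01))
```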

In contrast to the hypothesis-testing style of model selection/discrimination, the posterior predictive check (PPC) assesses the predictive performance of the model. This approach allows the user to reformulate the model selection decision to be based on how well the model performs. The approach has been described in detail by Gelman et al. (27) and is only briefly discussed here. PPC has been assessed for PK analysis in a non-Bayesian framework by Yano et al. (40), who also provide a detailed assessment of the choice of test statistics. The more commonly used test statistic is a local feature of the data that has some importance for model predictions; for example, the maximum or minimum concentration might be important for side effects or therapeutic success (see Duffull et al. (6)) and hence constitutes a feature of the data that the model would do well to describe accurately. The PPC can be defined along the lines that posterior refers to conditioning of the distribution of the parameters on the observed values of the data, predictive refers to the distribution of future unobserved quantities, and check refers to how well the predictions reflect the observations (41). This method is used to answer the question: Does the observed data look plausible under the posterior distribution? It is therefore solely a check of the internal consistency of the model in question. [Pg.156]
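
A rough sketch of a PPC along these lines, using the maximum concentration as the test statistic; the one-compartment oral-absorption model, the "posterior draws", and all numbers are hypothetical stand-ins, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.array([0.5, 1, 2, 4, 8, 12, 24.0])
y_obs = np.array([0.9, 1.4, 1.7, 1.5, 1.1, 0.7, 0.2])   # hypothetical concentrations
T_obs = y_obs.max()                                       # observed test statistic (Cmax)

def model(cl, v, ka, dose=100.0):
    """Hypothetical one-compartment model with first-order absorption."""
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * times) - np.exp(-ka * times))

# Stand-in for draws from the posterior of (CL, V, ka)
draws = np.column_stack([rng.normal(4.0, 0.4, 500),
                         rng.normal(50.0, 5.0, 500),
                         rng.normal(1.5, 0.2, 500)])

T_rep = []
for cl, v, ka in draws:
    y_rep = rng.normal(model(cl, v, ka), 0.1)   # replicated data set
    T_rep.append(y_rep.max())                   # test statistic on the replicate

# Posterior predictive "p-value": fraction of replicates at least as extreme as observed
print(np.mean(np.array(T_rep) >= T_obs))
```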

Once a PBPK model is developed and implemented, it should be tested for mass balance consistency, as well as through simulated test cases that can highlight potential errors. These test cases often include software boundary conditions, such as zero dose and high initial tissue concentrations. Some parameters in the PBPK model may have to be estimated from available in vivo data via standard techniques such as nonlinear regression or maximum likelihood estimation (30). Furthermore, in vivo data can be used to update existing (or prior) PBPK model parameter estimates in a Bayesian framework, and thus help in the refinement of the PBPK model. The Markov chain Monte Carlo (MCMC) method (31-34) is one of the... [Pg.1077]
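
A minimal sketch of this kind of Bayesian updating with a Metropolis sampler, using a single illustrative parameter (an elimination rate constant) and a mono-exponential concentration model in place of a full PBPK model; the data, prior, and residual error are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.array([1.0, 2, 4, 8, 12])
c_obs = np.array([8.1, 6.6, 4.4, 2.0, 0.9])   # hypothetical in vivo concentrations
c0, sigma = 10.0, 0.3                          # assumed initial concentration and residual SD

def log_post(ke):
    """Log posterior = lognormal prior on ke + normal likelihood of the data (up to constants)."""
    if ke <= 0:
        return -np.inf
    log_prior = -np.log(ke) - 0.5 * ((np.log(ke) - np.log(0.15)) / 0.5) ** 2
    resid = c_obs - c0 * np.exp(-ke * t)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    return log_prior + log_lik

ke, chain = 0.15, []
for _ in range(5000):
    prop = ke + rng.normal(0.0, 0.02)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(ke):
        ke = prop                               # accept
    chain.append(ke)

post = np.array(chain[1000:])                   # discard burn-in
print(post.mean(), post.std())                  # updated (posterior) estimate of ke
```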

T Van Gestel, J Suykens, G Lanckriet, A Lambrechts, B De Moor, and J Vandewalle. Bayesian framework for least squares support vector machine classifiers, Gaussian processes, and kernel Fisher discriminant analysis. Neural Computation, 15:1115-1148, 2002. [Pg.300]

Comparison of the ratings of experienced raters with previously recorded industrial hygiene measurements for occupations in Australia; estimation of the levels of exposure misclassification by expert assessment in a study of lung cancer in central and eastern Europe and Liverpool; application of a Bayesian framework for retrospective exposure assessment of workers in a nickel smelter; determination of the level of information required by industrial hygienists to develop reliable exposure estimates; explanation of a new framework to obtain exposure estimates through the Bayesian approach; validation of a new method for structured subjective assessment of past concentration... [Pg.757]

Application of Bayesian framework for retrospective exposure assessment of workers in a nickel smelter... [Pg.762]

Hence, the marginal model implies that Y ~ N(Xβ, ZGZ' + R). Inference is based on this marginal model unless the data are analyzed within a Bayesian framework. The major difference between the conditional model and the marginal model is that the conditional model is conditioned on the random effects, whereas the marginal model does not depend on the random effects. So, in the absence of further information on a subject (the random effects), the expected value for that subject is the population mean, and the variance for that subject is the total variance, not just the within-subject variance. [Pg.185]
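
A small numerical sketch of the two formulations for one subject (dimensions and covariances chosen arbitrarily): conditional on the random effects the covariance is R, while marginally it is ZGZ' + R.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs = 4
X = np.column_stack([np.ones(n_obs), np.arange(n_obs)])   # fixed-effects design
Z = X.copy()                                               # random intercept and slope
beta = np.array([10.0, -1.0])
G = np.diag([2.0, 0.5])       # covariance of the random effects b ~ N(0, G)
R = 0.25 * np.eye(n_obs)      # within-subject (residual) covariance

# Conditional model: given b, E[y | b] = X beta + Z b and Var[y | b] = R
b = rng.multivariate_normal(np.zeros(2), G)
cond_mean, cond_cov = X @ beta + Z @ b, R

# Marginal model: E[y] = X beta and Var[y] = Z G Z' + R (random effects integrated out)
marg_mean, marg_cov = X @ beta, Z @ G @ Z.T + R
print(marg_cov)   # the total variance, not just the within-subject variance
```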

The other dominant approach to estimation is known as Bayesian estimation. We do not attempt to give a detailed exposition of the topic here; much greater detail can be found in Gelman et al. (2004) and Carlin and Louis (2000). In this approach, the parameters are treated as random variables themselves. This is in contrast to the maximum-likelihood estimation procedure, in which the parameters are treated as unknown constants that take only one possible value. In the Bayesian framework, the goal is to make inference about the parameters. This is done in the following way. The parameters themselves are given a probabilistic model that we call a prior distribution. We then construct a posterior probability model for the parameters by the use of Bayes's rule ... [Pg.190]
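
As a toy illustration of Bayes's rule, the posterior for a single parameter can be computed on a grid as likelihood times prior, normalized by the marginal probability of the data; the normal likelihood, normal prior, and data below are assumptions made purely for illustration.

```python
import numpy as np

y = np.array([2.1, 1.8, 2.5, 2.0])    # invented data
sigma = 0.4                            # known residual SD
theta = np.linspace(0.0, 4.0, 401)     # grid over the parameter
d = theta[1] - theta[0]

prior = np.exp(-0.5 * ((theta - 1.5) / 1.0) ** 2)             # N(1.5, 1) prior (unnormalised)
loglik = np.array([-0.5 * np.sum(((y - th) / sigma) ** 2) for th in theta])

posterior = prior * np.exp(loglik - loglik.max())              # Bayes's rule: prior x likelihood
posterior /= posterior.sum() * d                                # divide by p(y) (normalise)

print((theta * posterior).sum() * d)   # posterior mean, between the prior mean and the data mean
```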

In terms of estimation procedures, the two dominant paradigms currently are the maximum-likelihood method and the Bayesian framework. [Pg.191]

In this section, the general Bayesian framework is presented. It was originally introduced for structural model updating using input-output measurements by Beck and Katafygiotis [19]. Consider a linear or nonlinear dynamical system with input-output relationship ... [Pg.33]

Valiant, L. G. 1984. A theory of the learnable. Communications of the ACM 27:1134-1142.
Wolpert, D. H., ed. 1995. The relationship between PAC, the statistical physics framework, the Bayesian framework, and the VC framework. In The Mathematics of Generalization: The Proceedings of the SFI/CNLS Workshop on Formal Approaches to Supervised Learning. Reading, Mass.: Addison-Wesley. [Pg.40]

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as straightforward applications of optimization theory and statistical estimation. Recent developments in this field use particle swarm optimization and other swarm intelligence techniques. [Pg.917]
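
As a toy illustration of "training as minimizing a cost criterion", the sketch below fits a single linear unit by gradient descent on a squared-error cost; it is not meant to represent any particular network architecture or library.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

w = np.zeros(3)                                  # the "selected model" is the final weight vector
lr = 0.05
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)        # gradient of the mean squared-error cost
    w -= lr * grad                               # gradient-descent update

print(w)   # close to the generating weights (1.0, -2.0, 0.5)
```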

Data-informed calibration of experts in Bayesian framework... [Pg.76]

