
Transitional Markov chain Monte Carlo

Computational Issues Transitional Markov Chain Monte Carlo Method... [Pg.228]

Ching, J. and Chen, Y. C. Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection and model averaging. Journal of Engineering Mechanics (ASCE) 133(7) (2007), 816-832. [Pg.281]

For general unidentifiable cases, the evidence integral in Eq. 40 can be computed using the transitional Markov chain Monte Carlo (TMCMC) method (Ching and Chen 2007). [Pg.30]
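The tempering-and-resampling mechanics behind TMCMC can be sketched on a toy one-dimensional problem. This is only a minimal illustration, not the full algorithm of Ching and Chen (2007): the weight coefficient-of-variation target of 1, the crude bisection, the single Metropolis step per stage, and all function names here are simplifying assumptions.

```python
import numpy as np

def tmcmc(log_prior, log_like, prior_draw, n=2000, seed=0):
    """Toy TMCMC sketch: temper p_j ~ prior * L^beta_j from beta=0 to 1,
    resampling by weights and perturbing with one Metropolis step per stage.
    The log-evidence accumulates from the stage-wise mean weights."""
    rng = np.random.default_rng(seed)
    th = prior_draw(rng, n)
    lL = log_like(th)
    beta, logZ = 0.0, 0.0
    while beta < 1.0:
        # bisect for the next exponent so the weight COV stays near 1
        lo, hi = beta, 1.0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            w = np.exp((mid - beta) * (lL - lL.max()))
            lo, hi = (lo, mid) if np.std(w) / np.mean(w) > 1.0 else (mid, hi)
        nb = min(1.0, max(hi, beta + 1e-3))      # guard against tiny steps
        w = np.exp((nb - beta) * (lL - lL.max()))
        logZ += np.log(w.mean()) + (nb - beta) * lL.max()
        idx = rng.choice(n, n, p=w / w.sum())    # importance resampling
        th, lL = th[idx], lL[idx]
        prop = th + rng.normal(0.0, th.std() + 1e-9, n)  # random-walk move
        lLp = log_like(prop)
        ok = np.log(rng.uniform(size=n)) < (nb * (lLp - lL)
                                            + log_prior(prop) - log_prior(th))
        th, lL, beta = np.where(ok, prop, th), np.where(ok, lLp, lL), nb
    return th, logZ

# conjugate 1-D check: prior N(0, 2^2), likelihood N(y=1 | theta, 1);
# the exact posterior mean is 0.8 and the exact log-evidence is log N(1; 0, 5)
lp = lambda t: -0.5 * t**2 / 4.0 - 0.5 * np.log(2 * np.pi * 4.0)
ll = lambda t: -0.5 * (1.0 - t)**2 - 0.5 * np.log(2 * np.pi)
draw = lambda rng, n: rng.normal(0.0, 2.0, n)
samples, logZ = tmcmc(lp, ll, draw)
```

Because the evidence is produced as a by-product of the tempering stages, no separate integration over the parameter space is needed, which is what makes the method attractive for the evidence integral in unidentifiable cases.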

Beck JL, Katafygiotis LS (1998) Updating models and their uncertainties. I: Bayesian statistical framework. J Eng Mech (ASCE) 124(4):455-461. Beck JL, Yuen KV (2004) Model selection using response measurements: Bayesian probabilistic approach. J Eng Mech (ASCE) 130(2):192-203. Ching J, Chen YC (2007) Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection and model averaging. J Eng Mech (ASCE) 133(7):816-832. Durovic ZM, Kovacevic BD (1999) Robust estimation with unknown noise statistics. IEEE Trans Automat Control 44(6):1292-1296... [Pg.32]

An HMM is essentially a Markov chain (→ Monte Carlo methods). Each state inside the Markov chain can produce a letter: a stochastic process chooses one of a finite number of letters in an alphabet, and each letter has a characteristic probability of being chosen that depends on both the state and the letter. After the state produces a letter, the Markov chain moves to another state, according to a transition probability that depends on the current and succeeding states. [Pg.426]
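The emit-then-move cycle described above can be sketched as a toy generator. The two-state model, the DNA-like alphabet, and all probabilities here are invented purely for illustration:

```python
import numpy as np

# Minimal HMM generator: each state emits one letter from its own
# categorical distribution, then the chain moves by its transition row.
states = ["match", "insert"]
alphabet = list("ACGT")
emit = {"match": [0.4, 0.1, 0.1, 0.4],    # per-letter emission probabilities
        "insert": [0.25] * 4}
trans = {"match": [0.9, 0.1],             # per-state transition probabilities
         "insert": [0.5, 0.5]}

def generate(length, seed=0):
    rng = np.random.default_rng(seed)
    s, out = "match", []
    for _ in range(length):
        out.append(rng.choice(alphabet, p=emit[s]))       # letter depends on state
        s = states[rng.choice(len(states), p=trans[s])]   # move to next state
    return "".join(out)

seq = generate(120)
```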

General class of algorithmic methods that involve a stochastic element, i.e., that let the computer make random decisions (Binder & Landau, 2000). An important subclass, the so-called Markov chain Monte Carlo (MCMC) methods, can be understood as acting on Markov chains. A Markov chain is a stochastic finite automaton that consists of states and transitions between states (Feller, 1968). At any given time the process resides in a certain state of the Markov chain; at discrete time steps it leaves this state and moves to another state of the chain. Each transition is taken with a probability characteristic of the given Markov chain, and this probability depends only on the two states involved in the transition. Often only a small set of states can be reached from a given state; this set is called the neighborhood of the state from which the move originates. [Pg.428]
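A Markov chain with such neighborhood structure can be sketched directly as a table of reachable states and transition probabilities. The three-state chain below is a made-up example:

```python
import random

# A small Markov chain as a stochastic finite automaton: from each state
# only a few "neighbor" states are reachable, each with a fixed probability.
chain = {
    "A": [("B", 0.7), ("C", 0.3)],
    "B": [("A", 0.5), ("C", 0.5)],
    "C": [("A", 1.0)],          # C's neighborhood contains only A
}

def step(state, rng=random):
    targets, probs = zip(*chain[state])
    return rng.choices(targets, probs)[0]   # memoryless transition

def walk(start, n, seed=0):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

path = walk("A", 200)
```

Because the next state depends only on the current one, the whole trajectory is generated by repeatedly consulting a single row of the table.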

One example of the NMR reconstruction problem employs the reversible-jump Markov chain Monte Carlo method [16]. It assumes that the model spectrum S(F1, F2) is made up of a limited number m of two-dimensional Gaussian resonance lines. Then m, the linewidths, intensities, and frequency co-ordinates are varied until the Markov chain reaches convergence. The allowed transitions between the current map M and the new map M′ comprise movement, merging or splitting of resonance lines, and birth or death of component responses. Compatibility with the experimental traces is checked by projecting M at the appropriate angles. The procedure has been found to be stable and reproducible [16]. [Pg.16]
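The move/birth/death mechanics of a reversible-jump sampler can be sketched on a heavily simplified analogue: a 1-D "spectrum" of unit-intensity Gaussian lines of fixed width, with a flat penalty standing in for the prior on m. This is not the algorithm of ref. [16] (no merge/split moves, no 2-D lines, crude handling of the m = 1 boundary); all quantities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L, sigma, noise = 10.0, 0.3, 0.1
x = np.linspace(0.0, L, 200)
true_centres = [3.0, 7.0]
y = sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in true_centres)
y = y + rng.normal(0.0, noise, x.size)

def log_post(centres):
    model = sum((np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centres),
                np.zeros_like(x))
    return -0.5 * np.sum((y - model) ** 2) / noise**2 - 3.0 * len(centres)

centres = [rng.uniform(0.0, L)]
lp = log_post(centres)
for _ in range(8000):
    u = rng.uniform()
    if u < 0.5:                                    # move one centre
        new = list(centres)
        i = rng.integers(len(new))
        new[i] += rng.normal(0.0, 0.2)
        ratio = 0.0                                # symmetric proposal
    elif u < 0.75:                                 # birth: add a centre
        new = centres + [rng.uniform(0.0, L)]
        ratio = np.log(L / len(new))               # proposal-density correction
    elif len(centres) > 1:                         # death: remove a centre
        new = list(centres)
        new.pop(rng.integers(len(new)))
        ratio = np.log(len(centres) / L)
    else:
        continue
    lp_new = log_post(new)
    if np.log(rng.uniform()) < lp_new - lp + ratio:
        centres, lp = new, lp_new
```

Because the birth move draws the new centre directly from a uniform density, the dimension-matching Jacobian is 1 and only the proposal-density ratio appears in the acceptance test.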

If we were to consider age-dependent transition rates (i.e. time-inhomogeneous Markov chains), the simple transition-rate updating procedure via the gamma posterior distribution would no longer work, as the posterior distribution would be much more complicated and no analytical expression would be known. Thus, in order to obtain posterior estimates one would have to employ numerical methods, such as Markov chain Monte Carlo (Gilks, 1996). [Pg.1130]
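The "easy" constant-rate case referred to above is the standard conjugate update: with a Gamma(a, b) prior (shape a, rate b) on a transition rate and exponentially distributed sojourn times, observing n transitions over a total exposure time T gives a Gamma(a + n, b + T) posterior in closed form. A one-line sketch (the numbers are illustrative):

```python
# Conjugate gamma update for a constant (age-independent) transition rate.
# This closed form is exactly what is lost once the rate depends on age.
def update_rate(a, b, n_transitions, total_time):
    return a + n_transitions, b + total_time

a, b = update_rate(2.0, 1.0, n_transitions=14, total_time=10.0)
posterior_mean = a / b     # posterior mean rate, transitions per unit time
```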

P. Peskun, The Choice of Transition Matrix in Monte Carlo Sampling Methods Using Markov Chains, thesis. University of Toronto (1970). [Pg.166]

Present-day Monte Carlo calculations are based on the technique proposed by Metropolis [22] in 1953, which involves selecting the successive configurations in such a way that they build up a Markov chain [23]. The one-step transition probabilities p_ij are defined as the probability that, starting from configuration i with coordinates q_i^(N), configuration j with coordinates q_j^(N) is reached in one step. These probabilities are the elements of the one-step probability matrix associated with the Markov chain, and they must fulfill the following conditions ... [Pg.128]
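Whatever the specific conditions in the source, a one-step matrix {p_ij} must at minimum be row-stochastic (non-negative entries, rows summing to one), and for Metropolis sampling the chain must be ergodic. A quick numerical check on a made-up three-state matrix, with ergodicity tested crudely by requiring a power of P to be strictly positive:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

assert np.all(P >= 0.0)                              # non-negative entries
assert np.allclose(P.sum(axis=1), 1.0)               # rows sum to one
assert np.all(np.linalg.matrix_power(P, 4) > 0.0)    # every state reachable
```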

The difficulty arises from the fact that the one-step transition probabilities of the Markov chain involve only ratios of probability densities, in which Z(N,V,T) cancels out. In this way the Metropolis Markov-chain procedure intentionally avoids the calculation of the configurational integral, so the Monte Carlo method is unable to apply equation (31) directly. [Pg.140]
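The cancellation is easy to see in code: the acceptance test uses only the Boltzmann ratio p_j / p_i = exp(-β ΔE), so the normalizing integral Z never has to be evaluated. A minimal sketch on a single harmonic degree of freedom, E(x) = x²/2 at β = 1 (where the exact average is ⟨x²⟩ = 1):

```python
import math, random

def metropolis_accept(E_old, E_new, beta, rng=random):
    # only the energy *difference* enters; Z(N,V,T) has cancelled out
    dE = E_new - E_old
    return dE <= 0.0 or rng.random() < math.exp(-beta * dE)

rng = random.Random(0)
x, xs = 0.0, []
for _ in range(20000):
    x_new = x + rng.uniform(-1.0, 1.0)
    if metropolis_accept(x * x / 2, x_new * x_new / 2, 1.0, rng):
        x = x_new
    xs.append(x)
mean_x2 = sum(v * v for v in xs) / len(xs)   # should approach <x^2> = 1
```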

The Monte Carlo method is easily carried out in any convenient ensemble, since it simply requires the construction of a suitable Markov chain for the importance sampling. The simulations in the original paper by Metropolis et al. [1] were carried out in the canonical ensemble, corresponding to a fixed number of molecules, volume and temperature (N, V, T). By contrast, molecular dynamics is naturally carried out in the microcanonical ensemble, fixed (N, V, E), since the energy is conserved by Newton's equations of motion. This implies that the temperature of an MD simulation is not known a priori but is obtained as an output of the calculation. This feature makes it difficult to locate phase transitions and, perhaps, gave the first motivation to generalize MD to other ensembles. [Pg.428]
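The "temperature as an output" point follows from equipartition, (3/2) N k_B T = ⟨KE⟩: one computes the instantaneous temperature from the velocities rather than prescribing it. A short sketch (the Maxwell-Boltzmann draw at 300 K is an illustrative stand-in for velocities produced by an actual MD run):

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K

def instantaneous_temperature(masses, velocities):
    """masses: shape (N,); velocities: shape (N, 3), SI units."""
    ke = 0.5 * np.sum(masses[:, None] * velocities**2)
    return 2.0 * ke / (3.0 * masses.size * kB)

# illustrative check: argon-like atoms with Maxwell-Boltzmann velocities
rng = np.random.default_rng(0)
m_ar = 6.63e-26                                   # approx. argon mass, kg
sigma = np.sqrt(kB * 300.0 / m_ar)                # per-component velocity spread
v = rng.normal(0.0, sigma, (10000, 3))
T_est = instantaneous_temperature(np.full(10000, m_ar), v)
```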

Such stochastic modelling was advanced by Klein and Virk ( ) as a probabilistic, model compound-based prediction of lignin pyrolysis; lignin structure was not considered explicitly. Their approach was extended by Petrocelli (4) to include Kraft lignins and catalysis. Squire and coworkers ( ) introduced the Monte Carlo computational technique as a means of following and predicting coal pyrolysis routes. Recently, McDermott ( ) used model compound reaction pathways and kinetics to determine Markov chain states and transition probabilities, respectively, in a rigorous, kinetics-oriented Monte Carlo simulation of the reactions of a linear polymer. Herein we extend the Monte Carlo... [Pg.241]

Under certain circumstances it can be shown that, in the limit of a large number of iterations of the process, the random variable X_n tends to X (in distribution). Thus the Markov chain Monte Carlo method can be used to generate sets of independent realizations of the random variable, which can be used to calculate an expectation. In practice, one prefers to use a long sequence of iterates from a single starting value. The efficacy (specifically the ergodicity and rate of convergence) of the Markov chain Monte Carlo method depends on the choice of the transition density. [Pg.414]
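The single-long-chain practice can be sketched as follows: start one chain from an arbitrary value, discard a burn-in transient, and average the remaining iterates to estimate an expectation. The Laplace target, the burn-in length and the step size below are all arbitrary illustrative choices:

```python
import math, random

def mh_chain(log_target, x0, n, step, seed=0):
    """Random-walk Metropolis-Hastings chain from a single starting value."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        if math.log(rng.random() + 1e-300) < log_target(y) - log_target(x):
            x = y
        out.append(x)
    return out

# target ~ exp(-|x|): a Laplace(0, 1) density, for which E[x^2] = 2
chain = mh_chain(lambda x: -abs(x), x0=5.0, n=50000, step=2.0)
burned = chain[5000:]                                 # drop the transient
mean_x2 = sum(v * v for v in burned) / len(burned)    # estimate of E[x^2]
```

The deliberately distant start x0 = 5 shows why the burn-in matters: the early iterates are still drifting toward the bulk of the target and would bias the average.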

The state-transition model can be analyzed using a number of approaches: as Markov chains, using semi-Markov processes, or using Monte Carlo simulation (Fishman 1996). The applicability of each method depends on the assumptions that can be made regarding fault occurrence and repair times. In the case of the Markov approach, it is necessary to assume that both faults and renewals occur with constant intensities (i.e. exponential distributions); the large number of states also makes the Markov and semi-Markov methods more difficult to use. The reliability model presented in the previous section includes random variables with exponential, truncated normal and discrete distributions, as well as some periodic relations (staff working time), so it is hard to solve by analytical methods. [Pg.2081]
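The appeal of Monte Carlo simulation for such mixed-distribution models can be sketched on a single repairable component with exponential time-to-failure but truncated-normal repair times, a combination with no convenient constant-intensity Markov representation. All parameter values are invented for illustration:

```python
import random

def simulate_availability(mtbf, repair_mu, repair_sd, horizon, runs, seed=0):
    """Fraction of time the component is up, averaged over many histories."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(runs):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = rng.expovariate(1.0 / mtbf)       # exponential time to failure
            up += min(ttf, horizon - t)
            t += ttf
            if t >= horizon:
                break
            r = -1.0
            while r < 0.0:                          # truncate repair time at zero
                r = rng.gauss(repair_mu, repair_sd)
            t += r
        up_total += up
    return up_total / (runs * horizon)

A = simulate_availability(mtbf=100.0, repair_mu=5.0, repair_sd=1.0,
                          horizon=10000.0, runs=200)
# steady-state availability should approach MTBF / (MTBF + MTTR) = 100 / 105
```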

In the Metropolis Monte Carlo scheme [20], the move from state i to state j is made in two stages. The first stage produces a trial move from state i to state j and is represented by a probability α(i → j), called the underlying matrix of the Markov chain. For example, this trial move can consist of the translation of one particle chosen from N particles; the probability is then α(i → j) = 1/N. The second stage is represented by acc(i → j), which determines whether the trial move is accepted or rejected. The transition matrix π(i → j) is written as a function of these two stages ... [Pg.360]
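The two stages separate cleanly in code: the underlying matrix α picks the trial (translate one of N particles, so α(i → j) = 1/N), and acc(i → j) then accepts or rejects it, giving π(i → j) = α(i → j) · acc(i → j) for i ≠ j. A sketch with a Boltzmann acceptance rule and an illustrative harmonic-trap energy (the step size and particle count are arbitrary):

```python
import math, random

def trial_move(coords, delta, rng):
    """Underlying matrix alpha: pick one of N particles uniformly, translate it."""
    k = rng.randrange(len(coords))                  # each particle with prob 1/N
    moved = list(coords)
    moved[k] = tuple(c + rng.uniform(-delta, delta) for c in moved[k])
    return moved

def metropolis_step(coords, energy, beta, delta, rng):
    new = trial_move(coords, delta, rng)
    dE = energy(new) - energy(coords)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):   # acc(i -> j)
        return new
    return coords                                        # rejected: stay at i

# illustrative run: 3 particles in a 2-D harmonic trap at beta = 1,
# where each coordinate is N(0, 1) and so <x^2 + y^2> = 2 per particle
rng = random.Random(0)
energy = lambda cs: 0.5 * sum(x * x + y * y for x, y in cs)
coords, acc = [(0.0, 0.0)] * 3, []
for _ in range(30000):
    coords = metropolis_step(coords, energy, beta=1.0, delta=1.5, rng=rng)
    acc.append(coords)
```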

With this simple acceptance criterion, the Metropolis Monte Carlo method generates a Markov chain of states or conformations that asymptotically samples the target probability density function. It is a Markov chain because the acceptance of each new state depends only on the previous state. Importantly, with transition probabilities defined by Eqs. 15.23 and 15.24, the transition matrix has the limiting, equilibrium distribution as the eigenvector corresponding to its largest eigenvalue, which equals 1. [Pg.265]
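The eigenvector property is easy to verify numerically on a small discrete system: build the Metropolis transition matrix for three states with made-up energies and check that the Boltzmann distribution is its left eigenvector with eigenvalue 1 (this toy construction is an illustration, not the matrix of Eqs. 15.23-15.24):

```python
import numpy as np

# Metropolis chain on 3 states: uniform trial moves, Boltzmann acceptance
E = np.array([0.0, 1.0, 2.0])
beta, n = 1.0, 3
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            P[i, j] = (1.0 / (n - 1)) * min(1.0, np.exp(-beta * (E[j] - E[i])))
    P[i, i] = 1.0 - P[i].sum()          # rejected moves stay put

boltz = np.exp(-beta * E)
boltz /= boltz.sum()
# stationarity: boltz @ P == boltz, i.e. boltz is a left eigenvector of P
# with eigenvalue 1, the largest eigenvalue of a stochastic matrix
```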



