Big Chemical Encyclopedia


Markov chains

An important feature of random-walk theory is that, in a sequence of trials, each trial is independent of the others. Markov advanced the theory by relaxing this condition: the outcome of any trial may depend on the outcome of the preceding trial, that is, the probability of an event is conditioned by the previous event. This idea clearly fits the description of the configuration of a polymer chain. [Pg.101]

According to the fundamental limit theorem (an extension of the Bernoulli theorem due to Laplace), if an event A occurs m times in a series of n independent trials with constant probability p, then as n → ∞ the distribution function tends to... [Pg.102]


The Boltzmann weight appears implicitly in the way the states are chosen. The form of the above equation is like a time average as calculated in MD. The MC method involves designing a stochastic algorithm for stepping from one state of the system to the next, generating a trajectory. This trajectory takes the form of a Markov chain, specified by transition probabilities that are independent of the prior history of the system. [Pg.2256]

P. Deuflhard, W. Huisinga, A. Fischer, Ch. Schütte. Identification of Almost Invariant Aggregates in Nearly Uncoupled Markov Chains. Preprint SC 98-03, Konrad-Zuse-Zentrum, Berlin (1998). [Pg.115]

The equilibrium distribution of the system can be determined by considering the result of applying the transition matrix an infinite number of times. This limiting distribution of the Markov chain is given by ρ_limit = lim_{N→∞} ρ^(1) π^N, where ρ^(1) is an initial probability distribution and π is the transition matrix. [Pg.431]
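
As a concrete illustration of this limiting behaviour, the short sketch below applies a transition matrix repeatedly to an arbitrary starting distribution; the three-state matrix is invented purely for the example and is not taken from the cited text.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1), chosen only for illustration.
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05],
              [0.25, 0.25,  0.50]])

rho = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution

# Repeated application of the transition matrix; the row vector converges
# to the limiting (equilibrium) distribution of the chain.
for _ in range(200):
    rho = rho @ P

print("limiting distribution:", rho)
print("check invariance:     ", rho @ P)
```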

Closely related to the transition matrix is the stochastic matrix, whose elements are labelled αmn. This matrix gives the probability of choosing the two states m and n between which the move is to be made; it is often known as the underlying matrix of the Markov chain. If the probability of accepting a trial move from m to n is known, then the probability of making a transition from m to n (πmn) is given by multiplying the probability of choosing states... [Pg.431]
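
A minimal numerical sketch of this construction is given below. The symmetric underlying matrix alpha and the target distribution rho are invented for illustration, and the acceptance rule used is the standard Metropolis criterion min(1, ρn/ρm), which is one common way of completing the truncated product above; the diagonal is filled so that each row sums to one.

```python
import numpy as np

rho = np.array([0.5, 0.3, 0.2])          # hypothetical target (limiting) distribution
alpha = np.full((3, 3), 1.0 / 3.0)       # symmetric underlying matrix of the chain

# Off-diagonal transition probabilities: probability of choosing the move (alpha)
# times the probability of accepting it (Metropolis criterion).
n = len(rho)
pi = np.zeros_like(alpha)
for m in range(n):
    for k in range(n):
        if k != m:
            pi[m, k] = alpha[m, k] * min(1.0, rho[k] / rho[m])
    pi[m, m] = 1.0 - pi[m].sum()         # rejected moves keep the system in state m

print(pi)
print("rows sum to 1:", np.allclose(pi.sum(axis=1), 1.0))
print("rho is invariant:", np.allclose(rho @ pi, rho))
```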

In the next subsection, I describe how the basic elements of Bayesian analysis are formulated mathematically. I also describe the methods for deriving posterior distributions from the model, either in terms of conjugate prior likelihood forms or in terms of simulation using Markov chain Monte Carlo (MCMC) methods. The utility of Bayesian methods has expanded greatly in recent years because of the development of MCMC methods and fast computers. I also describe the basics of hierarchical and mixture models. [Pg.322]

F. Simulation via Markov Chain Monte Carlo Methods... [Pg.326]

In some cases, we may not be able to draw directly from the posterior distribution. The difficulty lies in calculating the denominator of Eq. (18), the marginal data distribution p(y). But usually we can evaluate the ratio of the posterior probabilities of two values of the parameters, p(θ1|y)/p(θ2|y), because the denominator in Eq. (18) cancels out in the ratio. The Markov chain Monte Carlo method [40] proceeds by generating draws from some distribution of the parameters, referred to as the proposal distribution, such that the new draw depends only on the value of the old draw. We accept... [Pg.326]
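
The sketch below illustrates the idea in a minimal form: a random-walk Metropolis sampler that needs only an unnormalized density, so the marginal p(y) never has to be computed. The target density (a standard normal) and the proposal width are invented for illustration and are not from the cited text.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_unnorm_posterior(theta):
    # Unnormalized log-density; a standard normal, purely for illustration.
    return -0.5 * theta**2

def metropolis(n_draws=10000, step=1.0, theta0=0.0):
    draws = np.empty(n_draws)
    theta = theta0
    for i in range(n_draws):
        proposal = theta + step * rng.normal()   # new draw depends only on the old one
        # Only the ratio of (unnormalized) posterior values is needed,
        # so the normalizing constant p(y) cancels.
        log_ratio = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
        if np.log(rng.uniform()) < log_ratio:
            theta = proposal
        draws[i] = theta
    return draws

samples = metropolis()
print("mean ~ 0:", samples.mean(), " variance ~ 1:", samples.var())
```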

W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57:97-109, 1970. [Pg.346]

The MC method can be implemented by a modification of the classic Metropolis scheme [25,67]. The Markov chain is generated by a three-step sequence. The first step is identical to the classic Metropolis algorithm: a randomly selected molecule i is displaced within a small cube of side length 2δr centered on its original position... [Pg.25]
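
A schematic version of such a displacement step is sketched below. The harmonic "energy" function, the temperature and the displacement parameter are placeholders invented for the example; they stand in for the real intermolecular potential and simulation settings of the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 1.0          # 1/kT in reduced units (illustrative)
dr = 0.1            # half the side length of the displacement cube (delta r)

def energy(coords):
    # Placeholder configurational energy: harmonic tethering of each particle
    # to the origin; stands in for the real potential.
    return 0.5 * np.sum(coords**2)

def displacement_move(coords):
    """One Metropolis trial move: displace one randomly chosen particle
    within a cube of side 2*dr centred on its current position."""
    trial = coords.copy()
    i = rng.integers(len(coords))
    trial[i] += rng.uniform(-dr, dr, size=3)
    dE = energy(trial) - energy(coords)
    if dE <= 0.0 or rng.uniform() < np.exp(-beta * dE):
        return trial          # move accepted
    return coords             # move rejected; old configuration is retained

coords = rng.uniform(-1.0, 1.0, size=(10, 3))
for _ in range(1000):
    coords = displacement_move(coords)
print("final energy:", energy(coords))
```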

A Hidden Markov Model (HMM) is a general probabilistic model for sequences of symbols. In a Markov chain, the probability of each symbol depends only on the preceding one. Hidden Markov models are widely used in bioinformatics, most notably to replace sequence profiles in the calculation of sequence alignments. [Pg.584]
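
A compact way to illustrate the "hidden" aspect is the forward algorithm, which computes the probability of an observed symbol sequence by summing over the unobserved state path. The two-state model and all its parameters below are invented for illustration only.

```python
import numpy as np

# Hypothetical 2-state HMM over a 2-letter alphabet (all parameters invented).
start = np.array([0.6, 0.4])                 # initial state probabilities
trans = np.array([[0.7, 0.3],                # hidden-state transition probabilities
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],                 # P(symbol | state)
                 [0.2, 0.8]])

def forward(observations):
    """Probability of the observed sequence, marginalized over hidden state paths."""
    f = start * emit[:, observations[0]]
    for obs in observations[1:]:
        f = (f @ trans) * emit[:, obs]
    return f.sum()

print(forward([0, 0, 1, 0]))
```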

Lowry, G. G., Ed. Markov Chains and Monte Carlo Calculations in Polymer Science; Marcel Dekker: New York, 1970. [Pg.188]

Some Basic Ideas on Random Variables and Markov Chains... [Pg.668]

A Markov chain is a sequence of trials that samples a random variable and satisfies two conditions, namely that the outcome of each trial belongs to a finite set of outcomes and that the outcome of each trial depends only on the... [Pg.669]
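
These two conditions are easy to see in a direct simulation: the sketch below, with an invented three-state transition matrix, draws each outcome from a finite set using only the previous outcome.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented 3-state transition matrix; entry [i, j] is P(next = j | current = i).
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def sample_chain(n_steps, state0=0):
    states = [state0]
    for _ in range(n_steps):
        current = states[-1]
        # The next outcome depends only on the current one (Markov property)
        # and belongs to the finite set {0, 1, 2}.
        states.append(int(rng.choice(len(P), p=P[current])))
    return states

print(sample_chain(20))
```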

For a number of copolymers whose kinetics of formation is described by nonideal models, the statistics of alternation of monomeric units in macromolecules cannot be characterized by a Markov chain; however, it may be reduced to an extended Markov chain provided that units, apart from their chemical nature... [Pg.173]

In the framework of this ultimate model [33] there are m² rate constants of chain propagation kαβ describing the addition of monomer Mβ to the radical Rα, whose reactivity is controlled solely by the type α of its terminal unit. Elementary reactions of chain termination due to chemical interaction of radicals Rα and Rβ are characterized by m² kinetic parameters ktαβ. The stochastic process describing macromolecules formed at any moment of time t is a Markov chain with a transition matrix whose elements are expressed through the concentrations Rα and Mα of radicals and monomers at this particular moment in the following way [1,34]: [Pg.176]
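
For the terminal (ultimate) model the transition probabilities are conventionally written as ναβ = kαβ Mβ / Σγ kαγ Mγ. The sketch below builds such a matrix for a binary system with invented rate constants and monomer concentrations; it should be read as an illustration of this standard formula under those assumed numbers, not as a transcription of the cited equations.

```python
import numpy as np

# Invented propagation rate constants k[alpha, beta] and monomer
# concentrations M[beta] for a binary copolymerization (illustrative units).
k = np.array([[100.0, 40.0],
              [ 25.0, 90.0]])
M = np.array([1.5, 0.5])

# Terminal-model transition probabilities:
# nu[alpha, beta] = k[alpha, beta] * M[beta] / sum_gamma k[alpha, gamma] * M[gamma]
rates = k * M                      # element-wise rate of adding M_beta to R_alpha
nu = rates / rates.sum(axis=1, keepdims=True)

print(nu)
print("rows sum to 1:", np.allclose(nu.sum(axis=1), 1.0))
```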

When calculating the average copolymer composition and the probabilities P{Uk} of the sequences of monomeric units, it is possible to set Ta = 0 in the expressions in (7), that is, to neglect the finiteness of the size of the macromolecules. In this case the absorbing Markov chain (7) is replaced by the ergodic Markov chain with transition matrix Q whose elements... [Pg.177]

The instantaneous composition of a copolymer X formed at a monomer mixture composition x coincides, provided the ideal model is applicable, with the stationary vector π of the matrix Q with the elements (8). The mathematical apparatus of the theory of Markov chains permits one immediately to write out the expression for the probability of any sequence P{Uk} in macromolecules formed at a given x. This provides an exhaustive solution to the problem of sequence distribution for copolymers synthesized at initial conversions p << 1, when the monomer mixture composition x has had no time to deviate noticeably from its initial value x°. As for the high-conversion copolymerization products, they evidently represent a mixture of Markovian copolymers prepared at different times, i.e. under different concentrations of monomers in the reaction system. Consequently, in order to calculate the probability of a certain sequence Uk, it is necessary to average its instantaneous value P{Uk} over all conversions preceding the conversion p up to which the synthesis was conducted. [Pg.177]
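
In practice the stationary vector can be obtained as the left eigenvector of Q for eigenvalue 1, and a sequence probability P{Uk} follows as a product of stationary and transition probabilities, which is the standard Markov-chain recipe referred to here. The 2×2 transition matrix in the sketch below is invented for illustration.

```python
import numpy as np

Q = np.array([[0.8, 0.2],          # invented transition matrix of an ergodic chain
              [0.3, 0.7]])

# Stationary vector pi: left eigenvector of Q for eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print("instantaneous composition (stationary vector):", pi)

def sequence_probability(units):
    """P{U_k} for a sequence of unit types, e.g. [0, 0, 1] for M1-M1-M2."""
    p = pi[units[0]]
    for a, b in zip(units[:-1], units[1:]):
        p *= Q[a, b]
    return p

print("P{M1 M1 M2} =", sequence_probability([0, 0, 1]))
```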

Because the dependence of the probability P{Uk} on x should be established by means of the theory of Markov chains, in order to perform such an averaging it is necessary to know how the monomer mixture composition drifts with conversion. This kind of information is available [2,27] from the solution of the following set of differential equations: [Pg.177]
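
In its standard form this set is a Skeist-type balance, dxα/dp = (xα − Xα)/(1 − p), where Xα is the instantaneous copolymer composition at the current feed x. The sketch below integrates it for a binary terminal-model system with invented reactivity ratios; it illustrates how the drift x(p) is obtained rather than reproducing the cited equations.

```python
# Composition drift of the monomer feed with conversion (binary terminal model).

r1, r2 = 0.5, 2.0        # invented reactivity ratios

def instantaneous_composition(x1):
    """Mayo-Lewis instantaneous copolymer composition X1 for the terminal model."""
    x2 = 1.0 - x1
    return (r1 * x1**2 + x1 * x2) / (r1 * x1**2 + 2.0 * x1 * x2 + r2 * x2**2)

def composition_drift(x1_0, p_final=0.9, n_steps=9000):
    """Euler integration of dx1/dp = (x1 - X1) / (1 - p)."""
    x1, dp = x1_0, p_final / n_steps
    path = [(0.0, x1)]
    for i in range(n_steps):
        p = i * dp
        x1 += (x1 - instantaneous_composition(x1)) / (1.0 - p) * dp
        x1 = min(max(x1, 0.0), 1.0)          # keep the mole fraction in [0, 1]
        path.append((p + dp, x1))
    return path

for p, x1 in composition_drift(0.5)[::3000]:
    print(f"conversion {p:.2f}: monomer-1 mole fraction in the feed {x1:.3f}")
```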

This is the simplest of the models where violation of the Flory principle is permitted. The assumption behind this model stipulates that the reactivity of a polymer radical is predetermined by the type of both its ultimate and penultimate units [23]. Here the pairs of terminal units MαMβ act, along with the monomers Mγ, as kinetically independent elements, so that there are m³ rate constants kαβγ of the elementary reactions of chain propagation. The stochastic process of conventional movement along macromolecules formed at fixed x will be Markovian, provided that monomeric units are differentiated by the type of the preceding unit. In this case the number of transient states Sαβ of the extended Markov chain is m², in accordance with the number of pairs of monomeric units. Writing down the elements of the matrix of transitions Q of such a chain [1,10,34,39] and deriving, by means of the mathematical apparatus of Markov chains, the expressions for the instantaneous statistical characteristics of copolymers presents no special problems. By way of illustration this matrix will be presented for the case of binary copolymerization: [Pg.180]
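
One way to set up such an extended chain numerically is sketched below for a binary system: the m² pair states (α,β) are indexed, and the transition (α,β) → (β,γ) is assigned the probability kαβγ Mγ / Σδ kαβδ Mδ, the standard penultimate-model expression, used here as an assumption since the cited matrix (13) is not reproduced on this page. The copolymer composition is then recovered from the stationary vector by the label-erasing summation described in the next excerpt. All numerical values are invented.

```python
import numpy as np

m = 2
# Invented penultimate-model rate constants k[alpha, beta, gamma] for addition
# of monomer M_gamma to a radical ending in ...M_alpha M_beta.
k = np.array([[[100.0, 30.0], [50.0, 60.0]],
              [[ 80.0, 20.0], [10.0, 90.0]]])
M = np.array([1.0, 1.0])          # invented monomer concentrations

pairs = [(a, b) for a in range(m) for b in range(m)]     # m**2 extended (pair) states
Q = np.zeros((m * m, m * m))
for i, (a, b) in enumerate(pairs):
    rates = k[a, b] * M                                  # rate of adding each M_gamma
    probs = rates / rates.sum()
    for g in range(m):
        Q[i, pairs.index((b, g))] = probs[g]             # transition (a,b) -> (b,g)

# Stationary vector of the extended chain, then "erase the labels" by summing
# over the pair states that end in the same unit.
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
composition = [sum(pi[i] for i, (a, b) in enumerate(pairs) if b == unit)
               for unit in range(m)]
print("instantaneous copolymer composition:", composition)
```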

In order to obtain the expression for the components of the vector of instantaneous copolymer composition it is necessary, according to the general algorithm, first to determine the stationary vector π of the extended Markov chain with the matrix of transitions (13), which describes the stochastic process of conventional movement along macromolecules with labeled units, and then to erase the labels. In this particular case such a procedure reduces to the summation... [Pg.181]

Rigorous kinetic analysis has shown [41] that the products of binary copolymerization, formed under conditions of constant monomer concentrations, may be described by the extended Markov chain with four states Sα, if the monomeric units are conventionally labeled by coloring them red and black. Unit Mα is presumed to be black when the corresponding monomer Mα adds to the radical as the first monomer of the complex. In other cases, when monomer Mα adds individually or as the second monomer of the complex, the unit Mα is assumed to be red. As a result the state of a monomeric unit is characterized by two attributes, one of which is its type (α=1,2) while the second is its color (r,b). For example, we shall speak about a unit being in the state S1r provided it is of the first type and red-colored, i.e. M1r. The other states Sα are determined in a similar manner... [Pg.182]

Essentially, in realistic polymer chains, a monomeric unit does not remember the way it appeared in the macroradical. All the experimental characteristics of a copolymer's chemical structure are naturally described in terms of uncolored units. Consequently, having preliminarily calculated these characteristics in the ensemble of macromolecules with colored units, it is then necessary to erase the colors, bearing in mind that every state in a chain of uncolored units is an enlargement of the corresponding pair of states in a chain of colored units. The latter is the Markov chain with transient states (19), whose matrix of transitions looks as follows: [Pg.182]

Upon expressing, from the equilibrium condition, the complex concentration M12 through the concentrations of monomers, and substituting the expression found into relationship (21), we obtain, invoking the formalism of Markov chains, final formulas enabling us to calculate the instantaneous statistical characteristics of the ensemble of macromolecules with colored units. A subsequent color-erasing procedure is carried out in the manner described above. For example, when calculating the instantaneous copolymer composition, this procedure corresponds to the summation of the appropriate components of the stationary vector π of the extended Markov chain: [Pg.183]

Here πα(x) denotes the α-th component of the stationary vector π of the Markov chain with transition matrix Q, whose elements depend on the monomer mixture composition x in the microreactor according to formula (8). To have the set of Eqs. (24) closed it is necessary to determine the dependence of x on X in the thermodynamic equilibrium, i.e. to solve the problem of equilibrium partitioning of monomers between microreactors and their environment. This thermodynamic problem has been solved within the framework of the mean-field Flory approximation [48] for copolymerization of any number of monomers and solvents. The dependencies xα = Fα(X) (α=1,...,m) found there, in combination with Eqs. (24), constitute a closed set of dynamic equations whose solution permits the determination of the evolution of the composition of the macroradical X(Z) with the growth of its length Z, as well as the corresponding change in the monomer mixture composition in the microreactor. [Pg.184]

An exhaustive statistical description of living copolymers is provided in the literature [25]. There, proceeding from the kinetic equations of the ideal model, the type of stochastic process which describes the probability measure on the set of macromolecules has been rigorously established. To the state Sα(τ) of this process corresponds a monomeric unit Mα formed at the instant τ by addition of monomer Mα to the macroradical. To the statistical ensemble of macromolecules marked by the label τ there corresponds a Markovian stochastic process with discrete time but with the set of transient states Sα(τ) constituting a continuum. Here the fundamental distinction from the Markov chain (where the set of states is discrete) is quite evident. The role of the probability transition matrix in characterizing this chain is now played by the integral operator kernel: [Pg.185]

As a result of theoretical consideration of the polycondensation of an arbitrary mixture of such monomers it was proved [55,56] that the alternation of monomeric units along polymer molecules obeys Markovian statistics. If all initial monomers are symmetric, i.e. they resemble AαSαAα, the units Sα (α=1,...,m) will correspond to the transient states of the Markov chain. The probability ναβ of transition from state Sα to Sβ is the ratio Qαβ/να of two quantities, Qαβ and να, which represent, respectively, the number of dyads (SαSβ) and of monads (Sα) per one monomeric unit. Clearly, Qαβ is merely the ratio of the concentration of chemical bonds of the αβ-th type, formed as a result of the reaction between groups Aα and Aβ, to the overall concentration of monomeric units. The probability να0 of a transition from the transient state Sα to the absorbing state S0 equals 1 - pα, where pα represents the conversion of groups Aα. [Pg.188]
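
The sketch below assembles such a chain for a hypothetical two-component system. The dyad counts Qαβ, the monad counts να and the conversions pα are invented numbers, chosen so that each row of transient-state probabilities plus 1 − pα sums to one, following the relations stated above; the fundamental-matrix step at the end is a standard absorbing-chain calculation added for illustration.

```python
import numpy as np

# Hypothetical data for a two-component polycondensation (all numbers invented).
nu = np.array([0.6, 0.4])            # monads (S_alpha) per monomeric unit
p = np.array([0.9, 0.8])             # conversions of groups A_alpha
# Dyads (S_alpha S_beta) per monomeric unit; chosen so that each row sums
# to p_alpha * nu_alpha, which keeps the transition matrix rows normalized.
Qd = np.array([[0.30, 0.24],
               [0.24, 0.08]])

m = len(nu)
# (m+1) x (m+1) transition matrix: transient states S_1..S_m plus absorbing S_0.
T = np.zeros((m + 1, m + 1))
T[:m, :m] = Qd / nu[:, None]         # nu_{alpha beta} = Q_{alpha beta} / nu_alpha
T[:m, m] = 1.0 - p                   # nu_{alpha 0} = 1 - p_alpha
T[m, m] = 1.0                        # S_0 is absorbing

print(T)
print("rows sum to 1:", np.allclose(T.sum(axis=1), 1.0))

# Expected number of units visited before absorption, by starting unit,
# via the fundamental matrix of the absorbing chain.
N = np.linalg.inv(np.eye(m) - T[:m, :m])
print("expected chain length by starting unit:", N.sum(axis=1))
```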

Somewhat more complicated is the Markov chain describing the products of polycondensation with the participation of asymmetric monomers. Any of them, AiSαAj, comprises a tail-to-head oriented monomeric unit Sα. It has been demonstrated [55,56] that the description of molecules of polycondensation copolymers can be performed using the Markov chain whose transient states correspond to the oriented units. A transient state ij of this chain corresponds to a monomeric unit at whose left and right edges the groups Ai and Aj are positioned, respectively. A state ji corresponds to the same unit but oriented in the opposite direction. However, a drawback of this Markov chain worth mentioning is the excessive number of its states. [Pg.188]

It is possible, however, to eliminate this drawback [56] by enlarging the above Markov chain through the combination of several of its states into a single one. Such an enlargement is attainable in two ways. Following the first of them, it is necessary to choose as a transient state (j) of the enlarged chain the sum of states 1j + 2j + ... + mj, whereas the second way suggests that as such a state (i) the... [Pg.188]


Related topics



Adaptive Markov Chain Monte Carlo Simulation

Aperiodic Markov chains

Applications of Markov Chains in Chemical Reactions

Applications of Markov Chains in Chemical Reactors

Chain copolymerization first-order Markov model

Continuous-lag Markov Chains

Conventional Markov-chain Monte Carlo sampling

Fundamentals of Markov Chains

Gibbs sampler, Markov chain Monte Carlo

Gibbs sampler, Markov chain Monte Carlo methods

Going Forward with Markov Chain Monte Carlo

Higher-order Markov chains

Integration, method Markov chains

Irreducibility of Finite Markov Chains

Markov

Markov Chain Monte Carlo Sampling from Posterior

Markov Chain Theory Definition of the Probability Matrix

Markov Chain model

Markov Chains Discrete in Time and Space

Markov Chains with Continuous State Space

Markov chain Monte Carlo

Markov chain Monte Carlo method

Markov chain Monte Carlo sampling

Markov chain Monte Carlo simulation

Markov chain analysis

Markov chain assumptions

Markov chain continuous state space

Markov chain continuous time

Markov chain discrete time

Markov chain mechanism, first order

Markov chain principles

Markov chain theory

Markov chain theory definition

Markov chain theory probabilities

Markov chain theory probability matrix

Markov chains Metropolis algorithm

Markov chains detailed balance

Markov chains ergodic

Markov chains first-order

Markov chains in Monte Carlo

Markov chains irreducible

Markov chains second-order

Markov chains steady state equation

Markov chains time invariant

Markov chains time-reversible

Markov chains, processes

Metropolis algorithm, Markov chain Monte

Monte Carlo, Markov chain

Sampling from a Markov Chain

Stochastic simulation Markov chain

The Markov chain theory for ternary systems

Time-Invariant Markov Chains with Finite State Space

Time-Reversible Markov Chains and Detailed Balance

Transitional Markov chain Monte Carlo

Transitional Markov chain Monte Carlo simulation
