Big Chemical Encyclopedia

Markov chain principles

Besides the fugacity models, the environmental science literature also reports models based on Markov chain principles for evaluating the environmental fate of chemicals in multimedia environments. A Markov chain is a random process whose theory rests on using a transition matrix to describe the transitions of a substance among different states [39,40]. If the substance has altogether n different states,... [Pg.51]
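The transition-matrix idea described above can be sketched in a few lines. A minimal illustration, assuming a hypothetical three-compartment environment with made-up transition probabilities (not values from refs. [39,40]):

```python
import numpy as np

# Hypothetical 3-compartment environment (air, water, soil).
# The transition matrix P is purely illustrative; each row gives the
# one-step probabilities of a substance moving among the states.
P = np.array([
    [0.80, 0.15, 0.05],   # air  -> air, water, soil
    [0.10, 0.85, 0.05],   # water -> air, water, soil
    [0.02, 0.08, 0.90],   # soil -> air, water, soil
])

state = np.array([1.0, 0.0, 0.0])   # all of the substance starts in air
for _ in range(200):                 # repeated one-step transitions
    state = state @ P

print(state)   # approaches the stationary distribution of the chain
```

Iterating the one-step transition drives any initial distribution toward the chain's stationary distribution, which is what makes the formalism useful for long-term multimedia fate estimates.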

Ground surface subsidence is also an important environmental issue in the mining industry. Liu et al. (1999) reported a numerical study of ground surface deformation and subsidence due to water drainage in a large-scale open-pit coal mine, using random media theory, Markov chain principles and superposition techniques. The predicted and the... [Pg.42]

This is the simplest of the models in which violation of the Flory principle is permitted. The assumption behind this model stipulates that the reactivity of a polymer radical is predetermined by the type of both its ultimate and penultimate units [23]. Here the pairs of terminal units MαMβ act, along with the monomers Mγ, as kinetically independent elements, so that there are m³ rate constants kαβγ of the elementary chain-propagation reactions. The stochastic process of conventional movement along macromolecules formed at fixed x will be Markovian, provided that monomeric units are differentiated by the type of the preceding unit. In this case the number of transient states Sαβ of the extended Markov chain is m², in accordance with the number of pairs of monomeric units. Writing down the elements of the transition matrix Q of such a chain [1,10,34,39] and deriving, by means of the mathematical apparatus of Markov chains, the expressions for the instantaneous statistical characteristics of copolymers presents no special problems. By way of illustration this matrix will be presented for the case of binary copolymerization ... [Pg.180]
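The extended (penultimate-model) chain can be sketched numerically. In the sketch below the eight rate constants and the monomer concentrations are illustrative placeholders, not values from the cited works; a step from pair state (α, β) to (β, γ) occurs with probability proportional to kαβγ·[Mγ]:

```python
import numpy as np

# Penultimate-model Markov chain for binary copolymerization (m = 2).
# k[a, b, g]: hypothetical rate constant for a radical ending in ...MaMb*
# adding monomer Mg; M: hypothetical instantaneous monomer concentrations.
k = np.zeros((2, 2, 2))
k[0, 0] = [1.0, 0.5]
k[0, 1] = [0.3, 1.0]
k[1, 0] = [1.0, 0.2]
k[1, 1] = [0.4, 1.0]
M = np.array([0.6, 0.4])

# States of the extended chain are the m**2 = 4 ordered pairs (a, b).
pairs = [(a, b) for a in range(2) for b in range(2)]
Q = np.zeros((4, 4))
for i, (a, b) in enumerate(pairs):
    rates = k[a, b] * M               # propensity of adding each monomer
    probs = rates / rates.sum()
    for j, (b2, g) in enumerate(pairs):
        if b2 == b:                   # only (a, b) -> (b, g) is possible
            Q[i, j] = probs[g]

print(Q)   # stochastic matrix: each row sums to 1
```

The number of states is m² as stated in the text, and each row of Q has at most m nonzero entries, reflecting that the new pair must begin with the old terminal unit.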

Thus, as can be inferred from the foregoing, the calculation of any statistical characteristic of the chemical structure of Markovian copolymers is rather easy to perform. The methods of statistical chemistry [1,3] can reveal the conditions for obtaining a copolymer under which the sequence distribution in macromolecules is describable by a Markov chain, as well as establish the dependence of the elements ναβ of the transition matrix Q of this chain on the kinetic and stoichiometric parameters of the reaction system. It has been rigorously proved [1,3] that Markovian copolymers are formed in those reaction systems where the Flory principle can be applied to the description of macromolecular reactions. According to this fundamental principle, the reactivity of a reactive center in a polymer molecule is believed to be independent of its configuration as well as of the location of this center inside the macromolecule. [Pg.148]

Examples 2.17-2.22 (and 2.41, 2.42) relate to what is normally called a random walk [7, p. 26; 4, p. 89]. In principle, we imagine a particle moving along a straight line in unit steps. Each step is one unit to the right with probability p or one unit to the left with probability q, where p + q = 1. The particle moves until it reaches one of the two extreme points, called boundary points. The possibilities for its behavior at these points determine several different kinds of Markov chains... [Pg.56]
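The walk just described can be simulated directly. A minimal sketch with absorbing boundary points at 0 and N (the boundary behavior is one of the variants mentioned above; parameter values are illustrative):

```python
import random

# Random walk on 0..N with absorbing boundaries: unit steps right with
# probability p, left with probability q = 1 - p, until 0 or N is hit.
def walk_until_absorbed(start, N, p, rng):
    pos = start
    while 0 < pos < N:
        pos += 1 if rng.random() < p else -1
    return pos                     # absorbed at 0 or at N

rng = random.Random(0)             # fixed seed for reproducibility
N, start, p = 10, 5, 0.5
trials = 20_000
hits_N = sum(walk_until_absorbed(start, N, p, rng) == N
             for _ in range(trials))
est = hits_N / trials
print(est)   # for p = 1/2 the exact absorption probability is start/N = 0.5
```

For the symmetric walk the probability of absorption at N starting from i is i/N, so the Monte Carlo estimate should sit close to 0.5 here.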

In the following, we derive the Kolmogorov differential equation on the basis of a simple model and report its various versions. In principle, this equation gives the rate at which a certain state is occupied by the system at a certain time. The equation is of fundamental importance for obtaining models discrete in space and continuous in time. The models discussed later are the Poisson Process, the Pure Birth Process, the Polya Process, the Simple Death Process and the Birth-and-Death Process. In Section 2.1-3 this equation, i.e. Eq. 2-30, was derived for Markov chains discrete in space and time. [Pg.133]
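For the Poisson process, the Kolmogorov (forward) equation reduces to dPₙ/dt = λPₙ₋₁ − λPₙ with P₀(0) = 1, whose solution is the Poisson distribution. A numerical sketch, assuming illustrative values of the rate λ, horizon and truncation level, integrating the equation by forward Euler and comparing with the exact solution:

```python
import math

# Forward Kolmogorov equation for the Poisson process:
#   dP_n/dt = lam*P_{n-1}(t) - lam*P_n(t),   P_0(0) = 1.
lam, T, nmax, dt = 2.0, 1.0, 30, 1e-4
P = [1.0] + [0.0] * nmax
for _ in range(int(round(T / dt))):          # explicit Euler steps
    P = [P[n] + dt * (lam * (P[n - 1] if n > 0 else 0.0) - lam * P[n])
         for n in range(nmax + 1)]

# Exact solution: Poisson probabilities with mean lam*T
exact = [math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
         for n in range(nmax + 1)]
err = max(abs(a - b) for a, b in zip(P, exact))
print(err)   # small O(dt) discretization error
```

The integrated probabilities match e^{−λT}(λT)ⁿ/n! up to the Euler discretization error, illustrating how the differential equation generates the occupation probabilities of each discrete state over continuous time.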

However, whether we have access to the configurations in parallel or sequentially is irrelevant, which permits us to conclude that, for a sufficiently long and ergodic Markov chain, displacements will on average be accepted with the correct probability dictated by the principles of statistical physics (i.e., the probability density of a given statistical physical ensemble). [Pg.186]

Based on the ME principle, gamma-distributed prior PDFs are assumed for all parameters (Soize 2003), and the Bayesian updating scheme is performed for all three alternative model classes using Markov Chain Monte Carlo... [Pg.1529]

The methods used in this study are partially derived from well-known methods in the fields of production/inventory models, queuing theory and Markov Decision Processes. The other methods used, apart from simulation, are all based on Markov chains. In a continuous-review situation, queuing models using Markov processes can be of much help. Queuing models assume that only the jobs or clients present in the system can be served, which is the main principle of production to order. Furthermore, all kinds of priority rules and distributions for demand and service times have been considered in the literature. We will therefore use a queuing model in the continuous-review situation. [Pg.10]
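The simplest queuing model of this Markov-process kind is the M/M/1 queue, a birth-and-death chain whose stationary distribution follows from detailed balance. A minimal sketch with illustrative arrival and service rates (not parameters from the study itself):

```python
# M/M/1 queue as a birth-death Markov process: arrivals at rate lam,
# service at rate mu.  Detailed balance gives the stationary
# distribution pi_n = (1 - rho) * rho**n with rho = lam / mu < 1.
lam, mu = 3.0, 5.0
rho = lam / mu

nmax = 50                                        # truncation level
pi = [(1 - rho) * rho ** n for n in range(nmax)]
mean_queue = sum(n * p for n, p in enumerate(pi))
print(mean_queue)   # theory: rho / (1 - rho) = 1.5 jobs in system
```

The geometric stationary distribution gives the mean number in system as ρ/(1−ρ), the kind of steady-state quantity such queuing models supply for a continuous-review setting.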

By the choice of the action space, all stationary policies have transition probability matrices representing recurrent aperiodic Markov chains. If the number of possible states is limited, i.e. if the one-period demand is bounded, we can determine the optimal production policy. The optimal policy can be determined by a policy iteration method, but we will use the method of successive iteration, as described by Odoni (1969), since this method is faster in our situation. The optimal policy is the policy that achieves the minimum expected cost per transition, which will be denoted by g. Defining the quantity v_n(r) as the total expected cost over the next n transitions if the current state is r and an optimal policy is followed, the iteration scheme takes the form described in the optimality principle by Bellman (1957) ... [Pg.39]
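The successive-iteration scheme can be sketched on a toy problem. The two-state, two-action example below is hypothetical (the transition matrices and costs are not from Odoni (1969)); the differences v_{n+1}(r) − v_n(r) converge to the minimum expected cost per transition g:

```python
# Successive iteration (undiscounted value iteration) for the minimum
# average cost per transition.  States r = 0, 1; for each action a,
# P[a] is a transition matrix and c[a] the one-step cost per state.
# All numbers are illustrative.
P = {0: [[0.9, 0.1], [0.5, 0.5]],
     1: [[0.4, 0.6], [0.1, 0.9]]}
c = {0: [2.0, 5.0],
     1: [4.0, 1.0]}

v = [0.0, 0.0]
for _ in range(500):
    v_new = [min(c[a][r] + sum(P[a][r][s] * v[s] for s in range(2))
                 for a in (0, 1))
             for r in range(2)]
    # The span of v_new - v shrinks; its common limit is g.
    g_low = min(v_new[r] - v[r] for r in range(2))
    g_high = max(v_new[r] - v[r] for r in range(2))
    v = v_new

print(g_low, g_high)   # bracket the minimum expected cost per transition g
```

Since every stationary policy here is recurrent and aperiodic, the lower and upper bounds close on g; stopping once g_high − g_low is small also yields the optimal action in each state as the minimizer of the last update.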

In principle, we can generate microscopic states uniformly in state space and then apply the Metropolis criterion, as discussed in the next section, to keep states with frequency proportional to their probability. Indeed, states can be generated in any arbitrary way. Of course, for dense systems, unless the states are carefully constructed, the majority of generated states will not be accepted into the growing Markov chain because of their high energy. [Pg.263]
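The Metropolis criterion itself can be sketched on a toy system: accept a trial state of energy E_new with probability min(1, exp(−(E_new − E_old)/kT)). The harmonic "energy" and unit kT below are illustrative stand-ins for a real dense system:

```python
import math
import random

# Metropolis sampling of a single coordinate with a hypothetical
# harmonic energy E(x) = x**2; the Boltzmann density is then
# proportional to exp(-x**2 / kT), a Gaussian with variance kT / 2.
def energy(x):
    return x * x

rng = random.Random(1)              # fixed seed for reproducibility
kT, x = 1.0, 0.0
samples = []
for _ in range(200_000):
    x_trial = x + rng.uniform(-1.0, 1.0)        # arbitrary trial move
    d_e = energy(x_trial) - energy(x)
    if d_e <= 0.0 or rng.random() < math.exp(-d_e / kT):
        x = x_trial                 # accept; otherwise keep the old state
    samples.append(x)               # rejected moves recount the old state

var = sum(s * s for s in samples) / len(samples)
print(var)   # theory: variance of the Boltzmann density is kT / 2 = 0.5
```

Note that rejected moves still contribute the current state to the chain; dropping them would bias the sampled distribution away from the target ensemble.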


See other pages where Markov chain principles is mentioned: [Pg.48]    [Pg.51]    [Pg.67]    [Pg.468]    [Pg.334]    [Pg.93]    [Pg.202]    [Pg.30]    [Pg.5]    [Pg.380]    [Pg.1120]    [Pg.1126]    [Pg.185]    [Pg.553]    [Pg.513]    [Pg.232]    [Pg.158]    [Pg.352]    [Pg.285]





© 2024 chempedia.info