Big Chemical Encyclopedia


Markov-type stochastic process

By introducing the Markov-type stochastic connection process through the following equation ... [Pg.233]

In the framework of this ultimate model [33] there are m² rate constants of chain propagation kαβ, describing the addition of monomer Mβ to the radical Rα whose reactivity is controlled solely by the type α of its terminal unit. Elementary reactions of chain termination due to chemical interaction of radicals Rα and Rβ are characterized by m² kinetic parameters kαβᵗ. The stochastic process describing macromolecules formed at any moment of time t is a Markov chain with a transition matrix whose elements are expressed through the concentrations Rα and Mα of radicals and monomers at this particular moment in the following way [1,34] ... [Pg.176]
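The Markov chain of the ultimate (terminal) model can be sketched numerically for binary copolymerization (m = 2). The rate constants and monomer concentrations below are illustrative assumptions, not values from the source; the construction of the transition matrix from kαβ·Mβ and the use of its stationary distribution for the instantaneous copolymer composition are what the sketch is meant to show.

```python
import numpy as np

# Illustrative terminal-model parameters (assumed, not from the source):
# k[a][b] = propagation rate constant for adding monomer b to a radical
# whose terminal unit is of type a; M[b] = monomer concentrations.
k = np.array([[1.0, 0.5],
              [0.2, 1.0]])
M = np.array([0.6, 0.4])

# Transition matrix of the Markov chain of monomeric units:
# nu[a][b] = k[a][b]*M[b] / sum_g k[a][g]*M[g]
rates = k * M                                  # elementwise k[a][b]*M[b]
nu = rates / rates.sum(axis=1, keepdims=True)  # normalize each row

# Stationary distribution pi (pi @ nu = pi) gives the instantaneous
# fractions of the two unit types in the copolymer under this model.
eigvals, eigvecs = np.linalg.eig(nu.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print(nu)
print(pi)
```

The same construction extends to m monomer types: the chain always has m transient states and an m × m stochastic transition matrix.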

This is the simplest of the models in which violation of the Flory principle is permitted. The assumption behind this model stipulates that the reactivity of a polymer radical is predetermined by the type of both its ultimate and penultimate units [23]. Here the pairs of terminal units MαMβ act, along with monomers Mγ, as kinetically independent elements, so that there are m³ rate constants of the elementary reactions of chain propagation kαβγ. The stochastic process of conventional movement along macromolecules formed at fixed conversion x will be Markovian, provided that monomeric units are differentiated by the type of the preceding unit. In this case the number of transient states Sαβ of the extended Markov chain is m², in accordance with the number of pairs of monomeric units. Writing down the elements of the transition matrix Q of such a chain [1,10,34,39], and deriving by means of the mathematical apparatus of Markov chains the expressions for the instantaneous statistical characteristics of copolymers, presents no special problems. By way of illustration this matrix will be presented for the case of binary copolymerization ... [Pg.180]

An exhaustive statistical description of living copolymers is provided in the literature [25]. There, proceeding from the kinetic equations of the ideal model, the type of stochastic process which describes the probability measure on the set of macromolecules has been rigorously established. To the state Sα(x) of this process there corresponds a monomeric unit Mα formed at the instant τ by addition of monomer Mα to the macroradical. To the statistical ensemble of macromolecules marked by the label x there corresponds a Markovian stochastic process with discrete time but with the set of transient states Sα(x) constituting a continuum. Here the fundamental distinction from the Markov chain (where the number of states is discrete) is quite evident. The role of the probability transition matrix in characterizing this chain is now played by the integral operator kernel ... [Pg.185]

Many stochastic processes are of a special type called "birth-and-death processes" or "generation-recombination processes". We employ the less loaded name "one-step processes". This type is defined as a continuous-time Markov process whose range consists of integers n and whose transition matrix W permits only jumps between adjacent sites ... [Pg.134]
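A one-step process can be sketched by building its tridiagonal transition matrix W explicitly. The example below is an assumed toy system (constant production rate g, linear decay rate r·n, and a finite state-space cutoff N), chosen because its stationary distribution is known to be Poisson with mean g/r, which lets the construction be checked:

```python
import numpy as np

# Illustrative one-step (birth-and-death) process: production at
# constant rate g, decay at rate r*n. Values are assumptions for
# the sketch, not from the source.
g, r, N = 5.0, 1.0, 40

# Transition matrix W of the master equation: only jumps between
# adjacent sites n -> n+1 (birth) and n -> n-1 (death) are allowed.
W = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        W[n + 1, n] = g          # birth: n -> n+1
    if n > 0:
        W[n - 1, n] = r * n      # death: n -> n-1
    W[n, n] = -W[:, n].sum()     # column sums vanish (probability conserved)

# Stationary distribution = null vector of W; here it approximates
# a Poisson distribution with mean g/r.
vals, vecs = np.linalg.eig(W)
p = np.abs(np.real(vecs[:, np.argmin(np.abs(vals))]))
p = p / p.sum()
mean_n = (np.arange(N + 1) * p).sum()
print(mean_n)  # close to g/r
```

The tridiagonal structure of W is exactly the "jumps between adjacent sites" restriction in the definition above.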

Some restrictions are imposed when we start the application of limit theorems to the transformation of a stochastic model into its asymptotic form. The most important restriction is given by the rule according to which the past and future of the stochastic process are mixed. In this rule it is considered that the probability that a fact or event C occurs will depend on the difference between the current process (P(C) = P(X(t) ∈ A | V(X(t)))) and the preceding process (P(C|ε)). Indeed, if, for the values of the group (τ, ε), we compute ατ = max[P(C|ε) − P(C)], then we have a measure of the influence of the process history on the future of the process evolution. Here τ defines the beginning of a new random process evolution and ατ gives the combination between the past and the future of the investigated process. If a Markov connection process is homogeneous with respect to time, we have ατ → 0 after an exponential evolution. If ατ → 0 when τ increases, the influence of the history on the process evolution decreases rapidly and then we can apply the first-type limit theorems to transform the model into an asymptotic... [Pg.238]

If we consider the evolution of the liquid element together with the probabilities of its elementary evolutions, we can observe that we have a continuous Markov stochastic process. If we apply the model given in Eq. (4.68), P₁(x, t) is the probability of having the liquid element at position x and time t, evolving by means of a type 1 elementary process (displacement with a +v flow rate along the positive direction of x). This probability can be described through three independent events ... [Pg.260]

In this chapter we introduce Markov chains. These are a special type of stochastic process, which is a process that moves around a set of possible values where the future values cannot be predicted with certainty. There is some chance element in the evolution of the process through time. The set of possible values is called the state space of the process. Markov chains have the "memoryless" property that, given the past and present states, the future state depends only on the present state. This chapter will give us the necessary background knowledge about Markov chains that we will need to understand Markov chain Monte Carlo sampling. [Pg.101]

In Section 5.1 we introduce stochastic processes. In Section 5.2 we introduce Markov chains and define some terms associated with them. In Section 5.3 we find the n-step transition probability matrix in terms of the one-step transition probability matrix for time-invariant Markov chains with a finite state space. Then we investigate when a Markov chain has a long-run distribution and discover the relationship between the long-run distribution of the Markov chain and the steady state equation. In Section 5.4 we classify the states of a Markov chain with a discrete state space, and find that all states in an irreducible Markov chain are of the same type. In Section 5.5 we investigate sampling from a Markov chain. In Section 5.6 we look at time-reversible Markov chains and discover the detailed balance conditions, which are needed to find a Markov chain with a given steady state distribution. In Section 5.7 we look at Markov chains with a continuous state space to determine the features analogous to those for discrete state space Markov chains. [Pg.101]
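The relationship between the one-step matrix, the n-step matrix, and the long-run distribution described in this outline can be sketched for a small example. The two-state chain below is an assumed illustration; the n-step matrix is simply the n-th matrix power, and for large n its rows converge to the long-run distribution π, which satisfies the steady state equation πP = π:

```python
import numpy as np

# Illustrative two-state, time-invariant Markov chain (assumed values).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# n-step transition probabilities are the n-th power of the
# one-step transition matrix.
P5 = np.linalg.matrix_power(P, 5)
P100 = np.linalg.matrix_power(P, 100)   # rows approach the long-run distribution

# Long-run distribution: solves the steady state equation pi P = pi
# with components summing to 1; for this P it is (0.8, 0.2).
pi = np.array([0.8, 0.2])

print(P5)
print(P100)
```

Both rows of P100 are numerically indistinguishable from π, illustrating that the chain forgets its starting state.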

A Markov chain is a special type of stochastic process where the future development of the process, given the present state and the past states, only depends on the present state. This is known as the Markov property. [Pg.122]

Gillespie's algorithm numerically reproduces the solution of the chemical master equation by simulating the individual occurrences of reactions. This type of description is called a jump Markov process, a type of stochastic process. A jump Markov process describes a system that has a probability of discontinuously transitioning from one state to another. This type of algorithm is also known as kinetic Monte Carlo. An ensemble of simulation trajectories in state space is required to accurately capture the probabilistic nature of the transient behavior of the system. [Pg.297]
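A minimal sketch of the algorithm for the simplest possible case, a single first-order reaction A → B with stochastic rate constant c (all parameter values below are assumptions for illustration). Each iteration draws an exponentially distributed waiting time from the total propensity and then executes one discontinuous jump in state space; an ensemble of runs approximates the master-equation mean:

```python
import random

def gillespie_decay(n_a0=100, c=1.0, t_end=5.0, seed=0):
    """One SSA trajectory for A -> B; returns molecules of A left at t_end."""
    rng = random.Random(seed)
    t, n_a = 0.0, n_a0
    while n_a > 0:
        a = c * n_a                   # total propensity of the single reaction
        tau = rng.expovariate(a)      # exponential waiting time to next event
        if t + tau > t_end:
            break
        t += tau
        n_a -= 1                      # discontinuous jump: one A reacts
    return n_a

# Ensemble of trajectories: the mean approaches the deterministic
# (and master-equation mean) value n_a0 * exp(-c * t_end).
runs = [gillespie_decay(seed=s) for s in range(200)]
mean_remaining = sum(runs) / len(runs)
print(mean_remaining)
```

For systems with several reactions, the same loop additionally selects which reaction fires, with probability proportional to its propensity.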

Perikinetic motion of small particles (known as "colloids") in a liquid is easily observed under the optical microscope or in a shaft of sunlight through a dusty room, the particles moving in a somewhat jerky and chaotic manner known as the random walk, caused by bombardment of the particles by the fluid molecules reflecting their thermal energy. Einstein propounded the essential physics of perikinetic or Brownian motion (Fürth, 1956). Brownian motion is stochastic in the sense that earlier movements do not affect each successive displacement. This is thus a type of Markov process, and the trajectory is an archetypal fractal object of dimension 2 (Mandelbrot, 1982). [Pg.161]
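The random walk underlying perikinetic motion can be sketched directly: each step is drawn independently of all earlier ones (the Markov property), and Einstein's result that the mean squared displacement grows linearly in time falls out of the simulation. Step length, step count, and ensemble size below are illustrative assumptions:

```python
import random

def random_walk_msd(n_steps, n_walkers=2000, seed=1):
    """Mean squared displacement of an ensemble of 2-D lattice walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = 0
        for _ in range(n_steps):
            # Each displacement is independent of the walk's history.
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x += dx
            y += dy
        total += x * x + y * y
    return total / n_walkers

# <r^2> = n for a unit-step walk: doubling the number of steps
# (i.e. the elapsed time) roughly doubles the MSD.
msd50 = random_walk_msd(50)
msd100 = random_walk_msd(100)
print(msd50, msd100)
```

The linear growth of the MSD is the diffusive scaling that makes the trajectory a fractal of dimension 2.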

Dynamic programming (DP) is an approach for the modeling of dynamic and stochastic decision problems, the analysis of the structural properties of these problems, and their solution. Dynamic programs are also referred to as Markov decision processes (MDP). Slight distinctions can be made between DP and MDP, such as the preference for the term dynamic programming over Markov decision process in the case of some deterministic problems. The term stochastic optimal control is also often used for these types of problems. We shall use these terms synonymously. [Pg.2636]
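To make the DP/MDP connection concrete, a toy Markov decision process can be solved by value iteration. The two-state machine below (run it for reward, or pay to repair it) is entirely invented for illustration; the point is the Bellman update V ← maxₐ (Rₐ + γ Pₐ V) that both the DP and MDP formulations share:

```python
import numpy as np

# Assumed toy MDP: state 0 = machine working, state 1 = broken.
# P[a][s][s'] = transition probability under action a; R[a][s] = reward.
P = {
    "run":    np.array([[0.8, 0.2],    # running a working machine may break it
                        [0.0, 1.0]]),  # running a broken machine achieves nothing
    "repair": np.array([[1.0, 0.0],
                        [0.9, 0.1]]),  # repair usually restores the machine
}
R = {"run": np.array([10.0, 0.0]), "repair": np.array([-5.0, -5.0])}
gamma = 0.9                            # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(500):
    V = np.max([R[a] + gamma * P[a] @ V for a in P], axis=0)

# Greedy policy with respect to the converged value function.
policy = [max(P, key=lambda a: (R[a] + gamma * P[a] @ V)[s]) for s in range(2)]
print(V, policy)
```

The resulting policy (run when working, repair when broken) is the kind of structural property of the solution that DP analysis aims at.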

It is often remarked that stochastic models of chemical reactions can be easily extended to birth-and-death-type phenomena that take place in other populations of entities. Although we agree with this approach in principle, we have to remark that from the mathematical point of view the relationship between the three categories, namely stochastic models of reactions, simple birth and death processes (Karlin & MacGregor, 1957) and Markov population processes (Kingman, 1969), is not simple; it is illustrated in Figure 5.1. [Pg.104]

Connections among stochastic models of reactions, simple birth and death-type processes, compartmental systems and population Markov processes are illustrated in Fig. 5.11. [Pg.143]

Nassar et al. [10] employed a stochastic approach, namely a Markov process with transient and absorbing states, to model in a unified fashion both complex linear first-order chemical reactions, involving molecules of multiple types, and mixing, accompanied by flow in a nonsteady- or steady-state continuous-flow reactor. Chou et al. [11] extended this system to nonlinear chemical reactions by means of Markov chains. An assumption is made that transitions occur instantaneously at each instant of the discretized time. [Pg.542]



© 2024 chempedia.info