Big Chemical Encyclopedia


Continuous time Markov chain

The major problem with Markov chains continuous in time and space is that analytical solutions of the governing equations, which also depend on the boundary conditions, are available only for simplified situations; for more complicated cases, numerical solutions are called for. [Pg.179]

W. J. Anderson, Continuous-Time Markov Chains: An Applications-Oriented Approach, Springer-Verlag, New York, 1991. [Pg.314]

K. K. Yin, H. Yang, P. Daoutidis, G. G. Yin, Simulation of population dynamics using continuous-time finite-state Markov chains, Comput. Chem. Eng. 27 (2003) 235-249. [Pg.272]

Markov chains or processes are named after the Russian mathematician A. A. Markov (1856-1922), who introduced the concept of chain dependence and did basic pioneering work on this class of processes [1]. A Markov process is a mathematical probabilistic model that is very useful in the study of complex systems. The essence of the model is that if the initial state of a system is known, i.e. its present state, and the probabilities of moving to other states are also given, then it is possible to predict the future state of the system while ignoring its past history. In other words, past history is immaterial for predicting the future; this is the key element in Markov chains. A distinction is made between Markov processes discrete in time and space, processes discrete in space and continuous in time, and processes continuous in space and time. This book is mainly concerned with processes discrete in time and space. [Pg.6]
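
As a concrete illustration of this key element, the following minimal sketch (not from the book; the two-state system and its transition probabilities are invented) predicts future states from nothing but the present state vector and the transition probabilities:

```python
import numpy as np

# Hypothetical two-state system (e.g., a molecule in state A or B).
# Row i of P holds the probabilities of moving from state i to each state
# in one time step; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

s = np.array([1.0, 0.0])  # present state: certainly in state A

# The future depends only on the present state vector and P,
# never on how the system arrived at s.
for n in range(1, 6):
    s = s @ P
    print(f"step {n}: P(A) = {s[0]:.4f}, P(B) = {s[1]:.4f}")
```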

Markov chains have been dealt with extensively in refs. [2-8, 15-18, 84], mainly by mathematicians. Based on the material of these articles and books, a coherent and short "distillate" is presented in the following. Detailed mathematics is avoided, and numerous examples are presented to demonstrate the potential of the Markov-chain method. A distinction has been made between processes discrete in time and space, processes discrete in space and continuous in time, and processes continuous in space and time. [Pg.11]

Example 2.2 is also a Markov chain. It deals with a pulse input of some dye introduced into a perfectly-mixed continuous flow reactor. Here the system is a fluid element containing some of the dye pulse. The state of the system is the concentration of the dye pulse in the reactor, which is a continuous function of time. The change of the system's concentration with time is the state transition given by... [Pg.22]

MARKOV CHAINS DISCRETE IN SPACE AND CONTINUOUS IN TIME. 2.2-1 Introduction [Pg.132]

In the following, we derive the Kolmogorov differential equation on the basis of a simple model and report its various versions. In principle, this equation gives the rate at which a certain state is occupied by the system at a certain time. This equation is of fundamental importance for obtaining models discrete in space and continuous in time. The models discussed later are the Poisson Process, the Pure Birth Process, the Polya Process, the Simple Death Process and the Birth-and-Death Process. In Section 2.1-3 this equation, i.e. Eq. (2-30), was derived for Markov chains discrete in space and time. [Pg.133]
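
To make the role of this equation concrete, here is a minimal sketch (the four-state birth-and-death generator and its rates are invented for illustration) that evaluates the solution p(t) = p(0)e^{Qt} of the Kolmogorov forward equation dp/dt = pQ for a chain discrete in space and continuous in time:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator Q for a 4-state birth-and-death process
# (rates invented for the sketch). Off-diagonal q_ij >= 0 is the
# transition rate i -> j; each row of Q sums to zero.
birth, death = 2.0, 1.0
n = 4
Q = np.zeros((n, n))
for i in range(n):
    if i < n - 1:
        Q[i, i + 1] = birth
    if i > 0:
        Q[i, i - 1] = death
    Q[i, i] = -Q[i].sum()

# Kolmogorov forward equation dp/dt = p Q has the solution p(t) = p(0) e^{Qt}:
p0 = np.array([1.0, 0.0, 0.0, 0.0])   # start in state 0
for t in (0.5, 1.0, 2.0):
    p = p0 @ expm(Q * t)
    print(f"t = {t}: {np.round(p, 4)}")
```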

The models discrete in space and continuous in time, as well as those continuous in space and time, often lead to non-linear differential equations for which an analytical solution is extremely difficult or impossible. In order to solve the equations, simplifications, e.g. linearization of expressions, and assumptions must be made. If this is not sufficient, one must resort to numerical solutions. This led the author to the major conclusion that there are many advantages to using Markov chains which are discrete in time and space. The major reason is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Additional reasons are detailed in Chapter 1. It will be shown later that this presentation also yields the finite difference equations of the process under consideration, on the basis of which the differential equations have been derived. [Pg.180]
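
The following sketch illustrates this unified description on an invented example (a first-order reaction A -> B with an assumed rate constant, in the spirit of the batch-reactor applications discussed later): the state vector times the one-step transition matrix is exactly the finite difference equation of the process, and it reproduces the solution of the corresponding differential equation:

```python
import numpy as np

# Hypothetical first-order reaction A -> B with rate constant k (1/s),
# treated as a Markov chain discrete in time and space.
k, dt = 0.5, 0.01
P = np.array([[1.0 - k * dt, k * dt],   # one-step transition probability matrix
              [0.0,          1.0   ]])  # B is absorbing

s = np.array([1.0, 0.0])                # state vector: all molecules are A at t = 0
for n in range(int(2.0 / dt)):          # march to t = 2 s
    s = s @ P                           # the finite difference equation of the process

print("Markov chain  x_A(2):", round(s[0], 4))
print("analytical e^{-kt}  :", round(np.exp(-k * 2.0), 4))
```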

Definitions. The basic elements of Markov chains associated with Eq. (2-24) are the system, the state space, the initial state vector and the one-step transition probability matrix. Following refs. [26-30], each of these elements is defined below, with special emphasis on chemical reactions occurring in a batch perfectly-mixed reactor or in a single continuous plug-flow reactor. In the latter case, which may be simulated by perfectly-mixed reactors in series, all species reside in the reactor for the same time. [Pg.187]

O. Cappe, C. P. Robert, and T. Ryden, Reversible jump, birth-and-death and more general continuous time Markov chain Monte Carlo samplers. J. R. Stat. Soc. Ser. B Stat. Methodol. 65:679-700 (2003). [Pg.163]

Markov chain is the term used to describe a process observed at discrete intervals. However, some investigators prefer to describe Markov chains as a special case of a continuous-time Markov process; that is, the process is only observed at discrete intervals, but in reality it is a continuous-time Markov process (16). Therefore, the term Markov process can be used to describe all such processes and chains collectively. [Pg.690]

A discussion of the continuous-time model, the time-nonhomogeneous model, and the semi-Markov chain is beyond the scope of this chapter (e.g., see Norris (13),... [Pg.692]

We have related the continuous-time chain to a discrete-time chain with a fast clock, whose time unit is the small quantity h but whose transition probabilities P_ij(h) are proportionately small for i ≠ j by (29). This allows us to analyze the continuous-time chain using discrete-time results. All the basic calculations for continuous-time, finite-state Markov chains may be carried out by taking a limit as h → 0 of the discrete-time approximation. For example, the transition matrix P(t), defined in (28), may be derived as follows. We divide the time interval [0, t] into a large number N of short intervals of length h = t/N, so that the transition matrix P(t) is the N-step transition matrix corresponding to P(h). It follows from (29) that P(t) is approximately the N-step transition matrix corresponding to the transition matrix I + hQ. This approximation becomes exact as h → 0, and we have... [Pg.2154]
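
A short numerical check of this limit (the three-state generator is invented for illustration; scipy's expm supplies P(t) = e^{Qt} for comparison):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 3-state generator (rates invented for the sketch).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 0.5,  0.5, -1.0]])

t = 1.0
for N in (10, 100, 10000):
    h = t / N
    # N-step matrix of the discrete-time approximation with one-step matrix I + hQ
    approx = np.linalg.matrix_power(np.eye(3) + h * Q, N)
    err = np.abs(approx - expm(Q * t)).max()
    print(f"N = {N:6d}: max |(I + hQ)^N - P(t)| = {err:.2e}")
```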

The generator Q determines how a continuous-time Markov chain evolves via (26). There is another, more direct prescription for the evolution of the chain in terms of Q. If the chain is now in state i, then the time T until the next change of state has the exponential distribution with rate q_i. [Pg.2155]
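
This prescription translates directly into a simulation algorithm: hold in state i for an exponential time with rate q_i, then jump to j with probability q_ij / q_i. A minimal sketch, with an invented three-state generator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3-state generator (rates invented); q_i = -Q[i, i]
# is the total rate of leaving state i.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  3.0, -3.0]])

def simulate(Q, i, t_end):
    """Simulate one path: exponential holding time with rate q_i in state i,
    then jump to j with probability q_ij / q_i."""
    t, path = 0.0, [(0.0, i)]
    while True:
        q_i = -Q[i, i]
        t += rng.exponential(1.0 / q_i)       # holding time ~ Exp(q_i)
        if t >= t_end:
            return path
        jump_probs = Q[i].clip(min=0.0) / q_i  # zero out the diagonal, normalize
        i = rng.choice(len(Q), p=jump_probs)
        path.append((round(t, 3), i))

print(simulate(Q, 0, t_end=2.0))
```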

The Poisson counting process of Section 2 is a continuous-time Markov chain N on the infinite state space {0, 1, 2, ...}, with generator q_{n,n+1} = λ, q_{n,n} = -λ, and all other entries zero. [Pg.2155]

Thus, the transitions are always from a state n to the state n + 1. The transitions are, of course, arrivals because they cause the count N to increase by 1. The probability of a transition in a short interval of time h is approximately λh for any n by (26). This observation corresponds precisely with the description of the Poisson process in terms of coin tossing in Section 2. Moreover, the fact that the time between arrivals in a Poisson process is exponential may be seen now as a consequence of the fact, expressed in (33), that the holding times in any continuous-time Markov chain are exponentially distributed. [Pg.2155]
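
The coin-tossing correspondence is easy to check numerically. The sketch below (rate, slot width, and horizon are invented) generates arrivals as Bernoulli trials with success probability λh per slot and confirms that the interarrival times behave like an exponential distribution (mean ≈ standard deviation ≈ 1/λ):

```python
import numpy as np

rng = np.random.default_rng(1)

lam, h, t_end = 2.0, 1e-3, 10000.0   # rate, slot width, horizon (all invented)
n_slots = int(t_end / h)

# Coin-tossing description: at most one arrival per slot, with probability lam * h.
arrivals = rng.random(n_slots) < lam * h
times = np.nonzero(arrivals)[0] * h
gaps = np.diff(times)                # interarrival times

print("mean gap  :", round(gaps.mean(), 4), "(theory 1/lam =", 1 / lam, ")")
print("std of gap:", round(gaps.std(), 4), "(exponential: std = mean)")
```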

Let us briefly describe the long-run behavior of continuous-time, finite-state Markov chains. We assume irreducibility as before, which in this case means simply that the entries of P(t) are all positive for t > 0. Periodicity does not arise in continuous time. There is a unique distribution π satisfying πQ = 0. [Pg.2155]

This relates π to the parameters of the chain, namely the entries of its generator Q. These are the steady-state equations in the continuous-time case. For finite-state irreducible chains, these equations have a unique solution whose components add to 1, and this solution is the steady-state distribution π. As in the discrete-time case, it is also the limiting distribution of the Markov chain and gives the long-run proportion of time spent in each state. These results extend to the infinite-state case, assuming positive recurrence, as in Subsection 3.3. [Pg.2156]
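
A minimal sketch of solving the steady-state equations πQ = 0 with Σ π_i = 1 for a finite-state chain (the generator is invented for illustration):

```python
import numpy as np

# Illustrative irreducible 3-state generator (rates invented).
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -1.0,  0.0],
              [ 0.0,  3.0, -3.0]])

# Steady-state equations: pi Q = 0 with sum(pi) = 1. Since the rows of Q sum
# to zero, one balance equation is redundant; replace it with normalization.
n = len(Q)
A = np.vstack([Q.T[:-1], np.ones(n)])  # drop one equation, add normalization row
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("steady-state distribution:", np.round(pi, 4))
print("check pi @ Q ~ 0:", np.round(pi @ Q, 10))
```

Replacing one redundant balance equation with the normalization row is the standard trick: the columns of Qᵀ are linearly dependent, so the augmented system is square and nonsingular for an irreducible chain.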

Now we consider some special continuous-time Markov chains that arise in discussing simple queueing models. We restrict attention to the continuous-time case, as this is the setting of most of the practical models treated later. [Pg.2156]

Let X be a continuous-time Markov chain with generator Q and steady-state distribution π. The quantity π_i q_ij, sometimes called the flux from i to j, is the rate at which transitions from i to j occur in steady state. Contrast this with the transition rate q_ij itself, which is the rate of transitions from i to j when the chain is in state i. [Pg.2156]

In this subsection we treat several queues of the M/M/s/K type. These queues have Poisson arrivals, exponential service times, s servers, and capacity K. For these queues, the number-in-system L(t), t ≥ 0, is a continuous-time Markov chain, in fact a birth-and-death process (Subsection 3.6). The Markov property arises from the exponentiality of service and interarrival times; see the discussion following (33). The queueing discipline is taken to be FIFO in every case. The results presented here follow fairly directly from (38). [Pg.2158]
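
For such birth-and-death queues the steady-state distribution follows from the balance relations p_{n+1} μ_{n+1} = p_n λ_n, with λ_n = λ and μ_n = min(n, s) μ. A minimal sketch (the numerical parameters are invented):

```python
import numpy as np

def mmsk_distribution(lam, mu, s, K):
    """Steady-state distribution of the number in system for an M/M/s/K queue,
    from the birth-and-death balance relations p_{n+1} = p_n * lam / mu_{n+1},
    where mu_n = min(n, s) * mu."""
    p = np.ones(K + 1)
    for n in range(K):
        p[n + 1] = p[n] * lam / (min(n + 1, s) * mu)
    return p / p.sum()

# Illustrative parameters (invented): 2 servers, capacity 5.
p = mmsk_distribution(lam=3.0, mu=2.0, s=2, K=5)
print("P(n in system):", np.round(p, 4))
print("P(blocking) = p_K =", round(p[-1], 4))
print("mean number in system L =", round(np.dot(np.arange(6), p), 4))
```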


So far we have considered a single mesoscopic equation for the particle density and a corresponding random walk model, a Markov process with continuous states in discrete time. It is natural to extend this analysis to a system of mesoscopic equations for the densities of particles p_i(x, n), i = 1, 2, ..., m. To describe the microscopic movement of particles we need a vector process (X_n, S_n), where X_n is the position of the particle at time n and S_n its state at time n. S_n is a sequence of random variables taking one of m possible values at time n. One can introduce the probability density p_i(x, n) = ∂P(X_n ≤ x, S_n = i)/∂x and an imbedded Markov chain with the m × m transition matrix H = (h_ij), so that the matrix entry h_ij corresponds to the conditional probability of a transition from state i to state j. [Pg.59]

As mentioned on page 61, CTRWs are known as semi-Markov processes in the mathematical literature. In this section we provide a brief account of semi-Markov processes. They were introduced by P. Levy and W. L. Smith [253, 415]. Recall that for a continuous-time Markov chain, the transitions between states at random times T_n are determined by the discrete chain X_n with the transition matrix H = (h_ij). The waiting time ΔT_n = T_n - T_{n-1} for a given state i is exponentially distributed with the transition rate k_i, which depends only on the current state i. The natural generalization is to allow arbitrary distributions for the waiting times. This leads to a semi-Markov process. The reason for such a name is that the underlying process is a two-component Markov chain (X_n, T_n). Here the random sequence X_n represents the state at the nth transition, and T_n is the time of the nth transition. Obviously,... [Pg.67]
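
A minimal sketch of such a two-component chain (X_n, T_n) with non-exponential waiting times (the embedded transition matrix and the waiting-time distributions, a heavy-tailed Pareto and an exponential, are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Embedded discrete chain X_n: invented 2-state transition matrix H.
H = np.array([[0.2, 0.8],
              [0.6, 0.4]])

# Semi-Markov generalization: the waiting time in state i follows an
# arbitrary distribution instead of an exponential one.
def wait(i):
    return rng.pareto(1.5) + 1.0 if i == 0 else rng.exponential(0.5)

def semi_markov_path(i, n_steps):
    t, path = 0.0, []
    for n in range(n_steps):
        t += wait(i)                  # T_n: time of the nth transition
        i = rng.choice(2, p=H[i])     # X_n: state after the nth transition
        path.append((round(t, 3), i))
    return path

print(semi_markov_path(0, n_steps=8))
```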

The standard continuous-time Markov chain is a special case of a semi-Markov process with the transition kernel... [Pg.68]

Note that the Poisson process plays a very important role in random walk theory. It can be defined in two ways: (1) as a continuous-time Markov chain with constant intensity, i.e., as a pure birth process with constant birth rate k; (2) as a renewal process. In the latter case, it can be represented as (3.25) with T = ... Here... [Pg.69]

In the case of a discrete-time Markov chain with a continuous state space, we may simply suppress the variable t in the above formulas and write... [Pg.410]

For now, assume in addition that the total repair time is exponentially distributed. Under this assumption, each functional group can be modelled as a continuous-time Markov chain with R + 1 states. We let state i = 0, ..., R correspond to the situation in which there are i pieces of defective equipment in the group. When in state i < R + 1, we move to state i + 1 with rate ...; this transition corresponds to a failure in a piece of equipment within the group. When in state i > 0, we move to state i - 1 with rate ...; this transition corresponds to a finished repair. [Pg.576]
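
A minimal sketch of this kind of birth-and-death repair chain. It is hypothetical, since the source elides the rate symbols: here each of G pieces of equipment (G is an assumed group size) is taken to fail at rate lam, so state i fails at rate (G - i) * lam, and a repair finishes at rate mu:

```python
import numpy as np

# Assumed rates and group size (the source elides these symbols).
lam, mu, R, G = 0.1, 1.0, 3, 5

# Generator of the R + 1 state chain; state i = i pieces of defective equipment.
Q = np.zeros((R + 1, R + 1))
for i in range(R + 1):
    if i < R:
        Q[i, i + 1] = (G - i) * lam  # a failure among the G - i working pieces
    if i > 0:
        Q[i, i - 1] = mu             # a finished repair
    Q[i, i] = -Q[i].sum()

# Long-run occupation probabilities: pi Q = 0, sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(R + 1)])
b = np.zeros(R + 1); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("P(i pieces defective):", np.round(pi, 4))
```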

The General Markov Reward Model considers a continuous-time Markov chain with a set of states and a transition intensity matrix... [Pg.1510]

The resulting model is the Continuous-Time Markov Chain (CTMC) reported in Figure 2, where ... [Pg.1895]

Dellaert studies two lead-time policies, CON and DEL, where DEL considers the probability distribution of the flow time in steady state while quoting lead times. He models the problem as a continuous-time Markov chain, where the states are denoted by (n, s) = (number of jobs, state of the machine). Interarrival, service and setup times are assumed to follow the exponential distribution, although the results can also be generalized to other distributions, such as Erlang. For both policies, he derives the pdf of the flow time, and relying on the results in [88] (the optimal lead time is a unique minimum of strictly convex functions), he claims that the optimal solution can be found by binary search. [Pg.532]


See other pages where Continuous time Markov chain is mentioned: [Pg.170]    [Pg.171]    [Pg.346]    [Pg.265]    [Pg.10]    [Pg.132]    [Pg.2154]    [Pg.11]    [Pg.172]    [Pg.1128]    [Pg.1129]    [Pg.1274]    [Pg.1885]    [Pg.58]    [Pg.406]   







Continuous time

Markov

Markov chain


© 2024 chempedia.info