
Markov chain discrete time

The fundamentals have also been demonstrated with examples drawn from unusual sources, art and the Bible. Surprisingly, biblical stories and paintings can be nicely analyzed by applying Markov chains discrete in time and space. [Pg.11]

MARKOV CHAINS DISCRETE IN TIME AND SPACE 2.1-1 The conditional probability... [Pg.11]

MARKOV CHAINS DISCRETE IN SPACE AND CONTINUOUS IN TIME 2.2-1 Introduction... [Pg.132]

In the following, we derive the Kolmogorov differential equation on the basis of a simple model and report its various versions. In principle, this equation gives the rate at which a certain state is occupied by the system at a certain time. It is of fundamental importance for obtaining models discrete in space and continuous in time. The models discussed later are the Poisson Process, the Pure Birth Process, the Polya Process, the Simple Death Process and the Birth-and-Death Process. In section 2.1-3 this equation, i.e. Eq.(2-30), has been derived for Markov chains discrete in space and time. [Pg.133]
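As a rough numerical illustration of the Kolmogorov (forward) differential equation, the sketch below integrates dp/dt = pQ for a small birth-and-death generator. The four-state truncation and the rates lam and mu are assumptions made for this example, not values from the source.

```python
import numpy as np

# Hypothetical birth-and-death generator on states {0, 1, 2, 3}:
# birth rate lam moves the system up one state, death rate mu*i moves it down.
lam, mu = 1.0, 0.5
n_states = 4
Q = np.zeros((n_states, n_states))
for i in range(n_states):
    if i + 1 < n_states:
        Q[i, i + 1] = lam          # birth: i -> i+1
    if i - 1 >= 0:
        Q[i, i - 1] = mu * i       # death: i -> i-1 (rate proportional to population)
    Q[i, i] = -Q[i].sum()          # diagonal makes each row sum to zero

# Kolmogorov forward equation dp/dt = p Q, integrated with explicit Euler.
p = np.array([1.0, 0.0, 0.0, 0.0])   # system starts in state 0
dt, t_end = 0.001, 5.0
for _ in range(int(t_end / dt)):
    p = p + dt * (p @ Q)

print("state occupation probabilities at t = 5:", p.round(4))
```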

An exhaustive statistical description of living copolymers is provided in the literature [25]. There, proceeding from the kinetic equations of the ideal model, the type of stochastic process which describes the probability measure on the set of macromolecules has been rigorously established. To the state Sα(τ) of this process corresponds the monomeric unit Mα formed at the instant τ by addition of monomer Mα to the macroradical. To the statistical ensemble of macromolecules marked by the label τ there corresponds a Markovian stochastic process with discrete time but with a set of transient states Sα(τ) constituting a continuum. Here the fundamental distinction from the Markov chain (where the number of states is discrete) is quite evident. The role of the transition probability matrix in characterizing this chain is now played by the integral operator kernel ... [Pg.185]

In order to show these aspects, let us consider discrete-time Markov chains. The matrix of transition probabilities is denoted P(ω′|ω), which is the conditional probability that the system is in the state ω′ at time n + 1 if it was in the state ω at time n. The time is counted in units of the time interval τ. The matrix of transition probabilities satisfies... [Pg.121]

The process described above is thus repeated at constant time intervals, so we have a discrete time t = nΔτ, where n is the number of displacement steps. By the rules of probability balance and by the prescriptions of Markov chain theory, the probability that a particle is in position i after n motion steps while undergoing a k-type motion is written as follows ... [Pg.217]

The easiest nontrivial example is a time-discrete Markov chain on a discrete state space. For example, take the chain with state space S = {1, 2, 3, 4} and one-step transition probabilities as illustrated in Fig. 1. [Pg.502]
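Since Fig. 1 is not reproduced here, the sketch below uses a hypothetical one-step transition matrix on S = {1, 2, 3, 4}; the point is only to show how such a chain is represented and how n-step transition probabilities follow as matrix powers.

```python
import numpy as np

# Hypothetical one-step transition matrix on S = {1, 2, 3, 4};
# the probabilities are illustrative, not those of Fig. 1.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.3, 0.5, 0.0],
    [0.0, 0.4, 0.3, 0.3],
    [0.0, 0.0, 0.6, 0.4],
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution

# n-step transition probabilities are simply matrix powers of P.
P10 = np.linalg.matrix_power(P, 10)
print("P(X_10 = j | X_0 = 1):", P10[0].round(4))
```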

Assume that we have successfully identified a metastable decomposition into the sets D1, ..., Dm for a given lag time τ. Due to our above results, the dynamics jumps from set Dk to set Dj with probability p(τ, Dk, Dj) during time τ. It is then an intriguing idea to describe the effective dynamics of the system by means of the Markov chain with discrete states D1, ..., Dm and transition matrix P = (Pij) with Pij = p(τ, Di, Dj). This effective dynamics is Markovian and thus cannot take into account that there may be memory in the system that is much longer than the time span τ used to compute the metastable decomposition. [Pg.505]
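A minimal sketch of how such an effective transition matrix might be estimated in practice, assuming we already have a trajectory labelled by the index of the metastable set occupied at each saved time step (the function name and the toy data are illustrative):

```python
import numpy as np

def coarse_grained_matrix(traj, n_sets, lag):
    """Estimate Pij = p(tau, Di, Dj) by counting jumps between the
    metastable sets at the given lag; `traj` holds the set index
    (0..n_sets-1) occupied at each saved time step."""
    counts = np.zeros((n_sets, n_sets))
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)   # row-normalize

# Toy trajectory over m = 2 metastable sets (illustrative data only).
rng = np.random.default_rng(0)
traj = rng.choice(2, size=10_000)
print(coarse_grained_matrix(traj, n_sets=2, lag=5).round(3))
```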

Markov chains or processes are named after the Russian mathematician A. A. Markov (1856-1922), who introduced the concept of chain dependence and did basic pioneering work on this class of processes [1]. A Markov process is a mathematical probabilistic model that is very useful in the study of complex systems. The essence of the model is that if the initial state of a system is known, i.e. its present state, and the probabilities of moving to other states are also given, then it is possible to predict the future state of the system while ignoring its past history. In other words, past history is immaterial for predicting the future; this is the key element in Markov chains. A distinction is made between Markov processes discrete in time and space, processes discrete in space and continuous in time, and processes continuous in space and time. This book is mainly concerned with processes discrete in time and space. [Pg.6]

Markov chains have been dealt with extensively in refs. [2-8, 15-18, 84], mainly by mathematicians. Based on the material of these articles and books, a coherent and short "distillate" is presented in the following. Detailed mathematics is avoided and numerous examples are presented, demonstrating the potential of the Markov-chain method. A distinction has been made between processes discrete in time and space, processes discrete in space and continuous in time, and processes continuous in space and time. [Pg.11]

It should be emphasized that the transition matrix, Eq.(2-91), applies to the time interval between two consecutive service completions, where the process between the two completions is a Markov chain discrete in time. The transition matrix is of a random-walk type since, apart from the first row, the elements on any one diagonal are the same. The matrix also indicates that there is no restriction on the size of the queue, which leads to a denumerably infinite chain. If, however, the size of the queue is limited to, say, N - 1 customers (including the one being served), in such a way that arriving customers who find the queue full are turned away, then the resulting Markov chain is finite with N states. Immediately after a service completion there can be at most N - 1 customers in the queue, so that the imbedded Markov chain has the state space SS = {0, 1, 2, ..., N - 1 customers} and the transition matrix ... [Pg.115]
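A hedged sketch of how such a finite imbedded chain could be assembled, assuming a[k] denotes the probability that k customers arrive during one service time; the particular values of a are illustrative, and the overflow probability mass from turned-away customers is lumped into the last state.

```python
import numpy as np

def imbedded_queue_matrix(a, N):
    """Transition matrix of the chain imbedded at service completions for a
    queue limited to N states (0..N-1 customers left behind); a[k] is the
    probability that k customers arrive during one service time."""
    def entry(k, j):
        # probability of k arrivals, overflow mass absorbed at state N-1
        if j < N - 1:
            return a[k] if k < len(a) else 0.0
        return sum(a[k:])

    P = np.zeros((N, N))
    for j in range(N):
        P[0, j] = entry(j, j)              # from an empty system
    for i in range(1, N):
        for j in range(i - 1, N):
            P[i, j] = entry(j - i + 1, j)  # need j - i + 1 arrivals
    return P

a = [0.3, 0.35, 0.2, 0.1, 0.05]            # illustrative arrival distribution
P = imbedded_queue_matrix(a, N=4)
print(P.round(3))
print("row sums:", P.sum(axis=1))          # each row is a probability distribution
```

Apart from the first row and the lumped last column, the entries along any one diagonal come out the same, which is the random-walk structure mentioned above.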

In the preceding chapter Markov chains were dealt with as processes discrete in space and time. These processes involve countably many states S1, S2, ... and depend on a discrete time parameter, that is, changes occur at fixed steps n = 0, 1, 2, ... [Pg.132]

The models discrete in space and continuous in time, as well as those continuous in space and time, often lead to non-linear differential equations for which an analytical solution is extremely difficult or impossible. In order to solve the equations, simplifications, e.g. linearization of expressions, and assumptions must be made. If this is not sufficient, one must resort to numerical solutions. This led the author to the major conclusion that there are many advantages in using Markov chains which are discrete in time and space. The major reason is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Additional reasons are detailed in Chapter 1. It will be shown later that this presentation also yields the finite difference equations of the process under consideration, on the basis of which the differential equations have been derived. [Pg.180]
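To make the unified description concrete, here is a minimal sketch of the state-vector recursion s(n+1) = s(n)P, which is exactly the finite-difference form referred to above; the three-state matrix is illustrative only.

```python
import numpy as np

# Illustrative one-step transition probability matrix.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
s = np.array([1.0, 0.0, 0.0])    # initial state vector: system starts in state 1

for _ in range(50):
    s = s @ P                    # s(n+1) = s(n) P, a finite-difference equation

print("occupation probabilities after 50 steps:", s.round(4))
```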

Throughout this chapter it has been decided to apply Markov chains which are discrete in time and space. By this approach, reactions can be presented in a unified description via a state vector and a one-step transition probability matrix. Consequently, a process is characterized solely by the probability of the system occupying or not occupying a state. In addition, complicated cases for which analytical solutions are impossible are avoided. [Pg.187]

In the subsequent sections an overview of Markov models is provided, followed by a discussion of the Markovian assumption, the discrete-time Markov chain, a mixed-effects Markov model, and a hybrid mixed-effects Markov and proportional-odds model suited to data sets that exhibit the characteristics such models can describe. [Pg.689]

Markov chain is the term used to describe a process observed at discrete intervals. However, some investigators prefer to describe Markov chains as a special case of a continuous-time Markov process: the process is only observed at discrete intervals, but in reality it is a continuous-time Markov process (16). The term Markov process can therefore be used to describe all such processes and chains collectively. [Pg.690]

When a random variable (potentially) changes states at discrete time points (e.g., every 3 minutes), and the states come from a set of discrete (often, also finite) possible states, a discrete-time Markov chain is used to describe the process. [Pg.690]

Discrete-time Markov chains are discrete-time stochastic processes with a discrete state space. Let the state of the random variable at time t be represented by yt; then the stochastic process can be represented by (y1, y2, y3, ...). [Pg.691]
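A minimal sketch of drawing such a realization (y1, y2, ...), assuming a two-state chain with an illustrative transition matrix:

```python
import numpy as np

def simulate_chain(P, y0, n_steps, rng):
    """Draw a realization (y1, y2, ..., yn) of a discrete-time Markov
    chain with transition matrix P, starting from state y0."""
    y, path = y0, []
    for _ in range(n_steps):
        y = rng.choice(len(P), p=P[y])   # next state depends only on the current one
        path.append(int(y))
    return path

P = np.array([[0.7, 0.3], [0.4, 0.6]])   # illustrative two-state chain
print(simulate_chain(P, y0=0, n_steps=15, rng=np.random.default_rng(1)))
```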

P. A. Jensen and J. F. Bard, Mathematics of discrete-time Markov chains, in Operations Research Models and Methods. Wiley, Hoboken, NJ, 2002, pp. 466-492. [Pg.697]

HMMs are statistical models for systems that behave like a Markov chain, which is a discrete-time stochastic process formed on the basis of the Markov property. [Pg.27]

A general class of algorithmic methods that involve a stochastic element, i.e., that let the computer make random decisions (Binder & Landau, 2000). An important subclass, the so-called Markov Chain Monte Carlo (MCMC) methods, can be understood as acting on Markov chains. A Markov chain is a stochastic finite automaton that consists of states and transitions between states (Feller, 1968). At each time we consider ourselves as resident in a certain state of the Markov chain. At discrete time steps we leave this state and move to another state of the chain. This transition is taken with a certain probability characteristic of the given Markov chain; the probability depends only on the two states involved in the transition. Often one can move only to a state in a small set of states from a given state. This set of states is called the neighborhood of the state from which the move originates. [Pg.428]
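A hedged sketch of the idea: a Metropolis-type MCMC sampler on a small ring of states with a nearest-neighbor neighborhood, whose transition probabilities depend only on the two states involved. The target weights are illustrative, and the acceptance rule is the standard Metropolis choice, not a construction taken from the cited sources.

```python
import numpy as np

def metropolis(target, n_states, n_steps, rng):
    """Minimal Metropolis sampler on states 0..n_states-1 with a
    nearest-neighbor proposal: from state i we propose i-1 or i+1 and
    accept with probability min(1, target[j]/target[i]), so the chain's
    stationary distribution is proportional to `target`."""
    i, visits = 0, np.zeros(n_states)
    for _ in range(n_steps):
        j = (i + rng.choice([-1, 1])) % n_states   # move within the neighborhood
        if rng.random() < min(1.0, target[j] / target[i]):
            i = j                                  # accept the transition
        visits[i] += 1
    return visits / n_steps

target = np.array([1.0, 2.0, 4.0, 2.0, 1.0])       # unnormalized target weights
print(metropolis(target, 5, 100_000, np.random.default_rng(2)).round(3))
```

After many steps the visit frequencies approach the normalized target weights, here approximately (0.1, 0.2, 0.4, 0.2, 0.1).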

Then a discrete-time Markov chain model, with the system observed just prior to the beginning of transfer, can be used to show that the line efficiency is given by... [Pg.1647]

In this subsection, we develop a very simple queueing model. This model is a Markov chain (L(t)) representing the number of jobs present in a queueing system observed at regular discrete times t = 0, 1, 2, .... The state space is {0, 1, 2, ...}. There are two types of transitions possible: arrivals and departures. We write p for the probability that a job arrives in the next time step. We write q for the probability that a job will complete service in the next time step, assuming that there is at least one job present (L(t) > 0). If we write r for 1 - p - q, which is the probability of no state change when there is at least one job present, then the transition matrix of the chain is... [Pg.2153]
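A minimal sketch of this transition matrix, truncated at n_max jobs so it can be printed; the truncation and the particular values of p and q are assumptions for illustration.

```python
import numpy as np

def queue_transition_matrix(p, q, n_max):
    """Transition matrix of the single-queue chain L(t), truncated at
    n_max jobs.  p = arrival probability per step, q = service-completion
    probability per step, r = 1 - p - q."""
    r = 1.0 - p - q
    P = np.zeros((n_max + 1, n_max + 1))
    P[0, 0], P[0, 1] = 1.0 - p, p        # empty system: only an arrival changes state
    for i in range(1, n_max + 1):
        P[i, i - 1] = q                  # departure
        P[i, i] = r                      # no state change
        if i < n_max:
            P[i, i + 1] = p              # arrival
        else:
            P[i, i] += p                 # truncation: arrivals at the boundary are lost
    return P

print(queue_transition_matrix(p=0.3, q=0.5, n_max=4).round(2))
```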

By the Markov property of X, the process X(0), X(h), X(2h), ... that results when we observe X at intervals of length h is a discrete-time Markov chain with transition matrix P(h). If we think of h as a small time interval, then equation (26) shows that the transition probabilities of this discrete-time chain are given approximately by... [Pg.2154]

We have related the continuous-time chain to a discrete-time chain with a fast clock, whose time unit is the small quantity h but whose transition probabilities Pij(h) are proportionately small for i ≠ j by (29). This allows us to analyze the continuous-time chain using discrete-time results. All the basic calculations for continuous-time, finite-state Markov chains may be carried out by taking a limit as h → 0 of the discrete-time approximation. For example, the transition matrix P(t), defined in (28), may be derived as follows. We divide the time interval [0, t] into a large number N of short intervals of length h = t/N, so that the transition matrix P(t) is the N-step transition matrix corresponding to P(h). It follows from (29) that P(t) is approximately the N-step transition matrix corresponding to the transition matrix I + hQ. This approximation becomes exact as h → 0, and we have... [Pg.2154]
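The limit can be checked numerically. The sketch below compares (I + hQ)^N with the matrix exponential exp(Qt) for a hypothetical two-state generator; the error shrinks as h = t/N goes to 0.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator of a two-state continuous-time chain.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
t = 1.5

# Discrete-time approximation: P(t) ~ (I + hQ)^N with h = t/N.
for N in (10, 100, 10_000):
    h = t / N
    approx = np.linalg.matrix_power(np.eye(2) + h * Q, N)
    print(N, np.abs(approx - expm(Q * t)).max())   # error shrinks as h -> 0
```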

This relates it to the parameters of the chain: the entries of its generator Q. These are the steady-state equations in the continuous-time case. For finite-state irreducible chains, these equations have a unique solution whose components add to 1, and this solution is the steady-state distribution v. As in the discrete-time case, it is also the limiting distribution of the Markov chain and gives the long-run proportion of time spent in each state. These results extend to the infinite-state case, assuming positive recurrence, as in Subsection 3.3. [Pg.2156]
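A minimal sketch of solving the continuous-time steady-state equations vQ = 0, with the components of v adding to 1, for a hypothetical two-state generator; one redundant balance equation is replaced by the normalization condition.

```python
import numpy as np

# Illustrative irreducible generator of a two-state chain.
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])

A = Q.T.copy()
A[-1, :] = 1.0                   # replace last balance equation with v1 + v2 = 1
b = np.zeros(len(Q))
b[-1] = 1.0
v = np.linalg.solve(A, b)
print("steady-state distribution:", v)   # expected [1/3, 2/3] for this Q
```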

Discrete-time Markov chain simulation is used to forecast population ageing. It makes it possible to identify the care needs of elderly people and the workload in the short, medium and long term, and to predict future costs. An application is presented in [8]. [Pg.91]

So far we have considered a single mesoscopic equation for the particle density and a corresponding random walk model, a Markov process with continuous states in discrete time. It is natural to extend this analysis to a system of mesoscopic equations for the densities of particles Pi(x, n), i = 1, 2, ..., m. To describe the microscopic movement of particles we need a vector process (Xn, Sn), where Xn is the position of the particle at time n and Sn its state at time n. Sn is a sequence of random variables taking one of m possible values at time n. One can introduce the probability density Pi(x, n) = ∂P(Xn < x, Sn = i)/∂x and an imbedded Markov chain with the m × m transition matrix H = (hij), so that the matrix entry hij corresponds to the conditional probability of a transition from state i to state j. [Pg.59]
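A hedged simulation sketch of such a vector process (Xn, Sn): the internal state follows the imbedded chain H, and the spatial displacement at each step is drawn from a state-dependent jump distribution. The matrix H and the Gaussian jumps are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Imbedded chain between m = 2 internal states (illustrative H).
H = np.array([[0.9, 0.1],
              [0.2, 0.8]])
jump_scale = [0.5, 3.0]        # state-dependent width of the spatial jumps

x, s, positions = 0.0, 0, []
for n in range(1000):
    s = rng.choice(2, p=H[s])                 # Sn: internal state update via H
    x += rng.normal(scale=jump_scale[s])      # Xn: state-dependent displacement
    positions.append(x)

print("final position:", round(positions[-1], 3))
```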

As mentioned on page 61, CTRWs are known as semi-Markov processes in the mathematical literature. In this section we provide a brief account of semi-Markov processes. They were introduced by P. Levy and W. L. Smith [253, 415]. Recall that for a continuous-time Markov chain, the transitions between states at random times Tn are determined by the discrete chain Xn with the transition matrix H = (hij). The waiting time τn = Tn+1 - Tn for a given state i is exponentially distributed with the transition rate ki, which depends only on the current state i. The natural generalization is to allow arbitrary distributions for the waiting times. This leads to a semi-Markov process. The reason for such a name is that the underlying process is a two-component Markov chain (Xn, Tn). Here the random sequence Xn represents the state at the nth transition, and Tn is the time of the nth transition. Obviously, ... [Pg.67]
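A minimal sketch of simulating the two-component chain (Xn, Tn), with deliberately non-exponential waiting times to emphasize the semi-Markov generalization; the imbedded matrix H and the waiting-time distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

H = np.array([[0.0, 1.0],
              [0.6, 0.4]])             # illustrative imbedded chain Xn

def waiting_time(state):
    # Arbitrary (non-exponential) waiting-time distributions per state,
    # which is exactly what distinguishes a semi-Markov process.
    return rng.pareto(2.5) if state == 0 else rng.uniform(0.1, 1.0)

x, t, events = 0, 0.0, []
for n in range(10):
    t += waiting_time(x)               # T(n+1) = Tn + waiting time in state x
    x = rng.choice(2, p=H[x])          # next state drawn from the imbedded chain
    events.append((round(t, 3), int(x)))

print(events)                          # realization of the two-component chain (Xn, Tn)
```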


