
Continuous time state transition matrix

As opposed to the liquid-crystal transformation, the liquid-glass transformation is not a phase transition and therefore it cannot be characterized by a certain transition temperature. Nevertheless, the term "vitrification temperature", Tv, is widely used. It has the following physical meaning. As opposed to crystallization, vitrification occurs when the temperature changes continuously, i.e. over some temperature interval, rather than jump-wise. Inside this interval, the sample behaves as a liquid relative to some of the processes occurring in it, and as a solid relative to other processes occurring in it. The character of this behaviour is determined by the ratio between the characteristic time of the process, t, and the characteristic relaxation time of the matrix, τ = η/G, where η is the macroscopic viscosity and G is the matrix elasticity modulus. If t < τ, then the matrix should be considered as a solid relative to the process, and if t > τ it should be considered as a liquid. The relation t/τ = 1 can be considered as the condition of the matrix transition from the liquid to the solid (vitreous) state, and the temperature Tv at which this condition is realized as the temperature of vitrification. Evidently, Tv determined by such means will be somewhat different for processes with different characteristic times t. However, due to the rapid (exponential) dependence of the viscosity η on T, the dependence of Tv on t (i.e. on the kind of process) will be comparatively weak (logarithmic)... [Pg.139]
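A minimal numerical sketch of this criterion follows. The Arrhenius-type viscosity law and every parameter value (eta0, Ea, G, the process times t) are illustrative assumptions, not data from the excerpt; the point is only that the temperature at which t/τ = 1 shifts logarithmically with the process time t.

```python
import numpy as np

# Hypothetical Arrhenius-type viscosity law eta(T) = eta0 * exp(Ea / (R*T));
# eta0, Ea and the elastic modulus G are illustrative values, not data from the text.
R = 8.314          # J / (mol K)
eta0 = 1e-5        # Pa s, pre-exponential factor (assumed)
Ea = 1.5e5         # J / mol, activation energy (assumed)
G = 1e9            # Pa, matrix elasticity modulus (assumed)

def relaxation_time(T):
    """Maxwell relaxation time tau = eta / G at temperature T."""
    eta = eta0 * np.exp(Ea / (R * T))
    return eta / G

def vitrification_temperature(t_process):
    """Temperature at which t_process / tau = 1 for this viscosity law."""
    # t = eta0 * exp(Ea/(R*Tv)) / G  =>  Tv = Ea / (R * ln(G * t / eta0))
    return Ea / (R * np.log(G * t_process / eta0))

for t in (1.0, 1e3, 1e6):           # characteristic process times, s (assumed)
    Tv = vitrification_temperature(t)
    print(f"t = {t:8.0e} s  ->  Tv ~ {Tv:6.1f} K")
# Tv changes only logarithmically with t, as stated in the excerpt.
```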

Just as the rule whereby a discrete-time chain evolves is defined by a matrix P, the transition matrix, the rule whereby a continuous-time chain evolves is defined by a matrix Q, the generator matrix. The off-diagonal entry q_ij, i ≠ j, of Q has the interpretation that q_ij h is approximately the probability that the chain, starting from state i at a time t, makes a transition to state j by time t + h, where h is small... [Pg.2154]
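A short sketch of this interpretation, using a hypothetical 3-state generator matrix; the rates are illustrative, not taken from the text:

```python
import numpy as np

# Hypothetical 3-state generator matrix Q (rows sum to zero).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])

h = 1e-3                  # a small time step
P_h = np.eye(3) + h * Q   # P(h) ~ I + hQ: off-diagonal entry (i, j) ~ q_ij * h

print(P_h)
print(P_h.sum(axis=1))    # each row still sums to 1, since the rows of Q sum to 0
```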

We have related the continuous-time chain to a discrete-time chain with a fast clock, whose time unit is the small quantity h but whose transition probabilities P_ij(h) are proportionately small for i ≠ j by (29). This allows us to analyze the continuous-time chain using discrete-time results. All the basic calculations for continuous-time, finite-state Markov chains may be carried out by taking a limit as h → 0 of the discrete-time approximation. For example, the transition matrix P(t), defined in (28), may be derived as follows. We divide the time interval [0, t] into a large number N of short intervals of length h = t/N, so that the transition matrix P(t) is the N-step transition matrix corresponding to P(h). It follows from (29) that P(t) is approximately the N-step transition matrix corresponding to the transition matrix I + hQ. This approximation becomes exact as h → 0, and we have... [Pg.2154]
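The limit described here is the matrix exponential, P(t) = exp(tQ). A sketch comparing the N-step approximation (I + hQ)^N with exp(tQ), using the same illustrative generator Q as above (Q and t are assumptions made for the sketch):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative generator matrix and time horizon (assumed values).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])
t = 2.0

P_exact = expm(t * Q)                  # P(t) = exp(tQ)

for N in (10, 100, 10_000):
    h = t / N
    P_approx = np.linalg.matrix_power(np.eye(3) + h * Q, N)   # (I + hQ)^N
    err = np.abs(P_approx - P_exact).max()
    print(f"N = {N:6d}  max error = {err:.2e}")
# The N-step discrete-time approximation converges to exp(tQ) as h -> 0.
```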

So far we have considered a single mesoscopic equation for the particle density and a corresponding random walk model, a Markov process with continuous states in discrete time. It is natural to extend this analysis to a system of mesoscopic equations for the densities of particles P_i(x, n), i = 1, 2, ..., m. To describe the microscopic movement of particles we need a vector process (X_n, S_n), where X_n is the position of the particle at time n and S_n its state at time n. S_n is a sequence of random variables taking one of m possible values at time n. One can introduce the probability density P_i(x, n) = ∂P(X_n ≤ x, S_n = i)/∂x and an imbedded Markov chain with the m × m transition matrix H = (h_ij), so that the matrix entry h_ij corresponds to the conditional probability of a transition from state i to state j. [Pg.59]
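A minimal simulation sketch of such a vector process (X_n, S_n); the two-state matrix H, the state-dependent displacements, and the noise scale are all assumed values used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedded chain with m = 2 internal states; H[i, j] is the
# conditional probability of a transition from state i to state j (illustrative).
H = np.array([[0.9, 0.1],
              [0.2, 0.8]])
jump_mean = np.array([+1.0, -1.0])   # assumed mean displacement in each state

def simulate(n_steps):
    """Simulate the vector process (X_n, S_n) for one particle."""
    x, s = 0.0, 0
    for _ in range(n_steps):
        s = rng.choice(2, p=H[s])            # internal state from the chain H
        x += rng.normal(jump_mean[s], 1.0)   # displacement depends on the state
    return x, s

print(simulate(1000))   # final position and internal state after 1000 steps
```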

As mentioned on page 61, CTRWs are known as semi-Markov processes in the mathematical literature. In this section we provide a brief account of semi-Markov processes. They were introduced by P. Lévy and W. L. Smith [253,415]. Recall that for a continuous-time Markov chain, the transitions between states at random times T_n are determined by the discrete chain X_n with the transition matrix H = (h_ij). The waiting time T_n − T_{n−1} for a given state i is exponentially distributed with the transition rate k_i, which depends only on the current state i. The natural generalization is to allow arbitrary distributions for the waiting times. This leads to a semi-Markov process. The reason for such a name is that the underlying process is a two-component Markov chain (X_n, T_n). Here the random sequence X_n represents the state at the nth transition, and T_n is the time of the nth transition. Obviously,... [Pg.67]
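A sketch contrasting the Markov case (exponential waiting times with state-dependent rates k_i) with the semi-Markov generalization (an arbitrary, here heavy-tailed, waiting-time distribution). The embedded chain H, the rates, and the Pareto parameters are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative embedded chain H and per-state exponential rates k_i (assumed).
H = np.array([[0.0, 1.0],
              [0.5, 0.5]])
rates = np.array([2.0, 0.5])

def simulate(n_transitions, waiting_time):
    """Two-component chain (X_n, T_n): states and the times of the transitions."""
    s, t = 0, 0.0
    history = [(s, t)]
    for _ in range(n_transitions):
        t += waiting_time(s)                 # waiting time in the current state
        s = rng.choice(2, p=H[s])            # next state from the embedded chain
        history.append((s, t))
    return history

# Markov case: exponential waiting times with state-dependent rate k_i.
markov = simulate(5, lambda s: rng.exponential(1.0 / rates[s]))
# Semi-Markov case: any other distribution, e.g. a heavy-tailed Pareto (assumed).
semi_markov = simulate(5, lambda s: rng.pareto(1.5) + 1.0)

print(markov)
print(semi_markov)
```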

The General Markov Reward Model considers a continuous-time Markov chain with a set of states and a transition intensity matrix... [Pg.1510]

Models that are discrete in space and continuous in time, as well as those continuous in both space and time, often lead to non-linear differential equations for which an analytical solution is extremely difficult or impossible. In order to solve the equations, simplifications, e.g. linearization of expressions, and assumptions must be made. However, if this is not sufficient, one must apply numerical solutions. This led the author to a major conclusion that there are many advantages in using Markov chains which are discrete in time and space. The major reason is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Additional reasons are detailed in Chapter 1. It will be shown later that this presentation also coincides with the fact that it yields the finite-difference equations of the process under consideration, on the basis of which the differential equations have been derived. [Pg.180]
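This unified description amounts to repeatedly multiplying a state vector by the one-step transition probability matrix. A minimal sketch with an assumed 3-state matrix P (the probabilities are illustrative only):

```python
import numpy as np

# Hypothetical one-step transition probability matrix for three states.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

s = np.array([1.0, 0.0, 0.0])     # initial state vector: all probability in state 1

for n in range(50):
    s = s @ P                      # one-step update: s(n+1) = s(n) P
print(s)                           # distribution over the states after 50 steps
```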

Definitions. The basic elements of Markov chains associated with Eq.(2-24) are the system, the state space, the initial state vector and the one-step transition probability matrix. Considering refs.[26-30], each of the elements will be defined in the following with special emphasis on chemical reactions occurring in a batch perfectly-mixed reactor or in a single continuous plug-flow reactor. In the latter case, which may be simulated by perfectly-mixed reactors in series, all species reside in the reactor for the same time. [Pg.187]
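To make these elements concrete, here is a sketch for the simplest case of a first-order reaction A → B in a batch, perfectly mixed reactor. The rate constant k and the time step dt are assumed values, and the one-step transition probability k·dt is the usual finite-difference discretization rather than anything specific to the cited references.

```python
import numpy as np

# Assumed kinetic parameters for the sketch.
k = 0.1        # 1/s, first-order rate constant
dt = 0.5       # s, one step of the chain (must satisfy k*dt < 1)

# State space: 0 = A, 1 = B (B is absorbing). One-step transition probability matrix.
P = np.array([[1.0 - k * dt, k * dt],
              [0.0,          1.0  ]])

s = np.array([1.0, 0.0])           # initial state vector: pure A
for n in range(1, 21):
    s = s @ P                      # state vector after n steps
# Compare with the analytical solution C_A(t)/C_A(0) = exp(-k t) at t = 20*dt.
print(s[0], np.exp(-k * 20 * dt))
```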

In Section 5.1 we introduce stochastic processes. In Section 5.2 we introduce Markov chains and define some terms associated with them. In Section 5.3 we find the n-step transition probability matrix in terms of the one-step transition probability matrix for time-invariant Markov chains with a finite state space. Then we investigate when a Markov chain has a long-run distribution and discover the relationship between the long-run distribution of the Markov chain and the steady-state equation. In Section 5.4 we classify the states of a Markov chain with a discrete state space, and find that all states in an irreducible Markov chain are of the same type. In Section 5.5 we investigate sampling from a Markov chain. In Section 5.6 we look at time-reversible Markov chains and discover the detailed balance conditions, which are needed to find a Markov chain with a given steady-state distribution. In Section 5.7 we look at Markov chains with a continuous state space to determine the features analogous to those for discrete-space Markov chains. [Pg.101]
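The two calculations highlighted here, the n-step matrix as a matrix power and the steady state from the equation π = πP, can be sketched as follows; the transition matrix is an assumed example:

```python
import numpy as np

# Illustrative one-step transition matrix for a finite, time-invariant chain.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# n-step transition probabilities are simply the matrix power P^n.
P_100 = np.linalg.matrix_power(P, 100)
print(P_100)          # every row approaches the long-run distribution

# Steady state: solve pi = pi P with sum(pi) = 1 via the left eigenvector of P
# belonging to eigenvalue 1 (the largest eigenvalue of a stochastic matrix).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi)             # stationary distribution (here [0.25, 0.5, 0.25])
```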


See other pages where Continuous time state transition matrix is mentioned: [Pg.109]    [Pg.142]    [Pg.224]    [Pg.220]    [Pg.1606]    [Pg.406]    [Pg.522]    [Pg.108]    [Pg.94]    [Pg.569]    [Pg.393]    [Pg.317]    [Pg.340]    [Pg.318]    [Pg.236]    [Pg.322]    [Pg.2877]    [Pg.367]    [Pg.393]    [Pg.336]    [Pg.60]    [Pg.1129]    [Pg.1355]    [Pg.249]    [Pg.117]    [Pg.158]    [Pg.44]    [Pg.367]    [Pg.54]   
See also in source #XX -- [ Pg.269 ]







Continuous time

Matrix continuity

Matrix timing

State, continuity

State-transition matrix

Transit time

Transition continuous

Transition matrix

Transition time
