Big Chemical Encyclopedia


Discrete state-space processes

We will only consider state spaces with a countable number of states. First the transition probabilities Pij(s, t) and the absolute distributions are defined, then the evolution equations are derived. [Pg.99]

Under mild conditions the following limits (infinitesimal probabilities) exist  [Pg.99]

The infinitesimal transition probabilities pjk(t) satisfy pjj(t) ≤ 0 and Σk pjk(t) = 0 for all j ∈ I. Using the Chapman-Kolmogorov equation we can derive [Pg.99]
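The Chapman-Kolmogorov relation is easy to check numerically. As a minimal sketch (the two-state generator and its rates a, b are invented for illustration, not taken from the source), the closed-form transition matrix of a two-state continuous-time chain satisfies the semigroup property P(s+t) = P(s)P(t):

```python
import numpy as np

def P(t, a=2.0, b=3.0):
    """Closed-form transition matrix P(t) = exp(tQ) for the two-state
    generator Q = [[-a, a], [b, -b]] (rates a, b are illustrative)."""
    s = a + b
    e = np.exp(-s * t)
    return np.array([[b + a * e, a - a * e],
                     [b - b * e, a + b * e]]) / s

# Chapman-Kolmogorov / semigroup property: P(s + t) = P(s) P(t)
assert np.allclose(P(0.7), P(0.3) @ P(0.4))
# each row of P(t) is a probability distribution
assert np.allclose(P(2.0).sum(axis=1), 1.0)
```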

Equations of this kind have to be specified to obtain continuous-time, discrete state-space models of chemical reactions. [Pg.99]


The terminology is nonstandard, and in the physics literature the Kramers-Moyal expansion is given as a (nonsystematic) procedure for approximating discrete state-space processes by continuous processes. The point that we want to emphasise here is that, even in the case of a continuous state space, the process itself can be noncontinuous when the Lindeberg condition is not fulfilled. The functions for the higher coefficients do not necessarily vanish. [Pg.98]

Discrete-time Markov chains are discrete-time stochastic processes with a discrete state space. Let the state of the random variable at time t be represented by yt; then the stochastic process can be represented by (y1, y2, y3, ...). [Pg.691]

Discrete state-space stochastic models of chemical reactions can be identified with Markovian jump processes. In this case the temporal evolution can be described by the master equation ... [Pg.10]
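The master equation itself is not reproduced in this excerpt, but the jump process behind it can be simulated exactly by Gillespie's stochastic simulation algorithm. A minimal sketch for an invented first-order decay reaction A → ∅ with rate constant c per molecule (the reaction and all numbers are illustrative, not from the source):

```python
import random

def gillespie_decay(n0, c, t_end, seed=0):
    """Exact (Gillespie) simulation of the Markovian jump process behind
    the master equation for first-order decay A -> 0; c is the decay
    rate per molecule (illustrative values)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    path = [(0.0, n0)]
    while n > 0:
        t += rng.expovariate(c * n)   # exponential waiting time, propensity c*n
        if t > t_end:
            break
        n -= 1                        # one decay event fires
        path.append((t, n))
    return path

path = gillespie_decay(n0=100, c=1.0, t_end=5.0)
```

Each entry of `path` is a (time, copy-number) pair; the trajectory is one realization of the probability distribution that the master equation propagates.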

An (A, φ) dynamic system is deterministic if knowing the state of the system at one time means that the system is uniquely specified for all t ∈ T. In many cases, the state of a system can be assigned to a set of values only with a certain probability distribution; therefore the future behaviour of the system can be determined stochastically. Discrete-time, discrete state-space (first-order) Markov processes (i.e. Markov chains) are defined by the formula... [Pg.18]
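The defining formula is elided in the excerpt, but the property it expresses, that the distribution of the next state depends only on the current state, is easy to illustrate. A sketch with an invented two-state one-step transition matrix:

```python
import random

def sample_chain(P, x0, n_steps, seed=1):
    """Sample a discrete-time, discrete state-space Markov chain: the
    next state is drawn from row x of P, i.e. it depends only on the
    current state x (first-order Markov property)."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        u, acc = rng.random(), 0.0
        for j, p in enumerate(P[x]):   # inverse-CDF draw from row x
            acc += p
            if u < acc:
                x = j
                break
        path.append(x)
    return path

P = [[0.9, 0.1],     # rows are conditional distributions and sum to 1
     [0.5, 0.5]]
path = sample_chain(P, x0=0, n_steps=1000)
```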

Much of stochastic physics investigates the approximation of jump processes by diffusion processes, i.e. of the master equation by a Fokker-Planck equation, since the latter is easier to solve. The rationale behind this procedure is the fact that the usual deterministic (CCD) and stochastic (CDS) models differ from each other in two respects. The CDS model offers a stochastic description with a discrete state space. In most applications, where the number of particles is large and may approach Avogadro's number, the discreteness should be of minor importance. Since the CCD model adopts a continuous state space, it is quite natural to adopt the CCS model as an approximation for fluctuations. [Pg.110]

One of the most extensively discussed topics of the theory of stochastic physics is whether the evolution equations of the discrete state-space stochastic processes, i.e. the master equations of the jump processes, can be approximated asymptotically by Fokker-Planck equations when the volume of the system increases. We certainly do not want to deal with the details of this problem, since the literature is comprehensive. Many opinions about this question have been expressed in a discussion (published in Nicolis et al., 1984). However, some comments have to be made. [Pg.110]

We consider the discrete state space Ω ≡ {Ω1, Ω2, ..., Ων^(N-2)} of the rotational isomeric configurations of a chain of N bonds having ν states accessible to each bond. The stochastic process of ν^(N-2) × ν^(N-2) transitions between those configurations is the object of study. In the following, the terms 'states' and 'system' will be used interchangeably for 'configurations' and 'chain', respectively. [Pg.155]

A linear, discrete, state-space model of a process is usually described by the following equations. [Pg.371]
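The equations themselves are not reproduced in the excerpt; the conventional linear discrete state-space form is x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k]. A minimal simulation sketch (the numeric A, B, C, D values below are invented for illustration):

```python
import numpy as np

def simulate_ss(A, B, C, D, x0, u_seq):
    """Simulate x[k+1] = A x[k] + B u[k],  y[k] = C x[k] + D u[k]."""
    x, ys = np.asarray(x0, float), []
    for u in u_seq:
        ys.append(C @ x + D * u)   # output before the state update
        x = A @ x + B * u
    return np.array(ys)

# A stable second-order example (values invented); step input of 1.0
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
D = 0.0
y = simulate_ss(A, B, C, D, x0=[0.0, 0.0], u_seq=[1.0] * 50)
```

With both eigenvalues of A inside the unit circle, the step response settles toward a finite steady-state gain.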

So far we have invoked Markov processes in discrete state space as the natural model of fluctuations, since fluctuations are a consequence of the discrete nature of the microscopic processes underlying the macroscopic evolution laws. [Pg.185]

In Section 5.1 we introduce stochastic processes. In Section 5.2 we introduce Markov chains and define some terms associated with them. In Section 5.3 we find the n-step transition probability matrix in terms of the one-step transition probability matrix for time-invariant Markov chains with a finite state space. Then we investigate when a Markov chain has a long-run distribution and discover the relationship between the long-run distribution of the Markov chain and the steady-state equation. In Section 5.4 we classify the states of a Markov chain with a discrete state space, and find that all states in an irreducible Markov chain are of the same type. In Section 5.5 we investigate sampling from a Markov chain. In Section 5.6 we look at time-reversible Markov chains and discover the detailed balance conditions, which are needed to find a Markov chain with a given steady-state distribution. In Section 5.7 we look at Markov chains with a continuous state space to determine the features analogous to those for discrete-space Markov chains. [Pg.101]
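Two of the computations listed above can be sketched in a few lines: the n-step transition matrix is the n-th power of the one-step matrix, and the long-run distribution solves the steady-state equation πP = π. The two-state matrix below is invented for illustration; for any irreducible two-state chain the detailed balance condition π_i P_ij = π_j P_ji holds automatically:

```python
import numpy as np

P = np.array([[0.9, 0.1],      # invented one-step transition matrix
              [0.3, 0.7]])

# n-step transition probabilities: the n-th power of the one-step matrix
P10 = np.linalg.matrix_power(P, 10)

# long-run distribution: left eigenvector for eigenvalue 1,
# i.e. the solution of the steady-state equation pi P = pi, sum(pi) = 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# detailed balance (time reversibility): pi_i P_ij = pi_j P_ji
assert np.allclose(pi[0] * P[0, 1], pi[1] * P[1, 0])
# the rows of P^n converge to the long-run distribution
assert np.allclose(P10, np.vstack([pi, pi]), atol=1e-2)
```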

The combinatorial problem is represented by a discrete decision process (DDP) (Ibaraki, 1978) where the underlying information in the problem is captured by an explicit state-space model (Nilsson, 1980). [Pg.275]

It may be useful to point out a few topics that go beyond a first course in control. With certain processes, we cannot take data continuously, but rather at certain selected slow intervals (cf. titration in freshman chemistry). These are called sampled-data systems. With computers, the analysis evolves into a new area of its own: discrete-time or digital control systems. Here, differential equations and the Laplace transform no longer work. The mathematical techniques for handling discrete-time systems are difference equations and the z-transform. Furthermore, there are multivariable and state-space control, which we will encounter in a brief introduction. Beyond the introductory level are optimal control, nonlinear control, adaptive control, stochastic control, and fuzzy logic control. Do not lose the perspective that control is an immense field. Classical control may appear insignificant in comparison, but we have to start somewhere, and onward we crawl. [Pg.8]

Method of Lines. The method of lines is used to solve partial differential equations (12) and was already used by Cooper (13) and Tsuruoka (14) in the derivation of state-space models for the dynamics of particulate processes. In the method, the size axis is discretized and the partial derivative ∂[G(L,t)n(L,t)]/∂L is approximated by a finite difference. Several choices are possible for the accuracy of the finite difference. The method will be demonstrated for a fourth-order central difference and an equidistant grid. For non-equidistant grids, the Lagrange interpolation formulae as described in (15) are to be used. [Pg.148]
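A sketch of the discretization (not the authors' code; the boundary treatment below is a simplified low-order placeholder, and the test flux G ≡ 1 is invented): on an equidistant grid with spacing h, the fourth-order central difference for the flux f = Gn is f'(L_i) ≈ (-f_{i+2} + 8f_{i+1} - 8f_{i-1} + f_{i-2})/(12h):

```python
import numpy as np

def dndt(n, L, G):
    """Method of lines for dn/dt = -d[G(L) n]/dL on an equidistant grid,
    using a fourth-order central difference in the interior and simple
    low-order one-sided differences near the boundaries."""
    h = L[1] - L[0]
    f = G(L) * n                      # flux G(L) * n(L)
    d = np.zeros_like(n)
    # 4th-order central difference, interior points only
    d[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)
    d[0] = (f[1] - f[0]) / h
    d[1] = (f[2] - f[0]) / (2 * h)
    d[-2] = (f[-1] - f[-3]) / (2 * h)
    d[-1] = (f[-1] - f[-2]) / h
    return -d

# Example: G = 1 and n = sin(L), so dn/dt should equal -cos(L)
L = np.linspace(0.0, np.pi, 101)
rate = dndt(np.sin(L), L, lambda x: np.ones_like(x))
```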

Each of the optional dynamical models mentioned above involves a homogeneous Markov process Xt, t ∈ T, in either continuous or discrete time on some state space X. The motion of Xt is given in terms of the stochastic... [Pg.499]

In the typical case the dynamical process under investigation lives on a continuous state space, such that the transfer operator does not have the form of a nice stochastic matrix. Therefore, discretization of the transfer operator is needed to yield a stochastic matrix with which one can proceed as in the example above. [Pg.503]

Markov chains or processes are named after the Russian mathematician A. A. Markov (1856-1922), who introduced the concept of chain dependence and did basic pioneering work on this class of processes [1]. A Markov process is a mathematical probabilistic model that is very useful in the study of complex systems. The essence of the model is that if the initial state of a system is known, i.e. its present state, and the probabilities to move to other states are also given, then it is possible to predict the future state of the system while ignoring its past history. In other words, past history is immaterial for predicting the future; this is the key element in Markov chains. A distinction is made between Markov processes discrete in time and space, processes discrete in space and continuous in time, and processes continuous in space and time. This book is mainly concerned with processes discrete in time and space. [Pg.6]

It should be emphasized that the transition matrix, Eq.(2-91), applies to the time interval between two consecutive service completions, where the process between the two completions is of a Markov-chain type discrete in time. The transition matrix is of random-walk type since, apart from the first row, the elements on any one diagonal are the same. The matrix also indicates that there is no restriction on the size of the queue, which leads to a denumerably infinite chain. If, however, the size of the queue is limited to, say, N - 1 customers (including the one being served), in such a way that arriving customers who find the queue full are turned away, then the resulting Markov chain is finite with N states. Immediately after a service completion there can be at most N - 1 customers in the queue, so that the imbedded Markov chain has the state space SS = {0, 1, 2, ..., N - 1 customers} and the transition matrix ... [Pg.115]
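The structure described, identical elements along each diagonal apart from the first row, with the tail mass lumped into the last column when the queue capacity is N - 1, can be sketched as follows. For concreteness the sketch assumes exponential service with Poisson arrivals, so that the probability a_k of k arrivals during one service is geometric; in the general case the a_k come from the service-time distribution, and the numeric rates are invented:

```python
import numpy as np

def embedded_queue_matrix(N, lam=0.8, mu=1.0):
    """Transition matrix of the imbedded Markov chain of a queue with
    room for at most N-1 customers.  a_k = P(k arrivals during one
    service) = rho**k * (1 - rho) with rho = lam/(lam+mu) (exponential
    service assumed for this sketch).  Arrivals finding the queue full
    are turned away, so the tail mass is lumped into the last column."""
    rho = lam / (lam + mu)
    a = [(rho ** k) * (1 - rho) for k in range(N)]
    P = np.zeros((N, N))
    for k in range(N):                 # first row: service left queue empty
        P[0, k] = a[k]
    for i in range(1, N):              # apart from the first row, the
        for k in range(N - i + 1):     # diagonals carry the same a_k
            j = i - 1 + k
            if j < N:
                P[i, j] = a[k]
    P[:, -1] += 1.0 - P.sum(axis=1)    # blocked customers: lump the tail
    return P

P = embedded_queue_matrix(N=5)
```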

In the preceding chapter Markov chains were dealt with as processes discrete in space and time. These processes involve countably many states S1, S2, ... and depend on a discrete time parameter, that is, changes occur at fixed steps n = 0, 1, 2, ...

In the following, we derive the Kolmogorov differential equation on the basis of a simple model and report its various versions. In principle, this equation gives the rate at which a certain state is occupied by the system at a certain time. This equation is of fundamental importance for obtaining models discrete in space and continuous in time. The models discussed later are the Poisson Process, the Pure Birth Process, the Polya Process, the Simple Death Process and the Birth-and-Death Process. In Section 2.1-3 this equation, i.e. Eq.(2-30), was derived for Markov chains discrete in space and time. [Pg.133]
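For the Poisson process the Kolmogorov (forward) differential equations reduce to dP_n/dt = λP_{n-1} - λP_n, whose solution is the Poisson distribution P_n(t) = e^{-λt}(λt)^n/n!. A sketch that integrates the equations with a simple Euler step and compares with the exact solution (λ, t, and the step count are illustrative choices, not from the source):

```python
import math

def poisson_forward(lam, t, n_max, steps=20000):
    """Euler integration of the Kolmogorov forward equations of the
    Poisson process: dP_n/dt = lam*P_{n-1} - lam*P_n, P_0(0) = 1."""
    P = [1.0] + [0.0] * n_max
    dt = t / steps
    for _ in range(steps):
        P = [P[n] + dt * (lam * (P[n - 1] if n else 0.0) - lam * P[n])
             for n in range(n_max + 1)]
    return P

P = poisson_forward(lam=2.0, t=1.0, n_max=20)
exact = [math.exp(-2.0) * 2.0 ** n / math.factorial(n) for n in range(21)]
```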

The models discrete in space and continuous in time, as well as those continuous in space and time, often lead to non-linear differential equations for which an analytical solution is extremely difficult or impossible. In order to solve the equations, simplifications, e.g. linearization of expressions, and assumptions must be made. If this is not sufficient, one must resort to numerical solutions. This led the author to a major conclusion: there are many advantages to using Markov chains which are discrete in time and space. The major reason is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Additional reasons are detailed in Chapter 1. It will be shown later that this presentation also yields the finite difference equations of the process under consideration, on the basis of which the differential equations have been derived. [Pg.180]

The characteristics of the state space being measured can be used to classify the Markov process. For most purposes, a discrete or finite space is assumed, which implies that there is a finite number of states that can be reached by the process (14). A continuous or infinite process is also possible. The time intervals at which a process is observed can likewise be used to classify a Markov process: processes can be observed at discrete or restricted intervals, or continuously (15). [Pg.690]

In this section, classical state-space models are discussed first. They provide a versatile modeling framework that can be linear or nonlinear, continuous- or discrete-time, to describe a wide variety of processes. State variables can be defined based on physical variables, mathematical solution convenience or ordered importance of describing the process. Subspace models are discussed in the second part of this section. They order state variables according to the magnitude of their contributions in explaining the variation in data. State-space models also provide the structure for... [Pg.89]

In order to simplify the description of this system one neglects the fast dynamics in the potential wells and considers only the transitions from one well to the other, which happen on a much slower time scale. Under the assumption that the potential barrier ΔU between the two wells is large compared to the noise strength D, and that the relaxation in the wells is fast compared to the time scale of the jumps between the wells, the transitions can be treated as a rate process. Such a rate process has a probability per unit time to cross the barrier which is independent of the time that has elapsed since the last crossing event. The resulting dynamics in the reduced discrete phase space, which consists of just two discrete states, left and right, is thus still Markovian, i.e. the present state determines the future evolution to a maximal extent. [Pg.50]
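The reduced two-state dynamics is the classic telegraph (two-state jump) process: the waiting time in each well is exponential with that well's escape rate. A simulation sketch (the numeric rates are invented; in Kramers' picture they would scale like e^{-ΔU/D}):

```python
import random

def telegraph(r_lr, r_rl, t_end, seed=2):
    """Two-state Markov jump process ('left'/'right') with constant
    escape rates: the waiting time in each well is exponential, so the
    crossing probability per unit time is independent of elapsed time."""
    rng = random.Random(seed)
    t, state, jumps = 0.0, "left", []
    while True:
        rate = r_lr if state == "left" else r_rl
        t += rng.expovariate(rate)         # exponential dwell time
        if t >= t_end:
            return jumps
        state = "right" if state == "left" else "left"
        jumps.append((t, state))

jumps = telegraph(r_lr=1.0, r_rl=1.0, t_end=100.0)
```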

Suppose now that we have a stochastic process X(t), t ≥ 0, observed for all times, not just integer times. We take the state space to be the finite set {1, 2, ...} as before. The Markov property in this situation is just as described in Subsection 3.1 for the discrete-time case: knowing the state of X at a time t, its evolution after time t is independent of its history before time t. Briefly, the future is independent of the past, given the present. [Pg.2154]

The study of the development of these profiles over time uses a probabilistic graph of transitions between the clusters inferred by the k-TSSI (k-Testable Languages in the Strict Sense Inference) algorithm. The objective is to deduce a Markov process which has a discrete (finite or countable) state space. [Pg.91]

Other kinds of Fokker-Planck equations can also be derived. The continuous state-space stochastic model of a chemical reaction, which considers the reaction as a 'diffusion process', neglects the essential discreteness of the mesoscopic events. However, some shortcomings of (5.65) have been eliminated by using a direct Fokker-Planck equation obtained by means of nonlinear transport theory (Grabert et al., 1983). [Pg.111]

The particles of interest to us have both internal and external coordinates. The internal coordinates of the particle provide quantitative characterization of its distinguishing traits other than its location while the external coordinates merely denote the location of the particles in physical space. Thus, a particle is distinguished by its internal and external coordinates. We shall refer to the joint space of internal and external coordinates as the particle state space. One or more of either the internal and/or external coordinates may be discrete while the others may be continuous. Thus, the external coordinates may be discrete if particles can occupy only discrete sites in a lattice. There are several ways in which the internal coordinates may be discrete. A simple example is that of particle size in a population of particles, initially all of uniform size, undergoing pure aggregation, for in this case the particle size can only vary as integral multiples of the initial size. For a more exotic example, let the particle be an emulsion droplet (a liquid) in which a precipitation process is carried out producing a discrete number of precipitate particles. Then the number of precipitate particles may serve to describe the discrete internal coordinate of the droplet, which is the main entity of population balance. [Pg.3]

