Big Chemical Encyclopedia


Markov evolution

Since the explanation above allows one to express the studied system in terms of the objects of a cellular stochastic model, we can now describe the temperature changes inside the exchanger as a discrete Markov evolution that starts from an input cell of the hot or cold fluid. Indeed, relations (4.328) and (4.329) can now be particularized to give the expressions below, while the matrix of transition probabilities is described by relation (4.338). [Pg.313]

Here πa(x) denotes the a-th component of the stationary vector of the Markov chain with transition matrix Q, whose elements depend on the monomer mixture composition in the microreactor, x, according to formula (8). To close the set of Eqs. (24) it is necessary to determine the dependence of x on X at thermodynamic equilibrium, i.e., to solve the problem of equilibrium partitioning of monomers between the microreactors and their environment. This thermodynamic problem has been solved within the framework of the mean-field Flory approximation [48] for copolymerization of any number of monomers and solvents. The dependencies xa = Fa(X) (a = 1,...,m) found there, in combination with Eqs. (24), constitute a closed set of dynamic equations whose solution permits determination of the evolution of the composition of the macroradical, X(Z), with the growth of its length Z, as well as the corresponding change in the monomer mixture composition in the microreactor. [Pg.184]

The state of the entire system at time t is described by the N-particle phase space probability density function, P(x^N, t). In MPC dynamics the time evolution of this function is given by the Markov chain... [Pg.98]

In addition to the fact that MPC dynamics is both simple and efficient to simulate, one of its main advantages is that the transport properties that characterize the behavior of the macroscopic laws may be computed. Furthermore, the macroscopic evolution equations can be derived from the full phase space Markov chain formulation. Such derivations have been carried out to obtain the full set of hydrodynamic equations for a one-component fluid [15, 18] and the reaction-diffusion equation for a reacting mixture [17]. In order to simplify the presentation and yet illustrate the methods that are used to carry out such derivations, we restrict our considerations to the simpler case of the derivation of the diffusion equation for a test particle in the fluid. The methods used to derive this equation and obtain the autocorrelation function expression for the diffusion coefficient are easily generalized to the full set of hydrodynamic equations. [Pg.99]
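The test-particle diffusion problem mentioned above can be illustrated with a minimal sketch. This is not the MPC algorithm itself (which involves multiparticle collision steps); it is a plain unbiased random walk, for which the one-dimensional mean-square displacement grows as MSD(t) = 2Dt, so a diffusion coefficient can be read off directly. All names and parameter values are illustrative.

```python
import random

def estimate_diffusion_coefficient(n_walkers=2000, n_steps=500,
                                   step=1.0, dt=1.0, seed=0):
    """Estimate D for 1-D test particles from the mean-square displacement,
    using MSD(t) = 2*D*t at the final time t = n_steps*dt."""
    rng = random.Random(seed)
    positions = [0.0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            # Unbiased step: +step or -step with equal probability.
            positions[i] += step if rng.random() < 0.5 else -step
    msd = sum(x * x for x in positions) / n_walkers
    return msd / (2.0 * n_steps * dt)

D = estimate_diffusion_coefficient()
# For an unbiased unit-step walk the exact value is D = step**2/(2*dt) = 0.5;
# the estimate fluctuates around it with finite sampling error.
```

In a real MPC simulation, D would instead be obtained from the velocity autocorrelation function mentioned in the text, but the MSD route shown here is the simplest consistency check.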

The aim of this chapter is to describe approaches to obtaining exact time characteristics of diffusion stochastic processes (Markov processes), which are in fact a generalization of the FPT approach and are based on defining the characteristic timescale of evolution of an observable as an integral relaxation time [5,6,30-41]. These approaches allow us to express the required timescales and to obtain, almost exactly, the evolution of probabilities and averages of stochastic processes over a really wide range of parameters. We will not compare these methods, because all of them lead to the same result owing to the use of the same basic definition of the characteristic timescales; instead, we will describe these approaches in detail and outline their advantages over the FPT approach. [Pg.359]

Formula (2.2) contains only the one-dimensional probability density W(x1, t1) and the conditional probability density. The conditional probability density of a Markov process is also called the transition probability density, because the present state completely determines the probabilities of the next transitions. A characteristic property of a Markov process is that the initial one-dimensional probability density and the transition probability density completely determine the Markov random process. Therefore, in the following we will often call various temporal characteristics of Markov processes transition times, implying that these characteristics primarily describe the evolution of the Markov process from one state to another. [Pg.360]
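The defining property above, that the initial density and the transition probabilities determine the whole process, is easy to verify for a discrete-state chain: multi-step transition probabilities follow from the Chapman-Kolmogorov relation P(2) = P·P, and any path probability is a product of one-step transition probabilities. The matrix below is a hypothetical two-state example, not taken from the cited text.

```python
def mat_mul(A, B):
    """Plain matrix product of two lists-of-lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical one-step transition matrix of a two-state Markov chain.
P = [[0.9, 0.1],
     [0.2, 0.8]]

# Chapman-Kolmogorov: the two-step transition matrix is the matrix square.
P2 = mat_mul(P, P)

# The initial distribution plus P fix every joint probability, e.g. the
# probability of the path 0 -> 1 -> 0 starting surely in state 0:
w0 = [1.0, 0.0]
p_path = w0[0] * P[0][1] * P[1][0]   # 1.0 * 0.1 * 0.2 = 0.02
```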

As an example of the description presented above, let us consider the time evolution of a mean coordinate of the Markov process ... [Pg.420]
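For a discrete-state chain, the mean coordinate evolves as ⟨x⟩(t) = Σa xa pa(t), where the distribution is propagated by p(t+1) = p(t)·P. The three-state chain and coordinate values below are hypothetical, chosen only so that the stationary mean is easy to check by hand (stationary distribution (1/4, 1/2, 1/4), mean coordinate 1.0).

```python
def mean_coordinate_evolution(P, p0, coords, n_steps):
    """Evolve the state distribution p(t+1) = p(t) P and record the mean
    coordinate <x>(t) = sum_a coords[a] * p_a(t) at each step."""
    p = list(p0)
    means = [sum(c * w for c, w in zip(coords, p))]
    for _ in range(n_steps):
        p = [sum(p[i] * P[i][j] for i in range(len(p)))
             for j in range(len(p))]
        means.append(sum(c * w for c, w in zip(coords, p)))
    return means

# Hypothetical birth-death-like chain on coordinates 0, 1, 2.
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
means = mean_coordinate_evolution(P, [1.0, 0.0, 0.0], [0.0, 1.0, 2.0], 50)
# Starts at 0.0 and relaxes toward the stationary mean 1.0.
```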

Markov Chain Evolution for the Anodic Deposit Formation <-> Deposit Dissolution Process. The Initial State is Purely Ionic (p1(0) = 1, p2(0) = 0)
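A two-state formation/dissolution chain of this kind, started purely ionic, can be sketched as follows. The transition probabilities used here are illustrative, not those of the cited table; with a per-step deposition probability p_form and redissolution probability p_diss, the deposit fraction relaxes to p_form/(p_form + p_diss).

```python
def two_state_evolution(p_form, p_diss, n_steps):
    """Two-state Markov chain: state 1 = ionic (dissolved), state 2 = deposit.
    p_form = probability an ion deposits in one step; p_diss = probability a
    deposited atom redissolves. Initial state is purely ionic: p1=1, p2=0."""
    p1, p2 = 1.0, 0.0
    history = [(p1, p2)]
    for _ in range(n_steps):
        p1, p2 = (p1 * (1.0 - p_form) + p2 * p_diss,
                  p1 * p_form + p2 * (1.0 - p_diss))
        history.append((p1, p2))
    return history

hist = two_state_evolution(0.3, 0.1, 200)
# Long-time limit: p2 -> 0.3 / (0.3 + 0.1) = 0.75
```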

A sophisticated model of protein evolution was developed by Dayhoff et al. (1978) who measured the frequency with which each amino acid was replaced by every other in sets of closely related sequences. This was converted into a Markov model, which was used to generate the probability of any amino acid being substituted by any other or remaining unchanged after different amounts of evolution. The amount of evolution was measured in PAMs (point accepted mutations), which are mean numbers of substitutions per 100 residues. This is, of course, the same... [Pg.127]
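The Dayhoff construction described above amounts to raising the 1-PAM transition matrix to a power: PAM250 is PAM1 multiplied by itself 250 times. The sketch below uses a hypothetical 3-letter alphabet in place of the 20 amino acids, with made-up substitution probabilities; only the matrix-power mechanics matches the method in the text.

```python
def mat_pow(M, n):
    """Raise a square matrix (list of lists) to the n-th power."""
    size = len(M)
    R = [[1.0 if i == j else 0.0 for j in range(size)] for i in range(size)]
    for _ in range(n):
        R = [[sum(R[i][k] * M[k][j] for k in range(size))
              for j in range(size)] for i in range(size)]
    return R

# Toy 3-letter alphabet standing in for the 20 amino acids. Each row of PAM1
# gives substitution probabilities at 1 PAM of evolutionary distance
# (roughly 1 accepted mutation per 100 residues, hence ~0.99 on the diagonal).
PAM1 = [[0.990, 0.005, 0.005],
        [0.005, 0.990, 0.005],
        [0.005, 0.005, 0.990]]
PAM250 = mat_pow(PAM1, 250)
# After 250 PAMs the off-diagonal probabilities have grown substantially,
# though each row still sums to 1.
```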

The authors then ask the following question: Do there exist deterministic dynamical systems that are, in a precise sense, equivalent to a monotonous Markov process? The question can be reformulated in a more operational way as follows: Does there exist a similarity transformation Λ which, when applied to a distribution function ρ, solution of the Liouville equation, transforms the latter into a function that can also be interpreted as a distribution function (probability density) and whose evolution is governed by a monotonous Markov process? An affirmative answer to this question requires the following conditions on Λ (MFC) ... [Pg.32]

Conclusion. In classical statistical mechanics the evolution of a many-body system is described as a stochastic process. It reduces to a Markov process if one assumes coarse-graining of the phase space (and the repeated randomness assumption). Quantum mechanics gives rise to an additional fine-graining. However, these grains are so much smaller that they do not affect the classical derivation of the stochastic behavior. These statements have not been proved mathematically, but it is better to say something that is true although not proved, than to prove something that is not true. [Pg.456]

Under the Markov assumption, the hierarchy of the joint probability densities [Eqs. (4.9)] describing the evolution of the system takes the following form ... [Pg.82]

A sine-wave frequency tracker has also been developed using hidden Markov modeling of the time evolution of sine-wave frequencies over multiple frames. This approach is particularly useful in tracking crossing frequency trajectories, which can occur in complex sounds [Depalle et al., 1993]. [Pg.508]
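The essence of such a tracker is a Viterbi-style dynamic program: candidate frequencies in each frame are the states, and transitions are penalized by the size of the frequency jump, so the best path follows a smooth trajectory. The sketch below is a toy version of this idea, not the cited system; frame data and the penalty value are invented.

```python
def viterbi_track(frames, trans_penalty):
    """Toy Viterbi-style frequency tracker. Each frame is a list of
    (candidate_frequency, score) pairs (higher score = better local match);
    transitions between frames cost trans_penalty per Hz of jump."""
    scores = {f: s for f, s in frames[0]}
    paths = {f: [f] for f, _ in frames[0]}
    for frame in frames[1:]:
        new_scores, new_paths = {}, {}
        for f, s in frame:
            # Best predecessor balances accumulated score against jump size.
            prev = max(scores,
                       key=lambda g: scores[g] - trans_penalty * abs(f - g))
            new_scores[f] = scores[prev] - trans_penalty * abs(f - prev) + s
            new_paths[f] = paths[prev] + [f]
        scores, paths = new_scores, new_paths
    best = max(scores, key=scores.get)
    return paths[best]

# Two candidates per frame; the smooth track near 440 Hz wins even though an
# isolated candidate at 660 Hz scores higher in the middle frame.
frames = [[(440.0, 1.0), (660.0, 0.2)],
          [(441.0, 1.0), (660.0, 1.5)],
          [(439.0, 1.0), (660.0, 0.2)]]
track = viterbi_track(frames, trans_penalty=0.01)
# track is [440.0, 441.0, 439.0]
```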

It is necessary to note that the matrix K does not deal with the hold-ups at all. The same rule for its construction was suggested in [8,9] on the basis of the balance of particle fraction flows only, without reference to Markov chain models. Here it was obtained as a particular case of the developed model, which is represented by the matrix P and also allows describing the evolution of hold-ups in the general case, provided the smaller matrices of P are known. [Pg.270]

Markovic et al. (2000): surface diffraction and electrochemistry of Pd on Pt(111); nanostructure, overpotential, subsurface hydride; hydrogen evolution during water splitting. [Pg.321]

This statement can also be obtained when the evolution of a transport process is analyzed by the concept of Markov chains or chains with complete connections. The math-... [Pg.191]

The last equations prove that Markov chains [4.6] are able to predict the evolution of a system from the data of the current state only (without taking the system history into account). In this case, where the system consists of perfectly mixed cells, the probabilities p_ii and p_ij are described by the same equations as those applied to a single perfectly stirred cell. Here, the exponential function of the residence time distribution (see Section 3.3) defines the probability of exit from a cell. In addition, the computation of this probability is coupled with knowledge of the flows conveyed between the cells. For the time interval Δt and for i = 1, 2, 3,...,N and j = 1, 2, 3,...,N - 1 we can write ... [Pg.197]
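For a single perfectly stirred cell with mean residence time τ, the exponential residence-time distribution mentioned above gives the exit probability over an interval Δt as 1 - exp(-Δt/τ). A minimal sketch (function names are illustrative):

```python
import math

def exit_probability(dt, tau):
    """Probability that a particle leaves a perfectly mixed cell during an
    interval dt, from the exponential residence-time distribution E(t) =
    (1/tau) * exp(-t/tau) with mean residence time tau."""
    return 1.0 - math.exp(-dt / tau)

def stay_probability(dt, tau):
    """Complementary probability of remaining in the cell during dt."""
    return math.exp(-dt / tau)

# For a small interval the exit probability is approximately dt/tau:
p = exit_probability(0.01, 1.0)   # ~0.00995, close to dt/tau = 0.01
```

In the multi-cell chain these two probabilities, weighted by the flows conveyed between cells, populate the diagonal and off-diagonal entries of the transition matrix.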

Stochastic equations are frequently established from the evolution of the analyzed process. In this case, it is necessary to make a local balance (in space and time) for the probability of existence of a process state. This balance is similar to the balance of a property: the probability that an event occurs can be considered as a kind of property. Some specific rules come from the fact that the domain of existence, the ranges of values and the calculation rules for the probabilities of the individual process states are placed together in one or more systems with complete connections or in Markov chains. [Pg.206]
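In continuous time, the local probability balance described above takes the form of a master equation, dp_i/dt = Σj p_j Q[j][i], where the rate matrix Q has rows summing to zero so that total probability is conserved. The two-state rates below are hypothetical; a simple explicit-Euler integration illustrates the balance.

```python
def evolve_master_equation(Q, p0, dt, n_steps):
    """Explicit-Euler integration of the probability balance
    dp_i/dt = sum_j p_j * Q[j][i], with a rate matrix Q whose rows sum to
    zero (the gain into other states balances the loss from each state)."""
    p = list(p0)
    for _ in range(n_steps):
        p = [p[i] + dt * sum(p[j] * Q[j][i] for j in range(len(p)))
             for i in range(len(p))]
    return p

# Hypothetical two-state process: forward rate 2.0, backward rate 1.0.
Q = [[-2.0,  2.0],
     [ 1.0, -1.0]]
p = evolve_master_equation(Q, [1.0, 0.0], 1e-3, 10000)
# Stationary distribution from detailed balance: p = (1/3, 2/3).
```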

Polystochastic models are used to characterize processes with numerous elementary states. The examples in the previous section have already shown that, in establishing a stochastic model, the strategy starts by identifying the random chains (Markov chains) or the systems with complete connections which provide the basis for the process evolution. The mathematical description can take different forms: (i) a probability balance; (ii) modelling of the random evolution; (iii) models based on stochastic differential equations; (iv) deterministic models of the process in which the parameters also have a stochastic origin, because random chains are present in the process evolution. [Pg.216]

