
Markov chain assumptions

This is the simplest of the models in which violation of the Flory principle is permitted. The assumption behind this model stipulates that the reactivity of a polymer radical is predetermined by the type of both its ultimate and penultimate units [23]. Here, the pairs of terminal units MαMβ act, along with the monomers Mγ, as kinetically independent elements, so that there are m³ rate constants kαβγ of the elementary chain-propagation reactions. The stochastic process of conventional movement along macromolecules formed at fixed x will be Markovian, provided that monomeric units are differentiated by the type of the preceding unit. In this case the number of transient states Sαβ of the extended Markov chain is m², in accordance with the number of pairs of monomeric units. Writing down the elements of the transition matrix Q of such a chain presents no special problems [1,10,34,39], nor does deriving, by means of the mathematical apparatus of Markov chains, the expressions for the instantaneous statistical characteristics of copolymers. By way of illustration this matrix will be presented for the case of binary copolymerization ... [Pg.180]
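As a rough illustration of what such a matrix looks like (a sketch under the usual penultimate-model convention, not necessarily the exact form given in the cited source: a pair state MαMβ can only pass to a state MβMγ, with probability proportional to kαβγ[Mγ]), the transition matrix Q can be assembled numerically as follows; the rate constants and monomer concentrations below are placeholders.

```python
import numpy as np

def penultimate_transition_matrix(k, M):
    """Build the m**2 x m**2 transition matrix Q of the extended Markov chain
    for the penultimate (ultimate + penultimate unit) model.

    k[a, b, g] -- propagation rate constant k_{abg} for a radical whose
                  penultimate/ultimate units are (a, b) adding monomer g
                  (hypothetical array layout, assumed for this sketch).
    M[g]       -- monomer concentrations (placeholders).
    """
    m = len(M)
    states = [(a, b) for a in range(m) for b in range(m)]  # pairs of units
    Q = np.zeros((m * m, m * m))
    for i, (a, b) in enumerate(states):
        rates = k[a, b, :] * M               # rates of adding each monomer
        probs = rates / rates.sum()          # conditional addition probabilities
        for g in range(m):
            j = states.index((b, g))         # state (a,b) can only pass to (b,g)
            Q[i, j] = probs[g]
    return Q

# Binary copolymerization (m = 2): four states (1,1), (1,2), (2,1), (2,2)
k = np.random.rand(2, 2, 2)                  # placeholder rate constants
M = np.array([0.6, 0.4])                     # placeholder monomer feed
Q = penultimate_transition_matrix(k, M)
print(Q.round(3))                            # each row sums to 1
```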

The Markovian character of the sequence distribution statistics in the macromolecules results [6, 94] from the assumption of steady-state radical concentrations, which usually holds with a high degree of accuracy in copolymerization processes [6, 95]. It is worth mentioning that along with such kinetic stationarity one should usually speak about statistical stationarity. This means that when the number of units in copolymer molecules exceeds 10-15, their composition becomes practically independent of the degree of polymerization and is indistinguishable from the value predicted by stationary Markov chain theory. This conclusion is supported by theoretical [96,97,6] and experimental [98] evidence. [Pg.16]

When the assumptions (4.19) are valid the Markov chain describing a sequence distribution in macromolecules is found to be reversible. It follows from the chain definition and relationships that ... [Pg.28]

Models that are discrete in space and continuous in time, as well as those continuous in both space and time, often lead to non-linear differential equations for which an analytical solution is extremely difficult or impossible. In order to solve the equations, simplifications, e.g. linearization of expressions, and assumptions must be made; if this is not sufficient, numerical solutions must be applied. This led the author to the major conclusion that there are many advantages to using Markov chains, which are discrete in time and space. The major reason is that physical models can be presented in a unified description via a state vector and a one-step transition probability matrix. Additional reasons are detailed in Chapter 1. It will be shown later that this presentation also yields the finite-difference equations of the process under consideration, on the basis of which the differential equations have been derived. [Pg.180]
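A minimal sketch of this unified description (illustrative transition probabilities, not a specific physical model from the text): propagating a state vector through a one-step transition probability matrix is itself the finite-difference equation of the process.

```python
import numpy as np

# One-step transition probability matrix P (rows sum to 1) -- illustrative values.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])   # third state is absorbing

s = np.array([1.0, 0.0, 0.0])     # initial state vector: all mass in state 1

# Discrete-time (finite-difference) evolution: s(n+1) = s(n) P
for n in range(50):
    s = s @ P
print(s)                          # distribution over the states after 50 steps
```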

The simplest model [73, p.180] of a two-impinging-streams reactor is shown schematically in Fig. 4.5-1. The LHS demonstrates the actual configuration of the reactor and the RHS the Markov-chain model. The latter employs the following considerations and assumptions ... [Pg.464]

In the subsequent sections an overview of Markov models is provided, followed by a discussion of the Markovian assumption, the discrete-time Markov chain, a mixed-effects Markov model, and a hybrid mixed-effects Markov and proportional-odds model suited for data sets that exhibit characteristics that can be described with such models. [Pg.689]

For now, assume in addition that the total repair time is exponentially distributed. Under this assumption, each functional group can be modelled as a continuous-time Markov chain with R + 1 states. We let state i = 0, ..., R correspond to the situation in which there are i pieces of defective equipment in the group. When in state i < R, we move to state i + 1 at the group's failure rate; this transition corresponds to a failure of a piece of equipment within the group. When in state i > 0, we move to state i - 1 at the repair rate; this transition corresponds to a finished repair. [Pg.576]
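A minimal sketch of such a group model, assuming hypothetical failure and repair rates (the rate symbols in the excerpt are not legible, so lam and mu below are placeholder names): the generator matrix of the resulting birth-death chain and its stationary distribution.

```python
import numpy as np

R = 4          # assumed maximum number of defective pieces (R + 1 states)
lam = 0.05     # hypothetical failure rate per unit time
mu = 0.5       # hypothetical repair rate per unit time

# Generator matrix A of the continuous-time Markov chain on states 0..R
A = np.zeros((R + 1, R + 1))
for i in range(R + 1):
    if i < R:
        A[i, i + 1] = lam          # one more piece of equipment fails
    if i > 0:
        A[i, i - 1] = mu           # one repair is finished
    A[i, i] = -A[i].sum()          # rows of a generator sum to zero

# Stationary distribution pi solves pi A = 0 together with sum(pi) = 1
M = np.vstack([A.T, np.ones(R + 1)])
b = np.append(np.zeros(R + 1), 1.0)
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi.round(4))                 # long-run probabilities of 0..R defects
```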

To do so, we first provide in Section 2 a brief overview of Markov Chains and Monte-Carlo simulations. Section 3 presents the structure of our Markov model of failures and replacement as well as its underlying assumptions. In Section 4, we run Monte-Carlo simulations of the model and generate probability distributions for the lifecycle cost and utility of the two considered architectures, which serve as a basis for our comparative analysis. Important trends and invariants are identified and discussed. For example, changes in average lifecycle cost and utility resulting from fractionation are observed, as well as reductions in cost risk. We conclude this work in Section 5. [Pg.660]
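The following is a sketch of that kind of Monte-Carlo experiment, with entirely hypothetical states, transition probabilities and costs (they are not the values of the cited study): each run walks a discrete-time Markov chain of failures and replacements and accumulates a lifecycle cost, and repeated runs yield the cost distribution used for the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain: 0 = operational, 1 = failed, 2 = replaced
P = np.array([[0.97, 0.03, 0.00],
              [0.00, 0.00, 1.00],   # a failure triggers a replacement
              [1.00, 0.00, 0.00]])  # a replaced module returns to operation
step_cost = np.array([1.0, 0.0, 50.0])   # operating cost vs. replacement cost

def lifecycle_cost(horizon=200):
    state, cost = 0, 0.0
    for _ in range(horizon):
        cost += step_cost[state]
        state = rng.choice(3, p=P[state])
    return cost

costs = np.array([lifecycle_cost() for _ in range(5000)])
print(costs.mean(), costs.std())          # average lifecycle cost and cost risk
```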

A1) Only one failure can occur at a time, as the states are mutually exclusive in a Markov chain. While this assumption is trivially verified in the case of the monolithic architecture, it can represent a restriction of the model of the SBN. This restriction can be lifted by defining additional failed states for the combinations of failures considered (e.g., F13 would be the state corresponding to the simultaneous failure of module 1 and module 3). [Pg.661]

Statistical methods which have been used to treat polymerization problems can be grouped into three categories: direct, formal Markov chain theory, and recursive approaches. They differ in detail and objectives, but all contain the underlying assumption that polymerization follows Markovian statistics. [Pg.110]

Note that aperiodicity is often a strong assumption in Markov chains. One can still carry out the steady-state probability computations even if a Markov chain has a period greater than one. Components of the steady-state probability vector π can then be interpreted as long-run proportions of time that the underlying stochastic process would be in a given state. [Pg.410]
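A brief illustration of this point (the two-state chain below is an example, not taken from the cited text): for a chain of period 2 the powers of the transition matrix do not converge, yet the vector π solving πP = π still gives the long-run proportions of time spent in each state.

```python
import numpy as np

# A periodic (period-2) chain: the process alternates between the two states,
# so P**n does not converge, yet long-run proportions are still well defined.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Solve pi P = pi with the components of pi summing to one.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # [0.5, 0.5]: the chain spends half of its time in each state
```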

The use of the embedded Discrete Time Markov Chain (DTMC) of a continuous stochastic process for determining event probabilities assumes that the system is in a stationary state, characterized by a stationary probability distribution over its states. But the embedded DTMC is not limited to the Continuous Time Markov Chain: a DTMC can also be defined from a semi-Markov process or, under some hypotheses, from more general stochastic processes. Another advantage of using the DTMC to obtain event probabilities is that the probability of an event need not be the same throughout the system evolution, but can depend on the state where it occurs (in other words, the same event can be characterized by different occurrence probabilities). The use of Arden's lemma permits the whole set of event sequences to be determined formally, without exploring the model. Finally, the occurrence probability of relevant or critical event sequences, and of a sublanguage, is determined. [Pg.224]
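A minimal sketch of the generic construction of an embedded DTMC from a CTMC (illustrative generator values; this is not the paper's specific model): the jump probabilities are the off-diagonal generator entries normalised by the total exit rate of each state, P(i,j) = q(i,j) / (-q(i,i)) for j ≠ i.

```python
import numpy as np

# Generator matrix Q of a small CTMC (illustrative values; rows sum to zero).
Q = np.array([[-0.5,  0.3,  0.2],
              [ 0.1, -0.4,  0.3],
              [ 0.2,  0.2, -0.4]])

# Embedded (jump) DTMC: P[i, j] = Q[i, j] / -Q[i, i] for j != i, P[i, i] = 0.
exit_rates = -np.diag(Q)
P = Q / exit_rates[:, None]
np.fill_diagonal(P, 0.0)
print(P)            # each row of the embedded chain sums to one
```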

The state-transition model can be analyzed using a number of approaches: as a Markov chain, using semi-Markov processes, or using Monte Carlo simulation (Fishman 1996). The applicability of each method depends on the assumptions that can be made regarding fault occurrence and repair time. In the case of the Markov approach, it is necessary to assume that both faults and renewals occur with constant intensities (i.e. exponentially distributed times). A large number of states also makes the Markov or semi-Markov methods more difficult to use. The reliability model presented in the previous section includes random variables with exponential, truncated normal and discrete distributions, as well as some periodic relations (staff working time), so it is hard to solve by analytical methods. [Pg.2081]

Nassar et al. [10] employed a stochastic approach, namely a Markov process with transient and absorbing states, to model in a unified fashion both complex linear first-order chemical reactions, involving molecules of multiple types, and mixing, accompanied by flow in a nonsteady- or steady-state continuous-flow reactor. Chou et al. [11] extended this approach to systems with nonlinear chemical reactions by means of Markov chains. An assumption is made that transitions occur instantaneously at each instant of the discretized time. [Pg.542]

Let N be the number of initially available time slots in a frame, and X_n be the total number of nodes which have acquired a time slot within n frames. Under the assumptions, X_n is a stationary discrete-time Markov chain with the following transition probabilities. [Pg.35]

It can be shown [55] that the Andersen thermostat with non-zero collision frequency a leads to a canonical distribution of microstates. The proof [55] involves arguments similar to those in the derivation of the probability distribution generated by the MC procedure. It is based on the fact that the Andersen algorithm generates a Markov chain of microstates in phase space. The only required assumption is that every microstate is accessible from every other one within a finite time (ergodicity). Note... [Pg.123]
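A minimal sketch of the stochastic part of the Andersen scheme described here (the deterministic MD integration step is omitted, and the collision frequency, temperature and masses below are placeholder values): at each time step every particle undergoes a "collision" with probability proportional to the collision frequency, in which case its velocity is redrawn from the Maxwell-Boltzmann distribution; together with the deterministic step this generates the Markov chain of microstates referred to above.

```python
import numpy as np

rng = np.random.default_rng(1)

def andersen_collisions(v, collision_freq, dt, kT, mass=1.0):
    """Stochastic collision step of the Andersen thermostat (sketch).

    With probability collision_freq*dt each particle's velocity is resampled
    from the Maxwell-Boltzmann distribution at temperature kT; combined with
    an ordinary MD integration step this generates a Markov chain of
    microstates in phase space.
    """
    n = v.shape[0]
    hit = rng.random(n) < collision_freq * dt
    sigma = np.sqrt(kT / mass)
    v[hit] = rng.normal(0.0, sigma, size=(hit.sum(), v.shape[1]))
    return v

v = rng.normal(0.0, 1.0, size=(100, 3))          # velocities of 100 particles
v = andersen_collisions(v, collision_freq=0.1, dt=0.01, kT=1.0)
```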

