Big Chemical Encyclopedia


Markov renewal

In the mathematical literature, X(t) is called a semi-Markov process associated with the two-component Markov chain (X, T), a Markov renewal process [218]. As discussed in Sect. 2.3, the CTRW model is a standard approach for studying anomalous diffusion [298]. [Pg.61]

Some optimization approaches have been presented in the literature to attain optimal maintenance policies for the single-objective problem. For instance, Castanier et al. (2003) investigate the problem of inspecting and maintaining a repairable system subject to continuous deterioration processes. They aim to find a policy, by means of a Markov renewal approach, that optimizes system performance. Chen & Trivedi... [Pg.618]

The new set {X_n : n ≥ 0} is defined such that it describes the visits to the states in U, and the set {T_n : n ≥ 0} are again random variables with 0 < T_0 < T_1 < T_2 < .... The resulting process {X_n, T_n : n ≥ 0} is also a Markov renewal process. The holding time distributions, and instantaneous time transition probabilities,... [Pg.246]

We assume that {Y_k, T_k : k ≥ 0} follows a Markov renewal process, which generalizes the notion of a Markov jump process. Then, the probability that the N components will step to state j from state i in the time interval [T_n, T_n + Δt], given T_k, Y_k, k ≤ n, is defined as follows ... [Pg.777]
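
The construction in the excerpts above (an embedded chain of visited states plus an increasing sequence of renewal epochs) can be sketched numerically. The following is a minimal illustrative simulation, assuming a two-state embedded chain and state-dependent holding-time distributions; all names and parameter values are ours, not from the cited works.

```python
import random

def simulate_markov_renewal(P, holding, x0, n_steps, rng):
    """Simulate a Markov renewal process {X_n, T_n}: an embedded Markov
    chain X_n with transition matrix P, plus epochs T_n whose increments
    are drawn from a state-dependent holding-time law."""
    x, t = x0, 0.0
    path = [(x, t)]
    for _ in range(n_steps):
        t += holding[x](rng)              # sojourn time in the current state
        u, cum, nxt = rng.random(), 0.0, len(P[x]) - 1
        for j, p in enumerate(P[x]):      # draw the next state from row P[x]
            cum += p
            if u < cum:
                nxt = j
                break
        x = nxt
        path.append((x, t))
    return path

rng = random.Random(0)
P = [[0.2, 0.8], [0.6, 0.4]]              # illustrative transition matrix
holding = [lambda r: r.expovariate(1.0),  # exponential sojourns in state 0
           lambda r: 0.5 + r.random()]    # uniform(0.5, 1.5) sojourns in state 1
path = simulate_markov_renewal(P, holding, 0, 5, rng)
```

When every holding-time law is exponential with a rate depending only on the current state, this construction reduces to an ordinary continuous-time Markov chain; the Markov renewal generality lies precisely in allowing arbitrary sojourn distributions.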

Localization and delocalization for a periodic pinning model may be characterized once again by looking at the free energy. As we will explain in Chapter 3, it is possible to generalize the renewal theory approach introduced in Section 1.2; however, the algebra is substantially more complex, and the point process hidden behind periodic models is not a standard renewal but rather a Markov renewal (see Chapter 3). This will allow precise computations, but for the moment we observe that ... [Pg.30]

We call τ, but also (J, τ), a Markov renewal process [Asmussen (2003)]. It is a natural generalization of standard renewals, and we denote the law of (J, τ) by P^a when J_0 = a (note that τ_0 is uniquely defined in all cases). It is immediate to see that (J, τ) is a Markov chain with transition probabilities... [Pg.73]
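
The transition probabilities whose display is truncated in this excerpt take, in the standard Markov renewal setup, the form of a semi-Markov kernel. The following is a hedged sketch of the usual formula; the symbol K for the kernel is our notation, not necessarily the source's:

```latex
\mathbf{P}^{a}\bigl(J_{n+1}=b,\ \tau_{n+1}-\tau_{n}=k \,\bigm|\, J_{n}=c\bigr) \;=\; K_{c,b}(k),
\qquad \sum_{b}\sum_{k\ge 1} K_{c,b}(k)=1 .
```

That is, the pair (next state, next inter-arrival time) depends on the past only through the current state of J, which is exactly the Markov chain property claimed for (J, τ).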

As we have already stressed, the analogy with (2.18) is evident and it is probably not surprising for the reader that from such a formula one can extract the sharp behavior of the partition function. It should however be noted that, while in the positive recurrent set-up ( > 1) the theory of Markov renewals is well developed, fewer results are available in the literature on the mass renewal function of null recurrent Markov renewals. Moreover the results, even only at the level of sharp asymptotic behavior of the partition function, are more involved. As we shall see, this complexity is not only of a technical nature, but it really reflects a substantially larger variety of phenomena that can be observed in weakly inhomogeneous models, with respect to homogeneous ones. [Pg.75]

In this review we show that there are two main sources of memory. One of them corresponds to the memory responsible for Anderson localization, and it might become incompatible with a representation in terms of trajectories. The fluctuation-dissipation process used here to illustrate Anderson localization in the case of extremely large Anderson randomness is an idealized condition that might not work in the case of correlated Anderson noise. On the other hand, the non-Poisson renewal processes generate memory properties that may not be reproduced by the stationary correlation functions involved in the projection approach to the GME. Before ending this subsection, let us limit ourselves to anticipating the fundamental conclusion of this review: the CTRW is a correct theoretical tool to address the study of non-Markov processes, if these correspond to trajectories undergoing unpredictable jumps. [Pg.375]

In the case where the correlation function Φ(t) has the form of Eq. (148), with μ fitting the condition 2 < μ < 3, the generalized diffusion equation is irreducibly non-Markovian, thereby precluding any procedure to establish a Markov condition, which would be foreign to its nature. The source of this fundamental difficulty is that the density method converts the infinite memory of a non-Poisson renewal process into a different type of memory. The former type of memory is compatible with the occurrence of critical events resetting the system's memory to zero. The second type of memory, on the contrary, implies that the single trajectories, if they exist, are determined by their initial conditions. [Pg.397]

Over the past several years, there has been renewed interest in thermodynamics, and many scientists have considered it from new points of view [1-8]. Thermodynamics is a universal effective theory [9]. It does not depend on the details of the underlying dynamics. The first law is the conservation of energy. The second law is the nonnegativeness of excess heat production. It is valid for wide classes of Markov processes in which systems approach the Boltzmann equilibrium distribution. [Pg.354]

Note that the Poisson process plays a very important role in random walk theory. It can be defined in two ways: (1) as a continuous-time Markov chain with constant intensity, i.e., as a pure birth process with constant birth rate k; (2) as a renewal process. In the latter case, it can be represented as (3.25) with T = ... Here... [Pg.69]
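
The renewal-process view in definition (2) can be illustrated with a short sketch: summing i.i.d. exponential inter-arrival times produces the arrival epochs of a Poisson process, and the mean count over a window matches the constant-intensity definition (1). Rate and horizon values below are illustrative.

```python
import random

def poisson_arrivals(rate, horizon, rng):
    """Renewal-process view of the Poisson process: arrival epochs are
    partial sums of i.i.d. Exponential(rate) inter-arrival times."""
    t, epochs = 0.0, []
    while True:
        t += rng.expovariate(rate)   # i.i.d. exponential waiting time
        if t > horizon:
            return epochs
        epochs.append(t)

rng = random.Random(1)
# For a rate-2 process on [0, 10], the mean count should be near 2 * 10 = 20.
counts = [len(poisson_arrivals(2.0, 10.0, rng)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
```

The exponential waiting time is what makes the two definitions agree: it is the only memoryless continuous distribution, so the renewal construction inherits the constant intensity of the Markov-chain construction.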

This disparity has been known for some time, and several attempts were made in ASR to correct this, the most notable proposal being the hidden semi-Markov model (HSMM) [282]. In this model, the transition probabilities are replaced by an explicit Gaussian duration model. It is now known that this increase in durational accuracy does not in fact improve speech recognition to any significant degree, most probably because duration itself is not a significant factor in discrimination. In synthesis, however, modelling duration accurately is known to be important, and for this reason there has been renewed interest in hidden semi-Markov models [504], [514]. [Pg.477]
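
The contrast the excerpt draws between HMM and HSMM durations can be made concrete: an HMM's self-transition implies a geometric number of frames in a state, while an HSMM samples the duration from an explicit Gaussian. The sketch below is illustrative only (parameter values and function names are ours, not from [282] or the ASR literature).

```python
import random

def hmm_duration(p_stay, rng):
    """Implicit duration of a standard HMM state: repeated self-transitions
    with probability p_stay give a geometric duration, mean 1/(1 - p_stay)."""
    d = 1
    while rng.random() < p_stay:
        d += 1
    return d

def hsmm_duration(mean, sd, rng):
    """Explicit Gaussian duration of an HSMM state, truncated to >= 1 frame."""
    return max(1, round(rng.gauss(mean, sd)))

rng = random.Random(42)
geometric = [hmm_duration(0.9, rng) for _ in range(5000)]        # mean ~ 10 frames
gaussian = [hsmm_duration(10.0, 2.0, rng) for _ in range(5000)]  # mean ~ 10 frames
```

Both samplers target a mean of about 10 frames, but the geometric law puts most mass on very short durations and has a long tail, whereas the Gaussian concentrates around its mean, which is why the explicit model fits observed phone durations more accurately.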

Second, commonly used analytical techniques for reliability evaluation are applied probability theory, renewal reward processes, Markov decision theory, and fault trees. Each of these techniques has advantages and disadvantages, and the choice depends on the system being modeled. [Pg.2162]

The states in which a lamp has both filaments broken are considered unavailability states, no matter whether both broken filaments have already been detected. These unavailability states are labeled by small circles close to the state in the Markov model. Detected failures are immediately transmitted to the maintenance engineer's site, and then lamp renewal can be accomplished. [Pg.2195]

Note that the successive versions of smartphones, going from 2.5G to 5G, require adapting the procedures and the associated outage probabilities. Various mathematical frameworks are being used, from Monte Carlo simulations to renewal theory; queueing theory is also well represented, with Markov systems at the forefront. [Pg.250]

If time-dependent failure and repair rates need to be considered, the Markov process is no longer applicable. Then, other methods like renewal processes (Fritz et al. 2000) or even Petri nets (Zeiler & Bertsche 2013) need to be used. These methods are typically analysed by numerical or simulation-based procedures. [Pg.1768]

The state-transition model can be analyzed using a number of approaches: as Markov chains, using semi-Markov processes, or using Monte Carlo simulation (Fishman 1996). The applicability of each method depends on the assumptions that can be made regarding fault occurrence and repair times. In the case of the Markov approach, it is necessary to assume that both faults and renewals occur with constant intensities (i.e. exponential distributions). Also, the large number of states makes the Markov or semi-Markov methods more difficult to use. The reliability model presented in the previous section includes random values with exponential, truncated normal and discrete distributions, as well as some periodic relations (staff working time), so it is hard to solve by analytical methods. [Pg.2081]
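
When, as above, repair times are truncated-normal rather than exponential, the constant-intensity Markov assumption fails and Monte Carlo simulation is the usual fallback. The following is a minimal sketch for a single repairable unit with exponential times to failure and truncated-normal repair times; the parameters are illustrative, not those of the paper's model.

```python
import random

def availability_mc(horizon, rng, n_runs=2000):
    """Monte Carlo estimate of the availability (fraction of time up)
    of one repairable unit over [0, horizon]."""
    up_total = 0.0
    for _ in range(n_runs):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = rng.expovariate(1 / 100.0)        # mean time to failure: 100 h
            up += min(ttf, horizon - t)             # credit only time inside horizon
            t += ttf
            if t >= horizon:
                break
            t += max(0.0, rng.gauss(5.0, 1.0))      # truncated-normal repair, ~5 h
        up_total += up / horizon
    return up_total / n_runs

rng = random.Random(7)
A = availability_mc(1000.0, rng)
```

With mean up-time 100 h and mean repair 5 h, the steady-state availability is about 100/105 ≈ 0.95, and the simulation estimate should land near that value; swapping in any other repair-time law requires only changing one line, which is exactly the flexibility that analytical Markov methods lack.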

Keywords: Counting process; Equations for response statistical moments; Jump processes; Markov processes; Non-Poisson processes; Point process; Probability density; Random impulses; Random vibrations; Renewal processes [Pg.1692]

Non-Markov Nature of the State Vector of the Dynamic System. A renewal impulse process is... [Pg.1703]

Detailed Integrodifferential Equations for the Joint Probability Density of the State Vector for a Renewal Impulse Process Driven by Two Independent Poisson Processes. The jump process Z(t) driven by two independent Poisson processes, Eq. 90, is tantamount to a two-state Markov chain S(t), such that S(t) = 1 when Z(t) = 0 and S(t) = 2 when Z(t) = 1 (Figs. 6 and 7). [Pg.1707]

Non-Poisson Impulse Processes, Fig. 8 Markov chain for a jump process driven by an Erlang renewal process... [Pg.1709]

This is clearly a (generalized) renewal property. One can easily generalize this formula and obtain analogous formulas for S based models. We will repeatedly use this property, but mainly in a non-explicit way and we will mostly manipulate (restricted) partition functions. Homogeneous Markov and renewal processes are manageable due to their local nature, but, in inhomogeneous frameworks, they may instead display sharply nonlocal features and tools to analyze them go well beyond the tools used for homogeneous systems. [Pg.41]

In this case there is no exponential growth, and one is left with estimating the mass renewal function; the notation is the same as for > 1, just keep in mind that now b = 0, so A°(n) = Ma,(n), Ta,f(n) = Ma,p(n) ... and P(·) is the Markov transition matrix of J, with invariant measure ν, ν_a = ζ_a. We will not give the details of the proof, which can be found in [Caravenna et al. (2005)], but we stress that the proof is a matter of dealing with a return distribution that is a random superposition of return laws with α = 1/2 and trivial L(·), so the N dependence in Theorem 3.4(2) does not come as a surprise once we consider the corresponding result (2.15). [Pg.76]

