
Markov decision models

Love, C. E., Zhang, Z. G., Zitron, M. A., & Guo, R. 2000. A discrete semi-Markov decision model to determine the optimal repair/replacement policy under general repairs. European Journal of Operational Research, 125(2): 398-409. [Pg.624]

Figure 24.2 The Markov decision-analytic model shows cost and cost-effectiveness evaluations for patients undergoing renal transplant [17]. a = cost; b = cost per functioning graft; c = cost per rejection-free clinical course.
Dynamic programming (DP) is an approach for the modeling of dynamic and stochastic decision problems, the analysis of the structural properties of these problems, and their solution. Dynamic programs are also referred to as Markov decision processes (MDP). Slight distinctions can be made between DP and MDP; for example, the term dynamic programming, rather than Markov decision processes, tends to be used for purely deterministic problems. The term stochastic optimal control is also often used for these types of problems. We shall use these terms synonymously. [Pg.2636]
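As a small, self-contained illustration of the deterministic end of this spectrum, the sketch below (not taken from the cited source; the stages, state names, and arc costs are invented) applies backward-induction dynamic programming to a tiny staged shortest-path problem.

```python
# Minimal backward-induction DP on a staged shortest-path problem.
# The stages, states, and arc costs below are illustrative only.

# cost[t][(i, j)] = cost of moving from state i at stage t to state j at stage t+1
cost = [
    {("s", "a"): 2, ("s", "b"): 4},                                   # stage 0 -> stage 1
    {("a", "c"): 7, ("a", "d"): 3, ("b", "c"): 1, ("b", "d"): 5},     # stage 1 -> stage 2
    {("c", "t"): 6, ("d", "t"): 2},                                   # stage 2 -> terminal
]

# V[t][i] = cheapest cost-to-go from state i at stage t
V = [dict() for _ in range(len(cost) + 1)]
V[-1] = {"t": 0.0}                                                    # terminal stage

for t in reversed(range(len(cost))):
    for (i, j), c in cost[t].items():
        cand = c + V[t + 1][j]
        if cand < V[t].get(i, float("inf")):
            V[t][i] = cand

print(V[0]["s"])   # optimal total cost from the start state "s"
```

The same backward recursion carries over to the stochastic case by replacing the single successor cost with an expectation over successor states, which is exactly the MDP setting discussed below.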

More formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A, with the...

SEMI-MARKOV DECISION PROCESSES MODEL CHARACTERISTICS... [Pg.618]

This paper proposed a multiobjective optimization model based on semi-Markov decision processes (SMDP) and multiobjective GAs for the optimal replacement policy for monitored systems from the oil industry. The proposed multiobjective GA with SMDP was validated by means of an exhaustive multiobjective algorithm and was able to find almost all solutions from the true non-dominated set. In addition, the time required to run the multiobjective GA jointly with the SMDP was much smaller than that needed by the exhaustive algorithm. [Pg.624]

Second, commonly used analytical techniques for reliability evaluation are applied probability theory, renewal reward processes, Markov decision theory, and fault trees. Each of these techniques has advantages and disadvantages, and the choice depends on the system being modeled. [Pg.2162]

The basic model assumes that the relationship between the principal and a single agent extends over multiple periods. To capture the process dynamics, the model assumes that the agent controls a Markov decision process. To eliminate the complications that arise from history-dependent policies, it assumes that the agent has access to frictionless capital markets and can smooth his consumption over time; in a frictionless capital market there are no transaction costs, and the interest rate on negative bank balances is the same as the interest rate on positive balances. Overall, the model consists of three major components: a physical component for process dynamics, an economic component for the preferences of the two parties, and an information component. [Pg.122]

Monahan, G. E. 1982. A survey of partially observable Markov decision processes: Theory, models, and algorithms. Management Science, 28(1): 1-16. [Pg.446]

Finally, there are a few analytical models in the literature that deal with wind turbines. Byon et al. (2010) use a Markov Decision Process (MDP) to determine the optimal maintenance strategy under stochastic weather conditions. To our knowledge, this is the first mathematical model for wind turbine maintenance; however, it is in a land-based context. Haddand et al. (2010) use an MDP based on real options for the availability maximization of an offshore wind farm. This model takes a condition-based maintenance approach to identify the optimal combination of turbines to be maintained in the wind farm. Besnard et al. (2011) formulate a stochastic model for opportunistic maintenance planning of offshore wind farms. They use stochastic optimization to optimize the planning of service maintenance activities to take advantage of low wind periods in order to reduce production loss while the turbines are offline. Besnard et al. (2013) also... [Pg.1142]

We have formulated a Markov decision process model to jointly optimize the maintenance and operational decisions of an offshore wind turbine farm. We have incorporated many key aspects of this problem including extended maintenance durations and changing wind conditions. [Pg.1145]

The key constructs in the PRISM property specification language, as it applies to Markov decision processes, are the P and R operators. The P operator refers to the probability of an event occurring; more precisely, to the probability that the observed execution of the model satisfies a given specification. The R operator is used to express properties that relate to rewards (more precisely, the expected value of a random variable associated with a particular reward structure), and since a model will often be decorated with multiple reward structures, we augment the R operator with a label. For example, to determine the mean time to exhaust the supply of cake filler we would specify the following property ... [Pg.2411]
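To make the semantics of the R operator concrete, the following minimal Python sketch (not PRISM syntax, and not the property the excerpt refers to) computes the expected accumulated reward, interpreted here as expected time, until a target state is first reached in a small Markov chain; the chain, the rewards, and the state names are invented for illustration.

```python
# Expected accumulated reward until a target state is first reached in a
# small discrete-time Markov chain.  The states, transition probabilities
# and per-step rewards below are invented for illustration only.
import numpy as np

states = ["full", "low", "empty"]            # "empty" plays the role of the target
P = np.array([[0.8, 0.2, 0.0],               # row-stochastic transition matrix
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
reward = np.array([1.0, 1.0, 0.0])           # e.g. one time unit accumulated per step
target = 2                                   # index of the target state "empty"

# The expected reward-to-target x satisfies x = reward + P @ x on the
# non-target states, with x[target] = 0, so solve the restricted linear system.
transient = [i for i in range(len(states)) if i != target]
A = np.eye(len(transient)) - P[np.ix_(transient, transient)]
x = np.linalg.solve(A, reward[transient])

for k, i in enumerate(transient):
    print(f"expected time from '{states[i]}': {x[k]:.2f}")
```

An R-type query against a reward structure labelled, say, "time" evaluates exactly this kind of expectation for the state the model starts in.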

Condition assessment of civil structures is a task dedicated to forecasting future structural performances based on current states and past performances and events. The concept of condition assessment is often integrated within a closed-loop decision, where structural conditions can be adapted based on system prognosis. Figure 1 illustrates a particular way to conduct condition assessment. In the process, various structural states are measured, which may include excitations (e.g., wind, vehicles) and responses (e.g., strain, acceleration). These measurements are processed to extract indicators (e.g., maximum strain, fundamental frequencies) of current structural performance. These indicators are stored in a database, and also used within a forecast model (e.g., time-dependent reliability, Markov decision process) that will lead to a prognosis on the structural system, enabling optimization of... [Pg.1711]

Methods that will be used in this study are partially derived from well-known methods in the fields of production/inventory models, queuing theory, and Markov Decision Processes. The other methods that will be used, apart from simulation, are all based on the use of Markov chains. In a continuous review situation, queuing models using Markov processes can be of much help. Queuing models assume that only the jobs or clients present in the system can be served, which is the main principle of production to order. Furthermore, all kinds of priority rules and distributions for demand and service times have been considered in the literature. Therefore we will use a queuing model in a continuous review situation. [Pg.10]
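As one example of the kind of continuous-review queuing calculation alluded to here, the following is a minimal sketch of the textbook M/M/1 steady-state measures; the arrival and service rates are arbitrary example values, not parameters from the study.

```python
# Steady-state performance measures of an M/M/1 queue (Poisson arrivals,
# exponential service, single server).  The rates below are example values.
lam = 4.0        # arrival rate (jobs per unit time)
mu = 5.0         # service rate (jobs per unit time)

rho = lam / mu                   # server utilisation, must be < 1 for stability
L = rho / (1.0 - rho)            # mean number of jobs in the system
W = 1.0 / (mu - lam)             # mean time in the system (Little's law: L = lam * W)
Lq = rho**2 / (1.0 - rho)        # mean number of jobs waiting in the queue
Wq = rho / (mu - lam)            # mean waiting time before service starts

print(f"utilisation={rho:.2f}  L={L:.2f}  W={W:.2f}  Lq={Lq:.2f}  Wq={Wq:.2f}")
```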

In order to give a good description of the problem, we shall model it as a Markov Decision Problem (MDP). Markov Decision Processes were first studied by Bellman (1957) and Howard (1960). We will first give a short description of an MDP in general. Suppose a system is observed at discrete points in time. At each time point the system may be in one of a finite number of states, labeled 1, 2, ..., M. If, at time t, the system is in state i, one may choose an action a from a finite action space A. This action results in a probability P_ij^a of finding the system in state j at time t+1. Furthermore, a cost q_i^a has to be paid when action a is taken in state i. [Pg.37]
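The following is a minimal sketch, not part of the original text, of value iteration for an MDP stated in exactly this form: a finite state set {1, ..., M}, a finite action space A, transition probabilities P_ij^a, and one-step costs q_i^a. The discount factor, the action names, and all numerical values are invented for illustration.

```python
# Value iteration for a small discounted-cost MDP with states 1..M,
# actions a in A, transition probabilities P[a][i][j] and costs q[a][i].
# All numbers below are toy values chosen only to make the sketch runnable.
import numpy as np

M = 3                                # number of states
actions = ["repair", "wait"]         # hypothetical action space A

P = {                                # P[a]: row-stochastic matrix per action
    "repair": np.array([[0.9, 0.1, 0.0],
                        [0.8, 0.1, 0.1],
                        [0.7, 0.2, 0.1]]),
    "wait":   np.array([[0.6, 0.3, 0.1],
                        [0.0, 0.6, 0.4],
                        [0.0, 0.0, 1.0]]),
}
q = {                                # q[a][i]: one-step cost of action a in state i
    "repair": np.array([5.0, 5.0, 5.0]),
    "wait":   np.array([0.0, 1.0, 10.0]),
}
beta = 0.9                           # discount factor

V = np.zeros(M)
for _ in range(1000):                # Bellman update: V(i) <- min_a [ q_i^a + beta * sum_j P_ij^a V(j) ]
    V_new = np.min([q[a] + beta * P[a] @ V for a in actions], axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = [actions[int(np.argmin([q[a][i] + beta * P[a][i] @ V for a in actions]))]
          for i in range(M)]
print(V, policy)
```

Policy iteration or linear programming would solve the same model; value iteration is shown here only because it maps most directly onto the recursion in the text.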

Figure 24.3 Strategic pathway of Bayesian Markov model showing decision points for diagnosis of pancreatic cancer [14]. Bx = biopsy; Dx = diagnosis.
Inductive logic programming (ILP) is not a pharmacophore generation method by itself, but a subfield of machine learning. Other methods in this field, such as hidden Markov models, Bayesian learning, decision trees, and logic programs, are also available. [Pg.44]

F. A. Sonnenberg and J. R. Beck, Markov models in medical decision making: a practical guide. Med Decis Making 13: 322-338 (1993).

Stimulation, environmental vs. task, 1357, 1358
STL (stereo lithography format), 208
Stochastic approximation, 2634-2635
Stochastic counterpart method, 2635
Stochastic decision trees, 2384, 2385
Stochastic models, 2146-2170
  benefits of mathematical analysis of, 2146
  definition of, 2146, 2150
  Markov chains, 2150-2156
    in continuous time, 2154-2156
    and Markov property, 2150-2151
    queueing model based on, 2153-2154... [Pg.2782]

Sonnenberg, F. A., Beck, J. R. 1993. Markov Models in Medical Decision Making: A Practical Guide. Medical Decision Making, 13: 322-338.

We describe some models of the spreading of disastrous events. This is a very important part of risk analysis. Despite the strong dependency of successive events, after some adjustments we can use Markov models for the description. This allows us to compute several characteristics of such a system. When we consider a system of objects among which a disastrous event could spread, we can compute the probability distribution of absorbing states, first passage times for any of the objects, and many other quantities. This modeling can help us to make preventive decisions or to prepare disaster recovery plans. In the paper, the model is described and some computations are outlined. Keywords: risk, safety, successive event, disastrous event, Markov chain. [Pg.1127]
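As a rough illustration of the quantities mentioned (the distribution over absorbing states and the expected time to absorption), the sketch below uses the standard fundamental-matrix calculation for an absorbing Markov chain; the transition matrix and the state names are invented, not taken from the paper.

```python
# Absorption probabilities and expected times to absorption for an
# absorbing Markov chain, via the fundamental matrix N = (I - Q)^-1.
# The chain below (two transient objects, two absorbing outcomes) is illustrative.
import numpy as np

# State order: [obj_A, obj_B] transient, [contained, destroyed] absorbing.
Q = np.array([[0.5, 0.3],        # transient-to-transient part of the transition matrix
              [0.2, 0.4]])
R = np.array([[0.1, 0.1],        # transient-to-absorbing part
              [0.1, 0.3]])

N = np.linalg.inv(np.eye(2) - Q)     # expected number of visits to each transient state
B = N @ R                            # B[i, k] = probability of ending in absorbing state k from i
t = N @ np.ones(2)                   # t[i] = expected number of steps until absorption from i

print("absorption probabilities:\n", B)
print("expected steps to absorption:", t)
```

First passage times between non-absorbing objects can be obtained the same way by temporarily treating the object of interest as absorbing.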

