
Subjective probability, theory

Belief measures (also commonly referred to as belief functions) aim to generalize the well-known interpretation of subjective probability theory, i.e., Bayesian probability as a subjective measure of uncertainty, to a broader concept of evidence. Like possibility and necessity measures, evidence theory is developed from dual measures of belief and plausibility (Klir and Yuan 1995). These measures express beliefs or judgments formulated from available evidence (Yager and Liu 2008). Although Dempster's original works were closely linked to classical probability theory, there are some significant distinctions between classical probability theory and evidence theory. [Pg.3842]

In order to compare various reacting-flow models, it is necessary to present them all in the same conceptual framework. In this book, a statistical approach based on the one-point, one-time joint probability density function (PDF) has been chosen as the common theoretical framework. A similar approach can be taken to describe turbulent flows (Pope 2000). This choice was made because nearly all CFD models currently in use for turbulent reacting flows can be expressed in terms of quantities derived from a joint PDF (e.g., low-order moments, conditional moments, conditional PDF, etc.). Ample introductory material on PDF methods is provided for readers unfamiliar with the subject area. Additional discussion on the application of PDF methods in turbulence can be found in Pope (2000). Some previous exposure to engineering statistics or elementary probability theory should suffice for understanding most of the material presented in this book. [Pg.15]
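As a rough illustration of the kinds of quantities derived from a joint PDF, the following sketch (entirely synthetic; the correlated velocity-scalar pair, coefficients, and sample size are assumptions, not taken from this book or Pope 2000) estimates low-order and conditional moments from samples of a joint distribution:

```python
# Estimate low-order and conditional moments from samples of a joint PDF.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(0.0, 1.0, n)               # fluctuating velocity samples
phi = 0.6 * u + rng.normal(0.0, 0.8, n)   # scalar correlated with u (illustrative model)

# Low-order moments of the joint PDF
mean_phi = phi.mean()
var_phi = phi.var()
cov_u_phi = np.mean((u - u.mean()) * (phi - phi.mean()))

# Conditional moment <phi | u ~ 1.0>, estimated by binning in u
mask = np.abs(u - 1.0) < 0.1
cond_mean = phi[mask].mean()              # close to 0.6 * 1.0 for this model

print(mean_phi, var_phi, cov_u_phi, cond_mean)
```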

The theory of decision making under risk tells people to maximize expected utility. In cases like the one I have just discussed, this means the same as utility averaged over many periods. The theory has been extended, however, to cover choice situations that do not repeat themselves day after day or year after year. In that case the decision maker is asked to rely on his "subjective probabilities" or, in less solemn language, on his informed hunches. The utility of each possible outcome of an action is weighted by the estimated probability of that outcome, to yield the expected utility of the action. The theory tells us to take the action that has associated with it the highest expected utility. In the next chapter I state my reasons for being skeptical about this extension of the theory. [Pg.36]
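A minimal sketch of this decision rule, with hypothetical actions, probabilities, and utilities chosen purely for illustration:

```python
# Expected-utility rule: weight each outcome's utility by its (subjective)
# probability, then pick the action with the highest expected utility.
actions = {
    "invest": [(0.3, 100.0), (0.7, -20.0)],   # (probability, utility), illustrative
    "hold":   [(1.0, 5.0)],
    "insure": [(0.9, -2.0), (0.1, 40.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for a, outs in actions.items():
    print(a, expected_utility(outs))

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("choose:", best)   # "invest": 0.3*100 + 0.7*(-20) = 16.0
```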

Ad b. This transformation is pure mathematics. It is the main subject of textbooks on probability theory. The subject has been freed from all contamination of reality by the axiomatics of Kolmogorov. ... [Pg.19]

By appropriate manipulation, it is possible to determine the expected value of various functions of X, which is the subject of probability theory. For example, the expected value of X is simply the sum of the values, each weighted by the probability of obtaining that value. [Pg.9]
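A short worked check of this statement using a fair die (values and probabilities assumed for illustration); more generally, E[f(X)] is the sum of f(x) weighted by p(x):

```python
# Expected values of functions of a discrete random variable X (fair die).
values = [1, 2, 3, 4, 5, 6]
p = [1/6] * 6

E_X  = sum(x * px for x, px in zip(values, p))      # 3.5
E_X2 = sum(x**2 * px for x, px in zip(values, p))   # 91/6 ~ 15.167
var_X = E_X2 - E_X**2                               # ~ 2.917
print(E_X, E_X2, var_X)
```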

The Dempster-Shafer theory, also known as the theory of belief functions, is a generalization of the Bayesian theory of subjective probability [30,42]. Whereas the Bayesian theory requires probabilities for each question of interest, belief functions allow us to base degrees of belief for one question on probabilities for a related question. These degrees of belief may or may not have the mathematical properties of probabilities; how much they differ from probabilities will depend on how closely the two questions are related. [Pg.28]
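A minimal sketch of the resulting dual measures, assuming a small illustrative frame and mass assignment (none of which come from [30,42]): belief sums the mass committed to subsets of a set A, plausibility sums the mass of every set intersecting A.

```python
# Belief and plausibility from a basic probability assignment m.
frame = frozenset({"a", "b", "c"})
m = {                                   # illustrative mass assignment
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    frame: 0.3,                         # mass left uncommitted on the whole frame
}

def bel(A):
    # belief: total mass committed to subsets of A
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    # plausibility: total mass of sets that intersect A
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"a", "b"})
print(bel(A), pl(A))                    # ~0.7, 1.0 -> belief interval [0.7, 1.0]
```

When all mass sits on singletons, bel and pl coincide and reduce to an ordinary probability measure, which is the special case discussed at the end of this entry.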

Molecular medicine provides quantifiable, objective criteria that can be used as the basis for treatment, particularly in the use of drugs specifically designed to correct abnormal biochemical processes. Molecular abnormalities in different parts of the body, including the brain, will become the language of neuropathology. Of course, these therapeutic decisions must involve subjective as well as objective criteria. Rather than search for ultimate causes of the patient's illness, we search for antecedent events or findings. Probability theory, particularly the use of Bayes theorem, plays an important role in our model of the diagnostic process. [Pg.173]
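As a hedged illustration of Bayes theorem in a diagnostic setting (the prevalence, sensitivity, and false-positive rate are hypothetical numbers, not clinical values):

```python
# Posterior probability of disease given a positive test, via Bayes theorem.
prior = 0.01            # prevalence, P(D)
sensitivity = 0.95      # P(+ | D)
false_pos = 0.05        # P(+ | not D)

p_pos = sensitivity * prior + false_pos * (1 - prior)   # total probability of +
posterior = sensitivity * prior / p_pos                 # P(D | +)
print(round(posterior, 3))   # ~0.161: one positive test alone is weak evidence
```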

According to one point of view, expressed by Laplace, dynamical systems like the Solar System are completely deterministic, so probability theory can have no relevance. But this point of view requires a God-like omniscience in being able to determine initial conditions exactly. This requires an infinite number of digits and is beyond the capacity of anybody or anything of finite size, including the observable Universe (Ford 1983) [Ref. 59]. In reality, measurement is only able to determine the state of a classical system to a finite number of digits, and even this determination is subject to errors, quite apart from quantum mechanics, whether it is made by human or machine. Such measurements limit the known or recorded motion to a range of possible orbits. [Pg.118]
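A small illustration of the point about finite-digit initial conditions, using the logistic map as a stand-in chaotic system (the map and parameters are chosen for brevity here, not taken from Ford 1983):

```python
# Two initial states agreeing to 10 digits diverge to order-one separation,
# so finite-precision data pin down only a range of possible orbits.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x0 = 0.3
print(logistic(x0), logistic(x0 + 1e-10))  # effectively unrelated values
```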

Thus we have argued that the engineer has to make use of propositions, of theories and data, which are highly variable in their testability and dependability. The next question is, obviously, how can we use the ideas presented in this chapter to help the engineer measure this variable dependability, even if the measurement has to be subjective? There is no accepted answer to this question today, but one purpose of the work described in Chapters 6 and 10 is to begin to provide a theoretical basis for such measurements. Firstly, we have to be convinced that the present methods of reliability theory based on probability theory are inadequate. In fact it will be argued in Chapter 5 that the present use of reliability theory confuses the four aspects of testability discussed earlier. We will demonstrate the limitations of probability theory as a measure of the testability or dependability of a theory. In Chapter 6 we will discuss the theoretical developments which may eventually lead us to measures of the various aspects of testability and dependability, and we will return to a discussion of this in Chapter 10. [Pg.45]

If we need to be able to deal with all types of uncertainty, can mathematics as a formal language help us? It is the purpose of this chapter to review briefly and qualitatively the basic ideas of mathematics, in particular logic and set theory, on which probability theory depends. The nature of probability and its application in reliability theory as applied to structural design, and the problems of applying it to estimate system uncertainty, are then discussed. It is not intended to cover the techniques associated with the theories, only the ideas behind them. Many texts are available on all the subjects touched here, to which reference will have to be made if techniques for handling the ideas are required. The purpose of the following discussion is to attempt to clarify the basis on which we work...

The subject of ultrafine particles (ufp) is perhaps currently the most challenging and interesting area in aerosol science. This subject entails the difficult area of nucleation processes to explain the origin of most ufp. Analysis of the evolution in size, composition, space and time of ufp involves one with current problems in statistical mechanics, kinetic theory, probability theory, quantum chemistry, etc. One also encounters very difficult measurement problems for ufp, although these will not be discussed here. [Pg.15]

The term P(D|H) represents the likelihood function and provides the probability of the observed data arising from the hypothesis. Normally, this is known because it expresses one's knowledge of how to look at the data, given that the hypothesis is true. The term P(H) or P(θ) is called the prior distribution, as it reflects one's prior knowledge before the data are considered. The advantage of Bayesian probability theory is that one's assumptions are made up front, and any element of subjectivity in the reasoning process is directly exposed [2]. [Pg.959]
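A minimal sketch of this prior-likelihood-posterior structure on a parameter grid; the coin-bias parameter theta, the flat prior, and the data are all assumed for illustration:

```python
# Posterior ~ likelihood x prior, evaluated over a grid of theta values.
import numpy as np

theta = np.linspace(0.01, 0.99, 99)          # grid over the parameter P(heads)
prior = np.ones_like(theta) / theta.size     # flat prior P(theta)
heads, flips = 7, 10
likelihood = theta**heads * (1 - theta)**(flips - heads)   # P(D | theta)

posterior = likelihood * prior
posterior /= posterior.sum()                 # normalize to sum to 1

print(theta[np.argmax(posterior)])           # MAP estimate, ~0.70
```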

The entropy S(p11, ..., pij, ...) is a function of a set of probabilities. The distribution of pij's that causes S to be maximal is the distribution that most fairly apportions the constrained scores between the individual outcomes. That is, the probability distribution is flat if there are no constraints, and follows the multiplication rule of probability theory if there are independent constraints. If there is a constraint, such as the average score on die rolls, and if it is not equal to the value expected from a uniform distribution, then maximum entropy predicts an exponential distribution of the probabilities. In Chapter 10, this exponential function will define the Boltzmann distribution law. With this law you can predict thermodynamic and physical properties of atoms and molecules, and their averages and fluctuations. However, first we need the machinery of thermodynamics, the subject of the next three chapters. [Pg.101]
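A minimal numerical sketch of this claim for a six-sided die with a constrained average score (the target mean of 4.5 is an assumed value): the maximum-entropy solution is exponential in the score, p_i proportional to exp(-beta*i), with beta fixed by matching the constraint.

```python
# Maximum-entropy distribution for a die constrained to a given mean score.
import numpy as np

scores = np.arange(1, 7)
target_mean = 4.5                      # constrained average (assumed value)

def mean_for(beta):
    w = np.exp(-beta * scores)
    return (scores * w).sum() / w.sum()

lo, hi = -10.0, 10.0                   # bisection on beta
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:    # mean_for decreases as beta grows
        lo = mid
    else:
        hi = mid

beta = 0.5 * (lo + hi)
p = np.exp(-beta * scores)
p /= p.sum()
print(beta, p, (scores * p).sum())     # mean ~ 4.5, p_i exponential in i
```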

Probability theory is contradicted by the fact that the healthy subject H31A replicates its selection of late areas. To answer these questions, observations of a sequence of linEN are necessary. [Pg.243]

Probability, Probability Distribution, and Cumulative Distribution Functions. A detailed analysis and description of probability theory are beyond the scope of this book. Instead, some of the basic concepts and simple distributions are presented. The interested reader is referred to Resnick [5], Valle-Riestra [6], and Rose [7] for further coverage of this subject. [Pg.330]

Due to the influence of measurement error, manufacturing, assembly, and other factors, there is uncertainty in the geometric dimensions and material property parameters (such as elastic modulus and Poisson's ratio) of mechanical components. This uncertainty significantly affects the reliability of the mechanical components. Therefore, it is important to choose appropriate distributions for the random variables before wear reliability analysis. The traditional probability theory method is one of the common methods used to deal with the uncertainty of variables. However, the method of probability and statistics is subject to restrictions on sample size, sampling time, and sample conditions. Sometimes, because the sample size is too small, it is impossible to obtain the probability density functions of the random variables. [Pg.751]
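A small sketch of the small-sample restriction mentioned above; the underlying distribution and sample size are assumptions chosen only to make the scatter visible:

```python
# With only a handful of observations, parameter estimates scatter too
# widely to pin down a probability density function for the variable.
import numpy as np

rng = np.random.default_rng(1)
true_mean, true_std, n = 10.0, 2.0, 5     # only 5 observations per "test"

estimates = [rng.normal(true_mean, true_std, n).mean() for _ in range(8)]
print([round(e, 2) for e in estimates])   # widely scattered sample means
```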

The problem in the analysis occurs when the database is incomplete or uncertain. There is then a need to rely on the knowledge and experience of experts. In the literature a new issue can be found, the so-called risk analysis under uncertainty conditions. There are two approaches related to this issue. The first approach assumes that the risk is a kind of uncertainty. Taking the uncertain data into account, it is possible to perform a risk analysis using the theory of subjective probability and neural Bayesian networks (Dempster 1967, Hahn et al. 2002, Haimes... [Pg.1473]

First, we find value in providing a qualitative assessment of the various types of evidence per se, which can be rated by an assessor. This approach identifies explicitly which evidence is actually available for the assignment of a subjective probability to the variable of interest. The approach by Flage and Aven (2009) in our view leads to some inconsistencies. For example, when no data are available but the models are based on very well established theories, the classification by Flage and Aven (2009) leads to moderate or even significant uncertainty. However, it is possible that even when no data at all are available to the assessor, the models are so accurate that the uncertainty regarding the evidence is minor. [Pg.1692]

It is well established that probabilistic values can be characterized by two general classes of interpretation: relative frequencies of an observed outcome and Bayesian probabilities (or so-called subjective probabilities). The class of subjective probabilities allows for a broader context of probability theory. This interpretation proposes that probability can be justified not necessarily on an objective or frequentist basis (a frequency of occurrence among trials) but applied to single-occurrence events, in the form of a measure of one's uncertainty about a particular event (Dubois and Prade 1988; Vick 2002). From this, Bayes theorem serves as a mathematical basis for manipulating relationships between prior and new probabilistic information. As such, the axioms of probability theory serve as a foundation for expressing uncertainty in multiple contexts. [Pg.3839]

Special Cases of Evidence Theory. At a fundamental level, belief functions are expressions of subjective degrees of belief (Shafer 1976). In this regard, subjective probabilities can be viewed as a special case of belief measures defined on singletons. When all available evidence is defined only on individual elements, belief and plausibility measures become probability measures (proof in Klir 2006). The evidence may therefore be described probabilistically (Ross 2010). Shafer (1976) refers to this special class of belief functions as Bayesian belief functions. [Pg.3845]

