Big Chemical Encyclopedia


Probabilistic relationships

As will be described in Section 9.3, direct methods are techniques that use probabilistic relationships among the phases to derive values of the individual phases from the experimentally measured amplitudes. In order to take advantage of these relationships, a necessary first step is the replacement of the usual structure factors, F, by the normalized structure factors (Hauptman and Karle, 1953),... [Pg.130]

A Bayesian (belief) network is a graphical model of probabilistic relationships among a set of variables that compactly represents their joint probability distribution. The Bayesian network has several advantages in modeling biological network systems ... [Pg.259]

Assumes a probabilistic relationship between intensities, quantified by the entropy H of the intensity distribution, where p is the probability of a given intensity. Used for image-based registration across different modalities, where there is a nonlinear intensity relationship between the images (e.g., CT to MRI) [26,27]. [Pg.40]
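As a sketch of how such an entropy-based similarity measure works, the snippet below estimates mutual information between two images from a joint intensity histogram. This is a hedged illustration (the bin count and test images are arbitrary assumptions, not the cited method's implementation); the point is that MI stays high even when the intensity relationship between the images is inverted or nonlinear, which is what makes it useful for multimodal registration:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=8):
    """Histogram-based mutual information: MI = H(A) + H(B) - H(A, B),
    where H is the Shannon entropy of the (joint) intensity distribution
    and p is the probability of an intensity bin."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()      # joint probability p(a, b)
    px = pxy.sum(axis=1)           # marginal p(a)
    py = pxy.sum(axis=0)           # marginal p(b)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = 255 - a                               # inverted intensities: fully dependent
c = rng.integers(0, 256, size=(64, 64))   # independent image

# The dependent pair scores far higher than the independent pair,
# even though the intensity mapping a -> b is not the identity.
print(mutual_information(a, b), mutual_information(a, c))
```

A linear correlation measure would treat the inverted pair as strongly *negatively* correlated and would fail entirely on a non-monotone mapping, while MI only asks whether one intensity distribution predicts the other.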

Kim et al. (2006): a BBN version of the HRA CREAM method; the BBN is used to introduce probabilistic relationships between the state of performance conditions and the control mode in CREAM, extending the deterministic relationships of the original method formulation. ME... [Pg.1081]

Wills CJ, Clahan KB (2006) Developing a map of geologically defined site-conditions categories for California. Bull Seismol Soc Am 96(4A):1483-1501. Wood HO, Neumann F (1931) Modified Mercalli intensity scale of 1931. Bull Seismol Soc Am 21:277-283. Worden CB, Gerstenberger MC, Rhoades DA, Wald DJ (2012) Probabilistic relationships between ground-motion parameters and modified Mercalli intensity in California. Bull Seismol Soc Am 102(1):204-221... [Pg.260]

A BBN is a graphical network that represents probabilistic relationships among events in a network structure. [Pg.214]

The topology of the graph is used to indicate probabilistic relationships among the variables described in the nodes. The BBN on the right includes subjective indicators, like problem complexity and design effort. Thus, this network is meant to be populated with probabilities that are not all derived from statistical inference, but at least in part from expert opinion. [Pg.214]
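The kind of inference such a network supports can be sketched with a toy example. The node names below (problem complexity, design effort, defect) echo the subjective indicators mentioned above, but every probability is an illustrative assumption of the sort an expert might supply, not data from the source:

```python
from itertools import product

# Priors and conditional probability tables for a three-node BBN:
# Complexity -> Effort -> Defect (Defect also depends on Complexity).
p_complex = {True: 0.3, False: 0.7}            # P(high problem complexity)

# p_effort[e][c] = P(design effort = e | complexity = c)
p_effort = {True:  {True: 0.8, False: 0.4},
            False: {True: 0.2, False: 0.6}}

# p_defect[(c, e)] = P(defect | complexity = c, effort = e)
p_defect = {(True, True): 0.2,  (True, False): 0.7,
            (False, True): 0.05, (False, False): 0.2}

def p_defect_marginal():
    """Marginalize over the parents: P(d) = sum_{c,e} P(c) P(e|c) P(d|c,e)."""
    total = 0.0
    for c, e in product([True, False], repeat=2):
        total += p_complex[c] * p_effort[e][c] * p_defect[(c, e)]
    return total

print(round(p_defect_marginal(), 4))  # 0.188
```

Exact enumeration like this is only feasible for small networks; real BBN tools use the graph topology to factor the joint distribution and avoid summing over every variable combination.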

Many more such relationships can be derived in a similar manner (see [ma85] or [stan71]). For our purposes here, it will suffice to note that certain relationships among the critical exponents do exist and are in fact commonly exploited. Indeed, we shall soon see that certain estimates of critical behavior in probabilistic CA systems are predicated on the assumptions that (1) certain rules fall into the same universality class as directed percolation, and (2) the same relationships known to exist among critical exponents in directed percolation must also hold true for PCA (see Section 7.2). [Pg.332]

Relationships (61)-(63) admit a simple probabilistic interpretation in terms of a branching process. The reproducing particles of this process correspond to the reacted functional groups, distinguished by color i and label τ. The integer i characterizes the type Si of the monomeric unit to which a given group was attached at the moment τ of its formation. [Pg.200]

Any analysis of risk should recognize these distinctions in all of their essential features. A typical approach to acute risk separates the stochastic nature of discrete causal events from the deterministic consequences, which are treated using engineering methods such as mathematical models. Another tool of risk analysis is the risk profile, which graphs the probability of occurrence versus the severity of the consequences (e.g., the probability of a fish dying or of a person contracting liver cancer as a result of exposure to a specified environmental contaminant). In a way, this profile shows the functional relationship between the probabilistic and the deterministic parts of the problem by plotting probability against consequences. [Pg.92]
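A common way to build such a profile is as an exceedance curve: for each severity level, the probability that an outcome at least that severe occurs. The sketch below uses invented (severity, annual probability) event pairs purely for illustration:

```python
# Hypothetical discrete events: (consequence severity, annual probability).
# The numbers are placeholders, not data from any real assessment.
events = [(1, 0.1), (5, 0.02), (20, 0.005), (100, 0.001)]

def exceedance_curve(events):
    """Return (severity, P(consequence >= severity)) pairs, severity ascending.

    This is the 'risk profile' described above: probability of occurrence
    plotted against severity of consequences.
    """
    pts = []
    for s, _ in sorted(events):
        p = sum(prob for sev, prob in events if sev >= s)
        pts.append((s, p))
    return pts

print(exceedance_curve(events))
```

The probabilistic part of the problem supplies the event frequencies; the deterministic consequence modeling supplies the severity axis, so the curve couples the two halves exactly as the text describes.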

Toxic effects of exposures are calculated for a variety of exposure and effect combinations, assuming a probabilistic dose-effect relationship. Lethal and incapacitating responses (e.g. respiratory effects, topical skin effects or incapacitating eye effects) of varying degrees of severity are addressed. The model also distinguishes between effects resulting from vapour exposure and from exposure to liquid droplets. These primary effect probabilities are subsequently combined to afford overall casualty probabilities for lethality, severe incapacitation and incapacitation due to topical eye effects. [Pg.65]

The toxic effects model translates the exposure profiles into casualty probabilities for the personnel, assuming a probabilistic dose-effect relationship. The casualty levels and spectra can be obtained for various types of health effects (e.g. eye effects, inhalation, percutaneous), each subdivided into two levels (incapacitating and lethal), and for various protection levels (e.g. no protection, suit only, mask only, mask and suit, and collective protection). Table 1 gives a typical result for one scenario. In case no protection is used, 63% of the population dies due to inhalation of sarin and 25% dies due to percutaneous exposure. Clearly, when both mask and suit are worn, the casualty levels drop drastically. [Pg.68]
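A common functional form for a probabilistic dose-effect relationship of this kind is the probit model, where the casualty probability is the standard normal CDF of a linear function of log dose. The sketch below uses placeholder probit coefficients, not real agent toxicity parameters:

```python
import math

def casualty_probability(dose, a=-5.0, b=1.0):
    """Probit dose-effect relationship: P = Phi(a + b * ln(dose)).

    `dose` is a toxic load (e.g. a concentration-time product); `a` and `b`
    are illustrative placeholder coefficients, NOT real toxicity data.
    Phi is the standard normal CDF, computed here via erf.
    """
    probit = a + b * math.log(dose)
    return 0.5 * (1.0 + math.erf(probit / math.sqrt(2.0)))

# The dose at which the probit argument is zero gives a 50% response:
d50 = math.exp(5.0)
print(casualty_probability(d50))          # ~0.5
print(casualty_probability(10 * d50) > casualty_probability(d50))
```

With independent per-effect probabilities p_i of this form, an overall casualty probability can then be combined as 1 - prod(1 - p_i), matching the "primary effect probabilities are subsequently combined" step described above.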

Probabilistic methods can be applied in dose-response assessment when there is an understanding of the important parameters and their relationships, such as identification of the key determinants of human variation (e.g., metabolic polymorphisms, hormone levels, and cell replication rates), observation of the distributions of these variables, and valid models for combining these variables. With appropriate data and expert judgment, formal approaches to probabilistic risk assessment can be applied to provide insight into the overall extent and dominant sources of human variation and uncertainty. [Pg.203]
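The "valid models for combining these variables" step is usually Monte Carlo: sample each variability distribution and propagate the samples through the dose model. The snippet below is a minimal sketch with invented distribution parameters (the lognormal clearance and uniform uptake fraction are assumptions for illustration only):

```python
import random
import statistics

random.seed(1)

def sample_internal_dose(external_dose=1.0):
    """Draw one hypothetical individual's internal dose.

    Metabolic clearance varies lognormally across the population and the
    absorbed fraction varies uniformly; both parameterizations are
    illustrative placeholders, not measured human data.
    """
    clearance = random.lognormvariate(0.0, 0.4)   # metabolic variation
    uptake = random.uniform(0.6, 1.0)             # absorbed fraction
    return external_dose * uptake / clearance

doses = [sample_internal_dose() for _ in range(10_000)]
median = statistics.median(doses)
p95 = statistics.quantiles(doses, n=20)[-1]       # 95th-percentile individual
print(median, p95)
```

The spread between the median and the upper percentile is exactly the "overall extent of human variation" that the text says probabilistic assessment is meant to expose; a point estimate would report only a single number.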

The phase problem of X-ray crystallography may be defined as the problem of determining the phases φ of the normalized structure factors E when only the magnitudes |E| are given. Since there are many more reflections in a diffraction pattern than there are independent atoms in the corresponding crystal, the phase problem is overdetermined, and the existence of relationships among the measured magnitudes is implied. Direct methods (Hauptman and Karle, 1953) are ab initio probabilistic methods that seek to exploit these relationships, and the techniques of probability theory have identified the linear combinations of three phases whose Miller indices sum to... [Pg.132]
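The normalization step mentioned above (replacing F by E) can be sketched numerically. A common form is E_h = |F_h| / sqrt(ε Σ_j f_j²), with the sum over atomic scattering factors; the numbers below are invented for illustration:

```python
import math

def normalized_structure_factor(F_h, scattering_factors, epsilon=1.0):
    """E_h = |F_h| / sqrt(epsilon * sum_j f_j^2).

    A minimal sketch of the Hauptman-Karle normalization: the
    scattering_factors are the atomic f_j evaluated at the reflection's
    resolution, and epsilon is the statistical weight of the reflection
    class. The example values below are hypothetical.
    """
    return abs(F_h) / math.sqrt(epsilon * sum(f * f for f in scattering_factors))

# Ten hypothetical carbon-like atoms with f_j = 6.0 and |F| = 30:
E = normalized_structure_factor(30.0, [6.0] * 10)
print(E)
```

Unlike |F|, the |E| values are corrected for the fall-off of scattering with resolution, which is what makes the probabilistic phase relationships of direct methods statistically well behaved.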

The US Environmental Protection Agency (USEPA 1998) describes problem formulation as an iterative process with 4 main components: integration of available information, definition of assessment endpoints, definition of the conceptual model, and development of an analysis plan. These 4 components apply also to probabilistic assessments. In addition, it is useful to emphasize the importance of a 5th component: definition of the assessment scenarios. The relationships between all 5 components are depicted in Figure 2.1. Note that the bidirectional arrows represent the interdependency of the different components and imply that they may need to be revised iteratively as the formulation of the problem is refined. [Pg.11]

The first attempts to seek a closed expression for n(t) did not include an analysis of pair correlation of defects, even at the level of pair densities, and seem, at least, to be ambiguous. Such an approach, based on simple probabilistic considerations, was first used in [24] (see also [108]). Since here we are not treating explicitly the relationship between two-particle and higher-particle densities, it is difficult to make a correct estimate of the effect of overlap of the forbidden volumes of several closely lying defects. This leads to the need to introduce some a priori assumptions. A characteristic example is [24], where the implicit assumption of a chaotic distribution of defects through the reaction volume (along with partially taking it into account) led to a physically incorrect result - the absence of an effect of saturation of the defect concentration with dose (see Table 7.6 below and comments to it). [Pg.442]

There are innumerable reasons why the administration of a drug can lead to unexpected occurrences, some of which may be harmful. For example, hypersensitivity reactions may occur. These reactions may only show up in 1 in 10,000 patients, meaning that they would be probabilistically unlikely to have been seen in pre-approval clinical trials, and they may show a remarkable absence of a relationship between dose and severity (Olsson and Meyboom, 2006). The underlying mechanism for this reaction may be an inborn variation in metabolism, but in many cases the etiology is unknown. Another example is therapeutic ineffectiveness. Therapeutic ineffectiveness is not often considered to be a side-effect, but it is one of the most common unintended responses to a drug. It is a recognized reportable event in pharmacovigilance, especially when it occurs unexpectedly (Olsson and Meyboom, 2006). [Pg.204]

As a young scientist de Broglie had believed that the statistical nature of modern physics masks our ignorance of the underlying reality of the physical world, but for much of his life he also believed that this statistical nature is all that we can know. Toward the end of his life, however, de Broglie turned back toward the views of his youth, favoring causal relationships in place of the accepted probabilistic picture associated with quantum mechanics. See also Planck, Max; Schrödinger, Erwin. [Pg.6]

The TU summation (TUS) approach for the concentration addition model suffers from a serious drawback. The method is based on point-estimate assessments taken from the whole of the concentration-effect relationship, because the standard response is often set at the LC50, the EC50, or the NOEC. If the slopes of the concentration-response relationship for all compounds in the mixture are not considered, there is no way to determine how far away a TUS value is from the effect of concern (Solomon and Takacs 2002). An improvement would be to use whole concentration-response functions for concentration addition modeling, which is possible when adequate effect information for the individual components of a mixture is available, as has been demonstrated by Faust et al. (2000). This can also be done using a probabilistic approach by applying the following protocol (De Zwart and Posthuma 2005). [Pg.152]
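The TU summation point estimate criticized above is simple to state in code. The sketch below uses invented concentrations and EC50s; it illustrates exactly the limitation the text describes, since the result carries no information about the slopes of the individual concentration-response curves:

```python
def toxic_unit_sum(concentrations, ec50s):
    """Point-estimate concentration addition: TUS = sum_i (c_i / EC50_i).

    Under concentration addition, a mixture with TUS >= 1 is expected to
    reach the standard response level (here referenced to the EC50).
    Inputs are paired per compound; all numbers below are hypothetical.
    """
    assert len(concentrations) == len(ec50s)
    return sum(c / e for c, e in zip(concentrations, ec50s))

# Hypothetical three-compound mixture:
tus = toxic_unit_sum([0.5, 1.0, 0.2], [2.0, 4.0, 2.0])
print(tus)  # 0.6 toxic units, i.e. below the EC50-referenced effect level
```

The whole-curve refinement mentioned in the text would instead substitute the summed toxic units into full concentration-response functions, so that a TUS of, say, 0.6 maps to an actual predicted effect rather than just "less than the effect of concern".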

Volosin JS, Cardwell RD. 2002. Relationships between aquatic hazard quotients and probabilistic risk estimates: what is the significance of a hazard quotient > 1? Human Ecol Risk Assess 8:355-368. [Pg.366]







© 2024 chempedia.info