Big Chemical Encyclopedia


Probability distribution, component

Figure 2.5-1 illustrates the fact that probabilities are not precisely known but may be represented by a "bell-like" distribution, the amplitude of which expresses the degree of belief. The probability that a system will fail is calculated by combining component probabilities as unions (addition) and intersections (multiplication) according to the system logic. Instead of point values for these probabilities, distributions are used, which results in a distributed probability of system failure. This section discusses several methods for combining distributions, namely 1) convolution, 2) moments method, 3) Taylor's series, 4) Monte Carlo, and 5) discrete probability distributions (DPD). [Pg.56]
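A minimal Monte Carlo sketch of this combination scheme, assuming "bell-like" belief distributions for two component failure probabilities. The beta shapes, means, and the two-component AND/OR logic are illustrative assumptions, not taken from the text:

```python
import random
import statistics

random.seed(1)

N = 20_000

# Assumed belief distributions for two component failure probabilities,
# modeled as beta distributions with means 0.05 and 0.02 (illustrative).
p1_samples = [random.betavariate(5, 95) for _ in range(N)]   # mean 0.05
p2_samples = [random.betavariate(2, 98) for _ in range(N)]   # mean 0.02

# Combine per system logic: intersection (AND gate) multiplies,
# union (OR gate) adds via inclusion-exclusion.
p_and = [a * b for a, b in zip(p1_samples, p2_samples)]
p_or = [a + b - a * b for a, b in zip(p1_samples, p2_samples)]

print("mean P(AND) =", statistics.mean(p_and))
print("mean P(OR)  =", statistics.mean(p_or))
```

The result is not a point value but a sampled distribution of the system failure probability, whose spread reflects the spread of the component beliefs.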

This chapter is concerned with special probability distributions and techniques used in calculations of reliability and risk. Theorems and basic concepts of probability presented in Chapter 19 are applied to the determination of the reliability of complex systems in terms of the reliabilities of their components. The relationship between reliability and failure rate is explored in detail. Special probability distributions for failure time are discussed. The chapter concludes with a consideration of fault tree analysis and event tree analysis, two special techniques that figure prominently in hazard analysis and the evaluation of risk. [Pg.571]
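A small sketch of the reliability/failure-rate relationship for a system expressed in terms of its components, assuming constant (exponential) failure rates so that R(t) = exp(-λt); the rates, mission time, and series/parallel logic below are illustrative assumptions:

```python
import math

# Assumed constant failure rates (per hour) for three components.
rates = [1e-4, 2e-4, 5e-5]
t = 1000.0  # mission time, hours

# With a constant failure rate lam, reliability is R(t) = exp(-lam * t).
R = [math.exp(-lam * t) for lam in rates]

# Series logic (all components must work): reliabilities multiply.
R_series = math.prod(R)

# Parallel logic (any one component suffices): unreliabilities multiply.
R_parallel = 1.0 - math.prod(1.0 - r for r in R)

print(f"R_series   = {R_series:.4f}")
print(f"R_parallel = {R_parallel:.6f}")
```

For constant rates, the series result is equivalent to a single component whose failure rate is the sum of the component rates.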

We mentioned above that a typical problem for a Boltzmann Machine is to obtain a set of weights such that the states of the visible neurons take on some desired probability distribution. For example, the task may be to teach the net to learn that the first component of an N-component input vector has value +1 40% of the time. To accomplish this, a Boltzmann Machine uses the familiar gradient-descent technique, but not on the energy of the net; instead, it minimizes the relative entropy of the system. [Pg.534]
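A one-neuron sketch of this learning rule (my own minimal example, not the book's network): a single ±1 unit with bias b has p(s) ∝ exp(b·s), so the gradient of the relative entropy G = KL(target ∥ model) with respect to b reduces to the difference between the model and target means of s:

```python
import math

# Target: the unit takes the value +1 forty percent of the time.
target_p_plus = 0.40
target_mean = 2 * target_p_plus - 1       # <s> under the target distribution

b = 0.0
lr = 0.5
for _ in range(200):
    model_mean = math.tanh(b)             # <s> under p(s) ∝ exp(b*s)
    grad = model_mean - target_mean       # dG/db for G = KL(target || model)
    b -= lr * grad                        # gradient descent on relative entropy

p_plus = 1.0 / (1.0 + math.exp(-2 * b))   # learned probability of s = +1
print(f"learned p(+1) = {p_plus:.4f}")
```

In the full Boltzmann Machine the same gradient takes the form of a difference between clamped and free pair correlations; this scalar case keeps only the essential structure.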

It does not contain a probabilistic modeling component that simulates variability; therefore, it is not used to predict PbB probability distributions in exposed populations. Accordingly, the current version will not predict the probability that children exposed to lead in environmental media will have PbB concentrations exceeding a health-based level of concern (e.g., 10 µg/dL). Efforts are currently underway to explore applications of stochastic modeling methodologies to investigate variability in both exposure and biokinetic variables that will yield estimates of distributions of lead concentrations in blood, bone, and other tissues. [Pg.243]

Here, V is vector notation for the set of all component energies V_j, and λ_ij gives the coefficient of V_j in the ith run. The λ_j, without subscript i, indicate the values of λ in the target ensemble. The histograms collected in the runs are multidimensional in that they are tabulated as functions of the component energies as well as the order parameter. Similarly, the final result of the WHAM calculation is a multidimensional probability distribution in the V_j and the order parameter. [Pg.83]

In the standard HMC method, the 3N components of the vector p are usually drawn randomly from a Gaussian (Maxwellian) probability distribution... [Pg.295]
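A brief sketch of this momentum refresh, drawing each of the 3N components independently from a Gaussian (Maxwellian) distribution; the mass and inverse temperature values are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

# Assumed parameters: unit mass, inverse temperature beta = 2.
N_atoms = 1000
m, beta = 1.0, 2.0
sigma = (m / beta) ** 0.5   # each momentum component ~ N(0, m/beta)

# Draw the 3N momentum components afresh, as in standard HMC.
p = [random.gauss(0.0, sigma) for _ in range(3 * N_atoms)]

print("sample mean =", statistics.mean(p))
print("sample std  =", statistics.stdev(p))
```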

The histogram reweighting methodology for multicomponent systems [52-54] closely follows the one-component version described above. The probability distribution function for observing N1 particles of component 1 and N2 particles of component 2 with configurational energy in the vicinity of E for a GCMC simulation at imposed chemical potentials μ1 and μ2, respectively, at inverse temperature β in a box of volume V is... [Pg.369]

This result could be improved by assuming a more appropriate distribution function of T instead of a simple sinusoidal fluctuation however, this example—even with its assumptions—usefully illustrates the problem. Normally, probability distribution functions are chosen. If the concentrations and temperatures are correlated, the rate expression becomes very complicated. Bilger [47] has presented a form of a two-component mean-reaction rate when it is expanded about the mean states, as follows ... [Pg.218]

As in the Rouse model, the free energy is obtained from the probability distribution Prob by taking the logarithm; finally, the force exerted on a segment h (x-component) follows by taking the derivative of the free energy ... [Pg.119]

The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information... [Pg.16]

In 1983, Sasaki et al. obtained rough first approximations of the mid-infrared spectra of o-xylene, p-xylene and m-xylene from multi-component mixtures using entropy minimization [83-85]. However, in order to do so, an a priori estimate of the number S of observable species present was again needed. The basic idea behind the approach was (i) the determination of the basis functions/eigenvectors V associated with the data (three solutions were prepared) and (ii) the transformation of the basis vectors into pure component spectral estimates by determining the elements of a transformation matrix T of size S x S. The simplex optimization method was used to optimize the nine elements of the 3 x 3 matrix T to achieve entropy minimization, and the normalized second derivative of the spectra was used as a measure of the probability distribution. [Pg.177]
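A stripped-down sketch of entropy minimization for spectral unmixing. The synthetic spectra, mixing fractions, and the one-parameter grid search (standing in for the simplex optimization of the full transformation matrix) are all assumptions for illustration:

```python
import math

def gaussian(x, mu, width):
    return math.exp(-((x - mu) / width) ** 2)

# Two hypothetical non-overlapping "pure component" bands on a grid
# (stand-ins for real spectra; all numbers are assumptions).
xs = [i * 0.1 for i in range(200)]
pure_a = [gaussian(x, 5.0, 0.5) for x in xs]
pure_b = [gaussian(x, 15.0, 0.5) for x in xs]

# Two measured mixtures as linear combinations of the pure spectra.
mix1 = [0.7 * a + 0.3 * b for a, b in zip(pure_a, pure_b)]
mix2 = [0.2 * a + 0.8 * b for a, b in zip(pure_a, pure_b)]

def entropy(spec):
    """Shannon entropy of the normalized absolute spectrum."""
    mags = [abs(v) for v in spec]
    total = sum(mags)
    return -sum((m / total) * math.log(m / total) for m in mags if m > 0)

# One-parameter transformation s(c) = mix1 - c * mix2; the estimate is
# "simplest" (lowest entropy) when component b cancels, at c = 0.3/0.8.
best_c = min((c / 1000 for c in range(0, 1001)),
             key=lambda c: entropy([m1 - c * m2
                                    for m1, m2 in zip(mix1, mix2)]))
print("entropy-minimizing c =", best_c)
```

At the entropy minimum the transformed spectrum is proportional to the pure component a, which is the essence of recovering pure spectra from mixture data by minimizing entropy.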

We ask what is the probability of a given object spectrum value when the functional form of the spectrum o(x) is known, but its x displacement is unknown. Because of the unknown displacement, the probability distribution at each location is the same. Call this component probability pc such that... [Pg.119]

Exercise. Let X stand for the three components of the velocity of a molecule in a gas. Give its range and probability distribution. [Pg.3]

For any r-component stochastic process one may ignore a number of components, and the remaining s components again constitute a stochastic process. But if the r-component process is Markovian, the process formed by the s components is not necessarily Markovian. In the first example above each velocity component is itself Markovian; in chemical reactions, however, the future probability distribution of the amount of each chemical component is determined by the present amounts of all components. [Pg.76]

A finite Markov chain is one whose range consists of a finite number N of states. They have been extensively studied, because they are the simplest Markov processes that still exhibit most of the relevant features. The first probability distribution P_1(y, t) is an N-component vector p_n(t) (n = 1, 2, ..., N). The transition probability T(y2 | y1) is an N x N matrix. The Markov property (3.3) leads to the matrix equation... [Pg.90]
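A concrete sketch of this matrix form: iterating p(t + 1) = T p(t) for an assumed 3-state chain (the matrix entries are illustrative; column j holds the probabilities of leaving state j, so each column sums to 1) drives the distribution toward its stationary vector:

```python
# Assumed 3x3 column-stochastic transition matrix (illustrative values).
T = [
    [0.8, 0.1, 0.2],
    [0.1, 0.7, 0.3],
    [0.1, 0.2, 0.5],
]
N = 3

def step(T, p):
    """One application of the transition matrix: p_new = T p."""
    return [sum(T[n][m] * p[m] for m in range(N)) for n in range(N)]

p = [1.0, 0.0, 0.0]       # start surely in state 1
for _ in range(200):       # iterate toward the stationary distribution
    p = step(T, p)

print("stationary distribution ~", [round(x, 4) for x in p])
```

After enough steps the vector no longer changes under T, i.e. it is a time-independent solution of the chain's evolution equation.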

Of course, there may be more than one. Each φ is a time-independent solution of the master equation. When normalized it represents a stationary probability distribution of the system, provided its components are nonnegative. In the next section we shall show that this provision is satisfied. But first we shall distinguish some special forms of W. [Pg.101]

This completes the proof of the second lemma. A corollary is that for a time-independent solution either all components are nonnegative, or all nonpositive. For a stationary probability distribution one has, of course, p_n^s >= 0, because C = 1. [Pg.107]

M. Martin, D. P. Herman and G. Guiochon, "Probability distributions of the number of chromatographically resolved peaks and resolvable components in mixtures", Anal. Chem., 58, 2200 (1986). [Pg.16]

The outcome of the exposure equation is a dose. This dose varies because of the variability of the components in the equation. The probability distribution of the dose is generally quite difficult to calculate analytically, but can be fairly readily approximated using a Monte Carlo simulation. The simulation consists of numerous iterations. In an iteration, a single value for each component in the exposure equation is randomly sampled from its corresponding distribution. These component values are then substituted into the exposure equation, and the outcome (exposure) is explicitly calculated. The frequency distribution of the calculated values from numerous iterations is the simulated exposure distribution. The exposure equations and the probability distributions of the components are treated as known in the distributional results presented in this chapter. Thus, the simulated exposure distributions reflect exposure variability - but not uncertainty about these equations, the distributions of the components, and related assumptions. This uncertainty and its quantitative impact on the simulated exposure distribution are presented in Sielken et al. (1996). [Pg.481]
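The iteration scheme just described can be sketched directly; the exposure equation (dose = concentration x intake / body weight) and the component distributions below are illustrative assumptions, not those of the chapter:

```python
import random

random.seed(42)

def one_iteration():
    """One Monte Carlo iteration: sample each component, then compute dose."""
    conc = random.lognormvariate(0.0, 0.5)   # assumed concentration, mg/L
    intake = random.uniform(1.0, 3.0)        # assumed intake, L/day
    bw = max(random.gauss(70.0, 10.0), 40.0) # assumed body weight, kg
    return conc * intake / bw                # dose, mg/kg/day

# The frequency distribution of many iterations approximates the
# exposure (dose) distribution implied by the component distributions.
doses = sorted(one_iteration() for _ in range(50_000))

print("median dose =", doses[len(doses) // 2])
print("95th pct    =", doses[int(0.95 * len(doses))])
```

As the text notes, the spread of this simulated distribution reflects variability in the components, not uncertainty about the equation or the chosen distributions.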

In the Monte Carlo approach, there are no inherent limitations on the complexity of the exposure equation, the number of component variables, the probability distributions for the variable components, or the number of iterations. This freedom from limitations is especially useful in simulating the distributions of a LADD for the different exposure scenarios considered here. As its name suggests, a LADD is the average over all the days in an individual's lifetime of the dose of a chemical (e.g., atrazine, simazine, or both) received as a result of his or her exposure from one or more exposure pathways (e.g., water, diet, or herbicide handling). Because the exposure equation can explicitly consider each day individually, the values of the equation's variable components can vary from day to day and have different distributions for different ages and different lifespan projections. [Pg.481]

While the following two observations are not critical in the distributional characterization of the intake of atrazine and simazine from dietary consumption, these observations can be important in other situations. First, making the assumption that the residue concentration in an individual's food is the same every time that food is consumed (as in Equation (31.2)) exaggerates the variability in the intake distribution. Without this assumption, both the low and high percentiles of the intake distribution would be closer to the median intake, and the 95% lower bound on the MOE would increase. Second, when a sum is being characterized (such as the sum of intakes in Equation (31.2)), it is important to determine explicitly the probability distribution of the entire sum and not to attempt to infer the characteristics of the distribution of the sum indirectly from the distributions of its components. For example, the 95th percentile of a sum may be much smaller than the sum of the 95th percentiles of its components. [Pg.485]
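The second observation is easy to demonstrate numerically. Assuming two independent lognormal intake components (an illustrative choice, not the chapter's data), the 95th percentile of the sum falls well below the sum of the 95th percentiles, because the two components rarely take extreme values on the same draw:

```python
import random

random.seed(7)

def pctl(xs, q):
    """Empirical q-quantile of a sample."""
    ys = sorted(xs)
    return ys[int(q * len(ys))]

# Two hypothetical independent intake components (lognormal, illustrative).
n = 50_000
x = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
y = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
s = [a + b for a, b in zip(x, y)]   # distribution of the whole sum

p95_sum = pctl(s, 0.95)
sum_p95 = pctl(x, 0.95) + pctl(y, 0.95)
print(f"95th pct of the sum   = {p95_sum:.2f}")
print(f"sum of the 95th pcts  = {sum_p95:.2f}")
```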

Uncertainty of measurement comprises, in general, many components. Some of these components may be evaluated from the statistical distribution of the results of series of measurements and can be characterized by experimental standard deviations. The other components, which can also be characterized by standard deviations, are evaluated from assumed probability distributions based on experience or other information.2 [Pg.297]
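A short sketch of how such components are combined: a statistically evaluated component and a component from an assumed rectangular distribution (whose standard deviation is the half-width divided by the square root of 3) are added in quadrature. The numbers are illustrative assumptions:

```python
import math

# Component evaluated statistically (experimental standard deviation).
type_a = 0.12

# Component from an assumed rectangular distribution of half-width 0.30;
# the standard deviation of a rectangular pdf is half_width / sqrt(3).
half_width = 0.30
type_b = half_width / math.sqrt(3)

# Combined standard uncertainty: root-sum-of-squares of the components.
u_combined = math.sqrt(type_a**2 + type_b**2)
print(f"combined standard uncertainty = {u_combined:.3f}")
```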

