
Conditional probability computational procedure

The procedure used in FAVOR to calculate the CPI is based on conventional linear-elastic fracture mechanics, in which an existing flaw in the vessel wall extends when K_I ≥ K_Ic. K_I, the applied stress-intensity factor, expresses the potential to extend an existing flaw in the vessel material and is a function of the load magnitude and distribution, the flaw location, size, and shape, and the component geometry. K_Ic is the material's resistance to flaw extension and is a function of temperature, neutron irradiation, material chemical composition, and fabrication history. K_Ic is determined from the relationship  [Pg.383]

RT_NDT = adjusted reference temperature at the edge of the flaw = RT_NDT(u) + ΔRT_NDT [Pg.383]

ΔRT_NDT = an adjustment for neutron radiation degradation = 1.1 ΔT_30 (°C) for plates and forgings and 0.99 ΔT_30 (°C) for welds [Pg.383]

ΔT_30 (°C) is the predicted shift in the Charpy transition temperature at the 41 J (30 ft-lb) energy level (the factors 1.1 and 0.99 in the equation account for the epistemic uncertainty associated with the sampled Charpy shift values) [Pg.384]

P = bulk material phosphorus content (wt%)
Mn = bulk material manganese content (wt%) [Pg.384]
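The relations stated above can be collected in a short LaTeX summary. The unirradiated reference temperature RT_NDT(u) is an assumption here, since the excerpt truncates that term, and the actual K_Ic relationship referred to in the text is not reproduced.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Flaw-initiation criterion from linear-elastic fracture mechanics
\[ K_I \ge K_{Ic} \]
% Adjusted reference temperature at the edge of the flaw; RT_{NDT(u)}
% (the unirradiated value) is an assumption, as the excerpt truncates this term
\[ RT_{NDT} = RT_{NDT(u)} + \Delta RT_{NDT} \]
% Irradiation shift from the sampled Charpy 41 J (30 ft-lb) transition-temperature shift
\[ \Delta RT_{NDT} =
   \begin{cases}
     1.1\,\Delta T_{30}  & \text{plates and forgings}\\
     0.99\,\Delta T_{30} & \text{welds}
   \end{cases} \]
\end{document}
```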


The ML decoding procedure receives the coefficient sequence y = (y_1, ..., y_n) and outputs the decoded bit sequence b̂ = (b̂_1, ..., b̂_n). It uses the estimated probability distribution functions P_0 and P_1 to compute the conditional probability of observing a pair of coefficients y_{2i-1}, y_{2i} given that the corresponding encoded bit was 0 (respectively 1). These probabilities are given by ... [Pg.10]
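As an illustration of this decoding rule, here is a minimal Python sketch that scores each coefficient pair under two estimated densities and outputs the more likely bit. The Gaussian densities p0 and p1 and the independence of the two coefficients within a pair are assumptions made only for illustration; the excerpt does not specify the estimated distributions.

```python
import numpy as np
from scipy.stats import norm

# Placeholder conditional densities for a single coefficient given the encoded bit.
# The real P_0 and P_1 are estimated from data; Gaussians are assumed here only
# for illustration.
p0 = norm(loc=0.0, scale=1.0).pdf
p1 = norm(loc=1.0, scale=1.0).pdf

def ml_decode(y):
    """Maximum-likelihood decoding of a coefficient sequence y = (y_1, ..., y_n).

    Each encoded bit corresponds to the coefficient pair (y_{2i-1}, y_{2i});
    the bit is decoded as whichever hypothesis gives the pair the larger likelihood.
    """
    y = np.asarray(y, dtype=float)
    bits = []
    for i in range(0, len(y) - 1, 2):
        a, b = y[i], y[i + 1]
        # Coefficients within a pair are treated as conditionally independent (an assumption).
        like0 = p0(a) * p0(b)
        like1 = p1(a) * p1(b)
        bits.append(0 if like0 >= like1 else 1)
    return bits

# Example: decode four coefficients into two bits.
print(ml_decode([0.1, -0.2, 1.3, 0.9]))
```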

In the last contribution, Jensen and Valdebenito (Chapter 35) deal with an efficient computational procedure for the reliability-based optimization of uncertain stochastic linear dynamical systems. The reliability-based optimization problem is formulated as the minimization of an objective function for a specified failure probability. The probability that design conditions are satisfied within a given time interval is used as a measure of the system reliability. Approximation concepts are used to construct... [Pg.647]
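A schematic statement of such a formulation, written here only to fix ideas (the symbols below are illustrative and are not taken from the chapter), is:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Reliability-based optimization: minimize an objective subject to a constraint
% on the failure probability over a given time interval [0, T].
% Symbols (x: design variables, f: objective, P_F: failure probability,
% P_F^*: specified target) are illustrative only.
\[
  \min_{x}\; f(x)
  \quad \text{subject to} \quad
  P_F(x) \;=\; 1 - \Pr\bigl[\text{design conditions satisfied on } [0,T]\bigr]
  \;\le\; P_F^{*}
\]
\end{document}
```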

To understand the physical consequences of modulation, we assume that we can generate time series without any limitation on computer time or memory. This is, of course, an ideal condition, and in practice we shall have to deal with the numerical limits of the mathematical recipe that we adopt here to understand modulation. The reader might imagine that we have a box with infinitely many labelled balls. The label on each ball is a number X, and many balls carry the same X, so that the labels fit the probability density of Eq. (281). We randomly draw a ball from the box and, after reading its label, place it back in the box. This procedure implies that we are working with discrete rather than continuous numbers; however, we assume that the number of balls can be increased freely, so as to come arbitrarily close to the continuous prescription of Eq. (281). [Pg.453]
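A minimal Python sketch of this draw-with-replacement procedure is given below. Since Eq. (281) is not reproduced in the excerpt, a placeholder discretized density is used as a stand-in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the target density of Eq. (281), which is not given in the excerpt:
# a discretized density over a finite grid of labels X.
labels = np.linspace(0.0, 10.0, 1001)   # the "labels" on the balls
weights = np.exp(-labels)               # placeholder density (assumption)
weights /= weights.sum()                # normalize to a probability mass function

def draw_labels(n_draws):
    """Draw labels with replacement, mimicking picking a ball from the box,
    reading its label, and putting the ball back."""
    return rng.choice(labels, size=n_draws, replace=True, p=weights)

# Refining the grid (adding more "balls") brings the discrete draws arbitrarily
# close to sampling from the continuous density.
series = draw_labels(10_000)
print(series[:5])
```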

Fig. 7.8 also shows the results of a classical calculation and a quantum calculation that both confirm the prediction of the giant resonance based on the simple overlap criterion discussed above. The crosses in Fig. 7.8 are the results of classical Monte Carlo calculations, performed by choosing 200 different initial conditions in the classical phase space at I_0 = 57. The ionization probability in this case was defined as the excitation probability of actions beyond the cut-off action I_c = 86. This definition is motivated by experiments that, owing to stray fields and the particular experimental procedures, cannot distinguish between excitation above I_c = 86 and true ionization, i.e. excitation to the field-free hydrogen continuum. The crosses in Fig. 7.8 lie close to the full line and thus confirm the model prediction. The open squares are the results of quantum calculations within the one-dimensional SSE model. The computations were performed in the simplest way, i.e. no continuum was... [Pg.201]
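The counting step that defines the ionization probability in such a Monte Carlo calculation can be sketched as follows. The trajectory propagation is stubbed out as a hypothetical function final_action, since the actual driven classical dynamics are not given in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

I0 = 57        # initial action of the sampled ensemble
I_CUT = 86     # cut-off action; excitation beyond it is counted as "ionization"
N_TRAJ = 200   # number of classical initial conditions, as in the excerpt

def final_action(initial_angle):
    """Hypothetical stand-in for propagating one classical trajectory and
    returning its final action; the real calculation integrates the driven
    dynamics starting from action I0 and the given angle variable."""
    # Placeholder: random spread about I0, purely for illustration.
    return I0 + rng.normal(scale=20.0)

# Sample 200 initial conditions (here: angle variables on the torus of action I0).
angles = rng.uniform(0.0, 2.0 * np.pi, size=N_TRAJ)
finals = np.array([final_action(a) for a in angles])

# Ionization probability = fraction of trajectories excited beyond the cut-off action.
p_ion = np.mean(finals > I_CUT)
print(f"Monte Carlo ionization probability: {p_ion:.3f}")
```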

These procedures are repetitive, which makes them very amenable to automation. The resulting hardware, including robots and computers, would eliminate tedious tasks, allow a large number of samples to be handled, thereby increase the probability of finding suitable biocatalysts, and ultimately improve working conditions. [Pg.50]




