Summing of events

Figure 11. Ratio of anticoincidence events to the sum of events recorded in anticoincidence and coincidence as a function of activity for 137Cs...
The impact of forest on the hydrology and climatology of a region must in effect be the sum of events taking place on individual forested areas, such as watersheds and basins. The interception of precipitation and evapotranspiration by the forest are major events which can be determined over a defined area. In Brazil, measurements of interception in sub-tropical forest were carried out by Freise (1934, 1936) almost fifty years ago. He found that 34% of precipitation became throughfall, 28%... [Pg.629]

Figure 1.27. Histograms of a data set containing n = 19 values; the x-range depicted is in all cases -0.5 to 5.5 inclusive, leaving one event to the left of the lower boundary. The number of classes is 6, 12, and 24, respectively; in the latter case the observed frequency never exceeds two per class, which is clearly insufficient for a χ²-test. (See text.) The superimposed normal distribution has the same area as the sum of events, n, times the bin width, namely 19, 8.5, and 4.25, respectively.
Summing of events, either as a result of coincident emission of gamma rays in the decay chain of the nuclide of interest or of random coincident emissions, can lead to significant losses from, or additions to, an otherwise clean peak (De Bruin and Blaauw 1992; Becker et al. 1994). While coincidence losses are not an issue for comparator NAA, calibration and/or computational correction must be applied (Debertin and Helmer 1988; Blaauw and Gelsema 1999) to arrive at true peak areas for other methods of calibration. [Pg.1603]

In fact, each linear polarizability itself consists of a sum of two terms, one potentially resonant and the other anti-resonant, corresponding to the two doorway events and the window events described above. The hyperpolarizability chosen in equation (B1.3.12) happens to belong to the generator. As noted, such three-colour generators cannot produce Class I spectroscopies (full quadrature with three colours is not possible). Only the two-colour generators are able to create the Class I Raman spectroscopies and, in any case, only two colours are normally used for the Class II Raman spectroscopies as well. [Pg.1191]

To analyze this case, we employ, as before, contour algebra (see Section IX). From Figure 14, it is noticed that Γ23 is a contour that surrounds the (2,3) conical intersection, Γ34 is the contour that surrounds the two (3,4) conical intersections, and Γ24 is a contour that surrounds all three conical intersections. According to contour algebra, the event that takes place along Γ24 is the sum of the events along each individual contour. Thus,... [Pg.712]

In so doing, we obtain the condition of maximum probability (or, more properly, minimum probable prediction error) for the entire distribution of events, that is, the most probable distribution. The minimization condition [condition (3-4)] requires that the sum of squares of the differences between μ and all of the values x_i be simultaneously as small as possible. We cannot change the x_i, which are experimental measurements, so the problem becomes one of selecting the value of μ that best satisfies condition (3-4). It is reasonable to suppose that μ, subject to the minimization condition, will be the arithmetic mean, x̄ = (Σ x_i)/n, provided that... [Pg.61]
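The final step can be made explicit with a one-line minimization (a sketch in the notation used above; μ is the value being estimated):

```latex
S(\mu) = \sum_{i=1}^{n} (x_i - \mu)^2, \qquad
\frac{dS}{d\mu} = -2\sum_{i=1}^{n} (x_i - \mu) = 0
\;\Longrightarrow\;
\mu = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}
```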

The theory of operation of such a protection scheme is based on the principle that in a balanced circuit the phasor sum of currents in the three healthy phases is zero, as illustrated in Figure 21.7, and the current through the grounded neutral is zero. In the event of a ground fault, i.e. when one of the phases becomes grounded, this balance is upset and the out-of-balance current flows through the grounded neutral. A healthy three-phase circuit, however,... [Pg.683]
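In symbols, the principle reads roughly as follows (a schematic sketch; the phase labels R, Y, B and the residual current I_n are assumed notation, not the book's):

```latex
\begin{align*}
\text{healthy (balanced):}\quad & \bar{I}_R + \bar{I}_Y + \bar{I}_B = 0
  \;\Rightarrow\; \bar{I}_n = 0 \\
\text{ground fault on one phase:}\quad & \bar{I}_n = \bar{I}_R + \bar{I}_Y + \bar{I}_B \neq 0
\end{align*}
```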

Risk is defined as the product of two factors: (1) the probability of an undesirable event and (2) the measured consequences of the undesirable event. Measured consequences may be stated in terms of financial loss, injuries, deaths, or other variables. Failure represents an inability to perform some required function. Reliability is the probability that a system or one of its components will perform its intended function under certain conditions for a specified period. The reliability of a system and its probability of failure are complementary in the sense that the sum of these two probabilities is unity. This chapter considers basic concepts and theorems of probability that find application in the estimation of risk and reliability. [Pg.541]
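Written out, the two definitions quoted above amount to (a minimal restatement, not the chapter's own notation):

```latex
\text{Risk} = P(\text{undesirable event}) \times C(\text{undesirable event}),
\qquad R + P_f = 1
```

where R is the reliability of the system and P_f its probability of failure over the same period.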

Theorem 1 says that the probability that A does not occur is one minus the probability that A occurs. Theorem 2 says that the probability of any event lies between 0 and 1. Theorem 3, the addition theorem, provides an alternative way of calculating the probability of the union of two events as the sum of their...
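Since the excerpt breaks off mid-sentence, here are the standard statements of the three theorems (assumed standard forms, not quoted from the source):

```latex
\begin{align*}
\text{Theorem 1:} &\quad P(\bar{A}) = 1 - P(A) \\
\text{Theorem 2:} &\quad 0 \le P(A) \le 1 \\
\text{Theorem 3 (addition theorem):} &\quad P(A \cup B) = P(A) + P(B) - P(A \cap B)
\end{align*}
```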

The mathematical properties of P(A), the probability of event A, are deduced from the following postulates governing the assignment of probabilities to the elements of a sample space, S. In the case of a discrete sample space (i.e., a sample space consisting of a finite number or countable infinitude of elements), these postulates require that the numbers assigned as probabilities to the elements of S be nonnegative and have a sum equal to 1. [Pg.566]
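For a discrete sample space S = {e1, e2, ...}, the two postulates, together with the usual definition of the probability of an event A as the sum over its elements, read (standard forms, not quoted from the source):

```latex
P(e_i) \ge 0 \;\;\text{for all } i, \qquad
\sum_i P(e_i) = 1, \qquad
P(A) = \sum_{e_i \in A} P(e_i)
```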

The probability of the top event T is the probability of the union of the events represented by the minimal cut sets. This probability usually can be approximated by the sum of the probabilities of these events. Suppose that all the basic events are independent and that their probabilities are... [Pg.599]
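A minimal sketch of that approximation, assuming independent basic events and hypothetical probabilities (function names and numbers are illustrative, not from the source):

```python
# Rare-event approximation for the top-event probability of a fault tree:
# P(T) = P(union of minimal cut sets) ~= sum of the cut-set probabilities,
# with each cut-set probability taken as the product of its (independent)
# basic-event probabilities.

def cut_set_probability(basic_event_probs):
    """Probability of a minimal cut set = product of its basic-event probabilities."""
    p = 1.0
    for q in basic_event_probs:
        p *= q
    return p

def top_event_probability(minimal_cut_sets):
    """Approximate P(top event) as the sum of the minimal-cut-set probabilities."""
    return sum(cut_set_probability(cs) for cs in minimal_cut_sets)

# Example: three minimal cut sets built from hypothetical basic-event probabilities
cut_sets = [
    [1e-3, 2e-2],        # cut set 1: two basic events must both occur
    [5e-4],              # cut set 2: a single basic event
    [1e-2, 1e-2, 1e-1],  # cut set 3: three basic events
]
print(f"P(T) ~= {top_event_probability(cut_sets):.3e}")
```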

The total risk, measured in terms of the average annual total number of people killed, is obtained by multiplying the number of people in each impact zone by the sum of the probabilities of the events affecting the zone, and summing the results. Therefore,... [Pg.612]
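A short sketch of the same calculation with hypothetical zone data (names and numbers are illustrative only):

```python
# Average annual fatalities: for each impact zone, multiply the number of
# people in the zone by the sum of the frequencies of the events affecting
# it, then sum over zones.

zones = [
    {"people": 200, "event_frequencies": [1e-4, 5e-5]},   # zone A
    {"people": 50,  "event_frequencies": [1e-4, 2e-4]},   # zone B
    {"people": 10,  "event_frequencies": [3e-3]},         # zone C
]

total_risk = sum(z["people"] * sum(z["event_frequencies"]) for z in zones)
print(f"Average annual number of people killed ~= {total_risk:.3f}")
```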

With time-domain data, the analyst must manually separate the individual frequencies and events that are contained in the complex waveform. This effort is complicated tremendously by the superposition of multiple frequencies. Note that, rather than overlaying each of the discrete frequencies as illustrated theoretically in Figure 43.18(a), actual time-domain data represents the sum of these frequencies as was illustrated in Figure 43.17. [Pg.685]
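A small sketch of that superposition, using arbitrary stand-in frequencies and amplitudes rather than the machine-train components shown in the figures cited:

```python
# Time-domain vibration data is the sum of the individual frequency
# components; an FFT separates them again.
import numpy as np

fs = 2000.0                                # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)            # 1 s of data
components = [(30.0, 1.0), (60.0, 0.5), (150.0, 0.2)]  # (frequency in Hz, amplitude)

# The measured waveform is the superposition (sum) of the discrete components
signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in components)

# The frequency-domain view recovers the individual components
spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
for f, a in components:
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:6.1f} Hz: recovered amplitude ~= {spectrum[idx]:.2f} (input {a})")
```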

Now consider the case where the system is perturbed randomly in space and time and F(t) represents a superposition of many avalanches (occurring simultaneously and independently). The total power spectrum is the (incoherent) sum of the individual contributions, each for a single relaxation event due to a single perturbation. [Pg.442]
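Schematically, and assuming each relaxation event decays exponentially with lifetime τ_i (an assumption added here for illustration, which gives each event a Lorentzian contribution):

```latex
S(f) = \sum_i S_i(f), \qquad
S_i(f) \propto \frac{\tau_i}{1 + (2\pi f \tau_i)^2}
```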

The total number of calories a person needs each day is the sum of the basal requirement plus the energy used for physical activities, as shown in Table 29.1. A relatively inactive person needs about 30% above basal requirements per day, a lightly active person needs about 50% above basal, and a very active person such as an athlete or construction worker may need 100% above basal requirements. Some endurance athletes in ultradistance events can use as many as 10,000 kcal/day above the basal level. Each day that your caloric intake is above what you use, fat is stored in your body and your weight rises. Each day that your caloric intake is below what you use, fat in your body is metabolized and your weight drops. [Pg.1170]
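As a quick arithmetic sketch of the percentages quoted above (the basal value of 1500 kcal/day is a made-up example input):

```python
# Daily energy need = basal requirement + activity allowance, expressed as a
# percentage above basal: 30% inactive, 50% lightly active, 100% very active.

ACTIVITY_FACTOR = {
    "inactive": 0.30,
    "lightly active": 0.50,
    "very active": 1.00,
}

def daily_calories(basal_kcal: float, activity: str) -> float:
    """Total daily requirement as basal plus the activity percentage above basal."""
    return basal_kcal * (1.0 + ACTIVITY_FACTOR[activity])

basal = 1500.0  # kcal/day, illustrative basal requirement
for level in ACTIVITY_FACTOR:
    print(f"{level:>14}: {daily_calories(basal, level):.0f} kcal/day")
```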

Exercise. Show that the density function of the sum of n independent, identical random variables with the common density function λe^(−λx) is given by λ(λx)^(n−1) e^(−λx)/(n−1)!. Note that the time intervals between events that occur by a Poisson process are exponentially distributed. [Pg.288]
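One route through the exercise is induction on n via convolution (a sketch; f_n denotes the density of the sum of n such variables):

```latex
f_1(x) = \lambda e^{-\lambda x}, \qquad
f_n(x) = \int_0^x f_{n-1}(y)\,\lambda e^{-\lambda(x-y)}\,dy
       = \frac{\lambda(\lambda x)^{n-1} e^{-\lambda x}}{(n-1)!}
```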

The LIF technique is extremely versatile. The determination of absolute intermediate species concentrations, however, needs either an independent calibration or knowledge of the fluorescence quantum yield, i.e., the ratio of radiative events (detectable fluorescence light) over the sum of all decay processes from the excited quantum state—including predissociation, collisional quenching, and energy transfer. This fraction may be quite small (some tenths of a percent, e.g., for the detection of the OH radical in a flame at ambient pressure) and will depend on the local flame composition, pressure, and temperature as well as on the excited electronic state and ro-vibronic level. Short-pulse techniques with picosecond lasers enable direct determination of the quantum yield [14] and permit study of the relevant energy transfer processes [17-20]. [Pg.5]
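In rate-constant form, the quantum yield described here can be sketched as (generic rate constants, not the chapter's notation):

```latex
\Phi_{\mathrm{fl}} =
  \frac{k_{\mathrm{rad}}}
       {k_{\mathrm{rad}} + k_{\mathrm{pre}} + k_{\mathrm{q}} + k_{\mathrm{ET}}}
```

with k_rad the radiative rate and k_pre, k_q, k_ET the rates of predissociation, collisional quenching, and energy transfer, respectively.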

Mechanism I illustrates an important requirement for reaction mechanisms. Because a mechanism is a summary of events at the molecular level, a mechanism must lead to the correct stoichiometry to be an accurate description of the chemical reaction. The sum of the steps of a mechanism must give the balanced stoichiometric equation for the overall chemical reaction. If it does not, the proposed mechanism must be discarded. In Mechanism I, the net result of two sequential elementary reactions is the observed reaction stoichiometry. [Pg.1051]
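As a generic textbook-style illustration of this rule (not the Mechanism I discussed in the source), consider two elementary steps whose intermediate cancels on addition:

```latex
\begin{align*}
\text{step 1:}\quad & \mathrm{NO_2 + NO_2 \longrightarrow NO_3 + NO} \\
\text{step 2:}\quad & \mathrm{NO_3 + CO \longrightarrow NO_2 + CO_2} \\
\text{sum:}\quad    & \mathrm{NO_2 + CO \longrightarrow NO + CO_2}
\end{align*}
```

The NO3 intermediate (and one NO2) cancels when the steps are added, leaving the observed overall stoichiometry.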

As a complement to any (and perhaps all) of the above methods, calorimetry can be utilized in developing an understanding of the overall energetic behavior of the binding event [20]. The overall thermodynamics of any molecular interaction is the sum of both the enthalpic and entropic energy components of the species involved [21]. While these measurements have historically been somewhat limited due to a requirement for a significant amount of protein, new techniques have alleviated the situation substantially [22]. [Pg.149]
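The energetic bookkeeping referred to here is usually written with the standard relations (K_a is the association constant of the binding event):

```latex
\Delta G = \Delta H - T\,\Delta S = -RT \ln K_a
```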

Factors that will influence the grouping are similarities in inventories, compositions, discharge rates, and discharge locations. Note that by grouping similar incidents, the frequency of occurrence of a grouped subset is the sum of the individual frequencies of the grouped events. [Pg.105]

The following is a table specification, or table shell, for the summary of adverse events by body system, preferred term, and maximum severity. As a rule for this summary, a patient should be counted only once at maximum severity within each subgrouping. Denominators should be calculated as the sum of all patients who had the given treatment in the demographics file. [Pg.147]
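A minimal sketch of that counting rule, using hypothetical records and variable names rather than the actual table-shell specification:

```python
# Within each body system / preferred term subgroup, a patient is counted
# only once, at the maximum severity reported.

SEVERITY_ORDER = {"MILD": 1, "MODERATE": 2, "SEVERE": 3}

adverse_events = [  # (patient id, body system, preferred term, severity) -- hypothetical
    ("001", "GASTROINTESTINAL", "NAUSEA",   "MILD"),
    ("001", "GASTROINTESTINAL", "NAUSEA",   "SEVERE"),   # same patient, same term
    ("001", "NERVOUS SYSTEM",   "HEADACHE", "MODERATE"),
    ("002", "GASTROINTESTINAL", "NAUSEA",   "MODERATE"),
]

# Keep one record per (body system, preferred term, patient): the maximum severity
max_severity = {}
for patient, soc, term, sev in adverse_events:
    key = (soc, term, patient)
    if key not in max_severity or SEVERITY_ORDER[sev] > SEVERITY_ORDER[max_severity[key]]:
        max_severity[key] = sev

# Tabulate patient counts by subgroup and severity
counts = {}
for (soc, term, _patient), sev in max_severity.items():
    counts[(soc, term, sev)] = counts.get((soc, term, sev), 0) + 1

n_treated = 25  # denominator: all treated patients from the demographics file (hypothetical)
for (soc, term, sev), n in sorted(counts.items()):
    print(f"{soc:<18} {term:<10} {sev:<8} {n:2d}/{n_treated} ({100.0 * n / n_treated:.1f}%)")
```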

