Big Chemical Encyclopedia


Determination of expectation

It is sometimes sufficient in practice just to determine the expectation and the variance. Multiplying eqn (5.51) by j and summing over all possible values of j, the equation for the expectation can be obtained. [Pg.108]

We see that the resulting equation contains a term involving the variance. [Pg.108]

In general, the treatment of bicomponential reactions is rather difficult. [Pg.108]

Sometimes the approximation E[ξ²(t)] = (E[ξ(t)])² is adopted. This assumption is equivalent to the condition that D²[ξ(t)] = 0, and so it corresponds to the omission of the stochastic character. [Pg.109]

This equation is analogous to that for the deterministic model. Similar kinds of equations can be derived for more general cases. [Pg.109]
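For illustration (a standard result in stochastic kinetics, not taken from the source): for a second-order decay X + X → products with a master-equation transition rate proportional to j(j − 1), multiplying by j and summing over j gives an expectation equation in which the variance D²[ξ] appears, so the equation for the expectation is not closed:

```latex
\frac{\mathrm{d}\,E[\xi(t)]}{\mathrm{d}t}
  = -k\,E\!\left[\xi(\xi-1)\right]
  = -k\left( E[\xi]^2 - E[\xi] + D^2[\xi] \right),
\qquad \text{since } E[\xi^2] = E[\xi]^2 + D^2[\xi].
```

Setting D²[ξ] = 0 removes the variance term and recovers an equation of the deterministic type.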


A number of laboratory procedures even more rapid than the accelerated Weather-Ometer test have been described for the determination of expected weatherability of coating asphalts. Research sponsored by the Asphalt Roofing Manufacturers Association describes a stepwise procedure to determine changes in the crude asphalt source (see Asphalt). [Pg.216]

Principles that depend on determination of expected values by the mathematics of probability theory are frequently criticized on the grounds that the theory holds only when trials are repeated many times. It is argued that, for certain types of decisions—for example, whether to finance a major expansion—expectation is meaningless since this type of decision is not made very often. According to the counterargument, even if the firm is not faced with a large number of repetitive decisions, it should apply the principle to many different decisions and thus realize the long-run effects. Moreover, even if the decision is unique, the only way to approach decisions for which probabilities are known is to behave as if the decision were a repetitive one and thus minimize expected cost or maximize expected revenue or profit. [Pg.2378]
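The expected-value principle described above can be sketched numerically. All figures below are hypothetical: a one-off decision (expand vs. defer) evaluated as if it were repetitive, by minimizing expected cost over known scenario probabilities.

```python
# Hypothetical decision: expand now vs. defer, under three demand scenarios.
probs = [0.2, 0.5, 0.3]           # P(low), P(medium), P(high) demand
cost_expand = [9.0, 4.0, 1.0]     # cost (M$) of expanding, per scenario
cost_defer = [2.0, 5.0, 8.0]      # cost (M$) of deferring, per scenario

def expected_cost(costs, probs):
    """Expectation of cost over the scenario distribution."""
    return sum(c * p for c, p in zip(costs, probs))

e_expand = expected_cost(cost_expand, probs)  # 0.2*9 + 0.5*4 + 0.3*1 = 4.1
e_defer = expected_cost(cost_defer, probs)    # 0.2*2 + 0.5*5 + 0.3*8 = 5.3
best = "expand" if e_expand < e_defer else "defer"
print(best, e_expand, e_defer)
```

Behaving "as if repetitive" simply means choosing the alternative with the lower expectation, here the expansion.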

Safety analysis uses logic structures that represent possible incidents. Methods such as fault-tree analysis, incident analysis, and decision tables are suitable for this purpose. Computation rules for the determination of the expected frequency of incidents must be formulated accordingly. In a broad sense, all mathematical simulation methods suited to determining stress states in technical installations and their parts become aids in safety analysis; these will be described in more detail later. Here, characteristic methods of direct significance for the system-related and prognostic aspects of safety analysis will be discussed first... [Pg.45]
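The fault-tree computation rules mentioned above can be sketched with the usual gate formulas for independent basic events; the tree and probabilities below are hypothetical illustrations, not from the source.

```python
# Fault-tree gate rules for independent basic events:
#   AND gate: all inputs must occur  -> probabilities multiply
#   OR gate:  any input suffices     -> P = 1 - prod(1 - p_i)

def and_gate(ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

def or_gate(ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical tree: top = (pump fails AND backup fails) OR (control fault)
p_pump, p_backup, p_control = 0.01, 0.05, 0.001
p_top = or_gate([and_gate([p_pump, p_backup]), p_control])
print(p_top)
```

Combined with a mission time or demand rate, such a top-event probability yields the expected incident frequency the passage refers to.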

In Europe, the most commonly used approach for the determination of expected seismic response based on pushover analysis is the N2 method (e.g., Fajfar 2000), which has been implemented in Eurocode 8 (CEN 2004), whereas the so-called coefficient method is used in the United States (e.g., FEMA 356, 2000). According to Eurocode 8 and FEMA 356, the target displacement (d_t and δ_t, respectively) can be determined as follows ... [Pg.103]
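As a hedged sketch of the FEMA 356 coefficient-method form (target displacement as a product of modification coefficients, spectral acceleration, and the effective period); the coefficient values and inputs below are placeholders for illustration, not code-prescribed values.

```python
import math

def target_displacement(C0, C1, C2, C3, Sa_g, Te, g=9.81):
    """Coefficient-method sketch (FEMA 356 style):
    delta_t = C0*C1*C2*C3 * Sa * (Te^2 / 4*pi^2) * g,
    with Sa given as a fraction of g and Te in seconds."""
    return C0 * C1 * C2 * C3 * Sa_g * g * Te**2 / (4 * math.pi**2)

# Placeholder coefficients and demand, for illustration only.
dt = target_displacement(C0=1.3, C1=1.0, C2=1.0, C3=1.0, Sa_g=0.5, Te=1.0)
print(round(dt, 4))  # target displacement in metres
```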

DETERMINING THE EXPECTED LONG-TIME STRENGTH OF 12Kh1MF STEEL TUBES by V.A. Burganova, L.V. Kokhman, P.A. Khalileev, V.A. Kuz'mina and L.P. [Pg.28]

The mathematical requirements for unique determination of the two slopes m1 and m2 are satisfied by these two measurements, provided that the second equation is not a linear combination of the first. In practice, however, because of experimental error, this is a minimum requirement and may be expected to yield the least reliable solution set for the system, just as establishing the slope of a straight line through the origin by one experimental point may be expected to yield the least reliable slope, inferior in this respect to the slope obtained from 2, 3, or p experimental points. In univariate problems, accepted practice dictates that we... [Pg.80]
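The point about slope reliability can be illustrated for the one-slope case: the least-squares slope of a line constrained through the origin is m = Σxy/Σx², and its error shrinks as more points are used. The data below are synthetic, with a known true slope and added Gaussian noise.

```python
import random

def slope_through_origin(xs, ys):
    """Least-squares slope of y = m*x (line constrained through the origin)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

random.seed(0)
true_m, sigma, trials = 2.0, 0.1, 2000
xs = [1.0, 2.0, 3.0, 4.0, 5.0]

err_one = err_all = 0.0
for _ in range(trials):
    ys = [true_m * x + random.gauss(0.0, sigma) for x in xs]
    err_one += (ys[0] / xs[0] - true_m) ** 2          # slope from one point
    err_all += (slope_through_origin(xs, ys) - true_m) ** 2  # all five points

rms_one = (err_one / trials) ** 0.5
rms_all = (err_all / trials) ** 0.5
print(rms_one, rms_all)  # the five-point slope has the smaller RMS error
```

Theoretically the single-point slope here has variance σ²/x₁² while the five-point slope has variance σ²/Σx², a far smaller value.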

Table 5.2 demonstrates how an uncorrected constant error affects our determination of k. The first three columns show the concentration of analyte, the true measured signal (no constant error) and the true value of k for five standards. As expected, the value of k is the same for each standard. In the fourth column a constant determinate error of +0.50 has been added to the measured signals. The corresponding values of k are shown in the last column. Note that a different value of k is obtained for each standard and that all values are greater than the true value. As we noted in Section 5B.2, this is a significant limitation to any single-point standardization. [Pg.118]
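The effect shown in Table 5.2 can be reproduced in a few lines (the numbers below are illustrative, not the table's): with k = S/C, a constant determinate error added to every signal yields a different, inflated apparent k for each standard.

```python
# Effect of an uncorrected constant error on a single-point standardization.
k_true = 1.00                       # assumed true sensitivity
bias = 0.50                         # constant determinate error added to every signal
concs = [1.0, 2.0, 3.0, 4.0, 5.0]   # analyte concentrations of the standards

# Apparent sensitivity k = (true signal + bias) / concentration
k_apparent = [(k_true * c + bias) / c for c in concs]
print(k_apparent)  # all values exceed k_true, and each standard gives a different k
```

Because the bias is a fixed offset, its relative contribution shrinks at higher concentration, which is why every standard disagrees.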

The goal of a collaborative test is to determine the expected magnitude of all three sources of error when a method is placed into general practice. When several analysts each analyze the same sample one time, the variation in their collective results (Figure 14.16b) includes contributions from random errors and those systematic errors (biases) unique to the analysts. Without additional information, the standard deviation for the pooled data cannot be used to separate the precision of the analysis from the systematic errors of the analysts. The position of the distribution, however, can be used to detect the presence of a systematic error in the method. [Pg.687]
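The decomposition can be illustrated with synthetic data: each analyst's single result carries both random error and an analyst bias, so the pooled spread mixes the two (and cannot separate them), while a shift of the pooled mean away from the known true value signals a method error. All parameters below are invented for the simulation.

```python
import random
import statistics

random.seed(42)
true_value = 10.0
method_bias = 0.30                      # systematic error of the method itself
sigma_random, sigma_analyst = 0.10, 0.20  # repeatability vs. analyst-to-analyst bias

# Each of 50 analysts analyzes the same sample once.
results = [true_value + method_bias
           + random.gauss(0.0, sigma_analyst)   # analyst's personal bias
           + random.gauss(0.0, sigma_random)    # random measurement error
           for _ in range(50)]

pooled_sd = statistics.stdev(results)   # mixes random and analyst contributions
pooled_mean = statistics.mean(results)
print(pooled_sd, pooled_mean - true_value)
```

The pooled standard deviation comes out near √(σ_random² + σ_analyst²), well above the repeatability alone, while the displaced mean exposes the method bias.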

Determination of the Gas-Phase Temperature. The development given above is in terms of interface conditions, bulk liquid temperature, and bulk gas enthalpy. Often the temperature of the vapor phase is important to the designer, either as one of the variables specified or as an important indicator of fogging conditions in the column. Such a condition would occur if the gas temperature equaled the saturation temperature, that is, the interface temperature. When fogging does occur, the column can no longer be expected to operate according to the relations presented herein but is basically out of control. [Pg.102]

The probability-density function for the normal distribution curve calculated from Eq. (9-95) by using the values of a, b, and c obtained in Example 10 is also compared with precise values in Table 9-10. In such symmetrical cases the best fit is to be expected when the median or 50 percentile XM is used in conjunction with the lower quartile or 25 percentile XL or with the upper quartile or 75 percentile XU. These statistics are frequently quoted, and determination of values of a, b, and c by using XM with XL and with XU is an indication of the symmetry of the curve. When the agreement is reasonable, the mean values so determined should be used to calculate the corresponding value of a. [Pg.825]
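The quartile fit described above can be sketched as follows: for a normal distribution the median estimates the mean, and each quartile sits 0.6745 standard deviations from it, so σ can be recovered from either the upper or the lower quartile and the two estimates compared as a symmetry check. The check below uses exact normal quantiles rather than data.

```python
from statistics import NormalDist

mu, sigma = 50.0, 8.0          # illustrative "true" distribution
d = NormalDist(mu, sigma)

x_l = d.inv_cdf(0.25)          # lower quartile, XL
x_m = d.inv_cdf(0.50)          # median, XM
x_u = d.inv_cdf(0.75)          # upper quartile, XU

mu_est = x_m                                 # median estimates the mean
sigma_from_upper = (x_u - x_m) / 0.6745      # quartile offset = 0.6745 * sigma
sigma_from_lower = (x_m - x_l) / 0.6745
sigma_est = (sigma_from_upper + sigma_from_lower) / 2
print(mu_est, sigma_est)       # agreement of the two sigma estimates indicates symmetry
```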

A great variety of factors are in use, depending on the time available and the accuracy expected. Normally the input information required is the base cost. Determination of this cost usually requires a knowledge of equipment sizes, probably using mass and energy balances for the proposed process. [Pg.866]
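A minimal sketch of a factored estimate of the kind described: the installed plant cost is approximated as the base (purchased-equipment) cost, obtained from sized mass and energy balances, times an overall factor. The factor value below is illustrative only; real factors depend on process type and the accuracy level sought.

```python
def factored_estimate(base_cost, overall_factor):
    """Installed-cost estimate = base equipment cost x overall (Lang-type) factor."""
    return base_cost * overall_factor

base = 2.5e6    # purchased-equipment cost from equipment sizing ($, hypothetical)
factor = 4.7    # illustrative overall factor (process-type dependent)
print(factored_estimate(base, factor))
```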

Power dissipation can lead to temperature increases of up to 40°C in the mass. Note that evaporation of liquid as a result of this increase needs to be accounted for in determining liquid requirements for granulation. Liquid should be added through an atomizing nozzle to aid uniform liquid distribution in many cases. In addition, power intensity (kW/kg) has been used with some success to judge granulation end point and for scale-up, primarily due to its relationship to granule deformation [Holm loc. cit.]. Swept volume ratio is a preliminary estimate of expected power intensity. [Pg.1895]
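The evaporation correction mentioned above can be sketched with a simple energy balance, under the hypothetical assumption that a stated fraction of the dissipated energy goes into vaporizing the granulation liquid; every number below is illustrative.

```python
# Hedged energy-balance sketch: liquid evaporated during granulation from the
# power dissipated into the mass. All values are illustrative assumptions.
power_kw = 15.0       # dissipated power (kW = kJ/s)
time_s = 600.0        # granulation time (s)
frac_to_evap = 0.3    # assumed fraction of energy vaporizing liquid
h_vap = 2260.0        # latent heat of vaporization of water (kJ/kg)

m_evap = power_kw * time_s * frac_to_evap / h_vap  # kg of liquid lost to evaporation
liquid_target = 5.0                                # kg of liquid needed in the bed
liquid_to_add = liquid_target + m_evap             # compensate for the loss
print(m_evap, liquid_to_add)
```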

After understanding the problem, the second step is to conduct a theoretical screening to determine the expected thermal hazards of a system. Table A.1 identifies properties of materials to be considered, and some potential sources of information, in formulating an opinion about the thermal hazards of particular materials and reactions. [Pg.21]

In analyzing these data it is necessary to go beyond the curves and determine the expected behavior of the material with respect to notch sensitivity. Problems with notch sensitivity can often be corrected by modifying the processing steps and/or heat treatment. [Pg.234]

Similar accuracies have been found for thick, homogeneous, complex specimens when corrections for secondary excitation are also included. With appropriate standards, total accuracies of 2% have been demonstrated. Because the determination of the lighter elements (i.e., 5 < Z < 15) is more sensitive to the uncertainties in the data-base items listed above, less accuracy should be expected for these elements. [Pg.366]





© 2024 chempedia.info