Averages, definition


Average Definition Methods Section  [c.42]

For example, the time average definition of the Lyapunov exponent for one-dimensional maps, $\lambda = \lim_{N\to\infty}\frac{1}{N}\sum_{i=0}^{N-1}\ln|f'(x_i)|$ (which is often difficult to calculate in practice).  [c.208]
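
A minimal numerical sketch of this time average, using the logistic map f(x) = r·x·(1−x) as an illustrative example (the map, parameter value, and iteration counts are choices made here, not taken from the source):

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
    as the time average of ln|f'(x_i)| along a trajectory."""
    x = x0
    for _ in range(n_transient):      # discard transient behavior
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)|
        x = r * x * (1.0 - x)
    return total / n_iter

print(lyapunov_logistic(4.0))   # chaotic regime; expected value is ln 2 ~ 0.693
```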

It is generally well known that, for most averages, differences between ensembles disappear in the thermodynamic limit. However, for finite-sized systems of the kind studied in simulations, it is necessary to consider the differences between ensembles, which will be significant for mean-squared values (fluctuations) and, more generally, for the probability distributions of measured quantities. For example, energy fluctuations in the constant-NVE ensemble are (by definition) zero, whereas in the constant-NVT ensemble they are not. Since these points have a bearing on various aspects of simulation methodology, we expand on them a little here.  [c.2246]
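
A small numerical illustration of the point (particle count and temperature are illustrative choices): in a canonical (constant-NVT) sample the kinetic energy fluctuates, with relative variance 2/(3N) for N classical particles, whereas in a constant-NVE run the total energy is fixed by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
N, kT = 100, 1.0    # illustrative particle count and temperature (mass m = 1)

# Canonical (NVT) sampling: each velocity component is Gaussian with variance kT/m.
v = rng.normal(0.0, np.sqrt(kT), size=(20_000, N, 3))
ke = 0.5 * (v**2).sum(axis=(1, 2))   # kinetic energy of each sampled configuration

rel_var = ke.var() / ke.mean()**2
print(rel_var, 2.0 / (3.0 * N))      # sampled vs analytic relative variance 2/(3N)
```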

The first requirement is the definition of a low-dimensional space of reaction coordinates that still captures the essential dynamics of the processes we consider. Motions in the perpendicular null space should have irrelevant detail and equilibrate fast, preferably on a time scale that is separated from the time scale of the essential motions. Motions in the two spaces are separated much as is done in the Born-Oppenheimer approximation. The average influence of the fast motions on the essential degrees of freedom must be taken into account; this concerns (i) correlations with positions, expressed in a potential of mean force, (ii) correlations with velocities, expressed in frictional terms, and (iii) an uncorrelated remainder that can be modeled by stochastic terms. Of course, this scheme is the general idea behind the well-known Langevin and Brownian dynamics.  [c.20]
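
A minimal sketch of the scheme in its overdamped (Brownian dynamics) form, assuming a one-dimensional reaction coordinate with a harmonic potential of mean force; the friction, temperature, and step parameters are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_dynamics(grad_pmf, x0, gamma=1.0, kT=1.0, dt=1e-3, n_steps=100_000):
    """Overdamped Langevin (Brownian) dynamics along a reaction coordinate:
    drift from the potential of mean force, friction gamma, stochastic kicks."""
    x = float(x0)
    traj = np.empty(n_steps)
    noise = np.sqrt(2.0 * kT * dt / gamma)
    for i in range(n_steps):
        x += -grad_pmf(x) * dt / gamma + noise * rng.normal()
        traj[i] = x
    return traj

traj = brownian_dynamics(grad_pmf=lambda x: x, x0=0.0)  # harmonic PMF U = x^2/2
print(traj.var())   # should approach kT = 1.0, the equilibrium variance
```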

This definition is based on and proportional to the g-expectation value. However, it is more useful since it is not necessary to evaluate the partition function to compute an average.  [c.201]
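
A minimal sketch of why no partition function is needed, using Metropolis sampling to compute a Boltzmann-weighted average (the potential, temperature, and step size are illustrative choices, not from the source): only ratios of Boltzmann factors enter the acceptance test, so Z cancels.

```python
import math, random

random.seed(1)
beta = 2.0                         # illustrative inverse temperature
U = lambda x: 0.5 * x * x          # illustrative potential: harmonic well

def metropolis_average(A, n_steps=200_000, step=1.0):
    """Estimate <A> = sum_x A(x) exp(-beta*U(x)) / Z without ever computing Z."""
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        if random.random() < math.exp(-beta * (U(x_new) - U(x))):
            x = x_new              # accept the trial move
        total += A(x)
    return total / n_steps

print(metropolis_average(lambda x: x * x))   # expect <x^2> = 1/beta = 0.5
```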

We assume that A is a symmetric and positive semi-definite matrix. The case of interest is when the largest eigenvalue of A is significantly larger than the norm of the derivative of the nonlinear force f. A may be a constant matrix, or else A = A(y) is assumed to be slowly changing along solution trajectories, in which case A will be evaluated at the current averaged position in the numerical schemes below. In the standard Verlet scheme, which yields approximations y_n to y(nΔt) via  [c.422]
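
For reference, a minimal sketch of the standard Verlet step in this notation, applied to an illustrative linear test problem y'' = −A·y + f(y) with f = 0 (the test parameters are choices made here, not from the source):

```python
import numpy as np

def verlet(y0, v0, force, dt, n_steps):
    """Standard Verlet: y_{n+1} = 2*y_n - y_{n-1} + dt^2 * F(y_n).
    Returns approximations y_n to y(n*dt)."""
    ys = [np.asarray(y0, dtype=float)]
    # Bootstrap the first step from a Taylor expansion using the initial velocity.
    ys.append(ys[0] + dt * np.asarray(v0) + 0.5 * dt**2 * force(ys[0]))
    for _ in range(n_steps - 1):
        ys.append(2.0 * ys[-1] - ys[-2] + dt**2 * force(ys[-1]))
    return np.array(ys)

A = np.array([[1.0]])                     # illustrative linear part
f = lambda y: np.zeros_like(y)            # nonlinear force omitted in this test
traj = verlet([1.0], [0.0], lambda y: -A @ y + f(y), dt=0.01, n_steps=1000)
print(traj[-1])                           # should track cos(n*dt) for y'' = -y
```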

The numerical value of the exponent k determines which moment we are defining, and we speak of these as moments about the value chosen for M_j. Thus the mean is the first moment of the distribution about the origin (M_j = 0), and the variance is the second moment about the mean (M_j = M̄). The statistical definition of moment is analogous to the definition of this quantity in physics. When M_j = 0, Eq. (1.11) defines the average value of M; this result was already used in writing Eq. (1.6) with k = 2.  [c.37]
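
A small sketch of the k-th moment about a chosen value M_j for a discrete distribution (the sample data are invented for illustration):

```python
import numpy as np

def moment(values, weights, k, about=0.0):
    """k-th moment of a discrete distribution about the value `about`:
    sum_i w_i * (M_i - about)**k / sum_i w_i."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(values, dtype=float)
    return np.sum(w * (m - about)**k) / np.sum(w)

M = np.array([100.0, 200.0, 300.0])   # illustrative values
n = np.array([1.0, 2.0, 1.0])         # illustrative number fractions
mean = moment(M, n, k=1, about=0.0)   # first moment about the origin = the mean
var = moment(M, n, k=2, about=mean)   # second moment about the mean = the variance
print(mean, var)
```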

Table 1.5 lists the different molecular weight averages most commonly encountered in polymer chemistry. Table 1.5 also includes the definition of these averages for easy reference, some experimental methods that produce them, and cross-references to sections of this volume where the specific techniques are discussed. Note that end group analysis produces a number average molecular weight, since it is a technique based on counting. This is especially evident when we compare end group analysis with the procedure for evaluating M_n in Example 1.5. In the latter, M_n is given by dividing the total mass of the sample by the total number of moles of polymer it contains. This is exactly what is done in end group analysis.  [c.41]
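
A short numerical illustration of that counting idea (the masses and molar masses are invented for illustration): dividing total sample mass by total moles of chains gives M_n directly.

```python
# Hypothetical end-group-style bookkeeping: total mass over total moles of chains.
masses_g = [1.0, 2.0, 1.5]                       # illustrative fraction masses, g
molar_masses = [10_000.0, 25_000.0, 50_000.0]    # illustrative molar masses, g/mol

moles = [m / M for m, M in zip(masses_g, molar_masses)]
Mn = sum(masses_g) / sum(moles)                  # number average molecular weight
print(f"Mn = {Mn:.0f} g/mol")
```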

It is the presence of ρ in the proportionality factor between η and M in the Debye viscosity equation that justifies the use of this quantity as a weighting factor in the definition of the viscosity average molecular weight [e.g., Eq. (2.37)]. Of the various parameters which appear in Eq. (2.56), only f is unfamiliar. While many polymers differ relatively little in l₀, ρ, or even M₀, it turns out that variations in f span a wide range. Defining a quantity such as f is easy, but if we attempt to assign a numerical value to it, we find ourselves at a loss. Accordingly, the following example gives us an idea of the magnitude of f on the basis of some experimental viscosity data.  [c.112]

In deriving these results we have focused attention on growth fronts originating elsewhere and crossing point x. We would count the same number if the growth originated at x and we evaluated the number of nucleation sites swept over by the growing front. This change of perspective is immediately applicable to a three-dimensional situation as follows. Suppose we let N represent the number of sites per unit volume (note that this is a different definition than given above) and assume that a spherical growth front emanates from each. Then the average number of fronts which cross nucleation sites in time t is  [c.224]
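
A quick numerical check of this change of perspective, under the common additional assumption (made here, not stated in the excerpt) that each front grows radially at a constant speed v: the fronts that have crossed a given point by time t are exactly those launched from sites within a sphere of radius vt, so the expected count is N times that sphere's volume.

```python
import math

N = 1e12     # nucleation sites per m^3 (assumed)
v = 1e-8     # growth-front speed, m/s (assumed)
t = 100.0    # elapsed time, s (assumed)

# Expected number of fronts crossing a point = N * volume of sphere of radius v*t.
n_fronts = N * (4.0 / 3.0) * math.pi * (v * t)**3
print(n_fronts)
```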

The number average degree of polymerization for these mixtures is easily obtained by recalling the definition of the average from Sec. 1.8. It is given by the sum of all possible n values, with each multiplied by its appropriate weighting factor, provided by Eq. (5.24)  [c.293]
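
Assuming Eq. (5.24) is the most probable (geometric) distribution of chain lengths, the sum n̄ = Σ n·P(n) can be checked numerically against the closed form 1/(1 − p); the extent of reaction p is an illustrative choice.

```python
p = 0.99     # illustrative extent of reaction
# Assumed form of Eq. (5.24), the most probable distribution: P(n) = p**(n-1) * (1-p).
n_avg = sum(n * p**(n - 1) * (1 - p) for n in range(1, 100_000))
print(n_avg, 1.0 / (1.0 - p))   # numerical sum vs closed form
```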

If we multiply the time elapsed per monomer added to a radical by the number of monomers in the average chain, then we obtain the time during which the radical exists. This is the definition of the radical lifetime. The number of monomers in a polymer chain is, of course, the degree of polymerization. Therefore we write  [c.373]
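
In code form, with numbers invented purely for illustration:

```python
# Radical lifetime = (time per monomer addition) * (average degree of polymerization).
time_per_monomer_s = 1e-3    # illustrative seconds per propagation step
degree_of_polym = 5000       # illustrative average chain length

lifetime_s = time_per_monomer_s * degree_of_polym
print(f"radical lifetime ~ {lifetime_s:.0f} s")
```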

Now we consider how the averaging implied by the overbar is carried out. What this involves is multiplying cos(s·r·cos γ) by P(γ) dγ, the probability that a particular angle lies between γ and γ + dγ, and then integrating the result over all values of γ, in keeping with the customary definition of an average quantity.  [c.700]
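
For isotropically distributed orientations, P(γ) dγ = ½ sin γ dγ, and the average works out to the familiar Debye result sin(sr)/(sr); a short numerical check (the values of s and r are arbitrary):

```python
import numpy as np

s, r = 2.0, 1.5                          # arbitrary illustrative values
gamma = np.linspace(0.0, np.pi, 200_001)
P = 0.5 * np.sin(gamma)                  # isotropic orientation distribution
dg = gamma[1] - gamma[0]

avg = np.sum(np.cos(s * r * np.cos(gamma)) * P) * dg   # numerical integration
print(avg, np.sin(s * r) / (s * r))      # matches Debye's sin(sr)/(sr)
```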

Fig. 13. Definition of effective average slopes of equilibrium line (45).
Coal Pyrolysis. Pyrolysis is the destructive distillation of coal in the absence of oxygen, typically at temperatures between 400 and 500°C (133). As the temperature of carbonaceous matter is increased, decomposition ultimately occurs. Melting and dehydration may also occur. Coals exhibit more or less definite decomposition temperatures, as indicated by melting and rapid evolution of volatile components, including potential fuel liquids, during destructive distillation (134). Table 10 summarizes an extensive survey of North American coals subjected to laboratory pyrolysis. The yields of light oils so derived average no more than ca 8.3 L/t (2 gal/short ton), and tar yields of ca 125 L/t (30 gal/short ton) are optimum for high volatile bituminous coals (135).  [c.92]

Obviously the shear rate in different parts of a mixing tank is different, and therefore there are several definitions of shear rate: (1) the average shear rate in the impeller region is proportional to N, with a proportionality constant that varies between 8 and 14 for all impeller types; (2) the maximum shear rate, proportional to tip speed (πND), occurs near the blade tip; (3) the average shear rate in the entire tank is an order of magnitude less than case 1; and (4) the minimum shear rate is about 25% of case 3.  [c.423]
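
A small worked sketch of these rules of thumb (the impeller speed, diameter, and constant are illustrative choices):

```python
import math

N = 2.0     # impeller speed, rev/s (illustrative)
D = 0.5     # impeller diameter, m (illustrative)
k = 11.0    # proportionality constant, typically quoted between 8 and 14

shear_impeller_avg = k * N                    # case 1: average shear near impeller, 1/s
tip_speed = math.pi * N * D                   # case 2 scales with tip speed, m/s
shear_tank_avg = shear_impeller_avg / 10.0    # case 3: roughly an order of magnitude lower
shear_tank_min = 0.25 * shear_tank_avg        # case 4: about 25% of case 3

print(shear_impeller_avg, tip_speed, shear_tank_avg, shear_tank_min)
```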

Regulation. The identification of health risks associated with asbestos fibers, together with the fact that huge quantities of these minerals were used (ca 5 million tons yearly) in a variety of applications, has prompted strict regulations to limit the maximum exposure to airborne fibers in workplace environments. These exposure limits may be defined as averaged or peak values, measured either as a weight or as a number of fibers per unit volume (cm³, m³), for fibers having lengths >5 μm and diameters <3 μm. The International Labor Organization has adopted the following definition (Convention 162, article 2d): "the term respirable asbestos fibers means asbestos fibers having a diameter of less than 3 μm and a length-to-diameter ratio greater than 3:1. Only fibers of a length greater than 5 μm shall be taken into account for purposes of measurement."  [c.356]

Time-Dependent Cascade Behavior. The period of time during which a cascade must be operated from start-up until the desired product material can be withdrawn is called the equilibrium time of the cascade. The equilibrium time of cascades utilizing processes having small values of α - 1 is a very important quantity. Often a cascade may prove to be quite impractical because of an excessively long equilibrium time. An estimate of the equilibrium time of a cascade can be obtained from the ratio of the enriched inventory of desired component at steady state to the average net upward transport of desired component over the entire transient period from start-up to steady state. In equation form this definition can be written as  [c.83]
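
In that spirit, a trivial numerical sketch of the ratio (both quantities are invented placeholders; the excerpt's own symbols and equation are not reproduced here):

```python
# Equilibrium time ~ (enriched inventory at steady state) / (average net upward transport).
inventory_mol = 5000.0        # illustrative enriched inventory, mol
transport_mol_per_day = 2.0   # illustrative average net upward transport, mol/day

t_equilibrium_days = inventory_mol / transport_mol_per_day
print(f"equilibrium time ~ {t_equilibrium_days:.0f} days")
```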

Overall Coefficient of Heat Transfer. In testing commercial heat-transfer equipment, it is not convenient to measure tube temperatures (t3 or t4 in Fig. 5-6), and hence the overall performance is expressed as an overall coefficient of heat transfer U based on a convenient area dA, which may be dA1, or an average of dA1 and dA2, whence, by definition,  [c.558]
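
The definition leads to the usual sum-of-resistances form; a small sketch for a plane wall with two film coefficients (all values are illustrative, and the curvature/area corrections for tubes are omitted):

```python
# Overall coefficient from series thermal resistances (plane-wall approximation).
h_inside = 1000.0    # inside film coefficient, W/(m^2 K)   (illustrative)
h_outside = 500.0    # outside film coefficient, W/(m^2 K)  (illustrative)
x_wall = 0.002       # wall thickness, m                    (illustrative)
k_wall = 16.0        # wall thermal conductivity, W/(m K)   (illustrative)

U = 1.0 / (1.0 / h_inside + x_wall / k_wall + 1.0 / h_outside)
print(f"U = {U:.0f} W/(m^2 K)")
```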

The particular learning curve is usually characterized by the percentage reduction in the cumulative average value Y when the number of units X is doubled. From this definition it follows that  [c.819]
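
From that definition the cumulative average follows Y = Y1·X^b with b = ln(s)/ln 2, where s is the learning-curve percentage; a quick check that doubling X multiplies Y by s (the numbers are illustrative):

```python
import math

s = 0.80                        # an "80% learning curve": doubling X cuts Y to 80%
b = math.log(s) / math.log(2.0)
Y1 = 100.0                      # illustrative value for the first unit

Y = lambda X: Y1 * X**b         # cumulative average value after X units
print(Y(10), Y(20), Y(20) / Y(10))   # final ratio should equal s = 0.80
```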

Entrainment Flooding. The early work of Souders and Brown [Ind. Eng. Chem., 26, 98 (1934)], based on a force balance on an average suspended droplet of liquid, led to the definition of a capacity parameter C_sb,  [c.1372]
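
The Souders-Brown force balance leads to a flooding velocity of the form U = C_sb·sqrt((ρ_L − ρ_V)/ρ_V); a small sketch (the value of the capacity parameter and the densities are illustrative):

```python
import math

C_sb = 0.09      # capacity parameter, m/s (illustrative magnitude)
rho_L = 800.0    # liquid density, kg/m^3 (illustrative)
rho_V = 2.5      # vapor density, kg/m^3 (illustrative)

# Drag balanced against gravity on an average suspended droplet gives:
U_flood = C_sb * math.sqrt((rho_L - rho_V) / rho_V)
print(f"flooding velocity ~ {U_flood:.2f} m/s")
```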

Among the different algorithms tested, a combination of a modified version of the MaxMin Distance and Forgy algorithms was found to be effective for the discrimination of AE hits from composites. The modification of the MaxMin Distance algorithm concerns the definition of the two starting clusters, which are selected as the point furthest from the mass centre and the point furthest from the previous one. A new cluster centre is created if D_m > T_m·D_av, where D_m is the maximum of the minimum distances between each pattern/AE hit and the existing cluster centres, D_av is the average between-cluster distance, and T_m is a user-specified parameter in the range [0,1]. The algorithm identifies cluster regions which are farthest apart and is therefore particularly useful either for extreme noise condition identification or for a first approximation of the initial cluster centres to be refined by the Forgy algorithm.  [c.40]
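
A compact sketch of the modified MaxMin Distance seeding described above (the data and the T_m value are illustrative, and the refinement step with the Forgy algorithm is omitted):

```python
import numpy as np

def maxmin_seeds(X, T_m=0.5):
    """Seed cluster centres per the modified MaxMin Distance scheme: start from
    the point furthest from the mass centre, then the point furthest from that;
    keep adding centres while D_m > T_m * D_av."""
    X = np.asarray(X, dtype=float)
    c0 = X[np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1))]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    centres = [c0, c1]
    while True:
        # Minimum distance from every pattern to its nearest existing centre.
        d_min = np.min([np.linalg.norm(X - c, axis=1) for c in centres], axis=0)
        i = np.argmax(d_min)              # candidate: the worst-covered pattern
        D_m = d_min[i]                    # maximum of the minimum distances
        pairs = [np.linalg.norm(a - b) for j, a in enumerate(centres)
                 for b in centres[j + 1:]]
        D_av = np.mean(pairs)             # average between-centre distance
        if D_m > T_m * D_av:
            centres.append(X[i])          # create a new cluster centre
        else:
            return np.array(centres)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in ([0, 0], [4, 0], [0, 4])])
print(maxmin_seeds(X, T_m=0.5))
```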

Other SFA studies complicate the picture. Chan and Horn [107] and Horn and Israelachvili [108] could explain anomalous viscosities in thin layers if the first layer or two of molecules were immobile and the remaining intervening liquid were of normal viscosity. Other interpretations are possible, and the hydrodynamics are not clear, since, as Granick points out [109], the measurements average over a wide range of surface separations, thus confusing the definition of a layer thickness. McKenna and co-workers [110] point out that compliance effects can introduce serious corrections in constrained geometry systems.  [c.246]

Note that the sums are restricted to the portion of the full S matrix that describes reaction (or the specific reactive process that is of interest). It is clear from this definition that the CRP is a highly averaged property where there is no information about individual quantum states, so it is of interest to develop methods that determine this probability directly from the Schrödinger equation rather than indirectly from the scattering matrix. In this section we first show how the CRP is related to the physically measurable rate constant, and then we discuss some rigorous and approximate methods for directly determining the CRP. Much of this discussion is adapted from Miller and coworkers [44, 45].  [c.990]

The summation averages to n. Using the definition of the diffusion coefficient, D = ε²/(2τ_j), and the diffusion time, equation B1.14.7 gives  [c.1540]
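
A quick random-walk check of this definition (step length ε, step time τ, and walker counts chosen arbitrarily): the mean squared displacement after n steps is n·ε², consistent with ⟨x²⟩ = 2Dt for D = ε²/(2τ) in one dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
eps, tau, n = 1.0, 1.0, 1000   # step length, step time, step count (illustrative)

steps = rng.choice([-eps, eps], size=(10_000, n))   # 10000 independent 1D walkers
msd = (steps.sum(axis=1)**2).mean()                 # mean squared displacement

D = eps**2 / (2.0 * tau)
print(msd, 2.0 * D * n * tau)   # <x^2> = 2*D*t with t = n*tau
```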

Equation (2.39) is the weight average molecular weight as defined in Sec. 1.8. It is important to note that this result, M_v = M_w, applies only in the case of nonentangled chains where η is directly proportional to M. A more general definition of M_v, for the case where η ∝ M^a, is  [c.106]
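
A small sketch comparing the averages for a discrete distribution: M_v uses weight fractions and the exponent a from η ∝ M^a, and it reduces to M_w at a = 1 (the sample data and the value of a are illustrative):

```python
import numpy as np

M = np.array([20_000.0, 50_000.0, 100_000.0])   # illustrative molar masses
n = np.array([5.0, 3.0, 2.0])                   # illustrative mole numbers
w = n * M / np.sum(n * M)                       # weight fractions

a = 0.7                                # illustrative Mark-Houwink-type exponent
Mn = np.sum(n * M) / np.sum(n)         # number average
Mw = np.sum(w * M)                     # weight average
Mv = np.sum(w * M**a) ** (1.0 / a)     # viscosity average; equals Mw when a = 1
print(Mn, Mw, Mv)                      # expect Mn <= Mv <= Mw for 0 < a < 1
```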

When air movement, clothing, or activity are not as specified in the definition of ET, Figure 4, derived from an equation developed by Fanger (6), may be used. Knowledge of the energy expended during the course of routine physical activities is also necessary, since the production of body heat increases in proportion to exercise intensity. Table 2 presents probable metabolic rates (or the energy cost) for various activities. However, for higher activity levels, the values given could be in error by as much as 50%. Engineering calculations should allow for this. The activity level of most people is a combination of activities or work-rest periods. A weighted average metabolic rate is generally satisfactory, provided that activities alternate several times per hour.  [c.358]
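
A minimal sketch of such a time-weighted average (the rates and time fractions are invented for illustration):

```python
# Time-weighted average metabolic rate over alternating work and rest periods.
rates_W = [115.0, 300.0]    # rest and work metabolic rates, W (illustrative)
fractions = [0.6, 0.4]      # fraction of each hour at each rate (illustrative)

avg_rate = sum(r * f for r, f in zip(rates_W, fractions))
print(f"weighted average metabolic rate = {avg_rate:.0f} W")
```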

Although there is no doubt that greenhouse gas concentrations and the radiative forcing are increasing, there is no unequivocal evidence that this forcing is actually causing a net warming of the earth. Analyses of global temperature trends since the 1860s show that the global temperature has increased about 0.5-0.7°C (51,52), but this number decreases to ~0.5°C when corrections for heat-island effects are considered (53). This is in reasonable agreement with modeling results which predict a temperature increase of ~1°C (54). However, most of the temperature increase occurred prior to the increases in CO2 (55). A detailed analysis of global temperature and CO2 concentration time-series from 1958 to 1988, the period of atmospheric CO2 measurements, shows an excellent positive correlation, but CO2 changes lag temperature changes by an average of 5 months (56). Thus, although there is strong evidence linking temperature and CO2 changes, cause and effect have not been demonstrated, and it is not clear which is the cause and which is the effect. The lack of a definitive relationship may also be obscured by changes in other factors that affect the earth's heat budget, such as increased atmospheric aerosols or cloud cover and natural climatic cycles. Global circulation models (GCMs) predict that the average global temperature will increase from 2.0 to 5.5°C as CO2 concentrations double from those of preindustrialized levels (57,58). The temperature increases are not expected to be uniformly distributed. The uncertainties in these models are large because many of the important feedback processes involving oceans and clouds are not adequately understood to properly incorporate them into the model system (see Atmospheric modeling).  [c.379]

Equation 2 gives the relation between stresses and accelerations obtained from momentum balances. To proceed further requires use of the constitutive equations, which codify the material properties through additional relations between the stresses (τ, etc) and the rates of strain (du/dx, etc). The constitutive equation for a given fluid is found empirically, or theoretically by use of some theory of material properties. The simplest model is one in which the various stresses are expressed as linear combinations of the rates of strain. When the fluid is homogeneous and isotropic, this relation leads to the Navier-Stokes equations. Fluids that obey these equations are by definition Newtonian. The conditions of homogeneity and isotropy ensure that only two material constants, the shear viscosity, μ, and the dilational viscosity, λ, are needed to describe the fluid. By defining pressure as the negative of the average of the three normal stresses, σ, one finds that λ = -(2/3)μ, hence only a single material viscosity μ is required. As defined, the pressure is usually identified with the thermodynamic pressure for purposes such as determining physical properties. Equations 4 and 5 show the constitutive equation so derived, and equation 6 shows the form taken by the equation of motion in the x-direction when these are inserted into the force balance.  [c.88]

The metal content of an ore is typically called the ore grade and is usually expressed as weight percent for most metals. For precious metals, however, grade is usually expressed in g/t (oz/short ton). Because the definition of ore is established by economic considerations, there is no upper limit to grade, ie, the richer, the better. There is frequently a lower limit or cutoff grade, however, based on process efficiency and economics. Table 1 shows the average grade of various metalliferous ores that can be processed economically. Also shown is an estimate of the world total reserve base for each metal. For many metals, ore grade depletion has been a serious problem. This is illustrated for copper by the decline in average copper yield for U.S. copper ores during the 1900s (Fig. 1). The ability of the copper industry to remain competitive while faced with this problem has been a challenge. Technical developments in leaching, and improvements in solution concentration, purification, and metal reduction (solvent extraction and electrowinning), have turned this problem into an opportunity for additional metal production. Leaching of large tonnages of low grade material accounts for about 35% of the U.S. primary copper production.  [c.158]

A further assessment is carried out through the definition and measurement of industry-average performance indexes relating to safety. These indexes have been established by the utilities, working with INPO, EPRI, and the suppliers (24). Each index bears on some aspect of safe operation of the nuclear power plant, ie, industrial safety accident rate, unplanned automatic scrams, collective radiation exposure, plant capability factor, and unplanned capability loss factor. Five-year goals are established for average performance of all U.S. plants for each of these performance indexes. A substantial improvement has been made in all of these indexes since the early 1980s. The goals which were set in 1990 to be achieved by 1995 were either met prior to 1995 or are expected to be met by the end of that year. International performance indexes very similar to those utilized in the United States have been established for nuclear plants elsewhere in the Western world. Measurement of performance against these indexes also shows significant improvement of reactor performance worldwide.  [c.237]

Ideally, available information on the toxicology of a material should allow the following to be determined as part of a hazard evaluation procedure: nature of potential adverse effects; relevance of the conditions of the toxicology studies to the practical in-use situation; the average response, range of responses, the presence of a hypersensitive group, and an indication of minimal or no-effects levels; identification of factors likely to modify the toxic response; effects of acute gross overexposure, ie, accident situations; effects of repeated exposures; recognition of adverse effects; assistance in the definition of allowable and nonallowable exposure conditions; assistance in the definition of monitoring requirements; guidance on the need for personal and collective protection measures; guidance on first-aid, antidotal, and medical support needs; and relevance of toxicity to coincidental disease and definition of "at risk" individuals, eg, pregnant and fertile females and genetically susceptible individuals.  [c.238]

Poly(ethylene terephthalate) [25038-59-9] (PET), with an oxygen permeability of 8 nmol/(m·s·GPa), is not considered a barrier polymer by the old definition; however, it is an adequate barrier polymer for holding carbon dioxide in a 2-L bottle for carbonated soft drinks. The solubility coefficients for carbon dioxide are much larger than for oxygen. For the case of the PET soft drink bottle, the principal mechanism for loss of carbon dioxide is by sorption in the bottle walls as 500 kPa (5 atm) of carbon dioxide equilibrates with the polymer (3). For an average wall thickness of 370 μm (14.5 mil) and a permeability of 40 nmol/(m·s·GPa), many months are required to lose enough carbon dioxide (15% of initial) to be objectionable.  [c.488]
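
As an order-of-magnitude sketch of how the quoted permeability and wall thickness combine (the bottle wall area is an assumed value, and this counts only steady permeation through the wall, not the wall-sorption term the passage highlights):

```python
# Steady permeation rate through the bottle wall: flux = P * dp / l, rate = flux * A.
P = 40e-9     # CO2 permeability, mol/(m*s*GPa), i.e., 40 nmol/(m*s*GPa)
dp = 0.5e-3   # partial-pressure difference, GPa (500 kPa)
l = 370e-6    # wall thickness, m
A = 0.06      # bottle wall area, m^2 (assumed)

rate = P * dp * A / l      # mol/s through the wall
print(rate * 86400 * 30)   # mol lost per month; consistent with a many-month timescale
```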

For cavitation in flow through orifices, Fig. 6-55 (Thorpe, Int. J. Multiphase Flow, 16, 1023-1045 [1990]) gives the critical cavitation number for inception of cavitation. To use this cavitation number in Eq. (6-207), the pressure p is the orifice backpressure downstream of the vena contracta after full pressure recovery, and V is the average velocity through the orifice. Fig. 6-55 includes data from Tullis and Govindarajan (ASCE J. Hydraul. Div., HY13, 417-430 [1973]) modified to use the same cavitation number definition; their data also include critical cavitation numbers for 30.50- and 59.70-cm pipes (12.00- to 23.50-in). Very roughly, compared with the 15.40-cm pipe, the cavitation number is about 20 percent greater for the 30.50-cm (12.01-in) pipe and about 40 percent greater for the 59.70-cm (23.50-in) diameter pipe. Inception of cavitation appears to be related to release of dissolved gas and not merely vaporization of the liquid. For further discussion of cavitation, see Eisenberg and Tulin (Streeter, Handbook of Fluid Dynamics, Sec. 12, McGraw-Hill, New York, 1961).  [c.671]

