Air Quality Averaging Time


A very useful format in which to display air quality data for analysis is that of Fig. 4-8, which has as its abscissa averaging time expressed in two different time units and, as its ordinate, concentration of the pollutant at the receptor. This type of chart is called an arrowhead chart and includes enough information to characterize fully the variability of concentration at the receptor.  [c.53]
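The following is a minimal, hedged sketch (not a reproduction of the book's Fig. 4-8): it tabulates the maximum and a high-percentile concentration for several averaging times from synthetic hourly data, which is the kind of summary an arrowhead chart displays. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic hourly concentrations (e.g., ug/m3) for one year
hourly = rng.lognormal(mean=np.log(40.0), sigma=np.log(2.0), size=24 * 365)

def block_averages(series, hours):
    """Non-overlapping averages over a given averaging time in hours."""
    n = len(series) // hours
    return series[: n * hours].reshape(n, hours).mean(axis=1)

for hours, label in [(1, "1 h"), (8, "8 h"), (24, "1 day"), (24 * 30, "1 month")]:
    avg = block_averages(hourly, hours)
    print(f"{label:>8}: max = {avg.max():6.1f}  99th pct = {np.percentile(avg, 99):6.1f}")
```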

If air quality data at a receptor for any one averaging time are lognormally distributed, these data will plot as a straight line on log probability graph paper (Fig. 4-9), which bears the note Sg = 2.35. Sg is the standard geometric deviation about the geometric mean (the geometric mean is the nth root of the product of the n individual measurements).  [c.54]
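A minimal sketch of the two geometric statistics named above: the geometric mean is the nth root of the product of the n measurements (computed here via logarithms for numerical stability), and the standard geometric deviation Sg is the exponential of the (sample) standard deviation of the logged concentrations. The concentration values are illustrative only.

```python
import math

concentrations = [38.0, 55.0, 22.0, 71.0, 44.0, 93.0, 30.0]  # illustrative values, e.g. ug/m3
n = len(concentrations)

logs = [math.log(c) for c in concentrations]
mean_log = sum(logs) / n

geometric_mean = math.exp(mean_log)  # equals the nth root of the product of the n values
sg = math.exp(math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1)))  # standard geometric deviation

print(f"geometric mean = {geometric_mean:.1f}")
print(f"standard geometric deviation Sg = {sg:.2f}")
```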

In general, air quality data are classified as a function of time, location, and magnitude. Several statistical parameters may be used to characterize a group of air pollution concentrations, including the arithmetic mean, the median, and the geometric mean. These parameters may be determined over averaging times of up to 1 year. In addition to these three parameters, a measure of the variability of a data set, such as the standard deviation  [c.226]
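As a sketch of the summary parameters mentioned above, the snippet below first averages synthetic hourly data up to a daily averaging time and then reports the arithmetic mean, median, geometric mean, and standard deviation. The use of pandas resampling and the synthetic data are assumptions for illustration, not a prescribed procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hourly = pd.Series(
    rng.lognormal(np.log(35.0), np.log(1.8), size=24 * 365),           # synthetic ug/m3
    index=pd.date_range("2023-01-01", periods=24 * 365, freq="h"),
)

daily = hourly.resample("D").mean()  # 24-h averaging time

print("arithmetic mean :", round(daily.mean(), 1))
print("median          :", round(daily.median(), 1))
print("geometric mean  :", round(float(np.exp(np.log(daily).mean())), 1))
print("std deviation   :", round(daily.std(), 1))
```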

Larsen (18-21) has developed averaging time models for use in analysis and interpretation of air quality data. For urban areas where concentrations for a given averaging time tend to be lognormally distributed, that is, where a plot of the log of concentration versus the cumulative frequency of occurrence on a normal frequency distribution scale is nearly linear,  [c.316]
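A rough check of the lognormality assumption behind Larsen's averaging-time models, sketched under the stated idea: if log concentration plotted against the normal quantile of the cumulative frequency is nearly linear, the data are approximately lognormal. The data are synthetic; scipy supplies the normal quantile function.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
conc = np.sort(rng.lognormal(np.log(30.0), np.log(2.0), size=500))  # synthetic concentrations

# plotting positions (cumulative frequencies) and their normal quantiles
freq = (np.arange(1, conc.size + 1) - 0.5) / conc.size
z = norm.ppf(freq)

slope, intercept = np.polyfit(z, np.log(conc), 1)
r = np.corrcoef(z, np.log(conc))[0, 1]

print(f"fit: ln C = {intercept:.2f} + {slope:.2f} z   (r = {r:.4f})")
print(f"implied geometric mean = {np.exp(intercept):.1f}, Sg = {np.exp(slope):.2f}")
```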

In order to build new facilities or expand existing ones without harming the environment, it is desirable to assess the air pollution impact of a facility prior to its construction, rather than construct and monitor to determine the impact and whether it is necessary to retrofit additional controls. Potential air pollution impact is usually estimated through the use of air quality simulation models. A wide variety of models is available. They are usually distinguished by type of source, pollutant, transformations and removal, distance of transport, and averaging time. No attempt will be made here to list all the models in existence at the time of this writing.  [c.320]

In the United States, if the anticipated air pollution impact is sufficiently large, modeling has been a requirement for new sources in order to obtain a permit to construct. The modeling is conducted following guidance issued by the U.S. Environmental Protection Agency (56, 57). The meeting of all requirements is examined on a pollutant-by-pollutant basis. Using the assumptions of a design that will meet all emission requirements, the impact of the new source, which includes all new sources and changes to existing sources at this facility, is modeled to determine pollutant impact. This is usually done using a screening-type model such as SCREEN (58). The impacts are compared to the modeling significance levels for this pollutant for various averaging times. These levels are generally about 1/50 of the National Ambient Air Quality Standards. If the impact is less than the significance level, the permit can usually be obtained without additional modeling. If the impact is larger than the significance level, a radius is defined as the greatest distance to the point at which the impact falls to the significance level. Using this radius, a circle is drawn that defines the area of significance for this new facility. All sources emitting this pollutant (not only this facility, but all others) are then modeled to compare the anticipated impact with the National Ambient Air Quality Standards and with the Prevention of Significant Deterioration increments.  [c.338]
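A hedged sketch of the screening logic described above, not the EPA SCREEN model itself: modeled impacts are compared with significance levels for each averaging time and, where a level is exceeded, the radius of significance is taken as the greatest distance at which the impact falls back to that level. The significance levels, impacts, and distance table below are all hypothetical.

```python
# illustrative numbers only (ug/m3); not regulatory values
SIGNIFICANCE_LEVELS = {"1-h": 2000.0, "24-h": 5.0, "annual": 1.0}
modeled_impact = {"1-h": 150.0, "24-h": 7.5, "annual": 0.4}

# hypothetical screening output: (downwind distance in km, impact in ug/m3)
impact_vs_distance = [(0.5, 12.0), (1.0, 9.0), (2.0, 6.0), (5.0, 4.0), (10.0, 2.5)]

for avg_time, level in SIGNIFICANCE_LEVELS.items():
    impact = modeled_impact[avg_time]
    if impact < level:
        print(f"{avg_time}: {impact} < {level} ug/m3 -> below significance level")
    else:
        # radius of significance: farthest distance at which the impact still equals or exceeds the level
        radius = max(d for d, c in impact_vs_distance if c >= level)
        print(f"{avg_time}: {impact} >= {level} ug/m3 -> significance radius ~{radius} km")
```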

United States National Primary and Secondary Ambient Air Quality Standard, attained when the expected number of days per calendar year with maximum hourly average concentrations above 0.12 ppm is equal to or less than 1, as determined in a specified manner  [c.373]
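The snippet below sketches the attainment test just described: count the calendar days whose maximum hourly ozone average exceeds 0.12 ppm and compare the count with 1. The full regulatory procedure for computing "expected" exceedances (handling missing data and multi-year averaging) is not reproduced, and the hourly values are synthetic.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
hourly_ozone = pd.Series(
    rng.gamma(shape=3.0, scale=0.009, size=24 * 365),                  # synthetic ppm values
    index=pd.date_range("2023-01-01", periods=24 * 365, freq="h"),
)

daily_max = hourly_ozone.resample("D").max()          # maximum hourly average per day
exceedance_days = int((daily_max > 0.12).sum())

print(f"days with max hourly average > 0.12 ppm: {exceedance_days}")
print("attains standard" if exceedance_days <= 1 else "does not attain standard")
```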

Workers in the field of water resources are accustomed to thinking in terms of watersheds and watershed management. It was these people who introduced the term airshed to describe the geographic area requiring unified management for achieving air pollution control. The term airshed was not well received because its physical connotation is wrong. It was followed by the term air basin, which was closer to the mark but still had the wrong physical connotation, since unified air pollution control management is needed in flat land devoid of valleys and basins. The term that finally evolved was air quality control region, meaning the geographic area including the sources significant to production of air pollution in an urbanized area and the receptors significantly affected thereby. If long averaging time isopleths (i.e., lines of equal pollution concentration) of a pollutant such as suspended particulate matter are drawn on the map of an area, there will be an isopleth that is at essentially the same concentration as the background concentration. The area within this isopleth meets the description of an air quality control region.  [c.424]

While most of us are aware of the dangers posed by outdoor air pollution, awareness of airborne chemical and biological pollutants present indoors and their implications for human health is more limited. Some indoor air gases and pollutants such as radon, asbestos, carbon monoxide, biological contaminants, and volatile organic compounds (VOCs) pose a serious threat to our health and well-being. Over the past several decades, our exposure to indoor air pollutants is believed to have increased due to a variety of factors, including the construction of more tightly sealed buildings; reduced ventilation rates to save energy; the use of synthetic building materials and furnishings; and the use of chemically formulated personal care products, pesticides, and household cleaners. Since the average person spends an increasing amount of time indoors, it is important to understand the health risks posed by prolonged exposure to indoor pollutants and the energy and comfort implications of different methods to control and mitigate these pollutants in order to ensure acceptable indoor air quality.  [c.53]

On average, buildings with air conditioning that have an inadequate supply of fresh air are far more likely to suffer from poor indoor air quality than naturally ventilated buildings. On the other hand, one can find serious IAQ problems in naturally ventilated homes and apartment buildings as well.  [c.55]

Americans living in the fifty most congested cities spend an average of thirty-three hours each year stuck in traffic. Congestion causes much more than driver aggravation: air quality suffers, vehicle idling and stop-and-go traffic reduce fuel economy by as much as 30 percent, and we lose billions of dollars in productivity. These are the consequences as the automobile does what it is designed to do—transport a highly mobile population. Continued suburban expansion, reduction in household size, increase in the number of workers per household, and general changes in lifestyle have all contributed to increased travel demand and greater congestion.  [c.1144]

The conversion products, other than gas and hydrogen sulfide (H2S), are essentially a gasoline fraction that, after pretreatment, will be converted by catalytic reforming; an average quality distillate fraction to be sent to the gas oil pool; and an atmospheric residue or vacuum distillate and vacuum residue whose properties and impurity levels (S, N, Conr.  [c.400]

As the panels leave the press area, they may pass through two sensing units. The first is a blow detector, which can locate delaminations that may or may not be visible on inspection. Boards with blow areas are marked for removal and then are usually ground up, remilled, and recycled through the process. The second unit is an automatic thickness sensor which, by means of several sensing heads across the board, measures and averages the thickness across and along the board. These thickness measurements and the mat weights taken before the press tell the operators whether the product is in the proper thickness and density range and whether adjustments need to be made. This is the first product quality monitoring step, and it is critical in achieving maximum production of on-grade panel materials.  [c.393]
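An illustrative sketch (not any particular mill's software) of the second sensing step described above: average the readings from several thickness heads, combine the result with the pre-press mat weight to estimate density, and flag the panel if either value falls outside a target range. The target ranges, board dimensions, and weights are hypothetical.

```python
# hypothetical quality limits for a 19-mm panel
TARGET_THICKNESS_MM = (18.8, 19.2)
TARGET_DENSITY_KG_M3 = (700.0, 780.0)

def check_panel(head_readings_mm, mat_weight_kg, length_m, width_m):
    """Average the sensing-head readings and check thickness and density against the targets."""
    thickness = sum(head_readings_mm) / len(head_readings_mm)
    density = mat_weight_kg / (length_m * width_m * thickness / 1000.0)
    in_spec = (TARGET_THICKNESS_MM[0] <= thickness <= TARGET_THICKNESS_MM[1]
               and TARGET_DENSITY_KG_M3[0] <= density <= TARGET_DENSITY_KG_M3[1])
    return round(thickness, 2), round(density, 1), in_spec

print(check_panel([19.1, 19.0, 18.9, 19.2, 19.0],
                  mat_weight_kg=42.0, length_m=2.44, width_m=1.22))
```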

As a general guide to conveyor selection, Table 21-1 indicates conveyor choices on the basis of some common functions. Table 21-2 is designed to aid in feeder selection on the basis of the physical characteristics of the material to be handled. Table 21-3 is a coded listing of material characteristics to be used with Table 21-4, which describes the conveying qualities of some common materials. While these tables may serve as valuable guides, conveyor selection must be based on the as-conveyed characteristics of a material. For instance, if packing or aerating can occur in the conveyor, the machine's performance will not meet expectations if calculations are based on an average weight per cubic meter. Storage conditions, variations in ambient temperature and humidity, and discharge methods may all affect conveying characteristics. Such factors should be carefully considered before making a final conveyor selection.  [c.1912]

Aside from flow control, basic design considerations have centered on surface overflow rates, retention time, and weir overflow rate. Surface overflow rates have been slowly reduced from 33 m³/(m²·day) [800 gal/(ft²·day)] to 24 m³/(m²·day) to 16 m³/(m²·day) [600 gal/(ft²·day) to 400 gal/(ft²·day)] and even to 12 m³/(m²·day) [300 gal/(ft²·day)] in some instances, based on average raw-waste flows. Operational results have not demonstrated that lower surface overflow rates improve effluent quality, making 33 m³/(m²·day) [800 gal/(ft²·day)] the design choice in most systems. Retention time has been found to be an important design factor, averaging 2 h on the basis of raw-waste flows. Longer retention periods tend to produce rising sludge problems, while shorter retention periods do not provide for good solids separation with high return-sludge flow rates. Effluent-weir overflow rates have been limited to 186 m³/(m·day) [15,000 gal/(ft·day)], with a tendency to reduce the rate to 124 m³/(m·day) [10,000 gal/(ft·day)]. Lower effluent-weir overflow rates are obtained by using dual-sided effluent weirs cantilevered from the periphery of the tank. Unfortunately, proper adjustment of dual-sided effluent weirs has created more hydraulic problems than the weir overflow rate. Field data have shown that effluent quality is not really affected by weir overflow rates up to 990 m³/(m·day) [80,000 gal/(ft·day)] or even 1240 m³/(m·day) [100,000 gal/(ft·day)] in a properly designed sedimentation tank. A single peripheral weir, being easy to adjust and keep clean, appears to be optimal for secondary sedimentation tanks from an operational point of view.  [c.2221]
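A worked sketch of the two loading parameters discussed above: the surface overflow rate (flow divided by tank plan area) and the weir overflow rate (flow divided by weir length), together with the conversion between the metric and U.S. customary units used in the text. The flow and tank dimensions below are hypothetical.

```python
import math

GAL_PER_M3 = 264.17
FT_PER_M = 3.2808

flow_m3_per_day = 7_500.0                    # assumed average raw-waste flow
tank_diameter_m = 20.0                       # assumed circular tank
weir_length_m = math.pi * tank_diameter_m    # single peripheral weir

area_m2 = math.pi * tank_diameter_m ** 2 / 4

sor_metric = flow_m3_per_day / area_m2                 # m3/(m2*day)
sor_us = sor_metric * GAL_PER_M3 / FT_PER_M ** 2       # gal/(ft2*day)

wor_metric = flow_m3_per_day / weir_length_m           # m3/(m*day)
wor_us = wor_metric * GAL_PER_M3 / FT_PER_M            # gal/(ft*day)

print(f"surface overflow rate: {sor_metric:.0f} m3/(m2*day) = {sor_us:.0f} gal/(ft2*day)")
print(f"weir overflow rate:    {wor_metric:.0f} m3/(m*day)  = {wor_us:.0f} gal/(ft*day)")
```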

The protection current density for steel ships depends on the quality of coating, the flow behavior, and the type of components to be protected (see Sections 17.1 and 17.2). For example, a propeller that is assembled with a slip ring requires protection current densities of up to 0.5 A m⁻². Experience has to be relied on for the service behavior of coated surfaces (e.g., possible damage from ice or sand abrasion). Protection current densities are usually a few mA m⁻² for typical ships' coatings. They increase somewhat with time. After a year, average values of between 15 and 20 mA m⁻² can be assumed. It is usual in designing with galvanic anodes for 15 mA m⁻² to include a mass reserve of 20%. For steel merchant ships, impressed current equipment giving 30 mA m⁻² is designed so that it can eventually deliver more current to cope with damage to the coating. This value has to be increased for ice breakers and ice-going ships according to the area and time of travel (e.g., in the Antarctic at least 60 mA m⁻² is required). The additional expenditure is negligible with this system compared with galvanic anodes.  [c.398]
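A hedged sketch of the sizing logic implied above: the total protection current follows from a mean current density of 15 mA m⁻² over the wetted hull area, and the galvanic anode mass for a chosen service life includes the 20% reserve mentioned in the text. The hull area, design life, anode current capacity (a typical aluminium-anode figure), and utilisation factor are assumptions, not values from the text.

```python
wetted_area_m2 = 3_000.0              # assumed wetted hull area
current_density_a_m2 = 0.015          # 15 mA/m2 mean value after about a year
design_life_years = 2.5               # assumed time between dockings
anode_capacity_ah_per_kg = 2_000.0    # assumed practical capacity of an Al alloy anode
utilisation_factor = 0.9              # assumed usable fraction of the anode mass

protection_current_a = wetted_area_m2 * current_density_a_m2
charge_ah = protection_current_a * design_life_years * 365 * 24
anode_mass_kg = charge_ah / (anode_capacity_ah_per_kg * utilisation_factor)
anode_mass_with_reserve_kg = anode_mass_kg * 1.20    # 20% mass reserve from the text

print(f"protection current: {protection_current_a:.0f} A")
print(f"anode mass incl. 20% reserve: {anode_mass_with_reserve_kg:.0f} kg")
```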

Based on these experimental results, one can speculate on the influence of the arc mode on the yield and distribution of the bundles. For the glow discharge, the plasma is continuous, homogeneous, and stable. In other words, the temperature distribution, the electric field which keeps growing tube tips open[47], and the availability of carbon species (atoms, ions, and radicals) are continuous, homogeneous, and stable over the entire central region of the cathode. Accordingly, a high yield and better quality buckytubes should occur over the entire central region of the cathode. These are consistent with what we observed in Fig. 9(a), Fig. 10, and Fig. 11 (a). For the conventional arc discharge, we can speculate that the arc starts at a sharp edge near the point of closest approach, and after vaporizing this region it jumps to what then becomes the next point of closest approach (usually within about a radius of the arc area), and so on. The arc wanders around on the surface of the end of the anode, leading, on the average, to a discontinuous evaporation process and an instability of the electric field. This kind of violent, randomly jumping arc discharge is responsible for the low yield and the low quality of the deposited buckytubes. This is, again, consistent with what we showed in Fig. 9(b) and Fig. 11 (b). Note also from Fig. 11 that carbon nanotubes and nanoparticles coexist in both samples. The coexistence of these two carbonaceous products may suggest that some formation conditions, such as the temperature and the density of the various carbon species, are almost the same for the nanotubes and the  [c.119]

The numbers appended to the PIFs represent numerical assessments of the quality of the PIFs (on a scale of 1 to 9) across all task steps being evaluated. The ratings indicate that there are negative influences of high time stress and high levels of distractions. These are compensated for by good training and moderate (industry average) procedures. Again, in some cases, these ratings could differ for the different tasks. For example, the operator may be highly trained for the types of operations in some tasks but not for others. It should be noted that as some factors increase from 1 to 9, they have a negative effect on performance (time stress and level of distractions), whereas for the other factors, an increase would imply improved performance (quality of procedures and experience/training).  [c.218]
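A small sketch of the rating convention just described: every PIF is scored on a 1 to 9 scale, but for some factors a high score degrades performance while for others it improves performance. Rescaling each factor so that a higher value is always more favourable makes the ratings directly comparable; the factor names and scores loosely mirror the example in the text, while the rescaling itself is an assumption added here.

```python
PIF_RATINGS = {
    "time stress": 8,             # high rating -> negative effect on performance
    "level of distractions": 7,   # high rating -> negative effect on performance
    "quality of procedures": 5,   # moderate (industry average), high -> positive effect
    "experience/training": 8,     # high rating -> positive effect on performance
}
NEGATIVE_DIRECTION = {"time stress", "level of distractions"}

def to_favourable_scale(pif, rating):
    """Return a 1-9 score where 9 is always the most favourable condition."""
    return 10 - rating if pif in NEGATIVE_DIRECTION else rating

for pif, rating in PIF_RATINGS.items():
    print(f"{pif:<25} raw={rating}  favourable-scale={to_favourable_scale(pif, rating)}")
```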

Since AM1 contains more adjustable parameters than MNDO, and since PM3 can be considered a version of AM1 with all the parameters fully optimized, it is expected that the error decreases in the order MNDO > AM1 > PM3. This is indeed what is observed in the above tables. It should be noted, however, that the data in the tables refer to averages; thus for specific compounds or classes of compounds the ordering may be different. Bonds between silicon and iodine with PM3 are an example of a specific case being poorly described, even though the average description over all compounds is better. It is clear that the PM3 method will perform better than AM1 in an average sense, since the two-electron integrals are optimized to give a better fit to the given molecular data set. This does not mean, however, that PM3 will necessarily perform better than AM1 (or MNDO) for properties not included in the training set. Indeed, it has been argued that the AM1 method tends to give more realistic values for atomic charges than PM3, especially for compounds involving nitrogen. An often quoted example is formamide, for which the Mulliken population analysis by different methods is given in Table 3.3. The negative charge on nitrogen produced by PM3 is significantly smaller than that produced by the other methods, but it should be noted that atomic charges are not well-defined quantities, as discussed in Chapter 9. Nevertheless, it may indicate that the electrostatic potential generated by a PM3 wave function is of lower quality than one generated by the AM1 method.  [c.91]

Average airport visibilities over the eastern half of the United States have been determined over a period of approximately 25 years (1948-1974) (6). Although seasonal variations occur, the long-term trend over this period has been a decrease in visual air quality.  [c.148]

Stunning as they are, these averages do not capture the real difference in living standards. Poor countries devote a much smaller share of their total energy consumption to private household and transportation uses. The actual difference in typical direct per capita energy use between the richest and the poorest quarters of mankind is thus closer to fortyfold rather than just twentyfold. This enormous disparity reflects the chronic gap in economic achievement and in the prevailing quality of life and contributes to persistent global political instability.  [c.629]

Under appropriate contrast and high light intensity, the resolution of planar object structures is diffraction limited. Noise in the microscopic system may also be important and may reduce resolution if light levels and/or the contrasts are low. This implies that the illumination of the object has to be optimal and that the contrast of rather transparent or highly reflecting objects has to be enhanced. This can be achieved by an appropriate illumination system, phase- and interference-contrast methods, and/or by data processing if electronic cameras (or light sensors) and processors are available. Last but not least, for low-light images, efforts can be made to reduce the noise either by averaging the data of a multitude of images or by subtracting the noise. Clearly, if the image is inspected by the eye, the number of photons, and hence the noise, are determined by the integration time of the eye of about 1/30 s; the signal/noise ratio can then only be improved, if at all possible, by increasing the light intensity. Hence, electronic data acquisition and processing can be used advantageously to improve image quality, since integration times can be significantly extended and noise suppressed.  [c.1659]
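A sketch of the frame-averaging idea described above: for a shot-noise-limited, low-light image, averaging N independent frames improves the signal-to-noise ratio roughly by a factor of the square root of N. The "scene" and photon counts below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
scene = np.full((64, 64), 5.0)   # mean photon count per pixel per frame (synthetic, low light)

def snr(n_frames):
    """Average n_frames Poisson-noisy frames and return mean/std of the result."""
    frames = rng.poisson(scene, size=(n_frames, *scene.shape)).astype(float)
    averaged = frames.mean(axis=0)
    return averaged.mean() / averaged.std()

for n in (1, 4, 16, 64):
    print(f"{n:3d} frames: SNR ~ {snr(n):5.1f}")
```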

The small I(Ar)2 complex serves as a benchmark system since numerically exact calculation is still possible here. By numerically exact calculation we mean here a 3-dimensional wavepacket propagation using the time-dependent vibrational Schrödinger equation with the fully coupled interaction potential. The experimentally observed vibrationally resolved photoelectron spectrum can be modeled as a Fourier transform of the calculated autocorrelation function, i.e., the overlap of the initial total wavepacket with the wavepacket at time t. At the same time, the complex autocorrelation function is a very sensitive quantity for testing the quality of approximate approaches, since it depends not only on the amplitude but also on the phase of the wavepacket. Fig. 1 depicts the short-time CSP and CI-CSP autocorrelation functions, compared to exact and TDSCF results. First, we note that there is qualitative agreement among all four approaches, indicating that mean-field methods represent a reasonable approximation even though there is no significant separation in mode frequencies in this system. The excellent agreement between CSP and TDSCF demonstrates that only minor errors are introduced by replacing the quantum mean-field integrals by averages over classical trajectories. Finally, inclusion of two-mode correlations significantly improves the autocorrelation function and brings it closer to the exact one.  [c.373]
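A generic sketch (not the CSP calculation itself) of the relation used above: a vibrationally resolved spectrum obtained as the Fourier transform of the wavepacket autocorrelation function C(t) = <psi(0)|psi(t)>. The model C(t) below is a sum of two damped oscillations with arbitrary frequencies; a real calculation would supply C(t) from the propagation.

```python
import numpy as np

dt = 0.5                                  # time step, fs
t = np.arange(0.0, 4000.0, dt)
omegas = (0.010, 0.020)                   # two arbitrary mode frequencies, rad/fs
c_t = sum(0.5 * np.exp(-1j * w * t) for w in omegas) * np.exp(-t / 1000.0)  # damped model C(t)

# S(w) ~ |integral C(t) exp(+i w t) dt|; numpy's ifft uses the e^{+i...} sign convention
spectrum = np.abs(np.fft.ifft(c_t))
freqs = 2.0 * np.pi * np.fft.fftfreq(t.size, d=dt)   # rad/fs

# pick out local maxima above 10% of the global maximum on the positive-frequency side
interior = np.arange(1, t.size - 1)
is_peak = (
    (spectrum[interior] > spectrum[interior - 1])
    & (spectrum[interior] > spectrum[interior + 1])
    & (freqs[interior] > 0)
    & (spectrum[interior] > 0.1 * spectrum.max())
)
print("recovered peak frequencies (rad/fs):", np.round(freqs[interior[is_peak]], 4))
```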

Elevated ground-level ozone exposures affect agricultural crops and trees, especially slow-growing crops and long-lived trees. Ozone damages the leaves and needles of sensitive plants, causing visible alterations such as defoliation and change of leaf color. In North America, tropospheric ozone is believed responsible for about 90% of the damage to plants. Agricultural crops show reduced plant growth and decreased yield. According to the U.S. Office of Technology Assessment (OTA), a 120 µg/m³ seasonal average of seven-hour mean ground-level ozone concentrations is likely to lead to reductions in crop yields in the range of 16 to 35% for cotton, 0.9 to 51% for wheat, 5.3 to 24% for soybeans, and 0.3 to 5.1% for corn. In addition to physiological damage, ground-level ozone may cause reduced resistance to fungi, bacteria, viruses, and insects, reducing growth and inhibiting yield and reproduction. These impacts on sensitive species may result in declines in agricultural crop quality and the reduction of biodiversity in natural ecosystems. The impact of the exposure of plants to ground-level ozone depends not only on the duration and concentration of exposure but also on its frequency, the interval between exposures, the time of day and the season, site-specific conditions, and the developmental stage of the plants. Additionally, ground-level ozone is part of a complex relationship among several air pollutants and other factors such as climatic and meteorological conditions and nutrient balances. For example, the presence of sulfur dioxide may increase the sensitivity of certain plants to leaf injury by ground-level ozone. Also, the presence of ground-level ozone may increase the growth-suppressing effects of nitrogen dioxide.  [c.31]

Conditions for the determination of molecular mass or molecular mass distributions need not be as stringent as for a purity check. Because the molecular size is given by the retention volume (as given by the first moment of the distribution), the concern is that peaks should not overlap so much that they significantly affect this estimate. The flow rate will not affect the retention volume, but it may affect the peak width and thus hide smaller peaks. The net contribution from smaller peaks to the retention volume of larger peaks may be neglected in most cases. Thus, if the sample is relatively pure, the requirements on the operating conditions are modest and the run may be done at high flow rates. However, if the sample is composed of several species, as for a polymer sample, the estimates of mass averages will be affected by the entire distribution and thus by the broadening of the peak (e.g., the number-average molecular mass is heavily weighted by the low molecular mass portion). However, a modest zone broadening is not likely to severely affect the estimates (except for samples of narrow MWDs) and the running conditions are not critical, i.e., running at a velocity of 40 times D/dp should be sufficient (corresponding to 0.8 ml/min for the case cited earlier). For a column of 30 cm length, this will result in a separation time of 30 min. This time may be seen as the maximum required to give high-quality information. It may be further reduced if only a rough estimate of the size and size distribution is the goal, and separation times as low as a few seconds have been reported. However, at these extreme eluent velocities the risk of polymer degradation by shear forces must be taken into account.  [c.71]
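A sketch of the mass averages mentioned above, computed from a discretized molecular mass distribution: the number-average Mn = sum(n_i*M_i)/sum(n_i) is pulled down by the low-mass portion, while the weight-average Mw = sum(n_i*M_i^2)/sum(n_i*M_i) emphasizes the high-mass portion. The distribution below is synthetic.

```python
masses = [2_000, 5_000, 10_000, 20_000, 50_000, 100_000]      # molar masses, g/mol
number_fractions = [0.30, 0.25, 0.20, 0.15, 0.07, 0.03]       # synthetic number fractions

first_moment = sum(n * m for n, m in zip(number_fractions, masses))
mn = first_moment / sum(number_fractions)                                   # number average
mw = sum(n * m ** 2 for n, m in zip(number_fractions, masses)) / first_moment  # weight average

print(f"Mn = {mn:,.0f} g/mol, Mw = {mw:,.0f} g/mol, PDI = {mw / mn:.2f}")
```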

Looking back a century in the United States, when draft horses outnumbered automobiles and police enforced a speed limit of 10 mph on cars in some locales, the individual improvements that have since accrued to the automotive vehicle have transformed it into a necessity in the lives of many Americans. Gains in both performance and fuel economy have contributed importantly to its popularity. In the last quarter of the twentieth century, during which fuel economy regulation began in the United States, average new-car fuel economy has increased 80 percent at the same time that performance capability, as measured by acceleration time to 60 mph, has improved 25 percent. Both have benefited most significantly from a reduction in the weight of the average car, but over this time period the summation of effects from myriad other individual improvements has contributed even more to fuel economy. These individual improvements can be grouped into factors influencing engine efficiency, the efficiency of the drivetrain connecting the engine to the drive wheels of the vehicle, the rolling resistance of the tires, vehicle aerodynamic drag, and the power consumed by accessories either required for engine operation or desired for passenger comfort and convenience. Engine improvements can be further classified as to whether they improve the quality and control of the mixture inducted into the cylinders, increase the air capacity of the engine, improve the efficiency of the energy conversion process associated with combustion, or reduce the  [c.108]

Per capita transportation energy consumption in a city such as New York is much lower than the U.S. average as a result of much lower car use. Although high-quality public transit service is one explanation for this result, parking costs, bridge and tunnel tolls, and the convenience of walking are equally important. Public transit is vital for the transportation needs of New York City residents. But many of the trips that residents of the typical U.S. metropolitan area take by auto are taken by New Yorkers on foot, or not at all. It is the lower number of auto trips, rather than a one-for-one substitution of transit trips for auto trips, that has a large impact on reducing energy use. Transit service in New York City is well used, making it energy-efficient despite the slow speed of surface transit travel.  [c.768]


See pages that mention the term Air Quality Averaging Time : [c.1607]    [c.393]    [c.1544]    [c.310]    [c.597]    [c.156]    [c.40]    [c.384]   
See chapters in:

Fundamentals of air pollution  -> Air Quality Averaging Time