
Industrial data assumptions

Interested readers can refer to the original paper by Croston (1972). Willemain et al. (1994) carried out a comparative evaluation of Croston's method for forecasting intermittent demand in manufacturing using industrial data. They concluded that Croston's method was superior to the exponential smoothing method and was robust even in situations where Croston's model assumptions were violated. [Pg.82]
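The mechanics of Croston's method are simple enough to sketch: non-zero demand sizes and the intervals between them are smoothed separately, and their ratio gives the forecast demand per period. The Python sketch below is a minimal illustration of that idea; the single smoothing constant `alpha` and the initialisation are chosen for simplicity rather than taken from Croston (1972) or Willemain et al. (1994).

```python
# Minimal sketch of Croston's (1972) method for intermittent demand.
def croston_forecast(demand, alpha=0.1):
    """Return a one-step-ahead forecast of mean demand per period.

    demand : sequence of per-period demands, mostly zeros (intermittent).
    alpha  : exponential smoothing constant (0 < alpha < 1), assumed common
             to both the size and interval updates.
    """
    z_hat = None                       # smoothed size of non-zero demands
    p_hat = None                       # smoothed interval between non-zero demands
    periods_since_demand = 1

    for d in demand:
        if d > 0:
            if z_hat is None:          # initialise on the first non-zero demand
                z_hat, p_hat = d, periods_since_demand
            else:                      # update only when a demand occurs
                z_hat += alpha * (d - z_hat)
                p_hat += alpha * (periods_since_demand - p_hat)
            periods_since_demand = 1
        else:
            periods_since_demand += 1  # estimates stay unchanged in zero periods

    if z_hat is None:
        return 0.0
    return z_hat / p_hat               # forecast demand per period


if __name__ == "__main__":
    history = [0, 0, 5, 0, 0, 0, 3, 0, 4, 0, 0, 6]
    print(f"Croston forecast: {croston_forecast(history):.3f} units/period")
```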

Disturbance to Feed Flow Rate. Since the flow rate of the feed at the first steady state approaches the flow rate of the second steady state, the results of the dynamic simulations approach the industrial data. The reason for these deviations is, again, the misleading assumption of a constant overall heat transfer coefficient during the transient state. For example, for a feed flow rate of 19 gmole/s, the estimated overall heat transfer coefficient at steady state is 0.012 cal/(s·cm²·°C). However, for a feed flow rate of 34.3 gmole/s (industrial data), it is around 0.015 cal/(s·cm²·°C)... [Pg.791]
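To see why a constant overall heat transfer coefficient can be a misleading assumption, one can back-calculate U from an energy balance, U = Q / (A·ΔT_lm), at each operating point. The sketch below does this for two hypothetical operating points; the heat duties, area, and terminal temperature differences are illustrative placeholders, not values from the simulation or the industrial data cited above.

```python
from math import log

def overall_U(Q, area, dT_hot_end, dT_cold_end):
    """Back-calculate the overall heat transfer coefficient U = Q / (A * dT_lm)."""
    dT_lm = (dT_hot_end - dT_cold_end) / log(dT_hot_end / dT_cold_end)
    return Q / (area * dT_lm)

# Hypothetical numbers chosen only to show that U shifts with operating point;
# they are not the values from the study cited above.
print(overall_U(Q=9.0e3, area=2.0e4, dT_hot_end=60.0, dT_cold_end=25.0))   # cal/(s·cm²·°C)
print(overall_U(Q=1.3e4, area=2.0e4, dT_hot_end=62.0, dT_cold_end=28.0))
```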

Cross-comparing the risks of various activities is difficult because of the lack of a common basis of comparison; however, Cohen and Lee (1979) provide such a comparison on the basis of loss of life expectancy. Solomon and Abraham (1979) used an index of harm in a study of six occupational harms (three radiological and three nonradiological) to bracket high and low estimates of radiological effects. The index of harm consists of a weighting factor for parametric study, the lost time in an industry, and the worker population at risk. The conclusion was that the data are too imprecise for firm conclusions, but that a radiation worker, under pessimistic health-effects assumptions, could have as high an index of harm as workers in the other industries compared. [Pg.13]
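One plausible way to read the composition of such an index (a weighting factor, the lost time in an industry, and the worker population at risk) is as a simple product, sketched below. The functional form and all numbers are hypothetical illustrations, not the actual index of Solomon and Abraham (1979).

```python
def index_of_harm(weight, lost_days_per_worker_year, workers_at_risk):
    # Hypothetical combination: weighting factor times total lost time in the industry.
    return weight * lost_days_per_worker_year * workers_at_risk

# Illustrative only: a non-radiological occupation versus a radiological one
# under a pessimistic weighting of radiation health effects.
print(index_of_harm(weight=1.0, lost_days_per_worker_year=0.5, workers_at_risk=50_000))
print(index_of_harm(weight=2.5, lost_days_per_worker_year=0.2, workers_at_risk=50_000))
```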

The optimization of empirical correlations developed from the ASPEN-PLUS model yielded operating conditions that reduced the steam-to-slurry ratio by 33% and increased throughput by 20% while maintaining the solvent residual at the desired level. While very successful in this industrial application, the approach is not without shortcomings. The main disadvantage is the inherent assumption that the data are normally distributed, which may or may not be valid. However, previous experience had shown the efficacy of the assumption in other, similar situations. [Pg.106]
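Because the main caveat is the normality assumption, it is worth screening the data (or the residuals of the fitted correlations) before trusting the optimized conditions. The sketch below uses a Shapiro-Wilk test as one such screen; the choice of test and the synthetic residuals are assumptions for illustration, not part of the original study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=1.0, size=80)   # stand-in for correlation residuals

stat, p_value = stats.shapiro(residuals)               # Shapiro-Wilk test of normality
print(f"W = {stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Normality assumption looks doubtful; treat the optimization with caution.")
else:
    print("No evidence against normality at the 5% level.")
```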

On the basis of different assumptions about the nature of the fluid and solid flow within each phase and between phases, as well as about the extent of mixing within each phase, it is possible to develop many different mathematical models of the two-phase type. Pyle (119), Rowe (120), and Grace (121) have critically reviewed models of these types. Treatment of these models is clearly beyond the scope of this text. In many cases insufficient data exist to provide critical tests of model validity. This situation is especially true of large-scale reactors, which are the systems of greatest interest from industry's point of view. The student should understand, however, that there is an ongoing effort to develop mathematical models of fluidized bed reactors that will be useful for design purposes. Our current... [Pg.522]

The values of kLa for CO2 desorption in a stirred-tank fermentor, calculated from the experimental data on physically dissolved CO2 concentration (obtained by the above-mentioned method) and the CO2 partial pressure in the gas phase, agreed well with the kLa values estimated from the kLa for O2 absorption in the same fermentor, but corrected for any differences in the liquid-phase diffusivities [11]. Perfect mixing in the liquid phase can be assumed when calculating the mean driving potential. In the case of large industrial fermentors, it can practically be assumed that the CO2 partial pressure in the exit gas is in equilibrium with the concentration of CO2 that is physically dissolved in the broth. The assumption of either a plug flow or perfect mixing in the gas phase does not have any major effect... [Pg.203]
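A minimal sketch of the two calculations implied above follows: (i) kLa for CO2 desorption from a measured transfer rate and a driving force evaluated assuming a perfectly mixed liquid phase, and (ii) an estimate of the CO2 kLa from the measured O2 kLa corrected for the liquid-phase diffusivity difference. The square-root exponent in the correction follows penetration theory and, like the numerical values, is an assumption for illustration.

```python
def kla_from_rate(transfer_rate, c_liquid, c_star):
    """k_L a from N = k_L a * (C - C*), where C is the (perfectly mixed) bulk
    liquid CO2 concentration and C* is the concentration in equilibrium with
    the exit-gas CO2 partial pressure (desorption driving force)."""
    return transfer_rate / (c_liquid - c_star)

def kla_co2_from_o2(kla_o2, D_co2, D_o2, exponent=0.5):
    """Correct the measured O2 k_L a for the CO2/O2 diffusivity ratio.
    exponent = 0.5 (penetration theory) is an assumption; film theory would use 1.0."""
    return kla_o2 * (D_co2 / D_o2) ** exponent

# Illustrative numbers only, not from the cited experiments:
print(kla_from_rate(transfer_rate=2.0e-4, c_liquid=8.0e-3, c_star=6.0e-3))  # 1/s
print(kla_co2_from_o2(kla_o2=0.12, D_co2=1.9e-9, D_o2=2.1e-9))              # 1/s
```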

It is commonly accepted in the tire industry that about one tire per person per year is discarded. Since there is no industry group or governmental agency that monitors tire disposal in the United States, the best estimates that can be made are based on tire production. The Rubber Manufacturers Association (RMA) records the number of original equipment, replacement, and export tires that are shipped each year in the United States. (See Table 3.) In 1990, a total of 264,262,000 tires were shipped. The RMA data include new tire imports, but not imported used tires. To estimate the number of tires that were discarded in the United States in 1990, the following assumptions were made ... [Pg.22]

The USEPA OPPT cannot design training sets, nor can it measure the toxicity of industrial chemicals directly. The TSCA prescribes that the chemical industry test chemicals for toxicity; thus, the OPPT is dependent on whatever toxicity data are submitted to the USEPA under the TSCA. The OPPT could design a training set for a (Q)SAR such as fish acute toxicity for aromatic diazoniums, but it does not have the ability to get the chemicals in the training set tested. Thus, some (Q)SARs used by the OPPT have training sets composed of two data points, one data point, or no data at all: just assumptions about the intercept, the slope, and the log Kow at which no toxic effects will occur at saturation. [Pg.81]
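The "intercept, slope, and log Kow" assumptions amount to something like the minimal (Q)SAR sketched below: a log-linear relation between log Kow and toxicity, truncated at a log Kow beyond which no effect is predicted at saturation. The log-linear form is the conventional narcosis-style one; the coefficients, the cutoff, and its interpretation are illustrative assumptions, not the OPPT's actual values.

```python
def predicted_log_inverse_lc50(log_kow, slope=0.9, intercept=1.7, log_kow_cutoff=6.0):
    """Minimal (Q)SAR sketch: log(1/LC50) = slope * logKow + intercept.
    Above the assumed logKow cutoff, solubility is taken to limit exposure, so
    no toxic effect is predicted at saturation (returns None).
    Slope, intercept, and cutoff are illustrative placeholders."""
    if log_kow > log_kow_cutoff:
        return None
    return slope * log_kow + intercept

for lk in (1.5, 4.0, 7.2):
    print(lk, predicted_log_inverse_lc50(lk))
```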

Other work has been mainly concerned with scale-up to pilot-plant or full-scale installations. For example, Beltran et al. [225] studied the scale-up of the ozonation of industrial wastewaters from alcohol distilleries and tomato-processing plants. They used kinetic data obtained in small laboratory bubble columns to predict the COD reduction that could be reached during ozonation in a geometrically similar pilot bubble column. In the kinetic model, assumptions were made about the flow characteristics of the gas phase through the column. From the solution of the mass balance equations for the main species in the process (ozone in the gas and water phases, and pollution characterized by COD), calculated COD and ozone concentrations were determined and compared with the corresponding experimental values. [Pg.63]
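A much-reduced sketch of the kind of mass balance solved in such a model is given below: a perfectly mixed, semi-batch liquid phase in which dissolved ozone is supplied by gas-liquid transfer and consumed by reaction with the pollution load (COD). The rate law, stoichiometric factor, and parameter values are assumptions for illustration; the gas-phase flow assumptions of Beltran et al. [225] are not reproduced here.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters (not taken from the cited study)
kLa   = 0.01     # ozone volumetric transfer coefficient, 1/s
c_sat = 10.0     # dissolved-ozone saturation concentration, mg/L
k     = 2.0e-3   # assumed second-order rate constant, L/(mg·s)
z     = 1.5      # mg COD removed per mg O3 consumed (assumed stoichiometry)

def balances(t, y):
    """Semi-batch liquid-phase balances for dissolved ozone and COD."""
    c_o3, cod = y
    r = k * c_o3 * cod                  # ozone-COD reaction rate, mg/(L·s)
    dc_o3 = kLa * (c_sat - c_o3) - r    # absorption minus consumption
    dcod = -z * r                       # COD destruction
    return [dc_o3, dcod]

sol = solve_ivp(balances, (0.0, 3600.0), [0.0, 500.0])   # 1 h of ozonation
c_o3_end, cod_end = sol.y[:, -1]
print(f"After 1 h: dissolved O3 = {c_o3_end:.2f} mg/L, COD = {cod_end:.1f} mg/L")
```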

Also, MCDA allows the costs and practicality of meeting a standard to be accommodated in the final decision. This can be achieved by identifying the technological options for mitigating exposure, each of which would be associated with a different standard. These could include not only a "do nothing" option but also the application of different technologies, or assumptions about the benefits that would follow from adopting best practice in some or all industry sectors. This may require stakeholder input to help focus attention on the most feasible abatement options. A preliminary analysis may usefully be shared with stakeholders so that they have an opportunity to comment and provide further information (e.g., to refine assumptions or to prompt data collection that reduces uncertainty). [Pg.24]
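A minimal weighted-sum sketch of how MCDA could rank such abatement options against criteria like cost, practicality, and health benefit is shown below. The weighted-sum aggregation, the criteria, and all scores and weights are illustrative assumptions; a real application would use stakeholder-elicited values and often a more elaborate aggregation rule.

```python
# Criteria weights (assumed; would normally come from stakeholder elicitation)
weights = {"cost": 0.3, "practicality": 0.3, "health_benefit": 0.4}

# Option scores on a common 0-10 scale (higher is better; values are illustrative)
options = {
    "do nothing":         {"cost": 10, "practicality": 10, "health_benefit": 0},
    "best practice":      {"cost": 6,  "practicality": 7,  "health_benefit": 6},
    "new abatement tech": {"cost": 3,  "practicality": 5,  "health_benefit": 9},
}

def weighted_score(scores):
    """Weighted-sum aggregation of one option's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name:20s} {weighted_score(scores):.2f}")
```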

