
Case analysis

Fouling layers are, in general, compressible; that is, they become more compact as the compressive load on them increases. According to basic filtration theory [46], the solid compressive pressure is responsible for the compression of a fouling layer. In traditional filtration theory, the derivation of the drag equations of filtration for rigid-particle slurries assumes that the particles are in point contact and that compression occurs instantaneously. Under this assumption, a force balance can be obtained between the liquid pressure over the entire cross-section and the solid compressive pressure on the total mass within the porous layer as  [Pg.345]
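The equation itself is not reproduced in this excerpt, but in the standard notation of compressible-cake filtration theory (symbols assumed here) the force balance takes the form

$$ p_L(x) + p_s(x) = \Delta p \qquad \Longrightarrow \qquad \mathrm{d}p_s = -\,\mathrm{d}p_L , $$

where $p_L$ is the local hydraulic (liquid) pressure at depth $x$ in the layer, $p_s$ is the solid compressive pressure carried by the particle network at that depth, and $\Delta p$ is the total applied pressure drop.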

The ratio of the pressure drop over the full fouling layer to the average specific resistance is regarded as equal to the integral of the differential solid compressive pressure divided by the local specific resistance  [Pg.348]
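Again the equation is omitted from the excerpt, but this statement corresponds to the standard definition of the pressure-averaged specific resistance (notation assumed):

$$ \frac{\Delta p_c}{\alpha_{\mathrm{av}}} \;=\; \int_0^{\Delta p_c} \frac{\mathrm{d}p_s}{\alpha} , $$

where $\Delta p_c$ is the pressure drop across the fouling layer, $\alpha$ the local specific resistance, and $\alpha_{\mathrm{av}}$ its average over the layer.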

Substituting part (a) of Equation (15.12) into Equation (15.13) and integrating gives [Pg.348]

Equation (15.14) is commonly used in filtration for compressible cakes. [Pg.348]
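Equations (15.12)-(15.14) are not reproduced here, but the commonly used result follows if Equation (15.12a) is taken to be the usual empirical power law $\alpha = \alpha_0\, p_s^{\,n}$ (an assumption on our part): substituting into the integral above and integrating gives

$$ \alpha_{\mathrm{av}} \;=\; \Delta p_c \left[ \int_0^{\Delta p_c} \frac{\mathrm{d}p_s}{\alpha_0\, p_s^{\,n}} \right]^{-1} \;=\; (1-n)\,\alpha_0\,\Delta p_c^{\,n} , $$

which is the classical compressible-cake expression and is presumably the content of Equation (15.14).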

This dynamic analysis procedure enables us to obtain these basic data by knowing the surface porosity of a fouling layer and the real-time variation in the fouling layer thickness. In summary, the adopted dynamic procedure proves to be a useful tool not only for designing a membrane filtration system but also for predicting or monitoring system performance during plant operation. [Pg.348]


The first results of applying the computer-based assessment system show that the benefits are obvious for repaired welds (without heat treatment) and for complex defect configurations: defects with locally increased height, groups of defects, and case analysis of defect interference and possible joining. [Pg.197]

A wide variety of particle size measurement methods have evolved to meet the almost endless variability of industrial needs. For instance, distinct technologies are required if in situ analysis is needed, as opposed to sampling and performing the measurement at a later time and/or in a different location. In certain cases it is necessary to perform the measurement in real time, such as in an on-line application when size information is used for process control (qv); in other cases, analysis following the completion of the finished product is satisfactory. Some methods rapidly count and measure particles individually; other methods measure numerous particles simultaneously. Some methods have been developed or adapted to measure the size distribution of dry or airborne particles, or of particles dispersed in liquids. [Pg.130]

Stress calculations, fault tree analysis, failure modes analysis, and worst case analysis... [Pg.250]

A second approach has been to use an unsymmetrical initiator which allows the two radicals of interest to be generated simultaneously in equimolar amounts [175]. In this case, analysis of the cage recombination products provides information on cross termination uncomplicated by homotermination. Analysis of products of the encounter reaction can also give information on the relative importance of cross and homotermination. However, copolymerization of unsaturated products can cause severe analytical problems. [Pg.371]

The data in Table I are also significant in terms of the type of analysis used to determine the presence of NDMA. In all cases, analysis was done using gas chromatography coupled with a Thermal Energy Analyzer, a sensitive and relatively specific nitrosamine detector (12). Further, in six of the studies, the presence of NDMA in several samples was confirmed by gas chromatography-mass spectrometry (GC-MS). The mass spectral data firmly established the presence of NDMA in the beer samples. [Pg.231]

Obviously, use of such databases often fails in the case of interaction between additives. As an example we mention additive/antistat interaction in PP, as observed by Dieckmann et al. [166]. In this case, analysis and performance data demonstrate chemical interaction between glycerol esters and acid neutralisers. This phenomenon is pronounced when the additive is a strong base, like synthetic hydrotalcite, or a metal carboxylate. Similar problems may arise after ageing of a polymer. A common request in a technical support analytical laboratory is to analyse the additives in a sample that has prematurely failed in an exposure test, when at best an unexposed control sample is available. Under some circumstances, heat or light exposure may have transformed the additive into other products. Reaction product identification then usually requires a general library of their spectroscopic or mass spectrometric profiles. For example, Bell et al. [167] have focused attention on the degradation of light stabilisers and antioxidants... [Pg.21]

Principles and Characteristics In some cases, analysis using an appropriate combination of a single separation and detection method is not satisfactory, and it becomes necessary to utilise a combination of separation methods and/or multidetector monitoring. This approach is termed multidimensional, or coupled, chromatography and describes a specific sequential combination of chromatographic procedures. [Pg.545]

Worst-case analysis based on the DSC data, namely the test with the lowest onset temperature, resulted in a graph showing the relationship between initial temperature and time-to-maximum rate under adiabatic conditions. For an initial temperature of 170°C, it would take 2 hours to reach the maximum rate. Venting simulation tests were undertaken on a larger scale to determine safe venting requirements for the separator and for the MNB hold tank. Several vent sizes were tested. It was found that a 10-cm rupture disc with a burst pressure 1 bar above the operating pressure was adequate. [Pg.152]
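To illustrate the kind of initial-temperature/time-to-maximum-rate relationship described above, the sketch below evaluates the standard Townsend-Tou zero-order adiabatic estimate, TMR_ad = c_p R T0^2 / (q0 E_a). All parameter values are assumed for illustration only and are not the measured values from this study; the heat-release rate is chosen so the estimate reproduces roughly 2 hours at 170°C, as quoted in the text.

```python
import numpy as np

R = 8.314        # gas constant, J/(mol K)
EA = 120e3       # activation energy, J/mol (assumed)
CP = 2000.0      # specific heat capacity, J/(kg K) (assumed)
T_REF = 443.15   # 170 degC expressed in K
Q_REF = 4.0      # heat release rate at T_REF, W/kg (assumed; picked so
                 # TMR is about 2 h at 170 degC, matching the text)

def tmr_adiabatic(t0):
    """Townsend-Tou zero-order estimate of time-to-maximum-rate, seconds."""
    q0 = Q_REF * np.exp(-(EA / R) * (1.0 / t0 - 1.0 / T_REF))  # Arrhenius scaling
    return CP * R * t0**2 / (q0 * EA)

for celsius in (150, 160, 170, 180):
    t0 = celsius + 273.15
    print(f"T0 = {celsius} degC -> TMR ~ {tmr_adiabatic(t0) / 3600:.1f} h")
```

Lower initial temperatures give rapidly lengthening times-to-maximum-rate, which is exactly the kind of graph the worst-case DSC analysis provides.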

Often a term for an inert substance may be required in the rate equation. Also, one or more of the other terms can be left out, thus giving rise to another rate equation for analysis. For instance, hydrogen, although a reaction participant, often is only slightly adsorbed. In such cases, analysis with the complete denominator will not necessarily give zero for the adsorption constant of the otherwise omittable substance. One of the cases may be preferable statistically. [Pg.654]
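For concreteness, a representative Langmuir-Hinshelwood-type rate equation of the kind discussed here (the generic textbook form; the specific equation in the source is not reproduced) is

$$ r \;=\; \frac{k\,K_A p_A\,K_B p_B}{\bigl(1 + K_A p_A + K_B p_B + K_H p_H + K_I p_I\bigr)^{2}} , $$

where the denominator carries one adsorption term per adsorbed species, including the inert $I$. Dropping the $K_H p_H$ term when hydrogen is only slightly adsorbed yields the alternative rate equation referred to in the text, and fitting both forms shows which is statistically preferable.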

There is not usually a clear order in which to perform context analysis ("use-case analysis") and system behavioral specification. There is some element of negotiation between... [Pg.608]

Index entries (volume and page):
Extrapolating properties, defined, 16 729
Extra spring copper alloys, 7 723t
Extreme ambient conditions, lubrication and, 15 252-256
Extreme-case analysis, 9 547
Extreme environments, solid and liquid lubricants for, 15 256
Extremely low toxic substances, 23 113
Extreme pressure (EP) lubrication regime, 15 214. See also EP entries
Extreme purity gases, analyses of, 13 468
Extreme ultraviolet lithography, 15 189-191 [Pg.343]

Follow-up on medication errors, including knowledge of routines, incident reporting, root cause analysis, etc. [Pg.93]

The (8×2)-TiOx film can also be synthesized by the stepwise direct deposition of Ti onto an oxygen-covered Mo(112) surface followed by subsequent oxidation-annealing cycles. However, the quality and reproducibility are not comparable with growth on the SiO2 film. In either case, analysis of the HREELS and XPS results indicates that the oxidation state of the Ti is probably +3. This reduced Ti state is apparently responsible for the ability of Au and other metals to wet the surface. [Pg.349]

The key word in any case is representative. A laboratory analysis sample must be representative of the whole, so that the final result of the chemical analysis reflects the entire system it is intended to describe. If there are variations in composition, such as with the coal example above, or at least suspected variations, small samples must be taken from all suspect locations. If results for the entire system are to be reported, these small samples are then mixed and made homogeneous to give the final sample to be tested. Such a sample is called a composite sample. In some cases, analysis of the individual samples may be more appropriate; such samples are called selective samples. [Pg.19]

Hazard assessment: worst-case analysis; five-year accident history. [Pg.76]

Generally, activities of the catalysts are expressed in terms of k, but in some cases analysis of the values will also be used. [Pg.225]

The crudest form of bounding analysis is just interval arithmetic (Moore 1966; Neumaier 1990). In this approach the uncertainty about each quantity is reduced to a pair of numbers, an upper bound and a lower bound, that circumscribe all the possible values of the quantity. In the analysis, these numbers are combined in such a way as to obtain sure bounds on the resulting value. Formally, this is equivalent to a worst case analysis (which tries to do the same thing with only one extreme value per quantity). The limitations of such analyses are well known. Both interval arithmetic and any simple worst case analysis... [Pg.90]
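A minimal sketch of the idea in Python: each uncertain quantity is reduced to a [lower, upper] pair, and arithmetic on the pairs yields guaranteed bounds on the result. The quantities and values below are hypothetical, purely for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Endpoints add directly.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst cases pair opposite endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Sign changes mean any endpoint pairing can be extreme.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

# Two uncertain quantities, each reduced to sure lower/upper bounds:
exposure = Interval(1.0, 2.0)   # hypothetical units
potency = Interval(0.1, 0.3)    # hypothetical effect per unit exposure

risk = exposure * potency
print(risk)   # Interval(lo=0.1, hi=0.6): sure bounds on the result
```

The output interval is guaranteed to contain the true value whenever the input bounds do, which is the "sure bounds" property the text describes; the price is that the bounds can be very wide.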

Probability theory is, of course, designed precisely to estimate these chances. Because of this, probabilistic assessment is regarded by many as the heir apparent to worst case analysis. However, traditional applications of probability theory also have some severe limitations. As it is used in risk assessments today, probability theory... [Pg.91]

One very simplistic way of handling missing data is to remove those patients with missing data from the analysis, in a complete cases analysis or completers analysis. By definition this will be a per-protocol analysis, which omits all patients who do not provide a measure on the primary endpoint and will of course be subject to bias. Such an analysis may well be acceptable in an exploratory setting, where we may be looking to get some idea of the treatment effect if every subject were to follow the protocol perfectly, but it would not be acceptable as a primary analysis in a confirmatory setting. [Pg.119]
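As a minimal illustration with hypothetical data (using pandas), a complete cases analysis simply drops every patient whose primary endpoint is missing before comparing treatments:

```python
import pandas as pd

# Hypothetical trial data; 'endpoint' is the primary endpoint measure.
df = pd.DataFrame({
    "patient": [1, 2, 3, 4, 5, 6],
    "treatment": ["A", "A", "A", "B", "B", "B"],
    "endpoint": [5.2, None, 4.9, 4.8, None, 6.1],
})

# Completers analysis: omit any patient with a missing primary endpoint.
completers = df.dropna(subset=["endpoint"])
print(completers.groupby("treatment")["endpoint"].mean())
```

The bias the text warns about arises because the dropped patients are rarely a random subset: drop-out is often related to treatment or outcome.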

In a retrospective case analysis, fluoxetine (20 to 80 mg daily) and paroxetine (20 to 40 mg daily) were found to be effective in approximately one-quarter of adults (mean age, 39 years) with intellectual disability and autistic traits (Branford et al., 1998). The sample consisted of all intellectually disabled subjects who had been treated with a SSRI over a 5-year period within a health-care service in Great Britain. The mean duration of treatment was 13 months. Target symptoms were perseverative behaviors, aggression, and self-injurious behavior. Six of 25 subjects treated with fluoxetine and 3 of 12 subjects given paroxetine were rated as much improved or very much improved on the CGI. [Pg.571]

Paroxetine. Only a few reports, none of them controlled, have appeared on the use of paroxetine in autistic disorder. Paroxetine at 20 mg/day decreased self-injurious behavior in a 15-year-old boy with high-functioning autistic disorder (Snead et al., 1994). In another report, paroxetine's effectiveness for a broader range of symptoms, including irritability, temper tantrums, and interfering preoccupations, was reported in a 7-year-old boy with autistic disorder (Posey et al., 1999). The optimal dose of paroxetine was 10 mg daily; an increase to 15 mg/day was associated with agitation and insomnia. As described earlier, a retrospective case analysis found paroxetine to be effective in approximately 25% of adults with PDD NOS (Branford et al., 1998). [Pg.571]

The Monte Carlo analyses are used to observe how device tolerances can affect a design. Two analyses can be performed. The Worst Case analysis is used to find the maximum or minimum value of a parameter given device tolerances: device tolerances are varied to their maximum or minimum limits such that the maximum or minimum of the specified parameter is found. The Monte Carlo analysis is used to find production yield. If the Worst Case analysis shows that not all designs will pass a specific criterion, the Monte Carlo analysis can be used to estimate what percentage will pass. The Monte Carlo analysis varies device parameters randomly within the specified tolerance, simulates the circuit using the random values, and observes a specified output. [Pg.504]
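Outside PSpice, the same two ideas can be sketched in a few lines. The snippet below is a hypothetical example, not the circuit simulated in the text: it draws resistor values within a 5% tolerance, simulates an equal-resistor voltage divider, and reports both the observed extreme gains (worst-case style) and the production yield against an assumed pass criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                 # number of simulated production units

# Hypothetical circuit: voltage divider, gain = R2 / (R1 + R2).
R_NOM = 1_000.0             # nominal resistance, ohms
TOL = 0.05                  # 5% tolerance, modelled here as uniform
r1 = R_NOM * (1 + rng.uniform(-TOL, TOL, N))
r2 = R_NOM * (1 + rng.uniform(-TOL, TOL, N))

gain = r2 / (r1 + r2)

# Extremes observed across the sample (Monte Carlo stand-in for worst case):
print(f"min gain {gain.min():.5f}, max gain {gain.max():.5f}")

# Production yield against an assumed criterion: gain within 0.5 +/- 0.01.
passed = np.abs(gain - 0.5) <= 0.01
print(f"estimated yield: {passed.mean():.1%}")
```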

We must also set up the Worst Case analysis. Click the LEFT mouse button on the Monte Carlo/Worst Case button. This will enable the analysis and display its settings ... [Pg.506]

The results of the Worst Case analysis are saved in the output file. Select PSpice and then View Output File from the Capture menu bar. The results are given at the bottom of the output file. [Pg.508]

The simulation says that the maximum value is 0.5188, which is less than the expected value. Remember that for the resistor with the 5% Gaussian distribution, the standard deviation was 1.25%, and the absolute limits on the distribution were 4σ = 5%. In the Worst Case analysis, a device with a Gaussian distribution is varied by only 3σ. Had we calculated the maximum value with a 3.75% resistor variation, we would have come up with a maximum gain of 0.51875, which agrees with the PSpice result. To obtain the worst case limits, I prefer to use the uniform distribution. Type CTRL-F4 to close the output file and display the schematic. [Pg.509]
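The 0.51875 figure is consistent with an equal-resistor voltage divider of nominal gain 0.5 (an assumption here, since the schematic is not shown in this excerpt). Varying the two resistors by 3σ = 3 × 1.25% = 3.75% in opposite directions gives

$$ G_{\max} \;=\; \frac{R(1+\delta)}{R(1-\delta) + R(1+\delta)} \;=\; \frac{1.0375}{2} \;=\; 0.51875, \qquad \delta = 0.0375 . $$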

Change both resistors to R5pcnt and then run PSpice. The results will again be stored in the output file. At the end of the output file you will see the results of the Worst Case analysis ... [Pg.510]

