Big Chemical Encyclopedia


Hardware and Performance

Moskau and Zerbe [262], in a general review of cryoprobe hardware and performance, also detailed ways of quickly recognizing component malfunction in cryogenic probe systems, which from the author's own experience is an area from which others might benefit. [Pg.87]

There are many reports that evaluate the performance of the nonliquid reagent systems; however, owing to proprietary constraints, there are relatively few articles that describe the actual preparation or detail the characteristics of these analytical devices. This veil of secrecy is particularly true of the Drichem (82) and Konica nonliquid reagent systems; information on these systems is confined almost exclusively to the patent literature. Very few clinical or performance evaluations have been reported. Therefore, these systems will not be dealt with in this overview. The OPUS (83) and the Stratus II (84) Immunoassay Systems are also nonliquid reagent systems that deal with the immunoassays of therapeutic drugs, fertility hormones, thyroid function, and other metabolites. The Stratus II can also be used to measure CK-MB as the protein rather than the enzyme. An excellent review by Chan (85) describes the hardware and performance of the OPUS, Stratus, and other such systems. [Pg.167]

KAPL's initial work in Experimental Engineering was focused on testing an open-cycle Capstone C30 Microturbine unit. The use of a commercially available, open-cycle Capstone turbine provided a quick and cost-effective approach to obtaining a basic understanding of Brayton hardware and performance characteristics. The C30 was the same model modified for use in Sandia's closed Capstone loop and also the same model that was being modified for Bettis dual, closed loop... [Pg.799]

Molecular modelling used to be restricted to a small number of scientists who had access to the necessary computer hardware and software. Its practitioners wrote their own programs, managed their own computer systems and mended them when they broke down. Today's computer workstations are much more powerful than the mainframe computers of even a few years ago and can be purchased relatively cheaply. It is no longer necessary for the modeller to write computer programs as software can be obtained from commercial software companies and academic laboratories. Molecular modelling can now be performed in any laboratory or classroom. [Pg.13]

D. F. Feller, MSRC Ab Initio Methods Benchmark Suite—A Measurement of Hardware and Software Performance in the Area of Electronic Structure Methods, Battelle Pacific Northwest Labs, Richland, WA (1993). [Pg.133]

Design considerations and costs of the catalyst, hardware, and a fume control system are directly proportional to the oven exhaust volume. The size of the catalyst bed often ranges from 1.0 m³ per 1000 m³/min of exhaust (at 0°C and 101 kPa) to 2 m³ per 1000 m³/min of exhaust. Catalyst performance at a number of can plant installations has been enhanced by proper maintenance. Annual analytical measurements show reduction of solvent hydrocarbons to be in excess of 90% for 3-6 years, the equivalent of 12,000 to 30,000 operating hours. When propane was the only available fuel, the catalyst cost was recovered by fuel savings (vs thermal incineration prior to the catalyst retrofit) in two to three months. In numerous cases the fuel savings paid for the catalyst in 6 to 12 months. [Pg.515]
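The sizing ratio and payback arithmetic above can be sketched as follows. This is a minimal illustration of the quoted 1.0-2.0 m³ of catalyst per 1000 m³/min of exhaust and the fuel-savings payback; the specific flow, cost, and savings figures in the example are hypothetical, not data from the source.

```python
# Illustrative sizing and payback arithmetic. The per-1000 m3/min ratio
# comes from the excerpt above; the cost and savings numbers below are
# hypothetical examples chosen only to exercise the formulas.

def catalyst_bed_volume(exhaust_m3_per_min: float,
                        m3_catalyst_per_1000: float = 1.0) -> float:
    """Catalyst bed volume (m3) for a given exhaust flow (m3/min at
    0 degC and 101 kPa), using a factor in the quoted 1.0-2.0 range."""
    return exhaust_m3_per_min / 1000.0 * m3_catalyst_per_1000

def payback_months(catalyst_cost: float, monthly_fuel_savings: float) -> float:
    """Months for fuel savings (vs. thermal incineration) to recover the cost."""
    return catalyst_cost / monthly_fuel_savings

bed = catalyst_bed_volume(2500.0, 1.5)      # 2500 m3/min exhaust, mid-range factor
months = payback_months(30000.0, 12000.0)   # hypothetical cost and savings
print(bed)     # 3.75 m3 of catalyst
print(months)  # 2.5 months, consistent with the 2-3 month range quoted
```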

It is important, however, not to take this analogy too far. In general, the performance of a piece of hardware, such as a valve, will be much more predictable as a function of its operating conditions than will human performance as a function of the PIFs in a situation. This is partly because human performance is dependent on a considerably larger number of parameters than hardware, and only a subset of these will be accessible to an analyst. In some ways the job of the human reliability specialist can be seen as identifying which PIFs are the major determinants of human reliability in the situation of interest, and which can be manipulated in the most cost-effective manner to minimize error. [Pg.103]

Throughout these guidelines it is argued that when engineering techniques for the design and assessment of process equipment and control systems are supplemented with human reliability techniques, then performance of both the hardware and humans will be optimized. [Pg.108]

The most important hardware items appeared to be the detectors themselves. The gas detection system gave frequent spurious alarms, and on both platforms the ultraviolet (UV) fire detectors were also prone to spurious activation, from distant hot work for example, and had a limited ability to detect real fires. The unreliability of these systems had a general effect on response time and would, overall, lengthen the time to respond. The second aspect related to hardware was function and performance testing of the emergency blowdown systems. It is critical that the workers believe the systems will work when required, and this can only be achieved by occasional use or at least function testing. [Pg.339]

MW fraction increases the melt flow, thus improving the processability but at the cost of toughness, stiffness, and stress crack resistance. In addition, the improvement in performance through narrowing the MWD is restricted by the catalyst, the process hardware, and the process control limitations. Dow has developed a reactor grade HDPE of optimized breadth, peak, and shape of MWD... [Pg.289]

The rapid increase in computer applications is partly attributable both to the decreasing costs of hardware and software and to the increasing costs of human labor. This shift has given rise to a productivity factor assigned to various tasks performed by computers versus people. One figure recently quoted was a minimum factor of 4 to 5 for CAD (computer-aided design); that is, one draftsman with a CAD system can replace 4 to 5 manual draftsmen. [Pg.108]

Products in Group 3 seem to us to represent the future of practical batch process control. In such systems, modern workstations perform the single-user functions (e.g., control system design, set-up, and maintenance; operator interface; data collection; historical reporting) for which they were designed, while powerful multitasking controllers perform actual control. As computer hardware and software standards continue to evolve toward distributed networks of processors optimized for specific kinds of tasks, such systems will, we feel, proliferate rapidly. [Pg.474]

Mathematical models require computation to secure concrete predictions. Success in relatively simple cases spurs interest in more complex situations. Somewhat specialized computer hardware and software have emerged in response to these demands. Examples are the high-end processors with vector architecture, such as the Cray series, the CDC Cyber 205, and the recently announced IBM 3090 with vector attachment. When a computation can effectively utilize vector architecture, such machines will outperform even the most powerful conventional scalar machine by a substantial margin. Such performance has given rise to the term "supercomputer." [Pg.237]

Sodium imaging is relatively time consuming and cannot be performed on standard clinical scanners without specialized hardware and software upgrades. Nevertheless, the unique physiologic information provided by sodium imaging may make this technique an important tool in acute stroke imaging in years to come. [Pg.27]

We can zoom into, or refine, the action to see more detail. What was one action is now seen to be composed of several actions (see Figure 6.8). Each of these actions can be split again into smaller ones, into as much detail as you like. Some of the actions might be performed by software; others might be performed by some mixture of software, hardware, and people; still others might be the interactions between those things. At any level—deep inside the software or at the overall business level—we can treat them the same way. Catalysis is a fractal method: it works in the same way at any scale. [Pg.249]

Reliability and availability: Does the running system reliably continue to perform correctly over extended periods of time? What proportion of time is the system up and running? In the presence of failure, does it degrade gracefully rather than shut down completely? Reliability is measured as the mean time to system failure; availability is the proportion of time the system is functioning. Both qualities are typically dealt with by making the architecture fault-tolerant, using duplicated hardware and software resources. [Pg.513]
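The definitions above can be sketched numerically. A common steady-state formulation, assumed here as an illustration (it is not spelled out in the excerpt), combines mean time to failure (MTTF) with a mean time to repair (MTTR): availability = MTTF / (MTTF + MTTR).

```python
# A minimal sketch of the reliability/availability definitions above.
# Reliability is characterized by mean time to failure (MTTF); with a
# mean time to repair (MTTR), steady-state availability is the fraction
# of time the system is functioning: MTTF / (MTTF + MTTR).

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: proportion of time the system is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Example: the system fails on average every 999 hours and takes
# 1 hour to repair.
print(availability(999.0, 1.0))  # 0.999, i.e. "three nines"
```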

The blast resistance of conventional doors is generally limited by the rebound capacity in the unseating direction. A conventional unreinforced hollow metal door with a cylindrical latch may be adequate to withstand a rebound force of 50 psf (2.4 kPa). A door with a mortised latch may be adequate for a rebound force of 100 psf (4.8 kPa). If the blast pressure exceeds this, other alternatives may be considered. These include placing interior or external barrier walls, or installation of blast resistant doors and frames. Unlike conventional doors, blast doors are typically provided as a complete assembly including the door, frame, hardware, and accessories. This is because all the components are dependent on each other to provide the overall blast resistance. Refer to Chapter 9 for performance requirements and design details for blast resistant doors. [Pg.75]
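The unit pairs quoted above (50 psf ≈ 2.4 kPa, 100 psf ≈ 4.8 kPa) follow from the standard conversion 1 lbf/ft² ≈ 47.88 Pa; a quick sketch confirming the arithmetic:

```python
# Check of the psf-to-kPa conversions quoted above, using the standard
# factor 1 pound-force per square foot = 47.88 Pa (0.04788 kPa).

PSF_TO_KPA = 0.04788

def psf_to_kpa(psf: float) -> float:
    """Convert pounds-force per square foot to kilopascals."""
    return psf * PSF_TO_KPA

print(round(psf_to_kpa(50), 1))   # 2.4
print(round(psf_to_kpa(100), 1))  # 4.8
```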

Modern laboratories are complex multifaceted units with vast amounts of information passing to and from instruments and computers and to and from analysts and clients daily. The development of high-speed, high-performance computers has provided laboratory personnel with the means to handle the situation with relative ease. Software written for this purpose has meant that ordinary personal computers can handle the chores. The hardware and software system required has come to be known as the laboratory information management system (LIMS). [Pg.167]

In order to reveal sources of artifacts and noise, we will give a brief description of spectrometer hardware and probehead technology. Of course, spectrometer manufacturers do their best to construct hardware with optimal performance. However, experience shows that all hardware components may occasionally fail. It is one of the goals of this chapter to present ways to recognize malfunction quickly and to locate the source of the problem. [Pg.69]


See other pages where Hardware and Performance is mentioned: [Pg.196]    [Pg.251]    [Pg.1365]    [Pg.14]    [Pg.17]    [Pg.127]    [Pg.61]    [Pg.352]    [Pg.512]    [Pg.466]    [Pg.648]    [Pg.139]    [Pg.146]    [Pg.441]    [Pg.318]    [Pg.575]    [Pg.998]    [Pg.150]    [Pg.46]    [Pg.289]    [Pg.1062]    [Pg.1063]    [Pg.242]    [Pg.244]    [Pg.522]    [Pg.332]    [Pg.271]    [Pg.566]    [Pg.78]    [Pg.67]   





Hardware

Hardware performance

© 2024 chempedia.info