Big Chemical Encyclopedia


Workstation

Workstations are perhaps a more flexible complement to robotic methods. These systems are capable of in-series or parallel analysis. Methods are developed with very specific and specialized functions that allow higher throughput and operation in batch mode. These dedicated approaches would seem to be a popular choice in the drug discovery and preclinical development stages. Workstations could be [Pg.183]

LC/MS interfaces that accommodate miniaturized formats for biomolecule analysis, such as nanoelectrospray (Wilm and Mann, 1996) or microelectrospray (Figeys et al., 1996), with a variety of mass-detection devices ranging from triple quadrupoles (Swiderek et al., 1998) and TOF (Medzihradszky et al., 1998) to quadrupole-TOF (Morris et al., 1996; Hanisch et al., 1998) and ion traps (Figeys and Aebersold, 1997; Arnott et al., 1998), appear to be headed for tremendous growth. Future developments in instrumentation and improvements in performance will drive this growth and will permit the facile conversion to automated approaches (Ducret et al., 1998) and routine procedures for isotopic labeling of peptides for sequence analysis (Shevchenko et al., 1997; Gygi et al., 1999). Furthermore, it may be reasonable to presume that these advances will result in more frequent investigations of intact membrane proteins. [Pg.188]

The most salient difference between robotic stations and workstations is that, whereas a workstation can only be used for the tasks (all or some) for which it was constructed, robotic stations can be modified by changing their software, modules or peripherals as required to undertake one or more specific tasks, or even a whole analytical process. As a result, describing a workstation is as simple as listing its intended functions, whereas characterizing a robotic station requires stating the type of arm it uses and the equipment that helps the arm perform its tasks. [Pg.503]

The simplest and most common workstations are those for dilution and/or reagent addition to a number of samples simultaneously, either to all samples in a rack or to a line with a sliding z-axis (as in the Biomek 2000 model from Beckman). Most workstations are designed to operate with liquid samples; such is the case with those from Cyberlab, Gilson, Zymark, SciLog, Sagian, Beckman and Hamilton, which manufacture specific equipment for liquid handling, solid-phase extraction and preparation of liquid [Pg.503]
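The dilution step such workstations automate reduces to the familiar C1V1 = C2V2 relation. A minimal sketch of the arithmetic (the function name and units are illustrative, not from any vendor's API):

```python
def dilution_volumes(c_stock, c_target, v_final):
    """C1*V1 = C2*V2: return (stock volume, diluent volume) needed to
    prepare v_final of solution at c_target from a stock at c_stock.
    All volumes share one unit (e.g. microlitres); concentrations share another."""
    if not 0 < c_target <= c_stock:
        raise ValueError("target concentration must be positive and <= stock")
    v_stock = c_target * v_final / c_stock
    return v_stock, v_final - v_stock
```

For example, a 1:10 dilution into a 100 µL well takes 10 µL of stock plus 90 µL of diluent.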

The diversity of equipment with which manufacturers have flooded the market is exemplified by Zymark Corporation. In the last few years, this firm has launched six different types of workstations for handling liquid samples, namely: [Pg.504]

The Presto Liquid Handler, which uses a single-channel arm and a second, multichannel one that controls eight fixed cannulae for rapid, precise reagent addition, transfer or dilution to microplates with up to 384 wells. [Pg.504]

The Twister Universal Microplate Handler, which can be interfaced to microplate instruments such as washers, readers and dispensers to minimize the need for manual microplate handling. [Pg.504]

On the basis of hierarchical systems, some companies such as Perkin-Elmer and IBM have developed so-called "workstations". These are interactive, instrument-based computers that allow the user to control one or most of the laboratory operations. Hence, they afford data acquisition, classification, editing, correlation and retrieval, and support data processing and/or process simulation. [Pg.52]

ROM and RAM, mass-storage hard-disk drives, a standard RS-232C interface operating serially at 9600 baud (bit/s) and a parallel one transmitting at 100 baud make up this section of the system. [Pg.53]
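At 9600 baud with common 8N1 framing (1 start + 8 data + 1 stop = 10 bits per byte), throughput works out to 960 bytes/s. A small illustrative calculation (the framing assumption is mine, not stated in the text):

```python
def serial_transfer_time(n_bytes, baud, bits_per_byte=10):
    """Seconds needed to move n_bytes over an asynchronous serial link.
    bits_per_byte=10 assumes 8N1 framing (1 start, 8 data, 1 stop bit)."""
    return n_bytes * bits_per_byte / baud
```

So shipping a 9600-byte data file over the 9600-baud link takes about ten seconds, which explains why such ports were reserved for instrument control rather than bulk data transfer.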

Once the task analysis has been performed and ergonomic problems have been observed, a determination is made as to how to eliminate the hazards. One method to achieve this goal is by examining how the work area is laid out and redesigning the workplace to eliminate the problems. Knowledge of workstations will help the safety professional in this process. [Pg.149]

Workspace design specifications include a minimum of 20 inches in width, 26 inches in depth to allow for adequate leg clearance, a minimum of 4-inch clearance from the edge of the workstation, and an approximate ideal work area of 10 by 10 inches where activities are performed. Seated workstations are recommended for detailed visual tasks, for precision assembly work, or for typing and writing tasks. Standing workstations are typically recommended when  [Pg.149]
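The dimensional minima above can be captured as a simple check; the function and argument names are illustrative, not from any ergonomics standard:

```python
# Minima taken from the text: width >= 20 in, depth >= 26 in (leg
# clearance), edge clearance >= 4 in; the ideal primary work area
# is roughly 10 x 10 in directly in front of the worker.
def meets_seated_workspace_spec(width_in, depth_in, edge_clearance_in):
    """True if a seated workstation satisfies the minimum dimensions."""
    return width_in >= 20 and depth_in >= 26 and edge_clearance_in >= 4
```

A 24 in wide, 28 in deep surface with a 5 in edge clearance passes; an 18 in wide bench does not.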

Grandjean (1988) recommends seven guidelines for workplace layout and design  [Pg.150]

Avoid any kind of bent or unnatural posture. (Bending the trunk or the head sideways is more harmful than bending forward.) [Pg.150]

Avoid keeping an arm outstretched either forward or sideways. (Such postures lead to rapid fatigue and reduce precision.) [Pg.150]


After an often lengthy period (several months) of acquisition and processing, the data may be loaded onto a seismic workstation for interpretation. These workstations are UNIX-based, dual-screen systems (typically sections on one side, maps on the other) where all the trace data is stored on fast-access disk, and where the picked horizons and faults can be digitised from the screen into a database. Of vital importance is access to all existing well data in the area for establishing the well-to-seismic tie. 2D data will be interpreted line by intersecting line, and 3D as a volume. [Pg.20]

Having gathered and evaluated the relevant reservoir data, it is desirable to present it in a way that allows easy visualisation of the subsurface situation. With a workstation it is easy to create a three-dimensional picture of the reservoir, displaying the distribution of a variety of parameters, e.g. reservoir thickness or saturations. All realisations need to be in line with the geological model. [Pg.140]

Bales Scientific, Infrared Non-Destructive-Test Workstation, SPIE Vol. 1689, IR Imaging Systems, 1992, pp. 163... [Pg.407]

The mechanical movements and the data acquisition are controlled by a DEC Alpha workstation, which is also able to perform the tomographic reconstruction and parts of the visualization of the results. [Pg.493]

RADView workstation software package user manual... [Pg.504]

The Master Module (Figure 2, a) controls both the communications in the local network and the communication between the network and the base station (a Scanner Master Controller or a FORCE Institute PSP-3 or PSP-4 ultrasonic acquisition unit). The communication between the base station and the computer (a PC with Windows 95 or a Unix workstation) containing the scanner control software runs on a standard Ethernet connection. [Pg.801]

All CDs are stored in a CD jukebox (100 CDs per jukebox) and are accessible to all HP9000 workstations under HP-UX 9.05 via the IXOS software (Ixos-Jukeman V1.3b). The Ixos-Jukeman software has a slow response time for filename searches on the jukeboxes. This problem has been circumvented: Laborelec has developed dedicated static database software. This database is loaded once and for all after the CDs are burned and verified. All CDs are read from the jukebox and all the filenames are saved in this database. One jukebox can contain more than 65,000 records. This dedicated software retrieves files from the jukebox almost instantaneously. [Pg.1024]
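The static database described above amounts to building a one-time filename-to-disc index so lookups no longer scan the jukebox. A hypothetical sketch of the idea (Laborelec's actual implementation is not described in the text):

```python
def build_filename_index(jukebox_contents):
    """jukebox_contents: mapping disc_id -> iterable of filenames.
    Returns a flat filename -> disc_id index, built once after the
    discs are burned and verified; subsequent lookups are O(1)."""
    index = {}
    for disc_id, filenames in jukebox_contents.items():
        for name in filenames:
            index[name] = disc_id
    return index

# Hypothetical contents of a two-disc jukebox:
index = build_filename_index({"CD-001": ["scan_a.dat"],
                              "CD-002": ["scan_b.dat"]})
```

Retrieving a file then means a single dictionary lookup to find the disc, followed by one jukebox load, rather than a search across up to 100 discs.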

This completes the outline of FAMUSAMM. The algorithm has been implemented in the MD simulation program EGO VIII [48] in a sequential and a parallelized version; the latter has been implemented and tested on a number of distributed-memory parallel computers, e.g., IBM SP2, Cray T3E, Parsytec CC and Ethernet-linked workstation clusters running PVM or MPI. [Pg.83]

Fig. 3. Average computation time for one step using EGO VIII on a DEC Alpha 3300L workstation (175 MHz) for simulation systems of varying size. The insets show some of the protein-water systems used for the benchmark simulations.
The procedure is computationally efficient. For example, for the catalytic subunit of the mammalian cAMP-dependent protein kinase and its inhibitor, with 370 residues and 131 titratable groups, an entire calculation requires 10 hours on an SGI O2 workstation with a 175 MHz MIPS R10000 processor. The bulk of the computer time is spent on the FDPB calculations. The speed of the procedure is important, because it makes it possible to collect results on many systems and with many different sets of parameters in a reasonable amount of time. Thus, improvements to the method can be made based on a broad sampling of systems. [Pg.188]

The new formalism is especially useful for parallel and distributed computers, since the communication intensity is exceptionally low and excellent load balancing is easy to achieve. In fact, we have used clusters of workstations (Silicon Graphics) and parallel computers - Terra 2000 and IBM SP/2 - to study the dynamics of proteins. [Pg.279]

In Table 1 the CPU times required by the two methods (LFV and SISM) for 1000 MD integration steps, computed on an HP 735 workstation, are compared for the same model system, a box of 50 water molecules. The computation cost per integration step is approximately the same for both methods, so that the speed-up of the SISM over the LFV algorithm is deter-...

Table 1. CPU time for 1000 MD steps of 50 H2O molecules in a box with L = 15 Å using the LFV and the SISM for an equal time step of 1 fs, computed on an HP 735 workstation.
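Since the cost per integration step is about equal for the two methods, the SISM's speed-up over LFV is set by how much larger a time step it tolerates. A sketch of that ratio (my reading of the truncated sentence above, not the authors' formula):

```python
def integrator_speedup(cost_per_step_ref, cost_per_step_new, dt_ref, dt_new):
    """Speed-up as the ratio of simulated time per CPU second,
    new integrator vs. reference: (dt_new/cost_new) / (dt_ref/cost_ref)."""
    return (dt_new / cost_per_step_new) / (dt_ref / cost_per_step_ref)
```

With equal per-step costs, doubling the usable time step doubles the speed-up; any extra per-step overhead of the new method eats into that gain proportionally.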
Our multipole code D-PMTA, the Distributed Parallel Multipole Tree Algorithm, is a message-passing code which runs both on workstation clusters and on tightly coupled machines such as the Cray T3D/T3E [11]. Figure 3 shows the parallel performance of D-PMTA on a moderately large simulation on the Cray T3E; the scalability is not affected by adding the macroscopic option. [Pg.462]

The Fourier sum, involving the three-dimensional FFT, does not currently run efficiently on more than perhaps eight processors in a network-of-workstations environment. On a more tightly coupled machine such as the Cray T3D/T3E, we obtain reasonable efficiency on 16 processors, as shown in Fig. 5. Our initial production implementation was targeted for a small workstation cluster, so we only parallelized the real-space part, relegating the Fourier component to serial evaluation on the master processor. By Amdahl's principle, the 16% of the work attributable to the serially computed Fourier sum limits our potential speedup on 8 processors to 6.25, a number we are able to approach quite closely. [Pg.465]
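The bound quoted above is an instance of Amdahl's law: with serial fraction s, the speedup on p processors is 1/(s + (1-s)/p), which tends to 1/s as p grows (here 1/0.16 = 6.25). A minimal sketch:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: attainable speedup when a fixed fraction of the
    work (serial_fraction) cannot be parallelized."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / n_procs)
```

With no serial part, 8 processors give a speedup of exactly 8; a 16% serial fraction caps the speedup at 6.25 no matter how many processors are added.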

NAMD [7] was born of frustration with the maintainability of previous locally developed parallel molecular dynamics codes. The primary goal of being able to hand the program down to the next generation of developers is reflected in the acronym NAMD: Not (just) Another Molecular Dynamics code. Specific design requirements for NAMD were to run in parallel on the group's then recently purchased workstation cluster [8] and to use the fast multipole algorithm [9] for efficient full electrostatics evaluation as implemented in DPMTA [10]. [Pg.473]

As noted above, one of the goals of NAMD 2 is to take advantage of clusters of symmetric multiprocessor workstations and other non-uniform memory access platforms. This can be achieved in the current design by allowing multiple compute objects to run concurrently on different processors via kernel-level threads. Because compute objects interact in a controlled manner with patches, access controls need only be applied to a small number of structures such as force and energy accumulators. A shared memory environment will therefore contribute almost no parallel overhead and generate communication equal to that of a single-processor node. [Pg.480]
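The access-control idea, a shared accumulator guarded so that concurrent compute objects can add contributions safely, can be sketched with ordinary threads. This is an illustration of the pattern only, not NAMD's actual code:

```python
import threading

class GuardedAccumulator:
    """Force/energy accumulator protected by a lock: many compute
    threads may add contributions concurrently, and only this small
    structure needs any access control."""
    def __init__(self):
        self._lock = threading.Lock()
        self.total = 0.0

    def add(self, contribution):
        with self._lock:
            self.total += contribution

def run_demo(n_threads=4, adds_per_thread=1000):
    """Spawn several 'compute object' threads that each deposit
    adds_per_thread unit contributions into one shared accumulator."""
    acc = GuardedAccumulator()

    def worker():
        for _ in range(adds_per_thread):
            acc.add(1.0)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return acc.total
```

Because only the accumulator is locked, the compute threads spend almost all their time working independently, which is why the text expects nearly zero parallel overhead in a shared-memory node.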

The era of the Evans and Sutherland computer systems ended in the first half of the 1980s, when powerful and more economical workstations were introduced. Despite subsequent advances in computer graphics and in CPU power, these workstations dominate the everyday life of molecular modeling even today. [Pg.131]

In recent years, the rapid development of low-budget 3D-capable graphics cards has made it possible to visualize molecular models with standard PC systems. Some molecular modeling software, which was once available only for workstations, is now also offered for PCs [198]. [Pg.131]

After selection of descriptors/NN training, the best networks were applied to the prediction of 259 chemical shifts from 31 molecules (prediction set), which were not used for training. The mean absolute error obtained for the whole prediction set was 0.25 ppm, and for 90% of the cases the mean absolute error was 0.19 ppm. Some stereochemical effects could be correctly predicted. In terms of speed, the neural network method is very fast - the whole process to predict the NMR shifts of 30 protons in a molecule with 56 atoms, starting from an MDL Molfile, took less than 2 s on a common workstation. [Pg.527]
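The error figures quoted above are mean absolute errors over the prediction set; for reference, the statistic is simply the average of the unsigned prediction errors:

```python
def mean_absolute_error(predicted, reference):
    """Mean absolute error between predicted and reference chemical
    shifts (both in ppm), paired element by element."""
    if len(predicted) != len(reference):
        raise ValueError("sequences must have equal length")
    return sum(abs(p - r) for p, r in zip(predicted, reference)) / len(predicted)
```

For example, predictions of (1.0, 2.0) ppm against reference shifts of (1.5, 1.5) ppm give an MAE of 0.5 ppm.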

Molecular modelling used to be restricted to a small number of scientists who had access to the necessary computer hardware and software. Its practitioners wrote their own programs, managed their own computer systems and mended them when they broke down. Today's computer workstations are much more powerful than the mainframe computers of even a few years ago and can be purchased relatively cheaply. It is no longer necessary for the modeller to write computer programs, as software can be obtained from commercial software companies and academic laboratories. Molecular modelling can now be performed in any laboratory or classroom. [Pg.13]

One thing has not changed. By shopping among the software sources at the end of this book, and clipping popular computer magazine advertisements, the prudent instructor can still equip his or her lab at a starting investment of about $2000 per workstation serving two students. [Pg.364]

Another important consideration is the amount of labor necessary on the part of the user. One major difference between different software packages is the developer s choices between ease of use and efficiency of operation. For example, the Spartan program is extremely easy to use, but the price for this is that the algorithms are not always the most efficient available. Many chemistry users begin with software that is very simple, but when more sophisticated problems need to be solved, it is often easier to learn to use more complicated software than to purchase a supercomputer to solve a problem that could be done by a workstation with different software. [Pg.132]





Autotrace workstation

BenchMate workstation

CAD Workstation

Computer graphics interactive workstations

Computer hardware workstations

Design workstation

Dilution automated workstation

Dissolution workstations

Dissolution workstations automated systems

Electrochemical workstation

Ergonomics workstations

Experimental Workstation

Gazetteer of SR workstations for

Gazetteer of SR workstations for macromolecular crystallography

Laminar airflow workstations

Millilab workstation

Miscellaneous considerations on workstations and robotic stations

OPUS workstation

Purification workstations

Quantitation workstations

RapidTrace workstation

Screening automation workstation-based

Semi-automated workstations

Software for Minicomputers, Superminicomputers, Workstations, and Supercomputers

Sun workstations

Synthesis workstations

Tablet processing workstation

Task and workstation design

Video Display Terminal Workstation Design

Workstation Desk Design

Workstation Evaluations

Workstation Interventions

Workstation clusters

Workstation, definition

Workstations peripherals

Workstations seating

Workstations single-task

Workstations, arrangement

Workstations, automated high-throughput

Workstations, enclosed

Workstations, ergonomic

Workstations, process control

Your Personal OPUS Workstation
