
Databases data acquisition

According to an elegant remark by Davies [5], "Modern scientific data handling is multitechnique, multisystem, and manufacturer-independent, with results being processed remotely from the measuring apparatus." Indeed, data exchange and storage are steps of the utmost importance in the data acquisition pathway. The simplest way to store data is to define some special format (i.e., a collection of rules) for a flat file. Naturally, one cannot overestimate the importance of databases, which are the subject of Chapter 5 in this book. Below we discuss three simple, yet efficient, data formats. [Pg.209]
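
As a minimal illustration of the flat-file idea (this is a hypothetical layout for the sketch only, not one of the three formats discussed in the cited text), an x,y data set could be stored as a small self-describing header block followed by value pairs:

```python
# Minimal sketch of a self-describing flat-file format for an x,y data set.
# The header keys and layout here are hypothetical, not a published standard.

def write_flat(path, title, x_units, y_units, points):
    """Write a header block followed by tab-separated x,y pairs."""
    with open(path, "w") as f:
        f.write(f"##TITLE={title}\n")
        f.write(f"##XUNITS={x_units}\n")
        f.write(f"##YUNITS={y_units}\n")
        f.write(f"##NPOINTS={len(points)}\n")
        f.write("##DATA\n")
        for x, y in points:
            f.write(f"{x}\t{y}\n")

def read_flat(path):
    """Parse the header into a dict and the data block into a list of tuples."""
    header, points = {}, []
    in_data = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line == "##DATA":
                in_data = True
            elif in_data and line:
                x, y = line.split("\t")
                points.append((float(x), float(y)))
            elif line.startswith("##"):
                key, value = line[2:].split("=", 1)
                header[key] = value
    return header, points

if __name__ == "__main__":
    write_flat("spectrum.txt", "Sample A", "cm-1", "absorbance",
               [(400.0, 0.012), (402.0, 0.015), (404.0, 0.021)])
    print(read_flat("spectrum.txt"))
```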

With the availability of computerized data acquisition and storage it is possible to build database libraries of standard reference spectra. When a spectrum of an unknown compound is obtained, its identity can often be determined by searching through a library of reference spectra. This process is known as spectral searching. Comparisons are made by an algorithm that calculates the cumulative difference between the absorbances of the sample and reference spectra. For example, one simple algorithm uses the following equation... [Pg.403]
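
The equation referred to above is truncated in this excerpt; a common simple metric of this kind is the sum of absolute differences between sample and reference absorbances, sketched below. This is an assumed scoring function for illustration, not necessarily the exact equation the source cites, and the library entries are toy data.

```python
# Sketch of a spectral library search using a cumulative absolute-difference
# metric; the specific equation in the source excerpt is truncated, so this
# sum-of-absolute-differences score is an assumption for illustration.

def cumulative_difference(sample, reference):
    """Sum of |A_sample - A_reference| over matching wavenumber points."""
    return sum(abs(s - r) for s, r in zip(sample, reference))

def search_library(sample, library):
    """Return library entries ranked by similarity (smallest difference first)."""
    scores = {name: cumulative_difference(sample, ref) for name, ref in library.items()}
    return sorted(scores.items(), key=lambda item: item[1])

if __name__ == "__main__":
    # Absorbances at a common set of wavenumbers (toy data).
    unknown = [0.10, 0.42, 0.35, 0.08]
    library = {
        "ethanol": [0.11, 0.40, 0.36, 0.07],
        "acetone": [0.30, 0.12, 0.05, 0.44],
    }
    for name, score in search_library(unknown, library):
        print(f"{name}: {score:.3f}")
```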

Costing/investment analysis, Data acquisition, Database management, Data conversion, Development tools, Dispersion models, Distillation, Drafting... [Pg.61]

Time Systems, McGraw-Hill, New York, 1985; Hawryszkiewycz, Database Analysis and Design, Science Research Associates Inc., Chicago, 1984; Khambata, Microprocessors/Microcomputers: Architecture, Software, and Systems, 2d ed., Wiley, New York, 1987; Liptak, Instrument Engineers' Handbook, Chilton Book Company, Philadelphia, 1995; Mellichamp (ed.), Real-Time Computing with Applications to Data Acquisition and Control, Van Nostrand Reinhold, New York, 1983. [Pg.770]

Additional database space must be allocated when intermediate data points are used. A system can be designed to use process I/O points as intermediates; however, the data acquisition software must then be programmed to bypass these points when scanning. All system builders provide virtual data point types if the intermediate data storage scheme is adopted. These points are not scanned by the data acquisition software, and memory space requirements are reduced by eliminating unnecessary attributes such as hardware addresses and scan frequencies. It should be noted that the fill-in-the-forms technique is applicable to all data point types. [Pg.773]
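
A minimal sketch of the distinction, with hypothetical field names and a stand-in I/O call: a hardware point carries an address and a scan period, while a virtual (intermediate) point omits them and is skipped by the scanning pass.

```python
# Sketch of hardware vs. virtual data point records in a process database.
# Field names and the scanning loop are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataPoint:
    tag: str
    value: float = 0.0
    hardware_address: Optional[str] = None   # None for virtual points
    scan_period_s: Optional[float] = None    # None for virtual points

    @property
    def is_virtual(self) -> bool:
        return self.hardware_address is None

def read_from_io(address: str) -> float:
    """Placeholder for a real I/O driver call."""
    return 42.0

def scan(points):
    """Data-acquisition pass: read only points tied to real I/O hardware."""
    for p in points:
        if p.is_virtual:
            continue                                 # intermediates are bypassed
        p.value = read_from_io(p.hardware_address)   # hypothetical driver call

if __name__ == "__main__":
    db = [
        DataPoint("TI-101", hardware_address="AI/3/07", scan_period_s=1.0),
        DataPoint("CALC-avg-temp"),                  # virtual intermediate, not scanned
    ]
    scan(db)
    print([(p.tag, p.value) for p in db])
```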

Personal Computer Controller. Because of its high performance at low cost and its unexcelled ease of use, application of the personal computer (PC) as a platform for process controllers is growing. When configured to perform scan, control, alarm, and data acquisition (SCADA) functions and combined with a spreadsheet or database management application, the PC controller can be a low-cost, basic alternative to the DCS or PLC. [Pg.776]

This chapter provides the basic knowledge and skills required to implement a computer-based vibration-monitoring program. It discusses the following topics: (1) typical machine-train monitoring parameters, (2) database development, (3) data-acquisition equipment and methods, and (4) data analysis. [Pg.699]

While providing many advantages, simplified data acquisition and analysis also can be a liability. If the database is improperly configured, the automated capabilities of these analyzers will yield faulty diagnostics that can allow catastrophic failure of critical plant machinery. [Pg.699]

The steps in developing such a database are (1) collection of machine and process data and (2) database setup. Input requirements of the software are machine and process specifications, analysis parameters, data filters, alert/alarm limits, and a variety of other parameters used to automate the data-acquisition process. [Pg.713]

The key elements of database setup discussed in this section are analysis parameter sets, data filters (i.e., bandwidths, averaging, and weighting), limits for alerts and alarms, and data-acquisition routes. [Pg.715]
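
As a rough illustration of how these elements fit together, a database setup record per measurement point might look like the sketch below. All names, units, and limit values are hypothetical and are not taken from the source.

```python
# Hypothetical sketch of a vibration-monitoring database setup record:
# analysis parameter set, data filters, alert/alarm limits, and route position.

measurement_point_setup = {
    "point_id": "P-101-motor-outboard-horizontal",
    "analysis_parameters": {
        "fmax_hz": 1000,            # maximum frequency of the spectrum
        "lines_of_resolution": 800,
        "averages": 4,              # spectral averaging
        "window": "hanning",        # weighting applied before the FFT
    },
    "alert_alarm_limits": {
        "overall_velocity_mm_s": {"alert": 4.5, "alarm": 7.1},
        "1x_running_speed_mm_s": {"alert": 2.0, "alarm": 4.0},
    },
    "route": {"area": "boiler-feed pumps", "sequence": 3},
}

def check_limits(setup, measured):
    """Compare measured band values against the configured alert/alarm limits."""
    status = {}
    for band, limits in setup["alert_alarm_limits"].items():
        value = measured.get(band, 0.0)
        if value >= limits["alarm"]:
            status[band] = "ALARM"
        elif value >= limits["alert"]:
            status[band] = "ALERT"
        else:
            status[band] = "OK"
    return status

if __name__ == "__main__":
    print(check_limits(measurement_point_setup,
                       {"overall_velocity_mm_s": 5.2, "1x_running_speed_mm_s": 1.1}))
```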

Most computer-based systems require data-acquisition routes to be established as part of the database setup. These routes specifically define the sequence of measurement points and, typically, a route is developed for each area or section of the plant. With the exception of limitations imposed by some of the vibration monitoring systems, these routes should define a logical walking route within a specific plant area. A typical measurement route is shown in Figure 44.15. [Pg.720]
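
A data-acquisition route of this kind is essentially an ordered walking sequence of measurement points within one plant area; a minimal sketch follows, with hypothetical point identifiers.

```python
# Sketch of a data-acquisition route: an ordered walking sequence of
# measurement points within one plant area. Point identifiers are hypothetical.

route_boiler_house = [
    "FD-fan-motor-outboard",
    "FD-fan-motor-inboard",
    "FD-fan-bearing-inboard",
    "FD-fan-bearing-outboard",
    "feed-pump-motor-outboard",
    "feed-pump-motor-inboard",
]

def next_point(route, completed):
    """Return the next point the technician should measure on this route."""
    for point in route:
        if point not in completed:
            return point
    return None

if __name__ == "__main__":
    done = {"FD-fan-motor-outboard", "FD-fan-motor-inboard"}
    print(next_point(route_boiler_house, done))  # -> "FD-fan-bearing-inboard"
```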

Automated data acquisition. The object of using microprocessor-based systems is to remove any potential for human error, reduce manpower, and automate as much as possible the acquisition of vibration, process, and other data that will provide a viable predictive maintenance database. Therefore, the system must be able to automatically select and set monitoring parameters without user input. The ideal system would limit user input to a single operation; however, this is not totally possible with today's technology. [Pg.805]

Whereas the components of (known) test mixtures can be attributed on the basis of APCI+/- spectra, it is quite doubtful that this is equally feasible for unknown (real-life) extracts. Data acquisition conditions of LC-APCI-MS need to be optimised for existing universal LC separation protocols. User-specific databases of reference spectra need to be generated, and knowledge about the fragmentation rules of APCI-MS needs to be developed for the identification of unknown additives in polymers. Method development requires validation by comparison with established analytical tools. Extension to a quantitative method appears feasible. Despite the current widespread availability of LC-API-MS equipment, relatively few industrial users, such as ICI, Sumitomo, Ford, GE, Solvay and DSM, appear to be somehow committed to this technique for (routine) polymer/additive analysis. [Pg.519]

Combinatorial chemistry and HTE are powerful tools in the hands of a scientist, as they are a source of meaningful, consistent records of data that would be hard to obtain via conventional methods within a decent timeframe. This blessing of fast data acquisition can turn into a curse if the experimentalist does not take precautions to carefully plan the experiments ahead and the means of handling and analyzing the data afterwards. The two essential elements that ensure the successful execution of ambitious projects on a rational and efficient basis are, therefore, tools that enable the scientist to carefully plan experiments and get the most out of the minimum number of experiments, in combination with the possibility of fast and reliable data retrieval from databases. Therefore, experimental planning and data management are complementary skillsets for the pre- and post-experimental stages. [Pg.376]

A historical control database can take on many formats, from a simple spreadsheet (e.g., Microsoft Excel) to a fully searchable database that is interfaced with, or is part of, the laboratory's computer data collection system. Most laboratories that conduct large numbers of studies according to GLP standards have a validated computer data collection system, and some of these systems automatically compile control data from studies so the user does not have to reenter the data into a separate historical control database. However, because of the inflexibility of the data acquisition systems, many laboratories still compile their historical control data by manually entering it into a stand-alone database, such as a customized spreadsheet (e.g., Microsoft Excel). [Pg.281]
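
A minimal sketch of such a stand-alone compilation, assuming a simple CSV file stands in for the customized spreadsheet (the file name and column layout are hypothetical):

```python
# Sketch of manually compiling historical control data into a stand-alone
# CSV "spreadsheet"; the column names and file name are hypothetical.
import csv
from pathlib import Path

HISTORY = Path("historical_controls.csv")
FIELDS = ["study_id", "species", "endpoint", "control_mean", "control_sd", "n"]

def append_control_record(record: dict) -> None:
    """Append one study's control-group summary, writing the header if new."""
    new_file = not HISTORY.exists()
    with HISTORY.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

if __name__ == "__main__":
    append_control_record({
        "study_id": "TX-2024-017", "species": "rat",
        "endpoint": "body weight, g", "control_mean": 412.3,
        "control_sd": 18.6, "n": 10,
    })
```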

This architecture is segmented into four functions: data acquisition, data enrichment, correlation, and user interface. All these functions interact with a relational database, which serves as a reliable and persistent data bus for all... [Pg.351]

Data acquisition needs privileges for inserting data into the database, but not for modifying it. It must be distributed and concurrent. It needs high-performance remote access, because it is extremely likely that it will not reside on the same server as the database. It must have a high priority to ensure that event storms are properly handled. [Pg.353]

Data enrichment needs access to external information sources that are not necessarily accessible from the system hosting the database or the web server. It needs update privileges on tables that are outside the core event tables. As such, it may need to be distributed, but not necessarily concurrent. In fact, our implementation locks data enrichment processes at the database level, to ensure that two installations of the application will not attempt the same enrichment procedure. Regarding performance, data enrichment can be delayed by data acquisition. [Pg.353]

Correlation is usually co-located on the database server, because it also needs rapid access to all the tables in the database, and does not need to interact with the outside world. Performance is an issue because events need to be properly cleaned before they are presented to the operators. As such, correlation comes second after data acquisition for performance needs. It also needs read, write and update privileges on almost all areas of the database. [Pg.353]
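
Under these assumptions, the privilege separation described in the three paragraphs above might be expressed as database roles roughly as follows. The role, table, and grant names are illustrative only, not the schema of the system being described; the SQL is issued through a generic DB-API cursor.

```python
# Hypothetical sketch of the privilege separation described above, expressed
# as SQL GRANT statements; role and table names are illustrative only.

PRIVILEGES = {
    # data acquisition: insert-only on the core event tables
    "acquisition_role": ["GRANT INSERT ON events TO acquisition_role"],
    # data enrichment: may update auxiliary tables, but not the core events
    "enrichment_role": [
        "GRANT SELECT ON events TO enrichment_role",
        "GRANT SELECT, INSERT, UPDATE ON enrichment_data TO enrichment_role",
    ],
    # correlation: read/write/update on almost all areas of the database
    "correlation_role": [
        "GRANT SELECT, INSERT, UPDATE ON events TO correlation_role",
        "GRANT SELECT, INSERT, UPDATE ON enrichment_data TO correlation_role",
    ],
}

def apply_privileges(cursor) -> None:
    """Execute the grants with any DB-API 2.0 cursor (e.g., from psycopg2)."""
    for statements in PRIVILEGES.values():
        for stmt in statements:
            cursor.execute(stmt)
```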

Data acquisition is presented in the upper left corner of Figure 1. The information is read from multiple heterogeneous sources and transformed into our standard format. The acquisition mechanism understands the IDMEF format, our private database format, and several dedicated log sources such as firewall logs (Cisco, Netscreen, Checkpoint, IPtables), access control mechanisms (TCP-wrappers, login), VPN concentrators, IDS sensors and routers. [Pg.353]
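
A rough sketch of that kind of normalization step is shown below, assuming a hypothetical internal event format. The regular expression and field names are illustrative only and do not reproduce the actual parsers for the products listed above.

```python
# Illustrative sketch of reading heterogeneous log sources and normalizing
# them into one internal event format; the regex and field names are
# hypothetical and do not reproduce the actual Cisco/Netscreen/etc. parsers.
import re
from datetime import datetime, timezone

def parse_iptables(line: str) -> dict:
    """Very small iptables-style example: extract source/destination IPs."""
    match = re.search(r"SRC=(\S+) DST=(\S+)", line)
    if not match:
        raise ValueError("unrecognized log line")
    return {"source": match.group(1), "target": match.group(2)}

PARSERS = {"iptables": parse_iptables}   # other sources would register here

def normalize(source: str, line: str) -> dict:
    """Convert one raw log line into the common event record."""
    event = PARSERS[source](line)
    event.update({
        "sensor": source,
        "received": datetime.now(timezone.utc).isoformat(),
    })
    return event

if __name__ == "__main__":
    raw = "IN=eth0 OUT= SRC=10.0.0.5 DST=192.0.2.7 PROTO=TCP DPT=22"
    print(normalize("iptables", raw))
```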

It is noted several times in this book that the goal of experimental methodology is to provide optimum quality data for subsequent statistical analysis. This is true, but there is also a very important intermediary between data acquisition and data analysis: the field of clinical data management. In many cases, Data Management and Statistics fall under the same division within a company; in other cases these tasks are handled by different divisions. Whichever is the case, it is vital to have statisticians involved in all discussions regarding database development and use. [Pg.74]

Otto's book on chemometrics [4] is a welcome recent text that covers quite a range of topics, but at a fairly introductory level. The book looks at computing in general in analytical chemistry, including databases and instrumental data acquisition. It does not deal with the multivariate or experimental design aspects in a great deal of detail, but it is a very clearly written introduction for the analytical chemist, by an outstanding educator. [Pg.10]

(Ward et al., 1988; Jones et al., 1987). A crystallization plate, on which the protein solution is sandwiched between glass plates, was designed for the automated visual inspection of crystallization experiments (Jones et al., 1987). Photographs of crystals first produced by an automated instrument, together with some reproduced from the literature, are shown in Fig. 4E-H. As a complement to the automated setup of crystallization experiments, a database system for recording crystallization results (Fig. 10) has been developed to facilitate data acquisition and to aid in the design of subsequent experiments. [Pg.31]





