
Data conditioning process

An additional complication is that most dynamic data are stated for configurations involving reference materials such as water, air, and so on. The nature of the process material will affect the dynamic characteristics. For example, a thermowell will exhibit different characteristics when immersed in a viscous organic emulsion than when immersed in water. It is often difficult to extrapolate the available data to process conditions of interest. [Pg.758]

A formal induction of mappings from measured operating data to process conditions is composed of the following three tasks (Fig. 3) ... [Pg.213]

Linear, polynomial, or statistical discriminant functions (Fukunaga, 1990; Kramer, 1991; MacGregor et al., 1991), or adaptive connectionist networks (Rumelhart et al., 1986; Funahashi, 1989; Vaidyanathan and Venkatasubramanian, 1990; Bakshi and Stephanopoulos, 1993; third chapter of this volume, Koulouris et al.), combine tasks 1 and 2 into one and solve the corresponding problems simultaneously. These methodologies utilize a priori defined general functional relationships between the operating data and process conditions, and as such they are not inductive. Nearest-neigh-... [Pg.213]
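By way of illustration, here is a minimal sketch of one such mapping, a nearest-neighbour rule that assigns a measured operating-data vector the process-condition label of its closest stored operating points. The feature vectors, labels, and the choice of k are hypothetical, not taken from the text:

```python
import numpy as np

def knn_classify(x, X_train, y_train, k=3):
    """Assign x the majority process-condition label of its k nearest
    training points (Euclidean distance in operating-data space)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical operating data: rows are [temperature, pressure] readings,
# labels name the process condition each reading was recorded under.
X_train = np.array([[350.0, 1.2], [352.0, 1.1], [400.0, 2.0], [398.0, 2.1]])
y_train = np.array(["normal", "normal", "fouled", "fouled"])

print(knn_classify(np.array([351.0, 1.15]), X_train, y_train))  # -> "normal"
```

Unlike the parametric discriminants cited above, the nearest-neighbour rule assumes no functional form in advance; it classifies purely by proximity to stored operating points.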

In order to illustrate how the mode of operation can positively modify selectivity for a large reactor of poor heat-transfer characteristics, simulations of the reactions specified in Example 5.3.1.4 carried out in a semibatch reactor were performed. The reaction data and process conditions are essentially the same as those for the batch reactor, except that the initial concentration of A was decreased to c_A0 = 0.46 mol litre⁻¹, and the remaining amount of A is dosed (1) either for the whole reaction time of 1.5 h at a rate of 0.1 mol m⁻³ s⁻¹, or (2) starting after 0.5 h at a rate of 0.15 mol m⁻³ s⁻¹. It is assumed that the volume of the reaction mixture and its physical properties do not change during dosing. The results of these simulations are shown in Fig. 5.3-15. The results of calculation for reactors of both types are summarized in Table 5.3-3. [Pg.221]
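The excerpt does not reproduce the kinetics of Example 5.3.1.4, so the sketch below assumes a generic competing scheme (first-order A → P desired, second-order 2A → S undesired, with invented rate constants) purely to show how the two dosing policies enter the semibatch balances at constant volume:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical kinetics (the actual scheme of Example 5.3.1.4 is not given
# here): A -> P desired (first order), 2A -> S undesired (second order),
# so keeping c_A low during dosing favours P.
k1, k2 = 1.0e-3, 5.0e-6   # s^-1 and m^3 mol^-1 s^-1 (assumed values)

def rhs(t, c, feed):
    cA, cP, cS = c
    F = feed(t)                       # dosing rate, mol m^-3 s^-1
    rP = k1 * cA
    rS = k2 * cA**2
    return [F - rP - 2.0 * rS, rP, rS]

# The two dosing policies from the text (volume assumed constant):
feed1 = lambda t: 0.1 if t <= 5400.0 else 0.0             # whole 1.5 h
feed2 = lambda t: 0.15 if 1800.0 <= t <= 5400.0 else 0.0  # after 0.5 h

c0 = [460.0, 0.0, 0.0]  # c_A0 = 0.46 mol/litre = 460 mol m^-3
for feed in (feed1, feed2):
    sol = solve_ivp(rhs, (0.0, 5400.0), c0, args=(feed,), max_step=10.0)
    cA, cP, cS = sol.y[:, -1]
    print(f"selectivity P/(P+S) = {cP / (cP + cS):.3f}")
```

Note that both policies dose the same total amount (540 mol m⁻³, bringing the charge to 1.0 mol litre⁻¹); because the assumed side reaction is of higher order in A, keeping c_A low by dosing improves the selectivity towards P, which is the effect the text describes.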

The development of a calibration model is a time-consuming process. Not only do the samples have to be prepared and measured, but the modelling itself, including data pre-processing, outlier detection, estimation and validation, is not an automated procedure. Once the model is there, changes may occur in the instrumentation or other conditions (temperature, humidity) that require recalibration. Another situation is where a model has been set up for one instrument in a central location and one would like to distribute this model to other instruments within the organization without having to repeat the entire calibration process for all these individual instruments. One wonders whether it is possible to translate the model from one instrument (old, parent, or master, A) to the others (new, children, or slaves, B). [Pg.376]
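One common family of answers is to standardize the slave spectra so that the master's model can be applied unchanged. Below is a minimal sketch of direct standardization; the matrix names and synthetic transfer-standard spectra are illustrative, not from the text:

```python
import numpy as np

# Spectra of the same n transfer standards measured on both instruments:
# rows are samples, columns are wavelengths (synthetic data for illustration).
rng = np.random.default_rng(0)
S_master = rng.normal(size=(10, 50))
S_slave = S_master @ (np.eye(50) + 0.05 * rng.normal(size=(50, 50)))

# Direct standardization: find F such that S_slave @ F ~ S_master; slave
# spectra mapped through F can then be fed to the master's model.
F, *_ = np.linalg.lstsq(S_slave, S_master, rcond=None)

new_slave_spectrum = S_slave[0]
corrected = new_slave_spectrum @ F        # now in "master space"
print(np.allclose(corrected, S_master[0], atol=1e-6))
```

In practice, piecewise direct standardization (a banded F built from local wavelength windows) is often preferred, because estimating a full 50 × 50 transform from a handful of transfer samples is badly underdetermined.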

Initial screening (Section 3.2), through identification of materials and conditions present at the specific site:
- Material Safety Data Sheets
- Process conditions
- Total inventory of materials being handled
- Information on site conditions as needed to evaluate explosion or fire potential... [Pg.17]

Our specimen database also contains additional parameters that are used to control the data collection process and to provide archival information to each data file written by the collection process. The console display for editing the specimen database is of the "fill in the form" type and the user revises the parameters for each specimen position (including the zeroth) as required. New parameter values are checked for validity at the time they are entered. All other parameters retain the values they possessed during the previous set of analyses. Thus, only minor changes are needed to program for a set of samples similar to the previous ones. All records in the database can be cleared if the analytical conditions are markedly different. [Pg.134]
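A minimal sketch of the entry-time validation and retain-previous-values behaviour described above; the parameter names and validity rules are hypothetical:

```python
# Each specimen position keeps the parameters from the previous analysis;
# an edit replaces a value only after its validator accepts it.
validators = {
    "count_time_s": lambda v: v > 0,
    "beam_current_nA": lambda v: 0 < v <= 100,
}

def edit_parameter(record, name, value):
    """Update one field of a specimen record, checking validity at entry
    time; all other fields retain their previous values."""
    if not validators[name](value):
        raise ValueError(f"invalid value {value!r} for {name}")
    record[name] = value

specimen = {"count_time_s": 60, "beam_current_nA": 20}  # from previous run
edit_parameter(specimen, "count_time_s", 120)           # minor change only
print(specimen)  # {'count_time_s': 120, 'beam_current_nA': 20}
```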

Especially in academic science, data analysis often starts as an exploratory, creative process with evolving ideas of the data analysis flow and rapidly changing analysis parameters or conditions. Therefore, data analysis software has to be extremely flexible in order not to limit the exploration of the data. Furthermore, it is important that the data analysis process is comprehensible and easily readable at all times, to ensure that scientists can share their approach with colleagues and to better prevent conceptual mistakes. A third requirement of data analysis software is minimizing the effort and time a scientist has to invest to implement various methods. [Pg.111]

LC-NMR can be operated in two different modes: on-flow and stopped-flow. In the on-flow mode, LC-NMR spectra are acquired continuously during the separation. The data are processed as a two-dimensional (2D) NMR experiment. The main drawback is the inherent low sensitivity. The detection limit with a 60 μl cell in a 500 MHz instrument for a compound with a molecular weight around 400 amu is 20 μg. Thus, on-flow LC-NMR runs are mainly restricted to the direct measurement of the main constituents of a crude extract, and this is often under overloaded HPLC conditions. Typically, 1 to 5 mg of crude plant extract will have to be injected on-column. In the stopped-flow mode, the flow of solvent after HPLC separation is stopped for a certain length of time when the required peak reaches the NMR flow cell. This makes it possible to acquire a large number of transients for a given LC peak and improves the detection limit. In this mode, various 2D correlation experiments (COSY, NOESY, HSQC, HMBC) are possible. [Pg.27]

Once the safety data have been collected and documented, they must be evaluated with regard to the process conditions in terms of their significance for process safety. With the interpretation of the safety data, the process conditions that provide safe operation and the limits that should not be surpassed become clear. This defines the critical limits of the process, which are at the root of the search for deviations in the next step of the risk analysis. [Pg.10]
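Concretely, the critical limits amount to a safe operating window per variable, against which the later search for deviations is run. A minimal sketch, with invented limit values and variable names:

```python
# Hypothetical critical limits derived from the safety data; the risk
# analysis then searches for deviations that would cross them.
critical_limits = {
    "temperature_C": (20.0, 120.0),   # safe operating window
    "pressure_bar": (1.0, 6.0),
}

def deviations(conditions):
    """Return the variables whose current value lies outside its limits."""
    return [
        name for name, value in conditions.items()
        if not critical_limits[name][0] <= value <= critical_limits[name][1]
    ]

print(deviations({"temperature_C": 135.0, "pressure_bar": 4.2}))
# -> ['temperature_C']
```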

Data conditioning procedures involve the verification, quality control, and data-levelling processes that are necessary to make data fit for the purpose for which they are to be used. This is something that has to be planned at the outset of any project generating geochemical data, whether it is in the sampling phase, for example, determining how sites and... [Pg.93]
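As one concrete example of a levelling step, the sketch below median-centres each analytical batch so that data produced under different batch conditions become comparable; this is a simple illustration, not a procedure prescribed by the text:

```python
import numpy as np

def level_batches(values, batch_ids):
    """Shift each batch so its median matches the overall median,
    removing between-batch offsets before the data are used."""
    values = np.asarray(values, dtype=float)
    out = values.copy()
    target = np.median(values)
    for b in np.unique(batch_ids):
        mask = np.asarray(batch_ids) == b
        out[mask] += target - np.median(values[mask])
    return out

# Two batches of the same element with a systematic offset between them:
conc = [10.1, 9.8, 10.3, 12.0, 12.2, 11.9]
batch = ["A", "A", "A", "B", "B", "B"]
print(level_batches(conc, batch))
```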

Given a set of experimental data, we look for the time profile of the A(t) and b(t) parameters in (C.1). To perform this key operation in the procedure, it is necessary to estimate the model on-line, at the same time as the input-output data are received [600]. Identification techniques that comply with this context are called recursive identification methods, since the measured input-output data are processed recursively (sequentially) as they become available. Other commonly used terms for such techniques are on-line or real-time identification, or sequential parameter estimation [352]. Using these techniques, it may be possible to investigate time variations in the process in a real-time context. However, tools for recursive estimation are available for discrete-time models. If the input r(t) is piecewise constant over time intervals (this condition is fulfilled in our context), then the conversion of (C.1) to a discrete-time model is possible without any approximation or additional hypothesis. The most common discrete-time models are difference equation descriptions, such as the Auto-Regression with eXtra inputs (ARX) model. The basic relationship is the linear difference equation ... [Pg.360]
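The excerpt breaks off before the equation; for a first-order ARX model the standard form is y(t) + a y(t-1) = b u(t-1) + e(t), and its parameters can be tracked sequentially with recursive least squares. A minimal sketch, with an illustrative forgetting factor and simulated data:

```python
import numpy as np

def rls_arx(u, y, lam=0.99):
    """Recursive least squares for a first-order ARX model
    y(t) = -a*y(t-1) + b*u(t-1) + e(t); the data are processed
    sequentially, as in on-line identification."""
    theta = np.zeros(2)              # [a, b] estimates
    P = 1e3 * np.eye(2)              # parameter covariance
    for t in range(1, len(y)):
        phi = np.array([-y[t - 1], u[t - 1]])   # regressor
        k = P @ phi / (lam + phi @ P @ phi)     # gain
        theta += k * (y[t] - phi @ theta)       # prediction-error update
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta

# Simulate y(t) = 0.7*y(t-1) + 0.5*u(t-1) + noise, i.e. a = -0.7, b = 0.5.
rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.normal()

print(rls_arx(u, y))   # approximately [-0.7, 0.5]
```

The forgetting factor lam < 1 discounts old data, which is what lets the estimator follow the time variations in A(t) and b(t) mentioned above.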

The Protein Data Bank (PDB; http://www.pdb.org) is the worldwide repository of three-dimensional structural data of biological macromolecules, such as proteins and nucleic acids (Berman et al. 2003). The Protein Data Bank uses several text file-based formats for data deposition, processing, and archiving. The oldest of these is the Protein Data Bank format (Bernstein 1977), which is used both for deposition and for retrieval of results. It is a plain-text format whose main part, a so-called primary structure section, contains the atomic coordinates within the sequence of residues (e.g., nucleotides or amino acids) in each chain of the macromolecule. Embedded in these records are chain identifiers and sequence numbers that allow other records to reference parts of the sequence. Apart from structural data, the PDB format also allows for storing various metadata such as bibliographic data, experimental conditions, additional stereochemistry information, and so on. However, the number of metadata types available is rather limited owing to the age of the PDB format and to its relatively strict syntax rules. [Pg.91]
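A minimal sketch of reading that primary structure section, pulling the chain identifier, residue sequence number, and coordinates out of the fixed columns of ATOM records (column positions follow the published PDB format specification):

```python
def parse_atoms(pdb_text):
    """Extract (chain, residue number, atom name, x, y, z) from the
    fixed-width ATOM/HETATM records of a PDB-format file."""
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            atoms.append((
                line[21],                    # chain identifier (col 22)
                int(line[22:26]),            # residue sequence number
                line[12:16].strip(),         # atom name
                float(line[30:38]),          # x coordinate (angstroms)
                float(line[38:46]),          # y
                float(line[46:54]),          # z
            ))
    return atoms

record = ("ATOM      1  N   MET A   1      27.340  24.430   2.614"
          "  1.00  9.67           N")
print(parse_atoms(record))
# -> [('A', 1, 'N', 27.34, 24.43, 2.614)]
```

The strict fixed-column layout is exactly the "relatively strict syntax" the text refers to: fields are located by position, not by delimiters.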



See also:
Data conditioning
Data extension different process conditions
Data processing
Process conditions
Process data
Processing conditions
