Big Chemical Encyclopedia



Processing Data

The data collected or stored by the computer are generally processed or treated prior to delivery. This operation is controlled by means of software, the term used to describe the set of mathematical or logic instructions (sentences) input by the user and executed by the computer. It is interesting [Pg.37]

3 Data Processing. - AMIDE, a Medical Image Data Examiner (AMIDE) [Pg.501]

CTD data processing requires a computer. In some computer languages, graphical editors are available or easy to write which assist in manual editing and in determining the editing parameters. [Pg.68]

Pressure, temperature and conductivity measurements are converted into physical units using their basic calibrations, giving P_CTD, T_CTD and C_CTD. This includes all necessary special corrections needed for certain CTD types. A preliminary salinity S_CTD is calculated. [Pg.69]

Owing to poor data transmission, outliers may exist that are recognizable in the graphics. A median criterion (Sy, 1985) can be applied to remove them. It starts with P_CTD at the beginning of a profile. The centre value of a prescribed number of records (the window length) is identified as an outlier if it differs by more than a prescribed tolerance from the median. Records with outliers are deleted. Since a time basis has been created, it is not necessary to interpolate the deleted values. The procedure is repeated for each record over the whole profile, and then for T and S. [Pg.69]

The median criterion is able to identify successively occurring outliers and outliers that are not equally distributed, provided there are not too many in one window length. The window length for the median criterion should correspond to a vertical interval of about 1 dbar. Conservative tolerances on these scales are 0.5 dbar, 0.5 K and 0.5 in salinity. If necessary, the procedure [Pg.69]
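As a rough illustration of the median criterion described above, the sketch below flags and removes records whose centre value deviates from the window median by more than a tolerance. The window length and the 0.5 default tolerance follow the values quoted in the text; the function name and implementation details are illustrative, not the cited algorithm.

```python
import numpy as np

def median_despike(values, window=7, tolerance=0.5):
    """Return a boolean mask of records to keep: the centre value of each
    sliding window is flagged as an outlier if it differs from the window
    median by more than `tolerance` (sketch of the median criterion above)."""
    values = np.asarray(values, dtype=float)
    keep = np.ones(len(values), dtype=bool)
    half = window // 2
    for i in range(half, len(values) - half):
        med = np.median(values[i - half:i + half + 1])
        if abs(values[i] - med) > tolerance:
            keep[i] = False          # record contains an outlier -> delete it
    return keep

# Example use: apply first to pressure, then to temperature and salinity,
# keeping only records that pass all three checks.
# mask = (median_despike(P, tolerance=0.5)
#         & median_despike(T, tolerance=0.5)
#         & median_despike(S, tolerance=0.5))
```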

If the pressure sensor of the CTD responds to fast changes in temperature, it is corrected according to the results of its dynamic calibration (Section 3.6.3). The correction starts with the pre-cast deck values, thus assuming that the sensor is close to thermal equilibrium. [Pg.69]

Once the digitized data are acquired in the computer, a number of mathematical manipulations can be carried out to minimize or eliminate artifacts and to enhance resolution or signal/noise ratio. [Pg.68]

The number of methods of data analysis is so large, and the choices are sometimes so confusing, that a roadmap is needed for orientation (Fig. 10.5). At the top of this map are the model-based techniques that require a specific and exact correlation between physical parameters and concentration, and therefore calibration. At the bottom are the techniques that are model-free. They search for similarities in the response [Pg.319]

There are some conventions and terminology that are common to all statistical data evaluation approaches. As the first step the input matrix X is organized such that the independent variables (compositions) are arranged in rows from 1 to t and the preprocessed signals from sensors 1 to m are entered in the columns. [Pg.320]

The input matrix can be transformed in such a way that unique vectors can be defined independently of each other. Such vectors then describe the feature space. [Pg.320]

The process of converting experimental XRF data into analytically useful information can be divided into two steps: first, the evaluation of the spectral data and, second, the conversion of the net X-ray intensities into concentration data, i.e. quantification. In the latter step especially, the appropriate correction of matrix effects is a critical issue. [Pg.72]

Matrix absorption: All atoms of the specimen matrix will absorb photons from the primary source. The intensity/wavelength distribution of the photons available for the excitation of a given element may thus be modified by other matrix elements. On the other hand, other elements in the matrix can absorb the characteristic X-ray fluorescence of the element measured as the characteristic radiation passes out of the specimen. Matrix absorption will attenuate the XRF intensity of the given elements, [Pg.72]

Enhancement by multiple excitation: In the case where the energy of a fluorescent photon (e.g. Ni Kα at 7.47 keV) is immediately above the absorption edge of a second element (e.g. the K edge of Fe at 7.11 keV), the fluorescence intensity of the second element (here the Fe Kα and Fe Kβ radiation) will be enhanced as a result of the preferential excitation (here [Pg.72]

Matrix effects make quantitative analysis with XRF quite complicated. Many methodological procedures have been developed for calibration of the measuring arrangement, which may be performed by two major approaches: empirical and fundamental parameters (FP) calibration. [Pg.73]

The empirical calibration is based on the analysis of standards with known elemental compositions. To produce a reliable calibration model, the standards must be representative of the matrix and target element concentration ranges of the sample. Maintaining the same sample morphology (particle size distribution, heterogeneity and surface condition) and source/sample geometry for both standard and sample measurements is essential in empirical calibrations. [Pg.73]

Essentially all methods of mathematical statistics can be used for processing the data from the zirconia sensor tests. In data processing, the average value of a measurement is used to estimate the true value of the variable. However, it is the most effective estimate only for distribution laws that are close to the normal one. The following relation is used for dispersion evaluation: [Pg.260]

Grouping of the experimental data when building histograms can bias the calculated characteristics. The Sheppard corrections are used to remove this bias [10]. [Pg.260]

The method of least squares is used when processing data from functional experiments. In the case of one independent variable (a one-factor experiment), the type [Pg.260]

The regression coefficients k can be found from the distribution of the measured parameters. To find the regression coefficients, it is necessary to formulate the task: select the function that minimizes the total squared deviations of the measured values from the values of the selected function. The solution of the task consists in solving the following system of equations, obtained by setting the partial derivatives of the total squared-deviation function to zero: [Pg.261]
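As a hedged illustration of this least-squares procedure (not the book's numbered equations), the sketch below fits a polynomial in one factor by minimising the sum of squared deviations; the data and function name are placeholders.

```python
import numpy as np

def fit_least_squares(x, y, degree=2):
    """Polynomial least-squares fit: the coefficients k minimise the sum of
    squared deviations, which is equivalent to solving the normal equations
    (A^T A) k = A^T y obtained by setting the partial derivatives to zero."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Design matrix: columns 1, x, x^2, ... (one column per coefficient sought)
    A = np.vander(x, degree + 1, increasing=True)
    k, *_ = np.linalg.lstsq(A, y, rcond=None)
    return k

# Example one-factor experiment: y measured at several values of x
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 7.2, 12.8, 21.1]
print(fit_least_squares(x, y, degree=2))
```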

If the number of measurements n is less than the number of coefficients sought, r, it is impossible to find a solution of (7.8); at n = r, the system (7.8) has exactly one solution. However, for n > r there is a possibility of forming several such systems and consequently obtaining a set of candidate functions. In the latter case, the obtained data should be averaged to increase the accuracy of the formalization of the experimental work function. [Pg.261]

A generalised structure of an electronic nose is shown in Fig. 15.9. The sensor array may consist of QMB, conducting polymer, MOS or MS-based sensors. The data generated by each sensor are processed by a pattern-recognition algorithm and the results are then analysed. The ability to characterise complex mixtures without the need to identify and quantify individual components is one of the main advantages of such an approach. The pattern-recognition methods may be divided into non-supervised (e.g. principal component analysis, PCA) and supervised (artificial neural network, ANN) methods; a combination of both can also be used. [Pg.330]

PCA reduces multidimensional, partly correlated data to two or three dimensions. The projections are chosen so that the maximum amount of information is retained in the smallest number of dimensions. This technique allows the similarities and differences between objects and samples to be assessed more readily [105]. [Pg.330]
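A minimal sketch of such a projection using scikit-learn (which is not mentioned in the source); the data here are random placeholders standing in for a sensor-array response matrix.

```python
import numpy as np
from sklearn.decomposition import PCA

# 30 samples x 8 sensors of placeholder data, projected onto the two
# directions that retain the maximum amount of variance.
X = np.random.default_rng(0).normal(size=(30, 8))
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print(scores.shape, pca.explained_variance_ratio_)
```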

A neural network is a program that processes data in a manner analogous to (a part of) the nervous system. Neural networks are especially useful for classification and function-approximation problems that are tolerant of some imprecision and for which plenty of training data are available, but to which hard and fast rules (such as laws of nature) cannot easily be applied. [Pg.330]

Some considerations about the nature of the information in a movie, from the data-processing point of view, are important. Conceptually, a movie is a set of frames recorded sequentially. Formally, it is a dataset that presents some structure owing to the sequential nature of its generation process. This means that the different frames are ordered in time, and each frame consists of an image with X and Y axes. Such datasets, in which there is a special ordering due to external reasons (often time, as in Statistical Process Control), are considered multi-way datasets. In our case, we can assume that we are dealing with a three-way dataset, because it presents a structure of the form (time × X pixel × Y pixel). In this way, the (I, J, K) axes provide different [Pg.56]

Multi-way Principal Component Analysis (MPCA) is strongly related to the standard data analysis method Principal Component Analysis (PCA). This bilinear modelling technique, based on the eigenvector decomposition of the covariance matrix, does not consider the way in which the data have been acquired. This means that external information, such as the ordering in time of the data acquisition, is not taken into account in the modelling process. Although this is unnecessary in a wide range of cases, there are some for which it becomes an evident loss of information. Multi-way datasets are among these. [Pg.57]

Multi-way PCA is statistically and algorithmically consistent with PCA (Wise et al. 1999; Westerhuis et al. 1999). Thus, it decomposes the initial matrix X into the summation of products of score vectors (t) and loading matrices (P), plus a residual matrix (E). These residuals are minimized by least squares and are considered to be associated with the non-deterministic part of the information. The systematic component of the information, expressed by the product (t × P), represents [Pg.57]
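A minimal sketch of one common way to carry out MPCA on a three-way array such as the movie dataset above: unfold each frame into a row vector and apply ordinary PCA, giving score vectors t, loading matrices P reshaped back to image form, and a least-squares residual E. The array sizes and the unfolding choice are illustrative, not the cited algorithm's exact form.

```python
import numpy as np

def mpca_unfold(X3, n_components=2):
    """Unfold an (I, J, K) array, e.g. (frames, x-pixels, y-pixels), apply PCA
    via SVD of the mean-centred unfolded matrix, and return scores T, loading
    images P and residuals E (X ≈ sum of t x P plus E)."""
    I, J, K = X3.shape
    Xc = X3.reshape(I, J * K)
    Xc = Xc - Xc.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    T = U[:, :n_components] * s[:n_components]          # one score per frame
    P = Vt[:n_components].reshape(n_components, J, K)    # loadings as images
    E = Xc - T @ Vt[:n_components]                       # least-squares residual
    return T, P, E

T, P, E = mpca_unfold(np.random.default_rng(1).normal(size=(10, 8, 8)))
print(T.shape, P.shape, E.shape)
```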

Linear transformation of the original variables can lead to suitable representations of the original multivariate data. As shown above, the MPCA method makes this transformation by pointing towards directions of maximum variance. In Independent Component Analysis (ICA), the goal is to find components (or directions) that are as independent as possible. This linear decomposition of a random vector (multivariate data) x follows the expression [Pg.58]

Entropy can be described as the amount of information provided by a random variable. In short, the more random (unpredictable, unstructured) a variable is, the larger its entropy. An important consequence of this concept is that, among random variable distributions of equal variance, the Gaussian is the one that achieves the highest entropy value. It is therefore possible to identify a decrease in entropy with a decrease in gaussianity. Finally, to obtain a positive measure of non-gaussianity, the concept of negentropy (J) is defined. [Pg.58]
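As an illustration (not taken from the source), FastICA in scikit-learn recovers independent components by maximising an approximation of negentropy; the mixing matrix and non-Gaussian sources below are arbitrary placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Two strongly non-Gaussian sources (binary and Laplacian), then mix them linearly.
s = np.c_[np.sign(rng.normal(size=1000)), rng.laplace(size=1000)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])     # hypothetical mixing matrix
x = s @ A.T                                # observed multivariate data x = A s

ica = FastICA(n_components=2, random_state=0)
s_est = ica.fit_transform(x)               # estimated independent components
print(s_est.shape)
```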

This section describes the various methodologies available for processing the as-recorded SIMS data. Data processing may be required to  [Pg.248]

Understand the origin of the recorded secondary ion signal or signals [Pg.248]

Reference the collected secondary ion signal to some other signal [Pg.248]

Convert the sputter time to sputter depth and/or convert the secondary ion intensity to concentration [Pg.248]

Firstly, understanding the origin of a signal recorded at some m/q ratio may entail  [Pg.248]

The availability of hyperspectral and multispectral imaging systems with large arrays has brought about new demands in terms of data processing capacity and the mathematics required to extract the important content from these large data sets. As spectral information in images greatly increases the amount of data to be processed, it is absolutely necessary to reduce the dimensionality of the data [66]. [Pg.290]

the sampling rate will permit a discrimination of 100 theoretical plates in 4983, equivalent to 2.0%, and this precision of measurement, due to the sample acquisition rate being 5 samples/sec, cannot be improved. It can be shown in a similar way that, irrespective of the control over chromatographic conditions, a precision of 1% in column efficiency cannot be realized unless the data acquisition rate is greater than 10 samples/sec. [Pg.251]

A large number of mass spectra must be collected during IMS. The higher the MS imaging resolution, the more MS data are required. Up to a few gigabytes of data, which require complex visualization software to process, are usually acquired in an IMS experiment. In this section, a few software tools for processing IMS data are summarized. A detailed comparison of these IMS software packages can be found elsewhere [69]. [Pg.263]

Reagent containers should preferably have a cap in order to avoid contamination of the laboratory environment by volatile chemical compounds and/or contamination of the solutions to be pumped. An orifice for maintaining the container inner pressure during solution pumping is then required. [Pg.231]

The containers for waste collection and eventual treatment and disposal should be similarly made and situated close to and above the height of the manifold. For waste needing immediate treatment, a specific solution (e.g., with a suitable precipitating agent) should be added to the container for waste disposal prior to system operation and periodically replaced. [Pg.231]

The flow system is an effective means of precise sample presentation to the detector. The detection unit contains a flow cell with a fixed location and orientation; the handled sample is therefore always monitored with a fixed geometry. [Pg.231]

Relevant parameters such as the characteristics of the flow cell, type of detector and data acquisition and treatment are briefly presented here. For a further discussion of developments in instrumentation related to the detection unit, overview articles [81—83] are recommended. [Pg.231]

A U-shaped glass tube was used in the first segmented flow analysers with spectrophotometric detection, but it is now rarely used because of the pronounced radiation losses at the curved portions and in the cylindrical walls of the tube. A single tube axially traversed by the incident radiation beam can be used instead of a typical flow cell in situations requiring a short optical path, as originally demonstrated in the spectrophotometric determination of the major constituents of fertilisers in a flow injection system [84]; this approach avoided the need for manual sample dilutions. [Pg.231]

Application of scientific methods to archaeometry and the conservation of cultural heritage is carefully carried out to ensure that the methods chosen are in line with the purposes of the research. According to Lahanier [5], the methods currently available can be classified into three categories: [Pg.12]

These techniques are usually classified according to the type of radiation or the spectral region in which data are provided, namely, electromagnetic radiation (X-ray, UV, visible, IR, radio, etc.), acoustic radiation, etc. [Pg.13]

Magnetic resonance imaging (MRI) has been applied to the study of the distribution of fluid components (i.e., water or a polymer used as consolidant) in a porous material such as stone or waterlogged wood by a direct visualization of the water or fluid confined in the opaque porous medium [13]. [Pg.15]

Laser-induced ultrasonic imaging is used quite widely to study the interiors of opaque objects such as sculptures and paintings, exploiting laser-induced stress waves [8]. [Pg.15]

In scanning laser Doppler vibrometry (SLDV), surfaces are slightly vibrated by mechanical activation while the vibrometer scans the object, producing 2- or 3D maps of velocity, amplitude, and phase, which allow the detection and mapping of structural defects [15]. [Pg.15]

Metabolynx is a spectral and chromatographic software program associated with Masslynx and developed by Micromass/Waters [29]. Most mass spectrometer manufacturers have similar software. It was specifically designed for [Pg.165]

For implementations that rely upon d.c. signal detection, such as the LCTF NIR imaging system, subtraction of the so-called dark response is critical. There are two components of this response, one that is wavelength independent, the detector response when no photons are impinging on the array, a characteristic of the [Pg.31]

Robust multivariate classification can require large numbers of samples in order to characterize the covariance of the component distributions. One of the major advantages of spectral imaging over single-point spectroscopy is the ready [Pg.32]

The values of dn/dc and n₀ allow the calculation of the optical constant, K, from Eq. (4). A graph of Kc/ΔR(0) against the lignin concentration, c, gives the weight-average molecular weight from the intercept and the second virial coefficient from the slope. [Pg.504]

As stated in Section 8.2.1, only limited results concerning Mw determinations of lignins by LALLS photometry have been published. This radically new and sophisticated technique constitutes an improvement over other methods, such as sedimentation equilibrium and size exclusion chromatography, for the evaluation of Mw. Its low-angle capability, combined with high sensitivity, small sample size, and simplified clarification procedures, allows simple and accurate determinations to be made even on lignin samples that have been considered very troublesome. Moreover, as stated in Section 8.2.2, additional difficulties arising from inherent lignin properties, i.e., absorbance, fluorescence and anisotropy, may also be overcome. [Pg.504]

FIGURE 7.8 A set of observations for which the errors are randomly distributed yields a Gaussian curve. The width of that curve is a measure of the dispersion, or precision of the measurements. The true value, however, may be some distance away from the peak of the Gaussian, and this distance defines the accuracy of the experiments. [Pg.163]

Normally this ratio is calculated in shells of resolution. The average intensity declines with increasing resolution, and Rsym increases because weak reflections tend not to be measured as accurately as strong reflections. Another valuable criterion is the estimated error of the reflections, σ, again computed as a function of resolution, and this generally [Pg.163]

The estimated error σ is best determined by averaging redundant and symmetry-related reflections, and is given by the formula [Pg.164]

With a time increment of 100 ms or 0.1 s, a peak maximum occurring at the 1652nd ADC reading would have a retention time of [Pg.409]
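(Assuming the retention time is simply the reading number multiplied by the time increment, this would be 1652 × 0.1 s = 165.2 s, i.e. about 2.75 min.)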

When several components are ionized simultaneously, this leads to overlapping spectra. In order to obtain spectra of single substances, complex mixtures must be separated before they can be analyzed in the MS. The combination of GC and MS has the advantage that the components are already in vapor form and enter the MS separately. A capillary column with an inside diameter of 0.32 mm or less can be directly connected to the MS via the interface, which prevents the pressure due to the carrier gas in the ionization chamber from becoming too high. Megabore capillaries (I.D. 0.53 mm) and packed columns require a special interface. [Pg.26]

A direct sample introduction system is suitable for pure liquids and solids. Here, the sample is placed on the point of a movable rod which can be introduced into the ion source through a special valve. The sample is then vaporized and broken down into ionized fragments. [Pg.26]

The total ion current (TIC) is the current due to the total number of ions passing through the analyzer (Fig. 4-4). It is necessary to measure this because part of [Pg.26]

To identify the compound, the signal is split up to show the fractions of the individual masses, i.e. a mass spectrum (Fig. 4-5). [Pg.27]

In drug analysis, it is often necessary to analyze complex mixtures. However, often only certain components are of interest. For these components, selected masses (molecule-ions and characteristic fragments) are recorded during the complete GC analysis as one mass after another is selected in a cyclic manner. Only in the case of the GC peak that contains the component of interest (the target compound) is the signal of all the selected ions in the correct intensity ratio observed. The gas-chromatographic retention time can also be used as an additional criterion of identity. [Pg.28]

The axial, radial, and tangential velocity components, (u_i, v_i, w_i), were decomposed into the mean components, (ū, v̄, w̄), and the turbulence components, (u'_i, v'_i, w'_i), as follows: [Pg.259]

The rms values of the three turbulence components were calculated from [Pg.259]
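A standard way to write this decomposition and the corresponding rms values is shown below; this is the conventional Reynolds-decomposition form, not a reproduction of the source's exact formulas.

```latex
u_i = \bar{u} + u_i', \qquad v_i = \bar{v} + v_i', \qquad w_i = \bar{w} + w_i'
\\[6pt]
u_{\mathrm{rms}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(u_i'\right)^2},\qquad
v_{\mathrm{rms}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(v_i'\right)^2},\qquad
w_{\mathrm{rms}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(w_i'\right)^2}
```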

The results for the mixing time T_m are presented against the radial sensor position R_s in Fig. 12. The bath depth H_L is 100 × 10⁻³ m, and the gas flow rate is 120 × 10⁻⁶ m³/s. The electric conductivity sensor is placed at z = 50 × 10⁻³ m. This sensor position is designated by H_s in Fig. 7.2. [Pg.260]

In the presence of the swirl motion, the mixing time is approximately 15 s, while it is approximately 100 s when the swirl motion is stopped by making use of [Pg.260]

2 Mixing time values under three boundary conditions imposed on the bath surface [Pg.260]


These lab tests were done to gain a specific database for such a brittle material, compared to the normal steels used for the manufacture of pressure equipment. In any case, the application of AE was only possible due to the rapid development of data processing and the new state-of-the-art equipment in which this technique is built in. (3)... [Pg.32]

Expert systems. In situations where the statistical classifiers cannot be used, because of the complexity or inhomogeneity of the data, rule-based expert systems can sometimes be a solution. The complex images can be more readily described by rules than represented as simple feature vectors. Rules can be devised which cope with inhomogeneous data by, for example, triggering some specialised data-processing algorithms. [Pg.100]

Recently, a series of computerized defectoscopes, AUGUR, for image reconstruction of the inspected area has been developed in Russia within the framework of the RF Minatom program. One of them is a device with coherent data processing for expert inspection, named AUGUR 4.2. [Pg.194]

The AUGUR software is used to obtain two- and three-dimensional images of defects using various coherent data processing methods, to determine the sizes of defects in different sections, and to execute service and report-preparation operations. [Pg.195]

The result of reconstruction is a 3D matrix of output data whose elements are the values of the local density inside elementary volumes. The 3D matrix of output data can be obtained in various ways, determined by the structure of the tomographic system and the chosen way of processing the collected data. [Pg.216]

The first of them determines the LMA quantitatively and the second the LF qualitatively. Of course, the sensitivity limit of the LF channel depends closely on the rope type and on its state, because the LF are detected as signal pulses exceeding a noise level. This level is lower for new ropes (especially for locked-coil ropes) than for used multi-strand ropes (especially for corroded ropes). Even if a skilled and experienced operator interprets a record, possible errors cannot be excluded completely because of the subjectivity of the evaluation. Moreover, the interpretation takes a lot of time. Some flaw detector producers understand the problem and intend to develop new instruments using computer data processing [6]. [Pg.335]

The report also contains other information required by the standard [7]: date, rope identification, rope diameter and construction, length of rope examined, inspection speed, etc. Thus, the user obtains the document without the very long and subjective data processing by a skilled and experienced operator. [Pg.336]

The detection of the profile edges gives the projected wall thickness in pixels of the image data. The next step of data processing is the compensation of the magnification factor in the tangential projection method used. [Pg.520]

Speckle shearing interferometry, or shearography, is a full-field optical inspection technique that may be used for the nondestructive detection of surface and, sometimes, subsurface defects. Whilst being more sensitive in the detection of surface defects, it may also be considered for pipe inspection and the monitoring of internal corrosion. In contrast, laser ultrasound and other forms of ultrasound are point-by-point measurement techniques, so that scanning facilities and significant data processing are required before information on local defects is extracted from any examination of extensive areas [1-3]. [Pg.678]

The data are transmitted from the front-end processor to the computer in digital form over an Ethernet link. The data consist of ultrasonic data, either raw RF A-scans or data processed by the digital signal processor in the PSP-4. In addition to the ultrasonic data, scanner coordinates are transmitted over the Ethernet link. [Pg.784]

Hoch J C and Stern A S 1996 NMR Data Processing (New York: Wiley-Liss)... [Pg.1465]

Figure B1.17.10. Principles of 3D reconstruction methods. (a) Principle of single-axis tomography: a particle is projected from different angles to record corresponding images (left panel); this is most easily realized in the case of a helical complex (right panel). (b) Principle of data processing and data merging to obtain a complete 3D structure from a set of projections.
In 1986, David Weininger created the SMILES (Simplified Molecular Input Line Entry System) notation at the US Environmental Research Laboratory, USEPA, Duluth, MN, for chemical data processing. The chemical structure information is highly compressed and simplified in this notation. The flexible, easy-to-learn language describes chemical structures as a line notation [20, 21]. The SMILES language has found widespread distribution as a universal chemical nomenclature... [Pg.26]
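As a brief illustration (the source does not mention any particular toolkit), an open-source package such as RDKit can parse a SMILES string into a molecule object and regenerate a canonical SMILES:

```python
from rdkit import Chem  # RDKit is assumed here purely as an example toolkit

# 'c1ccccc1O' is phenol written in the SMILES line notation.
mol = Chem.MolFromSmiles("c1ccccc1O")
print(mol.GetNumAtoms())      # number of heavy atoms parsed from the string
print(Chem.MolToSmiles(mol))  # canonical SMILES regenerated by the toolkit
```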

Almost all chemical information systems work with their own special type of connection table. They often use various formats, distinguishing between internal and external connection tables. In most cases, the internal connection tables are redundant, thus allowing maximum flexibility and increasing the speed of data processing. The external connection tables are usually non-redundant in order to save disk space. Although a connection table can be represented in many different ways, the core remains the same: the list of atoms and the list of bonds. Thus, the conversion of one connection table format into another is usually a fairly straightforward task. [Pg.42]
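A minimal sketch of what a redundant, internal connection table might look like as a data structure; the molecule, field names and extra neighbour list are illustrative only, not any system's actual format.

```python
# Core of any connection table: a list of atoms and a list of bonds.
ethanol_fragment = {
    "atoms": [              # (atom index, element symbol)
        (1, "C"), (2, "C"), (3, "O"),
    ],
    "bonds": [              # (atom index, atom index, bond order)
        (1, 2, 1), (2, 3, 1),
    ],
    # Redundant extra information, e.g. a neighbour list per atom, speeds up
    # processing (internal table) at the cost of disk space (external table).
    "neighbours": {1: [2], 2: [1, 3], 3: [2]},
}
print(len(ethanol_fragment["atoms"]), "atoms,",
      len(ethanol_fragment["bonds"]), "bonds")
```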

The evolutionary process of a genetic algorithm is accomplished by genetic operators, which translate the evolutionary concepts of selection, recombination or crossover, and mutation into data processing to solve an optimization problem dynamically. Possible solutions to the problem are coded as so-called artificial chromosomes, which are changed and adapted throughout the optimization process until an optimum solution is obtained. [Pg.467]
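A minimal sketch of these genetic operators in code; the fitness function, chromosome encoding and all parameters are illustrative placeholders, not an algorithm from the source.

```python
import random

def fitness(chrom):
    # Toy objective: maximise fitness by driving every gene towards 0.5.
    return -sum((g - 0.5) ** 2 for g in chrom)

def evolve(pop_size=20, genes=5, generations=50, mutation_rate=0.1):
    # Artificial chromosomes: lists of real-valued genes.
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:        # mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)                       # best solution found

print(evolve())
```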

Data logging. Implies data collection with storage for later data processing. [Pg.431]

Data processing. Once information is obtained with an appropriate data system, the information must be interpreted appropriately for the end use. Data processing involves the steps leading to this end use; data processing does not necessarily imply application of modern computer techniques. [Pg.431]

All of the data processing is achieved by digitization of the raw data, like that in Figure 8.37(a), followed by treatment by computer. [Pg.331]

β-ray attenuation and converted to an electronic signal for transmission and data processing. [Pg.384]

A mass spectrometer consists of four basic parts: a sample inlet system, an ion source, a means of separating ions according to their mass-to-charge ratios, i.e., a mass analyzer, and an ion detection system. Additionally, modern instruments are usually supplied with a data system for instrument control, data acquisition, and data processing. Only a limited number of combinations of these four parts are compatible and thus available commercially (Table 1). [Pg.539]

Copper Development Association, P.O. Box 1840, Greenwich, Conn. 06836. Standards for wrought and cast copper and copper alloy products; a standards handbook is published with tolerances, alloy data, terminology, engineering data, processing characteristics, sources and specifications, and cross-indexes for six coppers and 87 copper-based alloys that are recognized as standards. [Pg.25]

Fig. 13. Bubble column flow characteristics: (a) data processing system for the split-film probe used to determine flow characteristics, where ADC = automated data center; (b) schematic representation of primary flow patterns.
To allow flexibility, the database manager must also perform point addition or deletion. However, the ability to create a point type or to add or delete attributes of a point type is not normally required because, unlike other data processing systems, a process control system normally involves a fixed number of point types and related attributes. For example, analog and binary input and output types are required for process I/O points. Related attributes for these point types include tag names, values, and hardware addresses. Different system manufacturers may define different point types using different data structures. We will discuss other commonly used point types and attributes as they appear. [Pg.773]
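A minimal sketch of fixed point types carrying the attributes named above (tag name, value, hardware address); the class names, attributes and addresses are illustrative, not any vendor's actual data structures.

```python
from dataclasses import dataclass

@dataclass
class AnalogInputPoint:
    tag_name: str
    value: float
    hardware_address: str

@dataclass
class BinaryOutputPoint:
    tag_name: str
    state: bool
    hardware_address: str

# The database manager can add or delete points of these fixed types,
# but does not normally create new point types at run time.
points = {
    "TI-101": AnalogInputPoint("TI-101", 78.4, "AI.03.07"),
    "XV-205": BinaryOutputPoint("XV-205", True, "DO.01.12"),
}
print(points["TI-101"].value)
```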





SEARCH



AR(2) Process Data

Accelerator mass spectrometry data processing

Acquisition and Use of Process Design Data

Adsorption isotherms data processing

Automatic processing of standard data

Automation data processing

Based Data Processing

Biomedical Data Processing

Capital-cost data for processing plants

Cellulose processing data

Characteristics of Data Processing for Industrial Process Modeling

Chemical processing interpreting data

Chromatography data processing

Coal liquefaction process, data

Coherence during data processing

Column chromatography data processing

Common Data Processing System

Comparison of Process Technology Data for Bioreactors

Complex systems data processing

Computer, control data processing

Computers for data processing

Computers, mass spectral data processing

Core data management processes

Crystallography data processing

Cyclic voltammetry data processing

DATA PROCESSING IN 2D NMR

Data Analysis and Signal Processing

Data Process and Analysis

Data Processing Architecture for Target Tracking

Data Processing Issues

Data Processing and Assessment

Data Processing and Bioinformatics

Data Processing and Reporting

Data Processing in Bottom-Up Hydrogen Exchange Mass Spectrometry

Data The Prozac Approval Process

Data acquisition and processing

Data acquisition process

Data acquisition, transmission and processing. Survey networks

Data analysis continuous polymer process

Data collection and processing

Data collection chemical processing industry

Data conditioning process

Data extension different process conditions

Data fitting process

Data management process

Data mining process

Data mining process elements

Data mining process illustrative example

Data normalization process

Data pre-processing

Data process status

Data processing algorithms

Data processing and information management models

Data processing baseline correction

Data processing crystal lattice determination

Data processing detectors

Data processing for

Data processing geometry

Data processing methods

Data processing peak detection

Data processing phase correction

Data processing procedure

Data processing service center

Data processing software

Data processing software Biomap

Data processing software baseline correction

Data processing software implementation

Data processing spectra)

Data processing steps

Data processing system

Data processing systems 738 INDEX

Data processing techniques

Data processing techniques (digitized infrared

Data processing theory, parameters

Data processing, levels

Data processing/reduction

Data storage and processing unit

Data tape, processing

Data, acquisition processing

Data, format processing

Data-generation process

Data-processing parameters

Data-processing problems

Data-processing techniques microstructure studies

Decision-making process data sources

Depth resolution data processing

Detection, sensitivity and data processing

Developing data processing procedure

Dielectric relaxation data processing

Digital data processing

Discussion of Data for Specific Processes and Species

Drying process data examples

Dynamic data post processing

Electron data processing

Electronic data processing

Electronic nose data processing

Emission and consumption data from the continuous PA6 production process

Emission and consumption data from the textile yam process

Emission and consumption data of PET processing techniques

Enable processes data collection management

Entering the process data

Environmental data-generation process

Environmental processes, time series data

Epoxy processing data

Errors from data processing

Errors in data processing

Euclidean distance , data processing

Exact mass data processing

Example of Data Collection, Evaluation, and Processing

Experimental data processing, traditional

Experimental data processing, traditional approaches

Experiments and Data Processing

FllnS Data Processing and Verification

Fluid catalytic cracking data processing

Fourier transform data-processing techniques

Fourier transform infrared data processing

Fourier-transform infrared spectroscopy data-processing techniques

Frequency Domain Processing of NMR Data

Hierarchical classification, process data

High-performance liquid data processing

High-throughput ADME, automated data processing

How to Process ID and 2D NMR Data

Immunoassay data processing

Improved IE Accuracy from Data Post-Processing

Input analysis, process data

Input analysis, process data definition

Input analysis, process data example

Input analysis, process data filter

Input analysis, process data loadings

Input analysis, process data multivariate methods

Input analysis, process data steps

Input analysis, process data univariate methods

Input-output analysis, process data

Input-output analysis, process data regression

Instrument for Automatic Surface Tracking and Data Processing

Integrators data processing

Ion Detectors and Data Processing in MALDI-TOF Analyzers

Liquid crystal polymers processing data

Long data post processing

MA(3) Process Data

Magnetic data processing systems

Mass spectrometry data processing

Mass spectrometry imaging data processing

Maximum-likelihood method processing data

Mixing process data flow diagram

Multiple receivers data processing

Multiplexed data processing

Multivariate Data Processing

NMR Data Processing—Overview

National emissions inventory process data

Nuclear magnetic resonance data processing

Optical Data-Processing Devices

PIV Data Processing

PRELIMINARY DATA PROCESSING AND PHASE ANALYSIS

Panel level data processing

Peak area data processing

Performance data post processing

Peripherals data processing

Phenol-formaldehyde processing data

Physico-chemical data required for the design of near-critical fluid extraction process

Pilot plant data on processing

Plunger processing data

Polyacrylate processing data

Polyamide processing data

Polyethylene processing data

Polyethylene terephthalate processing data

Polymer data processing

Polymethyl methacrylate processing data

Polypropylene processing data

Polystyrene processing data

Polysulfone processing data

Polyvinyl chloride processing data

Practical Application of Investigation Data for Self-Ignition Processes

Prediction of Hepatic Efflux Process from In Vitro Data

Preliminary data processing

Principle of Atomic or Molecular Parameter-Data Processing Method

Process Behavior Charts (Technique attribute data

Process Behavior Charts (Technique variable data

Process Data Representation and Analysis

Process Equipment Data Bases

Process Equipment Data Sources

Process Modeling with Multiresponse Data

Process Modeling with Single-Response Data

Process automation data: Thermodynamic

Process control system Data Acquisition

Process control, automatic sampled data

Process data

Process data

Process data analysis

Process data interpretation

Process data sources

Process data, compression

Process data, qualitative/quantitative

Process design data

Process development compile data

Process development data

Process identification from plant data

Process real-time analytical data

Process trends data compression

Processing Spectroscopic Data

Processing and Analysis of ID NMR Data

Processing and Analysis of the NMR Data

Processing chromatographic data

Processing equilibrium sedimentation data

Processing experimental data

Processing of 2D NMR Data

Processing of mass spectral data

Processing of spectroscopic data

Processing the answers from raw to clean data

Processing, data errors

Protein data processing

Pulse Sequences and Data Processing

Quantitative data processing

Rate measurements experiments, data processing)

Raw data processing

Real-time data processing

Real-time optimization data processing

SVM Applied to Archeological Data Processing

Sedimentation data, processing velocity

Signal processing and data acquisition

Specific Data Processing

Stages of Data Processing

Static processing data from

Statistical process control data bounding

Statistical process control data collection

Styrene copolymers processing data

Support Vector Machine Data Processing Method for Problems of Small Sample Size

Tandem mass spectrometry data processing

Testing data from, processing

The Data Exchange Process

The Experimental Process and NMR Data of Total Synthesis

Time-domain spectroscopy data processing

Tools of the Trade VI. Ion Detection and Data Processing

Trace Element Science and Chemical Data Processing

Transient process data

Two dimensional NMR data processing

Two-Dimensional NMR Data-Processing Parameters

Understanding the FDA Data Collection Process

Vibrational spectroscopy data processing

Viruses data processing

X-ray data processing

Zircon data processing

© 2024 chempedia.info