Process data, compression

A special implementation of the CD-R disk is the Photo-CD by Kodak, which is a 5.25 in. WORM disk employing the dye-in-polymer principle for storage of up to 100 slides/pictures on a CD (after data compression) with the possibility of interactive picture processing. [Pg.140]

The historical data is sampled at user-specified intervals. A typical process plant contains a large number of data points, but it is not feasible to store data for all points at all times. The user determines if a data point should be included in the list of archive points. Most systems provide archive-point menu displays. The operators are able to add or delete data points to the archive point lists. The sampling periods are normally some multiples of their base scan frequencies. However, some systems allow historical data sampling at arbitrary intervals. This is necessary when intermediate virtual data points that do not have the scan frequency attribute are involved. The archive point lists are continuously scanned by the historical database software. On-line databases are polled for data. The times of data retrieval are recorded with the data obtained. To conserve storage space, different data compression techniques are employed by various manufacturers. [Pg.773]
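One widely used family of historian compression techniques is deadband ("boxcar") filtering, in which a sample is archived only when it deviates sufficiently from the last archived value. The sketch below is a minimal illustration of the idea only; the function name, the fixed deadband and the sample data are assumptions, not any vendor's actual algorithm.

```python
# Minimal sketch of deadband ("boxcar") compression, one common family of
# historian compression techniques. The function name, the fixed deadband
# and the sample data are illustrative assumptions.

def deadband_compress(samples, deadband):
    """Archive a sample only if it deviates from the last archived value
    by more than `deadband`; timestamps of kept samples are retained."""
    archived = []
    last_kept = None
    for timestamp, value in samples:
        if last_kept is None or abs(value - last_kept) > deadband:
            archived.append((timestamp, value))
            last_kept = value
    return archived

if __name__ == "__main__":
    raw = [(0, 50.0), (1, 50.02), (2, 50.01), (3, 50.6), (4, 50.61), (5, 49.8)]
    print(deadband_compress(raw, deadband=0.5))
    # -> [(0, 50.0), (3, 50.6), (5, 49.8)]
```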

IV. Compression of Process Data through Feature Extraction... [Pg.10]

The primary objective of any data compression technique is to transform the data to a form that requires the smallest possible amount of storage space, while retaining all the relevant information. The desired qualities of a technique for efficient storage and retrieval of chemical process data are as follows ... [Pg.215]

None of the practiced compression techniques satisfies all of these requirements. In addition, it should be remembered that compression of process data is not a task in isolation, but is intimately related to the other two subjects of this chapter: (1) description of process trends, and (2) recognition of temporal patterns in process trends. Consequently, we need to develop a common theoretical framework, which will provide a uniformly consistent basis for all three needs. This is the aim of the present chapter. [Pg.215]

The ideas presented in Section III are used to develop a concise and efficient methodology for the compression of process data, which is presented in Section IV. Of particular importance here is the conceptual foundation of the data compression algorithm: instead of seeking noninterpretable, numerical compaction of the data, it strives for an explicit retention of distinguished features in a signal. It is shown that this approach is both numerically efficient and amenable to explicit interpretations of historical process trends. [Pg.216]

Compression of process data through feature extraction requires... [Pg.251]

The speed with which the data need to be compressed depends on the stage of data acquisition at which compression is desired. In intelligent sensors it may be necessary to do some preliminary data compression as the data are collected. Often data are collected for several days or weeks without any compression, and then stored in the company data archives. These data may be retrieved at a later stage for studying various aspects of the process operation. [Pg.251]

Bakshi, B. R., and Stephanopoulos, G., Compression of chemical process data through functional approximation and feature extraction. AIChE J., accepted for publication (1995). [Pg.268]

The application of principal components regression (PCR) to multivariate calibration introduces a new element, viz. data compression through the construction of a small set of new orthogonal components or factors. Henceforth, we will mainly use the term factor rather than component in order to avoid confusion with the chemical components of a mixture. The factors play an intermediary role as regressors in the calibration process. In PCR the factors are obtained as the principal components (PCs) from a principal component analysis (PCA) of the predictor data, i.e. the calibration spectra S (n×p). In Chapters 17 and 31 we saw that any data matrix can be decomposed ("factored") into a product of (object) score vectors T (n×r) and (variable) loadings P (p×r). The number of columns in T and P is equal to the rank r of the matrix S, usually the smaller of n or p. It is customary and advisable to do this factoring on the data after column-centering. This allows one to write the mean-centered spectra S₀ as ... [Pg.358]
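The factoring described above can be sketched in a few lines of numpy: column-center the spectra matrix, decompose it, and keep only the first r score/loading pairs, which is where the data compression occurs. The synthetic matrix and the choice of r below are illustrative assumptions.

```python
# Minimal sketch of the factoring described above: column-center the spectra
# matrix S (n x p) and decompose it into scores T and loadings P; truncating
# to r factors gives the compression. Synthetic data and r are assumptions.
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(20, 50))          # n = 20 "spectra", p = 50 "wavelengths"

S0 = S - S.mean(axis=0)                # column-centering (mean-centered spectra)
U, s, Vt = np.linalg.svd(S0, full_matrices=False)

r = 3                                  # keep only the first r factors
T = U[:, :r] * s[:r]                   # score vectors, n x r
P = Vt[:r].T                           # loading vectors, p x r

S0_approx = T @ P.T                    # rank-r reconstruction of the centered data
print(np.linalg.norm(S0 - S0_approx))  # residual not explained by the r factors
```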

Vedam, H., Venkatasubramanian, V., and Bhalodia, M., A B-spline based method for data compression, process monitoring and diagnosis. Comput. Chem. Eng. 22(13), S827-S830 (1998). [Pg.102]

Principal components analysis (PCA) and projection to latent structures (PLS) were suggested to extract information from continuous process data (Kresta et al., 1991; MacGregor and Kourti, 1995; Kourti and MacGregor, 1994). The key point of these approaches is to utilize PCA or PLS to compress the data and extract the information by projecting them onto a low-dimensional subspace that summarizes all the important information. Then, further monitoring work can be conducted in the reduced subspace. Two comprehensive reviews of these methods have been published by Kourti and MacGregor (1995) and Martin et al. (1996). [Pg.238]
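A minimal sketch of what "monitoring in the reduced subspace" can look like is given below: loadings are fitted on normal operating data, new observations are projected onto them, and the part not captured by the subspace (the squared prediction error) is compared against an empirical limit. The data, the number of retained components and the control limit are all assumptions for illustration.

```python
# Minimal sketch of monitoring in the reduced PCA subspace: project new
# observations onto loadings P fitted on normal operating data and flag
# samples whose residual (squared prediction error) is large. Data, number
# of components and the limit are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X_normal = rng.normal(size=(200, 10))            # "normal operation" training data
mean = X_normal.mean(axis=0)
Xc = X_normal - mean

_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                                     # retain 2 principal components

def spe(x):
    """Squared prediction error of one observation w.r.t. the PCA subspace."""
    xc = x - mean
    residual = xc - P @ (P.T @ xc)               # part not captured by the subspace
    return float(residual @ residual)

x_new = rng.normal(size=10) + np.array([0, 0, 0, 5.0, 0, 0, 0, 0, 0, 0])  # disturbed variable
limit = np.percentile([spe(x) for x in X_normal], 99)  # crude empirical limit
print(spe(x_new), "alarm" if spe(x_new) > limit else "ok")
```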

The difference between PLS and PCR is the manner in which the x data are compressed. Unlike the PCR method, where x data compression is done solely on the basis of explained variance in X, followed by subsequent regression of the compressed variables (PCs) to y (a simple two-step process), PLS data compression is done such that the most variance in both x and y is explained. Because the compressed variables obtained in PLS are different from those obtained in PCA and PCR, they are not principal components (or PCs). Instead, they are often referred to as latent variables (or LVs). [Pg.385]
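The two compression routes can be contrasted directly in scikit-learn: PCR as a two-step pipeline of PCA followed by ordinary regression of the scores on y, and PLS as a single model whose latent variables are computed using y as well. The synthetic data and the number of components below are assumptions for illustration.

```python
# Minimal sketch contrasting the two compression routes described above:
# PCR compresses x using only the variance of X, then regresses the scores on y;
# PLS compresses x so that the latent variables also explain y.
# Synthetic data and the number of components are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 30))                  # e.g. 60 spectra, 30 wavelengths
y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=60)

pcr = make_pipeline(PCA(n_components=4), LinearRegression())  # two-step: PCA, then OLS
pls = PLSRegression(n_components=4)                           # compression uses y as well

pcr.fit(X, y)
pls.fit(X, y)
print("PCR R^2:", pcr.score(X, y))
print("PLS R^2:", pls.score(X, y))
```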

T. Fearn and A.M.C. Davies, A comparison of Fourier and wavelet transforms in the processing of near-infrared spectroscopic data part 1. Data compression, J. Near Infrared Spectrosc., 11, 3-15 (2003). [Pg.436]

Data compression is the process of reducing data into a representation that uses fewer variables, yet still expresses most of its information. There are many different types of data compression that are applied to a wide range of technical fields, but only those that are most relevant to process analytical applications are discussed here. [Pg.243]

The most commonly used PCA algorithm involves sequential determination of each principal component (or each matched pair of score and loading vectors) via an iterative least squares process, followed by subtraction of that component's contribution to the data. Each sequential PC is determined such that it explains the most remaining variance in the X-data. This process continues until the number of PCs (A) equals the number of original variables (M), at which time 100% of the variance in the data is explained. However, data compression does not really occur unless the user chooses a number of PCs that is much lower than the number of original variables (A ≪ M). This necessarily involves ignoring a small fraction of the variation in the original X-data, which is contained in the PCA model residual matrix E. [Pg.245]
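The sequential scheme described above is often implemented as a NIPALS-style iteration: each pass converges on one score/loading pair and then deflates the data before the next pair is extracted. The sketch below is only an illustration of that structure under assumed synthetic data; in practice a library routine (for example an SVD-based PCA) would normally be preferred.

```python
# Minimal sketch of sequential (NIPALS-style) PCA: each iteration finds one
# score/loading pair, then subtracts ("deflates") that component's contribution
# before the next one is computed. Data and the number of components are
# illustrative assumptions.
import numpy as np

def nipals_pca(X, n_components, tol=1e-10, max_iter=500):
    Xr = X - X.mean(axis=0)       # mean-centered data; Xr becomes the residual E
    scores, loadings = [], []
    for _ in range(n_components):
        t = Xr[:, np.argmax(Xr.var(axis=0))].copy()   # start from highest-variance column
        for _ in range(max_iter):
            p = Xr.T @ t / (t @ t)    # loading estimate for the current component
            p /= np.linalg.norm(p)
            t_new = Xr @ p            # score estimate for the current component
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        Xr = Xr - np.outer(t, p)      # deflation: remove this component's contribution
        scores.append(t)
        loadings.append(p)
    return np.column_stack(scores), np.column_stack(loadings), Xr  # T, P, residual E

X = np.random.default_rng(3).normal(size=(30, 8))
T, P, E = nipals_pca(X, n_components=2)
print(T.shape, P.shape, np.linalg.norm(E))   # (30, 2) (8, 2) unexplained variation
```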

There are some distinct advantages of the PLS regression method over the PCR method. Because Y-data are used in the data compression step, it is often possible to build PLS models that are simpler (i.e. require fewer compressed variables), yet just as effective as more complex PCR models built from the same calibration data. In the process analytical world, simpler models are more stable over time and easier to maintain. There is also a small advantage of PLS for qualitative interpretative purposes. Even though the latent variables in PLS are still abstract, and rarely express pure chemical or physical phenomena, they are at least more relevant to the problem than the PCs obtained from PCR. [Pg.263]

Fearn, T. and Davies, A.M.C., A Comparison of Fourier and Wavelet Transforms in the Processing of Near-Infrared Spectroscopic Data Part 1. Data Compression. J. Near Infrared Spectrosc. 2003, 11, 3-15. [Pg.326]

Often, relationships between measured process parameters and desired product attributes are not directly measurable, but must rather be inferred from measurements that are made. This is the case with several spectroscopic measurements including that of octane number or polymer viscosity by NIR. When this is the case, these latent properties can be related to the spectroscopic measurement by using chemometric tools such as PLS and PCA. The property of interest can be inferred through a defined mathematical relation.39 Latent variables allow a multidimensional data set to be reduced to a data set of fewer variables which describe the majority of the variance related to the property of interest. This data compression using the most relevant data also removes the irrelevant or noisy data from the model used to measure properties. Latent variables are used to extract features from data, and can result in better accuracy of measurement and a reduced measurement time.4... [Pg.438]

Typically, compression and filtering of spectroscopic data go hand-in-hand. Often, the process of compressing the data leads to a certain amount of noise filtering. [Pg.86]

Memory power consumption emerges as a key challenge in embedded systems design. There are two approaches that improve the power budget. Using the results of computation as soon as possible reduces the memory requirements [Ben 00]. Another approach is to apply data compression. Compressed memory content reduces the storage requirements. Breaking the memory content down into code and data, it is easier to apply compression techniques to the code component [Lek 00]. Since no modification of the code is required, it is possible to keep compression and decompression asymmetric. There is no need for the compression to be done in real time. However, if compression and decompression are extended to data, both transformations must be performed in real time. [Pg.186]
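The asymmetry can be illustrated with a small sketch, here using Python's zlib purely as a stand-in for an embedded codec: the read-only code image is compressed once, offline, with an expensive setting, because only decompression has to run on the device, whereas read/write data would need both directions at run time. The function names, file-like contents and compression level are assumptions, not any particular toolchain's behaviour.

```python
# Minimal sketch of asymmetric compression, with zlib standing in for an
# embedded codec: the code image is compressed once at build time (no real-time
# constraint), and only decompression is needed on the device before execution.
# Names, contents and the level are illustrative assumptions.
import zlib

def pack_code_image(code_image: bytes) -> bytes:
    """Offline (build-time) step: spend as much effort as desired on compression."""
    return zlib.compress(code_image, level=9)

def load_code_image(packed: bytes) -> bytes:
    """On-device step: only fast decompression is required before execution."""
    return zlib.decompress(packed)

if __name__ == "__main__":
    code = bytes(range(256)) * 64              # pretend firmware code section
    packed = pack_code_image(code)
    assert load_code_image(packed) == code
    print(len(code), "->", len(packed), "bytes stored")
```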

