Feature Extraction

We have demonstrated the methods in this section using synthetic data. The same processing is now applied to the bacteria/butter image data described previously. In this experiment, the bacteria are obvious contaminants of the butter, and as such, their location on the sample needs to be identified. Four images are shown in Fig. 4.20 illustrating the different group separation methods described earlier in [Pg.105]

As discussed earlier, second-derivative spectra can be used to minimize effects of baseline spectral fluctuations. Images prepared in the same manner as in Fig. 4.20 except using the negative of the second derivative of the butter-bacteria data are shown in Fig. 4.21. In this figure, the bacterial contamination region is correctly [Pg.106]
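The baseline-suppression property of the second derivative can be sketched with a small finite-difference example (the spectrum values are hypothetical, and a central difference stands in for the smoothed derivative, such as Savitzky-Golay, that real chemometric software would typically use):

```python
def neg_second_derivative(spectrum):
    """Negative central second difference of a 1-D spectrum.

    Any linear baseline a*i + b cancels exactly, because
    (a*(i-1) + b) - 2*(a*i + b) + (a*(i+1) + b) = 0.
    """
    return [-(spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1])
            for i in range(1, len(spectrum) - 1)]

peak = [1.0, 2.0, 4.0, 2.0, 1.0]                          # hypothetical band
tilted = [v + 0.5 * i + 3.0 for i, v in enumerate(peak)]  # same band on a sloped baseline
```

Here `neg_second_derivative(peak)` and `neg_second_derivative(tilted)` are identical, which is exactly why derivatized images are insensitive to baseline spectral fluctuations.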

We have only shown examples of selecting raw, derivatized, or transformed data parameters for defining different groups of a dataset. This selection process is [Pg.107]

Another type of classification is outlier selection, or contamination identification. As an example, in Fig. 4.23(b), the butter is the desired material and the bacteria the contamination. An arbitrary threshold for this image would be 0.02, in which all pixels at or above 0.02 are considered suspect and, hopefully, because this is a food product, decontamination procedures are pursued. In these two examples of classification, only arbitrary thresholds have been defined and, as such, confidence in these classifications is lacking. This confidence can be achieved through statistical methods. Although this chapter is not the appropriate place for an involved discussion of the application of statistics to data analysis, we will give one example often used in chemometric classification. [Pg.108]
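Such an arbitrary threshold amounts to a simple per-pixel test. A minimal sketch, assuming pixels above the threshold are the suspect ones (the image values below are hypothetical):

```python
def flag_suspect(image, threshold=0.02):
    """Flag pixels whose value meets or exceeds an arbitrary threshold."""
    return [[value >= threshold for value in row] for row in image]

image = [[0.001, 0.030],   # hypothetical pixel values
         [0.019, 0.002]]
```

Note that nothing here attaches any statistical confidence to the flags; that is precisely the shortcoming the text goes on to address.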

In the determination of bacterial contamination of butter, a comparison of all spectra in the dataset with the mean butter spectrum should provide generalized criteria for grouping each pixel into either the butter category or contaminant (bacteria) [Pg.108]
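One simple form such a comparison can take, sketched here with hypothetical two-channel spectra, is the Euclidean distance of each pixel spectrum from the mean butter spectrum, with pixels beyond a chosen cutoff assigned to the contaminant group:

```python
import math

def distance_to_mean(spectrum, mean_spectrum):
    """Euclidean distance between one pixel spectrum and the mean butter spectrum."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(spectrum, mean_spectrum)))

def classify(spectra, mean_spectrum, cutoff):
    """Label each pixel spectrum 'butter' or 'bacteria' by distance to the mean."""
    return ["bacteria" if distance_to_mean(s, mean_spectrum) > cutoff else "butter"
            for s in spectra]
```

In practice the cutoff would be set statistically (e.g., as a multiple of the within-butter scatter) rather than by eye, which is what turns this grouping into a classification with stated confidence.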


In the Neural Spectrum Classifier (NSC) a multi-layer perceptron (MLP) has been used for classifying spectra. Although the MLP can perform feature extraction, an optional preprocessor was included for this purpose (see Figure 1). [Pg.106]

During the inspection of an unknown object, its surface is scanned by the probe and ultrasonic spectra are acquired at many discrete points. Disbond detection is performed by the operator, who looks at some simple features of the acquired spectra, such as the center frequency and amplitude of the highest peak in a pre-selected frequency range. This means that the operator has to perform spectrum classification based on primitive features extracted by the instrument. [Pg.109]
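The primitive features described here can be sketched as a search for the strongest peak inside a pre-selected band (the frequency grid, amplitudes, and band limits below are hypothetical):

```python
def peak_features(freqs, amps, band):
    """Return (center frequency, amplitude) of the highest peak within `band`."""
    lo, hi = band
    in_band = [(f, a) for f, a in zip(freqs, amps) if lo <= f <= hi]
    return max(in_band, key=lambda fa: fa[1])
```

Each scanned point then contributes one (frequency, amplitude) pair, and it is these pairs, not the full spectra, that the operator classifies.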

The method has many applications, among them Denoising/Smoothing (DS), compression, and Feature Extraction (FE), which are powerful tools for data transformation. See the "Selected Reading" section at the end of this chapter for further details. [Pg.216]

IV. Compression of Process Data through Feature Extraction... [Pg.10]

Feature Extraction (Multi-Scale Representation of Trends) -> Inductive Learning [Pg.214]

Nevertheless, uniform discretization of time at all scales leads to representations that are highly suitable for feature extraction and pattern recognition. More on this subject appears in a subsequent paragraph. [Pg.236]
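A minimal multiscale representation with uniform dyadic discretization can be sketched with a Haar-style decomposition (one common choice; the chapter's own trend-extraction scheme may differ):

```python
def haar_step(signal):
    """One decomposition step: half-length trend (pairwise means)
    and detail (pairwise half-differences)."""
    trend = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return trend, detail

def multiscale_trends(signal, levels):
    """The signal's trend at successively coarser dyadic scales."""
    scales, current = [], signal
    for _ in range(levels):
        current, _detail = haar_step(current)
        scales.append(current)
    return scales
```

Because every scale uses the same uniform time grid (merely halved), features extracted at one scale line up directly with those at the next, which is what makes such representations convenient for pattern recognition.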

Compression of process data through feature extraction requires... [Pg.251]

For solving the pattern recognition problem encountered in the operation of chemical processes, the analysis of measured process data and the extraction of process trends at multiple scales constitute the feature extraction, whereas induction via decision trees is used for inductive... [Pg.257]

Bakshi, B. R., and Stephanopoulos, G., Compression of chemical process data through functional approximation and feature extraction. AIChE J., accepted for publication (1995). [Pg.268]

Using this notation, X corresponds to any time series of data, with x_t being a sampled value, and Z represents the processed forms of the data (i.e., a pattern). The z_i are the pattern features, ω_j is the appropriate label or interpretation, the feature extraction or data analysis transformation maps X into Z, and l is the mapping or interpretation that must be developed. [Pg.3]

The objective of data analysis (or feature extraction) is to transform numeric inputs in such a way as to reject irrelevant information that can confuse the information of interest and to accentuate information that supports the feature mapping. This usually is accomplished by some form of numeric-numeric transformation in which the numeric input data are transformed into a set of numeric features. The numeric-numeric transformation makes use of a process model to map between the input and the output. [Pg.3]
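As a toy sketch of such a numeric-numeric transformation (the particular features chosen here, level and spread, are illustrative and not the chapter's own):

```python
def extract_features(samples):
    """Map a raw numeric window to two numeric features: mean level and spread.

    The individual readings (irrelevant detail that could confuse the
    information of interest) are rejected; the level and variability that
    support the later feature mapping are retained.
    """
    n = len(samples)
    mean = sum(samples) / n
    spread = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return mean, spread
```

A window of many raw samples is reduced to two numbers, each chosen because a downstream decision mechanism can act on it.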

Feature mapping (i.e., numeric-symbolic mapping) requires decision mechanisms that can distinguish between possible label classes. As shown in Fig. 5, widely used decision mechanisms include linear discriminant surfaces, local data cluster criteria, and simple decision limits. Depending on the nature of the features and the feature extraction approaches, one or more of these decision mechanisms can be selected to assign labels. [Pg.6]
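Two of these decision mechanisms can be sketched in a few lines (the weights, limits, and class labels below are hypothetical):

```python
def decision_limit(feature, limit):
    """Simplest mechanism: a single threshold on one feature."""
    return "abnormal" if feature > limit else "normal"

def linear_discriminant(features, weights, bias):
    """Label by which side of the hyperplane w.x + b = 0 the feature vector falls on."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "class_A" if score >= 0 else "class_B"
```

Local data cluster criteria (the third mechanism in Fig. 5) follow the same pattern, assigning the label of the nearest cluster center.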

Input-Output Analysis considers feature extraction in the context of empirical modeling approaches. [Pg.9]

Data Interpretation extends data analysis techniques to label assignment and considers both integrated approaches to feature extraction and feature mapping and approaches with explicit and separable extraction and mapping steps. The approaches in this section focus on those that form numeric-symbolic interpreters to map from numeric data to specific labels of interest. [Pg.9]

This chapter provides a complementary perspective to that provided by Kramer and Mah (1994). Whereas they emphasize the statistical aspects of the three primary process monitoring tasks (data rectification, fault detection, and fault diagnosis), we focus on the theory, development, and performance of approaches that combine data analysis and data interpretation into an automated mechanism via feature extraction and label assignment. [Pg.10]

As discussed and illustrated in the introduction, data analysis can be conveniently viewed in terms of two categories of numeric-numeric manipulation, input and input-output, both of which transform numeric data into more valuable forms of numeric data. Input manipulations map from input data without knowledge of the output variables, generally to transform the input data to a more convenient representation that has unnecessary information removed while retaining the essential information. As presented in Section IV, input-output manipulations relate input variables to numeric output variables for the purpose of predictive modeling and may include an implicit or explicit input transformation step for reducing input dimensionality. When applied to data interpretation, the primary emphasis of input and input-output manipulation is on feature extraction, driving extracted features from the process data toward useful numeric information on plant behaviors. [Pg.43]

In general, a given numeric-symbolic interpreter will be used in the context of a locally constrained set of labels defining and limiting both the input variables and possible output interpretations. In this context, feature extraction is intended to produce the features that are resolved into the labels. The numeric-symbolic problem boundary is defined backward from the labels of interest so that feature extraction is associated only with the input requirements for a given approach to produce the relevant features needed to generate the label. The distinctions of relevant features and... [Pg.46]

Furthermore, the pattern structures in a representation space formed from raw input data are not necessarily linearly separable. A central issue, then, is feature extraction to transform the representation of observable features into some new representation in which the pattern classes are linearly separable. Since many practical problems are not linearly separable (Minsky and Papert, 1969), use of linear discriminant methods is especially dependent on feature extraction. [Pg.51]
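The classic illustration is XOR: its pattern classes are not linearly separable in the raw inputs, but a simple feature map that appends the product term yields a new representation in which a linear discriminant suffices (the particular map and weights below are one illustrative choice):

```python
def feature_map(x, y):
    """Transform the raw representation (x, y) into (x, y, x*y)."""
    return (x, y, x * y)

def xor_by_linear_discriminant(x, y):
    """A linear discriminant in the mapped space: sign of x + y - 2*x*y - 0.5.

    No such linear rule exists in the raw (x, y) space.
    """
    fx, fy, fxy = feature_map(x, y)
    return 1 if fx + fy - 2 * fxy - 0.5 > 0 else 0
```

The discriminant is still linear; all of the nonlinearity has been pushed into the feature extraction step, which is the point of the paragraph above.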

In brief, the Bayesian approach uses PDFs of pattern classes to establish class membership. As shown in Fig. 22, feature extraction corresponds to calculation of the a posteriori conditional probability or joint probability using the Bayes formula that expresses the probability that a particular pattern label can be associated with a particular pattern. [Pg.56]
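A sketch of the Bayes formula for one feature with Gaussian class-conditional PDFs (the class means, spreads, and priors below are hypothetical):

```python
import math

def gaussian_pdf(x, mean, std):
    """Class-conditional PDF p(x | class), here assumed Gaussian."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posteriors(x, classes):
    """Bayes formula: P(class | x) = p(x | class) P(class) / sum over all classes.

    `classes` maps each label to (mean, std, prior probability).
    """
    joint = {label: gaussian_pdf(x, m, s) * prior
             for label, (m, s, prior) in classes.items()}
    total = sum(joint.values())
    return {label: j / total for label, j in joint.items()}
```

The returned a posteriori probabilities sum to one, and the pattern is assigned the label with the largest posterior.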

Feature extraction is of fundamental importance because the sensor features are used in all subsequent processing to produce the output of the sensor system, in terms of estimates of the measured quantities. [Pg.148]





