Big Chemical Encyclopedia


Hyperspectral data set

By analogy, a hyperspectral data set is defined by at least 50 planes: an absorbance map is acquired for each wavelength in the spectral range. If the wavelengths number fewer than 50, the term multispectral imaging is used. [Pg.412]
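The distinction above is purely a count of spectral planes. A minimal sketch (the function name and shape convention are assumptions, not from the source):

```python
import numpy as np

# Hypothetical helper: classify an image stack by its number of spectral
# planes, following the convention quoted above (>= 50 planes -> hyperspectral).
def imaging_mode(cube):
    """cube: array of shape (rows, cols, n_wavelengths)."""
    n_planes = cube.shape[2]
    return "hyperspectral" if n_planes >= 50 else "multispectral"

cube = np.zeros((64, 64, 128))   # 128 absorbance maps, one per wavelength
print(imaging_mode(cube))        # hyperspectral
```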

HCA is a powerful method for data sorting based on local decision criteria. These criteria are based on finding the smallest distances between items such as spectra, where the term distances may imply Euclidean or Mahalanobis distances [17], or correlation coefficients. HCA is, per se, not an imaging method, but can be used to construct pseudocolor maps from hyperspectral data sets collected from cells or tissue sections [18]. [Pg.181]
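An illustrative sketch of the workflow just described (not the authors' code; the synthetic cube, cluster count, and linkage choice are assumptions): cluster the pixel spectra hierarchically on Euclidean distances, then fold the cluster labels back into the image grid as a pseudocolor map.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
rows, cols, n_wl = 8, 8, 60
cube = rng.normal(size=(rows, cols, n_wl))
cube[:4] += 3.0                       # two synthetic "tissue" regions

spectra = cube.reshape(-1, n_wl)      # one row per pixel spectrum
dists = pdist(spectra, metric="euclidean")   # Mahalanobis is also possible
tree = linkage(dists, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

pseudocolor = labels.reshape(rows, cols)     # integer map; color with any LUT
```

The integer map is then displayed with an arbitrary lookup table, which is what makes the result a pseudocolor (rather than true-color) image.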

Figure 5.11d shows a microscopic image of an MCF-7 cell incubated with TATp-LIP for 6 h, while Figure 5.11e shows a pseudocolor map constructed from the hyperspectral data set using HCA. The cluster analysis is based mainly on the... [Pg.198]

In mid-IR imaging instruments, it is more common to couple the array to an interferometer, so that interferograms from different spatial regions of the sample are recorded at each detector element. Subsequent Fourier transformation yields the desired hyperspectral data set. All types of systems are described in this chapter. [Pg.4]
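A minimal sketch of that processing step, assuming a synthetic cosine interferogram at every pixel in place of real interferometer data: Fourier transformation along the optical-path-difference axis turns the stack of interferograms into one spectrum per detector element, i.e. the hyperspectral cube.

```python
import numpy as np

rows, cols, n_points = 4, 4, 256
delay = np.arange(n_points)
# Hypothetical monochromatic source: the same cosine interferogram at every pixel.
interferogram = np.cos(2 * np.pi * 20 * delay / n_points)
interferograms = np.broadcast_to(interferogram, (rows, cols, n_points))

# FFT along the last (interferogram) axis; keep the magnitude of the
# single-sided spectrum.
spectra = np.abs(np.fft.rfft(interferograms, axis=2))
print(spectra.shape)          # (4, 4, 129) -- one spectrum per pixel
```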

Multivariate Image Analysis: Strong and Weak Multiway Methods. Strong and weak multiway methods analyze 3D and 2D matrices, respectively. The hyperspectral data cube structure is described using chemometric vocabulary [17]. A two-way matrix, such as a classical NIR spectroscopy data set, has two modes: O objects (matrix rows) and V variables (matrix columns). Hyperspectral data cubes possess two object modes and one variable mode and, because of their two spatial directions, can be written as an OOV data array. [Pg.418]
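The OOV structure can be made concrete with a reshape: a cube with two object (spatial) modes and one variable (wavelength) mode is unfolded into the (O×O) × V two-way matrix that classical chemometric methods expect, and refolded afterwards. A sketch (the dimensions are arbitrary):

```python
import numpy as np

n_x, n_y, n_var = 10, 12, 100          # two object modes, one variable mode
oov = np.zeros((n_x, n_y, n_var))      # the OOV data array

# Unfold: stack the two object modes so each row is one pixel spectrum.
two_way = oov.reshape(n_x * n_y, n_var)
print(two_way.shape)                   # (120, 100)

# Refold to recover the image geometry after, e.g., a per-pixel analysis.
refolded = two_way.reshape(n_x, n_y, n_var)
```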

In this section, the kernel-based matched signal detectors, such as the kernel MSD (KMSD), kernel ASD (KASD), kernel OSP (KOSP) and kernel SMF (KSMF), as well as the corresponding conventional detectors, are implemented on two different types of data sets - illustrative toy data sets and a real hyperspectral image that contains military targets. The Gaussian RBF kernel, k(x, y) = exp(-||x - y||^2 / c), was used to implement the kernel-based detectors; c represents the width of the Gaussian distribution, and its value was chosen such that the overall data variations can be fully exploited by the Gaussian RBF function. In this paper, the values of c were determined experimentally. [Pg.194]
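The Gaussian RBF kernel is the only piece of this passage that can be written down directly; a sketch in its usual form k(x, y) = exp(-||x - y||^2 / c) follows (the exact normalization of c is left to experiment in the source, so this form is an assumption):

```python
import numpy as np

def rbf_kernel(X, Y, c):
    """Kernel matrix K[i, j] = exp(-||X[i] - Y[j]||^2 / c)."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq / c)

X = np.array([[0.0, 0.0], [1.0, 0.0]])
K = rbf_kernel(X, X, c=1.0)
# Diagonal is exp(0) = 1; off-diagonal is exp(-1) for these unit-distance points.
```

Kernel detectors such as KMSD or KSMF then operate on K rather than on the raw spectra, which is what lets them exploit nonlinear structure.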

The availability of hyperspectral and multispectral imaging systems with large arrays has brought about new demands in terms of data processing capacity and the mathematics required to extract the important content from these large data sets. As spectral information in images greatly increases the amount of data to be processed, it is absolutely necessary to reduce the dimensionality of the data [66]. [Pg.290]
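PCA is the standard first step for the dimensionality reduction the text calls for; a minimal SVD-based sketch on synthetic pixel spectra (the sizes and component count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
spectra = rng.normal(size=(500, 200))        # 500 pixels, 200 wavelengths

centered = spectra - spectra.mean(axis=0)
# SVD-based PCA: the rows of Vt are the principal directions.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:10].T                # keep 10 components
print(scores.shape)                          # (500, 10)
```

Each pixel is now represented by 10 scores instead of 200 absorbances, which is the data reduction step that makes the subsequent multivariate processing tractable.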

The visualization of large data sets has special requirements. A typical example area is multivariate image analysis, especially in remote sensing, although hyperspectral imaging is also moving into other fields. Another example area is the three-dimensional visualization of weather systems. [Pg.216]

Outliers, i.e. spectra that have failed one of the tests, are routinely removed from the hyperspectral imaging data sets and are thus not accepted for multivariate classification analysis. [Pg.206]
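The actual quality tests are instrument- and protocol-specific; as an illustrative stand-in, the sketch below flags spectra whose total absorbance deviates strongly from the median (the 3-MAD threshold is an assumption) and drops them before classification.

```python
import numpy as np

rng = np.random.default_rng(2)
spectra = rng.normal(loc=1.0, size=(100, 50))
spectra[0] += 50.0                       # one synthetic outlier spectrum

totals = spectra.sum(axis=1)
med = np.median(totals)
mad = np.median(np.abs(totals - med))
# Keep spectra within ~3 robust standard deviations (1.4826 * MAD) of the median.
keep = np.abs(totals - med) <= 3.0 * 1.4826 * mad

clean = spectra[keep]                    # outliers excluded from classification
```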

Long ago, Manne proposed two theorems that clearly state when the true unique solution (concentration and spectral profiles) of a resolution problem can be recovered based on the natural local rank conditions of the data set [129]. Although these conditions were originally meant for structured processes, they can be easily reformulated for hyperspectral image data analysis as follows ... [Pg.89]

Many methods have been developed to tackle the issue of high dimensionality of hyperspectral data (Serpico and Bruzzone 1994). In summary, we may say that feature-reduction methods can be divided into two classes: feature-selection algorithms, which select a suboptimal subset of the original features while discarding the remaining ones, and feature extraction by data transformation, which projects the original data space onto a lower-dimensional feature subspace that preserves most of the information, such as nonlinear principal component analysis (NLPCA; Licciardi and Del Frate 2011). [Pg.1158]
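The feature-selection branch can be sketched with a deliberately simple criterion: rank the original spectral bands by their variance across pixels and keep a subset, discarding the rest. Real selectors use stronger criteria (class separability, mutual information); variance here is an illustrative stand-in, and the synthetic data are an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
spectra = rng.normal(size=(300, 120))        # 300 pixels, 120 bands
spectra[:, :5] *= 10.0                       # make a few bands informative

band_variance = spectra.var(axis=0)
selected = np.argsort(band_variance)[::-1][:20]   # indices of the top 20 bands
reduced = spectra[:, selected]
print(reduced.shape)                         # (300, 20)
```

Unlike the PCA-style extraction of the second class, the retained features are original bands, so they keep their physical (wavelength) interpretation.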

