Big Chemical Encyclopedia


Data matrices

Multichannel time-resolved spectral data are best analysed globally using nonlinear least squares algorithms, e.g., a simplex search, to fit multiple first-order processes to the data at all wavelengths simultaneously. The goal in this case is to find the time-dependent spectral contributions of all reactant, intermediate and final product species present. In matrix form this is A(λ, t) = BC, where A is the data matrix, with rows indexed by wavelength and columns by time, B contains the spectra of the species as columns, and C contains the time-dependent concentrations of all species arranged in rows. [Pg.2967]
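The bilinear model above lends itself to a variable-projection fit: for trial rate constants, the spectra B follow by linear least squares, and the rates themselves are refined with a simplex search. The following is a minimal NumPy/SciPy sketch with random placeholder data; the two-species assumption and all names are illustrative, not the cited implementation.

```python
import numpy as np
from scipy.optimize import minimize

wavelengths = np.linspace(400, 700, 151)      # nm (placeholder axis)
times = np.linspace(0, 10, 200)               # time axis (arbitrary units)

def concentration_profiles(rates, t):
    """Stack first-order decays exp(-k*t), one row per species."""
    return np.exp(-np.outer(rates, t))        # shape (n_species, n_times)

def residual_norm(log_rates, A, t):
    """Variable projection: for trial rates, solve for B by linear least
    squares and return the misfit ||A - B C||."""
    C = concentration_profiles(np.exp(log_rates), t)
    BT, *_ = np.linalg.lstsq(C.T, A.T, rcond=None)   # A ~ B C  =>  A.T ~ C.T B.T
    return np.linalg.norm(A - BT.T @ C)

# A would be the measured (wavelength x time) data matrix; here a placeholder.
A = np.random.rand(wavelengths.size, times.size)
fit = minimize(residual_norm, x0=np.log([0.5, 2.0]), args=(A, times),
               method="Nelder-Mead")          # a simplex search, as in the text
best_rates = np.exp(fit.x)
```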

Wavelet transformation (analysis) is considered another, and perhaps even more powerful, tool than the FFT for data transformation in chemometrics, as well as in other fields. The core idea is to use a basis function ("mother wavelet") and investigate the time-scale properties of the incoming signal [8]. As with the FFT, the wavelet transformation coefficients can be used in subsequent modeling instead of the original data matrix (Figure 4-7). [Pg.216]
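As an illustration, a discrete wavelet decomposition can replace each row of the data matrix by its coefficient vector. This sketch assumes the PyWavelets package; the 'db4' mother wavelet and the decomposition level are arbitrary choices, not prescribed by the text.

```python
import numpy as np
import pywt

signal = np.random.rand(256)              # one row of the data matrix (placeholder)
coeffs = pywt.wavedec(signal, wavelet="db4", level=4)  # multilevel decomposition
features = np.concatenate(coeffs)         # flattened coefficient vector
# Applying this row by row yields a transformed data matrix for modeling.
```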

It may look odd to treat the Singular Value Decomposition (SVD) technique as a tool for data transformation, simply because SVD is essentially the same as PCA. However, if we recall how PCR (Principal Component Regression) works, then we are indeed justified in handling SVD in the way mentioned above. What we do with PCR is, first of all, to transform the initial data matrix X in the way described by Eqs. (10) and (11). [Pg.217]

Now, one may ask, what if we are going to use feed-forward neural networks with the back-propagation learning rule? Then, obviously, SVD can be used as a data transformation technique. PCA and SVD are often used as synonyms. Below we shall use PCA in the classical context and SVD when it is applied to the data matrix before training any neural network, e.g., Kohonen's Self-Organizing Maps or Counter-Propagation Neural Networks. [Pg.217]
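A minimal sketch of this use of SVD, compressing the data matrix to a handful of score variables that would then serve as network inputs (the sizes and the choice d = 5 are illustrative):

```python
import numpy as np

X = np.random.rand(100, 20)                       # n samples x m variables
U, s, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
d = 5                                             # retained components
net_inputs = U[:, :d] * s[:d]                     # n x d score matrix for training
```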

For example, the objects may be chemical compounds. The individual components of a data vector are called features and may, for example, be molecular descriptors (see Chapter 8) specifying the chemical structure of an object. For statistical data analysis, these objects and features are represented by a matrix X which has a row for each object and a column for each feature. In addition, each object will have one or more properties that are to be investigated, e.g., a biological activity of the structure or a class membership. This property or these properties are merged into a matrix Y. Thus, the data matrix X contains the independent variables whereas the matrix Y contains the dependent ones. Figure 9-3 shows a typical multivariate data matrix. [Pg.443]

PCA is a frequently used method for extracting the systematic variance in a data matrix. It helps to obtain an overview of dominant patterns and major trends in the data. [Pg.446]

In matrix notation, PCA approximates the data matrix X, which has n objects and m variables, by the product of two smaller matrices: the scores matrix T (n objects and d components) and the loadings matrix P (m variables and d components), where X = TPᵀ ... [Pg.448]
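A sketch of this factorization, recovering T and P from the SVD of a mean-centered X (the random data and the choice d = 3 are placeholders):

```python
import numpy as np

X = np.random.rand(50, 10)                 # n = 50 objects, m = 10 variables
Xc = X - X.mean(axis=0)                    # PCA is usually done on centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
d = 3                                      # number of retained components
T = U[:, :d] * s[:d]                       # scores matrix, n x d
P = Vt[:d].T                               # loadings matrix, m x d
X_hat = T @ P.T                            # rank-d approximation of Xc
```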

An advantage of PCA is its ability to cope with almost any kind of data matrix; for example, it can deal with matrices having many rows and few columns, or vice versa. [Pg.448]

The goal of PCR is to extract intrinsic effects in the data matrix X and to use these effects to predict the values of Y. [Pg.448]
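One minimal way to realize this, assuming the usual PCR recipe of regressing Y on the leading principal-component scores of X (all data and dimensions below are placeholders):

```python
import numpy as np

X = np.random.rand(40, 12)                 # calibration measurements
Y = np.random.rand(40, 1)                  # property to predict
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
T = U[:, :4] * s[:4]                       # scores on 4 retained components
b, *_ = np.linalg.lstsq(T, Yc, rcond=None) # regression in score space
Y_hat = T @ b + Y.mean(axis=0)             # fitted Y values
```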

The ordered set of measurements made on each sample is called a data vector. The group of data vectors, identically ordered, for all of the samples is called the data matrix. If the data matrix is arranged such that successive rows of the matrix correspond to the different samples, then the columns correspond to the variables, as in Figure 1. Each variable, or aspect of the sample that is measured, defines an axis in space; the samples thus possess a data structure when plotted as points in that n-dimensional vector space, where n is the number of variables. [Pg.417]

Fig. 1. Format of the data matrix, where N is the number of samples and K is the number of variables. [Pg.417]

The successful application of pattern recognition methods depends on a number of assumptions (14). Obviously, there must be multiple samples from a system with multiple measurements consistently made on each sample. For many techniques the system should be overdetermined: the ratio of the number of samples to the number of measurements should be at least three. These techniques assume that the nearness of points in hyperspace faithfully reflects the similarity of the properties of the samples. The data should be arranged in a data matrix with one row per sample, and the entries of each row should be the measurements made on the sample, as shown in Figure 1. The information needed to answer the questions must be implicitly contained in that data matrix, and the data representation must be conformable with the pattern recognition algorithms used. [Pg.419]

The purpose of translation is to change the position of the data with respect to the coordinate axes. Usually, the data are translated such that the origin coincides with the mean of the data set. Thus, to mean-center the data, let x_ik be the datum associated with the kth measurement on the ith sample. The mean-centered value is computed as x'_ik = x_ik − x̄_k, where x̄_k is the mean for variable k. This procedure is performed on all of the data to produce a new data matrix, the variables of which are now referred to as features. [Pg.419]
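In NumPy the whole procedure is a column-wise subtraction over the data matrix (illustrative data):

```python
import numpy as np

X = np.random.rand(30, 8)            # samples x variables (placeholder)
features = X - X.mean(axis=0)        # x'_ik = x_ik - mean_k, for every column k
```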

Preprocessing methods of rotation shift the orientation of the data points with respect to the coordinate axes by some angle θ (Fig. 5). The operation is performed mathematically by applying a rotation matrix R to the original data matrix X to obtain the coordinates of the points with respect to Y, the new axes ... [Pg.420]
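For the two-dimensional case, the rotation matrix R and its application look as follows (a sketch; the 30° angle and the data are arbitrary):

```python
import numpy as np

theta = np.deg2rad(30.0)                        # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = np.random.rand(25, 2)                       # data points as rows
Y_new = X @ R.T                                 # coordinates in the rotated axes
```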

In order for a solution to the system of equations expressed in equation 11 to exist, the number of sensors must be at least equal to the number of analytes. To proceed, the analyst must first determine the sensitivity factors using external standards, i.e., solve equation 11 for K using known C and R. Because the concentration matrix C is generally not square, equation 11 is solved by the generalized inverse method. K is given by ... [Pg.427]
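Assuming a linear response model of the form R = CK (equation 11 itself is not reproduced here), the generalized-inverse solution can be sketched with the Moore-Penrose pseudoinverse:

```python
import numpy as np

C = np.random.rand(10, 3)        # 10 standards x 3 analyte concentrations
R = np.random.rand(10, 4)        # 10 standards x 4 sensor responses
K = np.linalg.pinv(C) @ R        # K = (C^T C)^-1 C^T R, generalized inverse
```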

A method of resolution that makes very few a priori assumptions is based on principal components analysis. The various forms of this approach are based on the self-modeling curve resolution developed in 1971 (55). The method requires a data matrix composed of spectroscopic scans obtained from a two-component system in which the concentrations of the components vary over the sample set. Such a data matrix could be obtained, for example, from a chromatographic analysis where spectroscopic scans are recorded at several points in time as an overlapped peak elutes from the column. [Pg.429]

In general, two related techniques may be used: principal component analysis (PCA) and principal coordinate analysis (PCoorA). Both methods start from the n × m data matrix M, which holds the m coordinates defining n conformations in an m-dimensional space. That is, each matrix element M_ij is equal to q_ij, the jth coordinate of the ith conformation. From this starting point PCA and PCoorA follow different routes. [Pg.87]

The state of the art for data evaluation of complex depth profiles is the use of factor analysis. The acquired data are compiled in a two-dimensional data matrix such that the n intensity values N(E), or dN(E)/dE in the derivative mode, of a spectrum recorded in the ith of a total of m sputter cycles are written in the ith column of the data matrix D. For the purpose of factor analysis, it is necessary that the (n × m)-dimensional data matrix D can be expressed as a product of two matrices, i.e. the (n × k)-dimensional spectrum matrix R and the (k × m)-dimensional concentration matrix C, in which R contains in its k columns the spectra of the k components, and C contains in its k rows the concentrations for the respective m sputter cycles, i.e. ... [Pg.20]
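A truncated SVD gives an abstract factorization of exactly this form; turning the abstract factors into physically meaningful spectra and concentrations requires a further rotation step not shown here. A sketch with placeholder data and an assumed k = 3:

```python
import numpy as np

D = np.random.rand(200, 40)           # n energy channels x m sputter cycles
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 3                                 # number of components, e.g. judged from s
R_abstract = U[:, :k] * s[:k]         # abstract spectrum matrix, n x k
C_abstract = Vt[:k]                   # abstract concentration matrix, k x m
D_hat = R_abstract @ C_abstract       # rank-k reproduction of D
```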

A data matrix with column-wise organization is easily converted to row-wise organization by taking its matrix transpose, and vice versa. If you are not familiar with the matrix transpose operation, please refer to the discussion in Appendix A. [Pg.11]

Whether or not we scale, weight, and/or center our data, most of the algorithms used to calculate the eigenvectors require one mandatory pretreatment: we must square our data matrix, A, by either pre- or post-multiplying it by its transpose ... [Pg.101]
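Both square forms are direct matrix products (a sketch with placeholder data):

```python
import numpy as np

A = np.random.rand(60, 15)   # placeholder data matrix, n x m
AtA = A.T @ A                # m x m square matrix (pre-multiplied by the transpose)
AAt = A @ A.T                # n x n square matrix (post-multiplied by the transpose)
```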

We compute a PCR calibration in exactly the same way we computed an ILS calibration. The only difference is the data we start with. Instead of directly using absorbance values expressed in the spectral coordinate system, we use the same absorbance values but express them in the coordinate system defined by the basis vectors we have retained. Instead of a data matrix containing absorbance values, we have a data matrix containing the coordinates of each spectrum on each of the axes of our new coordinate system. We have seen that these new coordinates are nothing more than the projections of the spectra onto the basis vectors. These projections are easily computed ... [Pg.108]

To compute the variance, we first find the mean concentration for that component over all of the samples. We then subtract this mean value from the concentration value of this component for each sample and square this difference. We then sum all of these squares and divide by the degrees of freedom (number of samples minus 1). The square root of the variance is the standard deviation. We adjust the variance to unity by dividing the concentration value of this component for each sample by the standard deviation. Finally, if we do not wish mean-centered data, we add back the mean concentrations that were initially subtracted. Equations [Cl] and [C2] show this procedure algebraically for component, k, held in a column-wise data matrix. [Pg.175]
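A sketch of these steps for one column k of a column-wise data matrix, following one reading of the procedure above (random placeholder data; Equations [C1] and [C2] are not reproduced here):

```python
import numpy as np

X = np.random.rand(20, 6)                  # samples x components (placeholder)
k = 2
mean_k = X[:, k].mean()
std_k = X[:, k].std(ddof=1)                # sqrt of variance with n - 1 dof
scaled = (X[:, k] - mean_k) / std_k        # unit variance, mean-centered
no_center = scaled + mean_k                # add the mean back if centering is not wanted
```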

An alternative method of variance scaling is to scale each variable to a uniform variance that is not equal to unity. Instead, we scale each data point by the root-mean-squared variance of all the variables in the data set. This is, perhaps, the most commonly employed type of variance scaling because it is a bit simpler and faster to compute. A data set scaled in this way will have a total variance equal to the number of variables in the data set divided by the number of data points minus one. To use this method of variance scaling, we compute a single scale factor over all of the variables in the data matrix ... [Pg.177]

Malinowski, E.R. "Determination of the Number of Factors and the Experimental Error in a Data Matrix", Anal. Chem. 1977, 49, 612-617. [Pg.193]

The evolution period t1 is systematically incremented in a 2D experiment, and the signals are recorded in the form of a time-domain data matrix S(t1, t2). Typically, this matrix in our experiments has dimensions of 512 points in t1 and 1024 in t2. The frequency-domain spectrum F(ω1, ω2) is derived from these data by successive Fourier transformation with respect to t2 and t1. [Pg.294]
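The successive transforms amount to FFTs along the two axes of the matrix (a sketch with random placeholder data in place of a real FID matrix):

```python
import numpy as np

S = np.random.rand(512, 1024)                  # 512 t1 points x 1024 t2 points
F = np.fft.fft(np.fft.fft(S, axis=1), axis=0)  # transform along t2, then t1
# F approximates the frequency-domain spectrum F(w1, w2)
```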





Chromatographic data matrix
Complete data matrix
Compositional analysis data matrix
Compositional data matrices
Compositional data matrices centering
Compositional data matrices standardization
Data correlation matrix
Data matrices alternating least squares
Data matrices capillary electrophoresis
Data matrices centering
Data matrices major product
Data matrices multivariate curve resolution
Data matrices standardization
Data matrix rank
Data matrix used for modelling
Data variance-covariance matrix
Excitation-emission matrix data
Matrix data resolution
Matrix-assisted laser desorption ionization data acquisition
Mean centered data matrix
Multivariate data matrices
Parallel Fock Matrix Formation with Distributed Data
Parallel Fock Matrix Formation with Replicated Data
Principal components analysis multivariate data matrices
Reconstructed data matrix
Residual data matrix
Silica matrix, separation data
Sorption data, matrix model
Sparse data matrix
The data resolution matrix
