Big Chemical Encyclopedia


Principal component analysis statistical methods

The criteria for the choice of a CRM are no different from the criteria used to select material for the preparation of a laboratory reference material for method development, statistical control charts, etc. The difference lies in the availability of adequate CRMs from reliable suppliers and in the level of compromise the analyst must make between an ideal situation and the reality of what is on offer. Massart and co-workers have proposed a principal component analysis to help select the best-adapted CRMs available on the market to verify AAS analysis of foodstuffs [10]. Their approach took into account the analytes as well as the matrix composition. Besides highlighting a lack of certain sorts of CRM, in particular those having a fatty matrix, they demonstrated that such a statistical approach can help in the most appropriate selection of materials. Boenke also proposed a systematic approach for the choice of materials to be certified for mycotoxins [11], which could be followed by potential users. The selection of the CRM by the analyst should include a certain number of parameters; these can cover the following properties to fulfil the intended purpose: level of concentration of the analytes ... [Pg.78]

There are a number of different experimental design techniques that can be used for medium optimization. Four simple methods that have been used successfully in titer improvement programs are discussed below. These should provide the basis for initial medium-improvement studies that can be carried out in the average laboratory. Other techniques requiring a deeper knowledge of statistics, including simplex optimization, multivariate analysis, and principal component analysis, have been reviewed (5,6). [Pg.415]

However, no book on experimental design of this scope can be considered exhaustive. In particular, the discussion of mathematical and statistical analysis has been kept brief. Designs for factor studies at more than two levels are not discussed. We do not describe robust regression methods, nor the analysis of correlations in responses (for example, principal components analysis), nor the use of partial least squares. Our discussion of variability and of the Taguchi approach will perhaps be considered insufficiently detailed in a few years. We have confined ourselves to linear (polynomial) models for the most part, but much interest is starting to be expressed in highly non-linear systems and their analysis by means of artificial neural networks. The importance of these topics for pharmaceutical development still remains to be fully assessed. [Pg.10]

Statistical Analysis and Reporting Methods for statistical analysis of metabonomics data sets include a variety of supervised and unsupervised multivariate techniques (Holmes et al., 2000) as well as univariate analysis strategies. These chemometric approaches have recently been reviewed (Holmes and Antti, 2002; Robertson et al., 2007), and a thorough discussion of them is outside the scope of this chapter. Perhaps the best known of the unsupervised multivariate techniques is principal component analysis (PCA), which is widely... [Pg.712]
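As an illustration only (not drawn from the cited reviews), the sketch below applies PCA to a simulated samples-by-variables data matrix with scikit-learn; the matrix dimensions and variable scaling choice are hypothetical.

```python
# Minimal PCA sketch on simulated metabonomics-style data:
# project a samples-by-variables matrix onto its first two principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                 # hypothetical: 40 samples x 200 spectral bins

X_scaled = StandardScaler().fit_transform(X)   # mean-centre and scale to unit variance
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)           # sample scores on PC1 and PC2
loadings = pca.components_                     # variable loadings for each component

print("explained variance ratio:", pca.explained_variance_ratio_)
# A scores plot (PC1 vs PC2) is typically inspected for sample clustering or outliers.
```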

The appropriate tools for the construction of models are factor analysis and principal component analysis [189]. An introduction to these statistical methods is beyond the scope of this book, and therefore only a brief discussion of the modelling of clusters is given here. [Pg.88]

For many applications, quantitative band shape analysis is difficult to apply. Bands may be numerous or may overlap, the optical transmission properties of the film or host matrix may distort features, and features may be indistinct. If one can prepare samples of known properties and collect the FTIR spectra, then it is possible to produce a calibration matrix that can be used to assist in predicting these properties in unknown samples. Statistical, chemometric techniques, such as PLS (partial least squares) and PCR (principal component regression), may be applied to this matrix. Chemometric methods permit much larger segments of the spectra to be included in developing an analysis model than is usually the case for simple band shape analyses. [Pg.422]
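By way of illustration, a minimal PCR sketch is given below, assuming a simulated calibration matrix of spectra (`calib_spectra`) and independently measured property values (`calib_property`); these names, the dimensions, and the choice of five components are hypothetical, not taken from the source.

```python
# Hypothetical PCR (principal component regression) sketch: compress the calibration
# spectra with PCA, then regress the known property on the component scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
calib_spectra = rng.normal(size=(30, 500))   # 30 reference samples x 500 wavenumbers (simulated)
calib_property = rng.normal(size=30)         # independently measured property values (simulated)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(calib_spectra, calib_property)

unknown = rng.normal(size=(1, 500))          # spectrum of an "unknown" sample
print("predicted property:", pcr.predict(unknown))
```

A PLS calibration would follow the same fit/predict pattern, with the compression and regression steps combined into one model.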

Finally, it is important to note that modern analytical equipment frequently offers opportunities for measuring several or many characteristics of a material more or less simultaneously. This has encouraged the development of multivariate statistical methods, which in principle permit the simultaneous analysis of several components of the material. Partial least squares methods and principal component regression are examples of such techniques that are now finding extensive use in several areas of analytical science. ... [Pg.81]

This same approach can be used for a mixture of three components. More complex mixtures can be unraveled through computer software that uses an iterative process at multiple wavelengths to calculate the concentrations. Mathematical approaches used include partial least squares, multiple least squares, principal component regression, and other statistical methods. Multicomponent analysis using UV absorption has been used to determine how many and what type of aromatic amino acids are present in a protein and to quantify five different hemoglobins in blood. [Pg.362]
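As a worked example of the underlying calculation (not the referenced software), the sketch below solves the Beer's-law system A = Ec by least squares at four wavelengths for a three-component mixture; the absorptivity values and concentrations are invented.

```python
# Hypothetical multicomponent Beer's-law example: the absorbance at each wavelength
# is the sum of the component contributions, A = E @ c, solved for c by least squares.
import numpy as np

# Molar absorptivity matrix E (wavelengths x components), invented numbers,
# path length assumed to be 1 cm.
E = np.array([
    [11500.0,  2100.0,   300.0],
    [ 4200.0,  9800.0,  1500.0],
    [  900.0,  3300.0,  8700.0],
    [  150.0,   800.0,  5600.0],
])
c_true = np.array([2.0e-5, 5.0e-5, 1.0e-5])    # mol/L, used only to generate "measured" data
A = E @ c_true                                 # measured absorbances at the four wavelengths

c_est, *_ = np.linalg.lstsq(E, A, rcond=None)  # least-squares estimate of the concentrations
print("estimated concentrations (mol/L):", c_est)
```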

In a study of performance measurement related to lean manufacturing affecting the net profit of SMEs in the manufacturing sector in Thailand, the researchers used factor analysis to find the factors. The study extracted factors using the principal component method, identified the factors that affect net profit using multinomial logistic regression analysis, and grouped the patterns of business operations of the SMEs with statistical analysis. [Pg.230]
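A rough sketch of that workflow on simulated data is shown below; the variable counts, class labels, and model settings are hypothetical and do not reproduce the authors' actual analysis.

```python
# Sketch of the described workflow (simulated data): principal-component factors of the
# survey variables are used as predictors in a multinomial logistic regression model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 23))               # 150 firms x 23 survey variables (simulated)
profit_class = rng.integers(0, 3, size=150)  # hypothetical net-profit category (low/medium/high)

factors = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))
model = LogisticRegression(max_iter=1000).fit(factors, profit_class)  # multinomial for >2 classes
print("training accuracy:", model.score(factors, profit_class))
```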

When factor analysis was carried out to determine the factors that affect the survival of SMEs in the province, the factors were extracted using the principal component method to see how many of the 23 variables could form factors. Factors were retained when the eigenvalue exceeded 1.0; the eigenvalue is indicative of the ability of the emerging factors to explain the variability of the original variables. In addition, this research applied the Varimax rotation method and the KMO statistic, which measures the suitability of the available data; KMO > 0.6 is considered to indicate data suitable for factor analysis. The results showed that KMO = 0.8123, which was over 0.6, so the data were appropriate for factor analysis. Five factors had an eigenvalue over 1.0, so the analysis grouped the variables into the five factors shown in Table 3. [Pg.233]
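For illustration, the sketch below computes the eigenvalue (Kaiser) criterion and the overall KMO statistic on simulated data with NumPy; the Varimax-rotated extraction itself is omitted, and the data are invented rather than the study's.

```python
# Sketch (simulated data): Kaiser criterion and KMO measure of sampling adequacy.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 23))             # 200 respondents x 23 variables (simulated)

R = np.corrcoef(X, rowvar=False)           # correlation matrix of the variables
eigvals = np.linalg.eigvalsh(R)[::-1]      # eigenvalues in descending order
n_factors = int(np.sum(eigvals > 1.0))     # retain factors with eigenvalue > 1.0
print("factors with eigenvalue > 1:", n_factors)

# KMO: compares squared correlations with squared partial correlations.
R_inv = np.linalg.inv(R)
d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
partial = -R_inv / d                       # partial correlations (off-diagonal part used)
off = ~np.eye(R.shape[0], dtype=bool)
kmo = np.sum(R[off] ** 2) / (np.sum(R[off] ** 2) + np.sum(partial[off] ** 2))
print("overall KMO:", round(kmo, 3))       # > 0.6 is the suitability threshold used in the text
```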

One of the main purposes of measuring NIR data is the determination of chemical composition or physical properties in a quantitative way. The principle of the measurement procedure for quantitative analysis is based on recording the NIR spectra of reference samples (the number depending on the number of components or parameters to be determined) of known composition. The levels of the constituents or the physical parameters are determined by independent, conventional analytical or physical methods. Then the set of reference spectra and the independently determined values of the parameters under investigation are used by a selected statistical method to build a calibration. This enables unknown samples to be evaluated with regard to the individual parameters of interest. The accuracy of the NIR technique depends upon the validity of the calibration data set, which must incorporate the entire range of concentrations that will be determined by the instrument. This set must contain samples with varying ratios of each component. NIR calibrations do not typically extrapolate or interpolate well across concentrations. Typical calibration sets include more than... [Pg.39]
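A minimal calibration sketch in this spirit, assuming simulated reference spectra and constituent levels and using partial least squares as the selected statistical method, is shown below; all names and dimensions are hypothetical.

```python
# Hypothetical NIR calibration sketch: reference spectra with independently determined
# constituent levels are used to build a PLS calibration, which is then applied to the
# spectrum of an unknown sample.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
ref_spectra = rng.normal(size=(60, 700))        # 60 reference samples x 700 NIR data points (simulated)
ref_levels = rng.uniform(0, 100, size=(60, 2))  # two constituents, % w/w, from reference methods (simulated)

pls = PLSRegression(n_components=8)
pls.fit(ref_spectra, ref_levels)                # build the calibration

unknown = rng.normal(size=(1, 700))             # spectrum of an unknown sample
print("predicted constituent levels:", pls.predict(unknown))
```

In practice the calibration set must span the full concentration range expected for the unknowns, as noted above.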

Among the multivariate statistical techniques that have been used as source-receptor models, factor analysis is the most widely employed. The basic objective of factor analysis is to allow the variation within a set of data to determine the number of independent causalities, i.e. sources of particles. It also permits the combination of the measured variables into new axes for the system that can be related to specific particle sources. The principles of factor analysis are reviewed, and the principal components method is illustrated by the reanalysis of aerosol composition results from Charleston, West Virginia. An alternative approach to factor analysis, Target Transformation Factor Analysis, is introduced and its application to a subset of particle composition data from the Regional Air Pollution Study (RAPS) of St. Louis, Missouri is presented. [Pg.21]



