Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Kernel PLS

The Analyze software uses the Kernel PLS method [114] with two key parameters: the number of latent variables and sigma. In this study these values were fixed at 5 and 10, respectively. K-PLS uses kernels and can therefore be seen as a nonlinear extension of the PLS method. The commonly used radial basis function (Gaussian) kernel was applied, where the kernel is expressed as [142]... [Pg.407]
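The Gaussian/RBF kernel referred to above is conventionally written K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)). As a minimal sketch (not the Analyze implementation; the function name and example data are invented for illustration), the full kernel matrix can be computed as:

```python
import numpy as np

def rbf_kernel(X, Y=None, sigma=10.0):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    if Y is None:
        Y = X
    # squared Euclidean distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))

X = np.array([[0.0, 0.0], [3.0, 4.0]])   # two toy samples, distance 5
K = rbf_kernel(X, sigma=10.0)
# diagonal entries are exactly 1; off-diagonal = exp(-25 / 200)
```

Each diagonal entry is 1 because every sample has zero distance to itself; sigma controls how quickly similarity decays with distance.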

Dasgupta, L., Lin, S. M., Carin, L. 2002. Modeling Pharmacogenomics of the NCI-60 Anticancer Data Set Utilizing kernel PLS to correlate the Microarray Data to Therapeutic Responses. In Methods of Microarray Data Analysis II (ed. S. Lin and K. M. Johnson). Kluwer Academic Publishers. [Pg.151]

Garza, J., Robles, J. (1993). Density functional theory softness kernel. Phys. Rev. A 47,... [Pg.433]

LS-SVMlab, http://www.esat.kuleuven.ac.be/sista/lssvmlab/. LS-SVMlab, by Suykens, is a MATLAB implementation of least-squares support vector machines (LS-SVMs), a reformulation of the standard SVM that leads to solving linear KKT systems. LS-SVM primal-dual formulations have been formulated for kernel PCA, kernel CCA, and kernel PLS, thereby extending the class of primal-dual kernel machines. Links between kernel versions of classic pattern recognition algorithms such as kernel Fisher discriminant analysis, and extensions to unsupervised learning, recurrent networks, and control, are available. [Pg.390]
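The "linear KKT system" that replaces the usual SVM quadratic program can be sketched as follows. This is an illustrative NumPy reimplementation, not LS-SVMlab code; the RBF kernel choice and the parameter names gamma and sigma are assumptions for the example.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: solve one linear KKT system instead of a QP.
    System: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    sq = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    K = np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma     # ridge term from the least-squares loss
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                 # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    sq = (X_new**2).sum(1)[:, None] + (X_train**2).sum(1)[None, :] - 2.0 * X_new @ X_train.T
    K = np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))
    return K @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])                        # smooth toy target
b, alpha = lssvm_train(X, y, gamma=100.0, sigma=1.0)
pred = lssvm_predict(X, alpha, b, X, sigma=1.0)
```

The key point of the reformulation is visible in `lssvm_train`: training reduces to a single `np.linalg.solve` call on an (n+1)-by-(n+1) system.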

S. Rännar, F. Lindgren, P. Geladi and S. Wold, A PLS kernel algorithm for data sets with many variables and fewer objects. Part I: theory and algorithm. J. Chemom., 8 (1994) 111-125. [Pg.159]

For subsequent PLS components, the NIPALS algorithm works differently than the kernel method; however, the results are identical. NIPALS requires a deflation of X and of Y, and the above pseudocode is continued by... [Pg.173]
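A minimal NIPALS sketch for a single response (PLS1) showing the deflation step described above; the variable names and toy data are invented for the example, not taken from the cited pseudocode.

```python
import numpy as np

def nipals_pls1(X, y, n_comp=2):
    """Minimal NIPALS for PLS1: after each component, X and y are deflated
    so that the next component models only the residual."""
    X = X - X.mean(0)
    y = y - y.mean()
    W, T, P, q = [], [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)           # x-weights
        t = X @ w                        # scores
        p = X.T @ t / (t @ t)            # x-loadings
        c = (y @ t) / (t @ t)            # y-loading
        X = X - np.outer(t, p)           # deflate X
        y = y - c * t                    # deflate y
        W.append(w); T.append(t); P.append(p); q.append(c)
    return np.array(W).T, np.array(T).T, np.array(P).T, np.array(q)

rng = np.random.default_rng(1)
Xd = rng.normal(size=(30, 5))
yd = Xd @ np.array([1.0, 0.5, 0.0, 0.0, -0.3])
W, T, P, q = nipals_pls1(Xd, yd, n_comp=3)
```

The deflation guarantees that successive score vectors (the columns of T) are mutually orthogonal, which is the property that makes the component-wise extraction well defined.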

Original x- and y-data are mapped to a new representation using a nonlinear function. For this purpose the theory of kernel-based learning has been adapted to PLS. In the new data space linear PLS can be applied (Rosipal and Trejo 2001). [Pg.176]

Like ANNs, SVMs can be useful in cases where the x-y relationships are highly nonlinear and poorly understood. Several parameters need to be optimized, including the severity of the cost penalty, the threshold fit error, and the nature of the nonlinear kernel. However, if one takes care to optimize these parameters by cross-validation (Section 12.4.3) or similar methods, the susceptibility to overfitting is not as great as for ANNs. Furthermore, the deployment of SVMs is simpler than for other nonlinear modeling alternatives (such as local regression, ANNs, or nonlinear variants of PLS) because the model can be expressed completely in terms of a relatively small number of support vectors. More details regarding SVMs can be obtained from several references [70-74]. [Pg.389]
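Tuning kernel hyperparameters by cross-validation, as recommended above, can be illustrated with a hedged sketch. RBF kernel ridge regression is used here as a simple stand-in for an SVM; the grid values, data, and function name are invented for the example.

```python
import numpy as np

def kernel_ridge_cv_mse(X, y, sigma, lam, n_folds=5):
    """Mean cross-validated squared error of an RBF kernel ridge model,
    used here to illustrate hyperparameter selection by cross-validation."""
    n = len(y)
    idx = np.arange(n)
    folds = np.array_split(idx, n_folds)
    sq = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    K = np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))
    errs = []
    for f in folds:
        tr = np.setdiff1d(idx, f)                       # training indices
        alpha = np.linalg.solve(K[np.ix_(tr, tr)] + lam * np.eye(len(tr)), y[tr])
        pred = K[np.ix_(f, tr)] @ alpha                 # predict held-out fold
        errs.append(np.mean((pred - y[f])**2))
    return float(np.mean(errs))

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=80)        # noisy nonlinear toy data
grid = {(s, l): kernel_ridge_cv_mse(X, y, s, l)
        for s in (0.3, 1.0, 3.0) for l in (1e-3, 1e-1, 10.0)}
best = min(grid, key=grid.get)                          # (sigma, lambda) with lowest CV error
```

Because every candidate is scored on held-out folds, a kernel/penalty combination that overfits the training data is penalized rather than selected.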

The development of new data analysis methods is also an important area of QSAR research. Several methods have been developed in recent years, and these include kernel partial least squares (K-PLS) [92], robust continuum regression [93], local lazy regression [94], fuzzy interval number k-nearest neighbor (FINkNN) [95], and fast projection plane classifier (FPPC) [96]. These methods have been shown to be useful for the prediction of a wide variety of target properties, which include moisture, oil, protein and starch... [Pg.232]

The K-PLS method can be reformulated to resemble support vector machines, but it can also be interpreted as a kernel and centering transformation of the descriptor data followed by a regular PLS method [99]. K-PLS was first introduced by Lindgren, Geladi, and Wold [143] in the context of working with linear kernels on data sets with more descriptor fields than data, in order to make the PLS modeling more efficient. Early applications of K-PLS were done mainly in this context [144-146]. The Parzen window, σ, in the formula above is a free parameter that is determined by hyper-tuning on a validation set. For each data set σ is then held constant, independent of the various bootstrap splits. [Pg.407]
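One way to see the "kernel and centering transformation" interpretation [99]: center the Gram matrix in feature space, then run ordinary linear PLS on the centered matrix. The centering step alone can be sketched as follows (function name and toy data are my own); for a linear kernel it must agree exactly with column-centering X first.

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix in feature space:
    Kc = (I - 11^T/n) K (I - 11^T/n), the kernel analogue of column-centering X."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering projector
    return J @ K @ J

# sanity check with a linear kernel: centering K = X X^T equals Xc Xc^T
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
Xc = X - X.mean(0)
Kc = center_kernel(X @ X.T)
```

After this transformation the subsequent PLS step never needs the (possibly infinite-dimensional) feature vectors themselves, only the centered n-by-n Gram matrix.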

Lindgren F, Geladi P, Wold S, The Kernel algorithm for PLS, Journal of Chemometrics, 1993, 7, 45-59. [Pg.361]

The combined effect of both kinds of collisions gives a line profile with a kernel that can be described by a Lorentzian profile slightly broadened by soft collisions. The wings, however, form a broad background caused by velocity-changing collisions. The whole profile cannot be described by a single Lorentzian function. In Fig. 8.4 such a line profile is shown for the Lamb peak in the laser output P_L(ω) at... [Pg.433]

Gotor et al. (2007) reported the use of near-infrared reflectance spectrometry (NIRS) to predict the contents of tocopherol and phytosterol in sunflower seeds. About 1000 samples of ground sunflower kernels were scanned by NIRS at 2 nm intervals from 400 to 2500 nm. For each sample, standard measurements of tocopherol and phytosterol contents were made. The total tocopherol and phytosterol contents were assessed by HPLC with a fluorescence (FL) detector and by GC, respectively. The calibration data set for tocopherol and phytosterol ranged from 175 to 1005 mg/kg oil (mean value around 510 ± 140 mg/kg oil) and from 180 to 470 mg/100 g oil (mean value 320 ± 50 mg/100 g oil), with values of 0.64 and 0.27, respectively. In this study, calibrations were obtained by a modified PLS method. [Pg.377]

FTIR spectroscopy was used in combination with partial least squares (PLS) regression to differentiate and quantify these two oils. The calibration plot of the PLS regression model showed good linearity between the actual values and the FTIR-predicted percentage of palm kernel olein in virgin coconut oil. The differences between the actual adulteration concentration and the adulteration predicted from the model were very small, with a determination coefficient (R²) of 0.9973 and a root mean error of calibration of 0.0838. [Pg.149]

SAMPLS is a modification of PLS analysis. In SAMPLS, the PLS vectors, also called latent variables, are derived from the n × n covariance matrix. Whereas SAMPLS has no major advantages compared with ordinary PLS analysis, it operates a few to several orders of magnitude faster in cross-validation runs (see below), owing to a much smaller number of arithmetic operations. SAMPLS is only one example of the so-called kernel algorithms; other modifications, applicable to data sets with several different y vectors, have been described (e.g., Refs. 19, 39-42). [Pg.454]

The original algorithm for calculating the PLS regression model works well with data that contain approximately the same number of variables (K) as objects (N). In the case where K ≫ N or N ≫ K, special algorithms can be constructed that work with the small kernel matrices X'X, Y'Y, XX', or X'Y [3-5]. [Pg.2010]
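A small numeric check of why the kernel matrices help when N ≪ K: the N × N matrix XX' carries the same nonzero spectrum as the much larger K × K matrix X'X, so latent-variable extraction can run entirely on the small matrix. This is an illustrative sketch with invented variable names, not one of the cited algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 10, 2000                      # far more variables than objects
X = rng.normal(size=(N, K))

small = X @ X.T                      # N x N "kernel" matrix, cheap to store and factor
# Its eigenvalues equal the squared singular values of X, i.e. the nonzero
# eigenvalues of the K x K matrix X'X, so nothing is lost by working small.
ev_small = np.sort(np.linalg.eigvalsh(small))[::-1]
sv = np.linalg.svd(X, compute_uv=False)
```

Factoring the 10 × 10 matrix here replaces work on a 2000 × 2000 one; the savings grow with K while the extracted components stay the same.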

