
Support vector machines relationships

Liu H, Yao X, Zhang R, Liu M, Hu Z, Fan B (2005) Accurate quantitative structure-property relationship model to predict the solubility of C60 in various solvents based on a novel approach using a least-squares support vector machine. J. Phys. Chem. B, 109, 20565-20571. [Pg.349]

Panaye A, Fan BT, Doucet JP, Yao XJ, Zhang RS, et al. Quantitative structure-toxicity relationships (QSTRs): a comparative study of various nonlinear methods. General regression neural network, radial basis function neural network and support vector machine in predicting toxicity of nitro- and cyanoaromatics to Tetrahymena pyriformis. SAR QSAR Environ Res 2006, 17, 75-91. [Pg.198]

The support vector machine technique is a relatively new method in the field of structure-property relationships. SVMs originated from the work of Vapnik et al. [91] and were originally applied to image analysis, text categorization, and... [Pg.390]

Czerminski, R., Yasri, A. and Hartsough, D. (2001) Use of support vector machine in pattern classification: application to QSAR studies. Quantitative Structure-Activity Relationships, 20, 227-240. [Pg.406]

Quantitative structure-activity relationship; root mean square error; receiver operating characteristic; recursive partitioning; support vector machine; TOPological Substructural Molecular Design; topological polar surface area. [Pg.410]

Ivanciuc, O. (2002e) Structure-odor relationships for pyrazines with support vector machines. Internet Electron. J. Mol. Des., 1, 269-284. [Pg.1075]

Li, J., Liu, H., Yao, X.-J., Liu, M., Hu, Z. and Fan, B.T. (2007) Quantitative structure-activity relationship study of acyl ureas as inhibitors of human liver glycogen phosphorylase using least squares support vector machines. Chemom. Intell. Lab. Syst., 87, 139-146. [Pg.1103]

Structural similarity. Such relationships are illustrated in Figure 11.8, which shows the variety of SSRs covered by one exemplary series of selective compounds. Despite the complexity of SSRs, in benchmark calculations 2D fingerprints and support vector machines were successfully applied to identify target-selective compounds and significantly enrich selective compounds over nonselective and inactive molecules [55-58]. Thus, an interesting question has been whether or not this type of selectivity searching might also succeed in practical LBVS (ligand-based virtual screening) applications. [Pg.311]

An identical approach was used by Branca et al. [16] at Merck and led to the identification of a novel inhibitor of poly(ADP-ribose) polymerase-1 (PARP-1). The 2D descriptors were similar to the CATS descriptors: all the two-point pharmacophores in each molecule were derived from all possible atom pairs and described in terms of atom types, number of π electrons, number of heavy atoms attached, and number of covalent bonds separating the two atoms along the shortest path. Known PARP-1 inhibitors were collected from patents, publications, and publicly available databases and used to train a support vector machine (SVM) classifier. An SVM constructs the hyperplane that best separates active and inactive compounds in the multidimensional space defined by the molecular descriptors. The results of the SVM were used to classify the compounds in the Merck collection, and those predicted to be active were screened. One compound was particularly potent and was chosen as the starting point for a structure-activity relationship (SAR) exploration of this chemical class. Docking studies on the PARP-1 crystal structure were used to guide the synthetic efforts. [Pg.365]
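To make that classification step concrete, the following is a minimal sketch, not the published Merck workflow, of training an SVM classifier on numeric molecular descriptors and using it to rank an untested collection; the descriptor arrays, labels, and scikit-learn setup are illustrative assumptions.

```python
# Minimal sketch: SVM classification of actives vs. inactives from
# molecular descriptor vectors (illustrative data, not the published study).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: rows = compounds, columns = descriptors
# (e.g., atom-pair counts); labels 1 = known inhibitor, 0 = presumed inactive.
X_train = rng.normal(size=(200, 50))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

# An RBF-kernel SVM finds the separating hyperplane in the kernel-induced
# feature space that best divides actives from inactives.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

# Score an untested screening collection and keep the top-ranked compounds.
X_screen = rng.normal(size=(1000, 50))
scores = clf.predict_proba(X_screen)[:, 1]
top_hits = np.argsort(scores)[::-1][:20]
print("indices of top-ranked compounds:", top_hits)
```

In a real campaign the descriptor matrix would come from a chemistry toolkit and the top-ranked compounds would be the ones sent to the assay.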

A later chapter will discuss these methods in more detail. For example, support vector machines and traditional neural networks are analogs of multiple regression or discriminant analysis that provide more flexibility in the form of the relationship between molecular properties and bioactivity. Kohonen neural nets are a more flexible analog to principal component analysis. Various Bayesian approaches are alternatives to the statistical methods described earlier. A freely available program offers many of these capabilities. [Pg.81]
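As an illustration of that extra flexibility, the sketch below (synthetic data and scikit-learn estimators, not taken from the chapter) fits an ordinary linear regression and a kernel SVM regressor to the same nonlinear property-activity relationship: the linear model is restricted to a straight line, whereas the SVM can follow the curvature.

```python
# Illustrative comparison: multiple regression vs. a kernel SVM regressor
# on a nonlinear descriptor-activity relationship (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 120).reshape(-1, 1)           # one molecular descriptor
y = np.sin(x).ravel() + 0.1 * rng.normal(size=120)   # nonlinear "activity"

lin = LinearRegression().fit(x, y)
svr = SVR(kernel="rbf", C=10.0, gamma="scale").fit(x, y)

print("linear R^2:", round(r2_score(y, lin.predict(x)), 3))
print("SVR    R^2:", round(r2_score(y, svr.predict(x)), 3))
```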

These methods rely mainly on a training data set: they analyse the training data to learn relationships between data elements and produce an inferred function. They involve algorithms such as Bayesian statistics, decision tree (DT) learning, support vector machines (SVM), random forests (RF) and nearest-neighbour algorithms. [Pg.136]
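A minimal sketch of that supervised-learning workflow follows; the data are synthetic and the scikit-learn classifiers simply stand in for the algorithm families listed above.

```python
# Sketch: several supervised learners inferring a function from one
# training set (synthetic data; estimators stand in for the families above).
import numpy as np
from sklearn.naive_bayes import GaussianNB            # Bayesian statistics
from sklearn.tree import DecisionTreeClassifier       # decision tree (DT)
from sklearn.svm import SVC                           # support vector machine
from sklearn.ensemble import RandomForestClassifier   # random forest (RF)
from sklearn.neighbors import KNeighborsClassifier    # nearest neighbour
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = (X[:, 0] - X[:, 3] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "random forest": RandomForestClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                 # learn the inferred function
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.2f}")
```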

Mathematical methods are used to identify the relationship between descriptors and the biological effects (multiple linear regression, neural networks, nearest neighbors, support vector machines, random forests, etc.). [Pg.327]

The other technique, the support vector machine (SVM), is emerging as a powerful method to perform both classification and regression tasks. It can be employed as such or combined with other multivariate regression methods, such as PLS. SVM is not a natural-computation method itself, because it performs deterministic calculations, so randomness in the results is avoided. However, it derives from the field of automatic (machine) learning (some of the most relevant developers worked on ANNs as well), and there is a fairly close relationship with multilayer perceptrons (perceptrons will be introduced in the next section). Therefore, SVM has been included in this chapter. [Pg.367]
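The sketch below (synthetic data; the scikit-learn estimators and the particular coupling scheme are assumptions) illustrates one such combination: PLS compresses the descriptors into a few latent variables, an SVM regressor is then trained on those scores, and refitting on the same data reproduces the predictions exactly, reflecting the deterministic character noted above.

```python
# Sketch: combining PLS (latent-variable compression) with SVM regression,
# and checking that repeated fits on the same data are deterministic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 40))                       # descriptor matrix
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=150)

def fit_predict(X, y):
    pls = PLSRegression(n_components=5).fit(X, y)
    scores = pls.transform(X)                        # latent variables (PLS scores)
    svr = SVR(kernel="rbf").fit(scores, y)
    return svr.predict(scores)

pred1 = fit_predict(X, y)
pred2 = fit_predict(X, y)
print("identical predictions on refit:", np.allclose(pred1, pred2))  # True
```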

In summary, the support vector machine (SVM) and partial least squares (PLS) methods were used to develop quantitative structure-activity relationship (QSAR) models to predict the inhibitory activity of nonpeptide HIV-1 protease inhibitors. A genetic algorithm (GA) was employed to select the variables that lead to the best-fitted models. A comparison of the results obtained using SVM with those of PLS revealed that the SVM model is much better than that of PLS. The root mean square errors of the training set and the test set for the SVM model were calculated to be 0.2027 and 0.2751, and the coefficients of determination (R²) were 0.9800 and 0.9355, respectively. Furthermore, the statistical parameter obtained from the leave-one-out cross-validation test (Q²) on the SVM model was 0.9672, which proves the reliability of this model. Omar Deeb is thankful to Al-Quds University for financial support. [Pg.79]
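For readers who want to reproduce these figures of merit on their own data, the following is a minimal sketch (synthetic data and a scikit-learn SVR, not the HIV-1 protease data set or the published model) of how training/test RMSE, R², and a leave-one-out Q² of the kind reported above are typically computed.

```python
# Sketch: computing RMSE, R^2 and leave-one-out Q^2 for an SVR-based
# QSAR model (synthetic data; hyperparameters are placeholders).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split, LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 12))                             # selected descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=80)   # activity values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)

rmse_tr = mean_squared_error(y_tr, model.predict(X_tr)) ** 0.5
rmse_te = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
r2_tr = r2_score(y_tr, model.predict(X_tr))
r2_te = r2_score(y_te, model.predict(X_te))

# Leave-one-out cross-validated Q^2 on the training set.
y_loo = cross_val_predict(SVR(kernel="rbf", C=10.0), X_tr, y_tr, cv=LeaveOneOut())
q2 = r2_score(y_tr, y_loo)

print(f"RMSE(train)={rmse_tr:.3f}  RMSE(test)={rmse_te:.3f}")
print(f"R2(train)={r2_tr:.3f}  R2(test)={r2_te:.3f}  Q2(LOO)={q2:.3f}")
```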

Section 2.1 has attempted to determine the maximal margin hyperplane in an intuitive way. Support vector machines, the successful implementation of statistical learning theory (SLT), are built on the basis of the maximal margin hyperplane described above. It is important to reveal the relationship between formula (2.16) and SLT. [Pg.32]
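For reference, the standard (hard-margin) formulation of the maximal margin hyperplane is given below in LaTeX; this is the textbook form of the problem referred to above, and whether it matches the numbering (2.16) in the cited text cannot be confirmed from this excerpt.

```latex
% Standard hard-margin SVM primal problem (textbook form; the numbering
% (2.16) in the cited text is not reproduced here).
\begin{equation}
  \min_{\mathbf{w},\,b}\; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2}
  \quad \text{subject to} \quad
  y_i\bigl(\mathbf{w}\cdot\mathbf{x}_i + b\bigr) \ge 1,
  \qquad i = 1,\dots,n .
\end{equation}
```

Minimizing \(\lVert\mathbf{w}\rVert^{2}/2\) under these constraints is equivalent to maximizing the geometric margin \(2/\lVert\mathbf{w}\rVert\), which is what connects the intuitive picture of Section 2.1 to the optimization problem studied in SLT.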

Motivated by the requirements of the analysis and separation of polycyclic aromatic hydrocarbons, several authors have determined the retention indices of some of these compounds on chromatographic columns. Since the retention indices of many polycyclic aromatic hydrocarbons have not yet been determined, it is desirable to find a mathematical model relating the retention indices to the molecular structure of polycyclic aromatic hydrocarbons. Some of these relationships have been studied by PLS, but it can be shown that support vector machines give models with better prediction ability. Table 13.3 lists the experimental values for 33 polycyclic aromatic hydrocarbons and their molecular parameters [52 5]. Support vector regression has been used for this modeling work. [Pg.264]
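A minimal sketch of such a PLS-versus-SVR comparison is shown below; the data are synthetic stand-ins for the molecular parameters and retention indices of Table 13.3 (not reproduced here), and the scikit-learn estimators are assumptions rather than the models used in the cited work.

```python
# Sketch: comparing PLS and support vector regression for predicting
# retention indices from molecular parameters (synthetic stand-in data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(5)
X = rng.normal(size=(33, 6))     # molecular parameters for 33 hypothetical PAHs
y = 200 + 40 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(scale=5, size=33)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
pls = PLSRegression(n_components=3)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))

for name, model in [("PLS", pls), ("SVR", svr)]:
    q2 = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {q2:.3f}")
```

On data with a genuinely nonlinear structure-retention relationship, the kernel SVR tends to recover the curvature that a linear PLS model misses, which is the kind of advantage the excerpt describes.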

Chen, R. L., Lu, W. C. and Chen, N. Y. (2003). Support vector machine applied to relationship of trace element contents and hypertension disease. Computers and Applied Chemistry, 20, pp. 567-570. [Pg.321]


See other pages where Support vector machines relationships is mentioned: [Pg.123]    [Pg.49]    [Pg.43]    [Pg.45]    [Pg.205]    [Pg.83]    [Pg.326]    [Pg.323]    [Pg.192]    [Pg.405]    [Pg.580]    [Pg.416]    [Pg.99]    [Pg.496]    [Pg.338]    [Pg.678]    [Pg.137]    [Pg.496]    [Pg.270]    [Pg.424]    [Pg.125]    [Pg.63]    [Pg.1317]    [Pg.84]   







