
Neural network selection

The abbreviation QSAR stands for quantitative structure-activity relationships; QSPR means quantitative structure-property relationships. As the properties of an organic compound usually cannot be predicted directly from its molecular structure, an indirect approach is used to overcome this problem. In the first step, numerical descriptors encoding information about the molecular structure are calculated for a set of compounds. Secondly, statistical methods and artificial neural network models are used to predict the property or activity of interest, based on these descriptors or a suitable subset. A typical QSAR/QSPR study comprises the following steps: structure entry (or start from an existing structure database), descriptor calculation, descriptor selection, model building, and model validation. [Pg.432]
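
As a rough illustration of this workflow, the sketch below strings the steps together with RDKit descriptors and scikit-learn. The descriptor choice, the toy molecules and the activity values are placeholders for illustration, not taken from the text.

```python
# Minimal QSAR/QSPR pipeline sketch: descriptor calculation -> descriptor
# selection -> model building -> validation (toy data, assumed descriptor set).
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

smiles = ["CCO", "CCCCO", "c1ccccc1O", "CC(=O)O", "CCN", "c1ccccc1C"]
activity = np.array([0.2, 0.9, 1.4, 0.5, 0.3, 1.8])   # hypothetical activity values

def descriptors(smi):
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

X = np.array([descriptors(s) for s in smiles])                       # descriptor calculation
X_sel = SelectKBest(f_regression, k=3).fit_transform(X, activity)    # descriptor selection
model = LinearRegression().fit(X_sel, activity)                      # model building
print(cross_val_score(model, X_sel, activity, cv=3))                 # model validation
```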

Tasks for Neural Networks and Selection of an Appropriate Neural Network Method... [Pg.464]

Table 9-3 can act as a guideline for the proper selection of a neural network method. It summarizes the different network types and their learning strategies, and lists different types of applications. [Pg.464]

As explained in Chapter 8, descriptors are used to represent a chemical structure and, thus, to provide a coding which allows electronic processing of chemical data. The example given here shows how a GA is used to find an optimal set of descriptors for the task of classification using a Kohonen neural network. The chromosomes of the GA are used as a means for selecting the descriptors: they indicate which descriptors are used and which are rejected ... [Pg.471]
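
A minimal sketch of how such a chromosome can act as an on/off mask over a descriptor matrix; the array sizes and values are illustrative only.

```python
# Each chromosome is a binary vector: 1 = descriptor kept, 0 = descriptor rejected.
import numpy as np

rng = np.random.default_rng(0)
n_descriptors = 10
population = rng.integers(0, 2, size=(20, n_descriptors))  # 20 binary chromosomes

X = rng.normal(size=(115, n_descriptors))   # descriptor matrix (placeholder values)
chromosome = population[0]
X_subset = X[:, chromosome == 1]            # keep only the descriptors switched on
print(chromosome, X_subset.shape)
```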

The GA was then applied to select those descriptors which give the best classification of the structures when a Kohonen network is used. The objective function was based on the quality of the classification done by a neural network for the reduced descriptors. [Pg.472]
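
One possible form of such an objective function is sketched below: a descriptor subset is scored by how cleanly a self-organizing map separates the known classes. The MiniSom package, the neuron-purity score and the names X, chromosome and class_labels are stand-ins chosen for illustration, not the implementation used in the study.

```python
# Hedged sketch of a GA fitness function based on SOM classification quality.
from collections import Counter, defaultdict

import numpy as np
from minisom import MiniSom

def fitness(X_subset, labels, grid=5, iters=500):
    """Score a reduced descriptor matrix by the class purity of the SOM neurons."""
    som = MiniSom(grid, grid, X_subset.shape[1], sigma=1.0, learning_rate=0.5,
                  random_seed=0)
    som.train_random(X_subset, iters)
    hits = defaultdict(list)
    for x, y in zip(X_subset, labels):
        hits[som.winner(x)].append(y)          # collect the classes hitting each neuron
    # average fraction of the majority class over all occupied neurons
    return np.mean([Counter(v).most_common(1)[0][1] / len(v) for v in hits.values()])

# e.g. score = fitness(X[:, chromosome == 1], class_labels)
```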

The same structure representation as in the original study [39] is selected in order to show some of the possibilities that evolve from working with a neural network method. Table 10.1-1 gives the ten descriptors chosen for the representation of the 115 molecules of the data set. [Pg.508]

After descriptor selection and NN training, the best networks were applied to the prediction of 259 chemical shifts from 31 molecules (the prediction set), which were not used for training. The mean absolute error obtained for the whole prediction set was 0.25 ppm, and for 90% of the cases the mean absolute error was 0.19 ppm. Some stereochemical effects could be correctly predicted. In terms of speed, the neural network method is very fast: the whole process of predicting the NMR shifts of 30 protons in a molecule with 56 atoms, starting from an MDL Molfile, took less than 2 s on a common workstation. [Pg.527]
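
A minimal sketch of the kind of prediction-set evaluation reported here, using the mean absolute error; the shift values below are placeholders, not the study's data.

```python
# MAE over an external prediction set of 1H chemical shifts (placeholder values).
import numpy as np

predicted = np.array([7.26, 3.41, 1.15, 2.08])   # predicted shifts / ppm
observed  = np.array([7.30, 3.35, 1.22, 2.01])   # experimental shifts / ppm
mae = np.mean(np.abs(predicted - observed))
print(f"MAE = {mae:.2f} ppm")
```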

More elaborate schemes can be envisaged. Thus, a self-organizing neural network, as obtained by the classification of a set of chemical reactions as outlined in Section 3.5, can be interfaced with the EROS system to select the reaction that actually occurs from among various reaction alternatives. In this way, knowledge extracted from reaction databases can be interfaced with a reaction prediction system ... [Pg.552]
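
A schematic of how such an interface might look: candidate reactions proposed by the prediction system are encoded as descriptor vectors, mapped onto a trained self-organizing map, and kept only if their winning neuron was labelled feasible during training. Everything below (MiniSom, the random data, the neuron labels) is an illustrative assumption, not the actual EROS coupling.

```python
# Schematic: use a trained SOM to screen reaction alternatives.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(1)
training = rng.normal(size=(60, 4))        # descriptor vectors of known reactions
som = MiniSom(4, 4, 4, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(training, 300)

# Label each neuron from known outcomes (random stand-in labels here).
feasible = {(i, j): bool(rng.integers(0, 2)) for i in range(4) for j in range(4)}

alternatives = rng.normal(size=(3, 4))     # alternatives proposed by the generator
chosen = [k for k, vec in enumerate(alternatives) if feasible[som.winner(vec)]]
print("reaction alternatives accepted by the map:", chosen)
```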

Leane MM, Cumming I, Corrigan O. The use of artificial neural networks for the selection of the most appropriate formulation and processing variables in order to predict the in vitro dissolution of sustained release minitablets. Pharm Sci Tech 2003;4:218-29. [Pg.700]

Aqueous solubility is selected to demonstrate the E-state application in QSPR studies. Huuskonen et al. modeled the aqueous solubility of 734 diverse organic compounds with multiple linear regression (MLR) and artificial neural network (ANN) approaches [27]. The set of structural descriptors comprised 31 E-state atomic indices and three indicator variables for pyridine, aliphatic hydrocarbons and aromatic hydrocarbons, respectively. The dataset of 734 chemicals was divided into a training set (n = 675), a validation set (n = 38) and a test set (n = 21). A comparison of the MLR results (training, r² = 0.94, s = 0.58; validation, r² = 0.84, s = 0.67; test, r² = 0.80, s = 0.87) and the ANN results (training, r² = 0.96, s = 0.51; validation, r² = 0.85, s = 0.62; test, r² = 0.84, s = 0.75) indicates a small improvement for the neural network model with five hidden neurons. These QSPR models may be used for a fast and reliable computation of the aqueous solubility of diverse organic compounds. [Pg.93]
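
The comparison can be reproduced in outline as below, with synthetic data standing in for the 734-compound E-state set; the split sizes and the five hidden neurons follow the text, everything else (data, network settings) is assumed.

```python
# Sketch: MLR vs. small ANN on a fixed train/validation/test split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(734, 34))          # 31 E-state indices + 3 indicator variables
logS = X @ rng.normal(size=34) + rng.normal(scale=0.5, size=734)

X_tr, y_tr = X[:675], logS[:675]        # training set (n = 675)
X_va, y_va = X[675:713], logS[675:713]  # validation set (n = 38)
X_te, y_te = X[713:], logS[713:]        # test set (n = 21)

models = [("MLR", LinearRegression()),
          ("ANN", MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0))]
for name, model in models:
    model.fit(X_tr, y_tr)
    for split, Xs, ys in [("train", X_tr, y_tr), ("valid", X_va, y_va), ("test", X_te, y_te)]:
        pred = model.predict(Xs)
        s = np.sqrt(mean_squared_error(ys, pred))
        print(f"{name} {split}: r2 = {r2_score(ys, pred):.2f}, s = {s:.2f}")
```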

There are many different methods for selecting those descriptors of a molecule that capture the information that somehow encodes the compound's solubility. Currently, the most often used are multiple linear regression (MLR), partial least squares (PLS) and neural networks (NN). The former two methods provide a simple linear relationship between several independent descriptors and the solubility, as given in Eq. (14). This equation yields the independent contribution of each descriptor, Di, to the solubility ... [Pg.302]
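
A small sketch of this linear form: after fitting, the regression coefficients can be read directly as the independent contribution of each descriptor. The generic equation and the synthetic data are assumptions; Eq. (14) itself is not reproduced in the excerpt.

```python
# logS modelled as a constant plus a sum of independent descriptor contributions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
D = rng.normal(size=(200, 4))                    # descriptors D1..D4 (synthetic)
true_c = np.array([0.8, -1.2, 0.3, 0.0])
logS = 1.5 + D @ true_c + rng.normal(scale=0.1, size=200)

mlr = LinearRegression().fit(D, logS)
print("intercept:", round(mlr.intercept_, 2))
print("contribution of each descriptor:", np.round(mlr.coef_, 2))
```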

Two models of practical interest using quantum chemical parameters were developed by Clark et al. [26, 27]. Both studies were based on 1085 molecules and 36 descriptors calculated with the AM1 method following structure optimization and electron density calculation. An initial set of descriptors was selected with a multiple linear regression model and further optimized by trial-and-error variation. The second study reported a standard error of 0.56 for 1085 compounds and also estimated the reliability of neural network prediction by analysing the standard deviation error for an ensemble of 11 networks trained on different randomly selected subsets of the initial training set [27]. [Pg.385]
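
The ensemble-based reliability estimate can be sketched as follows. The 1085 compounds, 36 descriptors and 11 networks follow the text; the data, the network size and the subset size are assumptions.

```python
# Spread of predictions across an ensemble of networks as a reliability estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1085, 36))                        # 36 AM1-derived descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1085)
X_new = rng.normal(size=(10, 36))                      # compounds to be predicted

preds = []
for seed in range(11):                                 # ensemble of 11 networks
    idx = rng.choice(len(X), size=900, replace=False)  # random training subset
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000, random_state=seed)
    net.fit(X[idx], y[idx])
    preds.append(net.predict(X_new))

preds = np.array(preds)
print("ensemble prediction:", np.round(preds.mean(axis=0), 2))
print("reliability (std over networks):", np.round(preds.std(axis=0), 2))
```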

The 2D model was built from a wide array of descriptors, including E-state indices, by Simulations Plus [89]. The model is based on associative neural network ensembles [86, 87] constructed from n = 9658 compounds selected from the BioByte StarList [10] of ion-corrected experimental logP values. The model produced MAE = 0.24, r = 0.96 (R. Fraczkiewicz, personal communication). [Pg.394]

W. Wu and D.L. Massart, Artificial neural networks in classification of NIR spectral data: selection of the input. Chemom. Intell. Lab. Syst., 35 (1996) 127-135. [Pg.697]

The selection of the number of clusters, which is generally not known beforehand, represents the primary performance criterion. Optimization of performance therefore requires trial-and-error adjustment of the number of clusters. Once the cluster number is established, the neural network structure is used as a way to determine the linear discriminant for interpretation. In effect, the RBFN makes use of a known transformed feature space, defined in terms of prototypes of similar patterns obtained by applying k-means clustering. [Pg.62]
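
A compact sketch of this two-stage construction: k-means prototypes define the radial-basis feature space, and a linear discriminant is fitted on top. The cluster count and RBF width are the trial-and-error knobs; all data and parameter values here are illustrative.

```python
# RBF network sketch: k-means prototypes -> Gaussian basis features -> linear discriminant.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import pairwise_distances

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

n_clusters = 6                                   # adjusted by trial and error
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
width = 1.0                                      # assumed common RBF width

def rbf_features(data):
    d = pairwise_distances(data, km.cluster_centers_)
    return np.exp(-(d ** 2) / (2 * width ** 2))  # Gaussian activations of the prototypes

clf = LogisticRegression().fit(rbf_features(X), y)   # linear discriminant on RBF features
print("training accuracy:", clf.score(rbf_features(X), y))
```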

Ball JW, Jurs PC (1993) Automated selection of regression models using neural networks for 13C NMR spectral prediction. Anal Chem 65:505 [Pg.282]

Artificial neural networks are as common outside science as they are within it, particularly in financial applications, such as credit scoring and share selection. They have even been used in such eccentric (but, perhaps, financially rewarding) activities as trying to predict the box office success of motion pictures.1... [Pg.11]

