Neural network algorithm training process

An automated FTP service was used to obtain predictions for all of our 168 integral membrane proteins by the Rost et al. method [9]. A total of 11870 residues were correctly predicted in the TMH conformation, 2436 residues were overpredicted, 2512 residues were underpredicted, while 50335 residues were correctly predicted not to be in the TMH conformation. One of the many performance parameters that can be constructed from these data is the A_II parameter (Methods). Its value is A_II = 0.656, which is inferior to our value of 0.712 (Table 9) for the same parameter. However, when tested on the subset of 63 proteins used by Rost et al. [9], the A_II parameter calculated from the predictions returned by the automated service becomes 0.733, which is comparable to our value of A_II = 0.740 for the same subset of proteins (Table 9). A similar test on the subset of 105 proteins, never before seen in the training process for the neural network algorithm, gave quite a low value of A_II = 0.610 for the Rost et al. method [9]. That value is lower than our value of A_II = 0.682 for the same subset of 105 proteins (Table 9). All 63 proteins selected by Rost et al. [9] are also predicted as membrane proteins, but their method does not recognize 2 out of the 105 membrane proteins selected by us. The underprediction of membrane proteins is due to serious underprediction of transmembrane helices: 50 of the 419 observed TMH are underpredicted and 11 overpredicted by Rost et al. [9]. For comparison, our Table 9 results (row f) for A_II are obtained with 21 underpredicted and 25 overpredicted TMH in the same test set of 105 proteins. [Pg.429]
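The A_II parameter itself is defined in the chapter's Methods section, which is not reproduced in this excerpt. As an illustration of how correlation-type quality measures are assembled from the four residue counts quoted above, the sketch below computes the Matthews correlation coefficient together with per-class accuracies; the variable names are ours, and the MCC is a stand-in measure, not the A_II definition.

```python
# Residue-level confusion-matrix counts quoted above for the Rost et al.
# predictions on the 168-protein set.
tp = 11870   # residues correctly predicted in the TMH conformation
fp = 2436    # residues overpredicted as TMH
fn = 2512    # residues underpredicted (missed TMH residues)
tn = 50335   # residues correctly predicted as non-TMH

# Matthews correlation coefficient: a correlation-type measure built
# from the same four counts (illustrative stand-in for A_II).
mcc = (tp * tn - fp * fn) / (
    ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
)
sensitivity = tp / (tp + fn)   # fraction of observed TMH residues found
specificity = tn / (tn + fp)   # fraction of non-TMH residues kept
print(f"MCC = {mcc:.3f}, sens = {sensitivity:.3f}, spec = {specificity:.3f}")
```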

We propose the use of an MSCL neural network to train on this 3-D dataset, taking into account a magnitude function that can be defined to steer the training of the palette toward accomplishing the desired task. This algorithm follows the general competitive learning steps ... [Pg.215]
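A minimal sketch of the generic competitive-learning loop the excerpt refers to. The magnitude function, learning rate, and palette size are illustrative assumptions; the actual MSCL update rule is not given in this excerpt.

```python
import numpy as np

def magnitude(x):
    """Hypothetical magnitude function; the excerpt says it can be
    defined to steer training (assumption here: Euclidean norm)."""
    return np.linalg.norm(x)

def competitive_learning(data, n_units=8, lr=0.1, epochs=20, seed=0):
    """Generic competitive-learning loop: find the winning unit for
    each sample and pull it toward the sample, here weighted by the
    sample's magnitude."""
    rng = np.random.default_rng(seed)
    # Initialise the palette (codebook) from random data points.
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            # Magnitude-weighted update of the winning unit only.
            units[winner] += lr * magnitude(x) * (x - units[winner])
    return units

palette = competitive_learning(np.random.default_rng(1).random((500, 3)))
```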

Osuna, E., Freund, R. and Girosi, F. (1997). An improved training algorithm for support vector machines. In Proc. IEEE Workshop on Neural Networks for Signal Processing VII. [Pg.325]

Huang and Tang [49] trained a neural network with data relating to several qualities of polymer yarn and ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
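A compact sketch of the two-stage idea, assuming synthetic data in place of Huang and Tang's yarn measurements: an MLP is fitted as a surrogate for the process-quality relationship, then a simple genetic algorithm searches the ten-parameter space for inputs that maximize the predicted quality. All data, layer sizes, and GA settings here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in data: 200 runs of ten process parameters and one quality
# score (the real yarn relationships are unknown here, so a synthetic
# response is used purely for illustration).
X = rng.random((200, 10))
y = 1.0 - np.sum((X - 0.5) ** 2, axis=1)          # hypothetical quality

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(X, y)

# Minimal genetic algorithm: the trained ANN serves as the fitness
# function while the GA searches for high predicted quality.
pop = rng.random((50, 10))
for _ in range(100):
    fitness = ann.predict(pop)
    parents = pop[np.argsort(fitness)[-25:]]        # keep the best half
    cut = rng.integers(1, 10, size=25)              # one-point crossover
    mates = parents[rng.permutation(25)]
    children = np.where(np.arange(10) < cut[:, None], parents, mates)
    children += rng.normal(0, 0.02, children.shape)  # mutation
    pop = np.clip(np.vstack([parents, children]), 0, 1)

best = pop[np.argmax(ann.predict(pop))]
print("parameters with highest predicted quality:", best.round(2))
```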

Reasonable noise in the spectral data does not affect the clustering process. In this respect, cluster analysis is much more stable than other methods of multivariate analysis, such as principal component analysis (PCA), in which an increasing amount of noise accumulates in the less relevant components. The mean cluster spectra can be extracted and used for the interpretation of the chemical or biochemical differences between clusters. HCA, per se, is ill-suited for a diagnostic algorithm. We have used the spectra from clusters to train artificial neural networks (ANNs), which may serve as supervised methods for final analysis. This process, which requires hundreds or thousands of spectra from each spectral class, is presently ongoing, and validated and blinded analyses based on these efforts will be reported. [Pg.194]
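A minimal sketch of the HCA-then-ANN workflow described above, using synthetic stand-in spectra: hierarchical clustering assigns unsupervised class labels, the mean cluster spectra are extracted for interpretation, and the cluster assignments then serve as targets for a supervised ANN. Cluster counts, spectrum sizes, and network settings are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in spectra: 300 "spectra" of 120 points drawn from three noisy
# templates (real work uses hundreds to thousands of measured spectra).
templates = rng.random((3, 120))
spectra = np.repeat(templates, 100, axis=0) + rng.normal(0, 0.05, (300, 120))

# Unsupervised step: hierarchical cluster analysis (Ward linkage).
labels = fcluster(linkage(spectra, method="ward"), t=3, criterion="maxclust")

# Mean cluster spectra, available for chemical interpretation.
means = np.array([spectra[labels == k].mean(axis=0) for k in (1, 2, 3)])

# Supervised step: the cluster assignments become training targets for
# an ANN that can later classify new spectra directly.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(spectra, labels)
```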

The fundamental idea behind training, for all neural network architectures, is this: pick a set of weights (often randomly), apply the inputs to the network, and see how the network performs with this set of weights. If it doesn't perform well, then modify the weights by some algorithm (specific to each architecture) and repeat the procedure. This iterative process is continued until some pre-specified stopping criterion has been reached. [Pg.51]
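The loop below is a literal rendering of that recipe on a toy problem. The weight-modification step here is simple random-perturbation search, chosen only because it is architecture-agnostic; backpropagation or any other architecture-specific rule slots into the same iterate-until-stopping-criterion structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = x1 XOR x2 with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    w1, b1, w2, b2 = w
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)

# 1. Pick a set of weights (randomly).
w = [rng.normal(0, 1, (2, 4)), np.zeros(4), rng.normal(0, 1, 4), 0.0]

# 2-4. Apply the inputs, check performance, modify the weights (here:
# keep random perturbations that reduce the loss), and repeat until a
# pre-specified stopping criterion is met.
for step in range(20000):
    trial = [p + rng.normal(0, 0.1, np.shape(p)) for p in w]
    if loss(trial) < loss(w):
        w = trial
    if loss(w) < 1e-3:   # stopping criterion
        break
```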

Lennox, B.; Montague, G.A.; Frith, A.M.; Gent, C.; Bevan, V. Industrial applications of neural networks - an investigation. J. Process Control 2001, 11, 497-507.

Murtoniemi, E.; Merkku, P.; Yliruusi, J. Comparison of four different neural network training algorithms in modelling the fluidized bed granulation process. Lab. Microcomput. 1993, 12, 69-76. [Pg.2412]

In this paper, groundwater quality test data were pre-processed with an Immune Algorithm (IA). The key characteristics of the coal mine water inrush source data are extracted by a characteristic analysis method, and the complexity of the data is reduced by reducing the dimensionality of the data set. The dimension-reduced data are then used to train a Back Propagation Neural Network (BPNN), and the coal mine water inrush source is recognized by the trained BPNN. Experiments show that when the source of mine disaster water is identified by the method developed in this paper, its accuracy reaches 93%. A more detailed introduction to the method is given below. [Pg.179]
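A sketch of the reduce-then-train pipeline under stated assumptions: since the Immune Algorithm pre-processing and the characteristic analysis method are not detailed in this excerpt, PCA is substituted for the dimensionality-reduction step, and the water-quality data and class counts are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Stand-in water-quality analyses: 200 samples, 12 measured ions/indices,
# 3 possible inrush-source classes (hypothetical numbers).
X = rng.random((200, 12))
y = rng.integers(0, 3, 200)

# The paper reduces dimensionality after IA pre-processing; PCA is used
# here only to show the reduce-then-train structure of the pipeline.
model = make_pipeline(
    PCA(n_components=5),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
source_class = model.predict(X[:1])   # recognize an inrush-source sample
```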

The framework of the presented intelligent multi-sensor system is reflected by its data processing flow, as illustrated in Fig. 3. Diversified sensors in the field and sophisticated algorithms make the system scalable and adaptive to different driving profiles and scenarios. Data sets of complementary sensors are synchronized on the same time base before being conveyed to the feature computation components. Based on the outcome of feature computation, selected data sets are fused at the feature level to construct input vectors for pattern classification, so as to detect driver drowsiness. The classifier used in this work is built upon an Artificial Neural Network (ANN) or, more particularly, Multilayer Perceptrons (MLP) with a supervised training procedure. [Pg.126]
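A minimal sketch of the feature-level fusion step feeding the MLP classifier: per-sensor feature vectors computed on a common time base are concatenated into one input vector per observation window. The sensor names, feature sizes, and labels are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 400  # synchronized observation windows (hypothetical data)

# Features computed per sensor on a common time base, e.g. eyelid-camera
# and steering-angle statistics (names and sizes are assumptions).
eye_features = rng.random((n, 6))
steering_features = rng.random((n, 4))
drowsy = rng.integers(0, 2, n)          # supervised labels

# Feature-level fusion: concatenate the per-sensor feature vectors
# into one input vector for the MLP classifier.
fused = np.hstack([eye_features, steering_features])

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(fused, drowsy)
alarm = mlp.predict(fused[:5])          # drowsiness detection per window
```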

