
Neural network method comparison

Neural network methods require a fixed-length representation of the data to be processed. Recorded vibrational spectra usually fulfill this requirement. In most applications of vibrational spectroscopy, the spectral range and resolution are fixed, so spectra from different sources can be compared directly. Appropriate scaling of the spectra allows different resolutions to be handled so that descriptors with the same number of components are obtained. Digitized vibrational spectra typically contain absorbance or transmission values in wavenumber format. Most spectrometers provide the standardized spectral data format JCAMP-DX, developed by the Working Party on Spectroscopic Data Standards of the International Union of Pure and Applied Chemistry (IUPAC) [48]. [Pg.178]
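The rescaling of spectra recorded at different resolutions onto a fixed-length descriptor might be sketched as follows. This is a minimal pure-Python linear interpolation; the helper name `resample` and the example wavenumber grid are illustrative, not from the source:

```python
def resample(xs, ys, n):
    """Linearly interpolate a spectrum (xs ascending, ys intensities)
    onto n evenly spaced points, yielding a fixed-length descriptor."""
    lo, hi = xs[0], xs[-1]
    step = (hi - lo) / (n - 1)
    out = []
    j = 0
    for i in range(n):
        x = lo + i * step
        # advance to the segment [xs[j], xs[j+1]] containing x
        while j < len(xs) - 2 and xs[j + 1] < x:
            j += 1
        t = (x - xs[j]) / (xs[j + 1] - xs[j])
        out.append(ys[j] + t * (ys[j + 1] - ys[j]))
    return out
```

Spectra digitized on different grids can then be mapped to descriptors of identical length before being fed to a network.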

In this paper, we describe a neural network used to optimize the manufacturing data output. For the detailed study, we monitor the amount of time necessary to obtain the result. In the conclusion, the result of the neural network method is compared with the presently used one. The comparison of the two methods and their quality assessment is based on comparing the distribution functions that describe the time to display the data output. The best solution is to be applied in the Production module of the K2 ERP system. [Pg.1931]
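Comparing two methods through the distribution functions of their response times can be sketched with empirical CDFs and their maximum vertical distance (the Kolmogorov-Smirnov statistic). The source does not state which distance it uses, so this is an illustrative choice:

```python
def ecdf(sample, x):
    """Empirical distribution function: fraction of sample values <= x."""
    return sum(1 for v in sample if v <= x) / len(sample)

def ks_statistic(a, b):
    """Maximum vertical distance between the empirical CDFs of two
    response-time samples; smaller means more similar distributions."""
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in sorted(set(a) | set(b)))
```

Two sets of measured display times can then be compared with a single scalar, e.g. `ks_statistic(times_old, times_new)`.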

The nearest-neighbour method is often applied and, in view of its simplicity, gives surprisingly good results. An example where k-NN performs well in a comparison with neural networks and SIMCA (see further) can be found in [16]. [Pg.225]
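The simplicity mentioned above is visible in a minimal 1-NN classifier: assign a query the label of its closest training descriptor. A sketch using Euclidean distance (the function name and data are illustrative):

```python
import math

def nearest_neighbour(train, query):
    """train: list of (descriptor, label) pairs.
    Returns the label of the descriptor closest to query (Euclidean)."""
    return min(train, key=lambda item: math.dist(item[0], query))[1]
```

The k-NN generalization simply takes a majority vote over the k closest training points instead of the single closest one.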

Use of multivariate approaches based on classification modelling with cluster analysis, factor analysis and the SIMCA technique [98,99], and on the Kohonen artificial neural network [100]. All these methods, though rarely implemented, lead to very good results not achievable with classical strategies (comparisons, amino acid ratios, flow charts); moreover, it is possible to know the confidence level of the classification carried out. [Pg.251]

It is very important to note that this method automatically captures the interactions between taste substances. If many measurements of various mixed solutions are made, the output patterns of a test solution and the mixed solutions can easily be compared by suitable algorithms, such as neural networks. [Pg.398]
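One simple way to compare a test solution's output pattern with stored mixed-solution patterns is a cosine similarity over the sensor channels. The source does not specify the comparison algorithm, so this is an illustrative sketch:

```python
import math

def cosine_similarity(p, q):
    """Similarity between two sensor-output patterns;
    1.0 means identical direction, 0.0 means orthogonal patterns."""
    dot = sum(x * y for x, y in zip(p, q))
    return dot / (math.hypot(*p) * math.hypot(*q))
```

A test solution would be matched to the stored mixed-solution pattern with the highest similarity; a neural network replaces this fixed metric with a learned mapping.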

Fig. 6.7 Comparison of the maximum of the neural network approximation of the ODHE ethylene yield obtained in 10 runs of the genetic algorithm with a population size of 60, and the global maximum obtained with a sequential quadratic programming method run from 15 different starting points.
E-state indexes were used by Votano et al. [55] to develop two models (one for aromatic and one for nonaromatic compounds) for a dataset of 5694 molecules. A comparison of PLS, MLRA, and ANN models for their prediction of the test set molecules clearly indicated the advantage of the nonlinear methods. The neural networks calculated MAE = 0.62 and MAE = 0.56 for aromatic and nonaromatic sets, respectively. The second-best results for the same test sets, MAE = 0.76 and MAE = 0.66, were calculated using the MLRA model. [Pg.249]
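The MAE figures quoted above are the mean absolute error between observed and predicted values; a minimal sketch of that metric (helper name and sample data are illustrative):

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between observed and predicted values."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

On the test sets above, a lower MAE (0.62/0.56 for the ANN vs. 0.76/0.66 for MLRA) directly quantifies the advantage of the nonlinear models.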

CNS activity is a complex process and remains far from fully understood. It is difficult to interpret why certain fragments appear to influence CNS activity. Several compounds with a protonated tertiary amine can pass the blood-brain barrier, although they are not CNS-active. Tertiary amines have a pKa around 7 and can easily be protonated. A comparison of five different approaches, including a Bayesian neural network and several other methods described here, revealed that substructural analysis achieved the best prediction accuracy so far [38]. [Pg.1794]
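The claim that amines with pKa around 7 are easily protonated follows from the Henderson-Hasselbalch relation; a quick sketch (the pH value used is illustrative, not from the source):

```python
def fraction_protonated(pka, ph):
    """Henderson-Hasselbalch: fraction of a basic amine carrying a
    proton at the given pH, 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (ph - pka))
```

At pKa 7 an amine is half protonated at pH 7 and still roughly 28% protonated at physiological pH 7.4, so both forms coexist in significant amounts.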

An automated FTP service was used to obtain the predictions for all of our 168 integral membrane proteins by using the Rost et al. method [9]. A total of 11870 residues were correctly predicted in the TMH conformation, 2436 residues were overpredicted, 2512 residues were underpredicted, while 50335 residues were correctly predicted not to be in the TMH conformation. One of many different performance parameters that can be constructed from these data is the Ajj parameter (Methods). Its value is Ajj = 0.656, which is inferior to our value of 0.712 (Table 9) for the same parameter. However, when tested on the subset of 63 proteins used by Rost et al. [9], the Ajj parameter calculated from the predictions returned by the automated service becomes 0.733, which is comparable to our value of Ajj = 0.740 for the same subset of proteins (Table 9). A similar test on the subset of 105 proteins never before seen in the training process for the neural network algorithm gave quite a low value of Ajj = 0.610 for the Rost et al. method [9]. That value is lower than our value of Ajj = 0.682 for the same subset of 105 proteins (Table 9). All 63 proteins selected by Rost et al. [9] are also predicted as membrane proteins, but their method does not recognize 2 out of the 105 membrane proteins selected by us. The underprediction of membrane proteins is due to serious underprediction of transmembrane helices: 50 of the 419 observed TMH are underpredicted and 11 overpredicted by Rost et al. [9]. For comparison, our Table 9 results (row f) for Ajj are obtained with 21 underpredicted and 25 overpredicted TMH in the same test set of 105 proteins. [Pg.429]
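The Ajj parameter is defined in the source's Methods section and is not reproduced here. As an illustration of how a performance parameter is built from the four residue counts above, the Matthews correlation coefficient, a standard choice for two-class per-residue prediction (not necessarily the Ajj used in the source), can be computed:

```python
import math

def matthews_cc(tp, fp, fn, tn):
    """Matthews correlation coefficient from a 2x2 confusion matrix:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

# Residue counts reported above for the full 168-protein set:
# 11870 correct TMH, 2436 overpredicted, 2512 underpredicted, 50335 correct non-TMH
mcc = matthews_cc(tp=11870, fp=2436, fn=2512, tn=50335)
```

Different parameters weight over- and underprediction differently, which is why the text reports the same predictions with a dedicated parameter rather than raw counts.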

A Comparison of the Preference Functions Method with the Neural Network (NN) Method... [Pg.141]

Trejo, L. J., Shensa, M. J. (1993). Linear and neural network models for predicting human signal detection performance from event-related potentials: A comparison of wavelet transforms with other feature extraction methods. Proceedings of the International Simulation Conference. San Diego: Society for Computer Simulation. [Pg.57]

Alon, I., Qi, M. and Sadowski, R. J., 2001. Forecasting aggregate retail sales: a comparison of artificial neural networks and traditional methods. Journal of Retailing and Consumer Services, 8(3), 147-156. [Pg.194]

C.-W. Hsu, C.-J. Lin, A comparison of methods for multi-class support vector machines. IEEE Trans. Neural Networks 13(2), 415-425 (2002). [Pg.204]

Gomes, H.M., Awruch, A.M., 2004. Comparison of response surface and neural network with other methods for structural reliability analysis. Structural Safety 26(1): 49-67. [Pg.1314]

We can see that the method based on the RBF2 neural network has the best quality of time response. The user will wait a shorter time for the collision calculation to finish than with the presently used algorithm. [Pg.1934]

Cobas C, Seoane F, Dominguez S, Sykora S, Davies AN (2011) A new approach to improving automated analysis of proton NMR spectra through global spectral deconvolution (GSD) 23(1). Meiler J, Maier W, Will M, Meusinger R (2002) Using neural networks for 13C NMR chemical shift prediction: comparison with traditional methods. J Magn Reson 157(2): 242-252. [Pg.414]

Although conceptually simpler, a direct construction of the terms in the many-body expansion in Eq. 17 using neural networks has been suggested only recently by Raff and coworkers. Like the HDMR-based method, this approach is systematic, and NNs are used for the expression of each N-body term. In comparison to the HDMR method, the computational cost is reduced because there are far fewer N-body terms than m-dimensional component functions. In a six-atom system, for example, the HDMR ansatz includes 1925 component functions of up to 4 dimensions, whereas a many-body expansion up to four-body interactions has only 50 terms. [Pg.20]
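The two term counts quoted above can be reproduced by simple combinatorics, assuming the HDMR expansion runs over the 15 interatomic distances of a six-atom system and both counts cover the 2- to 4-dimensional (respectively 2- to 4-body) terms:

```python
from math import comb

# 2- to 4-dimensional HDMR component functions over the 15
# interatomic distances of a 6-atom system: C(15,2)+C(15,3)+C(15,4)
hdmr_terms = sum(comb(15, m) for m in range(2, 5))      # 105 + 455 + 1365

# 2- to 4-body terms in a many-body expansion over 6 atoms:
# C(6,2) + C(6,3) + C(6,4)
many_body_terms = sum(comb(6, n) for n in range(2, 5))  # 15 + 20 + 15
```

The counts come out as 1925 and 50, matching the text, which is the combinatorial source of the cost reduction.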

