Big Chemical Encyclopedia


Neural networks approximation

Fig. 6.7 Comparison of the maximum of the neural network approximation of the ODHE ethylene yield obtained in 10 runs of the genetic algorithm with a population size of 60, and the global maximum obtained with a sequential quadratic programming method run for 15 different starting points.
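The strategy the caption describes (seeking the global maximum of a trained surrogate model by restarting a gradient-based solver from many points) can be sketched as follows. This is an illustrative toy, not the ODHE model: the `surrogate` function and its two-dimensional domain are placeholders standing in for the neural-network yield approximation, and SciPy's SLSQP solver stands in for the SQP method mentioned.

```python
# Hypothetical sketch: multistart sequential quadratic programming on a
# smooth surrogate, mimicking the 15-starting-point SQP search in Fig. 6.7.
# The surrogate below is a toy stand-in for the trained network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def surrogate(x):
    # Toy multimodal function: a global peak near (0.3, 0.3) and a
    # smaller local peak near (-0.4, -0.4).
    return np.exp(-np.sum((x - 0.3) ** 2)) + 0.5 * np.exp(-np.sum((x + 0.4) ** 2))

best = None
for _ in range(15):  # 15 starting points, as in the figure
    x0 = rng.uniform(-1.0, 1.0, size=2)
    res = minimize(lambda x: -surrogate(x), x0, method="SLSQP",
                   bounds=[(-1.0, 1.0)] * 2)
    if best is None or -res.fun > best[0]:
        best = (-res.fun, res.x)

print(f"estimated global maximum: {best[0]:.4f} at {best[1]}")
```

Because SLSQP only converges to the nearest local optimum, the multistart loop is what gives the procedure its chance of finding the global peak rather than the smaller one.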
White, H., Artificial Neural Networks Approximation and Learning Theory. Blackwell Publishers, Cambridge, 1992. [Pg.172]

Selmic, R. R., and Lewis, F. L. 2002. Neural-network approximation of piecewise continuous functions: application to function compensation. IEEE Transactions on Neural Networks 13(3), 745-751. [Pg.1876]

Section 8.4 Improving neural network approximations: HCN synthesis. Progress in the combinatorial development of heterogeneous catalysts during the last five years is summed up in the following illustrations for the years 2003 to early 2009 (Section 9.2, experimental development of catalysts; Section 9.3, new methodologies). Reference is made to the various sources where the selected examples are comprehensively described; the selected descriptions are based on the abstracts of the respective publications. In the final Section 9.4, some conclusions are drawn from the work up to now and an outlook for further development in the field is presented. [Pg.155]

Chapter 8. Improving Neural Network Approximations (M. Holena)... [Pg.190]

Barron, A. R. Approximation and estimation bounds for artificial neural networks. Mach. Learn. 14, 115 (1994). [Pg.204]

Hartman, E., Keeler, J. D., and Kowalski, J. M., Layered neural networks with Gaussian hidden units as universal approximators. Neural Comput. 2, 210 (1990). [Pg.204]

Hornik, K., Stinchcombe, M., and White, H., Multilayer feedforward networks are universal approximators. Neural Networks 2, 359 (1989). [Pg.204]

Funahashi, K. I., On the approximate realization of continuous mappings by neural networks. Neural Networks 2, 183-192 (1989). [Pg.268]

Afantitis et al. investigated the use of radial basis function (RBF) neural networks for the prediction of Tg [140]. Radial basis functions are real-valued functions whose value depends only on the distance from an origin. Using the dataset and descriptors described in Cao's work [130] (see above), RBF networks were trained. The best-performing network models showed high correlations between predicted and experimental values. Unfortunately, the authors do not formally report an RMS error, but a cursory inspection of the data reported in the paper suggests approximate errors of around 10 K. [Pg.138]
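The core idea of an RBF network (a weighted sum of Gaussian bumps, each responding only to the distance from its center) can be sketched in a few lines. This is a generic toy re-implementation fitted to a synthetic target, not the Tg model of Afantitis et al.; the centers, width, and sine target are all placeholders chosen for illustration.

```python
# Toy RBF-network regression: Gaussian basis functions on fixed centers,
# output weights fitted by linear least squares.
import numpy as np

def rbf_design(X, centers, width):
    # Each basis value depends only on the distance to its center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                 # synthetic target in place of Tg data

centers = np.linspace(-1, 1, 10)[:, None]  # 10 evenly spaced centers
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = Phi @ w
rms = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMS error: {rms:.4f}")
```

Because the basis functions are fixed, training reduces to a linear least-squares problem for the output weights, which is one practical attraction of RBF networks over fully trained multilayer perceptrons.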

A neural network is a program that processes data in a manner loosely modeled on (part of) the nervous system. Neural networks are especially useful for classification problems and for function-approximation problems that are tolerant of some imprecision and have plenty of training data available, but to which hard-and-fast rules (such as laws of nature) cannot easily be applied. [Pg.330]

An artificial neural network (ANN) model was developed to predict the structure of the mesoporous materials based on the composition of their synthesis mixtures. The predictive ability of the networks was tested by comparing the mesophase structures predicted by the model with those actually determined by XRD. Among the various ANN models available, three-layer feed-forward neural networks with one hidden layer are known to be universal approximators [11, 12]. The neural network retained in this work is described by the following set of equations, which correlate the network output S (here, the structure of the material) to the input variables U, which represent the normalized composition of the synthesis mixture ... [Pg.872]
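The three-layer feed-forward architecture referred to above can be sketched as a simple forward pass: a hidden layer of sigmoid units maps the normalized composition vector U to an output S. This is a structural illustration only; the layer sizes are hypothetical and the weights below are random placeholders, not the trained values from the cited work.

```python
# Minimal three-layer feed-forward network: input U -> hidden layer -> output S.
# Layer sizes and weights are hypothetical placeholders for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(U, W1, b1, W2, b2):
    # Hidden activations H = sigmoid(W1 U + b1); output S = sigmoid(W2 H + b2).
    H = sigmoid(W1 @ U + b1)
    return sigmoid(W2 @ H + b2)

rng = np.random.default_rng(42)
n_in, n_hidden, n_out = 4, 6, 1     # hypothetical layer sizes
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = rng.normal(size=n_out)

U = np.array([0.2, 0.3, 0.4, 0.1])  # a normalized synthesis composition
S = forward(U, W1, b1, W2, b2)
print(S)
```

With a sigmoid output, S lies in (0, 1); for a categorical mesophase label one would instead use one output unit per structure class and pick the largest.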





