
Universal approximators

In recent years some theoretical results have seemed to defeat the basic principle of induction that no mathematical proof of a model's validity can be derived. More specifically, the universal approximation property has been proved for different sets of basis functions (Hornik et al., 1989, for sigmoids; Hartman et al., 1990, for Gaussians) in order to justify the bias of NN developers toward these types of basis functions. This property basically establishes that, for every function, there exists a NN model that exhibits arbitrarily small generalization error. This property, however, should not be erroneously interpreted as a guarantee of small generalization error. Even though there might exist a NN that could... [Pg.170]
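The distinction the excerpt draws, that a good approximating network exists but is not thereby guaranteed to be found or to generalize, can be made concrete with a small experiment. The following is a minimal sketch (not from the cited works): a one-hidden-layer sigmoid network fitted to a simple one-dimensional target by plain gradient descent; the hidden width, learning rate, and epoch count are illustrative choices.

```python
# A one-hidden-layer sigmoid network trained by gradient descent on y = sin(x).
# All hyperparameters here are illustrative, not taken from the cited papers.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)                        # target function to approximate

H = 20                               # number of hidden (sigmoid) units
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(5000):
    h = sigmoid(x @ W1 + b1)         # hidden activations
    y_hat = h @ W2 + b2              # linear output layer
    err = y_hat - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean((sigmoid(x @ W1 + b1) @ W2 + b2 - y) ** 2)))
```

Note that the existence result concerns representation only: the fitted network's error on unseen data still depends on the training data and the optimization, which is exactly the caveat the excerpt raises.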

Hornik, K., Stinchcombe, M., and White, H., Multilayer feedforward networks are universal approximators. Neural Networks 2, 359 (1989). [Pg.204]

J. Park and I.W. Sandberg, Universal approximation using radial basis function networks. Neural Computation, 3 (1991) 246-257. [Pg.698]

An artificial neural network (ANN) model was developed to predict the structure of the mesoporous materials based on the composition of their synthesis mixtures. The predictive ability of the networks was tested by comparing the mesophase structures predicted by the model with those actually determined by XRD. Among the various ANN models available, three-layer feed-forward neural networks with one hidden layer are known to be universal approximators [11, 12]. The neural network retained in this work is described by the following set of equations that correlate the network output S (here, the structure of the material) to the input variables U, which represent the normalized composition of the synthesis mixture ... [Pg.872]
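The equations themselves are truncated in the excerpt, but the standard three-layer feed-forward form can be sketched. The following is a minimal illustration of such a forward pass; the dimensions, weights, and sigmoid activation are assumptions, not the paper's actual fitted model.

```python
# A sketch of the standard three-layer feed-forward equations: hidden layer
# h_j = f(sum_i W1[i,j] U_i + b1_j), output S_k = f(sum_j W2[j,k] h_j + b2_k).
# All names and dimensions are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(U, W_hidden, b_hidden, W_out, b_out):
    """Map a normalized synthesis-mixture composition U to structure scores S."""
    h = sigmoid(U @ W_hidden + b_hidden)   # one hidden layer
    return sigmoid(h @ W_out + b_out)      # output layer

# Made-up dimensions: 4 composition variables, 6 hidden nodes,
# 3 candidate mesophase structures.
rng = np.random.default_rng(1)
U = rng.random(4)
S = forward(U,
            rng.normal(size=(4, 6)), np.zeros(6),
            rng.normal(size=(6, 3)), np.zeros(3))
print(S)   # one score per candidate structure
```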

When an online interpolator is used to estimate the uncertain term, the interpolation error g can be kept bounded, provided that a suitable interpolator structure is chosen [26, 28]. Among universal approximators, Radial Basis Function Interpolators (RBFIs) provide good performance with a relatively simple structure. Hence, Gaussian RBFs have been adopted, i.e.,... [Pg.103]
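A minimal sketch of a Gaussian RBF interpolator of the kind the text adopts follows; the centers, kernel width, and the one-dimensional test function are illustrative choices, not taken from [26, 28].

```python
# Gaussian radial-basis-function interpolation: fit weights w so that
# sum_i w_i * exp(-(x - x_i)^2 / (2 sigma^2)) matches the samples exactly.
import numpy as np

def gaussian_rbf_interpolator(x_samples, y_samples, sigma):
    d = x_samples[:, None] - x_samples[None, :]
    Phi = np.exp(-d**2 / (2 * sigma**2))          # kernel matrix at samples
    w = np.linalg.solve(Phi, y_samples)           # interpolation weights

    def interpolate(x):
        phi = np.exp(-(x[:, None] - x_samples[None, :])**2 / (2 * sigma**2))
        return phi @ w

    return interpolate

x_s = np.linspace(0, 1, 12)
f = lambda x: np.sin(2 * np.pi * x)               # stand-in uncertain term
interp = gaussian_rbf_interpolator(x_s, f(x_s), sigma=0.15)

x_test = np.linspace(0, 1, 101)
print("max interpolation error:", np.abs(interp(x_test) - f(x_test)).max())
```

The appeal noted in the excerpt is visible here: the structure is a single linear solve plus a weighted sum of Gaussians, yet the interpolation error stays small and bounded over the sampled range.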

The dimensionless product c[η] is defined as the coil overlap parameter; it provides information about the changing nature of the interactions in a dispersion (Blanshard and Mitchell, 1979; Morris et al., 1981). For dilute dispersions, i.e., below c*, the slope of log(ηsp) vs log(c[η]) universally approximates 1.4. At the upper practical extreme, with exceptions (especially the galactomannans; Morris et al., 1981), the slope increases sharply to 3.3, illustrating wide deviations from Newtonian flow in the segment approaching elasticity. The deviations are significant when 5 < c[η] < 10 (Barnes... [Pg.74]
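The regime slopes quoted above are obtained by linear regression on the log-log pairs. The following is a minimal sketch with synthetic dilute-regime data (not from the cited studies) showing the calculation.

```python
# Estimate the dilute-regime slope of log(eta_sp) versus log(c[eta]) by
# linear regression; the data below are synthetic, constructed to follow
# an illustrative power law with exponent 1.4.
import numpy as np

c_eta = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # coil overlap parameter
eta_sp = 0.9 * c_eta**1.4                           # illustrative dilute data

slope, intercept = np.polyfit(np.log10(c_eta), np.log10(eta_sp), 1)
print(f"dilute-regime slope ~ {slope:.2f}")          # ~1.4 for these data
```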

The universal approximation of Shoup and Szabo (12.23) has a similar, but wider, gap range 0.002 < T < 10, within which it has two excursions as high as 0.6% in amplitude. This is in accord with the original statement in [507], which guarantees a maximum error of 0.6%. [Pg.207]
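For reference, a sketch of the Shoup and Szabo expression in its commonly quoted form follows (dimensionless chronoamperometric current at a microdisc as a function of dimensionless time, T in the text). The coefficients are as usually cited in the literature, so they should be verified against Eq. (12.23) of the source before being relied upon.

```python
# Shoup-Szabo expression for the dimensionless microdisc current, in the
# form commonly quoted; coefficients are an assumption to be checked
# against Eq. (12.23) of the source.
import numpy as np

def shoup_szabo(tau):
    s = tau**-0.5
    return 0.7854 + 0.8862 * s + 0.2146 * np.exp(-0.7823 * s)

for tau in (0.002, 0.1, 1.0, 10.0):
    print(f"tau = {tau:g}: f(tau) = {shoup_szabo(tau):.4f}")
```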

We can sum up what one can do with a neural network. In principle, neural networks are universal approximators and can compute any computable function. In practice, neural networks are especially useful for classification and function approximation/mapping problems that have plenty of training data available and can tolerate some imprecision but that resist the easy application of hard and fast rules. [Pg.157]

Hartman, E., Keeler, J. D., and Kowalski, J. M., Layered neural networks with Gaussian hidden units as universal approximators. Neural Comput. 2, 210 (1990). [Pg.189]

Ascertaining the number of hidden-layer nodes. The universal approximation theorem requires a sufficient number of hidden-layer nodes, but too many hidden nodes cause an excessive number of connections and worsen the network's generalization ability. To overcome this defect, the value range of the hidden-node number should be ascertained, and the maximum value in this range is taken as the number of hidden-layer nodes. The value range of the hidden-node number m is ... [Pg.1206]
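One generic way to work within a bounded hidden-node range, sketched below, is to scan the candidate sizes and compare validation error; this is an illustration of the trade-off the excerpt describes, not the specific selection rule of the source, and the data, search range, and hyperparameters are all assumed.

```python
# Scan candidate hidden-node counts over an assumed range and report the
# size with the lowest validation error. Data and range are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]          # illustrative target

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

best_m, best_err = None, np.inf
for m in range(2, 16):                            # assumed range for m
    net = MLPRegressor(hidden_layer_sizes=(m,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    err = np.mean((net.predict(X_va) - y_va) ** 2)
    if err < best_err:
        best_m, best_err = m, err

print(f"selected hidden nodes: {best_m} (validation MSE {best_err:.4f})")
```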

K. Hornik, M. Stinchcombe, and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3 (1990) 551-560. [Pg.63]

The history of NNs can be traced back to 1943, when physiologists McCulloch and Pitts established the model of a neuron as a binary linear threshold unit (McCulloch and Pitts, 1943). One of the most well-known features of NNs is that they can be used as universal approximators (Scarselli and Tsoi, 1998; Zhang et al., 2012). In view of this feature, NNs have been widely applied to a variety of related problems, such as forecasting, modeling, classification and clustering. [Pg.16]

Russell, S. and Norvig, P., 1994. Artificial Intelligence: A Modern Approach. Prentice Hall.
Scarselli, F. and Tsoi, A., 1998. Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results. Neural Networks, 11 (1), 15-37. [Pg.39]

Zhang, R., Lan, Y., Huang, G. B. and Xu, Z. B., 2012. Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems, 23 (2), 365-371. [Pg.40]

The Universal Approximation Theorem states that a net with a single hidden layer, given a suitably large number of hidden nodes, can approximate any suitably smooth function well. Hence, for a given input, the network output may be compared with the required output. The total mean square error function is then used to measure how close the actual... [Pg.83]
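The total mean square error measure the excerpt refers to is simply the average squared difference between network outputs and required outputs; a minimal sketch:

```python
# Total mean-square error between network outputs and required outputs.
import numpy as np

def total_mse(network_output, required_output):
    network_output = np.asarray(network_output, dtype=float)
    required_output = np.asarray(required_output, dtype=float)
    return np.mean((network_output - required_output) ** 2)

print(total_mse([0.9, 0.1, 0.8], [1.0, 0.0, 1.0]))   # 0.02
```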

A neural network has the advantage that it is a universal approximator, so the inner PLS model is not limited to some predefined functional form. In Qin and McAvoy (1992) the neural network PLS (NNPLS) algorithm is introduced by replacing the linear inner relationship in equation (4) with a feed-forward multilayer perceptron neural network, such that... [Pg.437]
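The idea can be sketched as follows: extract the PLS score vectors, then fit a small feed-forward network to the inner relation between input and output scores instead of the linear fit. This is a minimal illustration under assumed data, not Qin and McAvoy's exact algorithm (in particular, it fits the inner models after a standard PLS decomposition rather than within the deflation loop).

```python
# NNPLS sketch: replace the linear inner relation u = b*t of PLS with a
# small feed-forward network per latent dimension. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(300, 1))   # nonlinear target

pls = PLSRegression(n_components=2).fit(X, Y)
T, U = pls.transform(X, Y)               # input and output score vectors

inner_nets = []
for a in range(T.shape[1]):              # one inner model per component
    net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=3000, random_state=0)
    net.fit(T[:, [a]], U[:, a])          # nonlinear inner relation
    inner_nets.append(net)

u_hat = np.column_stack([m.predict(T[:, [a]]) for a, m in enumerate(inner_nets)])
print("inner-relation fit MSE:", float(np.mean((u_hat - U) ** 2)))
```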

An alternative approach to modelling chemical kinetics can be realized by the use of neural networks, which serve as universal approximators (Hertz et al. [4]). Neural models describe only relations between measurable quantities, which in the case of the example above can be the slowly varying concentration of hydrobromic acid. Within the framework of neural networks we have to model the relationship between the concentration and the reaction rate of hydrobromic acid... [Pg.244]
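A minimal sketch of this kind of neural kinetics model follows; the rate law used to generate the data is an assumed placeholder, not the actual HBr kinetics of the source, and the network size is illustrative.

```python
# Fit the relation between a measured concentration c and its observed
# rate with a small feed-forward network. The generating rate law below
# is an assumption for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
c = rng.uniform(0.05, 1.0, 500)                              # concentrations
rate = 0.8 * c / (1.0 + 2.0 * c) + 0.01 * rng.normal(size=500)  # assumed law + noise

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(c.reshape(-1, 1), rate)

c_new = np.array([[0.2], [0.5], [0.9]])
print(net.predict(c_new))     # predicted rates at unseen concentrations
```

The design point of the excerpt is visible here: the network relates only measurable quantities (concentration in, rate out) without committing to a mechanistic rate expression.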

