
Network pruning

Hassibi, B., Stork, D. G. & Wolff, G. J. (1993). Optimal brain surgeon and general network pruning. Proc. IEEE International Conf. on Neural Networks 1, 293-9. [Pg.100]

Hassibi, B. & Stork, D. G. (1993). Second order derivatives for network pruning: Optimal Brain Surgeon. In Advances in Neural Information Processing Systems, vol. 5 (ed. S. J. Hanson, J. D. Cowan and C. L. Giles), pp. 263-270. Morgan Kaufmann, San Mateo, CA. [Pg.150]

Stahlberger, A. & Riedmiller, M. (1997). Fast network pruning and feature extraction using the Unit-OBS algorithm. In Advances in Neural Information Processing Systems, vol. 9 (ed. M. C. Mozer, M. I. Jordan and T. Petsche), pp. 655-661. The MIT Press, Cambridge, MA. [Pg.151]

Hassibi, B. & Stork, D. G. (1993). Second order derivatives for network pruning: Optimal Brain Surgeon. Advances in Neural Information Processing Systems 5, 164-71. [Pg.158]

Stahlberger, A. & Riedmiller, M. (1997). Fast network pruning and feature extraction by using the Unit-OBS algorithm. Advances in Neural Information Processing Systems 9, 655-61. [Pg.159]

B. Hassibi and D. Stork, in Advances in Neural Information Processing Systems 5 (NIPS 5), S. Hanson, J. Cowan, and C. Giles, Eds., Morgan Kaufmann Publishers, San Mateo, CA, 1993, pp. 164-171. Second-Order Derivatives for Network Pruning: Optimal Brain Surgeon. [Pg.139]

F2708.m - sine wave approximation
F2712.m - code for generation of neural net model
F2716.m - code for neural network pruning
Fit.m - used to calculate model fit
nnsiso.mat - data for non-linear SISO process... [Pg.379]

G. Castellano, A.M. Fanelli and M. Pelillo, An iterative pruning algorithm for feedforward neural networks. IEEE Trans. Neural Networks, 8 (1997) 519-531. [Pg.696]

J. Zhang, J.-H. Jiang, P. Liu, Y.-Z. Liang and R.-Q. Yu, Multivariate nonlinear modelling of fluorescence data by neural network with hidden node pruning algorithm. Anal. Chim. Acta, 344(1997) 29 0. [Pg.696]

An alternative to expanding a small network is to start with a large network and work backward: nodes, together with the links to them, are gradually pruned out and the network retrained, and the process continues until the performance of the network starts to deteriorate. [Pg.43]
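As a rough illustration of this backward strategy, the sketch below trains a deliberately large single-hidden-layer network, then repeatedly removes the apparently least useful hidden node and retrains, stopping as soon as validation error worsens. It is a minimal Python/NumPy example, not code from any of the sources cited on this page; for brevity only the output weights are (re)fitted, and the node count, toy data and saliency measure (smallest output-weight magnitude) are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def hidden(X, W, b):
        return np.tanh(X @ W + b)                      # hidden-layer activations

    def fit_output(H, y):
        return np.linalg.lstsq(H, y, rcond=None)[0]    # (re)fit output weights

    def val_error(X, y, W, b, v):
        return np.mean((hidden(X, W, b) @ v - y) ** 2)

    # toy data: noisy sine wave, split into training and validation halves
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
    Xtr, ytr, Xva, yva = X[::2], y[::2], X[1::2], y[1::2]

    # start with a deliberately large hidden layer
    W = rng.standard_normal((1, 30))
    b = rng.standard_normal(30)
    v = fit_output(hidden(Xtr, W, b), ytr)
    best = val_error(Xva, yva, W, b, v)

    while W.shape[1] > 1:
        k = np.argmin(np.abs(v))                       # "least useful" node: smallest output weight
        W2, b2 = np.delete(W, k, axis=1), np.delete(b, k)
        v2 = fit_output(hidden(Xtr, W2, b2), ytr)      # retrain the smaller network
        err = val_error(Xva, yva, W2, b2, v2)
        if err > best:                                 # performance starts to deteriorate
            break
        W, b, v, best = W2, b2, v2, err

    print(f"kept {W.shape[1]} hidden nodes, validation MSE {best:.4f}")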

The slow decay of the signal counter, unless it is boosted by fresh wins, serves a second purpose. While a large counter indicates a suitable region of the map for the insertion of a new unit, a very small value indicates the opposite: the unit may be of so little value to the network that it is a candidate for deletion (section 4.5). Unlike the SOM, the GCS allows units to be removed as well as added, so the signal counter can be used to identify redundant areas of the network where pruning a unit may enhance efficiency. [Pg.103]

In a completed map, every unit should have a similar probability of being the winning unit for a sample picked at random from the dataset. However, as the map evolves and the weight vectors adjust, the utility of an individual unit may change. Because the signal counter or the local errors are reduced every epoch by a small fraction, the value of this measure for units that are very rarely selected as BMUs diminishes to a value close to zero, indicating that these units contribute little to the network and can therefore be pruned out. [Pg.108]
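The following Python sketch illustrates the mechanism described in the two paragraphs above: the signal counter of the best-matching unit (BMU) is boosted whenever it wins, all counters decay by a small fraction at the end of every epoch, and units whose counters have decayed almost to zero are flagged as prunable. The decay rate, threshold and simple weight update are assumptions chosen for illustration, not the actual GCS equations.

    import numpy as np

    DECAY = 0.05            # fraction removed from every counter per epoch (assumed value)
    PRUNE_THRESHOLD = 0.01  # counters below this flag a redundant unit (assumed value)

    def train_epoch(units, counters, samples):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(units - x, axis=1))  # best-matching unit
            counters[bmu] += 1.0                                 # boost the winner
            units[bmu] += 0.1 * (x - units[bmu])                 # simple weight update
        counters *= (1.0 - DECAY)                                # slow decay each epoch
        return np.flatnonzero(counters < PRUNE_THRESHOLD)        # prunable units

    # toy usage: ten random units, data drawn from only two clusters, so most
    # units are rarely selected as BMU and their counters stay near zero
    rng = np.random.default_rng(1)
    units = rng.standard_normal((10, 2))
    counters = np.zeros(10)
    data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])

    for epoch in range(100):
        prunable = train_epoch(units, counters, data)
    print("units that could be pruned:", prunable)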

Pruning units out of a network that has only recently been grown sounds like a curious tactic. Why expand the network in the first place if bits of it will later be removed? However, the ability to delete units as well as add them maximizes the utility of the network by spreading knowledge as evenly as possible across the units. [Pg.109]

Reed, R. (1993). Pruning algorithms - A survey. IEEE Trans. Neural Networks 4,740-7. [Pg.101]

Koene, R. A. & Takane, Y. (1999). Discriminant component pruning: Regularization and interpretation of multi-layered back-propagation networks. Neural Comput. 11, 783-802. [Pg.150]

The process of extracting rules from a trained network can be made much easier if the complexity of the network has first been reduced; furthermore, fewer connections are expected to result in more concise rules. Setiono (1997a) described an algorithm for extracting rules from a pruned network. The network was a standard three-layer feedforward back-propagation network trained to a pre-specified accuracy rate. The pruning process attempted to eliminate as many connections as possible while maintaining that accuracy rate. [Pg.152]
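The sketch below illustrates only the general idea of removing as many connections as possible while a pre-specified accuracy rate is maintained. It is not Setiono's penalty-function algorithm itself: it simply deletes the smallest-magnitude connection, rechecks accuracy, and restores the connection and stops once accuracy falls below the required rate. The accuracy threshold, toy data and weights in the usage example are invented for illustration.

    import numpy as np

    def prune_connections(weights, accuracy_fn, min_accuracy=0.95):
        """weights: 1-D array of connection weights (0.0 means already removed).
        accuracy_fn: callable returning classification accuracy for a weight vector."""
        w = weights.copy()
        while True:
            active = np.flatnonzero(w)
            if active.size == 0:
                return w
            k = active[np.argmin(np.abs(w[active]))]   # weakest remaining connection
            saved, w[k] = w[k], 0.0                    # tentatively remove it
            if accuracy_fn(w) < min_accuracy:          # accuracy rate no longer met
                w[k] = saved                           # restore it and stop
                return w

    # toy usage: a linear classifier on random data where only the first two inputs matter
    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 6))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    w0 = np.array([1.0, 0.5, 0.02, -0.01, 0.03, 0.005])        # "trained" weights

    acc = lambda w: np.mean(((X @ w) > 0).astype(int) == y)
    print("pruned weights:", prune_connections(w0, acc))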

Setiono, R. (1997a). Extracting rules from neural networks by pruning and hidden-unit splitting. Neural Comput 9,205-25. [Pg.158]

Setiono, R. (1997b). A penalty-function approach for pruning feedforward neural networks. Neural Comput 9,185-204. [Pg.158]


See other pages where Network pruning is mentioned: [Pg.44]    [Pg.84]    [Pg.90]    [Pg.154]    [Pg.155]    [Pg.179]    [Pg.180]    [Pg.87]    [Pg.678]    [Pg.268]    [Pg.43]    [Pg.483]    [Pg.389]    [Pg.192]    [Pg.200]    [Pg.190]    [Pg.1902]    [Pg.734]    [Pg.206]    [Pg.426]    [Pg.755]    [Pg.147]    [Pg.152]    [Pg.153]
See also in source #XX -- [Pg.43]

See also in source #XX -- [Pg.90]



