
Gradient-descent optimization

The Back-Propagation Algorithm (BPA) is a supervised learning method for training ANNs and is one of the most common training techniques. It uses a gradient-descent optimization method, also referred to as the delta rule when applied to feedforward networks. A feedforward network trained with the delta rule is called a Multi-Layer Perceptron (MLP). [Pg.351]
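For reference, the delta rule (a gradient-descent step on the squared output error) is commonly written, in generic textbook notation rather than the source's own symbols, as

\[
  \Delta w_{ij} \;=\; \eta \,(t_j - y_j)\, x_i ,
\]

where \(\eta\) is the learning rate, \(t_j\) and \(y_j\) are the target and actual outputs of unit \(j\), and \(x_i\) is the \(i\)-th input to that unit.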

The feed-forward network can be trained offline in batch mode, using data or a look-up table, with any of the back-propagation training algorithms. The back-propagation algorithm for multilayer networks is a gradient-descent optimization procedure in which minimization of a mean-square... [Pg.570]
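As an illustration of such batch-mode training, here is a minimal, self-contained sketch (an assumption-laden example, not code from the cited source): a one-hidden-layer feed-forward network fitted to look-up-table data by gradient descent on the mean-squared error. The data set, network size, and learning rate are arbitrary choices.

```python
# Minimal sketch: batch-mode gradient descent on the mean-squared error of a
# one-hidden-layer feedforward network. Data, sizes, and learning rate are
# illustrative assumptions, not values from the cited source.
import numpy as np

rng = np.random.default_rng(0)

# Look-up-table style training data: map x to sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
T = np.sin(X)

n_in, n_hidden, n_out = 1, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)
lr = 0.05  # assumed step size for the gradient-descent update

for epoch in range(5000):
    # Forward pass over the whole data set (batch mode)
    H = np.tanh(X @ W1 + b1)          # hidden activations
    Y = H @ W2 + b2                   # linear output layer
    E = Y - T                         # output error
    mse = np.mean(E ** 2)

    # Backward pass: gradients of the mean-squared error
    dY = 2.0 * E / X.shape[0]
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = dY @ W2.T * (1.0 - H ** 2)   # tanh derivative
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    # Single gradient-descent step on all weights per epoch
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {mse:.4f}")
```

Each epoch accumulates the error over the whole data set before a single weight update, which is what distinguishes batch-mode training from pattern-by-pattern (online) updating.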

When a non-parametric, monotonic relationship is sought between the distance matrix and the distances between objects in the projection space, non-metric MDS results. The approaches usually differ in the stress criterion chosen for minimization; no analytical solution is available, so other methods must be used, such as the iterative gradient-descent optimization of Sammon's mapping, repeating the mapping several times from different starting configurations and with different parameters to avoid local... [Pg.126]
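A minimal sketch of this idea, assuming plain fixed-step gradient descent on Sammon's stress rather than any particular published implementation, might look like the following; the data, learning rate, and iteration count are illustrative only.

```python
# Hedged sketch: Sammon's mapping by plain gradient descent on the stress.
# The data set, starting configuration, step size, and iteration count are
# assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))                 # objects in the original space

def pairwise_dist(A):
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

D = pairwise_dist(X)                         # fixed distance matrix of the objects
mask = ~np.eye(len(X), dtype=bool)           # ignore the zero diagonal
c = D[mask].sum() / 2.0                      # sum of distances over unordered pairs
Dm = D.copy()
np.fill_diagonal(Dm, 1.0)                    # avoid division by zero on the diagonal

Y = X[:, :2].copy()                          # crude starting configuration in 2-D
lr = 0.5                                     # assumed fixed step size

for it in range(1000):
    d = pairwise_dist(Y)
    np.fill_diagonal(d, 1.0)
    # Gradient of Sammon's stress with respect to each projected point
    coeff = np.where(mask, (d - D) / (d * Dm), 0.0)
    diff = Y[:, None, :] - Y[None, :, :]
    grad = (2.0 / c) * (coeff[:, :, None] * diff).sum(axis=1)
    Y -= lr * grad                           # plain gradient-descent step

d = pairwise_dist(Y)
np.fill_diagonal(d, 1.0)
stress = np.where(mask, (D - d) ** 2 / Dm, 0.0).sum() / (2.0 * c)
print(f"final Sammon stress: {stress:.4f}")
```

In practice, as the excerpt notes, the mapping is restarted from several different configurations and parameter settings, and the projection with the lowest stress is kept.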

All the techniques are iterative and, except for the simplest chemical systems, require a computer. The methods include optimization by steepest descent (White et al., 1958; Boynton, 1960) and gradient descent (White, 1967), back-substitution (Kharaka and Barnes, 1973; Truesdell and Jones, 1974), and progressive narrowing of the range of values allowed for each variable (the monotone sequence method; Wolery and Walters, 1975). [Pg.61]

Using a first-order optimization algorithm (gradient descent), we find our modification in each weight to be... [Pg.426]
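The excerpt's equation is not reproduced above; in generic notation (an assumption about the intended form, not the source's exact symbols), the gradient-descent weight modification is usually written

\[
  \Delta w_{ij} \;=\; -\,\eta\,\frac{\partial E}{\partial w_{ij}} ,
\]

where \(E\) is the error function being minimized and \(\eta\) is the learning rate.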

Simulated annealing is an optimization technique particularly well suited to the multiple-minima characteristic of macromolecular structure refinement. Unlike gradient-descent methods, simulated annealing can overcome barriers between... [Pg.1525]
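A hedged sketch of the idea on a toy one-dimensional energy function follows; the objective, cooling schedule, and step size are assumptions made for illustration, not taken from the cited work.

```python
# Sketch of simulated annealing on a rugged one-dimensional energy function,
# illustrating how occasional uphill moves let the search cross barriers that
# would trap a pure gradient-descent minimizer.
import math
import random

def energy(x):
    # Rugged test function: many local minima superimposed on a parabola
    return 0.1 * x * x + math.sin(3.0 * x)

random.seed(0)
x = 4.0                    # start far from the global minimum
temperature = 2.0
cooling = 0.995            # geometric cooling schedule (assumed)
best_x, best_e = x, energy(x)

for step in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)   # random trial move
    dE = energy(candidate) - energy(x)
    # Metropolis criterion: always accept downhill moves, sometimes uphill
    if dE < 0 or random.random() < math.exp(-dE / temperature):
        x = candidate
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    temperature *= cooling

print(f"best x = {best_x:.3f}, energy = {best_e:.3f}")
```

Because uphill moves are accepted with probability exp(-dE/T), the search can escape local minima early on and settles into downhill-only behaviour as the temperature falls.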

HyperChem supplies three types of optimizers, or algorithms: steepest descent, conjugate gradient (Fletcher-Reeves and Polak-Ribiere), and block diagonal (Newton-Raphson). [Pg.58]
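For orientation only, here is a generic sketch of the Fletcher-Reeves conjugate-gradient scheme named above; it is not HyperChem code, and the test function, line search, and tolerances are assumptions.

```python
# Generic Fletcher-Reeves conjugate-gradient sketch on a standard test
# function (Rosenbrock), used here as a stand-in for a potential surface.
import numpy as np

def energy(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def gradient(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def line_search(x, d, t=1.0):
    # Crude backtracking line search along direction d
    e0 = energy(x)
    while energy(x + t * d) > e0 and t > 1e-12:
        t *= 0.5
    return t

x = np.array([-1.2, 1.0])
g = gradient(x)
d = -g                                   # first step: steepest descent
for it in range(200):
    x = x + line_search(x, d) * d
    g_new = gradient(x)
    if np.linalg.norm(g_new) < 1e-6:
        g = g_new
        break
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves factor
    d = -g_new + beta * d                # new conjugate search direction
    if (it + 1) % 10 == 0:
        d = -g_new                       # periodic restart to steepest descent
    g = g_new

print(f"minimum near {x}, |grad| = {np.linalg.norm(g):.2e}")
```

Steepest descent would reuse -g_new as the search direction at every step; the conjugate-gradient variants differ only in how the previous direction is mixed back in through the beta factor.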


