
Hyperbolic tangent activation functions

The first work on pKa determination by zone electrophoresis using paper strips was described by Waldron-Edward in 1965 (15). Kiso et al. in 1968 showed the relationship between pH, mobility, and pKa using a hyperbolic tangent function (16). Unfortunately, these methods were not widely accepted because of the manual operation and low reproducibility of the paper electrophoresis format. The automated capillary electrophoresis (CE) instrument allows rapid and accurate pKa determination. Beckers et al. showed that thermodynamic pKa and absolute ionic mobility values of several monovalent weak acids could be determined accurately using effective mobility and activity at two pH points (17). Cai et al. reported pKa values of two monovalent weak bases and p-aminobenzoic acid (18). Cleveland et al. established a thermodynamic pKa determination method using nonlinear regression analysis for monovalent compounds (19). We derived the general equation and applied it to multivalent compounds (20). Since then, there have been many reports on pKa determination by CE for cephalosporins (21), sulfonated azo dyes (22), ropinirole and its impurities (23), cytokinins (24), and so on. [Pg.62]
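For context, a hyperbolic-tangent dependence arises naturally from the standard relation between effective mobility and pH for a monovalent weak acid. The identity below is a sketch of the presumed connection, not a formula quoted from refs. (15)-(20); here μ_a denotes the mobility of the fully ionized form:

```latex
\mu_{\mathrm{eff}}
  = \frac{\mu_a}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}
  = \frac{\mu_a}{2}\left[1 + \tanh\!\left(\frac{\ln 10}{2}\,(\mathrm{pH} - \mathrm{p}K_a)\right)\right]
```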

The neurons weight all inputs and provide an output via the activation function. The complexity of the neural networks used is determined by the number of nodes in the hidden layer (2, 3, 5, or 7). The activation function applied in this application is the hyperbolic tangent. In mathematical terms, the output of neuron j is defined by yj = tanh(Σi wji xi), with yj the output of neuron j... [Pg.58]
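A minimal sketch of that computation; the input and weight values are hypothetical placeholders, not values from the cited application:

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of all inputs passed through a tanh activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(z)

# Hypothetical three-input neuron:
print(neuron_output([0.5, -1.2, 0.3], [0.8, 0.1, -0.4]))
```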

Activation function: Every neuron has its own activation function, and generally only two activation functions are used in a particular NN. Neurons in the input layer use the identity function as the activation function; that is, the output of an input neuron equals its input. The activation functions of hidden and output layers can in theory be any differentiable, non-linear function. Several well-behaved (bounded, monotonically increasing, and differentiable) activation functions are commonly used in practice, including (1) the sigmoid function f(x) = 1/(1 + exp(-x)); (2) the hyperbolic tangent function f(x) = (exp(x) - exp(-x))/(exp(x) + exp(-x)); (3) the sine or cosine function f(x) = sin(x) or f(x) = cos(x); (4) the linear function f(x) = x; (5) the radial basis function. Among them, the sigmoid function is the most popular, while the radial basis function is used only in radial basis function networks. [Pg.28]
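A direct transcription of these five activations into Python; the Gaussian form of the radial basis function is one common choice, assumed here because the excerpt does not give its formula:

```python
import math

def sigmoid(x):   # (1) logistic: f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):      # (2) hyperbolic tangent
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def sine(x):      # (3) sine (cosine is analogous)
    return math.sin(x)

def linear(x):    # (4) identity / linear
    return x

def rbf(x, center=0.0, width=1.0):  # (5) Gaussian radial basis (assumed form)
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))
```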

All four parameters a, b, c and d can be related to weight parameters of the NN. The effect of these parameters on the shape of the resulting activation function is shown in Fig. 4 for the case of the hyperbolic tangent. This flexibility is the reason artificial NNs can adapt very accurately to almost any kind of function by combining a large number of these simple elements. [Pg.346]

Fig. 4 Illustration of the flexibility of the hyperbolic tangent activation function. The neural network consists of nested functional elements of the form h(x) = c·tanh(a·x + b) + d. In (a) the slope is changed by modifying parameter a, in (b) the function is shifted horizontally by changing b, in (c) the function is stretched by changing parameter c, and in (d) the function is shifted vertically by changing parameter d.
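The four roles illustrated in Fig. 4 can be reproduced with a one-line helper; the parameter values below are arbitrary:

```python
import math

def h(x, a=1.0, b=0.0, c=1.0, d=0.0):
    """Nested functional element h(x) = c * tanh(a*x + b) + d (cf. Fig. 4)."""
    return c * math.tanh(a * x + b) + d

print(h(0.5, a=3.0))   # (a) steeper slope
print(h(0.5, b=-1.0))  # (b) horizontal shift
print(h(0.5, c=2.0))   # (c) vertical stretch
print(h(0.5, d=1.0))   # (d) vertical shift
```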
The structure of a NN, i.e., the number of layers and nodes per layer, is called the architecture or topology of the NN. To describe the architecture, a notation is commonly used that specifies the number of nodes in each layer. The NN in Fig. 2, for example, would be a 3-4-3-1 NN. Often the types of activation functions are also given, using, e.g., a 't' for the hyperbolic tangent and an 'l' for a linear function: the notation 3-4-3-1 ttl specifies the use of the hyperbolic tangent in both hidden layers and a linear function in the output layer. [Pg.347]
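A minimal sketch of how the 3-4-3-1 ttl notation could be turned into a forward pass; the random weight initialization and helper name are illustrative, not from the text:

```python
import math, random

def forward(x, layer_sizes=(3, 4, 3, 1), activations=("t", "t", "l")):
    """Forward pass through a fully connected 3-4-3-1 ttl network:
    't' = hyperbolic tangent in the hidden layers, 'l' = linear output."""
    random.seed(0)  # placeholder weights; real weights come from training
    for (n_in, n_out), act in zip(zip(layer_sizes, layer_sizes[1:]), activations):
        weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
        z = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
        x = [math.tanh(v) for v in z] if act == "t" else z
    return x

print(forward([0.1, 0.5, -0.3]))  # single output node -> list of length 1
```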

The influence of temperature on a battery's available capacity is highly nonlinear. Shen et al. improved the previous ANN by adding the battery temperature as a second input [5]. They also added two more neurons in the hidden layer and replaced the hyperbolic tangent activation function with a sigmoid one, as follows ... [Pg.236]
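The excerpt truncates before the sigmoid expression appears. A generic sketch of such a two-input network with sigmoid hidden units might look as follows; the layer size, weights, and function names are assumptions for illustration, not Shen et al.'s actual model:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def battery_ann(discharge_current, temperature, n_hidden=6):
    """Hypothetical two-input ANN: sigmoid hidden units map (current, temperature)
    to an available-capacity estimate. Weights are random placeholders that a
    training procedure would fit; sizes are not taken from Shen et al."""
    random.seed(1)
    inputs = [discharge_current, temperature]
    hidden = [sigmoid(sum(random.uniform(-1, 1) * v for v in inputs))
              for _ in range(n_hidden)]
    return sum(random.uniform(-1, 1) * h for h in hidden)

print(battery_ann(1.5, 25.0))  # e.g., 1.5 A at 25 °C
```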

