Big Chemical Encyclopedia


Summation functions

Weighted Sum This is by far the most common summation function. The output of each PE j connecting to PE i is multiplied by the weight in the connection from j to i. [Pg.76]

Majority The number of terms in the weighted sum that are greater than zero minus the number of terms that are less than or equal to zero. [Pg.77]

City Block The city block distance between the output vector and the weight vector, i.e. the sum of the absolute differences of their corresponding components. [Pg.77]
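The three summation functions above can be sketched as follows. This is a minimal illustration, assuming a PE i that receives an input vector x from connected PEs and holds a weight vector w; the function names are illustrative, not from the source.

```python
# Sketch of the three summation functions: weighted sum, majority, city block.
# x is the vector of outputs from connected PEs, w the connection weights.

def weighted_sum(x, w):
    # Each input x_j is multiplied by the weight on the connection j -> i.
    return sum(xj * wj for xj, wj in zip(x, w))

def majority(x, w):
    # Number of terms in the weighted sum greater than zero, minus the
    # number of terms less than or equal to zero.
    terms = [xj * wj for xj, wj in zip(x, w)]
    positive = sum(1 for t in terms if t > 0)
    return positive - (len(terms) - positive)

def city_block(x, w):
    # City block (Manhattan) distance between output and weight vectors.
    return sum(abs(xj - wj) for xj, wj in zip(x, w))

x = [1.0, -2.0, 0.5]
w = [0.5, 0.25, -1.0]
print(weighted_sum(x, w))  # 0.5 - 0.5 - 0.5 = -0.5
print(majority(x, w))      # one positive term, two non-positive: -1
print(city_block(x, w))    # 0.5 + 2.25 + 1.5 = 4.25
```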


In most real situations, the summation function in eqn. (61) will make only a minor contribution to l/i>cat and we may accordingly adopt the approximation... [Pg.95]

The dimensionless temperature given by eq. (66) is plotted in Fig. 8a, b. Our summation function, in contrast to the built-in Mathematica Sum function, takes as many terms as necessary. To avoid using an extremely large number of terms we intentionally start plotting from X = 0.01, because the missing part of the plot is not needed for our conclusion. [Pg.63]

From the table of the χ² distribution summation function (in statistical textbooks) the value 17.7 is derived for 29 degrees of freedom. Figure 21-45 allows a fast judgment of these values without consulting statistical tables. Values for (n − 1)/χ² are shown for different numbers of samples n. [Pg.2278]
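The quoted value 17.7 for 29 degrees of freedom matches the lower 5% quantile of the χ² distribution. Assuming that is the distribution meant, it can be checked with a stdlib-only sketch of the regularized lower incomplete gamma function (series expansion), whose value at (k/2, x/2) is the χ² CDF:

```python
import math

def regularized_lower_gamma(a, x):
    # Series expansion of P(a, x) = gamma(a, x) / Gamma(a); converges
    # quickly for x < a + 1, which holds for the values used here.
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    k = a
    for _ in range(1000):
        k += 1.0
        term *= x / k
        total += term
        if term < total * 1e-15:
            break
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

def chi2_cdf(x, df):
    # CDF of the chi-square distribution with df degrees of freedom.
    return regularized_lower_gamma(df / 2.0, x / 2.0)

print(chi2_cdf(17.708, 29))  # close to 0.05, confirming the tabulated 17.7
```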

Finally, we evaluate the mole fraction summation functions. [Pg.446]
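In tray-by-tray separation calculations, a mole fraction summation function typically measures how far the mole fractions on a stage are from summing to one. The snippet below is a hedged sketch of that convention (the source does not give the exact form); the name and signature are illustrative.

```python
# Illustrative mole fraction summation function for one stage:
# S = sum_i x_i - 1, which should be driven to zero at convergence.

def summation_function(x_fractions):
    return sum(x_fractions) - 1.0

print(summation_function([0.2, 0.3, 0.5]))     # 0.0: fractions are normalized
print(summation_function([0.25, 0.25, 0.25]))  # -0.25: fractions fall short of 1
```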

As the values of p corresponding to different values of j will have to be included in a summation function, it seems appropriate to express them all as functions of the same unit vectors, for example a2, a 2, a"2, and b1. [Pg.92]

One problem with using the likelihood function directly is that it often involves multiplying many different numbers together which may lead to numerical overflow or underflow. For this reason, the log of the likelihood, which is called the log-likelihood function, is often used instead. Taking the logarithm of a likelihood function leads to a summation function, which is much easier for computers to handle. For example, the log-likelihood function for Eq. (A.68) is... [Pg.351]
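The underflow problem and its cure can be demonstrated directly. This is an illustration only, using a Gaussian density with made-up data and parameters (the text's Eq. (A.68) is not reproduced here): the product of many small density values underflows to zero, while the sum of their logarithms stays finite.

```python
import math

def log_gaussian_pdf(x, mu, sigma):
    # Log of the normal density, computed directly to avoid exp/log round trips.
    return -((x - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma * math.sqrt(2 * math.pi))

data = [0.1 * i for i in range(2000)]  # 2000 synthetic observations
mu, sigma = 100.0, 5.0

# Direct likelihood: multiplying 2000 small densities underflows to 0.0.
likelihood = 1.0
for x in data:
    likelihood *= math.exp(log_gaussian_pdf(x, mu, sigma))
print(likelihood)  # 0.0 due to numerical underflow

# Log-likelihood: the product becomes a summation, which stays representable.
log_likelihood = sum(log_gaussian_pdf(x, mu, sigma) for x in data)
print(log_likelihood)  # a large negative but finite number
```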

A PE has one or more inputs. If the PE is not in the input layer, these inputs are the outputs of other PEs and will have weights associated with them. The first step in PE operation is to sum the inputs with a summation function. The result of the summation function may be thought of as the effective input to the PE. This effective input is then transformed via a transfer function, which depends on the effective input and an arbitrary but adjustable parameter typically referred to as gain. (In some instances, noise may be added to the effective input to a PE before the transfer function is applied. Typically a random number within a specified range is added to the effective input to each PE within a layer. The distribution of random numbers is either uniform or Gaussian.) The result of transformation, T, is then scaled linearly according to... [Pg.75]

We have used the phrase winning PE rather glibly, without explaining what it means. Usually the PE with the largest value of T in a layer is the winning PE. However, if the Euclidean distance or city block distance summation function is used, the opposite holds true; that is, the PE with the smallest value of T is the winner. These summation functions are usually used with special-purpose transfer functions (an example is a radial basis transfer function), which perform a monotonic inversion of the effective input. [Pg.80]
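The winner-selection rule above can be sketched as follows; the function names are illustrative. With a distance-based summation function the smallest effective input wins, and a radial basis transfer function inverts distances monotonically so that the nearest PE also has the largest activation.

```python
import math

def winner(values, distance_based=False):
    # Index of the winning PE in a layer: largest value in general,
    # smallest when a Euclidean or city block summation function is used.
    pick = min if distance_based else max
    return pick(range(len(values)), key=lambda i: values[i])

def radial_basis(distance, width=1.0):
    # Monotonic inversion of the effective input: distance 0 gives the
    # maximum activation 1.0, and larger distances give smaller activations.
    return math.exp(-((distance / width) ** 2))

print(winner([0.1, 0.9, 0.4]))                       # 1: largest output wins
print(winner([0.1, 0.9, 0.4], distance_based=True))  # 0: smallest distance wins
print(radial_basis(0.0))                             # 1.0
```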

Figure S A simple backpropagation network. The label to the right of each layer gives the layer name and the summation function, transfer function, output function, and learning rule, respectively, for the layer in a typical network. The input layer is fully connected to the middle layer, which in turn is fully connected to the output layer.
Reflux ratio; Stripping factor; Summation function on tray j; Time [Pg.361]

Unfortunately, this dramatic improvement in performance comes at a significant price. Using the stochastic approach outlined above, Agrafiotis showed that the method has a tendency to over-sample the principal axes of the property space [54]. He postulated that this behavior is an artifact of the simple summation function used for the dissimilarity metric and the fact that the cosine coefficient only measures the angle between two property vectors and ignores their lengths, which are necessary to measure spread. [Pg.85]
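The stated limitation of the cosine coefficient is easy to demonstrate: it depends only on the angle between two property vectors, so a vector and any positive multiple of it are indistinguishable even though their lengths differ. The data below are made up for illustration.

```python
import math

def cosine_coefficient(a, b):
    # Cosine of the angle between vectors a and b; insensitive to length.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v = [1.0, 2.0, 3.0]
scaled = [2.0 * x for x in v]  # same direction, twice the length

print(cosine_coefficient(v, scaled))       # 1.0: length information is lost
print(cosine_coefficient([1.0, 0.0], [0.0, 1.0]))  # 0.0: orthogonal vectors
```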







© 2024 chempedia.info