Big Chemical Encyclopedia


Weighted scalar product

A weighted Euclidean metric is defined by the weighted scalar product ... [Pg.170]

It has been shown in Chapter 29 that the set of vectors of the same dimension defines a multidimensional space S in which the vectors can be represented as points (or as directed line segments). If this space is equipped with a weighted metric defined by W, it will be denoted by the symbol S. The squared weighted distance between two points representing the vectors x and y in this space is defined by the weighted scalar product ... [Pg.171]
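The weighted scalar product and the squared weighted distance it induces can be sketched numerically. The weight matrix, vectors, and dimensionality below are invented for illustration; W is taken diagonal with positive weights, the simplest positive-definite choice.

```python
import numpy as np

# Hypothetical 3-dimensional example: W is a diagonal matrix of positive
# weights, so <x, y>_W = x^T W y is a weighted scalar product, and
# d^2(x, y) = (x - y)^T W (x - y) is the squared weighted distance.
W = np.diag([2.0, 1.0, 0.5])
x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])

weighted_dot = x @ W @ y            # <x, y>_W
d2 = (x - y) @ W @ (x - y)          # squared weighted distance

print(weighted_dot)                 # 1.0
print(d2)                           # 3.5
```

With W equal to the identity matrix this reduces to the ordinary Euclidean scalar product and distance.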

A similarity measure can be easily generalized, choosing any positive-definite operator written as Ω(r, r′) > 0, so the previous overlap similarity measure (Equation 17.1) can transform into the operator weighted scalar product ... [Pg.351]

It must be stressed that every Un(t,J) brings dq(0) to a different coordinate system. Consequently, the averaged operator (A7.13) is actually a weighted sum of the quantities in differently oriented reference systems. It can nevertheless be used to find the scalar product (A7.7), that is, the orientational correlation function. [Pg.270]

It is also useful to define the transformed operator L whose operation on a function f is Lf = L(P_eq f). This operator coincides with the time-reversed backward operator; further details on these relationships may be found in Refs. 43, 44. L operates in the Hilbert space of phase space functions which have finite second moments with respect to the equilibrium distribution. The scalar product of two functions in this space is defined as (f, g) = <fg>_eq. It is the phase space integrated product of the two functions, weighted by the equilibrium distribution P_eq. The operator L is not Hermitian; its spectrum is in principle complex, contained in the left half of the complex plane. [Pg.10]

Exercise. The objects (1.4) form a linear vector space. Let the scalar product be defined with a weight function 1/s , so that (1.5) is the scalar product (A, Q). Write (1.3) and (1.7) as scalar products. [Pg.32]

This scalar product belongs to the set of data x and non-negative weights w_i, i = 1, ..., n. Functions φ_j and φ_k are said to be orthonormal with... [Pg.413]

We can reduce the number of calculations even more dramatically by invoking the fact that we do not need to know the Fréchet derivative matrix itself on the n-th iteration, but rather the result of its application to the weighted residual field. In Chapter 10 (see formula (10.60)) we demonstrated that this term is equal to the scalar product between the complex conjugate electric field E, computed at the n-th iteration, and the auxiliary complex conjugate electric field E due to the reciprocal sources on the observation surface ... [Pg.388]

FIGURE 4. The sign of the scalar product of weight vector w and pattern vector x is positive for class 1 and negative for class 2. [Pg.7]

FIGURE 17. Correction of the weight vector w which gave a wrong classification for pattern x (x is considered as a member of class 1 and should give a positive scalar product). The new weight vector classifies x correctly. [Pg.34]

If a training-set pattern x gives a scalar product with a wrong sign, then c is calculated by equation (43) and the weight vector w is corrected by equation (39). x·x corresponds to the sum of the squared components of pattern x. [Pg.35]
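The error-correction step described above can be sketched as follows. The exact form of the correction factor c (equation (43) in the source) is not reproduced here, so the factor below is an assumption: it is merely chosen large enough to flip the scalar product to the correct sign, which is the standard perceptron-style behavior.

```python
import numpy as np

def correct_weights(w, x, target_sign):
    """If pattern x is misclassified, move w along x so that the new
    scalar product has the correct sign.  The factor c is an assumed
    stand-in for equation (43); x @ x is the sum of the squared
    components of x, as in the text."""
    s = w @ x
    if np.sign(s) == target_sign:
        return w                                  # already correct
    c = target_sign * (1.0 + abs(s)) / (x @ x)   # assumed correction factor
    return w + c * x                              # corrected weight vector

w = np.array([1.0, -1.0])
x = np.array([1.0, 1.0])        # class-1 pattern: should give s > 0
w_new = correct_weights(w, x, +1)
print(w_new @ x > 0)            # True
```

One pass of such corrections over all misclassified training patterns, repeated until convergence, is the classical perceptron training loop.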

Patterns x are considered to be correctly classified by the weight vector w if the scalar product s is... [Pg.39]

The scalar product s is used to measure the distance between a pattern x and the decision plane. Because s depends not only on the position of x and w but also on the length of the vectors, the weight vector must be normalized to a fixed length (e.g., to length 1); see Chapter 2.1.7. [Pg.39]

A demand is set up about the value of the scalar product: it should be +z for all patterns belonging to class 1 and -z for all patterns belonging to class 2. A weight vector w is sought that fulfills this demand in an optimum way. A convenient approach to this optimization problem is the least-squares error method. The error e which is made when a pattern x is classified is given by the difference of the actual... [Pg.42]

In certain classification problems, a linear separation of two classes by only one decision plane is impossible. Figure 27 shows a bimodal class (+) consisting of two distinct clusters (subclasses). Evidently, this class should be represented by two prototypes, and a minimum-distance classifier would be successful. In this way, the pattern space is partitioned by several decision planes (piecewise-linear separation). Classification of an unknown pattern requires the calculation of the scalar products with all weight vectors (prototype vectors). The unknown is assigned to the class with the largest scalar product (Chapter 2.1.5). In the same way, a multicategory classification is possible [89, 396]. [Pg.56]
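The piecewise-linear decision rule reduces to one scalar product per prototype followed by an argmax. The prototype vectors, labels, and test pattern below are invented for illustration.

```python
import numpy as np

# Minimum-distance / piecewise-linear classification sketch: the unknown
# pattern is assigned to the prototype giving the largest scalar product.
# Two prototypes represent the two clusters of a bimodal class 1.
prototypes = {
    "class1_a": np.array([1.0, 0.2]),
    "class1_b": np.array([-0.8, 0.9]),
    "class2":   np.array([0.1, -1.0]),
}

def classify(x):
    return max(prototypes, key=lambda name: prototypes[name] @ x)

print(classify(np.array([0.9, 0.1])))   # class1_a
```

The same loop over prototypes handles any number of classes, which is how the multicategory case is covered.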

Suppose a pattern x, actually belonging to subclass m, is presented to all prototypes and subclass l yields the largest scalar product. The weight vector w which should give the maximum scalar product is... [Pg.56]

The weight vector which erroneously gave the largest scalar product is corrected by equation (72). [Pg.57]

Correction means that the weights of all association units which were excited by the misclassified pattern are increased if the scalar product was too small; the weights are decreased if the scalar product was too large. [Pg.73]

Because x_i is 0 or 1, the computation of a scalar product reduces to a summation of weight vector components. In this way, a maximum likelihood classification can be rapidly applied. [Pg.84]
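For binary patterns the multiply-and-add of a scalar product collapses to additions only, as the following sketch shows (weights and pattern are invented).

```python
# For binary patterns (each x_i is 0 or 1) the scalar product w.x reduces
# to summing the weight components selected by the 1-bits of x.
w = [0.5, -1.2, 2.0, 0.3]
x = [1, 0, 1, 1]

s_full = sum(wi * xi for wi, xi in zip(w, x))    # full scalar product
s_fast = sum(wi for wi, xi in zip(w, x) if xi)   # additions only

print(s_full, s_fast)   # both 2.8
```

On hardware where multiplication is expensive, this is why binary feature encodings made classifier evaluation fast.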

The summation is taken over all patterns, or only over those which were used for weight vector corrections during the training. Features with the smallest values of g are eliminated because they have the smallest average contribution to the classification (or to the calculation of the scalar product) [113, 160]. [Pg.112]

Positive components of the weight vector should correspond to mass numbers which are characteristic of the molecular structure to be classified (if it is defined that a positive scalar product indicates the presence of that molecular structure). Although it was not possible to interpret in detail all weight vector components, the relationship was confirmed in principle by some authors. [Pg.154]

Here, M is the atomic mass of uranium, and m is the atomic mass of fluorine. The kinetic energy can be reduced to a uniform scalar product by mass-weighting the coordinates, i.e., by multiplying the S coordinates by the square root of the atomic mass of the displaced atom. We shall denote these as the vector Q. Hence, Q_i = √(m_i) S_i. [Pg.80]
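The mass-weighting step can be sketched numerically. The displacement values below are invented; the atomic masses are the standard values for uranium and fluorine in atomic mass units.

```python
import math

# Mass-weighting sketch: Q_i = sqrt(m_i) * S_i turns the kinetic energy
# (1/2) * sum_i m_i * (dS_i/dt)^2 into the uniform scalar-product form
# (1/2) * sum_i (dQ_i/dt)^2.
masses = [238.03, 19.00]     # atomic masses of U and F (amu)
S = [0.01, -0.02]            # hypothetical displacement coordinates

Q = [math.sqrt(m) * s for m, s in zip(masses, S)]
print(Q)
```

In the mass-weighted coordinates Q, every degree of freedom carries the same effective mass, which is what makes the subsequent normal-mode analysis a plain eigenvalue problem.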

This means that the scalar product is determined not only by the functions φ_j and φ_k but also by the data set and the values of the weights w_i. Functions... [Pg.277]
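A discrete weighted scalar product over data points, and the orthonormality it defines, can be sketched as follows. The data points, weights, and the two functions are invented; an odd function against a constant on symmetric points gives orthogonality by construction.

```python
import numpy as np

# Discrete weighted scalar product over data points x_i with weights w_i:
#   (f, g) = sum_i w_i * f(x_i) * g(x_i)
# Functions are orthonormal when (f, f) = (g, g) = 1 and (f, g) = 0.
x = np.array([-1.0, 0.0, 1.0])
w = np.array([0.25, 0.5, 0.25])          # non-negative weights, sum to 1

dot = lambda a, b: np.sum(w * a * b)

f = np.ones_like(x)                       # phi_0(x) = 1 (already normalized)
g = x / np.sqrt(dot(x, x))                # phi_1(x) = x, normalized

print(dot(f, f), dot(f, g), dot(g, g))   # 1.0 0.0 1.0
```

Changing the data set or the weights changes the scalar product, so the same pair of functions can be orthonormal for one data set and not for another, which is exactly the point of the passage above.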

The first term is characterized by a scalar, J, and it is the dominant term. Be aware of a convention disagreement in the definition of this term: instead of -2J, some authors write -J, or J, or 2J, and a mistake in sign definition will turn the whole scheme of spin levels upside down (see below). The second and third terms are induced by anisotropic spin-orbit coupling, and their weight is predicted to be of order Δg/g_e and (Δg/g_e)², respectively (Moriya 1960), where Δg is the (anisotropic) deviation from the free-electron g-value. The D in the second term has nothing to do with the familiar axial zero-field splitting parameter D; it is a vector parameter, and the × means take the cross product (or vector product); an alternative way of writing is the determinant form... [Pg.189]

The orthogonal characteristic polynomials or eigenpolynomials Q_n(u) play one of the central roles in spectral analysis, since they form a basis due to the completeness relation (163). They can be computed either via the Lanczos recursion (84) or from the power series representation (114). The latter method generates the expansion coefficients q_{n,r} through the recursion (117). Alternatively, these coefficients can be deduced from the Lanczos recursion (97) for the rth derivative Q_n^(r)(0), since we have q_{n,r} = (1/r!)Q_n^(r)(0) as in Eq. (122). The polynomial set Q_n(u) is the basis comprised of scalar functions in the Lanczos vector space C from Eq. (135). In Eq. (135), the definition (142) of the inner product implies that the polynomials Q_n(u) and Q_m(u) are orthogonal to each other (for n ≠ m) with respect to the complex weight function d_k, as per (166). The completeness (163) of the set Q_n(u) enables expansion of every function f(u) ∈ C in a series in terms of the... [Pg.193]

A moment in mechanics is generally defined as U_j = Q d^j, where U_j is the jth moment, about a specified line or plane, of a vector or scalar quantity Q (e.g., force, weight, mass, area), d is the distance from Q to the reference line or plane, and j is a number indicating the power to which d is raised. [For example, the first moment of a force or weight about an axis is defined as the product of the force and the distance of the line of action of the force from the axis. It is commonly known as the torque. The second moment of the force about the same axis (i.e., j = 2) is the moment of inertia.] If Q has elements Q_i, each located a distance d_i from the same reference, the moment is given by the sum of the individual moments of the elements ... [Pg.182]
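The summed-element form of the moment can be sketched directly. The element magnitudes and distances below are invented for illustration.

```python
# Moments of discrete elements: U_j = sum_i Q_i * d_i**j.
# j = 1 gives the first moment (torque-like); j = 2 gives the second
# moment (the moment of inertia when the Q_i are masses).
Q = [2.0, 3.0, 1.0]    # element magnitudes (e.g., masses)
d = [1.0, 2.0, 4.0]    # distances from the reference axis

def moment(j):
    return sum(q * dist**j for q, dist in zip(Q, d))

print(moment(1), moment(2))   # 12.0 30.0
```

Each U_j is itself a weighted sum, with the powers of the distances d_i acting as the weights, which is what ties this excerpt to the weighted scalar products above.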

