Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Matrix outer product

Fortran/VPLIB Code for Matrix-Matrix "Outer Product"... [Pg.226]

So long as the field is on, these populations continue to change; however, once the external field is turned off, these populations remain constant (discounting relaxation processes, which will be introduced below). Yet the amplitudes in the states i and f do continue to change with time, due to the accumulation of time-dependent phase factors during the field-free evolution. We can obtain a convenient separation of the time-dependent and the time-independent quantities by defining a density matrix, p. For the case of the wavefunction |ψ⟩, p is given as the outer product of |ψ⟩ with itself. [Pg.229]

This outer product gives four terms, which may be arranged in matrix form as... [Pg.229]

Fig. 29.6. Schematic illustration of four types of special matrix products: the matrix-by-vector product, the vector-by-matrix product, the outer product and the scalar product between vectors, respectively from top to bottom.
The outer product of two vectors can be thought of as the matrix product of a single-column matrix with a single-row matrix ... [Pg.25]
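The column-times-row view is easy to check directly; a minimal NumPy sketch (illustrative only, not the Fortran/VPLIB code cited above):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # treated as a 3x1 single-column matrix
y = np.array([4.0, 5.0])        # treated as a 1x2 single-row matrix

outer = np.outer(x, y)                            # the outer product, a 3x2 matrix
via_matmul = x.reshape(-1, 1) @ y.reshape(1, -1)  # single-column times single-row

assert np.array_equal(outer, via_matmul)
assert outer.shape == (3, 2)
```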

The outer product of two vectors x (m×1) and y (n×1) is the matrix A (m×n), such that... [Pg.55]

Let us prove a useful alternative expression of the matrix product. Let e_i (i = 1, ..., n) be the column vector whose n coordinates are zero except for the ith, which is equal to 1. The n vectors e_i form a basis of the Euclidean space Rⁿ. Ue_i is the ith column of a matrix U (m×n), while e_iᵀV is the ith row of a matrix V (n×p). Outer products such as e_i e_iᵀ are n×n matrices. From the previous definitions... [Pg.56]

The common-dimension expansion shows that the matrix product can be viewed as a linear combination of the pairwise outer products of U columns and V rows. Example ... [Pg.57]
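The common-dimension expansion can be verified numerically; a short NumPy sketch (illustrative, with invented random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))   # its columns are the U e_i
V = rng.standard_normal((3, 5))   # its rows are the e_i^T V

# UV as a linear combination of pairwise outer products of U columns and V rows
expansion = sum(np.outer(U[:, i], V[i, :]) for i in range(3))

assert np.allclose(U @ V, expansion)
```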

No boldface will be used for the subscript X of T_X. The current element (i1, i2) of T_X can be rewritten as S(X_i1 X_i2) − S(X_i1)S(X_i2). The condensed form of the covariance matrix is obtained by using the outer product defined in Section 2.1... [Pg.203]
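The condensed form amounts to subtracting a single outer product of the column means from the matrix of second moments; a NumPy sketch (the data matrix X is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))        # 100 observations of 3 variables

mean = X.mean(axis=0)                    # the column means S(X_i)
# second moments minus the outer product of the means:
cov = (X.T @ X) / len(X) - np.outer(mean, mean)

assert np.allclose(cov, np.cov(X, rowvar=False, bias=True))
```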

The product of a column vector with m rows and a row vector with n columns results in a matrix with m rows and n columns. This is the so-called outer product. [Pg.18]

The matrix product (outer product) of two vectors is a matrix (example: abᵀ). The first vector must be a column vector, and the second a row vector (see Figure A.2.3). [Pg.313]

This construction, in which a vector is used to form a matrix v(i) v(i)ᵀ, is called an "outer product". The projection matrix thus formed can be shown to be idempotent, which means that the result of applying it twice (or more times) is identical to the result of applying it once: PP = P. This property is straightforward to demonstrate. Let us consider... [Pg.628]
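Idempotency is quick to demonstrate numerically; a minimal sketch, assuming a real unit vector v:

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
v = v / np.linalg.norm(v)            # normalize so that v^T v = 1

P = np.outer(v, v)                   # projection matrix onto span{v}

assert np.allclose(P @ P, P)         # idempotent: PP = P
assert np.allclose(P, P.T)           # symmetric
assert np.isclose(np.trace(P), 1.0)  # rank one
```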

Similarly, Messick et al. [44] suggested that the NAS can be found by orthogonal projection of Equation 12.16 after unfolding each I × J sample and interferent matrix into an IJ × 1 vector. The three-way NAS is the resulting NAS of Equation 12.16 refolded into an I × J matrix. The third alternative, propounded by Wang et al. [41], is to construct the NAS from the outer products of the X-way and Y-way profiles that are unique to the analyte. In this method, no projections are explicitly calculated. [Pg.497]

Figure 8. Pictorial representation of the outer product matrix multiplication algorithm. ...
For example, a matrix of rank 1 can be formulated as the outer product of two vectors, such as uvᵀ. (The rank of this matrix is 1 because all rows are scalar multiples of one another.) Applying condition [46] to this update form B_k+1 = B_k + uvᵀ, we obtain the condition that u is a vector in the direction of (y_k − B_k s_k). If y_k = B_k s_k, B_k already satisfies the QN condition [46]. Otherwise, we can write the general rank-1 update formula as ... [Pg.40]
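Choosing v along (y_k − B_k s_k) as well gives the symmetric rank-one (SR1) update, one standard instance of this family; a hedged NumPy sketch (the general formula elided in the source may use a different normalization):

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one quasi-Newton update B_{k+1} = B_k + uv^T,
    with both u and v taken along r = (y - B s)."""
    r = y - B @ s                         # direction required by condition [46]
    denom = r @ s
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B                          # y_k ~ B_k s_k: QN condition already holds
    return B + np.outer(r, r) / denom     # rank-1 outer product correction

B = np.eye(3)
s = np.array([1.0, 0.0, 0.0])
y = np.array([2.0, 1.0, 0.0])
B1 = sr1_update(B, s, y)

assert np.allclose(B1 @ s, y)             # the QN (secant) condition is satisfied
```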

The outer product of a row and column matrix, both of order m, is a square matrix of order m and is written as [C] = ... with... [Pg.510]

The matrix formed from the product of vectors, Pᵏ = uᵏ(uᵏ)ᵀ, is called a vector outer product. The expansion of a matrix in terms of these outer products is called the spectral resolution of the matrix. The matrix Pᵏ satisfies the relation PᵏPᵏ = Pᵏ, as do matrices of the more general form P = Σ_k Pᵏ, where the summation is over an arbitrary subset of outer product matrices constructed from orthonormal vectors. Matrices that satisfy the relation P² = P are called projection operators or projection matrices. If P is a projection matrix, then (1 − P) is also a projection matrix. Projection matrices operate on arbitrary vectors, measure the components within a subspace (e.g. spanned by the vectors uᵏ used to define the projection matrix) and result in a vector within this subspace. [Pg.73]
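Both the spectral resolution and the projector properties can be checked on a small Hermitian matrix; a NumPy sketch (the example matrix is invented):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # a small Hermitian (symmetric) matrix
w, U = np.linalg.eigh(A)                   # eigenvalues, orthonormal eigenvectors

# spectral resolution: A as a sum of eigenvalue-weighted outer products
resolution = sum(w[k] * np.outer(U[:, k], U[:, k]) for k in range(2))
assert np.allclose(A, resolution)

P = np.outer(U[:, 0], U[:, 0])             # projector built from one eigenvector
assert np.allclose(P @ P, P)               # P^2 = P
Q = np.eye(2) - P
assert np.allclose(Q @ Q, Q)               # (1 - P) is also a projector
```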

A matrix of the form A = (1 − 2xxᵀ), where xᵀx = 1, is another outer product matrix that is useful in the MCSCF method. This matrix is both unitary and Hermitian and is called an elementary Householder transformation matrix. These transformation matrices are useful in bringing Hermitian matrices to tridiagonal form. [Pg.73]
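The unitary and Hermitian character follows from the condition xᵀx = 1; a minimal numerical check:

```python
import numpy as np

x = np.array([3.0, 4.0])
x = x / np.linalg.norm(x)                  # enforce x^T x = 1

A = np.eye(2) - 2 * np.outer(x, x)         # elementary Householder matrix

assert np.allclose(A, A.T)                 # Hermitian (real symmetric)
assert np.allclose(A @ A.T, np.eye(2))     # unitary (orthogonal)
assert np.allclose(A @ x, -x)              # reflects x to -x
```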

Another outer product matrix that is particularly useful is of the form... [Pg.73]

A general way of exploiting this sparseness in both the gradient and the Hessian construction involves the use of outer product algorithms to perform the matrix element assembly. In the case of the matrix multiplications required in the F matrix construction, this simply means that the innermost DO loop is over X in Eq. (260). (If t were the innermost DO loop, the result would be a series of dot products or an inner product algorithm.) When an outer product algorithm is used, the magnitude of the density matrix elements may be tested and the innermost DO loop is only performed for non-zero elements. (In the case of Hessian matrix construction, the test may occur outside of the two... [Pg.176]
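The loop-ordering idea can be sketched in Python rather than Fortran DO loops; the matrices D and G below are hypothetical stand-ins for the density and integral arrays, and the magnitude test skips the innermost work for zero density elements as described (a sketch, not the cited code):

```python
import numpy as np

def outer_product_matmul(D, G, tol=1e-12):
    """Assemble C = D @ G as a sum of outer products, screening out
    common-dimension indices whose D column is all ~zero."""
    C = np.zeros((D.shape[0], G.shape[1]))
    for k in range(D.shape[1]):              # loop over the common dimension
        col = D[:, k]
        if np.max(np.abs(col)) < tol:        # magnitude test on density elements
            continue                         # innermost loop skipped for zeros
        C += np.outer(col, G[k, :])          # innermost step is a rank-1 update
    return C

D = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 3.0]])              # sparse "density" block
G = np.arange(12.0).reshape(3, 4)
assert np.allclose(outer_product_matmul(D, G), D @ G)
```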

For the more restrictive CSF expansion spaces, such as PPMC and RCI expansions, it happens that entire transition density vectors vanish for particular combinations of orbital indices. It is most convenient if a logical flag is set during the construction of the three vectors, one flag for each vector, to indicate whether it contains non-zero elements. This avoids the effort required to check each element individually for these zero vectors. The updates of the elements of the matrix C that result from a particular transition density vector involve two DO loops: one over the CSF index, which determines the second subscript of the matrix C, and the other over an orbital index, which, combined with a density matrix orbital index, is used to determine the first subscript of C. Either choice of the ordering of these two loops results in an outer product matrix assembly method. [Pg.180]

Another approach to the C matrix construction is a CSF-driven approach proposed by Knowles et al. With this approach, the density matrix elements d_pqrs are constructed for all combinations of orbital indices p, q, r and s, but for a fixed CSF labeled by n. Each column of the matrix C is constructed in the same way that the Fock matrix F is computed, except that the arrays Dⁿ and dⁿ are used instead of D and d. As with the F matrix construction described earlier, there are two choices for the ordering of the innermost DO loops. One choice results in an inner product assembly method while the other choice results in an outer product assembly method. The inner product choice, which does not allow the density matrix sparseness to be exploited, results in SDOT operations of length m or about m, depending on the integral storage scheme. The outer product choice, which does allow the density matrix sparseness to be exploited, has an effective vector length of n, the orbital basis dimension. However, like the second index-driven method described above, this may involve some extraneous effort associated with redundant orbital rotation variables in the active-active block of the C matrix. [Pg.181]

It is useful to compare these approaches when applied to a wavefunction expansion that results in a sparse density matrix. For example, with a PPMC expansion, each dⁿ, with about ... possible unique elements, contains only about ... non-zero elements: m(m + 2)/8 non-zero elements of one type and ... non-zero elements of the other. For m = 20 the matrix dⁿ is only 0.29% non-zero. The inner product CSF-driven approach is clearly not suited for the sparse transition density matrix resulting from this type of wavefunction. The outer product CSF-driven approach does account for the density vector sparseness, but the effective vector length is only n, the orbital basis dimension. [Pg.181]

The matrix product is well known and can be found in any linear algebra textbook. It reduces to the vector product when a vector is considered as an I x 1 matrix, and a transposed vector is a 1 x I matrix. The product aᵀb is the inner product of two vectors. The product abᵀ is called the outer product or a dyad. See Figure 2.2. These products have no special symbol. Just putting two vectors or matrices together means that the product is taken. The same also goes for products of vectors with scalars and matrices with scalars. [Pg.13]
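The distinction is easy to see in code; a small NumPy illustration (not from the text):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

inner = a @ b            # a^T b: a scalar
dyad = np.outer(a, b)    # a b^T: a 2x2 matrix, the outer product or dyad

assert inner == 11.0
assert dyad.shape == (2, 2)
assert np.isclose(np.trace(dyad), inner)   # tr(ab^T) = a^T b
```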

This reflects exactly the approach of Pearson defining a line of closest fit. The vector p1 gives a direction in the J-dimensional space (defining a line) and t1 represents the scores (orthogonal projections) on that line. The outer product t1p1ᵀ is a rank-one matrix and is the best rank-one approximation of X in a least squares sense. This approach can be generalized for more than one component. Then the problem becomes one of finding the subspace of closest fit. [Pg.39]
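The best-rank-one property is the Eckart-Young theorem, and t1 and p1 can be read off the leading singular triple; a NumPy sketch (the random X is invented, and centered first, as the line-of-closest-fit picture assumes):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 4))
X = X - X.mean(axis=0)                 # center the data

U, s, Vt = np.linalg.svd(X, full_matrices=False)
t1 = s[0] * U[:, 0]                    # scores on the first principal direction
p1 = Vt[0, :]                          # unit loading vector p1

X1 = np.outer(t1, p1)                  # rank-one matrix t1 p1^T

assert np.linalg.matrix_rank(X1) == 1
# Eckart-Young: the residual norm equals the trailing singular values
assert np.isclose(np.linalg.norm(X - X1), np.sqrt(np.sum(s[1:] ** 2)))
```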

The SAU collaboration system has an ad hoc theoretical formulation. An originating atomic chart is considered as a two-subscript null matrix; it is formed by replacing appropriate zeros of the matrix by symbols of the elements. The outer product of this matrix with itself is taken once to create the periodic system for diatomic molecules, twice for triatomic molecules (acyclic or cyclic), and so on. The result is a four-, six-, or more-subscript matrix for those molecules. [Pg.233]

A four-subscript matrix can be imagined as a four-dimensional array of symbols; a six-subscript matrix can be imagined as a six-dimensional array, and so on; these arrays are the periodic system (Hefferlin and Kuhlman 1980; Hefferlin 1989a, Chapter 10). In general, the outer product is taken N − 1 times to create the 2N-dimensional periodic system for N-atomic molecules. [Pg.233]
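The repeated outer products and the resulting subscript counts can be sketched with NumPy's generalized outer product (the small numeric "chart" is a stand-in for the actual chart of element symbols):

```python
import numpy as np

chart = np.arange(1.0, 7.0).reshape(2, 3)       # a two-subscript stand-in chart
diatomic = np.multiply.outer(chart, chart)      # outer product once: 4 subscripts
triatomic = np.multiply.outer(diatomic, chart)  # twice: 6 subscripts

assert diatomic.shape == (2, 3, 2, 3)           # 2N = 4 dimensions for N = 2
assert triatomic.shape == (2, 3, 2, 3, 2, 3)    # 2N = 6 dimensions for N = 3
```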


See other pages where Matrix outer product is mentioned: [Pg.43]    [Pg.45]    [Pg.65]    [Pg.75]    [Pg.313]    [Pg.130]    [Pg.284]    [Pg.482]    [Pg.224]    [Pg.331]    [Pg.171]    [Pg.171]    [Pg.178]    [Pg.179]    [Pg.179]    [Pg.220]    [Pg.14]    [Pg.33]    [Pg.159]    [Pg.229]    [Pg.116]   





© 2024 chempedia.info