Big Chemical Encyclopedia


Matrix multiplication method

The RIS model can be combined with the Monte Carlo simulation approach to calculate a wider range of properties than is available from the simple matrix multiplication method. In the RIS Monte Carlo method the statistical weight matrices are used to generate chain conformations with a probability distribution that is implied in their statistical weights. [Pg.446]
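As a rough illustration of the matrix multiplication side of this comparison, the RIS conformational partition function is built up by serial products of statistical weight matrices. The three-state weights (sigma, omega) below are illustrative placeholders, not fitted values for any real chain:

```python
import numpy as np

# Hypothetical 3-state RIS statistical weight matrix (states t, g+, g-);
# sigma and omega are illustrative weights, not fitted values.
sigma, omega = 0.5, 0.05
U = np.array([
    [1.0, sigma,       sigma      ],
    [1.0, sigma,       sigma*omega],
    [1.0, sigma*omega, sigma      ],
])

def partition_function(U, n_bonds):
    """Conformational partition function Z = J* U^(n-2) J accumulated by
    serial matrix multiplication (J* = row [1 0 0], J = column of ones)."""
    J_star = np.array([1.0, 0.0, 0.0])
    J = np.ones(U.shape[0])
    v = J_star
    for _ in range(n_bonds - 2):
        v = v @ U          # one matrix multiplication per bond
    return v @ J

print(partition_function(U, 10))
```

Each bond contributes one matrix multiplication, so the cost of the simple method grows only linearly with chain length.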

This model is, in any case, unacceptable. Examination of formula 70 shows that, according to what was previously stated regarding n-pentane, in the G⁺G⁻ conformation two carbon atoms approach to about 2.5 Å, and consequently this conformation has little probability of existence. The introduction of correlation between adjacent rotation angles entails the use of a mathematical formulation (the matrix multiplication method) which will not be discussed here; the reader is referred to the texts cited and to the specialized literature for details. [Pg.55]

V. Galiatsatos, Polym. Prepr. ACS, Div. Polym. Chem., 32(1), 572 (1992). Application of Matrix Multiplication Methods to the Statistical Correlation of Configuration-Dependent Properties of Polymer Chains. [Pg.205]

This may be accomplished by writing stationarity relationships (the necessary n-ad relationships) involving conditional and unconditional probabilities, followed by solving such relationships simultaneously for the conditional probabilities. While this can be done algebraically, it is simpler to conduct operations on a matrix constructed from the conditional probabilities. The procedure described by Price (5) can provide an algebraic solution if desired, and it can also be the basis of a program that provides numerical results. The matrix multiplication method (1) provides numerical results only, but it seems to be the preferred approach when the polymerization system is complex. [Pg.139]
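As a sketch of the kind of matrix operation involved, one common variant solves the stationarity relationships for the unconditional probabilities given the conditional ones. The two-state (A/B) transition matrix below is invented for illustration:

```python
import numpy as np

# Hypothetical first-order Markov (terminal) model for an A/B system:
# the conditional probabilities P(next|current) are illustrative values.
P = np.array([[0.7, 0.3],    # from A: -> A, -> B
              [0.4, 0.6]])   # from B: -> A, -> B

# Stationary (unconditional) probabilities satisfy p P = p with sum(p) = 1.
# Solve (P^T - I) p = 0 simultaneously with the normalization condition.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)   # unconditional probabilities P(A), P(B)
```

For this matrix the solution is P(A) = 4/7, P(B) = 3/7.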

Figure 2 shows the 9×9 transition matrix that is constructed from these conditional probabilities. It should be noted that all initial and final states are dyads in this case. Since it would be difficult to use Price's cofactor method (5) with such a large matrix, the matrix multiplication method we have described previously (1) was used to evaluate unconditional dyad probabilities (e.g., P(AB), P(CC), etc.). Thus, repeated multiplication of the transition matrix by itself causes it to converge to the matrix shown in Figure 3, which contains unconditional dyad probabilities as its elements. These are then added to obtain monomer probabilities, viz., ... [Pg.146]
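The convergence behaviour described here can be reproduced with a toy 2×2 transition matrix (illustrative values, not the 9×9 dyad matrix of Figure 2): repeated self-multiplication drives every row toward the unconditional probabilities.

```python
import numpy as np

# Toy 2x2 transition matrix of conditional probabilities (invented values,
# standing in for the much larger dyad matrix of Figure 2).
T = np.array([[0.8, 0.2],
              [0.5, 0.5]])

M = T.copy()
for _ in range(50):       # repeated multiplication of the matrix by itself
    M = M @ T
# At convergence every row is identical and holds the unconditional
# probabilities, here P(A) = 5/7 and P(B) = 2/7.
print(M)
```

The rate of convergence is governed by the second-largest eigenvalue of the transition matrix (here 0.3), so a few dozen multiplications are far more than enough.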

Berreman used a 4×4 matrix multiplication method. Assuming the incident and reflected wavevectors to be in the xz plane, z being along the helical axis of the cholesteric, the dependence on the y coordinate may be ignored altogether. Writing exp[i(ωt − ...)], etc., it is easily verified... [Pg.245]

The quantitative stereosequence distribution was then reproduced by making use of matrix multiplication methods in the framework of appropriate stochastic models (7). [Pg.194]

The approach used in the present paper, given in ISA (2002, Part 4), is named the Matrix Multiplication method and is the most practical for large systems. This method transforms the rates of a normal transition matrix into discrete time-steps and calculates the PFD directly. [Pg.1605]

The matrix multiplication method has been used as an efficient tool for the calculation of orientational auto- and cross-correlations for vectorial quantities such as dipole moments, transition moments etc., rigidly embedded in polymer ... [Pg.165]

A comparison has been made of rotational isomeric state chains modelled by matrix multiplication methods, as in most of the papers mentioned in this section, with lattice and off-lattice chains obtained by Monte Carlo techniques, which include excluded-volume effects. Rotational-angle flexibility has been expressed in the coordinate transformation matrix, and its influence examined upon ⟨r²⟩, the distribution function for r, and the correlation of the relative orientation of the end bonds of the polymer. ... [Pg.381]

In a series of papers Uchino used the matrix multiplication method for obtaining the canonical code and automorphisms of a molecular graph. He considered adjacency, distance, and open-walk matrices in a series of efficient algorithms which yield the automorphism partition of graphs. [Pg.181]

Having filled in all the elements of the F matrix, we use an iterative diagonalization procedure to obtain the eigenvalues by the Jacobi method (Chapter 6) or its equivalent. Initially, the requisite electron densities are not known. They must be given arbitrary values at the start, usually taken from a Hückel calculation. Electron densities are improved as the iterations proceed. Note that the entire diagonalization is carried out many times in a typical problem, and that many iterative matrix multiplications are carried out in each diagonalization. Jensen (1999) refers to an iterative procedure that contains an iterative procedure within it as a macroiteration. The term is descriptive and we shall use it from time to time. [Pg.251]
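A minimal sketch of the Jacobi method itself, assuming a small symmetric matrix: each step annihilates the largest off-diagonal element with a plane rotation, i.e. a pair of matrix multiplications per iteration.

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10):
    """Jacobi method: repeatedly zero the largest off-diagonal element of a
    symmetric matrix with plane rotations until it is numerically diagonal."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(100 * n * n):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # Rotation angle that annihilates A[p, q]
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J      # two matrix multiplications per iteration
        V = V @ J            # accumulate the eigenvectors
    return np.diag(A), V

evals, evecs = jacobi_eigen(np.array([[2.0, 1.0], [1.0, 2.0]]))
print(np.sort(evals))   # eigenvalues of the 2x2 test matrix: 1 and 3
```

In an SCF "macroiteration" a diagonalization like this would itself be repeated each time the electron densities, and hence the F matrix, are updated.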

The SSW form an ideal expansion set, as their shape is determined by the crystal structure; hence only a few are required. This expansion can be formulated in both real and reciprocal space, which should make the method applicable to non-periodic systems. When formulated in real space, all the matrix multiplications and inversions become O(N). This makes the method comparably fast for cells larger than the localisation length of the SSW. In addition, once the expansion is made, Poisson's equation can be solved exactly, and the integrals over the interstitial region can be calculated exactly. [Pg.234]

The PLS algorithm is relatively fast because it involves only simple matrix multiplications; eigenvalue/eigenvector analysis and matrix inversions are not needed. The determination of how many factors to take is a major decision. Just as for the other methods, the right number of components can be determined by assessing the predictive ability of models of increasing dimensionality. This is more fully discussed in Section 36.5 on validation. [Pg.335]
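A minimal PLS1 sketch (NIPALS-style deflation, with invented random data) shows that each component indeed needs only matrix-vector products, with no inversion or eigendecomposition:

```python
import numpy as np

def pls_components(X, y, n_comp=2):
    """PLS1 by NIPALS-style deflation: each component is extracted with a
    handful of matrix-vector multiplications, no inversion or eigensolve."""
    X, y = X.astype(float).copy(), y.astype(float).copy()
    scores = []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)           # weight vector
        t = X @ w                        # score vector
        p = X.T @ t / (t @ t)            # loading vector
        X -= np.outer(t, p)              # deflate X
        y -= t * (y @ t) / (t @ t)       # deflate y
        scores.append(t)
    return np.array(scores)

# Invented data: 20 samples, 5 variables, y driven by the first two columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + 0.01 * rng.normal(size=20)
T = pls_components(X, y)
print(T.shape)           # (2, 20): two components' score vectors
```

A useful property to check when deciding how many factors to take is that the successive score vectors are mutually orthogonal, which the deflation guarantees.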

This is not a problem, as long as this common reference already mentioned is available. For the characterization of the reference material, especially in the case of matrix reference materials, it is often desirable to use multiple methods, and often also multiple laboratories. Under these conditions, it is easier to arrive at an uncertainty that represents the state-of-the-art of the laboratories. [Pg.15]

Figure 10.1 Schematic diagram of the sequential solution of model and sensitivity equations. The order is shown for a three-parameter problem. Steps 1, 5 and 9 involve iterative solution that requires a matrix inversion at each iteration of the fully implicit Euler's method. All other steps (i.e., the integration of the sensitivity equations) involve only one matrix multiplication each.
The solution of Equation 10.28 is obtained in one step by performing a simple matrix multiplication, since the inverse of the matrix on the left-hand side of Equation 10.28 is already available from the integration of the state equations. Equation 10.28 is solved for r = 1, ..., p, and thus the whole sensitivity matrix G(t_{i+1}) is obtained as [g_1(t_{i+1}), g_2(t_{i+1}), ..., g_p(t_{i+1})]. The computational savings that are realized by the above procedure are substantial, especially when the number of unknown parameters is large (Tan and Kalogerakis, 1991). With this modification the computational requirements of the Gauss-Newton method for PDE models become reasonable and hence the estimation method becomes implementable. [Pg.176]
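The saving can be illustrated generically (the matrices and sizes below are invented stand-ins, not those of Equation 10.28): once the inverse of the left-hand-side matrix is in hand, each sensitivity vector costs only one matrix-vector multiplication.

```python
import numpy as np

# Invented stand-ins: A plays the role of the left-hand-side matrix of the
# state equations, whose inverse is already available; B holds one
# right-hand side per unknown parameter (here p = 3 parameters, n = 4 states).
n, p = 4, 3
rng = np.random.default_rng(1)
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
A_inv = np.linalg.inv(A)                 # already in hand from the state step

B = rng.normal(size=(n, p))
# Whole sensitivity matrix G = [g_1, ..., g_p]: one matrix-vector
# multiplication per parameter, no new factorization or inversion.
G = np.column_stack([A_inv @ B[:, r] for r in range(p)])
print(np.allclose(A @ G, B))             # True
```

The cost per time step is therefore p matrix-vector products instead of p fresh solves, which is where the savings for large p come from.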

One method of partitioning the system equations is to compute the maximal loops using powers of the adjacency matrix as discussed in Section II. Certain modifications to the methods of Section II are needed in order to reduce the computation time. The first modification is to obtain the product of the matrices using Boolean unions of rows instead of the multiplication technique previously demonstrated to obtain a power of an adjacency matrix. To show how the Boolean union of rows can replace the standard matrix multiplication, consider the definition of Boolean matrix multiplication, Eq. (2), which can be expanded to... [Pg.202]
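The row-union replacement described above can be sketched with rows stored as integer bitsets (a representation chosen here for illustration, not taken from the original): row i of the Boolean product A·B is the OR of the rows of B selected by the 1-entries in row i of A.

```python
# Boolean matrix product via unions of rows.  Each row is a Python int used
# as a bitset: bit j of A[i] is element (i, j) of the matrix.

def bool_mat_mult(A, B):
    n = len(B)
    C = []
    for row in A:
        acc = 0
        for j in range(n):
            if row >> j & 1:      # element A[i][j] == 1
                acc |= B[j]       # union in row j of B
        C.append(acc)
    return C

# Adjacency matrix of the path 0 -> 1 -> 2 (bit j set means an edge to j).
A = [0b010, 0b100, 0b000]
print(bool_mat_mult(A, A))   # [4, 0, 0]: the single length-2 walk is 0 -> 2
```

Each element of the product is thus obtained with word-wide OR operations rather than per-element multiplications and additions.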

With these modifications to the methods of Section II, only two matrices, the matrix of the current power and the one of the previous power, need be stored in the computer memory at one time, and the number of matrix multiplications is reduced from n for the methods of Section II to log₂ n for the modified method. [Pg.203]
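Combining the row-union product with repeated squaring gives the log₂ n behaviour; the sketch below (on an invented 4-cycle) computes reachability while holding only the current power and the previous one.

```python
import math

def bool_mult(A, B):
    """Boolean product of bitset-row matrices (bit j of row i = element i,j)."""
    C = []
    for row in A:
        acc, j, r = 0, 0, row
        while r:
            if r & 1:
                acc |= B[j]   # union of the rows of B picked by this row's bits
            r >>= 1
            j += 1
        C.append(acc)
    return C

def reachability(adj):
    """Transitive closure with ceil(log2 n) squarings instead of n products.
    Only the current power and the previous one exist at any time."""
    n = len(adj)
    M = [adj[i] | (1 << i) for i in range(n)]   # A OR I keeps shorter walks
    for _ in range(max(1, math.ceil(math.log2(n)))):
        M = bool_mult(M, M)                     # square the current power
    return M

# 4-cycle 0 -> 1 -> 2 -> 3 -> 0: every vertex reaches every vertex.
adj = [0b0010, 0b0100, 0b1000, 0b0001]
print(reachability(adj))   # [15, 15, 15, 15]
```

Squaring (A ∪ I) doubles the covered walk length each time, so ⌈log₂ n⌉ products suffice where the naive approach needs n.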

Basic understanding and efficient use of multivariate data analysis methods require some familiarity with matrix notation. The user of such methods, however, needs only elementary experience; it is, for instance, not necessary to know computational details about matrix inversion or eigenvector calculation, but the prerequisites and the meaning of such procedures should be evident. A good understanding of matrix multiplication is important. A very short summary of basic matrix operations is presented in this section. Introductions to matrix algebra have been published elsewhere (Healy 2000; Manly 2000; Searle 2006). [Pg.311]
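A tiny numpy example of the one operation singled out above; note in particular that matrix multiplication is not commutative:

```python
import numpy as np

# Element (i, j) of AB is the dot product of row i of A with column j of B;
# shapes combine as (m, k) x (k, n) -> (m, n).
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
print(A @ B)                           # [[19 22]
                                       #  [43 50]]
print(np.array_equal(A @ B, B @ A))    # False: AB != BA in general
```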

Basically, two types of approaches are developed here: iterative (optimization-based) approaches like the one by Sippl et al. [101], and direct approaches like the one by Kabsch [102, 103], based on Lagrange multipliers. Unfortunately, the more expedient direct methods may fail to produce a sufficiently accurate solution in some degenerate cases. Redington [104] suggested a hybrid method with an improved version of the iterative approach, which requires the computation of only two 3×3 matrix multiplications in the inner loop of the optimization. [Pg.71]
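As a sketch of the direct family, a common Kabsch-style formulation (via SVD rather than explicit Lagrange multipliers; the point set below is invented) shows the small fixed-size matrix products involved:

```python
import numpy as np

def kabsch(P, Q):
    """Direct optimal rotation superimposing point set P onto Q via the SVD
    of the 3x3 covariance matrix -- only small matrix multiplications."""
    P0 = P - P.mean(axis=0)
    Q0 = Q - Q.mean(axis=0)
    H = P0.T @ Q0                          # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against an improper rotation
    return Vt.T @ D @ U.T                  # optimal rotation matrix

# Invented test: rotate four points by 90 degrees about z, then recover it.
P = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
Q = P @ Rz.T
R = kabsch(P, Q)
print(np.allclose(R, Rz))   # True
```

The degenerate cases alluded to above correspond to rank-deficient or near-planar point sets, where the covariance matrix has (nearly) repeated or zero singular values.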

A perturbation expansion version of this matrix inversion method in angular momentum space has been introduced with the Reverse Scattering Perturbation (RSP) method, in which the ideas of the RFS method are used: the matrix inversion is replaced by an iterative, convergent expansion that exploits the weakness of the electron backscattering by any atom and sums over significant multiple-scattering paths only. [Pg.29]
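The underlying idea of trading an inversion for a convergent expansion can be shown generically (the small random matrix stands in for a weak scattering operator; it is not a physical quantity): when the operator A is weak, (I − A)⁻¹ equals the convergent series I + A + A² + ..., i.e. repeated multiplication instead of inversion.

```python
import numpy as np

# Invented "weak coupling" matrix: small entries ensure spectral radius < 1,
# so the perturbation (Neumann) series for (I - A)^-1 converges.
rng = np.random.default_rng(2)
A = 0.1 * rng.normal(size=(5, 5))

exact = np.linalg.inv(np.eye(5) - A)

series = np.eye(5)
term = np.eye(5)
for _ in range(60):
    term = term @ A          # next order of scattering in the expansion
    series += term

print(np.allclose(series, exact))   # True
```

In practice such expansions are truncated once the remaining terms (higher-order scattering paths) fall below a significance threshold.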

