Big Chemical Encyclopedia


Moore-Penrose

It should be noted that in the case of a singular matrix A, the dimensions of V and Λ are p×r and r×r, respectively, where r is smaller than p. The expression in eq. (29.53) allows us to compute the generalized inverse, specifically the Moore-Penrose inverse, of a symmetric matrix A from the expression ... [Pg.38]
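As a concrete sketch of this construction (in numpy, with a made-up rank-2 symmetric matrix rather than the source's eq. (29.53), whose exact form is truncated above), the Moore-Penrose inverse can be assembled from the p×r matrix V of eigenvectors for the r nonzero eigenvalues and the r×r diagonal eigenvalue matrix:

```python
import numpy as np

# Illustrative 3x3 symmetric matrix of rank r = 2 (p = 3), built as B B^T.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
A = B @ B.T                          # symmetric, singular (rank 2)

# Eigendecomposition A = V L V^T; keep only the r nonzero eigenvalues,
# so V becomes p x r and the eigenvalue matrix becomes r x r.
w, V = np.linalg.eigh(A)
keep = w > 1e-10 * w.max()
Vr, wr = V[:, keep], w[keep]

# Moore-Penrose inverse from the truncated factors: A+ = Vr diag(1/wr) Vr^T.
A_pinv = Vr @ np.diag(1.0 / wr) @ Vr.T

print(np.allclose(A_pinv, np.linalg.pinv(A)))  # True
```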

A. Albert, Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York,... [Pg.163]

To be more precise, only the {1,3}-generalized inverses solve the least-squares problem, where 1 and 3 refer to the first and third of the four Moore-Penrose conditions. [Pg.48]

In algebra, a number multiplied by its inverse equals 1. In matrix algebra, the inverse of a square matrix (denoted by a superscript -1) multiplied by the matrix itself gives the identity matrix. In other words, the inverse of X is the matrix X^-1 such that XX^-1 = X^-1X = I. A matrix X is said to be orthogonal if XX^T = I; the inverse of an orthogonal matrix is its transpose. Not all matrices can be inverted. One necessary condition for inversion is that the matrix be square. Even then an inverse may not exist, particularly if the matrix has linearly dependent columns. In such a case, a generalized inverse can be computed using the Moore-Penrose inverse (denoted by a superscript, e.g., X^-). [Pg.342]
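A minimal numpy illustration of these statements (the matrices are arbitrary examples):

```python
import numpy as np

# Square invertible matrix: X X^-1 = X^-1 X = I.
X = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Xinv = np.linalg.inv(X)
assert np.allclose(X @ Xinv, np.eye(2))
assert np.allclose(Xinv @ X, np.eye(2))

# Orthogonal matrix (here a rotation): its inverse is its transpose.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
assert np.allclose(np.linalg.inv(Q), Q.T)

# A matrix with linearly dependent columns has no ordinary inverse,
# but its Moore-Penrose inverse always exists.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # second column = 2 * first
S_pinv = np.linalg.pinv(S)
assert np.allclose(S @ S_pinv @ S, S)  # Penrose condition (1)
```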

In equations 7.13, 7.25, and 7.27 we have denoted L+ = (L^T L)^-1 L^T, and similarly for M1+ in equation 7.27, D+ in equation 7.28, and N+ in equation 7.29. This notation is used because these are all special cases of the Moore-Penrose pseudoinverse M+, which can be defined for an arbitrary matrix M and which gives the minimum least-squares approximation even in cases where the columns of M may not be linearly independent (see Lawson and Hanson, 1974). Similarly, in equations 7.25, 7.27, and 7.29 we have denoted Ai+ = Ai^T (Ai Ai^T)^-1, since this is another special case of the Moore-Penrose pseudoinverse, for the case where the matrix in question, Ai, has linearly independent rows. [Pg.179]
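Both closed forms are straightforward to verify against a general pseudoinverse routine; a numpy sketch with made-up full-column-rank and full-row-rank matrices:

```python
import numpy as np

# Full column rank L: the pseudoinverse has the closed form (L^T L)^-1 L^T.
L = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
L_plus = np.linalg.inv(L.T @ L) @ L.T
assert np.allclose(L_plus, np.linalg.pinv(L))

# Full row rank Ai: the pseudoinverse is Ai^T (Ai Ai^T)^-1.
Ai = np.array([[1.0, 2.0, 0.0],
               [0.0, 1.0, 1.0]])
Ai_plus = Ai.T @ np.linalg.inv(Ai @ Ai.T)
assert np.allclose(Ai_plus, np.linalg.pinv(Ai))
```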

A. Albert, Regression and the Moore-Penrose Pseudoinverse (Academic Press, New York, 1972). [Pg.105]

Computation of the regression coefficients b vectorwise is carried out by forming the pseudoinverse matrix X^+ (Moore-Penrose matrix) according to... [Pg.235]

In the case of full rank, all singular values are obviously different from zero and the SVD solution equals that of OLS. However, one often encounters several small singular values because of ill-conditioned systems. Therefore, the main goal of PCR is not to keep all singular values for an exact representation of the Moore-Penrose matrix, but to select the subset of singular values that best guarantees prediction for unknown cases. [Pg.235]
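A sketch of the idea in numpy (the design matrix is synthetic, with one near-dependent column and a fixed random seed). Keeping all singular values reproduces the full Moore-Penrose solution, whose coefficients blow up through the tiny 1/s term; truncating to the k largest singular values is the PCR-style remedy:

```python
import numpy as np

# Ill-conditioned design matrix: third column nearly dependent on the others.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1] + 1e-8 * rng.normal(size=20)])
y = rng.normal(size=20)

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Full Moore-Penrose solution keeps all singular values ...
b_full = Vt.T @ ((U.T @ y) / s)
assert np.allclose(b_full, np.linalg.pinv(X) @ y)

# ... whereas PCR truncates to the k largest singular values, discarding
# the tiny one whose reciprocal would dominate the coefficients.
k = 2
b_pcr = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
assert np.linalg.norm(b_pcr) < np.linalg.norm(b_full)
```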

Solution of the system of equations. The system of Eq. (3), whose equations combine numerical values, theoretical expressions, and covariances, can be solved for the adjusted variables Z; best estimates of their values can thus be calculated. The method used in [2,3] consists in using a sequence of linear approximations to system (3), around a numerical vector Z that converges toward the solution of the full, non-linear system (this is akin to Newton's method; see, e.g., [23]). Each of the successive linear approximations to system (3) is solved through the Moore-Penrose pseudo-inverse [20] (see also Ref. [2, App. E]). The numerical solution for Z as found in CODATA 2002 can be found on the web. These values are such that the equations in system (3) are satisfied, as a whole, as well as possible [3, App. E]. [Pg.264]
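The iteration scheme can be sketched generically. The toy two-equation system below is purely illustrative and has nothing to do with the actual CODATA adjustment; it only shows the pattern of solving each successive linearization with the pseudoinverse:

```python
import numpy as np

# Illustrative (not the CODATA system): solve f(Z) = 0 for Z = (z0, z1) with
#   f1 = z0 + z1 - 3,   f2 = z0 * z1 - 2
def f(z):
    return np.array([z[0] + z[1] - 3.0, z[0] * z[1] - 2.0])

def jacobian(z):
    return np.array([[1.0, 1.0],
                     [z[1], z[0]]])

Z = np.array([3.0, 0.0])                 # starting vector
for _ in range(50):
    # Each linear approximation J dZ = -f(Z) is solved with the
    # pseudoinverse, which also copes with a (nearly) singular Jacobian.
    Z = Z + np.linalg.pinv(jacobian(Z)) @ (-f(Z))

print(np.linalg.norm(f(Z)))  # ~0: the equations are satisfied
```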

The determination of the output weights between the hidden and output layers amounts to finding the least-squares solution of the given linear system. The minimum-norm least-squares solution to linear system (1) is M^+ Y, where M^+ is the Moore-Penrose generalized inverse of matrix M. The minimum-norm least-squares solution is unique and has the smallest norm among all least-squares solutions. [Pg.30]
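A numpy sketch with a hypothetical rank-deficient hidden-layer matrix M: perturbing the pseudoinverse solution along a null-space direction leaves the residual unchanged but strictly increases the norm, which is the minimum-norm property stated above.

```python
import numpy as np

# Hypothetical hidden-layer output matrix M with dependent columns.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])      # col3 = col1 + col2 -> rank 2
Y = np.array([1.0, 2.0, 0.5, 0.5])

W = np.linalg.pinv(M) @ Y            # minimum-norm least-squares solution

# Any other least-squares solution differs by a null-space vector and
# therefore has a larger norm.
_, _, Vt = np.linalg.svd(M)
n = Vt[-1]                           # null-space direction (rank 2 of 3)
W_alt = W + 0.7 * n
assert np.isclose(np.linalg.norm(M @ W - Y),
                  np.linalg.norm(M @ W_alt - Y))     # same residual ...
assert np.linalg.norm(W) < np.linalg.norm(W_alt)     # ... larger norm
```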

Since C is not a square matrix (it is 4 × 3), the unknown X, Y, Z coordinates can be solved for by using the Moore-Penrose generalized inverse, as follows ... [Pg.124]
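A sketch with a hypothetical 4×3 matrix C and right-hand side d (the source's actual matrices are not reproduced here); the overdetermined system C p = d is solved in the least-squares sense:

```python
import numpy as np

# Hypothetical 4x3 coefficient matrix and measurement vector.
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0, 6.0])   # consistent with (X, Y, Z) = (1, 2, 3)

p = np.linalg.pinv(C) @ d            # least-squares X, Y, Z coordinates
print(p)                             # ~ [1, 2, 3]
```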

To calculate the correction vector, Eq. 6.19 is solved via matrix inversion, e.g. by calculating the Moore-Penrose generalized matrix inverse ... [Pg.123]

Alternatively, a weighted generalized inverse G may be used instead of the Moore-Penrose pseudo-inverse ... [Pg.41]
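The formula for the weighted generalized inverse is truncated in the source; one common construction (assumed here, not taken from the source) is G = (X^T W X)^-1 X^T W for a full-column-rank X and a symmetric positive-definite weight matrix W, which solves the weighted least-squares problem and reduces to the ordinary pseudoinverse when W = I:

```python
import numpy as np

# Illustrative design matrix and diagonal weight matrix (an assumption;
# the source's own W and formula are not given in the excerpt).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
W = np.diag([1.0, 4.0, 1.0])         # weight the middle observation more

G = np.linalg.inv(X.T @ W @ X) @ X.T @ W
assert np.allclose(G @ X, np.eye(2))                 # a {1}-inverse: G X = I

# With W = I the construction collapses to the Moore-Penrose pseudoinverse.
G_unweighted = np.linalg.inv(X.T @ X) @ X.T
assert np.allclose(G_unweighted, np.linalg.pinv(X))
```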

The easiest way to solve Eq. (2.3.15a) is to use directly the representation of the Moore-Penrose pseudo-inverse G^+ = G^T (G G^T)^-1. This corresponds to the solution of the normal equations. [Pg.46]
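A numpy check of this representation and of its equivalence to solving the normal equations (the matrix is an illustrative full-row-rank example, not Eq. (2.3.15a) itself):

```python
import numpy as np

# Full-row-rank G, underdetermined system G x = y (illustrative numbers).
G = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([3.0, 2.0])

# Direct representation of the pseudoinverse for linearly independent rows.
G_plus = G.T @ np.linalg.inv(G @ G.T)
assert np.allclose(G_plus, np.linalg.pinv(G))

# Equivalently, solve the normal equations (G G^T) lam = y, then x = G^T lam.
lam = np.linalg.solve(G @ G.T, y)
x = G.T @ lam
assert np.allclose(x, G_plus @ y)
assert np.allclose(G @ x, y)          # the system is satisfied exactly
```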



© 2024 chempedia.info