Big Chemical Encyclopedia


Covariant derivatives density

Therefore, Jμ(vac) is a covariantly conserved charge-current density in the vacuum. The coefficient g of the covariant derivative has the units [47-61] of κ/A(0) in the vacuum. Using... [Pg.27]

The Lagrangian density that gives, on variation, the topologically covariant field equations (33) is an explicit function of the spinor variables ψ1, ψ1†, ψ2, ψ2† and of their respective covariant derivatives. (The dagger superscript denotes the Hermitian conjugate of the function.) It has the form ... [Pg.695]

For other centric space groups, the most convenient way to derive the covariance between ρ(rA) and ρ(rB) is to assume that the densities are calculated as for P1, and then averaged over the n symmetry-equivalent positions. This leads, for the averaged density ρobs, to... [Pg.112]

Quadratic discriminant analysis (QDA) is a probabilistic parametric classification technique which represents an evolution of LDA for nonlinear class separations. QDA, like LDA, is based on the hypothesis that the probability density distributions are multivariate normal but, in this case, the dispersion is not the same for all of the categories. It follows that the categories differ not only in the position of their centroids but also in their variance-covariance matrices (different location and dispersion), as represented in Fig. 2.16A. Consequently, the ellipses of different categories differ not only in their position in the plane but also in eccentricity and axis orientation (Geisser, 1964). By connecting the intersection points of each pair of corresponding ellipses (at the same Mahalanobis distance from the respective centroids), a parabolic delimiter is identified (see Fig. 2.16B). The name quadratic discriminant analysis derives from this feature. [Pg.88]
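As an illustrative sketch (not from the cited source), the per-class quadratic discriminant g_k(x) = -½ ln|Σ_k| - ½ (x-μ_k)ᵀ Σ_k⁻¹ (x-μ_k) + ln π_k can be evaluated with each class keeping its own covariance matrix; all data and names below are hypothetical:

```python
import math

def fit_gaussian(points):
    """Mean and 2x2 covariance (ML estimate) of a list of 2-D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def qda_score(x, mean, cov, prior):
    """Quadratic discriminant g_k(x); the class with the largest score wins."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    # Mahalanobis term (x - mu)^T Sigma^-1 (x - mu)
    maha = dx * (inv[0][0] * dx + inv[0][1] * dy) \
         + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return -0.5 * math.log(det) - 0.5 * maha + math.log(prior)

# Two toy classes with clearly different dispersion (hence a quadratic boundary).
class_a = [(0.0, 0.0), (1.0, 0.2), (-1.0, -0.1), (0.2, 0.9), (-0.3, -1.0)]
class_b = [(4.0, 4.0), (4.1, 4.5), (3.8, 3.6), (4.4, 4.2), (3.9, 4.4)]

params = [(fit_gaussian(class_a), 0.5), (fit_gaussian(class_b), 0.5)]

def classify(x):
    scores = [qda_score(x, m, c, p) for (m, c), p in params]
    return scores.index(max(scores))

print(classify((0.1, 0.1)))  # -> 0 (near class A)
print(classify((4.0, 4.1)))  # -> 1 (near class B)
```

Because the two covariance matrices differ, the decision boundary between the two score functions is quadratic rather than linear, which is exactly the feature the excerpt describes.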

Derive the log-likelihood function for the model in (13-18), assuming that εit and ui are normally distributed. [Hint: Write the log-likelihood function as ln L = Σi=1..n ln Li, where ln Li is the log-likelihood function for the T observations in group i. These T observations are jointly normally distributed, with the covariance matrix given in (14-20).] The log-likelihood is the sum of the logs of the joint normal densities of the n sets of T observations,... [Pg.55]
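For reference, the group-wise log-likelihood then has the following general shape (notation assumed from the hint; Ω denotes the T×T covariance matrix of (14-20) and εi the T×1 disturbance vector of group i):

```latex
\ln L = \sum_{i=1}^{n} \ln L_i ,
\qquad
\ln L_i = -\frac{T}{2}\ln 2\pi
          - \frac{1}{2}\ln\lvert\Omega\rvert
          - \frac{1}{2}\,\boldsymbol{\varepsilon}_i^{\prime}\,
            \Omega^{-1}\,\boldsymbol{\varepsilon}_i .
```

Each ln Li is simply the log of a T-variate normal density, so summing over the n independent groups gives the full log-likelihood.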

Box and Draper (1965) took another major step by deriving a posterior density function p(θ|Y), averaged over Σ, for estimating a parameter vector θ from a full matrix Y of multiresponse observations. The errors in the observations were assumed to be normally distributed with an unknown m × m covariance matrix Σ. Michael Box and Norman Draper (1972) gave a corresponding function for a data matrix Y of discrete blocks of responses and applied that function to the design of multiresponse experiments. [Pg.142]

Box and Draper (1965) derived a density function for estimating the parameter vector θ of a multiresponse model from a full data matrix Y, subject to errors normally distributed in the manner of Eq. (4.4-3) with a full unknown covariance matrix Σ. With this type of data, every event u has a full set of m responses, as illustrated in Table 7.1. The predictive density function for prospective data arrays Y from n independent events, consistent with Eqs. (7.1-1) and (7.1-3), is... [Pg.143]
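With the unknown covariance matrix Σ integrated out under a noninformative prior, the Box-Draper posterior reduces to the well-known determinant criterion, sketched here in assumed notation (ŷui(θ) are the model predictions for event u and response i):

```latex
p(\theta \mid Y) \;\propto\; \bigl\lvert\, v(\theta) \,\bigr\rvert^{-n/2},
\qquad
v_{ij}(\theta) \;=\; \sum_{u=1}^{n}
\bigl(y_{ui}-\hat{y}_{ui}(\theta)\bigr)\,
\bigl(y_{uj}-\hat{y}_{uj}(\theta)\bigr).
```

Maximizing this posterior is equivalent to minimizing the determinant of the residual cross-product matrix v(θ), rather than a single-response sum of squares.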

Although very popular and useful in many situations, the minimization of the least-squares norm is a non-Bayesian estimator. A Bayesian estimator [28] is basically concerned with the analysis of the posterior probability density, which is the conditional probability of the parameters given the measurements, while the likelihood is the conditional probability of the measurements given the parameters. If we assume the parameters and the measurement errors to be independent Gaussian random variables, with known means and covariance matrices, and that the measurement errors are additive, a closed form expression can be derived for the posterior probability density. In this case, the estimator that maximizes the posterior probability... [Pg.44]
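The closed-form Gaussian case can be sketched in the simplest scalar setting (hypothetical numbers; y = x + v with a Gaussian prior on x and additive Gaussian noise v, as the excerpt assumes):

```python
def gaussian_map(prior_mean, prior_var, y, noise_var):
    """Closed-form posterior for y = x + v, with independent
    x ~ N(prior_mean, prior_var) and v ~ N(0, noise_var).
    For Gaussians the MAP estimate equals the posterior mean."""
    gain = prior_var / (prior_var + noise_var)   # weight given to the data
    post_mean = prior_mean + gain * (y - prior_mean)
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var

m, v = gaussian_map(prior_mean=0.0, prior_var=4.0, y=2.0, noise_var=1.0)
print(m, v)  # posterior mean = 1.6, posterior variance ~ 0.8
```

The posterior mean is a precision-weighted compromise between the prior mean and the measurement; as noise_var grows, the estimator falls back toward the prior, which is exactly how the Bayesian estimator differs from plain least squares.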

The main objective of this paper is the revision and application of parametric methods in uncertainty propagation. Before applying these methods, however, we must ensure that the output vector really follows a multivariate normal distribution. If no information is available about the joint density function of the output, the widespread procedure to verify this assumption is a multinormal goodness-of-fit test. Fortunately, the following theorem ensures the asymptotic joint normal distribution of the output vector when normality holds for the input vector, under some weak conditions on the differentiability of the functions in the simulator; it also provides the mean vector and covariance matrix of the output vector as functions of their equivalent parameters in the input and of the partial derivatives of the functions in the simulator. [Pg.480]
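The linearization behind such a theorem is the first-order (delta-method) propagation rule Σ_out ≈ J Σ_in Jᵀ, with J the Jacobian of the simulator at the input mean. A minimal scalar-output sketch with illustrative numbers:

```python
def propagate_variance(grad, cov):
    """First-order (delta-method) output variance grad^T Cov grad
    for a scalar output of a two-input simulator."""
    gx, gy = grad
    (sxx, sxy), (_, syy) = cov
    return gx * gx * sxx + 2.0 * gx * gy * sxy + gy * gy * syy

# f(x, y) = x * y linearized at the mean point (2, 3): grad = (y, x) = (3, 2).
# Inputs assumed uncorrelated with variances 0.01 and 0.04 (toy values).
var_f = propagate_variance((3.0, 2.0), ((0.01, 0.0), (0.0, 0.04)))
print(var_f)  # 3^2 * 0.01 + 2^2 * 0.04 = 0.25
```

The same formula with full Jacobian and covariance matrices yields the output mean vector and covariance matrix that the theorem provides.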

Eq. (3.198) just represents the inhomogeneous Maxwell equations in covariant form, cf. Eq. (3.172). We have thus derived the inhomogeneous Maxwell equations as the natural equations of motion for the gauge potential A. The sources, as described by the charge-current density are considered as external variables which do not represent dynamical degrees of freedom, i.e., only the action (or effect) of the sources on the gauge fields is taken into account. [Pg.100]
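In standard notation (Gaussian units assumed here; the prefactor depends on the unit convention used in Eq. (3.172)), the covariant form of the inhomogeneous Maxwell equations reads:

```latex
\partial_\mu F^{\mu\nu} \;=\; \frac{4\pi}{c}\, j^{\nu},
\qquad
j^{\nu} = \bigl(c\rho,\ \mathbf{j}\bigr),
```

where F^{μν} = ∂^μ A^ν − ∂^ν A^μ is the field-strength tensor built from the gauge potential, and the charge-current density j^ν enters only as an external source.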

The discrete set of sample points, xk(i) ∈ Rn, is utilized to derive a representation of the prior density p(xk | y1:k-1). The prior mean and covariance are approximated as the weighted sample mean and covariance of the prior sigma points, and the time-update step is continued as follows ... [Pg.1680]
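The weighted-moment step can be sketched as follows (scalar state for brevity; the points and weights below are placeholders, not the actual unscented-transform weights of the filter):

```python
def sigma_stats(points, w_mean, w_cov):
    """Weighted sample mean and covariance of scalar sigma points,
    as in the prior-moment approximation of the time-update step."""
    mean = sum(w * x for w, x in zip(w_mean, points))
    cov = sum(w * (x - mean) ** 2 for w, x in zip(w_cov, points))
    return mean, cov

points = [1.0, 2.0, 0.0]
weights = [0.5, 0.25, 0.25]   # must sum to 1
m, c = sigma_stats(points, weights, weights)
print(m, c)  # -> 1.0 0.5
```

In the vector case the squared deviation becomes an outer product, and separate mean and covariance weights are typically used, but the structure of the approximation is the same.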

Atomic basis functions in quantum chemistry transform like covariant tensors. Matrices of molecular integrals are therefore fully covariant tensors; e.g., the matrix elements of the Fock matrix are F_μν = ⟨χ_μ|F|χ_ν⟩. In contrast, the density matrix is a fully contravariant tensor, P^μν = ⟨χ^μ|ρ|χ^ν⟩, where the χ^μ are the reciprocal (contravariant) basis functions. This representation is called the covariant integral representation. The derivation of working equations in AO-based quantum chemistry can therefore be divided into two steps: (1) formulation of the basic equations in natural tensor representation, and (2) conversion to covariant integral representation by applying the metric tensors. The first step yields equations that are similar to the underlying operator or orthonormal-basis equations and are therefore simple to derive. The second step automatically yields tensorially correct equations for nonorthogonal basis functions, whose derivation may become unwieldy without tensor notation because of the frequent occurrence of the overlap matrix and its inverse. [Pg.47]
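The index-lowering step can be sketched numerically (illustrative 2×2 numbers; the overlap matrix S plays the role of the metric tensor, so applying it on both sides converts the contravariant density matrix P into its fully covariant form S P S):

```python
def matmul(a, b):
    """Plain n x n matrix product on nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Overlap (metric) matrix of a nonorthogonal two-function basis, and a
# contravariant density matrix P -- both hypothetical, for illustration only.
S = [[1.0, 0.5], [0.5, 1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]

# Lowering both indices with the metric: P_cov = S P S
P_cov = matmul(matmul(S, P), S)
print(P_cov)  # [[1.25, 1.0], [1.0, 1.25]]
```

This mechanical application of the metric is what step (2) above automates: tensorially correct equations for nonorthogonal bases follow without tracking the overlap matrix and its inverse by hand.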


See other pages where Covariant derivatives density is mentioned: [Pg.266]    [Pg.99]    [Pg.163]    [Pg.28]    [Pg.149]    [Pg.150]    [Pg.155]    [Pg.160]    [Pg.191]    [Pg.192]    [Pg.202]    [Pg.203]    [Pg.209]    [Pg.135]    [Pg.237]    [Pg.147]    [Pg.736]    [Pg.84]    [Pg.396]    [Pg.20]    [Pg.164]    [Pg.599]    [Pg.57]    [Pg.571]   





Covariance

Covariant

Covariant derivative

Covariates

Covariation

Density derivatives

© 2024 chempedia.info