
Matrix kernel

The matrix kernel in the integral equation (18) defines the zeroth-order propagator... [Pg.93]

The relevant matrix kernel from the second term on the right-hand side of Eq. (18) is expressed in terms of a pair of modified functions ... [Pg.94]

The second and third terms on the right-hand side equal zero, since the trace of the matrix kernel Q(r,r) vanishes. The next term reduces when the explicit form is used,... [Pg.95]

Here S_i are elements of the matrix kernel M. The latter is similar to that in Eq. (3.470) but of higher rank (6 × 6). The particle balance provides all the other equations ... [Pg.290]

The general formalism developed in Ref. 43 provides the following definition of the matrix kernel (3.104) through the Green's function of the pair evolution during an encounter ... [Pg.344]

A Dirac density operator is defined at a specified time by its matrix kernel... [Pg.80]
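Generically (the source's own expression is truncated above), a pure-state (Dirac) density operator ρ(t) = |ψ(t)⟩⟨ψ(t)| has the coordinate-space kernel

$$ \rho(x, x'; t) = \psi(x, t)\,\psi^{*}(x', t), $$

so that the operator acts as an integral operator with this kernel: $(\hat{\rho} f)(x) = \int \rho(x, x'; t)\, f(x')\, \mathrm{d}x'$.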

After cleaning to remove coarse material (i.e., cobs) and fines (broken corn, dust, etc.), the corn is steeped in a sulfurous acid solution to soften the corn and render the starch granules separable from the protein matrix that envelops them. About 7% of the kernel's dry substance is leached out during this step, forming protein-rich steep-water, a valuable feed ingredient and fermentation adjunct. [Pg.359]

An exhaustive statistical description of living copolymers is provided in the literature [25]. There, proceeding from the kinetic equations of the ideal model, the type of stochastic process that describes the probability measure on the set of macromolecules has been rigorously established. To the state Sα(x) of this process there corresponds a monomeric unit Mα, formed at the instant x by addition of monomer Mα to the macroradical. To the statistical ensemble of macromolecules marked by the label x there corresponds a Markovian stochastic process with discrete time but with a continuum of transient states Sα(x). Here the fundamental distinction from the Markov chain (where the number of states is discrete) is quite evident. The role of the probability transition matrix in characterizing this chain is now played by the integral operator kernel ... [Pg.185]
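As a generic illustration of what replaces the transition matrix (the source's own kernel is truncated above): for a discrete-time Markov process with a continuum of states labeled by (α, x), the transition probabilities are carried by a matrix-valued kernel Q_{αβ}(x, x′), normalized so that each "row" carries unit probability,

$$ \sum_{\beta} \int Q_{\alpha\beta}(x, x')\,\mathrm{d}x' = 1, $$

in direct analogy with the unit row sums of an ordinary transition matrix.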

The 2D Laplace inversion, such as Eq. (2.7.1), can in fact be cast into the 1D form of Eq. (2.7.11). However, the size of the kernel matrix will be huge. For example, a T1-T2 experiment may acquire 30 τ1 points and 8192 echoes for each τ1; assuming that 100 points are used for each of T1 and T2, the kernel will be a matrix of (30 × 8192) × 10,000, with 2.5 × 10⁹ elements. SVD of such a matrix is not practical on current desktop computers. Thus the 1D algorithm cannot be used directly. [Pg.171]
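A minimal numerical sketch of this bookkeeping, assuming the usual separable T1-T2 kernel (the grids and pulse-sequence form are illustrative assumptions of this sketch); it also shows the standard workaround of compressing each 1D kernel by truncated SVD before forming the 2D problem:

```python
import numpy as np

# Hypothetical acquisition grid, matching the example in the text:
n_tau1, n_echo = 30, 8192          # indirect and direct time points
n_T1, n_T2 = 100, 100              # spectrum grid

tau1 = np.logspace(-3, 1, n_tau1)              # s (illustrative values)
t2 = np.arange(1, n_echo + 1) * 5e-4           # echo times, s
T1 = np.logspace(-3, 1, n_T1)
T2 = np.logspace(-3, 1, n_T2)

# Separable kernels of the T1-T2 experiment (inversion-recovery form assumed):
K1 = 1.0 - 2.0 * np.exp(-tau1[:, None] / T1[None, :])   # (30, 100)
K2 = np.exp(-t2[:, None] / T2[None, :])                 # (8192, 100)

# The full 1D-form kernel is the Kronecker product: (30*8192) x 10,000.
print("full kernel elements:", K1.shape[0] * K2.shape[0] * n_T1 * n_T2)  # ~2.5e9

# Workaround: truncated SVD of each small kernel separately, then combine.
s_keep = 10
U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
K1c = np.diag(s1[:s_keep]) @ V1t[:s_keep]   # compressed kernel (10, 100)
K2c = np.diag(s2[:s_keep]) @ V2t[:s_keep]
print("compressed problem size:", s_keep * s_keep, "x", n_T1 * n_T2)
```

Each 1D SVD involves only a 30 × 100 or 8192 × 100 matrix, which is trivial, whereas the combined 2.5 × 10⁹-element kernel never has to be formed.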

Here K is the kernel matrix determining the linear operator in the inversion, A is the resulting spectrum vector, and E is the input data. The matrix element of K for Laplace inversion is K_ij = exp(−t_i/τ_j), where t_i and τ_j are the lists of values for t_D and the decay time constant τ, respectively. The inclusion of the last term α‖A‖² penalizes extremely large spectral values and thus suppresses undesired spikes in the DDIF spectrum. [Pg.347]
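A minimal sketch of such a regularized inversion, minimizing ‖KA − E‖² + α‖A‖² in closed form (the grids, α, and the synthetic data are assumptions of this sketch, not values from the source):

```python
import numpy as np

t = np.linspace(0.01, 5.0, 200)            # acquisition times t_i (s), assumed grid
tau = np.logspace(-2, 1, 50)               # trial decay constants tau_j (s)
K = np.exp(-t[:, None] / tau[None, :])     # K_ij = exp(-t_i / tau_j)

# Synthetic data: two decay components plus noise (illustrative only).
rng = np.random.default_rng(0)
E = 0.7 * np.exp(-t / 0.1) + 0.3 * np.exp(-t / 1.5) \
    + 0.01 * rng.standard_normal(t.size)

# Tikhonov-regularized least squares: minimize ||K A - E||^2 + alpha ||A||^2.
# The closed-form solution solves (K^T K + alpha I) A = K^T E.
# (Real implementations usually also enforce A >= 0.)
alpha = 1e-3
A = np.linalg.solve(K.T @ K + alpha * np.eye(tau.size), K.T @ E)

# Larger alpha -> smoother spectrum, fewer spurious spikes; smaller -> better fit.
print("recovered spectrum peaks near tau =", tau[A.argsort()[-2:]])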

Endosperm constitutes the main part of the corn kernel and consists of 85 to 90% starch, 8 to 10% protein, and a small amount of oil and other compounds. Corn endosperm can be divided into two distinct parts: floury and horny endosperm. In floury endosperm, starch particles are round and are dispersed loosely in the protein matrix. In the horny endosperm, the protein matrix is stronger and starch particles are held more firmly. Starch granules are encased in the continuous protein matrix. The tighter packing in horny endosperm gives starch particles a polygonal shape. On average, the amount of horny endosperm in the corn kernel is twice that of the floury endosperm. However, this ratio is a function of the corn kernel's protein content (Wolf et al., 1952). [Pg.153]

Due to the structure of the corn kernel (cutinized outer layer of the pericarp surrounding the corn kernel), the diffusion of water and chemicals inside the kernel is through a very specific pathway. Initial results with the use of enzymes during steeping (Figure 1) indicated that enzymes were not able to penetrate the kernels and break down the protein matrix surrounding starch particles. For enzymes to penetrate the corn kernel, it was necessary... [Pg.160]

A simple CG model of the linear response (n = 1) of a molecule in a uniform electric field E is used to illustrate the physical meaning of the screened electric field and of the bare and screened polarizabilities. The screened nonlocal CG polarizability is analogous to the exact screened Kohn-Sham response function χs (Equation 24.74). Similarly, the bare CG polarizability can be deduced from the nonlocal polarizability kernel χ1 (Equation 24.4). In DFT, χ1 and χs are related to each other through another potential response function (PRF) (Equation 24.36). The latter is represented by a dielectric matrix in the CG model. [Pg.341]

The right nullspace or kernel of N is defined by r − rank(N) linearly independent columns k_i, arranged into a matrix K that fulfills ... [Pg.126]
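A minimal sketch, for an invented 3-metabolite, 4-reaction network, of building such a kernel matrix K numerically with SciPy:

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative stoichiometric matrix N (3 metabolites x r = 4 reactions):
# rows are metabolites, columns are reactions.
N = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  1, -1]])

K = null_space(N)   # columns k_i span the right nullspace, so N @ K = 0
r = N.shape[1]
print("kernel dimension:", K.shape[1],
      "= r - rank(N) =", r - np.linalg.matrix_rank(N))
print("check N @ K = 0:", np.allclose(N @ K, 0))
```

For this linear pathway the kernel is one-dimensional, with k proportional to (1, 1, 1, 1): at steady state all four reactions must carry the same flux.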

The kernel algorithm also works for univariate y-data (PLS1). As for PLS2, the deflation is carried out only for the matrix X. Now there exists only one positive eigenvalue for Equation 4.69, and the corresponding eigenvector is the vector w1. In this case, the eigenvectors for Equation 4.70 are not needed. [Pg.172]
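A minimal sketch (invented data) of why only one positive eigenvalue occurs for PLS1: the matrix X′yy′X that appears in the kernel algorithm (assumed here to be the content of Equation 4.69) is rank one, so its single nonzero eigenvector is proportional to X′y:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))   # 20 samples, 5 variables (illustrative)
y = rng.standard_normal(20)        # univariate response (PLS1)

M = X.T @ np.outer(y, y) @ X       # kernel matrix X' y y' X: rank one
eigvals, eigvecs = np.linalg.eigh(M)
w1 = eigvecs[:, -1]                # eigenvector of the single positive eigenvalue

# It coincides (up to sign and scale) with the direct PLS1 weight X' y:
w_direct = X.T @ y
w_direct /= np.linalg.norm(w_direct)
print("only one positive eigenvalue:", np.sum(eigvals > 1e-10) == 1)
print("agreement with X'y:", np.allclose(np.abs(w1), np.abs(w_direct)))
```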

Support Vector Machines (SVMs) generate either linear or nonlinear classifiers depending on the so-called kernel [149]. The kernel is a matrix that performs a transformation of the data into an arbitrarily high-dimensional feature space, where linear classification corresponds to nonlinear classification in the original space in which the input data live. SVMs are a relatively recent machine-learning method that has received a lot of attention because of its superiority on a number of hard problems [150]. [Pg.75]
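A minimal sketch (toy data; the RBF kernel and scikit-learn's precomputed-kernel interface are choices of this sketch, not of Ref. [149]) of how a kernel matrix lets a linear SVM separate data that are nonlinearly structured in the input space:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 2))
y = (X[:, 0]**2 + X[:, 1]**2 > 1.0).astype(int)   # not linearly separable

def rbf_gram(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_gram(X, X)                      # kernel matrix of the training data
clf = SVC(kernel="precomputed").fit(K, y)

X_new = rng.standard_normal((5, 2))
K_new = rbf_gram(X_new, X)              # kernel between new and training points
print(clf.predict(K_new))
```

The classifier is linear in the feature space induced by the kernel, yet traces a circular decision boundary in the original two input dimensions.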

The second-order reduced density matrix in a geminal basis is expressed by the parameters of the wave function [6-9]. The second-order reduced density matrix (3) is the kernel of the second-order reduced density operator. The quantities 0 are matrix elements of the second-order reduced density operator in the basis of geminals. In spite of this, the expression "element of the density matrix" is customary; in this sense, 0 is called in the following an element of the second-order reduced density matrix. [Pg.153]
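For orientation (a generic textbook form, not necessarily the source's Eq. (3)): for an N-electron wave function Ψ, the kernel of the second-order reduced density operator is, with the N(N−1)/2 normalization convention (conventions vary),

$$ \Gamma^{(2)}(x_1, x_2; x_1', x_2') = \binom{N}{2} \int \Psi(x_1, x_2, x_3, \ldots, x_N)\, \Psi^{*}(x_1', x_2', x_3, \ldots, x_N)\; \mathrm{d}x_3 \cdots \mathrm{d}x_N , $$

and the matrix elements discussed above are obtained by projecting this kernel onto pairs of geminal basis functions.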

In SIMCA the distribution of the objects in the inner model space is not considered, so the probability density in the inner space is constant, and the overall PD appears as shown in Figs. 29 and 30 for the enlarged and reduced SIMCA models. In CLASSY, kernel estimation is used to compute the PD in the inner model space, whereas the errors in the outer space are considered, as in SIMCA, to be uncorrelated and to follow a multivariate normal distribution, so that the overall distribution, in the inner and outer space of a one-dimensional model, looks like that reported in Fig. 31. Figures 32 and 33 show the PD of the bivariate normal distribution and of the kernel distribution (ALLOC) for the same data matrix as used for Fig. 31. Although no really important differences have been detected between SIMCA (enlarged model), ALLOC, and CLASSY on the data set of French wines, it seems that CLASSY should be chosen when the number of objects is large and the distribution on the components of the inner model space is very different from a rectangular distribution. [Pg.125]
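A minimal sketch (invented scores) of the kind of Gaussian kernel density estimate such methods use for the inner-space probability density; SciPy's gaussian_kde stands in here for the kernel estimators of CLASSY/ALLOC:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Invented scores of one class along an inner model component;
# bimodal, hence far from a rectangular (uniform) distribution:
scores = np.concatenate([rng.normal(-2.0, 0.5, 200),
                         rng.normal(1.5, 0.3, 100)])

kde = gaussian_kde(scores)          # Gaussian kernel density estimate
grid = np.linspace(-4.0, 3.0, 8)
print(np.round(kde(grid), 3))       # estimated PD along the component
```

Unlike the constant inner-space density assumed by SIMCA, the kernel estimate follows both modes of the score distribution.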

The ALLOC method with kernel probability functions has a feature-selection procedure based on prediction rates. This selection method has been used for milk and wine data, and it has been compared with feature selection by SELECT and SLDA. Coomans et al. suggested the use of the loss matrix for a better evaluation of the relative importance of prediction errors. [Pg.135]

