Big Chemical Encyclopedia


Linear models vector

Approximate inference regions for nonlinear models are defined by analogy to the linear models. In particular, the (1 − α)100% joint confidence region for the parameter vector k is described by the ellipsoid,... [Pg.178]
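
The source's ellipsoid equation is elided, but for a model that is linear in the parameters the joint confidence region takes a standard form, sketched here (symbols are the usual ones and are an assumption: k̂ the least-squares estimate, X the model matrix, p the number of parameters, n the number of observations, s² the residual variance, F the upper-α quantile of the F distribution):

```latex
\[
  (\mathbf{k}-\hat{\mathbf{k}})^{\mathsf{T}}\,
  \mathbf{X}^{\mathsf{T}}\mathbf{X}\,
  (\mathbf{k}-\hat{\mathbf{k}})
  \;\le\; p\, s^{2}\, F^{\alpha}_{p,\,n-p}
\]
```

For nonlinear models the analogous region is obtained by linearizing about the estimate, so the ellipsoid is only approximate.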

The linear model can be extended to include more distant neighbours and to three dimensions. Let us consider an elastic lattice wave with wave vector q. The collective vibrational modes of the lattice are illustrated in Figure 8.6. The formation of small local deformations (strain) in the direction of the incoming wave gives rise to stresses in the same direction (upper part of Figure 8.6) but also perpendicular (lower part of Figure 8.6) to the incoming wave because of the elasticity of the material. The cohesive forces between the atoms then transport the deformation of the lattice to the... [Pg.236]

Note that the application of representation theory to quantum mechanics depends heavily on the linear nature of quantum mechanics, that is, on the fact that we can successfully model states of quantum systems by vector spaces. (By contrast, note that the states of many classical systems cannot be modeled with a linear space; consider for example a pendulum, whose motion is limited to a sphere on which one cannot define a natural addition.) The linearity of quantum mechanics is miraculous enough to beg the question: is quantum mechanics truly linear? There has been some investigation of nonlinear quantum mechanical models, but by and large the success of linear models has been enormous and long-lived. [Pg.136]

As can be seen from Figures 6.5 and 6.6, there are several similarities between PLS and PCA. For example, both methods make a linear model of the data table X by means of a score vector, t (one score for each object), and a loading vector, p, which measures the importance of the variables. However, in PCA, neither t nor p is influenced (computationally) by anything but the variation in the measurements. Hence, if it is attempted to relate the measurements X to some external event (for example, drug treatment) via the PC t-scores, it must be realised that, unless this external event is a sufficiently large... [Pg.301]
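
The shared score/loading structure of PCA and PLS can be sketched with a rank-one decomposition in NumPy; the names t and p follow the text, while the data values are invented for illustration:

```python
import numpy as np

# Toy data table X: 5 objects (rows) x 3 variables (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Xc = X - X.mean(axis=0)          # mean-center, as is usual in PCA

# First principal component via SVD: Xc is approximated by t p^T.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
t = U[:, 0] * S[0]               # score vector (one score per object)
p = Vt[0, :]                     # loading vector (one weight per variable)

rank1 = np.outer(t, p)           # rank-one linear model of the data table
residual = np.linalg.norm(Xc - rank1)
full = np.linalg.norm(Xc)
print(residual < full)           # the component captures part of the variation
```

In PCA, t and p are computed from the variation in X alone; PLS would instead bias them toward covariance with an external response, which is the distinction the paragraph above draws.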

Support Vector Machine (SVM) is a classification and regression method developed by Vapnik [30]. In support vector regression (SVR), the input variables are first mapped into a higher-dimensional feature space by the use of a kernel function, and then a linear model is constructed in this feature space. The kernel functions often used in SVM include linear, polynomial, radial basis function (RBF), and sigmoid functions. The generalization performance of SVM depends on the selection of several internal parameters of the algorithm (C and ε), the type of kernel, and the parameters of the kernel [31]... [Pg.325]
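
A minimal SVR sketch, using scikit-learn (the library choice is an assumption; the source names no software). The RBF kernel performs the implicit mapping to the feature space, and C and epsilon are the internal parameters the text mentions; the data are invented:

```python
import numpy as np
from sklearn.svm import SVR

# Toy regression data: y = 2x plus a little noise.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
y = 2.0 * X.ravel() + 0.05 * rng.normal(size=40)

# The RBF kernel maps inputs into a high-dimensional feature space;
# a linear model is then fit there.  C (regularization) and epsilon
# (tube width) govern generalization and must be chosen by the user.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X, y)

pred = model.predict(np.array([[0.5]]))
print(pred)  # should be roughly 1.0 for this toy function
```

In practice C, epsilon, and the kernel parameters (e.g. gamma for RBF) are tuned by cross-validation, which is the parameter-selection issue the paragraph raises.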

Linear. Since mass and energy are linearly related between modules, purely linear flowsheet calculations can be formulated as a solution to a set of linear equations once linear models for the modules are constructed. Linear systems, especially for material balance calculations, can be very useful (16). Two general systems based on linear models, SYMBOL (77) and MPB II (78), are indicated in Table 1. MPB II is based on a thesis by Kniele (79). If Y is the vector of stream outputs and the module stream inputs are X, then as discussed by Mahalec, Kluzik and Evans (80)... [Pg.26]
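
The cited formulation is truncated above, but the idea of a flowsheet closing into a set of linear equations can be sketched with an invented two-module example with a recycle (all coefficients are hypothetical, not from MPB II or SYMBOL):

```python
import numpy as np

# Hypothetical linear module models with a recycle stream:
#   module 1:  y1 = 0.9 * (feed + y2)   (y2 is recycled back to module 1)
#   module 2:  y2 = 0.5 * y1
# Collecting the unknown stream flows s = [y1, y2] gives A s = b:
#   y1 - 0.9*y2 = 0.9*feed
#  -0.5*y1 + y2 = 0
feed = 100.0
A = np.array([[1.0, -0.9],
              [-0.5, 1.0]])
b = np.array([0.9 * feed, 0.0])

s = np.linalg.solve(A, b)        # solve the whole flowsheet at once
y1, y2 = s
print(y1, y2)
```

Because the module models are linear, the recycle needs no iterative tearing: the balance closes in a single direct linear solve, which is the advantage the paragraph describes.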

Linear Models in Parameters, Single Reaction We adopt the terminology from Froment and Hosten, Catalytic Kinetics—Modeling, in Catalysis—Science and Technology, Springer-Verlag, New York, 1981. For n observations (experiments) of the concentration vector y for a model linear in the parameter vector β of length p < n, the residual error e is the difference between the kinetic model-predicted values and the measured data values... [Pg.37]

The linear model is represented as a linear transformation of the parameter vector β through the model matrix X. Estimates b of the true parameters β are obtained by minimizing the objective function S(β), the sum of squares of the residual errors, while varying the values of the parameters... [Pg.37]
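
The minimization of S(β) has the familiar least-squares solution; a NumPy sketch, with the notation following the text (X the model matrix, b the estimates) and the data invented:

```python
import numpy as np

# Model matrix X: n = 6 observations, p = 2 parameters (intercept + slope).
X = np.column_stack([np.ones(6), np.arange(6.0)])
beta_true = np.array([1.0, 0.5])        # "true" parameters (invented)
rng = np.random.default_rng(2)
y = X @ beta_true + 0.01 * rng.normal(size=6)

# Minimizing S(beta) = ||y - X beta||^2 leads to the normal equations
# (X^T X) b = X^T y; lstsq solves them in a numerically stable way.
b, *_ = np.linalg.lstsq(X, y, rcond=None)

S = np.sum((y - X @ b) ** 2)            # objective value at the minimum
print(b, S)
```

With little noise, b recovers the true parameters closely; the residual sum of squares S at the minimum is what the variance estimates further below are built from.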

Let X1 be the model matrix which corresponds to the linear model, and let X2 be a complementary matrix which is used to append the interaction terms to the linear model. The columns in X2 are the vectors of variable cross-products. Let b1 be the vector of "true" parameters of the linear model, and let b2 be the vector of "true" cross-product coefficients. [Pg.192]
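
Building the cross-product columns of X2 and appending them to X1 can be sketched as follows (the names X1 and X2 follow the text; the data are invented):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
X1 = rng.normal(size=(8, 3))     # model matrix of the linear model

# X2: one column per variable cross-product x_i * x_j (i < j),
# i.e. the interaction terms to be appended to the linear model.
X2 = np.column_stack([X1[:, i] * X1[:, j]
                      for i, j in combinations(range(X1.shape[1]), 2)])

X = np.hstack([X1, X2])          # extended model matrix with interactions
print(X.shape)                   # (8, 6): 3 linear + 3 cross-product columns
```

The extended matrix X is then used in the same least-squares machinery as before, with the parameter vector stacked as (b1, b2).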

The uncertainty of a multiple linear model can be estimated with a reliable estimate of the variance of y; a known σ value can also be used for uncertainty predictions. Otherwise, one requires an approximation based on the sum of squared residuals. The residual vector for a multiple linear model was previously given in Equation (3.47). In terms of e and other previously defined quantities, the variance may be estimated [2] by... [Pg.241]
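
The text's Equation (3.47) is not reproduced here, so this sketch simply computes the residual vector e from an invented fit and forms the standard variance estimate s² = eᵀe/(n − p), which is the usual form of the approximation the paragraph refers to:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 20, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])       # "true" parameters (invented)
sigma = 0.1
y = X @ beta + sigma * rng.normal(size=n)

b, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ b                    # residual vector

# Variance estimated from the sum of squared residuals, divided by
# the degrees of freedom n - p (p parameters were fit to n points).
s2 = (e @ e) / (n - p)
print(s2)                        # should be near sigma**2 = 0.01
```

This s² then feeds the parameter covariance s²(XᵀX)⁻¹ used for the uncertainty predictions mentioned above.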

In brief, ICA provides a linear model of the information, decomposing it into different contributions under the criterion of independence maximization. These directions in the space are described by the independent components, which can be considered formally equivalent to PCA loadings. The mixing matrix provides the contribution of each random vector to each direction in the independent-components space, which is formally equivalent to PCA scores. [Pg.60]
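
A hedged sketch of that decomposition using scikit-learn's FastICA (an assumed implementation; the source names no software). Two invented independent sources are mixed linearly, then recovered together with an estimate of the mixing matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent sources mixed linearly (all values invented).
rng = np.random.default_rng(5)
t = np.linspace(0, 8, 500)
S = np.column_stack([np.sin(2 * t), np.sign(np.cos(3 * t))])
A = np.array([[1.0, 0.5], [0.4, 1.0]])      # mixing matrix
Xm = S @ A.T                                 # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(Xm)   # independent components (cf. PCA loadings)
A_est = ica.mixing_             # estimated mixing matrix (cf. PCA scores)

# The linear model reconstructs the observations from the components
# and the mixing matrix (plus the removed mean).
recon = S_est @ A_est.T + ica.mean_
print(np.allclose(Xm, recon, atol=1e-6))
```

Independent components are recovered only up to permutation, sign, and scale, so A_est matches A only up to those ambiguities, even though the reconstruction of Xm is essentially exact.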

