
Variables latent

The PLS regression method can be extended to accommodate problems where multiple y variables must be predicted. This extension, commonly called PLS2, operates on the same principle as PLS, where the goal is to find compressed variables (latent variables) that sequentially describe the most variance in both the x and y data [1]. However, the algorithm for the PLS2 method differs slightly from that of the PLS method, in that one must now account for covariance between different y variables as well as covariance between x variables. [Pg.387]

In order to handle multiple Y-variables, an extension of the PLS regression method discussed earlier, called PLS-2, must be used. The algorithm for the PLS-2 method is quite similar to the PLS algorithms discussed earlier. Just like the PLS method, this method determines each compressed variable (latent variable) based on the maximum variance explained in both X and Y. The only difference is that Y is now a matrix containing several Y-variables. For PLS-2, the second equation in the PLS model (Equation 8.36) can be replaced with the following ... [Pg.292]
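Neither excerpt reproduces the PLS-2 algorithm itself, so below is a minimal NIPALS-style sketch of how PLS-2 extracts latent variables from a multi-column Y. The function name pls2_nipals and all variable names are illustrative assumptions, not taken from the cited texts.

```python
import numpy as np

def pls2_nipals(X, Y, n_components, tol=1e-10, max_iter=500):
    """Sketch of NIPALS PLS2: extract latent variables that describe the
    covariance between X and a multi-column Y (rows = objects, columns =
    variables). Illustrative only, not the cited books' exact algorithm."""
    X, Y = X.astype(float).copy(), Y.astype(float).copy()
    T, P, W, Q = [], [], [], []
    for _ in range(n_components):
        # start u from the highest-variance y column
        u = Y[:, [int(np.argmax(Y.var(axis=0)))]]
        for _ in range(max_iter):
            w = X.T @ u / (u.T @ u)           # X weights from current y scores
            w /= np.linalg.norm(w)
            t = X @ w                         # X scores (the latent variable)
            q = Y.T @ t / (t.T @ t)           # Y weights from current x scores
            q /= np.linalg.norm(q)
            u_new = Y @ q                     # updated y scores
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        p = X.T @ t / (t.T @ t)               # X loadings
        X -= t @ p.T                          # deflate X ...
        Y -= t @ (Y.T @ t / (t.T @ t)).T      # ... and Y with the same scores
        T.append(t); P.append(p); W.append(w); Q.append(q)
    return tuple(np.hstack(m) for m in (T, P, W, Q))
```

Deflating both blocks with the X-scores t keeps successive latent variables orthogonal, which is what lets each one describe the remaining X-Y covariance sequentially, as the excerpts state.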

Stimulation of the peripheral nerve trunk of intact animals leads to the generation of muscle action potentials of three types. According to the duration of their latent periods, they fall into the following order: the M-response (the result of direct stimulation of α-motor neuron axons), the H-response (the monosynaptic response), and polysynaptic responses with a variable latent period from 8-12 up to about 40 ms. In test animals of group III, the changes in temporal parameters concern mainly the latent period and duration of the M-response (Table 7.4). Polysynaptic responses occur at all intensities of excitation and have a more pronounced character than in intact rats. A marked level and more distinct differentiation of the peaks of the complex action potential were noted. [Pg.79]

Motor capacity (static endurance and dynamic performance at forced motor activity) was studied in a total of 96 albino rats that were repeatedly exposed to 0 (42 males/group), 48, 385, or 770 ppm carbon disulfide via inhalation (18 males/group) (Frantik 1970). Acute toxicity was measured 0-60 minutes after termination of exposure, and chronic toxicity was measured 48-72 hours postexposure. After initial exposure to the 770-ppm dose there were reductions in spontaneous motor activity (60%), conditioned avoidance, and motor performance. Effects persisted for 24 hours but disappeared completely 3 days postexposure and failed to reappear after repeated experiments. Symptoms of motor impairment were observed after a variable latent period and were related to exposure concentration (385-ppm dose,... [Pg.56]

All students of the action of fluoroac-etate have been impressed with the unusually long and variable latent period between... [Pg.791]

The linear PLS model finds A new variables, the latent variables, also called X-scores and denoted by t_a (a = 1, 2, ..., A). These scores are linear combinations of the original variables x_k with the coefficients (weights) w*_ka (a = 1, 2, ..., A). [Pg.2008]
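Written out explicitly (a standard statement of the PLS score equation in Wold's notation, supplied here because the excerpt's symbols were lost in extraction; i indexes objects, k variables, a components):

```latex
t_{ia} = \sum_{k} w^{*}_{ka}\, x_{ik}, \qquad a = 1, 2, \ldots, A
\qquad \text{or, in matrix form,} \qquad
\mathbf{T} = \mathbf{X}\,\mathbf{W}^{*}
```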

Following a variable latent period ranging from 1 to 24 h, there may be signs of developing pulmonary oedema with dyspnoea, coughing and the production of large amounts of foamy sputum (Fig. 7.2). [Pg.138]

Another problem is to determine the optimal number of descriptors for the objects (patterns), such as for the structure of a molecule. A widespread rule of thumb is to keep the number of descriptors below about 20 % of the number of objects in the dataset. However, this holds only for ordinary multilinear regression analysis. Some more advanced methods, such as Projection to Latent Structures (or Partial Least Squares, PLS), use so-called latent variables to achieve both modeling and prediction. [Pg.205]

We have to apply projection techniques which allow us to plot the hyperspaces onto two- or three-dimensional space. Principal Component Analysis (PCA) is a method well suited to this task; it is described in Section 9.4.4. PCA operates with latent variables, which are linear combinations of the original variables. [Pg.213]
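As a concrete illustration, here is a minimal sketch of that projection via the singular value decomposition; it is not from the cited text, and the function name and synthetic data are assumptions:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered data onto its leading principal components.
    Each component is a linear combination of the original variables."""
    Xc = X - X.mean(axis=0)                      # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # scores for a 2-D/3-D plot

# Example: 100 objects described by 8 variables, plotted via 2 latent variables
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
scores = pca_scores(X, n_components=2)           # shape (100, 2)
```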

The profits from using this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS. Therefore, an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time. As our observations have shown, the same is not the case with PLS. Therefore, SVD as a data transformation technique enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables as an input vector of much lower dimensionality for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95 %) in a few initial columns of the scores matrix. [Pg.217]
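A hedged sketch of that transformation follows, assuming synthetic descriptor data; the 95 % figure is the one quoted in the excerpt, and everything else (names, sizes) is illustrative:

```python
import numpy as np

def svd_compress(X, variance_kept=0.95):
    """Truncated SVD: keep only the leading score columns that retain
    the requested fraction of the total variance."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)        # cumulative explained variance
    r = int(np.searchsorted(frac, variance_kept)) + 1
    return U[:, :r] * s[:r]                      # low-dimensional NN input vectors

# Many correlated molecular descriptors collapse to a few latent variables
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 10))
descriptors = latent @ rng.normal(size=(10, 500)) \
              + 0.01 * rng.normal(size=(200, 500))
inputs = svd_compress(descriptors)
print(inputs.shape)                              # roughly (200, 10)
```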

The aim of PCA is to create a set of latent variables that is smaller than the set of original variables but still explains as much as possible of the variance in the matrix X of the original variables. [Pg.446]

Table 12.5 Weightings of the various parameters in the first three latent variables.
The maximum number of latent variables is the smaller of the number of x values or the number of molecules. However, there is an optimum number of latent variables in the model, beyond which the predictive ability of the model does not increase. A number of methods have been proposed to decide how many latent variables to use. One approach is to use a cross-validation method, which involves adding successive latent variables. Both leave-one-out and the group-based methods can be applied. As the number of latent variables increases, the cross-validated r^2 will first increase and then either reach a plateau or even decrease. Another parameter that can be used to choose the appropriate number of latent variables is the standard deviation of the error of the predictions, s_PRESS ... [Pg.725]
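A minimal sketch of the leave-one-out variant using scikit-learn's PLSRegression (a modern stand-in, since the cited text predates the library; the helper name and data handling are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def q2_per_components(X, y, max_components):
    """Leave-one-out cross-validated r^2 as a function of the number of
    latent variables; choose the count where the curve plateaus."""
    q2 = []
    tss = np.sum((y - y.mean()) ** 2)
    for a in range(1, max_components + 1):
        y_pred = cross_val_predict(PLSRegression(n_components=a),
                                   X, y, cv=LeaveOneOut())
        press = np.sum((y - np.ravel(y_pred)) ** 2)   # predictive residual SS
        q2.append(1.0 - press / tss)
    return q2
```

The same PRESS values feed the s_PRESS criterion mentioned in the excerpt (one common definition is sqrt(PRESS / (n - a - 1)), though conventions vary).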

Partial least-squares path modeling with latent variables (PLS), a newer, general method of handling regression problems, is finding wide application in chemometrics. This method allows the relations between many blocks of data, i.e., data matrices, to be characterized (32-36). Linear and multiple regression techniques can be considered special cases of the PLS method. [Pg.426]

Other chemometric methods to improve calibration have been advanced. The method of partial least squares has been useful in multicomponent calibration (48-51). In this approach the concentrations are related to latent variables in the block of observed instrument responses. Thus PLS regression can solve the collinearity problem and provide all of the advantages discussed earlier. Principal components analysis coupled with multiple regression, often called principal component regression (PCR), is another calibration approach that has been compared and contrasted to PLS (52-54). Calibration problems can also be approached using the Kalman filter as discussed (43). [Pg.429]
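For contrast with PLS, here is a minimal numpy-only PCR sketch; the names and structure are illustrative assumptions, not from the cited source:

```python
import numpy as np

def pcr_fit_predict(X, y, X_new, n_components):
    """Principal component regression: regress y on the leading PCA
    scores of X instead of on the (possibly collinear) responses."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T            # loadings of the retained components
    T = Xc @ V                         # scores are orthogonal: no collinearity
    b, *_ = np.linalg.lstsq(T, y - y_mean, rcond=None)
    return (X_new - x_mean) @ V @ b + y_mean    # predictions for new spectra
```

Because the scores are orthogonal, the regression step stays numerically stable even when the original instrument responses are highly collinear, which is the advantage the excerpt alludes to.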

Physical and Chemical Properties:
Physical State at 15 °C and 1 atm: Liquid
Molecular Weight: Variable, 200 to 2000
Boiling Point at 1 atm: Not pertinent (decomposes)
Freezing Point: -22 to -58 °F = -30 to -50 °C = 243 to 223 K
Critical Temperature: Not pertinent
Critical Pressure: Not pertinent
Specific Gravity: 1.012 at 20 °C (liquid)
Vapor (Gas) Specific Gravity: Not pertinent
Ratio of Specific Heats of Vapor (Gas): Not pertinent
Latent Heat of Vaporization: Not pertinent
Heat ... [Pg.322]

Due to the dilute nature of the stream, we may assume that the latent heat of condensation for MEK is much smaller than the sensible heat removed from the gas. Therefore, we can apply the procedure presented in Section 10.5. Once a value is selected for ΔT2, Eq. (10.19) can be used to determine T, and Eqs. (10.3b) and (10.7) can be employed to calculate the value of ... Since the bounds on p (and consequently on ΔT2) are tight, we will iterate over two values of p, 0.96 and 1.00 (ΔT2 = 90.0 and 94.5 K). The other iterative variable is ΔT1. Both variables are used to trade off fixed versus operating costs. As an illustration, consider the following iteration: ΔT1 = 5 K and ... [Pg.256]

We are about to enter what is, to many, a mysterious world: the world of factor spaces and the factor-based techniques, Principal Component Analysis (PCA, sometimes known as Factor Analysis) and Partial Least-Squares (PLS) in latent variables. Our goal here is to thoroughly explore these topics using a data-centric approach to dispel the mysteries. When you complete this chapter, neither factor spaces nor the rhyme at the top of this page will be mysterious any longer. As we will see, it's all in your point of view. [Pg.79]

Partial least-squares in latent variables (PLS) is sometimes called partial least-squares regression, or PLSR. As we are about to see, PLS is a logical, easy-to-understand variation of PCR. [Pg.131]

PLS is more complex than PCR because we are simultaneously using degrees of freedom in both the x-block and the y-block data. In the absence of a rigorous derivation of the proper number of degrees of freedom to use for PLS, a simple approximation is the number of samples, n, minus the number of factors (latent variables), f, minus 1. [Pg.170]
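In symbols, the approximation quoted above reads:

```latex
\mathrm{df}_{\mathrm{PLS}} \approx n - f - 1
```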

Partial least squares regression (PLS). Partial least squares regression applies to the simultaneous analysis of two sets of variables on the same objects. It allows for the modeling of inter- and intra-block relationships from an X-block and Y-block of variables in terms of a lower-dimensional table of latent variables [4]. The main purpose of regression is to build a predictive model enabling the prediction of wanted characteristics (y) from measured spectra (X). In matrix notation we have the linear model with regression coefficients b ... [Pg.544]
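The excerpt is truncated before the equation itself, which is left as-is; for orientation, the standard linear model that such a passage leads into (an assumption here, not necessarily the book's exact equation) is:

```latex
\mathbf{y} = \mathbf{X}\,\mathbf{b} + \mathbf{f}
```

where the vector f collects the residuals not explained by the model.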


Chemometrical latent variable

Feature selection with latent variable methods

Latent

Latent ordering variable

Latent variable analysis

Latent variable decomposition

Latent variable regression

Latent variable regression analysis

Latent variables calibration

Latent variables definition

Latent variables, multivariate data

Partial least-squares in latent variables
