
Relationships with Other Regression Methods

ANNs can be related to some other classical regression methods. Here we summarise the excellent introductory work of Naes et al. [54] and Despagne and Massart [7], in particular the relationships between ANNs and the three regression methods considered in this book. Although Kateman stated that the... [Pg.264]

...the original spectra that we have recorded have many variables, and so the number of input nodes in any ANN would be too large, with a subsequent... [Pg.265]

To develop our calibration data set using an experimental design (see Chapter 2), in order to be reasonably confident that the whole experimental domain is represented by the standard solutions to be prepared. [Pg.267]
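
As a minimal sketch of such a design step, the snippet below generates a full-factorial grid of standard solutions; the analyte names and concentration levels are hypothetical, since the original text does not specify them.

```python
# Hypothetical full-factorial calibration design: every combination of
# three concentration levels for two analytes (names and levels invented
# for illustration only).
from itertools import product

levels = {
    "analyte_A": [0.1, 0.5, 1.0],   # mg/L, hypothetical levels
    "analyte_B": [0.2, 1.0, 2.0],
}

design = list(product(*levels.values()))
for run, point in enumerate(design, start=1):
    print(f"standard {run}: " +
          ", ".join(f"{name}={c}" for name, c in zip(levels, point)))
```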

Subsequently, we fixed the number of neurons in the input layer as the number of PC scores (here we started with 10). Regarding the hidden layer, we followed current suggestions [49] and a reduced number of neurons was assayed, from 2 to 6. The number of neurons in the output layer is often simpler to set: as we are interested in performing a regression between a set of input variables and a variable to be predicted (e.g. the concentration of an analyte), a unique neuron is sufficient. [Pg.267]
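
A minimal sketch of this architecture search, assuming scikit-learn and synthetic data in place of the spectra from the original study; the 10 PC scores and the 2-to-6 hidden-neuron range follow the text, while the data, solver, and cross-validation settings are illustrative.

```python
# Sketch: compress spectra to 10 PC scores, then assay hidden layers of
# 2 to 6 neurons, as described in the text. Data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))           # 50 spectra, 200 wavelengths (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=50)  # synthetic concentration

scores = PCA(n_components=10).fit_transform(X)   # input layer: 10 PC scores

for n_hidden in range(2, 7):             # hidden layer: 2 to 6 neurons
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=0)
    cv = cross_val_score(net, scores, y, cv=5,
                         scoring="neg_root_mean_squared_error")
    print(f"{n_hidden} hidden neurons: RMSECV = {-cv.mean():.3f}")
```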

Before training the net, the transfer functions of the neurons must be established. Here, different assays can be made (as detailed in the previous sections), but most often the hyperbolic tangent function (the tansig function in Table 5.1) is selected for the hidden layer. We set the linear transfer function (purelin in Table 5.1) for the output layer. In all cases the output function was the identity function (i.e. no further operations were made on the net signal given by the transfer function). [Pg.267]
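
In plain numpy, the two transfer functions and the identity output function amount to the following sketch; the 10-4-1 layout and the random weights are placeholders, not trained values from the original study.

```python
# tansig (hyperbolic tangent) and purelin (linear) transfer functions,
# plus one forward pass through a 10-4-1 net with placeholder weights.
import numpy as np

def tansig(n):            # hyperbolic tangent transfer function
    return np.tanh(n)

def purelin(n):           # linear transfer function
    return n

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 10)), rng.normal(size=4)   # hidden layer (4 neurons)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)    # output layer (1 neuron)

x = rng.normal(size=10)                  # one sample of 10 PC scores
hidden = tansig(W1 @ x + b1)             # hidden layer: tansig
output = purelin(W2 @ hidden + b2)       # output layer: purelin
# The output function is the identity: the signal is passed on unchanged.
print(output)
```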


Non-linear models may be fitted to data sets by the inclusion of functions of physicochemical parameters in a linear regression model (for example, an equation in π and π², as shown in Fig. 6.5) or by the use of non-linear fitting methods. The latter topic is outside the scope of this book but is well covered in many statistical texts (e.g. Draper and Smith 1981). Construction of linear regression models containing non-linear terms is most often prompted when the data are clearly not well fitted by a linear model, e.g. Fig. 6.4e, but where regularity in the data suggests that some other model will fit. A very common example in the field of quantitative structure-activity relationships (QSAR) involves non-linear relationships with hydrophobic descriptors such as log P or π. Non-linear dependency of biological properties on these parameters became apparent early in the... [Pg.127]
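
A sketch of the first approach, fitting a parabolic (Hansch-type) dependence on a hydrophobic descriptor by ordinary linear regression on log P and (log P)²; the data and coefficients below are synthetic stand-ins, not values from the text.

```python
# Linear regression with a non-linear (squared) term: activity modelled as
# a + b*logP + c*logP**2. Synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
logP = rng.uniform(-1, 5, size=30)
activity = 2.0 + 1.5 * logP - 0.3 * logP**2 + rng.normal(scale=0.1, size=30)

X = np.column_stack([np.ones_like(logP), logP, logP**2])   # design matrix
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)        # ordinary least squares
a, b, c = coef
print(f"activity = {a:.2f} + {b:.2f}*logP + {c:.2f}*logP^2")
print(f"optimum logP = {-b / (2 * c):.2f}")                # vertex of the parabola
```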

The other technique, the support vector machine (SVM), is emerging as a powerful method for performing both classification and regression tasks. It can be employed as such or combined with other multivariate regression methods, such as PLS. SVM is not a natural-computation method in itself, because it performs deterministic calculations and so randomness in the results is avoided. However, it derives from the field of automatic (machine) learning (some of the most relevant developers worked on ANNs as well) and there is a fairly close relationship with multilayer perceptrons (perceptrons will be introduced in the next section). Therefore, SVM has been included in this chapter. [Pg.367]
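
A minimal sketch of SVM regression with scikit-learn; the RBF kernel, hyperparameter values, and synthetic data are illustrative choices, not taken from the original text.

```python
# Support vector regression (epsilon-SVR) on synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
svr.fit(X, y)
print("training R^2:", svr.score(X, y))
# Refitting on the same data gives identical results: the calculation is
# deterministic, with no randomness in the results.
```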

The pool of descriptors that is calculated must be winnowed down to a manageable set before constructing a statistical or neural network model. This operation is called feature selection. The first step of feature selection is to use a battery of objective statistical methods. Descriptors that contain little information, descriptors that have little variation across the data set, or descriptors that are highly correlated with other descriptors are candidates for elimination. Multivariate correlations among descriptors can also be discovered with multiple linear regression analysis, and these relationships can be broken by elimination of descriptors. [Pg.2325]
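
The objective first step might look like the sketch below: drop near-constant descriptors, then drop one of each highly correlated pair. The 0.01 variance and 0.95 correlation thresholds are hypothetical, as is the data set.

```python
# Objective feature selection: remove low-variance descriptors, then one
# member of each highly correlated pair. Thresholds are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
desc = pd.DataFrame(rng.normal(size=(60, 8)),
                    columns=[f"d{i}" for i in range(8)])
desc["d7"] = 0.001 * rng.normal(size=60)              # nearly constant
desc["d6"] = desc["d0"] + 0.01 * rng.normal(size=60)  # near-duplicate of d0

desc = desc.loc[:, desc.var() > 0.01]          # drop low-variance descriptors

corr = desc.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
desc = desc.drop(columns=to_drop)              # break pairwise correlations
print("retained descriptors:", list(desc.columns))
```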

At this point it should be remarked that multivariate regression with latent variables is a useful tool for describing the relationship between complex processes and/or features in the environment. A specific example is the prediction of the relationship between the hydrocarbon profile in samples of airborne particulate matter and other variables, e.g. extractable organic material, carbon preference index of the n-alkane homologous series, and particularly mutagenicity. The predictive power was between 68% and 81% [ARMANINO et al., 1993]. VONG [1993] describes a similar example in which the method of PLS regression was used to compare rainwater data with different emission source profiles. [Pg.263]
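A sketch of such a latent-variable regression with scikit-learn's PLS implementation; the predictor block, response, and number of components are synthetic illustrations, not the environmental data of the cited studies.

```python
# PLS regression: relate a block of predictors (e.g. a hydrocarbon profile)
# to a response through a few latent variables. Synthetic data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 12))                   # predictor block
y = X[:, :3] @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=0.2, size=40)

pls = PLSRegression(n_components=3)             # 3 latent variables
pls.fit(X, y)
print("explained variance (R^2):", pls.score(X, y))
```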

Given a set of experimental data, we look for the time profile of the A(t) and b(t) parameters in (C.1). To perform this key operation in the procedure, it is necessary to estimate the model on-line, at the same time as the input-output data are received [600]. Identification techniques that comply with this context are called recursive identification methods, since the measured input-output data are processed recursively (sequentially) as they become available. Other commonly used terms for such techniques are on-line or real-time identification, or sequential parameter estimation [352]. Using these techniques, it may be possible to investigate time variations in the process in a real-time context. However, tools for recursive estimation are available only for discrete-time models. If the input r(t) is piecewise constant over the time intervals (this condition is fulfilled in our context), then the conversion of (C.1) to a discrete-time model is possible without any approximation or additional hypothesis. The most common discrete-time models are difference equation descriptions, such as the Auto-Regression with eXtra inputs (ARX) model. The basic relationship is the linear difference equation... [Pg.360]
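
The snippet cuts off before the equation; in the usual ARX notation (an assumption here) it reads y(t) + a1*y(t-1) + ... + a_na*y(t-na) = b1*r(t-1) + ... + b_nb*r(t-nb) + e(t). Below is a recursive least-squares sketch for a first-order case, with invented true parameters; it is not the estimator of the cited references, just the standard textbook update.

```python
# Recursive least-squares (RLS) estimation of a first-order ARX model:
#   y(t) = -a1*y(t-1) + b1*r(t-1) + e(t)
# Parameters are updated sequentially as each new sample arrives.
import numpy as np

rng = np.random.default_rng(6)
a1_true, b1_true = -0.8, 1.5
N = 200
r = rng.choice([-1.0, 1.0], size=N)            # piecewise-constant input
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a1_true * y[t - 1] + b1_true * r[t - 1] + 0.05 * rng.normal()

theta = np.zeros(2)                            # estimates of [a1, b1]
P = 1e3 * np.eye(2)                            # covariance of the estimate
for t in range(1, N):
    phi = np.array([-y[t - 1], r[t - 1]])      # regressor vector
    k = P @ phi / (1.0 + phi @ P @ phi)        # gain
    theta = theta + k * (y[t] - phi @ theta)   # update with prediction error
    P = P - np.outer(k, phi @ P)               # covariance update

print("estimated a1, b1:", theta)              # should approach (-0.8, 1.5)
```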

Model II linear regression is suitable for experiments where a dependent variable Y varies with an independent variable X which has an error associated with it, and the mean (expected) value of Y is given by a + bX. This might occur where the experimenter is measuring two variables and believes there to be a causal relationship between them; both variables will be subject to errors in this case. The exact method to use depends on whether your aim is to estimate the functional relationship or to estimate one variable from the other. [Pg.279]
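
One common Model II method for estimating the functional relationship is reduced major axis (geometric mean) regression, sketched below on synthetic data; the source passage does not name a specific method, so this choice is illustrative.

```python
# Reduced major axis (geometric mean) regression, a common Model II method:
# slope = sign(r) * s_y / s_x, line passes through the two means.
import numpy as np

rng = np.random.default_rng(7)
x_true = np.linspace(0, 10, 50)
X = x_true + rng.normal(scale=0.3, size=50)     # X measured with error
Y = 2.0 + 0.7 * x_true + rng.normal(scale=0.3, size=50)

r = np.corrcoef(X, Y)[0, 1]
slope = np.sign(r) * Y.std(ddof=1) / X.std(ddof=1)
intercept = Y.mean() - slope * X.mean()
print(f"Y = {intercept:.2f} + {slope:.2f} X   (r = {r:.2f})")
```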

