
Hyperplane model

Radial basis function (RBF) networks are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer, but they model the data in a different way. The transfer function in the hidden layer of RBF networks is called the kernel or basis function; for a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF networks and the kernel function in RBF networks is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network essentially divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of kernel functions with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
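A minimal sketch of the forward pass through such a network, assuming Gaussian kernels with fixed centres and widths (the function name rbf_network and all numerical values are illustrative; in practice the centres, widths and output weights would be learned from data):

```python
import numpy as np

def rbf_network(x, centres, widths, weights, bias=0.0):
    # Hidden layer: one Gaussian kernel per node, centred in the input space
    d2 = np.sum((centres - x) ** 2, axis=1)        # squared distances to the centres
    hidden = np.exp(-d2 / (2.0 * widths ** 2))     # Gaussian activations (local, hypersphere-like)
    # Output layer: linear combination of the kernel activations
    return hidden @ weights + bias

# Two kernels dividing a 2-D input space into local regions with given widths and centres
centres = np.array([[0.0, 0.0], [2.0, 2.0]])
widths = np.array([1.0, 0.5])
weights = np.array([1.0, -1.0])
print(rbf_network(np.array([0.5, 0.5]), centres, widths, weights))
```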

Partition-based methods address dimensionality by selecting input variables that are most relevant to efficient empirical modeling. The input space is partitioned by hyperplanes that are perpendicular to at least one of the input axes, as depicted in Fig. 6d. [Pg.11]
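A toy illustration of such a partition, using axis-aligned threshold splits in which each threshold defines a hyperplane perpendicular to one input axis (the variable names and threshold values are invented for illustration):

```python
# Each "if" below corresponds to a hyperplane perpendicular to one input axis,
# so the input space is cut into rectangular regions, as in Fig. 6d.
def region(x1, x2):
    if x1 < 0.5:                                  # hyperplane x1 = 0.5
        return "R1" if x2 < 1.0 else "R2"         # hyperplane x2 = 1.0
    return "R3"

print(region(0.2, 1.5))   # -> R2
```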

Methods based on linear projection transform the input data by projection onto a linear hyperplane. Even though the projection is linear, these methods may result in either a linear or a nonlinear model depending on the nature of the basis functions. With reference to Eq. (6), the input-output model for this class of methods is represented as... [Pg.33]
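A small sketch of this class of models, under the assumption that the linear projection is followed by fixed basis functions: with identity basis functions the overall model stays linear, with nonlinear basis functions (tanh here) it becomes nonlinear. All names and values are illustrative, not Eq. (6) itself:

```python
import numpy as np

def projection_model(x, directions, basis, weights):
    z = directions @ x                 # linear projection of the input onto a few directions
    return weights @ basis(z)          # combination of basis-function outputs

x = np.array([1.0, 2.0, 3.0])
directions = np.array([[0.5, 0.5, 0.0],
                       [0.0, 0.3, 0.7]])
weights = np.array([1.0, -0.5])
print(projection_model(x, directions, np.tanh, weights))       # nonlinear model
print(projection_model(x, directions, lambda z: z, weights))   # linear model
```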

The well-known Box-Wilson optimization method (Box and Wilson [1951]; Box [1954, 1957]; Box and Draper [1969]) is based on a linear model (Fig. 5.6). For a selected starting hyperplane, in the given case an area A₀(x₁, x₂) described by a first-order polynomial with the starting point y₀, the gradient grad[y₀] is estimated. One then moves to the next area in the direction of steepest ascent (the gradient) by a step width of h, in general... [Pg.141]
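A minimal numerical sketch of one Box-Wilson step, assuming a 2² design in coded units around the current point; the responses, the step width h and the least-squares fit are illustrative, not taken from the reference:

```python
import numpy as np

# 2^2 factorial design (coded units) around the current point, with invented responses
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
y = np.array([5.0, 7.0, 6.0, 9.0])

# First-order model y ~ b0 + b1*x1 + b2*x2 fitted by least squares
A = np.column_stack([np.ones(len(X)), X])
b0, b1, b2 = np.linalg.lstsq(A, y, rcond=None)[0]

grad = np.array([b1, b2])                            # estimated gradient of the hyperplane
h = 0.5                                              # chosen step width
x_current = np.zeros(2)
x_next = x_current + h * grad / np.linalg.norm(grad) # step of length h along steepest ascent
print(x_next)
```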

Near the optimum both the step width and the model of the hyperplane are changed, the latter mostly from a first-order model to a second-order model. The vicinity of the optimum can be recognized by the coefficients a₁, a₂, ... of Eq. (5.14), which approach zero or change their sign, respectively. For the second-order model a Box-Behnken design is mostly used. [Pg.141]

Up to this point the methods of classification operate in the same way. They differ considerably, however, in the way that rules for classification are derived. In this regard the various methods are of three types: 1) class discrimination or hyperplane methods, 2) distance methods, and 3) class modeling methods. [Pg.244]

Only one class modeling method is commonly applied to analytical data and this is the SIMCA method ( ) of pattern recognition. In this method the class structure (cluster) is approximated by a point, line, plane, or hyperplane. Distances around these geometric functions can be used to define volumes where the classes are located in variable space, and these volumes are the basis for the classification of unknowns. This method allows the development of information beyond class assignment ( ). [Pg.246]

The dimensionality of the model, a, is estimated so as to give the model the best possible predictive properties. Geometrically, this corresponds to the fitting of an a-dimensional hyperplane to the object points in the measurement space. The fitting is made using the least squares criterion, i.e. the sum of squared residuals is minimized for the class data set. [Pg.85]
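A sketch of this geometric picture, assuming the hyperplane is obtained from a principal components decomposition of one class and that unknowns are judged by their residual distance to it (the SIMCA idea); the data, the cut-off rule and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X_class = rng.normal(size=(20, 5))                  # training objects of one class
a = 2                                               # model dimensionality

mean = X_class.mean(axis=0)
Xc = X_class - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:a].T                                        # loadings: basis of the a-dimensional hyperplane

def residual_distance(x):
    t = (x - mean) @ P                              # scores: projection onto the hyperplane
    e = (x - mean) - t @ P.T                        # residual vector (distance to the hyperplane)
    return np.linalg.norm(e)

# Distances of the training objects define a volume around the class model
cutoff = np.quantile([residual_distance(x) for x in X_class], 0.95)
x_new = rng.normal(size=5)
print("inside class volume:", residual_distance(x_new) <= cutoff)
```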

The method of PLS bears some relation to principal component analysis: instead of finding the hyperplanes of maximum variance, it finds a linear model describing some predicted variables in terms of other observable variables. It is used to find the fundamental relations between two matrices (X and Y), that is, a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. [Pg.54]
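A brief sketch of this latent-variable view, here using scikit-learn's PLSRegression as one possible implementation (an assumption; the text does not prescribe a library). The weight vectors give directions in X-space and the model predicts Y from the projections onto them; the data are random and purely illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))
Y = X[:, :2] @ np.array([[1.0], [0.5]]) + 0.1 * rng.normal(size=(30, 1))

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print(pls.x_weights_.shape)     # directions in X-space, one column per latent variable
print(pls.predict(X[:3]))       # predictions of Y from the latent-variable model
```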

The application of this polynomial model to the responses of factorial designs enables the computation of the calibration hyperplane for the investigated system. The modeling of interference effects on the atomic absorption spectrometric determination of calcium is described by KOSCIELNIAK and PARCZEWSKI [1983; 1985]... [Pg.365]

For a given value of a, the brute-force bifurcation diagram displays all the values of the relative arteriolar radius r that the model attains when the steady-state trajectory intersects a specified hyperplane (the Poincaré section) in phase space. Due to the coexistence of several stable solutions, the brute-force diagram must be obtained by scanning a in both directions. [Pg.327]
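An illustrative sketch of the brute-force procedure, with a simple driven, damped oscillator standing in for the arteriolar model and the hyperplane x = 0 as the Poincaré section (both are assumptions made here for illustration, not the model from the text):

```python
import numpy as np

def poincare_values(a, x0=(1.0, 0.0), n=20000, dt=0.01):
    # Integrate a toy system and record one coordinate each time the trajectory
    # crosses the section x = 0 in the upward direction.
    x, v = x0
    crossings = []
    for k in range(n):
        x_new = x + dt * v
        v_new = v + dt * (-a * v - x + np.cos(0.5 * k * dt))
        if x < 0.0 <= x_new:            # upward crossing of the Poincaré section x = 0
            crossings.append(v_new)
        x, v = x_new, v_new
    return crossings[-5:]               # keep only late (steady-state) intersections

# A full brute-force scan would also run through the parameter values in the
# opposite direction, reusing the last state, to pick up coexisting stable solutions.
for a in np.linspace(0.1, 0.5, 5):
    print(round(a, 2), poincare_values(a))
```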

Preference mapping can be accomplished with projection techniques such as multidimensional scaling and cluster analysis, but the following discussion focuses on principal components analysis (PCA) [69] because of the interpretability of the results. A PCA represents a multivariate data table, e.g., N rows ("molecules") and K columns ("properties"), as a projection onto a low-dimensional table so that the original information is condensed into usually 2-5 dimensions. The principal component scores are calculated by forming linear combinations of the original variables (i.e., "properties"). These are the coordinates of the objects ("molecules") in the new low-dimensional model plane (or hyperplane) and reveal groups of similar... [Pg.332]

SVMs are an outgrowth of kernel methods. In such methods, the data are transformed with a kernel equation (such as a radial basis function), and it is in this mathematical space that the model is built. Care is taken in the construction of the kernel that it has a sufficiently high dimensionality that the data become linearly separable within it. A critical subset of transformed data points, the "support vectors", is then used to specify a hyperplane called a large-margin discriminator that effectively serves as a linear model within this non-linear space. An introductory exploration of SVMs is provided by Cristianini and Shawe-Taylor, and a thorough examination of their mathematical basis is presented by Schölkopf and Smola. [Pg.368]
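A short sketch of this idea using scikit-learn's SVC with an RBF kernel (one possible implementation, assumed here for illustration); the data are random and not linearly separable in the original input space:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(40, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # circular class boundary: not linearly separable

# The RBF kernel maps the data into a space where a large-margin hyperplane can
# separate the classes; the support vectors are the critical points defining it.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("number of support vectors:", clf.support_vectors_.shape[0])
print("predictions:", clf.predict(X[:5]))
```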

Assume that a principal components model with A components (p₁, p₂, ..., p_A) has been determined. The model can be used in two ways: (1) For a new compound "r" the corresponding score values can be determined by projecting the descriptors of "r" down to the hyperplane spanned by the components. (2) It is then possible to predict the original descriptors of "r" from the scores and the loading vectors. If the model is good, the predicted value, x̂ij, of a descriptor should be close to the observed value, xij. The difference, xij − x̂ij, is the prediction error. (The letter f will be used to... [Pg.364]

When a swarm of data points in the K-dimensional descriptor space is projected down to the hyperplane defined by an A-dimensional principal components model, there will always be deviations, residuals, between the original data point and its projected point on the model, see Fig. 15.15. These residuals can provide additional information. [Pg.366]

For this model, the iso-response surfaces are parallel hyperplanes in the 4-dimensional factor space. In general, for k variables, the iso-response surfaces are (k − 1)-dimensional hyperplanes. The path of steepest ascent is a straight line orthogonal to these planes, usually starting from the centre of the domain. The step size for each variable is proportional to the estimate of the corresponding coefficient in the model. [Pg.290]

Kernel methods, which include support vector machines and Gaussian processes, transform the data into a higher dimensional space, where it is possible to construct one or more hyperplanes for separation of classes or regression. These methods are more mathematically rigorous than neural networks and have in recent years been widely used in QSAR modeling. ... [Pg.273]

Fig. 9.5. The McCulloch-Pitts artificial neuron is a bi-state model whose output depends on the sum of the weighted inputs. Networks built with this neuron can only solve problems whose solution is defined by a hyperplane.
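A minimal sketch of such a bi-state threshold unit, with illustrative weights chosen so that the single hyperplane realizes a logical AND (a problem like XOR, which no single hyperplane can separate, is beyond one such neuron):

```python
import numpy as np

def neuron(x, w, threshold):
    # Output is 1 when the weighted sum of inputs reaches the threshold, else 0;
    # the decision boundary w·x = threshold is a hyperplane in input space.
    return 1 if np.dot(w, x) >= threshold else 0

w, threshold = np.array([1.0, 1.0]), 1.5          # realizes logical AND
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, neuron(np.array(x), w, threshold))
```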
With this method, the direction of steepest descent is searched for on a plane or hyperplane of the objective function as a function of the parameters of the model. The basis is a design, for example a 2^m design in the m parameters, in which the objective function χ² (Eq. (6.110)) is approximated by means of a linear model in the parameters a... [Pg.259]

