
Region hyperplane

We have seen that the output neuron in a binary-threshold perceptron without hidden layers can only specify on which side of a particular hyperplane the input lies. Its decision region consists simply of a half-plane bounded by a hyperplane. If one hidden layer is added, however, the neurons in the hidden layer effectively take an intersection (i.e. a Boolean AND operation) of the half-planes formed by the input neurons and can thus form arbitrary (possibly unbounded) convex regions. ... [Pg.547]
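As a minimal, illustrative sketch of this idea (not taken from the cited source), the NumPy snippet below builds a binary-threshold network whose four hidden units each test one half-plane and whose output unit takes their Boolean AND, so that it fires +1 only inside the resulting convex region (here, a unit square); all weights and thresholds are hypothetical.

```python
import numpy as np

def threshold(z):
    """Binary threshold: +1 if z >= 0, else -1."""
    return np.where(z >= 0, 1, -1)

# Illustrative hidden layer: each row of W and entry of b defines one
# hyperplane w.x + b = 0; the unit fires +1 on one side, -1 on the other.
W = np.array([[ 1.0,  0.0],   # x >= 0
              [-1.0,  0.0],   # x <= 1
              [ 0.0,  1.0],   # y >= 0
              [ 0.0, -1.0]])  # y <= 1
b = np.array([0.0, 1.0, 0.0, 1.0])

def convex_region_indicator(x):
    """Output unit ANDs the half-plane tests: +1 only if the point lies
    inside the convex intersection of all four half-planes."""
    h = threshold(W @ x + b)                  # hidden-layer outputs in {-1, +1}
    return threshold(np.sum(h) - (len(h) - 0.5))  # +1 only when every hidden unit is +1

print(convex_region_indicator(np.array([0.5, 0.5])))   # +1 (inside the unit square)
print(convex_region_indicator(np.array([1.5, 0.5])))   # -1 (outside)
```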

Consider a simple perceptron with N continuous-valued inputs and one binary (±1) output value. In section 10.5.2 we saw how, in general, an N-dimensional input space is separated by an (N − 1)-dimensional hyperplane into two distinct regions. All of the points lying on one side of the hyperplane yield the output +1; all the points on the other side of the hyperplane yield −1. [Pg.550]
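A hedged one-hyperplane sketch of the perceptron described above, with hypothetical weights w and bias b: the output is +1 on one side of the hyperplane w·x + b = 0 and −1 on the other.

```python
import numpy as np

def perceptron_output(w, b, x):
    """Sign of w.x + b: +1 on one side of the hyperplane w.x + b = 0, -1 on the other."""
    return 1 if np.dot(w, x) + b >= 0 else -1

w = np.array([2.0, -1.0, 0.5])   # illustrative weights, N = 3
b = -0.25
print(perceptron_output(w, b, np.array([1.0, 0.0, 0.0])))   # +1
print(perceptron_output(w, b, np.array([0.0, 2.0, 0.0])))   # -1
```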

In the original problem one usually has m < n. Thus, the vertices of the region of solution lie on the coordinate planes. This follows from the fact that, generally, in n dimensions, n hyperplanes each of dimension (n — 1) intersect at a point. The dual problem defines a polytope in m-dimensional space. In this case not all vertices need lie on the coordinate planes. [Pg.292]
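The following illustrative NumPy fragment (with made-up coefficients) shows the statement that, in general position, n hyperplanes of dimension (n − 1) in n-dimensional space intersect at a single point, obtained by solving the corresponding n × n linear system.

```python
import numpy as np

# Each row a_i of A and entry b_i define a hyperplane a_i . x = b_i in R^3.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0, 4.0])

# Three 2-dimensional hyperplanes in general position in R^3 intersect
# at the single point solving A x = b.
vertex = np.linalg.solve(A, b)
print(vertex)   # one vertex of the polytope defined by these constraints: [1.5 0.5 2.5]
```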

Radial basis function networks (RBF) are a variant of three-layer feed-forward networks (see Fig. 44.18). They contain a pass-through input layer, a hidden layer and an output layer. A different approach for modelling the data is used. The transfer function in the hidden layer of RBF networks is called the kernel or basis function. For a detailed description the reader is referred to references [62,63]. Each node in the hidden layer thus contains such a kernel function. The main difference between the transfer function in MLF and the kernel function in RBF is that the latter (usually a Gaussian function) defines an ellipsoid in the input space. Whereas the MLF network basically divides the input space into regions via hyperplanes (see e.g. Figs. 44.12c and d), RBF networks divide the input space into hyperspheres by means of the kernel function with specified widths and centres. This can be compared with the density or potential methods in pattern recognition (see Section 33.2.5). [Pg.681]
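As an illustrative sketch only (the centre and width below are hypothetical, not taken from references [62,63]), a single Gaussian kernel node can be written as:

```python
import numpy as np

def gaussian_rbf(x, centre, width):
    """Gaussian basis function: the response decays with distance from the centre,
    so each hidden node covers a localized spherical/ellipsoidal region of input space."""
    return np.exp(-np.sum((x - centre) ** 2) / (2.0 * width ** 2))

centre = np.array([0.0, 0.0])
width = 1.0
print(gaussian_rbf(np.array([0.1, -0.2]), centre, width))  # close to 1 near the centre
print(gaussian_rbf(np.array([3.0,  3.0]), centre, width))  # close to 0 far away
```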

There are very few examples of scalar-mixing cases for which an explicit form for (e, 0) can be found using the known constraints. One of these is multi-stream mixing of inert scalars with equal molecular diffusivity. Indeed, for bounded scalars that can be transformed to a mixture-fraction vector, a shape matrix can be generated by using the surface normal vector n( ) mentioned above for property (ii). For the mixture-fraction vector, the faces of the allowable region are hyperplanes, and the surface normal vectors are particularly simple. For example, a two-dimensional mixture-fraction vector has three surface normal vectors ... [Pg.301]
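Assuming the allowable region of a two-dimensional mixture-fraction vector is the usual triangular simplex ξ1 ≥ 0, ξ2 ≥ 0, ξ1 + ξ2 ≤ 1 (the excerpt is truncated, so the normals below are stated as an assumption rather than quoted from the source), a sketch of the three face normals and a membership test is:

```python
import numpy as np

# Allowable region for a two-dimensional mixture-fraction vector (xi1, xi2):
# xi1 >= 0, xi2 >= 0, xi1 + xi2 <= 1 (a triangular simplex).
# Assumed outward unit normals of its three hyperplane faces:
normals = [np.array([-1.0, 0.0]),                # face xi1 = 0
           np.array([0.0, -1.0]),                # face xi2 = 0
           np.array([1.0, 1.0]) / np.sqrt(2.0)]  # face xi1 + xi2 = 1
offsets = [0.0, 0.0, 1.0 / np.sqrt(2.0)]

def in_allowable_region(xi):
    """True if n.xi <= c for every face, i.e. xi lies inside the simplex."""
    return all(np.dot(n, xi) <= c + 1e-12 for n, c in zip(normals, offsets))

print(in_allowable_region(np.array([0.3, 0.4])))  # True
print(in_allowable_region(np.array([0.8, 0.5])))  # False (xi1 + xi2 > 1)
```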

For the n-dimensional case, the region that is defined by the set of hyperplanes resulting from the linear constraints represents a convex set of all points which satisfy the constraints of the problem. If this is a bounded set, the enclosed space is a convex polyhedron, and, for the case of monotonically increasing or decreasing values of the objective function, the maximum or minimum value of the objective function will always be associated with a vertex... [Pg.382]
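A small worked example (hypothetical objective and constraints, using SciPy's linprog) illustrating that the optimum of a linear objective over a bounded polyhedral region is attained at a vertex:

```python
from scipy.optimize import linprog

# Maximise 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0.
# linprog minimises, so the objective is negated.
res = linprog(c=[-3.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
print(res.x)       # optimum lies at the vertex (3, 1) of the feasible polytope
print(-res.fun)    # maximum value 11
```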

The essence of the differences between the operation of radial basis function networks and multilayer perceptrons can be seen in Figure 4.1, which shows data from the hypothetical classification example discussed in Chapter 3. Multilayer perceptrons classify data by the use of hyperplanes that divide the data space into discrete areas; radial basis functions, on the other hand, cluster the data into a finite number of ellipsoid regions. Classification is then a matter of finding which ellipsoid is closest for a given test data point. [Pg.41]
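A rough sketch of the nearest-ellipsoid classification idea, with hypothetical class centres and widths (this illustrates the principle only, not the example in Figure 4.1):

```python
import numpy as np

# Hypothetical class centres and per-dimension widths defining two ellipsoids.
centres = {"class_A": np.array([0.0, 0.0]), "class_B": np.array([4.0, 4.0])}
widths  = {"class_A": np.array([1.0, 2.0]), "class_B": np.array([0.5, 0.5])}

def classify(x):
    """Assign x to the class whose ellipsoid (width-scaled distance) is closest."""
    def scaled_dist(label):
        return np.sum(((x - centres[label]) / widths[label]) ** 2)
    return min(centres, key=scaled_dist)

print(classify(np.array([0.5, 1.0])))  # class_A
print(classify(np.array([3.8, 4.2])))  # class_B
```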

After developing novel approaches to exhaustive conformational analysis, Crippen [21] at the University of Michigan took a novel approach to pharmacophore discovery, based on Voronoi polyhedra (using hyperplanes to partition space into regions encompassing active molecules). This line of investigation was ultimately abandoned, as Crippen was unable to find a satisfactory resolution to the problem of multiple solutions consistent with the SAR. [Pg.441]

Rem. 16: the Second law (1.18) forbids, for the (infinite-dimensional vector of) heat distributions of cyclic processes (or their densities), the region with absorbed heat only (q ≥ 0, q ≠ 0). Moreover, using the closedness and completeness of the universes U1, U2 with Carnot cycles, the heat distributions (or their densities) must fall into the half-space which does not meet the forbidden region (with the corresponding boundary hyperplane of reversible cyclic processes). This may be similarly expressed through a positive function f(ϑ) > 0 of the empirical temperature ϑ by... [Pg.28]

Notice that Equation 6.6 describes n linear inequality relations, which are different from standard equations (equality relations): each row in Equation 6.6 describes a linear inequality (a hyperplane) that separates n-dimensional space into two half-spaces. The collection of all n inequalities describes a convex region in R^n. Concentrations residing in this region thus satisfy both mass balance and nonnegativity constraints, and hence the region defined by Equation 6.6 describes, mathematically, the stoichiometric subspace S. [Pg.151]
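A hedged sketch of this membership test, with a hypothetical reaction system A → B → C (the stoichiometric matrix N and feed C0 below are illustrative, not Equation 6.6 itself): each nonnegativity condition is one half-space, and a concentration lies in the stoichiometric subspace only if all of them hold.

```python
import numpy as np

# Hypothetical system A -> B -> C with stoichiometric matrix N (species x reactions)
# and feed C0. Mass balance gives C(eps) = C0 + N @ eps for reaction extents eps.
N  = np.array([[-1.0,  0.0],
               [ 1.0, -1.0],
               [ 0.0,  1.0]])
C0 = np.array([1.0, 0.0, 0.0])

def in_stoichiometric_subspace(eps):
    """Each row of C0 + N @ eps >= 0 is one linear inequality (a half-space);
    their intersection is the convex stoichiometric subspace."""
    return np.all(C0 + N @ eps >= -1e-12)

print(in_stoichiometric_subspace(np.array([0.6, 0.3])))  # True: all concentrations >= 0
print(in_stoichiometric_subspace(np.array([0.2, 0.5])))  # False: c_B would be negative
```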

In Figure 6.5(a), a convex region is shown. A plane is introduced so that it meets the surface joined by extreme points ABCD. In this instance, points contained in the rectangular region bounded by ABCD are not considered as exposed points, for all points in this section are contained within the same hyperplane. (Points A, B, C, and D are all extreme points, but they are not exposed points because all four points lie in the same hyperplane, and thus the supporting hyperplane does not touch the region at a single point.)... [Pg.160]
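For reference, the standard convex-analysis definitions that underlie this distinction (stated here for clarity; they are not quoted from the text) are, in LaTeX form:

```latex
% x is an extreme point of a convex set K if it is not a proper convex
% combination of two other points of K; x is an exposed point if some
% supporting hyperplane meets K in the single point x.
\begin{align*}
x \in K \text{ is extreme} &\iff
  \big( x = \lambda u + (1-\lambda)v,\ u,v \in K,\ 0<\lambda<1 \implies u = v = x \big), \\
x \in K \text{ is exposed} &\iff
  \exists\, n \neq 0 \ \text{with}\ n^{\mathsf{T}} z \le n^{\mathsf{T}} x \ \forall z \in K
  \ \text{and}\ \{ z \in K : n^{\mathsf{T}} z = n^{\mathsf{T}} x \} = \{ x \}.
\end{align*}
```

Every exposed point is extreme, but, as the ABCD example shows, an extreme point need not be exposed when the supporting hyperplane contains an entire face rather than a single point.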

For example, the equation x + y = 0 describes a line in R^2, whereas the equivalent inequality x + y < 0 describes a region in which any combination of x + y less than zero is satisfied. Similarly, the hyperplane equation H(n, b) = γ separates space into two half-spaces given by H(n, b) < γ and H(n, b) > γ. [Pg.239]
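To make the half-space split concrete, here is a tiny NumPy check for the line x + y = 0 (assuming, for illustration, that H(n, b) evaluated at a point x means n·x − b; the exact form of H is not spelled out in the excerpt):

```python
import numpy as np

def half_space_side(n, b, x):
    """Sign of n.x - b: 0 on the hyperplane, negative in one half-space,
    positive in the other."""
    return np.sign(np.dot(n, x) - b)

n, b = np.array([1.0, 1.0]), 0.0                       # the line x + y = 0
print(half_space_side(n, b, np.array([-1.0, 0.5])))    # -1.0: in the half-space x + y < 0
print(half_space_side(n, b, np.array([ 2.0, 1.0])))    #  1.0: in the half-space x + y > 0
```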

Here the two matrices in Equation 8.9 represent region P_k as a list of hyperplane constraints in state space. C_i is hence a feasible point if it satisfies all hyperplane constraints given by Equation 8.9. Intersections with the current region are performed with the same system of inequality constraints. For each potential concentration C_i generated in the complement region S\X, it is possible to express the intersection point C as a linear combination of the rate vector at C_i and a scalar variable t. [Pg.259]
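A rough illustration of the intersection step (not the published algorithm): given a polytope written as the inequality system A x ≤ b, the point where the ray C(t) = C_i + t·r(C_i) leaves the polytope can be found from the tightest constraint along the ray. The matrices, the starting point, and the stand-in rate vector below are all hypothetical.

```python
import numpy as np

def ray_exit_point(A, b, C_i, r):
    """Largest t >= 0 with A (C_i + t r) <= b: the point where the ray from the
    feasible point C_i along the rate vector r meets the boundary of {x : A x <= b}."""
    t_max = np.inf
    for a_k, b_k in zip(A, b):
        slope = np.dot(a_k, r)
        if slope > 1e-12:                       # constraint tightens along the ray
            t_max = min(t_max, (b_k - np.dot(a_k, C_i)) / slope)
    return C_i + t_max * r, t_max

# Hypothetical unit-box polytope 0 <= x, y <= 1 written as A x <= b.
A = np.array([[ 1.0,  0.0], [-1.0,  0.0], [ 0.0,  1.0], [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
C_i = np.array([0.2, 0.3])
r   = np.array([1.0, 0.5])                      # stand-in for the rate vector r(C_i)
print(ray_exit_point(A, b, C_i, r))             # boundary point (1.0, 0.7) and t = 0.8
```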

Hyperplanes are moved into the current polytope P_k with the purpose of cutting away unattainable space from the region. The resulting polytope P_k+1 is defined by the collection of all hyperplanes and is smaller than the original polytope P_k, and thus it provides a closer approximation to the true AR. In the limit of infinitely many elimination steps, the remaining set of points is an approximation to the AR. Figure 8.21 shows a schematic of the construction sequence for the method. [Pg.262]

The central approach to construction is to iterate over each corner of the current polytope P_k and introduce new hyperplanes that eliminate unattainable space. Sharp corners of the polytope are slowly smoothed out by the introduction of additional bounding planes, and the level of accuracy obtained is hence a strong function of the number of unique hyperplanes that are introduced. Curvature of a region, such as that generated by a PFR manifold on the AR boundary, may be approximated by the use of many bounding hyperplanes. [Pg.262]
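A minimal sketch of a single elimination step under these assumptions (the polytope and the cutting plane below are hypothetical): appending one hyperplane constraint to the inequality description of P_k yields the smaller polytope P_k+1 and removes a sharp corner.

```python
import numpy as np

def cut_polytope(A, b, n_new, b_new):
    """Append one bounding hyperplane n_new . x <= b_new to the inequality
    description of the current polytope P_k, giving a smaller polytope P_k+1."""
    A_next = np.vstack([A, n_new])
    b_next = np.append(b, b_new)
    return A_next, b_next

# Current polytope: the unit square 0 <= x, y <= 1.
A = np.array([[ 1.0,  0.0], [-1.0,  0.0], [ 0.0,  1.0], [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

# Cut off the sharp corner near (1, 1) with the plane x + y <= 1.8.
A, b = cut_polytope(A, b, np.array([1.0, 1.0]), 1.8)
corner = np.array([1.0, 1.0])
print(np.all(A @ corner <= b))   # False: the old corner is no longer in P_k+1
```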

Note that no assumption is made regarding the attainability of points where rate vectors point out of or are tangent to the hyperplane. This cannot be discerned from the achievability condition, and so points satisfying n·r(C) > 0 cannot be excluded from the region on this basis alone. [Pg.263]

Hyperplanes can be used to divide a region into two half-spaces. If the hyperplanes can be orientated in such a way as to exclude unachievable states from a larger region (containing both achievable and unachievable states), then this can be used as a method of AR construction. [Pg.263]

