Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Decision vector

Thus, a new object x is classified by calculating the discriminant score v (its projection on the direction defined by the decision vector)... [Pg.215]

The assumption we make is that similar samples will define points in 5-dimensional space that lie close to each other. The problem is then reduced to one of discriminating between geometric regions or "categories" in this multidimensional coordinate system. This is achieved by the introduction of a suitable decision vector ... [Pg.386]

The decision vector chosen was empirically derived and the optimized value was found to be the following ... [Pg.387]

The decision vector was formulated such that its dot product with a sample vector would yield one of two results. Either,... [Pg.387]

In general, as noted in the previous discussion on probabilities, trace elemental concentrations in juices of Florida origin were found to be lower than those of non-domestic origin. Therefore, three cases were utilized to test the efficiency of the pattern recognition decision vector. [Pg.387]

The conclusion of these calculations is that even in the worst possible case, in which all elements of interest in a Florida sample take their highest possible values and those of a Brazil sample take their lowest, the proposed technique offers a clear and substantial separation of the two groups. For the sample population observed to date, the decision vector is 100% successful. These preliminary studies have shown this approach to be a valid one, and it will undoubtedly be of increasing interest and value in the future. [Pg.388]

For multi-objective optimization, the theoretical background has been laid, e.g., in Edgeworth (1881), Koopmans (1951), Kuhn and Tucker (1951), and Pareto (1896, 1906). Typically, there is no unique optimal solution; instead, a set of mathematically incomparable solutions can be identified. An objective vector can be regarded as optimal if none of its components (i.e., objective values) can be improved without deterioration to at least one of the other objectives. To be more specific, a decision vector x* ∈ S and the corresponding objective vector f(x*) are called Pareto optimal if there does not exist another x ∈ S such that f_i(x) ≤ f_i(x*) for all i = 1, ..., k and f_j(x) < f_j(x*) for at least one index j. In the MCDM literature, widely used synonyms for Pareto optimal solutions are nondominated, efficient, noninferior, or Edgeworth-Pareto optimal solutions. [Pg.156]
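The Pareto-dominance test in this definition translates directly into code. The following is a minimal sketch (function and variable names are illustrative, not from the source) that filters the nondominated rows of an objective matrix under minimization:

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def pareto_front(F):
    """Indices of nondominated rows of objective matrix F (one row per solution)."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]

F = [[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]
print(pareto_front(F))  # [0, 1, 3] -- the point [3, 3] is dominated by [2, 2]
```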

As mentioned in the introduction, we here assume that a DM is able to participate in the solution process. (S)he is expected to know the problem domain and to be able to specify preference information related to the objectives and/or different solutions. We assume that for him/her less is preferred to more in each objective (in other words, all the objective functions are to be minimized). If the problem is correctly formulated, the final solution of a rational DM is always Pareto optimal, so we can restrict our consideration to Pareto optimal solutions. For this reason, it is important that the multi-objective optimization method used is able to find any Pareto optimal solution and produces only Pareto optimal solutions. However, weakly Pareto optimal solutions are sometimes used because they may be easier to generate than Pareto optimal ones. A decision vector x* ∈ S (and the corresponding objective vector) is weakly Pareto optimal if there does not exist another x ∈ S such that f_i(x) < f_i(x*) for all i = 1, ..., k. Note that Pareto optimality implies weak Pareto optimality, but not vice versa. [Pg.156]
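The distinction between the two notions can be illustrated with a hypothetical sketch: a point is weakly Pareto optimal when no other point is strictly better in every objective, yet it may still be dominated (names and data below are illustrative, not from the source):

```python
import numpy as np

def weakly_pareto_optimal(i, F):
    """True if no other row of F is strictly better in *every* objective."""
    F = np.asarray(F)
    return not any(np.all(F[j] < F[i]) for j in range(len(F)) if j != i)

def dominated(i, F):
    """True if some other row is no worse everywhere and strictly better somewhere."""
    F = np.asarray(F)
    return any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
               for j in range(len(F)) if j != i)

F = [[1.0, 1.0], [1.0, 2.0]]
# [1, 2] is dominated by [1, 1], yet it is weakly Pareto optimal,
# because [1, 1] does not strictly improve the first objective:
print(dominated(1, F), weakly_pareto_optimal(1, F))  # True True
```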

An individual encodes the corresponding decision vector into a chromosome based on an appropriate structure (chromosome design and its parameters are specific to the problem to be solved). [Pg.344]

W    old decision vector
W'   new decision vector
X    incorrectly classified spectrum vector
c    correction factor [Pg.93]

A fundamental difficulty with this method of finding decision vectors is that the learning machine orients itself on the extreme points: the final position of the decision boundary is largely determined by the spectra that are most atypical of their class. Therefore, useful results can only be expected from a carefully selected, self-consistent, and error-free training set. [Pg.94]
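The error-correction rule implied by the notation above (new decision vector = old decision vector plus a correction term for each misclassified spectrum) can be written as a short perceptron-style training loop. This is a minimal sketch under the assumption of augmented pattern vectors and class labels z = ±1; the names and toy data are illustrative:

```python
import numpy as np

def train_decision_vector(X, z, c=1.0, max_epochs=100):
    """Perceptron-style error-correction training.
    X: pattern matrix, one augmented spectrum vector per row
    z: class labels (+1 / -1)
    On each misclassification the decision vector is corrected:
    w_new = w_old + c * z_i * x_i."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, zi in zip(X, z):
            if zi * (w @ xi) <= 0:       # point on wrong side of decision plane
                w = w + c * zi * xi      # correction step
                errors += 1
        if errors == 0:                  # training set separated: done
            break
    return w

X = np.array([[1., 2., 1.], [2., 3., 1.], [-1., -2., 1.], [-2., -1., 1.]])
z = np.array([1., 1., -1., -1.])
w = train_decision_vector(X, z)
print(np.sign(X @ w))  # recovers the class labels on this separable toy set
```

Note that the loop only terminates with zero errors when the training set is linearly separable, which is exactly why the text demands a carefully selected, error-free training set.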

Decision vectors have been published for determining the presence of heteroatoms and the number of carbon and hydrogen atoms from low-resolution mass spectra. A sequence of appropriate vector-based decisions can determine the molecular formula of an unknown compound from its low-resolution mass spectrum alone (20). However, the probability of obtaining the correct answer diminishes exponentially with the number of elementary decisions made. The present state of theory and practice in the application of learning machines and decision vectors to the classification of spectral data is such that the necessary reliability for longer decision chains exists only for very simple, exactly delimited classes of compounds. [Pg.94]
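The exponential decay of reliability with chain length is easy to quantify: if each elementary decision is correct with probability p, a chain of n independent decisions yields the correct overall answer with probability p^n. The figures below are illustrative, not from the source:

```python
# Reliability of a chain of n independent elementary decisions,
# each correct with probability p, falls off as p**n.
def chain_reliability(p, n):
    return p ** n

for n in (1, 5, 10, 20):
    print(n, round(chain_reliability(0.95, n), 3))
# Even at 95% per decision, a 20-step chain is right only ~36% of the time.
```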

If the two classes are not linearly separable yet do not overlap, correct classification may be achieved by what is called a "committee machine". Here a group of mutually independent decision vectors decides on the classification, the final result being given by a majority vote (Fig. 1b). If the two classes overlap, no reliable decision is possible (Fig. 1c). [Pg.94]
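The majority vote can be sketched in a few lines, assuming an odd number of independently trained decision vectors so that ties cannot occur (names and data are illustrative):

```python
import numpy as np

def committee_classify(W, x):
    """Committee machine: each decision vector (row of W) votes via the
    sign of its scalar product with pattern vector x; the majority wins."""
    votes = np.sign(np.asarray(W) @ np.asarray(x))
    return 1 if votes.sum() > 0 else -1

W = [[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0]]   # a committee of three
print(committee_classify(W, [2.0, 1.0]))    # votes +1, +1, -1 -> +1
```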

Figure 6 represents the evolution of the time-dependent unavailability of the HPIS for the optimal case when cost Cs is minimized. A larger number of peaks is visible, while the unavailability value is kept below 0.01. The optimal case is characterized by the following decision vector ... [Pg.636]

FIGURE 3. A decision plane (a straight line in this 2-dimensional example) separates the two classes of objects and is defined by a decision vector w. [Pg.5]

The decision plane is usually defined by a decision vector (weight vector) w orthogonal to the plane and through the origin. The weight vector is very suitable to decide whether a point lies on the left or... [Pg.5]

The development of a decision vector is usually computer-time consuming. But the application of a given decision vector to a concrete classification requires only some multiplications and summations, which can easily be done even on a pocket calculator. [Pg.6]

Many considerations in the d-dimensional hyperspace are simplified if the decision plane and the decision vector pass through the origin. Such a decision plane was possible in the special case shown in Figure 1, but an extension of the data is necessary for general cases (Figure 4). [Pg.6]

The parameters w of the decision function form the decision vector w which is perpendicular to the decision plane required. The scalar product of decision vector w and pattern vector x gives the classification result. By definition, positive values refer to class 1 (z = +1) and negative values to class 2 (z = -1). [Pg.44]
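The sign rule stated here is trivial to implement; a minimal sketch with illustrative names and values:

```python
import numpy as np

def classify(w, x):
    """Scalar product of decision vector w and pattern vector x decides
    the class: positive -> class 1 (z = +1), negative -> class 2 (z = -1)."""
    s = float(np.dot(w, x))
    return 1 if s > 0 else -1

print(classify([1.0, -1.0], [2.0, 1.0]))   # 2 - 1 =  1 > 0 -> class 1
print(classify([1.0, -1.0], [1.0, 2.0]))   # 1 - 2 = -1 < 0 -> class 2
```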

In order to establish the decision vector which minimizes the sum of the squared errors as given in equation (49), E[(z - s)^2] is differentiated with respect to w and the derivatives are set to zero. [Pg.44]

Solution of this set of equations gives the desired components of the decision vector w. [Pg.45]
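Setting the derivatives of the squared-error criterion to zero yields the normal equations (X^T X) w = X^T z, which a linear least-squares solver handles directly. A minimal sketch with an illustrative toy data set (augmented pattern vectors as rows of X, labels z = ±1); this is an assumption-laden illustration, not the source's worked example:

```python
import numpy as np

def lsq_decision_vector(X, z):
    """Least-squares decision vector: minimizing sum_i (z_i - w.x_i)^2
    is equivalent to solving the normal equations (X^T X) w = X^T z."""
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    return w

X = np.array([[1., 2., 1.], [2., 3., 1.], [-1., -2., 1.], [-2., -1., 1.]])
z = np.array([1., 1., -1., -1.])
w = lsq_decision_vector(X, z)
print(np.sign(X @ w))  # [ 1.  1. -1. -1.] -- labels recovered on this toy set
```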

The search for a decision plane may be considered as an optimization problem. The decision vector w = (w_1, ..., w_d), defined by d components... [Pg.48]

Wilkins et al. have reported the application of simplex optimization to the training of classifiers for the recognition of 11 molecular structures [24, 165, 242, 243]. The best decision vectors found by the authors are completely tabulated [165]. [Pg.153]

Comerford et al. [53] combined infrared spectra and Raman spectra. Best results were achieved by concatenating the data from both sources and treating them as a single vector. Predictive abilities for esters, alcohols, ethers, compounds containing C=C double bonds, and ketones ranged from 89 to 100% (mean 95%). The decision vectors have been used for the assignment of vibrational frequencies. [Pg.232]

The notion of dominance may be used to make Pareto optimality clearer. A decision vector θ is said to strictly dominate another θ' (denoted θ ≺ θ') iff... [Pg.220]

Within the model, x is an n-dimensional decision vector, ξ is a t-dimensional random vector with probability density function φ(ξ), f(x, ξ) is the target function, g_j(x, ξ) and h_k(x, ξ) are random constraint functions, and E is the expected... [Pg.59]

Then f(x, ξ) is a strictly convex function defined on the convex set in R^n, where x is an n-dimensional decision vector. [Pg.60]





