
Structural risk minimization

The main advantage of SVM over other data analysis methods is its relatively low sensitivity to overfitting, even when a large number of redundant and overlapping molecular descriptors is used. This is due to its reliance on the structural risk minimization principle. Another advantage of SVM is the ability to calculate a reliability score, the R-value, which provides a measure of the probability of a correct classification of a compound [70]. The R-value is computed from the distance between the position of the compound and the hyperplane in the descriptor hyperspace. The expected classification accuracy for the compound can then be obtained from the R-value by using a chart that shows the statistical relationship between them. As with other methods, SVM requires a sufficient number of samples to develop a classification system, and irrelevant molecular descriptors may reduce the prediction accuracy of SVM classification systems. [Pg.226]
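The distance-based reliability score described above can be sketched as follows. This is a minimal illustration assuming scikit-learn and a synthetic dataset; the chart in [70] that maps distances to expected accuracy is not reproduced here.

```python
# Sketch: use the geometric distance of a sample from the SVM
# hyperplane as a reliability score, as described for the R-value.
# Dataset and library choice are assumptions, not from the source.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

# decision_function returns w.x + b; dividing by ||w|| gives the
# geometric distance of each compound from the hyperplane.
w_norm = np.linalg.norm(clf.coef_)
distance = np.abs(clf.decision_function(X)) / w_norm

# Larger distance -> higher confidence in the predicted class.
most_reliable = np.argsort(distance)[::-1][:5]
print(distance[most_reliable])
```

In practice the raw distance would then be converted to an expected accuracy via an empirically derived calibration, as the excerpt describes.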

A Support Vector Machine (SVM) is a class of supervised machine learning techniques. It is based on the principle of structural risk minimization. The idea of SVM is to search for an optimal hyperplane that separates the data with maximal margin. Let the d-dimensional input x belong to one of two classes, labeled... [Pg.172]

Support vector machine (SVM) is originally a binary supervised classification algorithm, introduced by Vapnik and his co-workers [13, 32] and based on statistical learning theory. Instead of the traditional empirical risk minimization (ERM) performed by artificial neural networks, the SVM algorithm is based on the structural risk minimization (SRM) principle. In its simplest form, a linear SVM for a two-class problem finds an optimal hyperplane that maximizes the separation between the two classes. The optimal separating hyperplane can be obtained by solving the following quadratic optimization problem ... [Pg.145]
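The quadratic optimization problem the excerpt truncates is, in its hard-margin form, minimize (1/2)||w||^2 subject to y_i (w.x_i + b) >= 1 for all i. A minimal sketch of solving it, assuming scikit-learn rather than whatever solver the cited text uses:

```python
# Sketch of the hard-margin linear SVM:
#   minimize (1/2)||w||^2   subject to   y_i (w.x_i + b) >= 1.
# A very large C approximates the hard-margin problem; the dataset
# is a synthetic separable stand-in (an assumption, not from the text).
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=40, centers=2, random_state=6)
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# The optimal hyperplane w.x + b = 0 is determined by the support vectors.
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("support vectors:", len(clf.support_vectors_))
```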

Support vector machine (SVM) is a widely used machine learning algorithm for binary data classification based on the principle of structural risk minimization (SRM) [21, 22], unlike the traditional empirical risk minimization (ERM) of artificial neural networks. For a two-class problem, SVM finds a separating hyperplane that maximizes the width of separation between the convex hulls of the two classes. To find the expression of the hyperplane, SVM solves a quadratic optimization problem as follows ... [Pg.195]
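For the optimal hyperplane, the width of separation the excerpt refers to equals 2/||w||, which can be read off a fitted model. A hedged sketch, again assuming scikit-learn and synthetic data:

```python
# Sketch: the geometric margin (width of separation between the
# convex hulls of two separable classes) equals 2/||w|| for the
# optimal hyperplane. Dataset and library are assumptions.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=40, centers=2, random_state=6)
clf = SVC(kernel="linear", C=1e6).fit(X, y)  # large C ~ hard margin

margin_width = 2.0 / np.linalg.norm(clf.coef_)
print("margin width:", margin_width)
```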

Support vector machine (SVM) operates on the principle of structural risk minimization (SRM) [11, 12]. It constructs a hyper-plane or a set of hyper-planes on... [Pg.126]

SVM is a binary classifier that finds the optimal linear decision surface based on the concept of structural risk minimization. The input to an SVM algorithm is a set... [Pg.701]

The VC confidence term in Eq. [8] depends on the chosen class of functions, whereas the empirical risk and the actual risk depend on the particular function obtained from the training algorithm. It is important to find a subset of the selected set of functions such that the risk bound for that subset is minimized. A structure is introduced by organizing the whole class of functions into nested subsets (Figure 20), with the property dVC,1 ≤ dVC,2 ≤ dVC,3. For each subset of functions, it is either possible to compute dVC or to obtain a bound on the VC dimension. Structural risk minimization consists of finding the subset of functions that minimizes the bound on the actual risk. This is done by training a machine model for each subset, with the goal of minimizing the empirical risk for each model. One then selects the machine model whose sum of empirical risk and VC confidence is minimal. [Pg.308]
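The procedure above can be sketched as a model-selection loop over nested function classes. Since the VC confidence is rarely computable exactly, the degree-based penalty below is only a toy stand-in for it; the nested classes, dataset, and penalty are all assumptions for illustration.

```python
# Sketch of structural risk minimization: train one machine per
# nested function class (polynomial kernels of increasing degree),
# then pick the model minimizing empirical risk + complexity term.
# The sqrt(degree/n) penalty is a TOY stand-in for the VC confidence.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
n = len(X)

best = None
for degree in (1, 2, 3, 4, 5):        # nested subsets S1 within S2 within ...
    clf = SVC(kernel="poly", degree=degree, C=1.0).fit(X, y)
    empirical_risk = 1.0 - clf.score(X, y)    # training error
    confidence = np.sqrt(degree / n)          # toy capacity penalty
    bound = empirical_risk + confidence
    if best is None or bound < best[1]:
        best = (degree, bound)

print("selected degree:", best[0])
```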

A common belief is that because SVM is based on structural risk minimization, its predictions are better than those of other algorithms that are based on empirical risk minimization. Many published examples show, however, that for real applications, such beliefs do not carry much weight and that sometimes other multivariate algorithms can deliver better predictions. [Pg.351]

An important question to ask is as follows: Do SVMs overfit? Some reports claim that, due to their derivation from structural risk minimization, SVMs do not overfit. However, in this chapter we have already presented numerous examples where the SVM solution is overfitted for simple datasets. More examples will follow. In real applications, one must carefully select the nonlinear kernel function needed to generate a classification hyperplane that is topologically appropriate and has optimum predictive power. [Pg.351]
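The overfitting concern can be demonstrated in a few lines: an RBF kernel with an extreme gamma memorizes the training set yet generalizes poorly, while a moderate gamma does not. The dataset and parameter values below are illustrative assumptions, not the chapter's own examples.

```python
# Sketch: SVMs *can* overfit despite SRM - an RBF kernel with a huge
# gamma fits the training data almost perfectly but cross-validates
# badly. Dataset and gamma values are assumptions for illustration.
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

overfit = SVC(kernel="rbf", gamma=1000.0).fit(X, y)
moderate = SVC(kernel="rbf", gamma=1.0).fit(X, y)

train_overfit = overfit.score(X, y)                 # near-perfect memorization
cv_overfit = cross_val_score(overfit, X, y, cv=5).mean()
cv_moderate = cross_val_score(moderate, X, y, cv=5).mean()
print(train_overfit, cv_overfit, cv_moderate)
```

The gap between training and cross-validated accuracy for the extreme kernel is exactly the overfitting the excerpt warns about.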

To tame these challenges and reduce the risk and effort involved, new concepts such as SOA are required. From a theoretical standpoint, SOA aims to create a cost-optimized and easy-to-maintain IT environment. In implementing SOA, it becomes increasingly obvious that the SOA approach is the foundation for a step-by-step, cost-optimized and risk-minimizing way to renew an established IT infrastructure. The continuous transition from an old environment that is stable but inflexible and cost-intensive into a new, flexible and future-oriented landscape is more than a project or a program; it is a continuous journey that takes more than one decade [15]. [Pg.617]

Selecting an optimum group of descriptors is both an important and time-consuming phase in developing a predictive QSAR model. Frohlich, Wegner, and Zell introduced the incremental regularized risk minimization procedure for SVM classification and regression models, and they compared it with recursive feature elimination and with the mutual information procedure. Their first experiment considered 164 compounds that had been tested for their human intestinal absorption, whereas the second experiment modeled the aqueous solubility prediction for 1297 compounds. Structural descriptors were computed by those authors with JOELib and MOE, and full cross-validation was performed to compare the descriptor selection methods. The incremental... [Pg.374]
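Recursive feature elimination, one of the descriptor-selection methods compared in the excerpt above, can be sketched with a linear SVM. The dataset and descriptor counts below are synthetic stand-ins for the human intestinal absorption and solubility sets discussed, and the library choice is an assumption.

```python
# Sketch of recursive feature elimination (RFE) with a linear SVM:
# repeatedly drop the descriptor with the smallest |w| coefficient.
# Synthetic data mimics a redundant molecular-descriptor pool.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# 20 descriptors, only 5 informative, 10 redundant.
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, n_redundant=10,
                           random_state=0)

selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
print("kept descriptors:", list(selector.get_support(indices=True)))
```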

Piping systems should be designed to minimize the use of components that are likely to leak or fail. Sight glasses and flexible connectors such as hoses and bellows should be eliminated wherever possible. Where these devices must be used, they must be specified in detail so they are structurally robust, compatible with process fluids, and installed to minimize the risk of external damage or impact. [Pg.72]

The value placed on efficiency and predictability, and the institutional pressures for cost-containment, accountability and measurability, are enhancing the appeal of reductionist theories. They fit with the tendency to locate social problems in individual pathology. They suit the actuarial mentality that places faith in statistical information as a means to predict and minimize future risk [7]. Genetic and evolutionary explanations have become a way to address the issues that trouble society - the perceived decline of the family, the problems of crime and persistent poverty, changes in the ethnic structure of the population, and the pressures on public schools. [Pg.307]

Repair and Reuse After Explosion. Although the risk of a high order detonation of a munition during disassembly is low, this hazard does exist. In the event of such an incident, it is a design requirement for the containment rooms to suffer only minimal damage and allow rapid refurbishment. To assure this capability, the containment room structural design criteria are more conservative than Department of Defense Explosive Safety Criteria would normally require. This is considered appropriate since vapor containment is so critical in this facility. [Pg.250]

Blast resistant design, or the structural strengthening of buildings, is one of the measures an owner may employ to minimize the risk to people and facilities from the hazards of accidental explosions in a plant, Other mitigative or preventive measures, including siting (adequate spacing from potential explosion hazards) and hazard reduction (inventory and process controls, occupancy limitations, etc.), arc not covered in this report. [Pg.142]

An analysis of more than 130 preclinical candidates that had attrited during further development showed that the chemotype approach (i.e. the assumption that a compound of the same or similar chemotype will have similar risks of attrition, and that a structurally diverse chemotype will offer the best approach to minimizing attrition risk) and 2D structure-based methods could not effectively differentiate compounds [29]. Thus, the risk of failing or succeeding in development is not related to being of the same chemotype, and differentiation by this method may not be the most effective way: the dangers are both that a valuable series or chemotype could be discarded because of one bad result, and that a structurally different compound may actually have similar off-target effects (e.g. due to the decoration versus the scaffold). [Pg.36]

