Big Chemical Encyclopedia



Issues - Neural Networks

An important way to improve network performance is through the use of prior knowledge, which refers to information that one has about the desired form of the solution and which is additional to the information provided by the training data. Prior knowledge can be incorporated into the pre-processing and post-processing stages (Chapter 7), or into the network structure itself. [Pg.89]
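As an illustrative sketch of building prior knowledge into the pre- and post-processing stages (the log-transform choice here is an assumption for illustration, not the source's example): if one knows a priori that the target quantity is strictly positive and spans several orders of magnitude, the network can be trained on the logarithm of the target and its outputs exponentiated afterwards.

```python
import math

# Prior knowledge assumed here: targets are strictly positive and span
# several orders of magnitude, so the network is fitted to log10(y)
# (pre-processing) and its outputs are exponentiated (post-processing).

def preprocess_targets(y_values):
    return [math.log10(y) for y in y_values]

def postprocess_predictions(z_values):
    return [10.0 ** z for z in z_values]

targets = [0.003, 0.2, 15.0, 4200.0]
encoded = preprocess_targets(targets)       # what the network would be fit to
decoded = postprocess_predictions(encoded)  # back on the original scale
```

On the log scale the targets occupy a narrow, roughly uniform range, which is far easier for a network to fit than raw values spread over six orders of magnitude.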

Examined below are several neural network design considerations, including the architecture (8.1), learning algorithm (8.2), network parameters (8.3), training and test data (8.4), and evaluation mechanism (8.5). [Pg.89]


Neural networks can learn automatically from a data set of examples. In the case of NMR chemical shifts, neural networks have been trained to predict the chemical shift of protons on submission of a chemical structure. Two main issues play decisive roles: how a proton is represented, and which examples are in the data set. [Pg.523]

Especially in the last few years, the number of applications of neural networks has grown exponentially. One reason for this is undoubtedly that neural networks outperform the traditional (linear) techniques in many applications. The large number of samples needed to train neural networks certainly remains a serious bottleneck. The validation of the results is a further issue of concern. [Pg.680]

From a modeling standpoint, the prediction of a molecule's solubility is a very difficult task because of the issues listed above [13-15]. The problem of predicting solubility has been attacked with reasonable success with complex neural network models. While not interpretable, neural networks can function as an in silico assay. Other techniques which are more interpretable have also been applied to the problem. [Pg.453]

For fitting a neural network, it is often recommended to optimize the values of the weight-decay parameter λ via CV (cross-validation). An important issue for the number of parameters is the choice of the number of hidden units, i.e., the number of variables used in the hidden layer (see Section 4.8.3). Typically, 5-100 hidden units are used, with the number increasing with the number of training data variables. We will demonstrate in a simple example how the results change for different numbers of hidden units and different values of λ. [Pg.236]
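A minimal, self-contained sketch of such an experiment: a tiny one-hidden-layer tanh network trained by plain stochastic gradient descent on toy sine data, with k-fold cross-validation over a small grid of hidden-unit counts and weight-decay values. All function names, the toy data, and the grid values are illustrative assumptions, not the book's code; in practice a library implementation would be used.

```python
import math, random

def init_net(n_hidden, seed=0):
    # small random weights: one hidden tanh layer, one linear output
    rng = random.Random(seed)
    return {"w1": [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)],
            "b1": [0.0] * n_hidden,
            "w2": [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)],
            "b2": 0.0}

def predict(net, x):
    h = [math.tanh(w * x + b) for w, b in zip(net["w1"], net["b1"])]
    return sum(w2 * hj for w2, hj in zip(net["w2"], h)) + net["b2"]

def train(net, data, lam, lr=0.02, epochs=150):
    # stochastic gradient descent on squared error plus weight decay lam
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(w * x + b) for w, b in zip(net["w1"], net["b1"])]
            err = sum(w2 * hj for w2, hj in zip(net["w2"], h)) + net["b2"] - y
            for j, hj in enumerate(h):
                da = err * net["w2"][j] * (1.0 - hj * hj)
                net["w2"][j] -= lr * (err * hj + lam * net["w2"][j])
                net["w1"][j] -= lr * (da * x + lam * net["w1"][j])
                net["b1"][j] -= lr * da
            net["b2"] -= lr * err
    return net

def cv_error(data, n_hidden, lam, k=3):
    # k-fold cross-validated mean squared error for one (n_hidden, lam) pair
    folds = [data[i::k] for i in range(k)]
    total = 0.0
    for i in range(k):
        fit = [p for j, f in enumerate(folds) if j != i for p in f]
        net = train(init_net(n_hidden, seed=i), fit, lam)
        total += sum((predict(net, x) - y) ** 2
                     for x, y in folds[i]) / len(folds[i])
    return total / k

random.seed(1)
data = [(0.2 * i, math.sin(0.2 * i) + random.gauss(0.0, 0.1))
        for i in range(30)]
grid = [(h, lam) for h in (1, 4) for lam in (0.0, 0.1)]
scores = {cfg: cv_error(data, *cfg) for cfg in grid}
best_n_hidden, best_lam = min(scores, key=scores.get)
```

More hidden units make the fit more flexible, and a larger λ counteracts the resulting tendency to overfit; `scores` simply records the cross-validated error for each combination, and the pair with the lowest error is selected.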

MacKay's textbook [114] offers not only comprehensive coverage of Shannon's theory of information but also probabilistic data modeling and the mathematical theory of neural networks. Artificial NNs can be applied to problems of data processing and analysis, and to prediction and classification (data mining). The wide range of applications of NNs also comprises optimization issues. The information-theoretic capabilities of some neural network algorithms are examined, and neural networks are motivated as statistical models [114]. [Pg.707]

Chen VCP, Rollins DK (2000) Issues regarding artificial neural network modeling for reactors and fermenters. Bioprocess Eng 22:85-93. [Pg.270]

The purpose of the following four chapters is to build a foundation of understanding of basic neural network principles. Subsequent chapters will address issues specific to choosing the neural network design (architecture) for particular applications and the preparation of data (data encoding) for use by neural networks. [Pg.17]

Figure 6.1 Design issues of neural network applications for genome informatics.
The selection of training data presented to the neural network influences whether or not the network learns a particular task. Some major considerations include the generalization/memorization issue, the partitioning of the training and prediction sets, the quality of data, the ratio of positive and negative examples, and the order of example presentation. [Pg.94]
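Two of these considerations, the partitioning of the training and prediction sets and the ratio of positive and negative examples, can be made concrete with a small stratified-split helper that preserves the class ratio in both partitions and randomizes presentation order. This is a hypothetical sketch; the function name, split fraction, and seed are illustrative.

```python
import random

def stratified_split(examples, labels, test_frac=0.25, seed=7):
    # partition into training and prediction sets while preserving the
    # ratio of positive to negative examples in each partition
    rng = random.Random(seed)
    by_label = {}
    for ex, lab in zip(examples, labels):
        by_label.setdefault(lab, []).append(ex)
    train_set, test_set = [], []
    for lab in sorted(by_label):
        group = by_label[lab][:]
        rng.shuffle(group)
        n_test = round(len(group) * test_frac)
        test_set += [(ex, lab) for ex in group[:n_test]]
        train_set += [(ex, lab) for ex in group[n_test:]]
    rng.shuffle(train_set)  # randomize the order of example presentation
    return train_set, test_set

# 80 positive and 20 negative examples: the 4:1 ratio is kept in both sets
examples = list(range(100))
labels = [1] * 80 + [0] * 20
train_set, test_set = stratified_split(examples, labels)
```

Splitting each class separately guarantees that a skewed class ratio in the full data set is reproduced, rather than distorted, in both the training and prediction sets.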

Major problems facing an investigator who wants to prepare data for analysis or neural network modeling concern what input data features are to be used and how the information will be encoded before presentation to the model. Another issue to be faced concerns the discovery of biological rules and features from the data after analysis or modeling: e.g., what do the results mean? The interpretation of weights after training, for example, is a particularly difficult problem. [Pg.143]
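One widely used answer to the encoding question in sequence analysis is a sparse (one-hot) representation, in which each residue of a sequence window becomes a group of inputs with exactly one set to 1. A minimal sketch for DNA follows; the function name and alphabet handling are illustrative.

```python
def one_hot_dna(sequence, alphabet="ACGT"):
    # each base becomes a group of len(alphabet) inputs, exactly one set to 1
    index = {base: i for i, base in enumerate(alphabet)}
    vector = []
    for base in sequence.upper():
        bits = [0.0] * len(alphabet)
        bits[index[base]] = 1.0
        vector.extend(bits)
    return vector

window = one_hot_dna("acgt")  # 4 bases -> 16 network inputs
```

The encoding is longer than a dense numeric code but imposes no artificial ordering on the bases, which is itself a form of prior knowledge about the input.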

Neural networks are often viewed as black boxes. Despite a high level of predictive accuracy, one usually cannot understand why a particular outcome is predicted. Although this is generally true, especially for multilayer networks whose weights cannot be easily interpreted, there are methods for analyzing trained networks and extracting rules or features. The issue can be framed as a set of questions. How does one extract rules from trained networks (13.2.1)? Is it possible to measure the importance of inputs (13.2.2)? How should input variables be selected (13.2.3)? Another related question concerns the interpretation of network output: how likely is the prediction to be correct (13.2.4)? ... [Pg.152]
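For the input-importance question, one model-agnostic approach is permutation importance: shuffle a single input column and measure how much the prediction error grows. A minimal sketch follows, using a simple linear function as a stand-in for a trained network; the stand-in model, names, and toy data are assumptions for illustration.

```python
import random

def mse(model, X, y):
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, col, seed=0):
    # increase in mean squared error after shuffling one input column;
    # a larger increase means the model relies more on that input
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:]
              for row, v in zip(X, shuffled)]
    return mse(model, X_perm, y) - baseline

# stand-in for a trained network: depends strongly on input 0, weakly on 1
def model(row):
    return 2.0 * row[0] + 0.1 * row[1]

X = [[float(i), float(i % 3)] for i in range(12)]
y = [model(row) for row in X]
imp0 = permutation_importance(model, X, y, col=0)
imp1 = permutation_importance(model, X, y, col=1)
```

Because the technique only needs predictions, not weights, it applies equally to a multilayer network whose internal parameters resist direct interpretation.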

One final issue to note with supervised neural networks is that, because they fit to the data available to... [Pg.2401]







