
Training an Artificial Neural Network

In this section, the process by which an ANN learns is summarised. To simplify (although without lack of generality), explanations will be given for a network with an input layer, a hidden layer and an output layer. Finally, some notes will be presented for networks with more than one hidden layer. In all cases, i indexes the variables in the input vector (1, 2, ..., I, i.e. the atomic spectral variables), j indexes the neurons in the hidden layer (1, 2, ..., J) and k the neurons in the output layer (1, 2, ..., K). [Pg.255]

As for any calibration method, we need a reliable set of standards where the atomic spectra have been measured and the concentration(s) of the analyte(s) is (are) known. Here reliable is used sensu stricto, which means that the standards should be as similar as possible to the unknown samples. This can be justified easily by recalling that ANNs extract knowledge from a training set to, later, predict unknowns. If and only if the standards resemble the unknowns closely can the ANNs be expected to extract the correct knowledge and predict future samples properly. As for MLR, PCR and PLS, this issue is of paramount importance. [Pg.255]

As was explained above, each neuron of the hidden layer has one response (a_j), calculated with the activation function (f): [Pg.255]
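The equation itself does not survive in this excerpt. A standard form consistent with the notation above (the weights w_{ij} and bias b_j are assumed here, as the excerpt does not name them) is:

$$a_j = f\left(\sum_{i=1}^{I} w_{ij}\, x_i + b_j\right), \qquad j = 1, 2, \ldots, J$$

where x_i are the atomic spectral variables of the input vector.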

As in our example there is only one neuron in the output layer (we are considering only calibration), the activation function yields a value that is the final response of the net to our input spectrum (recall that the output function of the neuron at the output layer for calibration purposes is just the identity function): [Pg.256]
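This equation is also omitted in the excerpt. With an identity output function, the net's response reduces to a weighted sum of the hidden-layer responses (the output weights w_j and bias b are again assumptions, not names from the original):

$$u = \sum_{j=1}^{J} w_j\, a_j + b$$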

Undoubtedly, we do not expect the ANN to predict exactly the concentration of the analyte we used to prepare the standard (at least in these initial steps), so we can calculate the difference between the target (r) and predicted value (u): [Pg.256]
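In the letters of the text, for the single output neuron of the example:

$$e = r - u, \qquad E = \frac{1}{2}\sum_{\text{standards}} (r - u)^2$$

The excerpt cuts off before naming a cost function; the sum-of-squares criterion E shown here is the usual choice and is assumed, not quoted. Training by backpropagation then adjusts the weights step by step so as to reduce E over the whole set of standards.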


Instead of using one or two process measurements, all the measured process conditions (e.g., fuel feed rate, oxygen in the flue gas, heating value of the fuel, ambient air temperature, and so on) have been empirically correlated to predict the NO concentration in the flue gas. The empirical correlation is based on training an artificial neural network to predict the flue gas NO concentration from all the available data. [Pg.1236]
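A minimal sketch of this kind of empirical correlation, using scikit-learn's MLPRegressor; the data, variable names and network size below are placeholders, not details of the original system:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: each row holds measured process conditions
# (fuel feed rate, O2 in flue gas, fuel heating value, ambient temperature),
# and the target is the flue gas NO concentration from a reference analyser.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))   # placeholder process measurements
y_train = rng.random(200)        # placeholder NO concentrations

# Scale the inputs, then fit a small feedforward network (one hidden layer).
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

# Predict the NO concentration for a new set of process conditions.
x_new = rng.random((1, 4))
print(model.predict(x_new))
```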

A classical Hansch approach and an artificial neural network approach were applied to a training set of 32 substituted phenylpiperazines characterized by their affinity for the 5-HT1A-R and the generic α1-AR [91]. The study was aimed at evaluating the structural requirements for the 5-HT1A/α1 selectivity. Each chemical structure was described by six physicochemical parameters and three indicator variables. As electronic descriptors, the field and resonance constants of Swain and Lupton were used. Furthermore, the vdW volumes were employed as steric parameters. The hydrophobic effects exerted by the ortho- and meta-substituents were measured by using the Hansch π-ortho and π-meta constants [91]. The resulting models provided a significant correlation of electronic, steric and hydrophobic parameters with the biological affinities. Moreover, it was inferred that the... [Pg.169]

A large combined data set of 48 propafenones was then analyzed by both Free-Wilson analysis and a combined Hansch/Free-Wilson approach using an artificial neural network (ANN). With this approach it was possible, in contrast to conventional MLR analysis, to correctly predict the MDR-reversing activity of 34 compounds of the data set after the ANN was trained by only 14 compounds. Best results were obtained using those descriptors showing the highest statistical significance in MLR analysis [150]. [Pg.279]

A diverse set of 4173 compounds was used by Karthikeyan et al. [133] to derive their models with a large number of 2D and 3D descriptors (all calculated by MOE following 3D-structure generation by Concord [134]) and an artificial neural network. The authors found that 2D descriptors provided better prediction accuracy (RMSE = 48-50°C) compared to models developed using 3D indexes (RMSE = 55-56°C) for both training and test sets. The use of a combined 2D and 3D dataset did not improve the results. [Pg.262]

There are many other known methods for SoC (state-of-charge) determination. Filler [16] describes an artificial neural network (ANN) and a simple linear equation method. The structure of the ANN is given in Fig. 8.16. For training the ANN, measured values of temperature, voltage, current and an SoC value (calculated with a reference method) are necessary. [Pg.224]
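A minimal sketch of training such a network, assuming a one-hidden-layer structure of the kind discussed earlier in this section; the data, layer size and learning rate are placeholders, not Filler's actual design:

```python
import numpy as np

# Hypothetical training set: rows of (temperature, voltage, current),
# labelled with SoC values obtained from a reference method.
rng = np.random.default_rng(1)
X = rng.random((500, 3))   # placeholder T, V, I measurements
t = rng.random((500, 1))   # placeholder reference SoC values

# One hidden layer (tanh activation) and an identity output neuron.
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    # Forward pass: a_j = f(sum_i w_ij x_i + b_j), u = sum_j w_j a_j + b
    a = np.tanh(X @ W1 + b1)
    u = a @ W2 + b2
    e = u - t                      # prediction error per standard
    # Backpropagation: gradients of the mean squared-error cost
    dW2 = a.T @ e / len(X); db2 = e.mean(axis=0)
    da = e @ W2.T * (1 - a**2)     # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ da / len(X); db1 = da.mean(axis=0)
    # Gradient-descent weight update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Estimate SoC for a new (T, V, I) reading.
x_new = rng.random((1, 3))
print(np.tanh(x_new @ W1 + b1) @ W2 + b2)
```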

One commercially available sensor array analysis system, offered by Mosaic Industries [51], is Rhino, a microprocessor-based instrument with an array composed of discrete, resistive gas sensors. An artificial neural network processes sensor inputs and relates them to patterns established by training the instrument with gas components and mixtures of interest for a specific application. In principle, each system is customized for an application by the choice of sensors and the gas detection needs. Potential applications for this system are limited by the availability of suitable sensors and the complexity needed for discrimination. [Pg.383]

Foody, G.M., McCulloch, M.B., and Yates, W.B., 1995. Classification of remotely sensed data by an artificial neural network: issues related to training data characteristics. Photogrammetric Engineering and Remote Sensing. [Pg.286]

These features are then input to a statistical classifier, such as an artificial neural network. The classifier is trained to distinguish benign from malignant lesions (CADx) or true from false detected lesions (CADe). The output of the statistical classifier is a... [Pg.89]

Dai and MacBeth [1995] used an artificial neural network for automatic picking of local earthquake data. The network is trained on noise and P-wave segments. This method is also not applied to the raw signal directly, but to the modulus of a windowed segment of the signal. The output of the network consists of two values, which are parameters of a function that accentuates the difference between the actual output and ideal noise. The disadvantage of this approach is that it is time intensive. [Pg.104]

Recently, a new approach called artificial neural networks (ANNs) has been assisting engineers and scientists in their assessment of fuzzy information. Polymer scientists often face a situation where the rules governing the particular system are unknown or difficult to use. It also frequently becomes an arduous task to develop functional forms/empirical equations to describe a phenomenon. Most of these complexities can be overcome with an ANN approach because of its ability to build an internal model based solely on its exposure to a training environment. Fault tolerance of ANNs has been found to be very advantageous in physical property predictions of polymers. This chapter presents a few such cases where the authors have successfully implemented an ANN-based approach for the purpose of empirical modeling. These are by no means exhaustive. [Pg.1]

Even so, artificial neural networks exhibit many brainlike characteristics. For example, during training, neural networks may construct an internal mapping/model of an external system. Thus, they are assumed to make sense of the problems that they are presented with. As with any construction of a robust internal model, the external system presented to the network must contain meaningful information. In general, the following anthropomorphic perspectives can be maintained while preparing the data... [Pg.8]

