
Neural networks history

There is a long history of efforts to find simple and interpretable functions for various activities and properties (29, 30). The quest for predictive QSAR models started with Hammett's pioneering work correlating molecular structures with chemical reactivities (30-32). However, the widespread application of modern predictive QSAR and QSPR actually began with the seminal work of Hansch and coworkers on pesticides (29, 33, 34), and it has been fueled by the development of powerful multivariate analysis tools such as PLS (partial least squares) and neural networks. Nowadays, numerous publications on guidelines, workflows, and... [Pg.40]

A neural network consists of many processing elements joined together. A typical network consists of a sequence of layers with full or random connections between successive layers. A minimum of two layers is required: the input buffer, where data are presented, and the output layer, where the results are held. However, most networks also include intermediate layers, called hidden layers. An example of such a network is one used for the indirect determination of the Reid vapor pressure (RVP) and the distillate boiling point (BP) on the basis of 9 operating variables and the past history of their relationships to the variables of interest (Figure 2.56). [Pg.207]
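To make the layer arrangement concrete, here is a minimal sketch of a single forward pass in Python/NumPy. The hidden-layer size, the sigmoid activation, and the random stand-in weights are illustrative assumptions; this is not the trained network of Figure 2.56, which is only described, not specified, in the text.

    import numpy as np

    rng = np.random.default_rng(0)

    # Layer sizes are illustrative: 9 operating variables in, one hidden
    # layer, 2 outputs (e.g. RVP and BP); the real network may differ.
    n_in, n_hidden, n_out = 9, 5, 2

    # Randomly initialised weights and biases stand in for trained values.
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
    b2 = np.zeros(n_out)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(x):
        """One forward pass: input buffer -> hidden layer -> output layer."""
        h = sigmoid(W1 @ x + b1)   # hidden-layer activations
        return W2 @ h + b2         # output layer holds the results

    x = rng.normal(size=n_in)      # one synthetic vector of operating variables
    print(forward(x))              # two predicted quantities

Each layer is simply a weight matrix applied to the previous layer's outputs, which is all that "full connections between successive layers" amounts to in practice.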

In the previous chapter a simple two-layer artificial neural network was illustrated. Such two-layer, feed-forward networks have an interesting history and are commonly called perceptrons. Similar networks with more than two layers are called multilayer perceptrons, often abbreviated as MLPs. In this chapter the development of perceptrons is sketched, with a discussion of particular applications and limitations. Multilayer perceptron concepts are then developed; applications, limitations, and extensions to other kinds of networks are discussed. [Pg.29]
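As a concrete illustration of the two-layer idea, here is a minimal sketch of the classic perceptron learning rule, assuming NumPy; the AND task, the learning rate, and the number of sweeps are illustrative choices, not taken from the text.

    import numpy as np

    # Toy training set: the linearly separable AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    t = np.array([0, 0, 0, 1])

    w = np.zeros(2)   # weights from the two inputs to the single output
    b = 0.0
    lr = 0.1

    for _ in range(20):                 # a few sweeps suffice here
        for x, target in zip(X, t):
            y = int(w @ x + b > 0)      # threshold (step) activation
            w += lr * (target - y) * x  # classic perceptron update
            b += lr * (target - y)

    print([int(w @ x + b > 0) for x in X])   # [0, 0, 0, 1]

Because AND is linearly separable, the rule converges; a single perceptron famously cannot learn XOR, one of the limitations Minsky and Papert made prominent.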

An adaptation of the simple feed-forward network that has been used successfully to model time dependencies is the so-called recurrent neural network. Here, an additional layer (referred to as the context layer) is added. In effect, this means that there is an additional connection from each hidden-layer neuron to itself. Each time a data pattern is presented to the network, the neuron computes its output function just as it does in a simple MLP. However, its input now contains a term that reflects the state of the network before the data pattern was seen. For subsequent data patterns, the hidden and output nodes therefore depend on everything the network has seen so far: the behaviour of a recurrent network is based on its history. [Pg.2401]
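A minimal sketch of this context-layer idea (an Elman-style recurrent step), assuming NumPy; the layer sizes, tanh activation, random stand-in weights, and synthetic input sequence are all illustrative assumptions, not taken from the text.

    import numpy as np

    rng = np.random.default_rng(1)

    n_in, n_hidden, n_out = 3, 4, 1

    W_in  = rng.normal(scale=0.1, size=(n_hidden, n_in))
    W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> context -> hidden
    W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

    def step(x, context):
        """One time step: the hidden layer sees the current input *and*
        the context layer, i.e. its own activations from the previous step."""
        h = np.tanh(W_in @ x + W_ctx @ context)
        y = W_out @ h
        return y, h   # the new hidden state becomes the next context

    context = np.zeros(n_hidden)          # empty history before the first pattern
    for x in rng.normal(size=(5, n_in)):  # a short synthetic sequence
        y, context = step(x, context)
        print(y)

Feeding the hidden activations back in as the context is exactly the extra input term described above: the first pattern sees an all-zero history, and every later output depends on the whole sequence seen so far.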

Methods without models, such as quantitative process-history-based methods (neural networks (Venkatasubramanian et al., 2003), statistical classifiers (Anderson, 1984)) or qualitative process-history-based methods (expert systems (Venkatasubramanian et al., 2003)),... [Pg.411]

The massive surveys, both ground-based and from space missions, provide large numbers of stellar spectra covering distant components of the Galaxy. To understand the complex evolutionary history of our Galaxy, rapid and accurate methods of stellar classification are necessary. A short review of the automated procedures is presented here. The most commonly used automated spectral classification methods are based on (a) the Minimum Distance Method (MDM), (b) the Gaussian Probability Method (GPM), (c) Principal Component Analysis (PCA), and (d) Artificial Neural Networks (ANN). We chose to describe only two of them to introduce the automated approach to classification. [Pg.177]

The history of neural networks, at least in its popular version, has its angels and demons. One of them, usually cast as the devil, is Marvin Minsky. Minsky and Papert wrote a book entitled Perceptrons (1969) in which they showed, among other things, some of the limitations of the neural networks that were popular at the time. This was interpreted by some as a very restrictive limit on all kinds of neural networks, and by others as marking the end of an era of funding for the then-popular neural networks, making room for the more classical AI approaches promoted by Minsky and others. [Pg.335]

Neural augmentation, 29-3 to 29-8
    neural prostheses, 29-3 to 29-8
    sensory prostheses, 29-6 to 29-8
Neural engineering, history and overview, 29-1 to 29-12
    background, 29-1 to 29-3
Neural networks
    adaptive critics in, 12-6 to 12-7
    backpropagation in, 12-4 to 12-5
    basics, 12-2 to 12-3
    in control systems, 12-3 to 12-7
    direct inverse control in, 12-4
    for physiological control, 12-1 to 12-18 [Pg.1542]

Marini, F., Bucci, R., Magrì, A.L., Magrì, A.D., 2008. Artificial neural networks in chemometrics: history, examples and perspectives. Microchem. J. 88, 178-185. [Pg.399]

