
Simple neural network

Anderson, J.A. (1972) A Simple Neural Network Generating an Interactive Memory, Mathematical Biosciences, 14, pp. 197-220. [Pg.428]

The purpose of this case study was to develop a simple neural-network-based model with the ability to predict solvent activity in different polymer systems. The solvent activities were predicted by an ANN as a function of the binary type and the polymer volume fraction... [Pg.20]

...Yeh and Spiegelman [24]. Very good results were also obtained by using simple neural networks of the type described in Section 33.2.9 to derive a decision rule at each branching of the tree [25]. Classification trees have been used relatively rarely in chemometrics, but it seems that, in general [26], their performance is comparable to that of the best pattern recognition methods. [Pg.228]

Schierle and Otto [63] used a two-layer perceptron with error back-propagation for quantitative analysis in ICP-AES. Also, Schierle et al. [64] used a simple neural network [the bidirectional associative memory (BAM)] for qualitative and semiquantitative analysis in ICP-AES. [Pg.272]

Milton, J., VanDerHeiden, U., Longtin, A., and Mackey, M., Complex dynamics and noise in simple neural networks with delayed mixed feedback, Biomedica Biochimica Acta, Vol. 49, No. 8-9, 1990, pp. 697-707. [Pg.420]

Artificial Neural Networks and Decision Trees. Figure 6.3 shows an example of a simple neural network that uses Ghose and Crippen atom types (43) to code the molecular... [Pg.247]

FIGURE 4.11 A simple neural network consists of input units that receive the incoming data, a hidden layer of neurons, and the output neurons that finally provide the results of processing. The weights are values between 0 and 1 representing the strength of connectivity between the neurons. Typically, all neurons are connected to all neurons of the next layer. [Pg.104]
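
As an illustration of the structure this caption describes, here is a minimal sketch in Python. The layer sizes, the sigmoid activation, and the random weights are assumptions for demonstration, not details taken from the figure.

```python
import numpy as np

# Minimal fully connected feed-forward network: input units -> one hidden
# layer -> output neurons, with every neuron connected to every neuron of
# the next layer. Sizes and activation are illustrative assumptions.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 4, 1            # hypothetical layer sizes
W1 = rng.uniform(0, 1, (n_hidden, n_in))   # weights in [0, 1], as in the caption
W2 = rng.uniform(0, 1, (n_out, n_hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x)      # hidden layer receives all inputs
    return sigmoid(W2 @ h)   # output neurons receive all hidden activations

print(forward(np.array([0.2, 0.7, 0.1])))
```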

Two-dimensional arrangements might be monolayers of clusters on a suitable substrate or two or more coupled 1D arrays. While layers are accessible via self-assembly, LB, or electrodeposition, coupled arrays could be obtained by filling clusters into the parallel channels of a crystalline nanoporous solid. 2D networks of clusters might be precursors for simple neural networks, utilizing the Coulombic interaction between ballistic electrons in a 2D electron gas. This concept has been discussed by Naruse and, in general, introduces new possibilities for the interconnection approach in various fields, e.g. parallel processing and quantum functional devices. [Pg.1361]

Figure 33.13 shows the amplitude of reflection B in Figure 33.12 as a function of frequency for the three PVDF microfiltration membranes. One observes that the amplitude decreases (i.e., attenuation increases) as the nominal pore size increases. A similar trend was observed for the mixed cellulose-ester membranes. Ramaswamy et al. (2004) corroborated their UTDR measurements with SEM and porometry analyses that confirmed that there were substantive differences in the structure of these membranes. They also developed a simple neural network model in which five characteristic frequencies in the UTDR waveform response were used to determine the mean pore size of a membrane. Hence, the... [Pg.893]

The recognition ratios achieved by CBR systems developed as part of this project could not be bettered by either neural-network classifiers or rule-based expert system classifiers. In addition, CBR systems should be more reliable than simple classifiers, as they are programmed to recognise unknown data. The knowledge acquisition necessary to build CBR systems is less expensive than for expert systems, because it is simpler to describe how to distinguish between certain types of data than to describe the entire data content. [Pg.103]

Association deals with the extraction of relationships among members of a data set. The methods applied for association range from rather simple ones, e.g., correlation analysis, to more sophisticated methods like counter-propagation or back-propagation neural networks (see Sections 9.5.5 and 9.5.7). [Pg.473]
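
As a concrete instance of the simplest association method mentioned above, the sketch below computes a correlation matrix, which exposes pairwise linear relationships among the members of a data set. The data are synthetic, purely for illustration.

```python
import numpy as np

# Correlation analysis on a synthetic data set: column 1 depends linearly
# on column 0, column 2 is unrelated noise.
rng = np.random.default_rng(1)
x = rng.normal(size=100)
data = np.column_stack([x,
                        2 * x + rng.normal(scale=0.5, size=100),
                        rng.normal(size=100)])

print(np.corrcoef(data, rowvar=False))  # strong 0-1 correlation, weak with 2
```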

Artificial Neural Networks (ANNs) attempt to emulate their biological counterparts. McCulloch and Pitts (1943) proposed a simple model of a neuron, and Hebb (1949) described a technique which became known as Hebbian learning. Rosenblatt (1961) devised a single layer of neurons, called a Perceptron, that was used for optical pattern recognition. [Pg.347]

Controller emulation A simple application in control is the use of neural networks to emulate the operation of existing controllers. It may be that a nonlinear plant requires several tuned PID controllers to operate over the full range of control actions. Or again, an LQ optimal controller has difficulty in running in real-time. Figure 10.28 shows how the control signal from an existing controller may be used to train, and to finally be replaced by, a neural network controller. [Pg.361]
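
The emulation scheme can be sketched in a few lines: record the control signal of the existing controller and use it as the training target for a small network. Everything below (the toy proportional control law standing in for a tuned PID or LQ controller, the network size, and the plain gradient-descent loop) is an illustrative assumption, not the setup of Figure 10.28.

```python
import numpy as np

# Train a one-hidden-layer network to reproduce an existing controller's
# control signal; once trained, the network can replace the controller.
rng = np.random.default_rng(2)

def existing_controller(error):
    return -2.0 * error                      # stands in for a tuned PID/LQ law

# Record (error, control signal) training pairs from the existing controller
errors = rng.uniform(-1, 1, size=(200, 1))
targets = existing_controller(errors)

W1 = rng.normal(scale=0.5, size=(8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(scale=0.5, size=(1, 8)); b2 = np.zeros((1, 1))

def net(x):
    h = np.tanh(W1 @ x.T + b1)               # hidden layer
    return (W2 @ h + b2).T, h                # linear output layer

lr = 0.05
for _ in range(2000):                        # plain gradient descent on MSE
    y, h = net(errors)
    d = (y - targets) / len(errors)          # dL/dy
    W2 -= lr * d.T @ h.T; b2 -= lr * d.sum(0, keepdims=True).T
    dh = (W2.T @ d.T) * (1 - h**2)           # backpropagate through tanh
    W1 -= lr * dh @ errors; b1 -= lr * dh.sum(1, keepdims=True)

print(net(np.array([[0.3]]))[0], existing_controller(0.3))  # should be close
```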

Neural networks can also be classified by their neuron transfer function, which is typically either linear or nonlinear. The earliest models used linear transfer functions, wherein the output values were continuous. Linear functions are not very useful for many applications because most problems are too complex to be handled by simple multiplication. In a nonlinear model, the output of the neuron is a nonlinear function of the sum of the inputs. The output of a nonlinear neuron can have a very complicated relationship with the activation value. [Pg.4]
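
A short sketch of the contrast; the sigmoid is one common nonlinear choice, assumed here for illustration:

```python
import numpy as np

# A linear neuron passes the weighted input sum (the activation) through
# unchanged; a nonlinear neuron squashes it, giving the output a more
# complicated relationship to the activation value.
def linear(activation):
    return activation

def sigmoid(activation):
    return 1.0 / (1.0 + np.exp(-activation))

activations = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(linear(activations))   # unbounded, proportional to the activation
print(sigmoid(activations))  # continuous but bounded in (0, 1)
```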

The specific volumes of all nine siloxanes were predicted as a function of temperature and the number of monofunctional units, M, and difunctional units, D. A simple 3-4-1 neural network architecture with just one hidden layer was used. The three input nodes were for the number of M groups, the number of D groups, and the temperature. The hidden layer had four neurons. The predicted variable was the specific volume of the siloxane... [Pg.11]
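
A minimal sketch of this 3-4-1 architecture (untrained random weights here; in the study the weights would be fitted to the siloxane data). The same pattern, with two inputs, gives the 2-4-1 networks of the following paragraphs.

```python
import numpy as np

# 3-4-1 network: three inputs (number of M groups, number of D groups,
# temperature), four hidden neurons, one output (specific volume).
rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden (3 -> 4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden -> output (4 -> 1)

def predict(n_M, n_D, temperature_K):
    x = np.array([n_M, n_D, temperature_K / 500.0])  # crude input scaling
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2        # predicted specific volume (untrained here)

print(predict(2, 5, 323.0))
```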

Viscosities of the siloxanes were predicted over a temperature range of 298-348 K. The semi-log plot of viscosity as a function of temperature was linear for the ring compounds. However, for the chain compounds, the viscosity increased rapidly with an increase in the chain length of the molecule. A simple 2-4-1 neural network architecture was used for the viscosity predictions. The molecular configuration was not considered here because of the direct positive effect of addition of both M and D groups on viscosity. The two input variables, therefore, were the siloxane type and the temperature level. Only one hidden layer with four nodes was used. The predicted variable was the viscosity of the siloxane. [Pg.12]

A very simple 2-4-1 neural network architecture with two input nodes, one hidden layer with four nodes, and one output node was used in each case. The two input variables were the number of methylene groups and the temperature. Although neural networks have the ability to learn all the differences, differentials, and other calculated inputs directly from the raw data, the training time for the network can be reduced considerably if these values are provided as inputs. The predicted variable was the density of the ester. The neural network model was trained for discrete numbers of methylene groups over the entire temperature range of 300-500 K. The... [Pg.15]

Literature in the area of neural networks has been expanding at an enormous rate with the development of new and efficient algorithms. Neural networks have been shown to have enormous processing capability, and the authors have implemented many hybrid approaches based on this technique. The authors have implemented an ANN-based approach in several areas of polymer science, and the overall results obtained have been very encouraging. The case studies and algorithms presented in this chapter were very simple to implement. Given the rate at which new approaches to neural networks appear, readers may find other paradigms that provide new opportunities in their areas of interest. [Pg.31]

While, as mentioned at the close of the last section, it took more than 15 years following Minsky and Papert's criticism of simple perceptrons for a bona-fide multilayered variant to finally emerge (see Multi-layered Perceptrons below), the man most responsible for bringing respectability back to neural net research was the physicist John J. Hopfield, with the publication of his landmark 1982 paper entitled "Neural networks and physical systems with emergent collective computational abilities" [hopf82]. To set the stage for our discussion of Hopfield nets, we first pause to introduce the notion of associative memory. [Pg.518]
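
As a taste of what follows, here is a minimal sketch of such an associative memory in the Hopfield style: a pattern of +/-1 states is stored with a Hebbian outer-product rule and recalled from a corrupted cue. The pattern and network size are illustrative assumptions.

```python
import numpy as np

# Store one pattern in a Hopfield-style network and recall it from a
# corrupted cue by repeated thresholded updates.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

W = np.outer(pattern, pattern).astype(float)  # Hebbian (outer-product) storage
np.fill_diagonal(W, 0.0)                      # no self-connections

cue = pattern.copy()
cue[:2] *= -1                                 # corrupt two bits

state = cue.copy()
for _ in range(5):                            # iterate to a fixed point
    state = np.where(W @ state >= 0, 1, -1)

print(np.array_equal(state, pattern))         # True: the memory is recovered
```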

Several nonlinear QSAR methods have been proposed in recent years. Most of these methods are based on either ANN or machine learning techniques. Both back-propagation (BP-ANN) and counterpropagation (CP-ANN) neural networks [33] were used in these studies. Because optimization of many parameters is involved in these techniques, the speed of the analysis is relatively slow. More recently, Hirst reported a simple and fast nonlinear QSAR method in which the activity surface was generated from the activities of training set compounds based on some predefined mathematical functions [34]. [Pg.313]

The basic component of the neural network is the neuron, a simple mathematical processing unit that takes one or more inputs and produces an output. For each neuron, every input has an associated weight that defines its relative importance, and the neuron simply computes the weighted sum of all its inputs to produce an output. This is then modified by means of a transformation function (sometimes called a transfer or activation function) before being forwarded to another neuron. This simple processing unit is known as a perceptron, a feed-forward system in which the transfer of data is in the forward direction, from inputs to outputs, only. [Pg.688]
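
The description above translates almost line for line into code; the step activation, the example weights, and the bias are illustrative assumptions:

```python
import numpy as np

# A perceptron: weighted sum of the inputs, modified by a transfer
# (activation) function, flowing strictly forward.
def step(z):                       # one common transfer function
    return 1.0 if z >= 0.0 else 0.0

def perceptron(inputs, weights, bias, transfer=step):
    activation = np.dot(weights, inputs) + bias  # weighted sum of inputs
    return transfer(activation)                  # modified by the transfer function

weights = np.array([0.8, -0.4, 0.3])  # relative importance of each input
print(perceptron(np.array([1.0, 0.5, 0.2]), weights, bias=-0.2))
```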

Numerous QSAR tools have been developed [152, 154] and used in modeling physicochemical data. These vary from simple linear to more complex nonlinear models, as well as classification models. More recently, a popular approach has been the construction of consensus or ensemble models ("combinatorial QSAR") combining the predictions of several individual approaches [155]. Alternatively, models can be built by running the same approach, such as a neural network or a decision tree, many times and combining the output into a single prediction. [Pg.42]
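
A minimal sketch of such a consensus prediction, with stand-in functions in place of individually trained models:

```python
import numpy as np

# Combine the predictions of several models into a single consensus
# prediction, here by simple averaging.
model_a = lambda x: 0.9 * x + 0.1   # stand-ins for individually trained models
model_b = lambda x: 1.1 * x - 0.2
model_c = lambda x: 1.0 * x + 0.05

def consensus(x, models=(model_a, model_b, model_c)):
    return np.mean([m(x) for m in models])  # single combined prediction

print(consensus(2.0))
```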

There are many different methods for selecting those descriptors of a molecule that capture the information that somehow encodes the compound's solubility. Currently, the most often used are multiple linear regression (MLR), partial least squares (PLS), and neural networks (NN). The former two methods provide a simple linear relationship between several independent descriptors and the solubility, as given in Eq. (14). This equation yields the independent contribution, h_i, of each descriptor, D_i, to the solubility... [Pg.302]
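
A minimal sketch of the MLR case, assuming (since Eq. (14) is not reproduced in this excerpt) the usual form in which the modeled property is a weighted sum of descriptors plus an intercept; the descriptor matrix and solubilities are synthetic:

```python
import numpy as np

# Multiple linear regression: least squares yields the independent
# contribution h_i of each descriptor D_i to the (log) solubility.
rng = np.random.default_rng(4)
D = rng.normal(size=(50, 3))                    # 50 compounds x 3 descriptors
true_h = np.array([0.7, -1.2, 0.4])
log_S = D @ true_h + 0.5 + rng.normal(scale=0.05, size=50)  # noisy solubilities

X = np.column_stack([D, np.ones(len(D))])       # add an intercept column
h, *_ = np.linalg.lstsq(X, log_S, rcond=None)

print(h)  # recovered contributions h_i (last entry is the intercept)
```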

