
Neural network implementation

Aloy, P., Cedano, J., Oliva, B., Aviles, F. X. and Querol, E. (1997). TransMem: a neural network implemented in Excel spreadsheets for predicting transmembrane domains of proteins. Comput. Appl. Biosci. 13, 231-234. [Pg.125]

F.C. Christo, A.R. Masri and E.M. Nebot, Artificial Neural Network Implementation of Chemistry with pdf Simulation of H2/CO2 Flames, Combust. Flame 106 (1996) 406-427. [Pg.437]

In neural networks implemented with electronic components, the connections are made by wires and the connection strengths are determined by resistors. In a network implemented by macroscopic chemical kinetics, the connections are made by mass transfer and the connection strengths are pumping rates. [Pg.40]
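A minimal sketch of the analogy drawn above, assuming simple well-stirred reactors and a sigmoidal stand-in for the reaction kinetics (neither is specified in the excerpt): the pumping rates play the role that resistor-set weights play in an electronic network.

```python
# Sketch of a "chemical" neuron layer: each neuron is a well-stirred reactor,
# each connection is a mass-transfer stream, and the connection strength is
# the pumping rate of that stream. All numbers are illustrative placeholders.
import numpy as np

def chemical_neuron_layer(upstream_conc, pumping_rates, volume=1.0):
    """Steady-state activity of one layer of 'chemical neurons'.

    upstream_conc : concentrations delivered by upstream reactors
    pumping_rates : hypothetical matrix of pumping rates, standing in for
                    the resistor-set weights of an electronic network
    """
    # Weighted input = total molar inflow per unit reactor volume.
    net_inflow = pumping_rates @ upstream_conc / volume
    # A saturating (sigmoidal) response stands in for bistable kinetics.
    return 1.0 / (1.0 + np.exp(-net_inflow + 2.0))

if __name__ == "__main__":
    inputs = np.array([0.1, 0.8, 0.3])           # upstream concentrations
    rates = np.array([[1.5, 0.2, 0.7],           # pumping rates, i.e. the
                      [0.3, 2.0, 0.1]])          # connection strengths
    print(chemical_neuron_layer(inputs, rates))
```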

Botros, S.M. and Bruce, E.N. 1990. Neural network implementation of the three-phase model of respiratory rhythm generation. Biol. Cybern. 63: 143. [Pg.187]

Another approach is to use algorithms derived from heuristics, or a well-trained neural network implemented in the HESS control, to provide the load power. The latter works by optimizing the set of currents required from the HESS under various driving conditions while maintaining defined boundary conditions, such as the supercapacitor SOC and operation of the battery within its safety limits. [Pg.255]
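A minimal sketch of this NN-based current-split idea; the SOC window, battery current limit, input scaling, and the untrained stand-in policy network are all hypothetical placeholders, not values from the cited text.

```python
# A trained network proposes the supercapacitor share of the load current,
# and hard safety constraints (SOC window, battery current limit) are then
# enforced on the result.
import numpy as np

SC_SOC_MIN, SC_SOC_MAX = 0.3, 0.9     # assumed supercapacitor SOC window
I_BATT_MAX = 120.0                    # assumed battery current limit (A)

def policy_net(features, w1, b1, w2, b2):
    """Tiny feedforward policy; the weights would come from offline training."""
    h = np.tanh(w1 @ features + b1)
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))   # supercapacitor share in [0, 1]

def split_load_current(i_load, sc_soc, params):
    share = float(policy_net(np.array([i_load / 200.0, sc_soc]), *params))
    if sc_soc <= SC_SOC_MIN:          # supercapacitor depleted: battery only
        share = 0.0
    i_sc = share * i_load
    i_batt = np.clip(i_load - i_sc, -I_BATT_MAX, I_BATT_MAX)
    return i_batt, i_load - i_batt    # remainder goes to the supercapacitor

if __name__ == "__main__":
    rng = np.random.default_rng(0)    # untrained stand-in weights
    params = (rng.normal(size=(4, 2)), np.zeros(4), rng.normal(size=4), 0.0)
    print(split_load_current(i_load=150.0, sc_soc=0.7, params=params))
```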

Table 21.4 shows the topology and the results for the five best neural networks implemented. To evaluate the accuracy of the ANN models developed, we used the RMSE in validation (RMSEv). As can be observed in Table 21.4, the neural network with the lowest RMSEv is, in this case, the one with topology 5-(4)1-1. We chose this ANN because it presents the lowest RMSEv and the lowest APD. The best topology developed, 5-(4)1-1, consists of five input neurons, one hidden layer with four neurons, and one output neuron in the output layer. To train this ANN model, a maximum of 750 training cycles was established; the learning rate was set at 0.60 and the momentum at 0.80. [Pg.454]
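A minimal sketch, on synthetic data rather than the chapter's, of training a 5-(4)1-1 network with the quoted settings (up to 750 training cycles, learning rate 0.60, momentum 0.80) and scoring it by validation RMSE.

```python
# Five inputs, one hidden layer of four sigmoid neurons, one linear output,
# trained by gradient descent with momentum and ranked by validation RMSE.
import numpy as np

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 5))                      # placeholder inputs
w_true = np.array([0.25, -0.4, 0.15, 0.3, -0.1])           # placeholder target rule
y = (X @ w_true)[:, None] + 0.05 * rng.normal(size=(200, 1))
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(scale=0.5, size=(5, 4)), np.zeros(4)   # 5 -> 4
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)   # 4 -> 1
vW1, vb1, vW2, vb2 = [np.zeros_like(p) for p in (W1, b1, W2, b2)]
lr, momentum = 0.60, 0.80

for cycle in range(750):                                   # training cycles
    H = sigmoid(X_tr @ W1 + b1)                            # hidden activations
    y_hat = H @ W2 + b2                                    # linear output
    err = y_hat - y_tr
    # Backpropagated gradients of the mean-squared-error loss.
    gW2 = H.T @ err / len(X_tr);      gb2 = err.mean(0)
    dH = (err @ W2.T) * H * (1 - H)
    gW1 = X_tr.T @ dH / len(X_tr);    gb1 = dH.mean(0)
    # Gradient descent with momentum.
    vW1 = momentum * vW1 - lr * gW1;  W1 += vW1
    vb1 = momentum * vb1 - lr * gb1;  b1 += vb1
    vW2 = momentum * vW2 - lr * gW2;  W2 += vW2
    vb2 = momentum * vb2 - lr * gb2;  b2 += vb2

rmse_v = np.sqrt(np.mean((sigmoid(X_va @ W1 + b1) @ W2 + b2 - y_va) ** 2))
print(f"validation RMSE: {rmse_v:.4f}")
```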

TABLE 21.4 Topology of the Five Best Neural Networks Implemented for This Chapter... [Pg.455]

Frauel, Y., Pauliat, G., Villing, A., and Roosen, G. (2001) High-capacity photorefractive neural network implementing a Kohonen topological map. Appl. Opt., 40, 5162-5169. [Pg.218]

Christo, F.C., Masri, A.R., Nebot, E.M., Turányi, T.: Utilising artificial neural network and repro-modelling in turbulent combustion. Proc. IEEE Int. Conf. Neural Netw. 1, 911-916 (1995) Christo, F.C., Masri, A.R., Nebot, E.M.: Artificial neural network implementation of chemistry with pdf simulation of H2/CO2 flames. Combust. Flame 106, 406-427 (1996a)... [Pg.295]

Recently, a new approach called artificial neural networks (ANNs) has been assisting engineers and scientists in their assessment of fuzzy information. Polymer scientists often face situations where the rules governing a particular system are unknown or difficult to use. It also frequently becomes an arduous task to develop functional forms/empirical equations to describe a phenomenon. Most of these complexities can be overcome with an ANN approach because of its ability to build an internal model based solely on exposure in a training environment. The fault tolerance of ANNs has been found to be very advantageous in physical property predictions of polymers. This chapter presents a few such cases where the authors have successfully implemented an ANN-based approach for the purpose of empirical modeling. These are by no means exhaustive. [Pg.1]

Literature in the area of neural networks has been expanding at an enormous rate with the development of new and efficient algorithms. Neural networks have been shown to have enormous processing capability, and the authors have implemented many hybrid approaches based on this technique. The authors have implemented an ANN-based approach in several areas of polymer science, and the overall results obtained have been very encouraging. The case studies and algorithms presented in this chapter were very simple to implement. With the current rate at which new approaches in neural networks appear, readers may find other paradigms that provide new opportunities in their area of interest. [Pg.31]

Kolmogorov's Theorem (Reformulated by Hecht-Nielsen): Any real-valued continuous function f defined on an N-dimensional cube can be implemented by a three-layered neural network consisting of 2N + 1 neurons in the hidden layer, with transfer functions from the input to the hidden layer and φ from all of... [Pg.549]
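A minimal sketch of the network shape the theorem refers to, with N inputs, exactly 2N + 1 hidden neurons, and one output; the tanh units and random weights here are stand-ins, since the theorem only asserts that suitable transfer functions exist.

```python
# Forward pass of an N -> (2N + 1) -> 1 network, the architecture named in
# the Hecht-Nielsen reformulation of Kolmogorov's theorem.
import numpy as np

def three_layer_net(x, W_hidden, b_hidden, w_out, b_out):
    """Single real-valued output from one hidden layer of 2N + 1 neurons."""
    hidden = np.tanh(W_hidden @ x + b_hidden)     # 2N + 1 hidden activations
    return float(w_out @ hidden + b_out)

N = 3                                             # dimension of the input cube
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2 * N + 1, N))        # exactly 2N + 1 hidden units
b_hidden = rng.normal(size=2 * N + 1)
w_out = rng.normal(size=2 * N + 1)
b_out = 0.0

x = rng.uniform(0.0, 1.0, size=N)                 # a point in the unit cube
print(three_layer_net(x, W_hidden, b_hidden, w_out, b_out))
```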

Other approaches (Hamming networks, pattern recognition, wavelets, and neural network learning systems) are sometimes discussed but have not been commercially implemented. [Pg.498]

Use of multivariate approaches based on classification modelling with cluster analysis, factor analysis and the SIMCA technique [98,99], and on the Kohonen artificial neural network [100]. All these methods, though rarely implemented, lead to very good results not achievable with classical strategies (comparisons, amino acid ratios, flow charts); moreover, it is possible to know the confidence level of the classification carried out. [Pg.251]
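A minimal sketch, on synthetic data, of the Kohonen-map approach cited above [100]; the grid size, training schedule, and the crude node-labelling step are illustrative choices, not those of the cited work.

```python
# Kohonen self-organising map: prototype vectors on a small grid are pulled
# toward the samples, and each node is then labelled by its nearest training
# sample so that new samples could be classified by their best-matching node.
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic classes in a 4-dimensional descriptor space (placeholders).
X = np.vstack([rng.normal(0.0, 0.3, (40, 4)), rng.normal(1.0, 0.3, (40, 4))])
labels = np.array([0] * 40 + [1] * 40)

grid = rng.normal(0.5, 0.5, size=(3, 3, 4))        # 3x3 map, 4-D prototypes
coords = np.array([[i, j] for i in range(3) for j in range(3)])

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                    # decaying learning rate
    sigma = 1.5 * (1 - epoch / 50) + 0.3           # shrinking neighbourhood
    for x in rng.permutation(X):
        d = np.linalg.norm(grid.reshape(-1, 4) - x, axis=1)
        winner = coords[np.argmin(d)]              # best-matching unit
        # Neighbourhood-weighted update of every prototype on the grid.
        h = np.exp(-np.sum((coords - winner) ** 2, axis=1) / (2 * sigma**2))
        grid += (lr * h).reshape(3, 3, 1) * (x - grid)

# Label each node by the class of its nearest training sample (crude step).
node_labels = labels[np.argmin(
    np.linalg.norm(X[:, None, :] - grid.reshape(1, -1, 4), axis=2), axis=0)]
print(node_labels.reshape(3, 3))
```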

Compared with the artificial neural network (ANN) approach used in previous work to predict CN [12], the linear regression model obtained by QSAR is as good or better and easier to implement. The predicted CN values, some of which are tabulated in Table 1, will be employed below to evaluate the different catalytic strategies for optimizing the fuel. [Pg.34]
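A minimal sketch, with made-up descriptors and coefficients, of the kind of linear QSAR regression referred to above: cetane number (CN) is fitted by ordinary least squares, which is considerably simpler to implement than training an ANN for the same task.

```python
# Linear QSAR-style regression of CN on a few molecular descriptors.
import numpy as np

rng = np.random.default_rng(7)
n_molecules, n_descriptors = 30, 4
X = rng.normal(size=(n_molecules, n_descriptors))        # placeholder descriptors
true_coeffs = np.array([8.0, -3.0, 5.0, 1.5])            # hypothetical values
cn = 45.0 + X @ true_coeffs + rng.normal(0, 2.0, n_molecules)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_molecules), X])
coeffs, *_ = np.linalg.lstsq(A, cn, rcond=None)
cn_pred = A @ coeffs

rmse = np.sqrt(np.mean((cn_pred - cn) ** 2))
print("fitted coefficients:", np.round(coeffs, 2))
print(f"training RMSE: {rmse:.2f} CN units")
```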

Models of the form y = f(x) or y = f(x1, x2, ..., xm) can be linear or nonlinear; they can be formulated as a relatively simple equation or can be implemented as a less evident algorithmic structure, for instance in artificial neural networks (ANN), tree-based methods (CART), local estimations of y by radial basis functions (RBF), k-NN-like methods, or splines. This book focuses on linear models of the form... [Pg.118]
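For concreteness, the standard multiple linear model is shown below; the excerpt above is truncated, so this is the conventional form rather than a quotation of the book's own equation.

```latex
% Conventional multiple linear model for m variables: b_0 is the intercept,
% b_1, ..., b_m are the regression coefficients, and e is the residual error.
\[
  y = b_0 + b_1 x_1 + b_2 x_2 + \dots + b_m x_m + e
\]
```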

A. Hjelmfelt, E. D. Weinberger, and J. Ross, Chemical implementation of neural networks and Turing machines, Proc. Natl. Acad. Sci. USA, 88, 10983-10987 (1991). [Pg.143]

