
Artificial neural networks defined

Nonlinear methods can also be applied to represent the high-dimensional variable space in a space of lower dimension (possibly a two-dimensional plane); in general, such a data transformation is called a mapping. Widely used in chemometrics are Kohonen maps (Section 3.8.3) as well as latent variables based on artificial neural networks (Section 4.8.3.4). These methods may be necessary if linear methods fail; however, they are more delicate to use properly and are less strictly defined than linear methods. [Pg.67]
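
As an illustration of such a nonlinear mapping, the following is a minimal Kohonen-map (self-organizing map) sketch in Python/NumPy; the toy data, grid size, and learning schedules are illustrative assumptions, not taken from the cited text.

```python
# Minimal sketch of a Kohonen self-organizing map that projects
# high-dimensional data onto a 2-D plane of map units (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # 200 samples, 8 variables (toy data)
grid_h, grid_w = 10, 10                # 2-D map of 10 x 10 units
W = rng.normal(size=(grid_h, grid_w, X.shape[1]))  # one weight vector per unit

# (row, col) coordinates of every map unit, used for the neighbourhood function
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_epochs = 20
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)            # decaying learning rate
    sigma = 3.0 * (1 - epoch / n_epochs) + 0.5   # shrinking neighbourhood radius
    for x in rng.permutation(X):
        # Best-matching unit: the unit whose weight vector is closest to x
        dists = np.linalg.norm(W - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighbourhood around the BMU on the 2-D grid
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        # Move the BMU and its neighbours towards the presented sample
        W += lr * h[..., None] * (x - W)

# Each sample is "mapped" to the 2-D coordinates of its best-matching unit
mapped = np.array([np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=-1)),
                                    (grid_h, grid_w)) for x in X])
print(mapped[:5])
```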

Artificial neural networks (ANNs) emulate some characteristics of the human brain, such as the ability to derive conclusions from fuzzy input data, the capacity to memorise patterns and a high potential to relate facts (samples). If we examine these human (animal) abilities carefully, they share a common basis: they cannot be expressed through a classical, well-defined algorithm; rather, they rest on a common characteristic, experience. Humans solve situations according to their accumulated experience rather than by a conscious and strict reasoning procedure. [Pg.247]

In the early days of catalyst screening, speed was the only thing that mattered. This meant collecting as much information as possible on a given catalyst under defined process parameters. This approach produces a large number of unrelated single data points with a low information content. As soon as correlations between these data can be found, the information density increases. This is the case when reaction kinetics are derived from single data points, or when a supervised artificial neural network has learned to predict relations between data points. [Pg.411]

The field of artificial neural networks is new and rapidly growing and, as such, is susceptible to problems with naming conventions. In this book, a perceptron is defined as a two-layer network of simple artificial neurons of the type described in Chapter 2. The term perceptron is sometimes used in the literature to refer to the artificial neurons themselves. Perceptrons have been around for decades (McCulloch & Pitts, 1943) and were the basis of much theoretical and practical work, especially in the 1960s. Rosenblatt coined the term perceptron (Rosenblatt, 1958). Unfortunately, little work was done with perceptrons for quite some time after it was realized that they could be applied only to a restricted range of problems, namely those that are linearly separable (Minsky & Papert, 1969). [Pg.29]
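
To illustrate the linear-separability restriction mentioned above, here is a minimal sketch of the classic perceptron learning rule; the logic-gate data and training settings are illustrative assumptions, not taken from the book.

```python
# A single artificial neuron trained with the perceptron rule:
# it converges on the linearly separable AND problem but cannot solve XOR.
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Hard-threshold neuron; returns learned weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Perceptron rule: adjust weights only when the prediction is wrong
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

for name, y in (("AND", y_and), ("XOR", y_xor)):
    w, b = train_perceptron(X, y)
    preds = (X @ w + b > 0).astype(int)
    print(name, "learned correctly:", np.array_equal(preds, y))
# Expected: AND True, XOR False
```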

In most cases, the MFTA models are built using the Partial Least Squares Regression (PLSR) technique, which is suitable for stable modelling based on excessive and/or correlated descriptors (under-defined data sets). However, the MFTA approach is not limited to PLSR models and can successfully employ other statistical learning techniques, such as Artificial Neural Networks (ANNs), which support the detection of nonlinear structure-activity relationships. ... [Pg.159]
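
As a hedged illustration of PLSR applied to an excess of correlated descriptors, the following sketch uses scikit-learn's PLSRegression on synthetic data; the descriptor counts and latent structure are assumptions, not the MFTA data.

```python
# PLS regression on a data set with more (correlated) descriptors than samples:
# the regression is carried out on a few latent variables, which keeps it stable.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_samples, n_descriptors = 30, 100             # more descriptors than compounds (toy setting)
latent = rng.normal(size=(n_samples, 3))       # a few underlying latent factors
X = latent @ rng.normal(size=(3, n_descriptors)) \
    + 0.05 * rng.normal(size=(n_samples, n_descriptors))
y = latent @ np.array([1.5, -2.0, 0.7]) + 0.1 * rng.normal(size=n_samples)

pls = PLSRegression(n_components=3)            # number of latent variables
pls.fit(X, y)
print("R^2 on training data:", pls.score(X, y))
```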

An inexperienced user, or sometimes even an avid practitioner of QSAR, could easily be confused by the multitude of methodologies and naming conventions used in QSAR studies. Two-dimensional (2D) and three-dimensional (3D) QSAR, variable selection and artificial neural network methods, comparative molecular field analysis (CoMFA), and binary QSAR are examples of terms that may appear to describe totally independent approaches that cannot even be compared to each other. In fact, any QSAR method can be generally defined as the application of mathematical and statistical methods to the problem of finding empirical relationships (QSAR models) of the form P_i = k(D_1, D_2, ..., D_n), where... [Pg.51]

Optimization can be simplified by employing the predictive capabilities of an artificial neural network (ANN). This multivariate approach has been shown to require a minimal number of experiments for the construction of an accurate experimental response surface (5, 6). The appropriate model created from an experimental design should effectively relate the experimental parameters to the output values, which can then be used to create an ANN with a strong predictive capacity for any conditions defined within the experimental space (4). [Pg.170]
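
A minimal sketch of this idea, assuming a toy two-factor design and a synthetic response (not the cited study's system): a small ANN regressor is fitted to the designed experiments and then queried at unmeasured conditions inside the experimental space.

```python
# Fit an ANN to a small designed set of experiments and use it as a
# response-surface predictor within the experimental space (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# 3-level full factorial design in two coded experimental parameters
levels = np.linspace(-1.0, 1.0, 3)
design = np.array([[a, b] for a in levels for b in levels])   # 9 experiments

# "Measured" response for the toy example: a smooth surface plus noise
rng = np.random.default_rng(2)
response = 5.0 + 2.0 * design[:, 0] - 1.5 * design[:, 1] ** 2 \
           + 0.05 * rng.normal(size=len(design))

scaler = StandardScaler().fit(design)
ann = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(scaler.transform(design), response)

# Predict the response at conditions inside the experimental space but not measured
new_conditions = np.array([[0.25, -0.5]])
print("predicted response:", ann.predict(scaler.transform(new_conditions)))
```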

Using artificial neural networks (ANNs), the reaction system, including the intrinsic reaction kinetics as well as internal mass-transfer resistances, is treated as a black box and only the input-output signals are analysed. With this approach, the conversion rate of the i-th reactant into the j-th species can be expressed in a general form as a complex function, a mathematical superposition of all the functional dependencies mentioned above. This function also includes a contribution from the internal diffusion resistances. Thus each of the rate equations of Eq. 5 can be described by the following function of the variables that uniquely define the state of the system ... [Pg.382]

In artificial intelligence, neural network models are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function f: X → Y, or a distribution over X or over both X and Y, but sometimes the models are also intimately associated with a particular learning algorithm or learning rule. A common use of the phrase ANN model really means the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity). [Pg.914]

Artificial neural networks are tools that allow meaning to be extracted from very large quantities of data. Neural networks (NNs) are organized in the form of layers, within which there are one or more processing elements called neurons. The first layer is the input layer, and the number of neurons in this layer is equal to the number of input parameters. The last layer is the output layer, and the number of neurons in this layer is equal to the number of output parameters to be predicted. The layers between the input and output layers are the hidden layers, consisting of a number of neurons to be defined when configuring the NN. Neurons in each layer receive... [Pg.358]
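
The layer structure described above can be sketched as follows; the layer sizes, random weights, and activation choice are illustrative assumptions (and varying the weights or layer sizes yields different members of the class of functions mentioned in the previous excerpt).

```python
# Sketch of the layered structure: one input neuron per input parameter,
# one output neuron per predicted quantity, and configurable hidden layers.
# Weights are random here; a real network would be trained on data.
import numpy as np

def build_network(n_inputs, hidden_sizes, n_outputs, rng):
    """Return a list of (weight matrix, bias vector) pairs, one per connection layer."""
    sizes = [n_inputs, *hidden_sizes, n_outputs]
    return [(rng.normal(scale=0.5, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Propagate an input vector through the net: sigmoid hidden layers, linear output."""
    a = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(layers):
        z = a @ W + b
        a = z if i == len(layers) - 1 else 1.0 / (1.0 + np.exp(-z))
    return a

rng = np.random.default_rng(3)
net = build_network(n_inputs=4, hidden_sizes=(6, 6), n_outputs=2, rng=rng)
print(forward(net, [0.1, 0.7, -0.3, 1.2]))   # two predicted output values
```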

Artificial neural networks (ANNs) are computer-based systems that can learn from previously classified known examples and can perform generalized recognition - that is, identification - of previously unseen patterns. Multilayer perceptrons (MLPs) are supervised neural networks and, as such, can be trained to model the mapping of input data (e.g. in this study, morphological character states of individual specimens) to known, previously defined classes. [Pg.208]
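
A hedged sketch of this supervised workflow using scikit-learn's MLPClassifier; the "morphological" features below are synthetic stand-ins for the study's character states.

```python
# Train an MLP on previously classified examples, then identify unseen patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Two classes of "specimens", each described by 5 character-state features (toy data)
class_a = rng.normal(loc=0.0, scale=1.0, size=(60, 5))
class_b = rng.normal(loc=2.0, scale=1.0, size=(60, 5))
X = np.vstack([class_a, class_b])
y = np.array([0] * 60 + [1] * 60)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)                       # learn from classified known examples
print("accuracy on unseen patterns:", mlp.score(X_test, y_test))
```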

The constants a, b, and c define the sigmoid curve, which has the asymptotic values zero for x → −∞ and a for x → +∞ (a and c are expected to have positive values). The position and steepness of the curve between the two asymptotic values depend on the parameters b and c. A logistic function is the result of a logistic regression analysis, where b + cx is replaced by a linear combination of all x-variables. With values 1, 0, and 1 for a, b, and c, respectively, the logistic function is often used as a transfer function for back-propagation networks, a type of artificial neural network. See Neural Networks in Chemistry. [Pg.1519]
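
A small sketch of a logistic function consistent with this description; the parameterization y = a / (1 + exp(-(b + c·x))) is an assumption chosen to match the stated asymptotes, not necessarily the source's exact formula.

```python
# General sigmoid with asymptotes 0 (x -> -inf) and a (x -> +inf) for a, c > 0.
# With a = 1, b = 0, c = 1 it reduces to 1 / (1 + exp(-x)), the standard
# transfer function used in back-propagation networks.
import numpy as np

def logistic(x, a=1.0, b=0.0, c=1.0):
    """a sets the upper asymptote, b the position, c the steepness."""
    return a / (1.0 + np.exp(-(b + c * x)))

x = np.linspace(-6, 6, 5)
print(logistic(x))                        # standard transfer-function values
print(logistic(x, a=2.0, b=1.0, c=0.5))   # shifted, flatter curve with upper asymptote 2
```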


