Big Chemical Encyclopedia


Connectionist models

In the human brain, complex behavior arises from the combined efforts of many neurons acting in concert; this is mirrored in the structure of an ANN, in which many simple software processing units work cooperatively. It is not just these artificial units that are fundamental to the operation of ANNs; so, too, are the connections between them. Consequently, artificial neural networks are often referred to as connectionist models. [Pg.13]
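The idea of simple units cooperating through weighted connections can be sketched in a few lines of Python. The sigmoid activation, weights, and bias values below are illustrative assumptions, not details from the text:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial unit: a weighted sum of inputs passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two input units feed one output unit; the behavior of the pair is carried
# by the connection weights, not by the units themselves.
out = neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-0.5)
print(round(out, 3))  # → 0.818
```

The unit itself is trivial; it is the pattern of weights on the connections that encodes what the network does, which is why these systems are called connectionist.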

An ANN is a systematic procedure for data processing inspired by the function of the nervous system in animals. It tries to reproduce the brain's logical operation by using a collection of neuron-like entities to process input data. Furthermore, it is an example of a connectionist model, in which many small logical units are connected in a network [52]. Let us consider the proposed case, a multidetermination application employing an array of ISEs as input data. In a rather complex system,... [Pg.726]

Fahlman, S. E. (1988). Fast learning variations on back-propagation: An empirical study. In Proceedings of the 1988 Connectionist Models Summer School (ed. Hinton, G. E., Sejnowski, T. J., & Touretzky, D. S.), pp. 38-51. Morgan Kaufmann, San Mateo, CA. [Pg.100]

Perrone, M. P. (1994). General averaging results for convex optimization. In Proceedings of the 1993 Connectionist Models Summer School (ed. Mozer, M. C., et al.), pp. 364-371. Lawrence Erlbaum, Hillsdale, NJ. [Pg.101]

Despite this last observation, two main avenues of evolution remain for this type of simulation and modelling research. The first consists in enlarging the library with new and newly coded models for unit operations or apparatuses (such as the unit processes mentioned above: multiphase reactors, membrane processes, etc.); the second lies in the sophistication of the models developed for the apparatus that characterizes each unit operation. With respect to this second avenue, we can develop a hierarchy divided into three levels. The first level corresponds to connectionist models of equilibrium (frequently used in the past). The second level involves models of transport phenomena with heat and mass transfer kinetics given by approximate solutions. Finally, at the third level, the real transport phenomena (flow, heat and mass transport) are correctly described. In... [Pg.99]

At the beginning of this chapter, we introduced statistical models based on the general principle of the Taylor function decomposition, which can be recognized as non-parametric kinetic models. Indeed, this approximation is acceptable because the parameters of statistical models generally have no direct connection with the physical reality of the process. Consequently, statistical models must be included in the general class of connectionist models (models that directly connect the dependent and independent process variables based only on their numerical values). In this section we discuss the methodologies needed to obtain the same type of model using artificial neural networks (ANNs). This type of connectionist model has been inspired by the structure and function of animals' natural neural networks. [Pg.451]
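A connectionist model in this broad sense relates dependent and independent process variables purely through their numerical values, with no physically meaningful parameters. As a minimal sketch, a least-squares line fitted to invented process measurements plays the role of such a purely numerical mapping; the data below are hypothetical:

```python
# A purely numerical input-output mapping: a least-squares line fitted to
# hypothetical process measurements. Its parameters (slope, intercept) carry
# no physical meaning, unlike those of a mechanistic kinetic model.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]  # invented measurements, for illustration only

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predict(x):
    """Predict the dependent variable from the independent one numerically."""
    return intercept + slope * x

print(round(predict(1.5), 2))  # → 4.0
```

An ANN generalizes this idea: it is still a mapping fitted only to numbers, but with enough flexibility to capture nonlinear relationships.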

The models of pattern recognition, commonly called connectionist models or neural nets, are networks of small, highly interrelated pieces of information that are processed simultaneously. That is, the collection of features that identifies an object, such as a letter or a word, is processed at the same time by the model rather than one by one, and it is the presence or absence of the set of features - together with the ways in which they are linked - that determines recognition. An interesting characteristic of these models is that each learner can develop a unique recognition system that depends on different subsets of features of a pattern. [Pg.174]

Chapters 13, 14, and 15 describe particular models developed from the schema theory and problem-solving studies discussed in previous chapters. Chapter 13 contains details of a full computer simulation of the first experiment of Chapter 7. Chapter 14 describes a back-propagation connectionist model for the same conditions. And, finally, Chapter 15 contains the full hybrid model of schema implementation. [Pg.315]

Relatively few attempts have been made to model schema acquisition and use. To be sure, the interpretation of a number of models depends upon the concept of a schema, but the structure of the schema itself is not part of the model. Consequently, to look at existing models of schemas we must broaden our view so that it encompasses not only explicit schema models but also models that are similar to those presented in the next few chapters but that are not focused on schemas. These latter models are models of learning, performance, and recognition. They tend to be either production system models or connectionist models. [Pg.317]

The terms neural networks, connectionist models, and PDP models are essentially synonymous in the research literature. There appears to be a slight overall preference for neural networks to describe such models, probably because the term implies some fidelity to neurobiology. This may be a misleading implication, however, because many neural nets are not in the least similar to what is currently understood of the human brain. It may be true that neural networks were originally inspired by the nervous system, but this connection has been lost as the models have become widely used in many different applications. [Pg.326]

The connectionist models of greatest interest here are those that are similar to the one in Figure 12.3, namely, models with at least three layers: an input layer, one or more layers of hidden units, and an output layer. As I will describe in the following two chapters, this middle layer is extremely important for modeling schemas because it is here that schema knowledge resides. [Pg.330]

When we consider the function of most supervised models of learning, it is evident that their common objective is to recognize patterns. They do so by processing in parallel the many different features that are available as input. Unlike production system models, connectionist models are influenced by the entire collection of features, not by the presence or absence of a particular one. [Pg.330]

Connectionist models are attractive for modeling schemas for a number of reasons. First, the underlying structure of a schema is hypothesized to be a network, and it seems only appropriate to use a network model to simulate it. More specifically, identification knowledge in a schema involves pattern recognition, and the main attraction of neural nets lies in their ability to recognize patterns. Additionally, aspects of elaboration knowledge also involve parallel processing and are well modeled by connectionist models. [Pg.330]

Only a few production system models have attained the status of complete models of cognition, for example, ACT and SOAR. No commensurate connectionist models of cognition have yet been developed, although Rumelhart and McClelland clearly have such aspirations in their PDP work. Most connectionist models are still relatively narrow in scope. For the most part, they lag behind the production system models. The lag can be explained in terms of time. Production systems have been a major part of artificial intelligence research for many years and have been the focus of much research in cognitive science. Connectionist models are just now becoming accepted. [Pg.333]

A particular strength of connectionist models is the flexibility allowed for inputs. Because the models depend on the pattern of weights over a great many units, the presence or absence of any single unit is usually not by itself the deciding factor. Any input typically is characterized by a great many units. [Pg.335]

Schneider and Oliver's approach exhibits several innovations. One is the decomposition of complex tasks, which facilitates sequential controlled processing. The end result is performance in a reasonable time frame: most connectionist models take unreasonably long to learn simple patterns, whereas the Schneider and Oliver model works very quickly. A second innovation is the generation and use of rules that operate on the data networks. Thus, learning occurs in two ways in this model: by back propagation following multiple presentations of stimuli, and by direct instruction from the controller network. [Pg.338]

The simulation is implemented as a feedforward connectionist model having four layers of units: an input layer, two layers representing student knowledge, and an output layer. Figure 13.4 illustrates the model. [Pg.344]

Figure 13.4. The connectionist model used in the simulation. (Reprinted from Marshall, 1993b, with permission from Kluwer Academic Publishers.)
The performance model is extraordinarily simple. It qualifies as a connectionist model because it depends critically on the ways in which activation is passed from the input units through the knowledge nodes and on to the output units. No learning occurs in this model; its function is to mimic the performance of students. It corresponds to the final form of a connectionist model that has stabilized its learning and is no longer modifying its connections. The... [Pg.360]

As in most connectionist models, each of the input units connects to each of the units in the layer immediately above it (i.e., the hidden unit layer), and each connection has its own unique weight. When an input vector is presented to the model, activation spreads from the input units to the hidden units. The amount of activation spread is determined in part by the strength (i.e., weight) of the connection. The initial values of the input-to-hidden weights are randomly determined. [Pg.367]
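The spreading of activation from input units to hidden units through randomly initialized, individually weighted connections can be sketched as follows. The layer sizes, random seed, and sigmoid squashing function are illustrative assumptions, not details from the model being described:

```python
import math
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

n_input, n_hidden = 4, 3

# Each input unit connects to every hidden unit, and each connection has its
# own weight; the initial values are randomly determined, as described.
weights = [[random.uniform(-1, 1) for _ in range(n_hidden)]
           for _ in range(n_input)]

def spread(input_vector):
    """Activation reaching each hidden unit: weighted sum, sigmoid-squashed."""
    hidden = []
    for j in range(n_hidden):
        net = sum(input_vector[i] * weights[i][j] for i in range(n_input))
        hidden.append(1.0 / (1.0 + math.exp(-net)))
    return hidden

activations = spread([1, 0, 1, 0])
print([round(a, 2) for a in activations])
```

The amount of activation each hidden unit receives is governed jointly by which input units are on and by the strengths of their connections.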

The connectionist model of Figure 15.1 is the model described as the learning model in chapter 14. It is a three-layer feedforward network with 27 input nodes, 14 hidden units, and 5 output... [Pg.379]
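The 27-14-5 feedforward architecture described here can be sketched as a plain forward pass. The random weight initialization and sigmoid units below are assumptions for illustration; only the layer sizes come from the text:

```python
import math
import random

random.seed(1)
sizes = [27, 14, 5]  # input, hidden, and output layer sizes from the text

# Fully connected weights between successive layers.
weights = [
    [[random.uniform(-0.5, 0.5) for _ in range(sizes[k + 1])]
     for _ in range(sizes[k])]
    for k in range(len(sizes) - 1)
]

def forward(vec):
    """Propagate an input vector through each layer in turn."""
    for w in weights:
        vec = [
            1.0 / (1.0 + math.exp(-sum(vec[i] * w[i][j] for i in range(len(vec)))))
            for j in range(len(w[0]))
        ]
    return vec

out = forward([1.0] * 27)  # e.g. a binary input vector of length 27
print(len(out))            # → 5
```

Each of the 27 input nodes feeds all 14 hidden units, which in turn feed the 5 output units; the output is one activation per output unit.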

The model takes its input through the connectionist component shown in the lower portion of Figure 15.1. The input consists of a single binary vector representing all information in the multistep problem (as shown in Table 15.1). Because the vector contains all the information from all the situations represented in the problem, pointers to more than one situation typically occur. The connectionist network identifies the most salient situation and passes that information to the elaboration production system. Using the output from the connectionist network together with the clause information, this part of the model determines the best configuration of problem data to represent the selected situation. The production system selects a subset of clauses to represent each configuration. [Pg.383]
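The step in which the network identifies the most salient situation can be sketched as an argmax over output activations. The situation names and activation values below are hypothetical, chosen only to illustrate the selection:

```python
# Hypothetical output activations, one per candidate situation.
activations = {
    "change": 0.12,
    "group": 0.71,
    "compare": 0.33,
    "restate": 0.08,
    "vary": 0.25,
}

# The most salient situation is the one with the highest activation.
most_salient = max(activations, key=activations.get)
print(most_salient)  # → group
```

Only the winning situation is passed on to the elaboration production system; the remaining activations are discarded at this stage.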

Combo 1, Combo 2, Combo 3: the possible configurations. The marked combination is the one that yields the highest activation value (found via the production system and evaluated with the connectionist model). [Pg.385]

Dyer, M. (1991). Symbolic neuroengineering for natural language processing: A multilevel research approach. In J. A. Barnden & J. B. Pollack (Eds.), Advances in connectionist and neural computation theory: High-level connectionist models (Vol. 1, pp. 32-86). Norwood, NJ: Ablex Publishing. [Pg.408]





