Big Chemical Encyclopedia


Learning networks

Barron, A. R., and Barron, R. L. (1988). Statistical learning networks: A unifying view. In Symposium on the Interface: Statistics and Computing Science, p. 192. Reston, VA. [Pg.204]

Stonham, T. J., Aleksander, I., Camp, M., Pike, W. T., Shaw, M. A. (1975). Classification of mass spectra using adaptive digital learning networks. Anal Chem 47:1817... [Pg.287]

Although a TLU has features in common with a neuron, it is incapable of acting as the building block of a versatile computer-based learning network. The reason is that its output is particularly uninformative. The output... [Pg.18]
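The point about the uninformative output can be seen in a minimal threshold logic unit (TLU) sketch. This is an illustrative implementation, not code from the source; all names are hypothetical.

```python
# Minimal threshold logic unit (TLU): weighted sum of inputs compared
# against a fixed threshold. The output is a single bit, so it carries
# no information about *how close* the activation came to the threshold.

def tlu(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(tlu([1, 0, 1], [0.5, 0.5, 0.5], 1.0))  # fires: 0.5 + 0.5 >= 1.0
```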

The error between the actual mismatch (obtained from the simulation results) and that predicted by the network is used as the error signal to train the network (see Figure 12.3). This is a classical supervised learning problem, where the system provides target values directly to the output co-ordinate system of the learning network. [Pg.369]
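The supervised scheme described above can be sketched as a delta-rule update: the simulator supplies the target, and the difference between target and prediction is the error signal that adjusts the network. This is a generic illustration under assumed names, not the system of Figure 12.3.

```python
# One supervised training step: the error signal (target - prediction)
# drives a delta-rule weight update. All names are illustrative.

def train_step(weights, inputs, target, lr=0.1):
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction            # error signal from the target values
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, error

# Repeated presentation of one input/target pair drives the error to zero.
weights = [0.0, 0.0]
for _ in range(200):
    weights, err = train_step(weights, [1.0, 2.0], 5.0)
```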

Over recent years, an increasing amount of project evaluation has been conducted. The founding of the Active Learning Network for Accountability and Performance (ALNAP) in 1997 provided a central repository for project evaluations and reports. ALNAP produces an annual report based on the evaluations, and this information can be used to draw lessons and improve the quality of care and disaster response. [Pg.579]

Serne and Muller (1987) describe attempts to find statistical empirical relations between experimental variables and the measured sorption ratios (Rds). Mucciardi and Orr (1977) and Mucciardi (1978) used linear (polynomial regression of first-order independent variables) and nonlinear (multinomial quadratic functions of paired independent variables, termed the Adaptive Learning Network) techniques to examine the effects of several variables on sorption coefficients. The dependent variables considered included cation-exchange capacity (CEC) and surface area (SA) of the solid substrate, solution variables (Na, Ca, Cl, HCO3), time, pH, and Eh. Techniques such as these allow modelers to construct a narrow probability density function for Kds. [Pg.4764]

The organisation of knowledge supports learning and the retention of what is learned. Various theories have been propounded. Gagné's structuralist theory, which is based on Bloom's Taxonomy of Educational Objectives, has played a major role. Science, A Process Approach (SAPA), developed by the American Association for the Advancement of Science, used this theory to construct learning networks of hierarchies. [Pg.169]

Biological methodology in our everyday IT promises a kind of uniformity and interoperability that will encompass both medical and nonmedical computing, and mediation with implanted chips. This biology in IT is a methodology apart from neural nets, which are smart learning networks based on brain principles, represented in software. The changes of interest here are at the hardware level. It is probable that all traditional computer system architecture, medical and otherwise, will soon take the first steps in a transition to a deeper... [Pg.394]

In the following, we consider models of unsupervised learning networks. [Pg.318]

Hiltz, S., Turoff, M. (2002). What makes learning networks effective? Communications of the... [Pg.202]

Astatke, Y., Mack, P. L. (1998b). Are our students ready for asynchronous learning networks (ALN)? ASEE Middle Atlantic Section Regional Conference, Howard University, Washington, D.C., November 6-7. [Pg.257]

Cie, C., Joseph, E., 2010. New dimensions: sustainability in digital design and print for textiles. In: Ceschin, E., Vezzoli, C., Zhang, J. (Eds.), Challenges and Opportunities for Design Research, Education and Practice in the XXI Century, vol. 1. The Learning Network on Sustainability (LeNS), Bangalore, India, pp. 702-708. [Pg.176]

This classification procedure was originally developed by Bledsoe and Browning [372] and first applied to chemical problems by Stonham, Aleksander, et al. [284]. The learning network has some similarities to the perceptron, especially in the random combination of features [66, 367]. [Pg.74]

A "digital learning network" consists of a group of memory elements. For example, if patterns with 100 features are to be classified and n = 4, then a set of 25 16-bit-memory elements are necessary (with no feature being sampled twice). All elements are initialized to 0 and connected randomly to the pattern components. [Pg.74]

To train a digital learning network for a particular class, only patterns belonging to that class are presented to the network. Each n-tuple of features selects a storage location. If the selected location already contains a 1, nothing happens; otherwise a 1 is written (an alternative training method is described below). [Pg.74]
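The structure and training rule described above can be sketched as follows, assuming the 100-feature, n = 4 configuration from the example (25 memory elements of 2^4 = 16 one-bit locations each). This is an illustrative reconstruction; the function and variable names are not from the source.

```python
import random

N_FEATURES, N = 100, 4

random.seed(0)
# Random connections: each of the 100 features is sampled exactly once,
# grouped into 25 n-tuples of 4 features each.
connections = random.sample(range(N_FEATURES), N_FEATURES)
tuples = [connections[i:i + N] for i in range(0, N_FEATURES, N)]
# One memory element per n-tuple, 2**N one-bit locations, initialized to 0.
memory = [[0] * (2 ** N) for _ in tuples]

def address(pattern, tup):
    """The n-tuple of binary feature values selects one storage location."""
    return sum(pattern[f] << k for k, f in enumerate(tup))

def train(pattern):
    """Write a 1 at every selected location (idempotent if already 1)."""
    for mem, tup in zip(memory, tuples):
        mem[address(pattern, tup)] = 1

def score(pattern):
    """Classification response: number of elements holding a stored 1."""
    return sum(mem[address(pattern, tup)] for mem, tup in zip(memory, tuples))
```

A trained pattern addresses only locations containing 1s, so it scores the maximum of 25; an unknown pattern scores according to how many of its n-tuples were also seen during training.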

The digital learning network may be implemented in hardware or simulated with a computer program. Several characteristics of the method have been investigated by Stonham et al. [282-285]. [Pg.75]

The method of binary template matching is equivalent to a learning network with n = 1. A binary template of a class is the superposition (logical "AND" function) of all binary encoded patterns of that class. [Pg.75]
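A binary template built with the logical AND, as described above, retains only the features present in every training pattern of the class. The sketch below is illustrative; the helper names are not from the source.

```python
# Binary template matching: AND-combine all binary training patterns of a
# class, keeping only the features common to every pattern.

def make_template(patterns):
    template = patterns[0][:]
    for p in patterns[1:]:
        template = [t & x for t, x in zip(template, p)]
    return template

def match(template, pattern):
    """Count template features also present in the unknown pattern."""
    return sum(t & x for t, x in zip(template, pattern))

template = make_template([[1, 1, 0, 1], [1, 1, 1, 0]])
print(template)  # [1, 1, 0, 0] -- only features shared by both patterns
```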

FIGURE 36. Generation of a reduced optimum training set for the adaptive digital learning network [285]. [Pg.76]

The digital learning network is a simple and fast method for the classification of binary encoded patterns. Its main weakness is that classification results deteriorate as the data set grows larger. [Pg.77]

The digital learning network classifier may offer advantages when only a few carefully selected training patterns are available. Most work has been carried out with random connections of features; significant improvements may be expected if physically meaningful connections of features can be found and realized in a classification network. [Pg.77]

Early optimism by Stonham, Aleksander, et al. [282-285] about the usefulness of adaptive digital learning networks could not be confirmed by Wilkins et al. [280]. [Pg.153]

As with the supervised learning categorization networks, a few items need to be discussed for the unsupervised case. Scaling or preprocessing of input data is still important. The number of output PEs is usually arbitrary, unless you have reason to believe your data should fall into a certain number of categories. The issue of whether a network can be trained to the near-100% level is irrelevant, because you do not know what the correct answers are. It is possible, however, to use an unsupervised learning network to classify data whose correct classifications are known; in this case you can again talk about percent correct (see Chapter 10 of Ref. 19). [Pg.68]
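The last point above can be made concrete: an unsupervised network's cluster labels are arbitrary, but when the true classes are known, each cluster can be mapped to its majority class and a percent-correct figure computed. This is a generic sketch, not the procedure of Ref. 19; all names are illustrative.

```python
from collections import Counter

def percent_correct(cluster_ids, true_labels):
    """Map each cluster to its majority true class, then score the mapping."""
    majority = {}
    for c in set(cluster_ids):
        members = [t for ci, t in zip(cluster_ids, true_labels) if ci == c]
        majority[c] = Counter(members).most_common(1)[0][0]
    hits = sum(majority[c] == t for c, t in zip(cluster_ids, true_labels))
    return 100.0 * hits / len(true_labels)

# Clusters 0 and 1 map to classes "A" and "B"; one sample is misclustered.
print(percent_correct([0, 0, 1, 1, 1], ["A", "A", "B", "B", "A"]))  # 80.0
```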

Develop tools to enhance collaborative planning capacity and establish shared learning networks [44]. [Pg.287]

Bessant, J. and G. Tsekouras, 2001. Developing learning networks. AI and Society 15, 82-98. [Pg.400]

Meert, K. (1998) A real-time recurrent learning network structure for data reconciliation. Artificial Intelligence in Engineering, 12 (3), 213-218. [Pg.380]

