Big Chemical Encyclopedia


Random neural network

In a standard back-propagation scheme, updating the weights is done iteratively. The weights for each connection are initially randomized when the neural network undergoes training. Then the error between the target output and the network-predicted output is back-propa-... [Pg.7]
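A minimal sketch of this scheme, assuming a single hidden layer with sigmoid activations and a squared-error loss (the excerpt does not specify the architecture, so all sizes and names below are illustrative):

```python
# One back-propagation update for a tiny single-hidden-layer network.
# Shapes, learning rate, and activations are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 3))   # input -> hidden weights, initially randomized
W2 = rng.normal(scale=0.1, size=(3, 1))   # hidden -> output weights, initially randomized

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, target, lr=0.1):
    """Forward pass, then propagate the output error back through the layers."""
    global W1, W2
    h = sigmoid(x @ W1)                   # hidden activations
    y = sigmoid(h @ W2)                   # network-predicted output
    err = y - target                      # error vs. the target output
    d2 = err * y * (1 - y)                # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)        # back-propagated hidden-layer delta
    W2 -= lr * np.outer(h, d2)            # iterative weight updates
    W1 -= lr * np.outer(x, d1)
    return float((err ** 2).sum())

x, t = np.array([0.2, 0.7, 0.1, 0.9]), np.array([1.0])
for epoch in range(1000):                 # repeat until the error is small enough
    loss = backprop_step(x, t)
```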

In previous chapters, we have examined a variety of generalized CA models, including reversible CA, coupled-map lattices, reaction-diffusion models, random Boolean networks, structurally dynamic CA and lattice gases. This chapter covers an important field that overlaps with CA: neural networks. Beginning with a short historical survey, chapter 10 discusses associative memory and the Hopfield model, stochastic nets, Boltzmann machines, and multi-layered perceptrons. [Pg.507]

Two models of practical interest using quantum chemical parameters were developed by Clark et al. [26, 27]. Both studies were based on 1085 molecules and 36 descriptors calculated with the AM1 method following structure optimization and electron density calculation. An initial set of descriptors was selected with a multiple linear regression model and further optimized by trial-and-error variation. The second study obtained a standard error of 0.56 for the 1085 compounds; it also estimated the reliability of neural network prediction by analysis of the standard deviation of the error over an ensemble of 11 networks trained on different randomly selected subsets of the initial training set [27]. [Pg.385]
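The ensemble-based reliability estimate can be sketched as follows; the data are synthetic stand-ins for the 36 AM1-derived descriptors, and the network settings are assumptions rather than those of Clark et al.:

```python
# Train 11 networks on different random subsets and use the spread of
# their predictions as a per-compound reliability estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(1085, 36))           # stand-in for 36 quantum chemical descriptors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=1085)  # synthetic property

ensemble = []
for seed in range(11):                    # 11 networks, as in the study
    subset = rng.choice(len(X), size=800, replace=False)     # random training subset
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=seed)
    ensemble.append(net.fit(X[subset], y[subset]))

preds = np.stack([net.predict(X) for net in ensemble])
consensus = preds.mean(axis=0)            # ensemble prediction
reliability = preds.std(axis=0)           # large spread -> less reliable prediction
```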

Ladunga, I., Czako, F., Csabai, I., and Geszti, T. (1991). Improving signal peptide prediction accuracy by simulated neural network. Comput. Appl. Biosci. 7, 485-487.
Landolt-Marticorena, C., Williams, K., Deber, C., and Reithmeier, R. (1993). Non-random distribution of amino acids in the transmembrane segments of human type I single span membrane proteins. J. Mol. Biol. 229, 602-608. [Pg.337]

Recently, Jung et al. [42] developed two artificial neural network models to discriminate intestinal barrier-permeable heptapeptides, identified by peroral phage display experiments, from randomly generated heptapeptides. There are two kinds of descriptors: one is a binary code of amino acid types (each position uses 20 bits), and the other, called VHSE, is a property descriptor that characterizes the hydrophobic, steric, and electronic properties of the 20 coded amino acids. Both types of descriptors produced statistically significant models, and the predictive accuracy was about 70%. [Pg.109]
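A sketch of the first descriptor type, the 20-bits-per-position binary code (the VHSE property scales are not reproduced here because their numerical values are not given in the excerpt):

```python
# One-hot encode a heptapeptide: 7 positions x 20 bits = 140 features.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"      # the 20 coded amino acids

def binary_encode(peptide: str) -> list[int]:
    """Encode each residue with 20 bits, exactly one of which is set."""
    assert len(peptide) == 7, "heptapeptides only"
    bits = []
    for residue in peptide:
        vec = [0] * 20
        vec[AMINO_ACIDS.index(residue)] = 1
        bits.extend(vec)
    return bits

features = binary_encode("ACDEFGH")       # 140-element 0/1 vector
```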

Gross GW, Kowalski J (1991) Experimental and theoretical analysis of random nerve cell network dynamics. In: Antognetti P and Milutinovic V (eds) Neural Networks: Concepts, Applications, and Implementations. Englewood Cliffs, New Jersey, Prentice-Hall, p 47... [Pg.160]

Cherkasov, 2005a (79), for descriptors only: artificial neural networks (ANN), 44 (77). Data: random peptides chosen according to two amino acid frequency distributions; Sets A and B contained 933 and 500 peptides, respectively (see text for details, unpublished data). Validation: training and validation within one set, independent testing on the second set (1433). Results: Set A models predicted activity with up to 83% accuracy on Set B; Set B models predicted with up to 43% accuracy on Set A (see text for details); nd. [Pg.146]

A neural network consists of many processing elements joined together. A typical network consists of a sequence of layers with full or random connections between successive layers. A minimum of two layers is required: the input buffer, where data are presented, and the output layer, where the results are held. However, most networks also include intermediate layers, called hidden layers. An example of such an ANN is one used for the indirect determination of the Reid vapor pressure (RVP) and the distillate boiling point (BP) on the basis of 9 operating variables and the past history of their relationships to the variables of interest (Figure 2.56). [Pg.207]
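A sketch of such a layered network, with the 9 operating variables as the input buffer, one hidden layer, and RVP and BP as the two outputs; the hidden-layer size and the synthetic data are illustrative assumptions, not the network of Figure 2.56:

```python
# Fully connected feed-forward network: 9 inputs -> hidden layer -> 2 outputs.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))             # 9 operating variables (synthetic)
Y = np.column_stack([X[:, 0] + X[:, 1],   # stand-in for RVP
                     X[:, 2] - X[:, 3]])  # stand-in for BP

model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=3000, random_state=0)
model.fit(X, Y)                           # full connections between successive layers
rvp_bp = model.predict(X[:1])             # one row -> predicted (RVP, BP) pair
```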

Properties such as thermodynamic values, sequence asymmetry, and polymorphisms that contribute to RNA duplex stability are taken into account by these databases (Pei and Tuschl 2006). In addition, artificial neural networks have been utilized to train algorithms based on the analysis of randomly selected siRNAs (Huesken et al. 2005). These programs siphon significant trends from large sets of RNA sequences whose efficacies are known and validated. Certain base pair (bp) positions have a tendency to possess distinct nucleotides (Figure 9.2). In effective siRNAs, position 1 is preferentially an adenosine (A) or uracil (U), and many strands are enriched with these nucleotides along the first 6 to 7 bps of sequence (Pei and Tuschl 2006). The conserved RISC cleavage site at nucleotide position 10 favors an adenosine, which may be important, while other nucleotides are... [Pg.161]
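The kind of positional analysis described can be sketched as follows; the example sequences are invented, not the validated siRNAs analyzed by Huesken et al.:

```python
# Tally nucleotide frequencies at each position of a set of effective
# siRNA sequences, e.g. to check the A/U preference at position 1.
from collections import Counter

effective_sirnas = ["AUGCUAGCUAGCUAGCUAG",   # made-up 19-mers
                    "UUAGCGAUUCGAUCGAUGC",
                    "AAUCGGCUAGCAUGCUAGC"]

for pos in range(len(effective_sirnas[0])):
    counts = Counter(seq[pos] for seq in effective_sirnas)
    frac_au = (counts["A"] + counts["U"]) / len(effective_sirnas)
    print(f"position {pos + 1}: {dict(counts)}  A/U fraction = {frac_au:.2f}")
```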

The fundamental idea behind training, for all neural network architectures, is this: pick a set of weights (often randomly), apply the inputs to the network, and see how the network performs with this set of weights. If it doesn't perform well, then modify the weights by some algorithm (specific to each architecture) and repeat the procedure. This iterative process is continued until some pre-specified stopping criterion has been reached. [Pg.51]
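A minimal runnable rendering of this generic loop, using a deliberately simple weight-modification rule (random perturbation, keeping changes that reduce the error); a real architecture would substitute its own update algorithm, such as back-propagation:

```python
# Generic training loop: pick weights, evaluate, modify, repeat until done.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
t = X @ np.array([1.5, -2.0, 0.5])        # target outputs

w = rng.normal(size=3)                    # pick a set of weights (randomly)
error = np.mean((X @ w - t) ** 2)         # see how this weight set performs
for step in range(5000):
    candidate = w + rng.normal(scale=0.05, size=3)   # modify the weights
    cand_error = np.mean((X @ candidate - t) ** 2)
    if cand_error < error:                # keep modifications that help
        w, error = candidate, cand_error
    if error < 1e-6:                      # pre-specified stopping criterion
        break
```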

Granjeon and Tarroux (1995) studied the compositional constraints in introns and exons by using a three-layer network, a binary sequence representation, and three output units trained for intron, exon, and counter-example separately. They found that efficient learning required a hidden layer, and demonstrated that a neural network can detect introns if the counter-examples are preferentially random sequences, and can detect exons if the counter-examples are defined using the probabilities of second-order Markov chains computed on junk DNA sequences. [Pg.105]
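The Markov-chain counter-example idea might be sketched like this; the junk-DNA string is a made-up placeholder, and the code illustrates the general technique rather than Granjeon and Tarroux's actual procedure:

```python
# Estimate second-order Markov probabilities from a reference sequence
# and sample counter-example sequences from them.
import random
from collections import Counter, defaultdict

junk_dna = "ATGCGCGTATATATGCGCATATGCGTATATGCATGCGTATAT"  # placeholder data

counts = defaultdict(Counter)             # dinucleotide context -> next-base counts
for i in range(len(junk_dna) - 2):
    counts[junk_dna[i:i + 2]][junk_dna[i + 2]] += 1

def sample_counter_example(length=30):
    seq = list(random.choice(list(counts)))          # random starting dinucleotide
    while len(seq) < length:
        options = counts.get("".join(seq[-2:]))
        if not options:                              # unseen context: pick a new one
            options = counts[random.choice(list(counts))]
        bases, weights = zip(*options.items())
        seq.append(random.choices(bases, weights=weights)[0])
    return "".join(seq)

counter_example = sample_counter_example()
```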

By selectively changing sequences in the E. coli translation initiation region with randomized calliper inputs and observing the corresponding neural network performance, Nair (1997) analyzed the importance of the initiation codon and the Shine-Dalgarno sequence. [Pg.109]
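A sketch of the calliper-randomization idea under stated assumptions: a toy scoring function stands in for the trained network, and the sequence and window size are invented:

```python
# Randomize a sliding window of the input and record how much the model's
# score drops; a large average drop marks an important region.
import random

def score(seq: str) -> float:
    # Toy stand-in for a trained network: rewards a Shine-Dalgarno-like
    # AGGAGG motif and an ATG initiation codon in a fixed window.
    return float(("AGGAGG" in seq) + ("ATG" in seq[10:16]))

def calliper_importance(seq, window=4, trials=50):
    base = score(seq)
    drops = []
    for start in range(len(seq) - window + 1):
        total = 0.0
        for _ in range(trials):           # average over random replacements
            randomized = (seq[:start]
                          + "".join(random.choice("ACGT") for _ in range(window))
                          + seq[start + window:])
            total += base - score(randomized)
        drops.append(total / trials)
    return drops

ribosome_site = "TTAGGAGGTTATGACCGTTA"    # made-up E. coli-like example
importances = calliper_importance(ribosome_site)
```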

Nair, T. M. (1997). Calliper randomization: an artificial neural network based analysis of E. coli ribosome binding sites. J. Biomol. Struct. Dyn. 15, 611-7. [Pg.113]

To serve as a full-scale family classification system, more than 1200 MOTIFIND neural networks were implemented, one for each ProSite protein group. The training set for each network consisted of both positive (ProSite family members) and negative (randomly selected non-member) sequences at a ratio of 1 to 2. ProClass groups non-redundant SwissProt and PIR protein sequence entries into families as defined collectively by PIR superfamilies and ProSite patterns. By joining global and motif similarities in a single classification scheme, ProClass helps to reveal domain and family relationships and to classify multi-domained proteins. [Pg.138]
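The 1-to-2 positive-to-negative training-set construction might look like the following sketch; the sequences and the build_training_set helper are hypothetical, not MOTIFIND code:

```python
# For each family: members are positives, and twice as many randomly
# selected non-members are negatives.
import random

def build_training_set(family_members, all_sequences, ratio=2, seed=0):
    """Return labeled (sequence, is_member) pairs at a 1:ratio mix."""
    rng = random.Random(seed)
    members = set(family_members)
    non_members = [s for s in all_sequences if s not in members]
    negatives = rng.sample(non_members, ratio * len(family_members))
    return [(s, 1) for s in family_members] + [(s, 0) for s in negatives]

family = ["MKTAYIAK", "MKTAYLAK"]                     # made-up family members
database = [f"SEQ{i:04d}" for i in range(1000)] + family
training = build_training_set(family, database)
```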


