Training data

Another important consideration in selecting the training targets is the question of what it means to fit an experiment. Not every experimental data point has to be taken as a target of model optimization. Experience shows that creating summarizing properties is sufficient. [Pg.244]


The three network structures introduced were trained with the training data set and tested with the test data set. The backpropagation network reaches its best classification result after 70,000 training iterations ... [Pg.465]

The Fuzzy-ARTMAP network reaches the best learning rate on the training data set, which is recognised perfectly with 100% correctness. The exact results are presented in the next table ... [Pg.466]

As we have mentioned, the particular characterization task considered in this work is to determine attenuation in composite materials. At hand we have a data acquisition system that can provide us with data from both PE and TT testing. The approach is to treat the attenuation problem as a multivariable regression problem, where the target values, y_n, are the measured attenuation values (at different locations n) and where the input data are the (preprocessed) PE data vectors, u_n. The problem is to find a function ŷ_n = f(u_n), such that ŷ_n ≈ y_n, based on measured data, the so-called training data. [Pg.887]
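A minimal sketch of this regression setup; the synthetic arrays standing in for the PE vectors and attenuation values, and the use of scikit-learn's MLPRegressor, are illustrative assumptions, not the approach of the cited work:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Stand-ins for the measured data: u holds one preprocessed PE vector per
# measurement location n, y the corresponding measured attenuation value.
rng = np.random.default_rng(0)
u = rng.normal(size=(400, 32))                                    # hypothetical PE data vectors
y = u[:, 0] ** 2 + 0.5 * u[:, 1] + 0.05 * rng.normal(size=400)    # hypothetical attenuation values

# Hold out part of the measured data so the fitted f can be checked on unseen locations
u_train, u_test, y_train, y_test = train_test_split(u, y, test_size=0.2, random_state=0)

# Fit a nonlinear regressor as the function f, so that f(u_n) approximates y_n
f = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
f.fit(u_train, y_train)
print("R^2 on held-out data:", f.score(u_test, y_test))
```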

Neural networks model the functionality of the brain. They learn from examples, whereby the weights of the neurons are adapted on the basis of training data. [Pg.481]

Model building consists of three steps: training, evaluation, and testing. In the ideal case the whole data set is divided into three portions: the training set, the evaluation set, and the test set. A wide variety of statistical or neural network... [Pg.490]
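A minimal sketch of such a three-way split, assuming the descriptors and targets sit in arrays X and y; the 60/20/20 proportions are an illustrative assumption, not a prescription from the text:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical descriptor matrix X and target vector y
rng = np.random.default_rng(0)
X = rng.random((1000, 16))
y = rng.random(1000)

# First split off the test set, then split the remainder into training and evaluation sets
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# 0.25 of the remaining 80% gives 60/20/20 training/evaluation/test portions
print(len(X_train), len(X_eval), len(X_test))   # 600 200 200
```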

Breindl et al. published a model based on semi-empirical quantum mechanical descriptors and back-propagation neural networks [14]. The training data set consisted of 1085 compounds, and 36 descriptors were derived from AM1 and PM3 calculations describing electronic and spatial effects. The best results, with a standard deviation of 0.41, were obtained with the AM1-based descriptors and a net architecture of 16-25-1, corresponding to 451 adjustable parameters and a ratio of 2.17 to the number of input data. For a test data set a standard deviation of 0.53 was reported, which is quite close to that of the training model. [Pg.494]
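As a check, the 451 adjustable parameters follow directly from the 16-25-1 architecture (weights plus biases for each layer): (16 × 25 + 25) + (25 × 1 + 1) = 425 + 26 = 451.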

When the structure is submitted, its 3D coordinates are calculated and the structure is shown on the left-hand side both as a 2D structure and as a rotatable 3D structure (see Figure 10.2-11). The simulation can then be started: the input structure is coded, the training data are selected, and the network training is launched. After approximately 30 seconds the simulation result is given as shown in Figure 10.2-11. [Pg.532]

This reaction data set of 626 reactions was used as a training data set to produce a knowledge base. Before this data set is used as input to a neural Kohonen network, each reaction must be coded in the form of a vector characterizing the reaction event. Six physicochemical effects were calculated for each of five bonds at the reaction center of the starting materials by the PETRA program system (see Section 7.1.4). As shown in Figure 10.3-3 with an example, the physicochemical effects of the two regioisomeric products are different. [Pg.546]

Figure 10.3-3. Two regioisomeric products of the training data set and their corresponding physicochemical effects used as coding vectors: bo, bond order; difference in electro-...
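A minimal sketch of this kind of coding and unsupervised mapping, with a tiny self-organizing (Kohonen) map written out in NumPy; the 6 effects × 5 bonds = 30-dimensional coding vectors are random placeholders here, not PETRA output:

```python
import numpy as np

# Hypothetical coding: 6 physicochemical effects for each of 5 bonds at the
# reaction centre -> a 30-dimensional vector per reaction (values made up here).
rng = np.random.default_rng(0)
reaction_vectors = rng.normal(size=(626, 30))

# Minimal Kohonen (self-organizing) map: a grid of neurons, each holding a
# weight vector of the same dimension as the reaction coding vectors.
grid_x, grid_y, dim = 10, 10, reaction_vectors.shape[1]
weights = rng.normal(size=(grid_x, grid_y, dim))

def winner(v):
    """Return grid coordinates of the neuron whose weights are closest to v."""
    d = np.linalg.norm(weights - v, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Training: move the winning neuron and its neighbours towards each sample.
for t in range(5000):
    v = reaction_vectors[rng.integers(len(reaction_vectors))]
    wx, wy = winner(v)
    lr = 0.5 * np.exp(-t / 2500)          # decaying learning rate
    sigma = 3.0 * np.exp(-t / 2500)       # decaying neighbourhood radius
    gx, gy = np.meshgrid(np.arange(grid_x), np.arange(grid_y), indexing="ij")
    h = np.exp(-((gx - wx) ** 2 + (gy - wy) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, :, None] * (v - weights)

# After training, each reaction maps to a grid cell; nearby cells group similar reactions.
print(winner(reaction_vectors[0]))
```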
The ability to generalize on given data is one of the most important performance characteristics. With appropriate selection of training examples, an optimal network architecture, and appropriate training, the network can map a relationship between input and output that is complete but bounded by the coverage of the training data. [Pg.509]

They have the ability to generalize from given training data to unseen data. [Pg.348]

Provided input/output data are available, a neural network may be used to model the dynamics of an unknown plant. There is no constraint as to whether the plant is linear or nonlinear, providing that the training data covers the whole envelope of plant operation. [Pg.358]
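A minimal sketch of this kind of system identification, assuming logged plant input u(k) and output y(k) and a one-step-ahead model y(k+1) = f(y(k), u(k)); the simulated plant, the lag structure, and the use of scikit-learn's MLPRegressor are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Logged plant data (hypothetical): a simple nonlinear plant is simulated here
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)                      # input covering the operating envelope
y = np.zeros_like(u)
for k in range(len(u) - 1):
    y[k + 1] = 0.8 * y[k] + 0.4 * np.tanh(u[k])   # "unknown" plant dynamics

# Build regression pairs: predict y(k+1) from the current output and input
X = np.column_stack([y[:-1], u[:-1]])
t = y[1:]

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:1500], t[:1500])                        # training data
print("test R^2:", model.score(X[1500:], t[1500:]))  # unseen data
```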

Richter et al. (1997) used this technique to model the dynamic characteristics of a ship. The vessel was based on the Mariner Hull and had a length of 161 m and a displacement of 17,000 tonnes. The training data was provided by a three degree-of-freedom (forward velocity, or surge, lateral velocity, or sway, and turn, or... [Pg.358]

Fig. 10.26 Training and trained data for a neural network model of a ship's hull.
Drish, W. F., and Singh, S. (1992). Train Energy Model Validation Using Revenue Service Mixed Intermodal Train Data. Chicago: Association of American Railroads. [Pg.975]

Multi-channel data provide the best picture of the relationship between measurement points on a machine-train. Data are acquired simultaneously from all measurement points on the machine-train. With this type of data, the analyst can establish the relationship between machine dynamics and vibration profile of the entire machine. [Pg.687]

CPU time. In response to these slow and rigorous calculations, many fast heuristic approaches have been developed that are based on intuitive concepts such as docking [10], matching pharmacophores [19], or linear free energy relationships [20]. A disadvantage of many simple heuristic approaches is their susceptibility to generalization error [17], where the accuracy of the predictions is limited to the training data. [Pg.326]

Fig. 13. Comparison of model predictions (solid line) with training data (dashed line) for the identification example. [Pg.197]

The procedure for generating a decision tree consists of selecting the variable that gives the best classification as the root node. Each variable is evaluated for its ability to classify the training data using an information-theoretic measure of entropy. Consider a data set with K classes, C_l, l = 1, ..., K. Let M be the total number of training examples, and let... [Pg.263]

Equations (24) and (25) are adequate for designing decision trees. The feature that minimizes the information content is selected as a node. This procedure is repeated for every leaf node until adequate classification is obtained. Techniques for preventing overfitting of the training data, such as cross-validation, are then applied. [Pg.263]
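A minimal sketch of this entropy-based split selection; equations (24) and (25) are not reproduced from the text, the code simply computes the class entropy of each candidate split and picks the feature that leaves the least information content:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a vector of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def split_information(feature, labels):
    """Weighted entropy of the partitions induced by a (discrete) feature."""
    total = len(labels)
    info = 0.0
    for value in np.unique(feature):
        subset = labels[feature == value]
        info += len(subset) / total * entropy(subset)
    return info

# Hypothetical discrete training data: M examples, 3 candidate features, 2 classes
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 3))
y = np.where(X[:, 1] == 1, "A", "B")          # class depends mostly on feature 1

# The feature that minimizes the remaining information content becomes the node
scores = [split_information(X[:, j], y) for j in range(X.shape[1])]
print("selected root feature:", int(np.argmin(scores)))
```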

NN can be used to select descriptors and to produce a QSPR model. Since NN models can take nonlinearity into account, these models tend to perform better for log S prediction than those refined using MLR and PLS. However, training nonlinear behavior requires significantly more training data than training linear behavior. Another disadvantage is their black-box character, i.e. they provide no insight into how each descriptor contributes to the solubility. [Pg.302]
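A minimal sketch of this kind of comparison, using a synthetic nonlinear property as a stand-in for log S; the data and the scikit-learn estimators (LinearRegression for MLR, MLPRegressor for the NN) are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical descriptors and a property that depends on them nonlinearly
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 5))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Linear model (stand-in for MLR) vs a small neural network
mlr = LinearRegression().fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

print("MLR test R^2:", round(mlr.score(X_te, y_te), 2))
print("NN  test R^2:", round(nn.score(X_te, y_te), 2))
```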

