Big Chemical Encyclopedia


Sampling patterns

Shortcomings of Wang's method, such as the limited pitch of the spiral and blurring in the vertical direction, can be remedied by the CFBP algorithm [10], in which gaps in the spiral sampling pattern are filled using X-rays measured from the opposite side. [Pg.494]

Finally, a word on terminology. Many AI methods learn by inspecting examples, which come in a variety of forms. They might comprise a set of infrared spectra of different samples, the abstracts from a large number of scientific articles, a set of solid materials defined by their composition and their emission spectrum at high temperature, or the results from a series of medical tests. In this text, we refer to these examples, no matter what their nature, as "sample patterns."... [Pg.7]

The ability of an ANN to learn is its greatest asset. When, as is usually the case, we cannot determine the connection weights by hand, the neural network can do the job itself. In an iterative process, the network is shown a sample pattern, such as the X, Y coordinates of a point, and uses the pattern to calculate its output; it then compares its own output with the correct output for the sample pattern and, unless its output is perfect, makes small adjustments to the connection weights to improve its performance. The training process is shown in Figure 2.13. [Pg.21]
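This iterative show-compare-adjust cycle can be sketched in a few lines. The following is a minimal illustration, not the book's own code: a single node trained with the delta rule on invented (X, Y) points, where the task (deciding which side of the line y = x a point falls on) and all parameter values are assumptions made for the example.

```python
import random

def train_node(samples, epochs=1000, eta=0.1):
    """Show each sample pattern to a single node, compare its output with
    the target, and nudge the connection weights after every mistake."""
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # x weight, y weight, bias
    for _ in range(epochs):
        for (x, y), target in samples:
            output = 1 if w[0] * x + w[1] * y + w[2] > 0 else 0
            error = target - output          # zero when the output is correct
            w[0] += eta * error * x          # small corrective adjustments
            w[1] += eta * error * y
            w[2] += eta * error
    return w

# hypothetical task: classify points by which side of the line y = x they lie on
# (points very close to the line are dropped so the classes are well separated)
random.seed(0)
samples = [((x, y), 1 if x > y else 0)
           for x, y in ((random.random(), random.random()) for _ in range(50))
           if abs(x - y) > 0.1]
w = train_node(samples)
correct = sum((1 if w[0] * x + w[1] * y + w[2] > 0 else 0) == t
              for (x, y), t in samples)
```

After training, the node reproduces the correct output for every training pattern; the point of the sketch is the loop structure (present a pattern, compute an output, compare, adjust), not the particular classifier.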

With the introduction of a hidden layer, training becomes trickier because, although the target responses for the output nodes are still available from the database of sample patterns, there are no target values in the database for hidden nodes. Unless we know what output a hidden node should be generating, it is not possible to adjust the weights of the connections into it in order to reduce the difference between the required output and that which is actually delivered. [Pg.30]

The degree to which it has learned general rules rather than simply learned to recognize specific sample patterns is then difficult to assess. In using a network on a new dataset, it is, therefore, important to try to estimate the complexity of the data (in essence, the number of rules that will be necessary to satisfactorily describe it) so that a network of suitable size can be used. If the network contains more hidden nodes than are needed to fit the rules that describe the data, some of the power of the network will be siphoned off into the learning of specific examples in the training set. [Pg.40]

If sample patterns in a large database are each defined by just two values, a two-dimensional plot may reveal clustering that can be detected by the eye (Figure 3.1). However, in science our data often have many more than two dimensions. An analytical database might contain information on the chemical composition of samples of crude oil extracted from different oilfields. Oils are complex mixtures containing hundreds of chemicals at detectable levels; thus, the composition of an oil could not be represented by a point in a space of two dimensions. Instead, a space of several hundred dimensions would be needed. To determine how closely oils in the database resembled one another, we could plot the composition of every oil in this high-dimensional space, and then measure the distance between the points that represent two oils; the distance would be a measure of the difference in composition. Similar oils would be "close together" in space,... [Pg.51]
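The distance measure implied here can be shown directly. In this sketch the composition vectors are invented for illustration (four components rather than hundreds, with made-up fractional abundances); the idea is only that similar oils yield a small Euclidean distance.

```python
import math

def composition_distance(a, b):
    """Euclidean distance between two n-dimensional composition vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# hypothetical fractional abundances of four components in three oils
oil_a = [0.42, 0.31, 0.15, 0.12]
oil_b = [0.40, 0.33, 0.14, 0.13]   # closely resembles oil_a
oil_c = [0.10, 0.20, 0.45, 0.25]   # markedly different composition

d_ab = composition_distance(oil_a, oil_b)
d_ac = composition_distance(oil_a, oil_c)
```

Here d_ab is much smaller than d_ac, so oil_a and oil_b would sit close together in the composition space while oil_c lies far from both.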

To determine whether sample patterns in a database are similar to one another, the values of each of the n parameters that define the sample must be compared. Because it is not possible to check whether points form clusters by actually looking into the n-dimensional space—unless n is very small—some mathematical procedure is needed to identify clustered points. Points that form clusters are, by definition, close to one another; therefore, provided that we can pin down what we mean by "close to one another," it should be possible to spot mathematically any clusters and identify the points that comprise them. [Pg.54]
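One simple way to pin down "close to one another" is a distance threshold: any point within the threshold of a cluster member joins that cluster. The single-linkage sketch below is an illustration of the principle, not a method taken from the book; the threshold and the test points are invented.

```python
import math

def dist(p, q):
    """Euclidean distance between two n-dimensional points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def find_clusters(points, threshold):
    """Single-linkage grouping: points closer than `threshold` to any
    member of a cluster are absorbed into that cluster."""
    unassigned = list(points)
    clusters = []
    while unassigned:
        frontier = [unassigned.pop()]   # seed a new cluster
        cluster = []
        while frontier:
            p = frontier.pop()
            cluster.append(p)
            near = [q for q in unassigned if dist(p, q) < threshold]
            for q in near:
                unassigned.remove(q)
            frontier.extend(near)       # newly absorbed points recruit in turn
        clusters.append(cluster)
    return clusters

# two well-separated groups of 3-D points plus one outlier
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (5, 5, 5), (5.1, 5, 5),
       (9, 0, 9)]
clusters = find_clusters(pts, threshold=1.0)
```

With these points the procedure finds the two tight groups and leaves the outlier in a cluster of its own, exactly the kind of structure the eye would spot in a low-dimensional plot.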

Select a sample pattern at random from the database. [Pg.60]

Calculate how similar the sample pattern is to the weights vector at each node in turn, by determining the Euclidean distance between the sample pattern and the weights vector. [Pg.60]

Select the winning node, which is the node whose weights vector most strongly resembles the sample pattern. [Pg.60]

Update the weights vector at the winning node to make it slightly more like the sample pattern. [Pg.60]
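The four steps above can be sketched as follows. This is a bare-bones illustration without the neighborhood updates discussed later in the chapter; the learning rate, cycle count, and sample data are all invented for the example.

```python
import random

def sq_dist(sample, weights):
    """Squared Euclidean distance between a sample pattern and a weights vector."""
    return sum((s - w) ** 2 for s, w in zip(sample, weights))

def train_som(samples, n_nodes, cycles=500, eta=0.2):
    n = len(samples[0])
    # weights vectors initially filled with random numbers
    nodes = [[random.random() for _ in range(n)] for _ in range(n_nodes)]
    for _ in range(cycles):
        sample = random.choice(samples)               # 1. pick a sample at random
        dists = [sq_dist(sample, w) for w in nodes]   # 2. compare it with every node
        winner = dists.index(min(dists))              # 3. the best-matching node wins
        for i in range(n):                            # 4. nudge the winner toward it
            nodes[winner][i] += eta * (sample[i] - nodes[winner][i])
    return nodes

random.seed(1)
samples = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.85, 0.95)]
nodes = train_som(samples, n_nodes=2)
```

Because each update moves a weight to a convex combination of its old value and the sample element, the weights always remain inside the range spanned by the initial values and the data.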

Now that the SOM has been constructed and the weights vectors have been filled with random numbers, the next step is to feed in sample patterns. The SOM is shown every sample in the database, one at a time, so that it can learn the features that characterize the data. The precise order in which samples are presented is of no consequence, but the order of presentation is randomized at the start of each cycle to avoid the possibility that the map may learn something about the order in which samples appear as well as the features within the samples themselves. A sample pattern is picked at random and fed into the network; unlike the patterns used to train a feedforward network, there is no target response, so the entire pattern is used as input to the SOM. [Pg.62]

Each value in the chosen sample pattern is compared in turn with the corresponding weight at the first node to determine how well the pattern and weights vector match (Figure 3.9). A numerical measure of the quality of the match is essential, so the difference between the two vectors, d_pq, generally defined as the squared Euclidean distance between the two, is calculated ... [Pg.62]
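The excerpt breaks off before the formula, but the squared Euclidean distance it names is standard. For a sample pattern p with elements x_pi and the weights vector of node q with elements w_qi, it would read:

```latex
d_{pq} = \sum_{i=1}^{n} \left( x_{pi} - w_{qi} \right)^{2}
```

The sum runs over all n entries of the pattern, so a perfect match gives d_pq = 0 and larger values indicate a poorer match.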

The input pattern is compared with the weights vector at every node to determine which set of node weights it most strongly resembles. In this example, the height, hair length, and waistline of each sample pattern will be compared with the equivalent entries in each node weight vector. [Pg.63]

The next step is to adjust the vector at the winning node so that its weights become more like those of the sample pattern. [Pg.64]

As the adjustments at the winning node move each of its weights slightly toward the corresponding element of the sample vector, the node learns a little about this sample pattern and, thus, is more likely to again be the... [Pg.64]
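Although the excerpt is truncated, the adjustment it describes is conventionally written with a learning-rate parameter (here η, a symbol assumed rather than taken from the text) that controls how far each weight of the winning node q moves toward the corresponding element of sample pattern p:

```latex
w_{qi}^{\text{new}} = w_{qi}^{\text{old}} + \eta \left( x_{pi} - w_{qi}^{\text{old}} \right)
```

With 0 < η < 1 each presentation moves the weight only part of the way toward the sample element, which is why the node learns "a little" about the pattern at each step.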

This question of reproducibility is an important one; we can understand why the lack of reproducibility is not the problem it might seem to be by considering how the SOM is used as a diagnostic tool. A sample pattern that the map has not seen before is fed in and the winning node (the node to which that sample "points") is determined. By comparing the properties of the unknown sample with patterns in the database that point to the same winning node or to one of the nodes nearby, we can learn the type of samples in the database that the unknown sample most strongly resembles. [Pg.69]
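This diagnostic use can be sketched directly. The trained map below is a hypothetical one-dimensional example with four nodes, and the labeled database entries are invented; the point is only the lookup logic: find the unknown's winning node, then report the database samples that share it.

```python
def winning_node(sample, nodes):
    """Index of the node whose weights vector best matches the sample
    (smallest squared Euclidean distance)."""
    d = [sum((a - b) ** 2 for a, b in zip(sample, w)) for w in nodes]
    return d.index(min(d))

def diagnose(unknown, nodes, database):
    """Labels of database samples that point to the same winning node
    as the unknown sample."""
    target = winning_node(unknown, nodes)
    return [label for sample, label in database
            if winning_node(sample, nodes) == target]

# hypothetical trained map (four nodes) and labeled database
nodes = [(0.1, 0.1), (0.4, 0.4), (0.7, 0.7), (0.95, 0.9)]
database = [((0.12, 0.08), "type A"), ((0.42, 0.38), "type B"),
            ((0.68, 0.72), "type C"), ((0.93, 0.91), "type D")]
labels = diagnose((0.15, 0.12), nodes, database)
```

The unknown sample (0.15, 0.12) points to the same node as the "type A" entry, so that is the type it most strongly resembles; extending the search to nearby nodes would simply widen the comparison set.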

We conclude that the absolute position of the node on the map to which the sample pattern points is not important; neither of the maps in Figure 3.14 and Figure 3.15 is better than the other. It is the way that samples are clustered on the map that is significant. It is, in fact, common to discover when using a SOM that there are several essentially equivalent, but visually very different, clusterings that can be generated. [Pg.71]

In the Mexican Hat function (Figure 3.20), the weights of the winning node and its close neighbors are adjusted to increase their resemblance to the sample pattern (an excitatory effect), but the weights of nodes that are... [Pg.75]
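One standard form of this excitatory/inhibitory profile is the Ricker wavelet; the sketch below is illustrative and not necessarily the exact function plotted in Figure 3.20, and the width parameter sigma is an assumption.

```python
import math

def mexican_hat(d, sigma=1.0):
    """Ricker ('Mexican hat') weighting as a function of the distance d
    between a node and the winner: positive (excitatory) for d < sigma,
    negative (inhibitory) beyond it, fading toward zero far away."""
    r = (d / sigma) ** 2
    return (1.0 - r) * math.exp(-r / 2.0)
```

The winner itself (d = 0) gets the full excitatory adjustment, nodes just beyond sigma are pushed away from the sample pattern, and very distant nodes are essentially untouched.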

The SOM displays intriguing behavior if the input data are drawn from a two-dimensional distribution and the SOM weights are interpreted as Cartesian coordinates so that the position of each node can be plotted in two dimensions. In Example 5, the sample pattern consisted of data points taken at random from within the range [x = 0 to 1, y = 0 to 1]. In Figure 3.21, we show the development of that pattern in more detail from a different random starting point. [Pg.76]

A familiar shape from the laboratory, learned by a rectangular SOM. Sample patterns are drawn from the outline of an Erlenmeyer flask. [Pg.80]

With this number of factors influencing the development of the map, it is not possible to specify precisely what their values should be in all cases. The most suitable values will depend on how many features are present in the sample patterns and how diverse they are, but some general guidelines can be given. In the large majority of applications, a two-dimensional map is used; these are more flexible than one-dimensional maps, yet are simple... [Pg.80]

At the start of this chapter, we noted that the SOM has two roles: to cluster data and to display the result of that clustering. The finished map should divide into a series of regions, each corresponding to a different class of sample pattern. [Pg.81]

This asymmetry may have an effect on the development of the map. If there are few examples of a particular class in the dataset or if the characteristics of some sample patterns are markedly different from the characteristics of most other samples, development of the map may be eased if these unusual samples find their way to the edge of the map where they have fewer neighbors. The remaining samples, which share a wider range of characteristics, then have the whole of the rest of the map to themselves and they can spread out widely to reveal the differences between them to the maximum degree permitted by the size of the map. [Pg.86]







© 2024 chempedia.info