Big Chemical Encyclopedia


Finite data sampling

An important feature in the formulation of the tomographic reconstruction process is the assumption of linearity attached to the various operations involved. This leads to the concept of a spatially invariant point-spread function that is a measure of the performance of a given operation. In practice the transformation associated with an operation involves the convolution of the input with the point-spread function in the spatial domain to provide the output. This is recognised as a cumbersome mathematical process and leads to an alternative representation that describes the input in terms of sinusoidal functions. The associated transformation is now conducted in the frequency domain, with the transfer function described by the Fourier integral. In discussing these principles, the functions in the spatial domain and the frequency domain are considered to be continuous in their respective independent variables. However, for practical applications the relevant processes involve discrete and finite data sampling. This has a significant effect on the accuracy of the reconstruction, and in this respect certain conditions are imposed on the type and amount of data in order to improve the result. [Pg.654]
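The duality between convolution in the spatial domain and multiplication in the frequency domain can be checked numerically. The sketch below (plain NumPy; the signal and the Gaussian point-spread function are illustrative choices, not from the cited text) filters a 1-D input both ways and confirms the two results agree.

```python
import numpy as np

# Illustrative 1-D check of the convolution theorem: filtering an input with a
# Gaussian point-spread function (PSF) in the spatial domain equals
# multiplication of their Fourier transforms in the frequency domain.
n = 256
x = np.arange(n)
signal = np.zeros(n)
signal[100:110] = 1.0                          # a simple "object"

psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
psf /= psf.sum()                               # normalised PSF

# Spatial domain: direct circular convolution, (f*g)[k] = sum_m f[m] g[(k-m) mod n].
spatial = np.array([np.sum(signal * np.roll(psf[::-1], k + 1)) for k in range(n)])

# Frequency domain: pointwise product of the transforms, then inverse transform.
freq = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))

assert np.allclose(spatial, freq)
```

Because the PSF is normalised, the filtered output also conserves the total intensity of the input.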

In our particular example we find for the 100 three-point samples a minimum standard deviation of 0.12, and a maximum of 1.91, quite a spread around the theoretical value of 1.00. (The specific numerical values you will find will of course be different from the example given here, but the trends are likely to be similar.) For the 30 ten-point samples we obtain the extreme values 0.35 and 1.45, and for the 10 thirty-point samples 0.73 and 1.23. While taking more samples improves matters, even with thirty samples our estimate of the standard deviation can be off by more than 20%. And this is for by-the-book, synthetic Gaussian noise. From now on, take all standard deviations you calculate with an appropriate grain of salt: for any finite data set, the standard deviations are themselves estimates subject to chance. [Pg.51]
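The experiment described above is easy to reproduce. The sketch below (NumPy, with an arbitrary seed, so the exact extremes will differ from those quoted in the text) draws 100 three-point, 30 ten-point, and 10 thirty-point samples of unit-variance Gaussian noise and reports the range of the resulting sample standard deviations.

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed; your extremes will differ

# For each (number of samples, points per sample), record the spread of the
# sample standard deviations around the theoretical value of 1.00.
spread = {}
for n_samples, size in [(100, 3), (30, 10), (10, 30)]:
    s = np.array([rng.standard_normal(size).std(ddof=1) for _ in range(n_samples)])
    spread[size] = (s.min(), s.max())
    print(f"{n_samples:3d} samples of {size:2d} points: "
          f"s ranges from {s.min():.2f} to {s.max():.2f}")
```

The three-point samples show the widest spread around 1.00, and the range narrows as the number of points per sample grows, matching the trend in the text.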

However, we are usually dealing with a small, finite subset of measurements, not 20 or more, so the standard deviation that should be reported is the sample standard deviation s. For a small finite data set, the sample standard deviation s differs from σ in two respects. Look at the equations given in the definitions. The equation for the sample standard deviation s contains the sample mean, not the population mean, and uses N − 1 measurements instead of N, the total number of measurements. The term N − 1 is called the degrees of freedom. [Pg.33]
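The distinction can be made concrete in a few lines of NumPy (the data values below are arbitrary): the sample standard deviation s divides the sum of squared deviations from the sample mean by N − 1, while the population formula divides by N.

```python
import numpy as np

data = np.array([10.1, 10.4, 9.8, 10.2, 10.0])   # a small finite data set
N = len(data)
dev2 = np.sum((data - data.mean()) ** 2)         # squared deviations from the sample mean

s = np.sqrt(dev2 / (N - 1))     # sample standard deviation, N - 1 degrees of freedom
sigma_N = np.sqrt(dev2 / N)     # population formula applied to the same data

assert np.isclose(s, data.std(ddof=1))        # NumPy's ddof=1 gives s
assert np.isclose(sigma_N, data.std(ddof=0))  # ddof=0 divides by N
```

Dividing by N − 1 always yields the slightly larger value, which compensates for using the sample mean in place of the unknown population mean.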

In many geophysical applications we have a finite data set consisting of a vector γ = (γ1, ..., γN) of scalar observations measured at corresponding known locations z1, ..., zN, and we want to find an analytical or discrete representation of a surface f which interpolates or approximates these sampling values at the given nodes. The finite node set Z = {z1, ..., zN} contains points of some prescribed domain Ω ⊂ ℝᵈ, where d = 1, 2, 3, ... denotes the dimension. [Pg.389]
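As a minimal illustration of such an interpolating representation (the Gaussian radial basis, the node positions, and the test function are all illustrative choices, not the methods of the cited work), the sketch below builds an interpolant through scalar values γ observed at a 1-D node set Z.

```python
import numpy as np

# Illustrative 1-D radial-basis-function interpolation of scalar data gamma
# observed at a finite node set Z (the Gaussian basis is one common choice).
z = np.array([0.05, 0.15, 0.3, 0.45, 0.6, 0.7, 0.85, 0.95])  # node set Z in [0, 1]
gamma = np.sin(2 * np.pi * z)                                 # sampling values

def phi(r, eps=5.0):
    """Gaussian radial basis function."""
    return np.exp(-(eps * r) ** 2)

A = phi(np.abs(z[:, None] - z[None, :]))   # interpolation matrix A_ij = phi(|z_i - z_j|)
c = np.linalg.solve(A, gamma)              # expansion coefficients

def f(x):
    """Interpolant f(x) = sum_j c_j * phi(|x - z_j|)."""
    return phi(np.abs(np.atleast_1d(x)[:, None] - z)) @ c

assert np.allclose(f(z), gamma, atol=1e-6)  # f reproduces the data at the nodes
```

The interpolant reproduces the data exactly at the nodes and gives a smooth analytical representation between them; in higher dimensions the same construction applies with |x − z_j| as the Euclidean distance in ℝᵈ.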

Before any attempt to establish a correlation between the surface structure of the oxidized alloys and their CO conversion activity, one must stress that the surface composition of the samples under reaction conditions may not necessarily be identical to that determined from ESCA data. Moreover, surface nickel content estimates from ESCA relative intensity measurements are at best semi-quantitative. This can be readily rationalized if one takes into consideration the finite ESCA escape depth, the dependence of the ESCA intensity ratio... [Pg.312]

For an approximate determination of the sample composition it is often sufficient to measure the peak height of the core level. In general, however, core level structures are asymmetric peaks above a finite background, sometimes accompanied by satellite structures. These structures originate from the many-body character of the emission process. Therefore a peak integration including satellites and asymmetric tails is much more reliable. Due to the above difficulties, quantitative analysis of XPS data should be taken as accurate to only within about 5-10%. [Pg.81]
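A toy numerical illustration (all line shapes and parameters are invented for the example) of why peak height alone can mislead: an asymmetric tail adds substantially to the integrated area while barely changing the peak maximum.

```python
import numpy as np

# Synthetic core-level spectrum: a symmetric main line plus an asymmetric
# tail, sitting on a linear background (all parameters illustrative).
e = np.linspace(0.0, 20.0, 2001)                     # energy axis
background = 0.5 + 0.01 * e
main = np.exp(-0.5 * ((e - 10.0) / 0.8) ** 2)        # symmetric main line
tail = 0.3 * np.exp(-(e - 10.0) / 3.0) * (e > 10.0)  # asymmetric tail
spectrum = background + main + tail

net = spectrum - background          # background-subtracted signal
de = e[1] - e[0]
area_full = net.sum() * de           # integration including the tail
area_main = main.sum() * de          # symmetric part only
height = net.max()

# The tail adds a large fraction of extra area but only modestly raises
# the apex, so a height-only analysis underestimates such components.
print(height, area_main, area_full)
```

Here the integrated area including the tail exceeds the symmetric-line area by well over 30%, while the peak height changes far less in relative terms.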

Figure 8.47. SRSAXS raw data (open symbols) and model fit (solid line) for a nanostructured material using a finite lattice model. The model components are demonstrated: absorption factor Asr, density fluctuation background IFl, smooth phase transition. The solid monotonous line demonstrates the shape of the Porod law in the raw data. At s0 the absorption switches from full illumination of the sample to partial illumination of the sample.
Very few of the references in Tables 1-3 attempt any quantitative modelling of their NMR data in terms of cell microstructure or composition. Such models would be extremely useful in choosing the optimum acquisition pulse sequences and for rationalising differences between sample batches, varieties and the effects of harvesting times and storage conditions. The Numerical Cell Model referred to earlier is a first step in this direction, but more realistic cell morphologies could be tackled with finite element and Monte Carlo numerical methods. [Pg.117]

Often, in actual applications, only a finite, discrete set of values on the boundary is available through direct measurement. Inevitably, therefore, solving for the general solution proves almost impossible. The use of discretely sampled data, however, suggests the need for specialized techniques in order to develop solutions. At the outset, then, we treat analytic solutions as interesting guides, but rarely useful in practice. [Pg.253]

Variance A more statistically meaningful quantity for expressing data quality is the variance. For a finite number of samples, it is defined as ... [Pg.21]
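For a finite set of N samples the usual definition divides the sum of squared deviations from the sample mean by N − 1, consistent with the sample standard deviation discussed earlier. A minimal check (the data values are arbitrary):

```python
import numpy as np

data = np.array([4.28, 4.21, 4.30, 4.36, 4.26])   # arbitrary example data
N = len(data)

# Sample variance: sum of squared deviations from the mean, over N - 1.
variance = np.sum((data - data.mean()) ** 2) / (N - 1)

assert np.isclose(variance, data.var(ddof=1))     # NumPy equivalent
```

The variance is simply the square of the sample standard deviation, which is why it carries the same N − 1 degrees of freedom.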

The classical, frequentist approach in statistics requires the concept of the sampling distribution of an estimator. In classical statistics, a data set is commonly treated as a random sample from a population. Of course, in some situations the data actually have been collected according to a probability-sampling scheme. Whether that is the case or not, processes generating the data will be subject to stochasticity and variation, which is a source of uncertainty in use of the data. Therefore, sampling concepts may be invoked in order to provide a model that accounts for the random processes, and that will lead to confidence intervals or standard errors. The population may or may not be conceived as a finite set of individuals. In some situations, such as when forecasting a future value, a continuous probability distribution plays the role of the population. [Pg.37]
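The sampling-distribution idea can be illustrated with a small simulation (all numbers are invented for the sketch): repeatedly drawing samples from a population and recording the estimator shows its spread, and the empirical standard error of the mean comes out close to the classical σ/√n.

```python
import numpy as np

rng = np.random.default_rng(2)

# A large synthetic "population" and the sampling distribution of the mean.
population = rng.normal(loc=50.0, scale=5.0, size=100_000)
n = 25
means = np.array([rng.choice(population, size=n).mean() for _ in range(2000)])

standard_error = means.std(ddof=1)      # empirical spread of the estimator
theory = population.std() / np.sqrt(n)  # classical sigma / sqrt(n)
print(standard_error, theory)
```

The same simulation strategy works when no probability-sampling scheme was actually used: the resampling merely provides a model for the stochastic processes generating the data.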



© 2024 chempedia.info