Bayesian estimation theory

The maximum entropy method is a powerful numerical technique based on Bayesian estimation theory and is often applied to derive the most... [Pg.497]
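Hedged illustration: among all distributions consistent with given constraints, the maximum entropy method selects the one with the largest Shannon entropy. A minimal sketch in Python, assuming only a small discrete support and a single mean constraint (the support and target mean are invented for illustration, not taken from the source above):

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on the discrete support {0,...,5},
# constrained to reproduce a prescribed mean.  The solution has the
# exponential form p_i ∝ exp(-lam * x_i); we solve for the Lagrange
# multiplier lam that matches the constraint.
x = np.arange(6)
target_mean = 1.5  # illustrative constraint

def mean_mismatch(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return float(p @ x) - target_mean

lam = brentq(mean_mismatch, -10.0, 10.0)
w = np.exp(-lam * x)
p = w / w.sum()
print("maxent probabilities:", np.round(p, 4), "| mean:", round(float(p @ x), 4))
```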

M. Meier, R. Preuss, V. Dose, Interaction of CH3 and H with amorphous hydrocarbon surfaces: estimation of reaction cross-sections using Bayesian probability theory. New J. Phys. 5, 133 (2003)... [Pg.284]

Jeffreys (1961) advanced Bayesian theory by giving an unprejudiced prior density p(θ, σ) for suitably differentiable models. His result, given in Chapter 5 and used below, is fundamental in Bayesian estimation. [Pg.141]
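For reference, Jeffreys' general rule takes the prior proportional to the square root of the determinant of the Fisher information (standard notation, paraphrased rather than quoted from the source):

```latex
p(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I_{ij}(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln L(\theta)}{\partial \theta_{i}\,\partial \theta_{j}}\right].
```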

The residual difference after a successful DDM refinement and/or decomposition can be considered as a scattering component of the powder pattern free of Bragg diffraction. The separation of this component would facilitate the analysis of the amorphous fraction of the sample, the radial distribution function of the non-crystalline scatterers, the thermal diffuse scattering properties and other non-Bragg features of powder patterns. The background-independent profile treatment can be especially desirable in quantitative phase analysis when amorphous admixtures must be accounted for. Further extensions of DDM may involve Bayesian probability theory, which has been utilized efficiently in background estimation procedures and in Rietveld refinement in the presence of impurities. DDM will also be useful at the initial steps of powder diffraction structure determination, when the structure model is absent and the background line cannot be determined correctly. The direct-space search methods of structure solution, in particular, may efficiently utilize DDM. [Pg.295]

After a first paper in 1988, Bretthorst paved the way for the application of Bayesian probability theory (BPT) in a series of three papers. The first presents the connection between the theory and the case of NMR phenomena and discusses parameter estimation and detection of quadrature signals. The second shows the ability of Bayesian methods to measure the quality of a model. The third provides examples of applications to experimental data where decaying sinusoids are assumed. A fourth paper, produced during the following two years, discusses computer time requirements and noise, showing the importance of including prior knowledge in the analysis, while a fifth publication is devoted to amplitude estimation for multiplets of well-separated resonances. [Pg.182]

Modern theory is often called Bayesian probability theory after Thomas Bayes, F.R.S. (1702-1761), who was a minister of the Presbyterian church. The theorem attributed to his name is central to the modern interpretation but, according to Maistrov, it appears nowhere in his writings and was first mentioned by Laplace, though only expressed in words. The theorem enables the updating of a probability estimate in the light of new information. For a set of mutually exclusive, collectively exhaustive events B1, B2, ..., Bn, P(A) can be expressed, as in Fig. 5.4, as... [Pg.77]
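A minimal numerical sketch of this update, assuming three mutually exclusive, collectively exhaustive events with invented probabilities:

```python
# Law of total probability and Bayes' theorem for mutually exclusive,
# collectively exhaustive events B1..Bn (illustrative numbers).
priors = [0.5, 0.3, 0.2]           # P(B_i), must sum to 1
likelihoods = [0.10, 0.60, 0.90]   # P(A | B_i)

# P(A) = sum_i P(A | B_i) P(B_i)
p_a = sum(l * p for l, p in zip(likelihoods, priors))

# Updated (posterior) probabilities: P(B_i | A) = P(A | B_i) P(B_i) / P(A)
posteriors = [l * p / p_a for l, p in zip(likelihoods, priors)]
print(f"P(A) = {p_a:.3f}")
print("P(B_i | A) =", [round(q, 3) for q in posteriors])
```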

The loss function is the cost of estimating with estimator θ̂ when the true parameter value is θ. The posterior mean is the Bayesian estimator that minimizes the squared-error loss function, while the posterior median is the Bayesian estimator that minimizes the absolute-value loss function. One of the strengths of Bayesian statistics is that we can decide on any particular loss function and find the estimator that minimizes it. This is covered in the field of statistical decision theory. We will not pursue this topic further in this book. Readers are referred to Berger (1980) and DeGroot (1970). [Pg.50]
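A quick Monte Carlo check of these two facts, assuming an invented skewed posterior (a gamma sample); the grid, sample size, and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative skewed "posterior" represented by Monte Carlo draws.
posterior = rng.gamma(shape=2.0, scale=1.5, size=50_000)

def expected_loss(est, loss):
    return loss(posterior - est).mean()

candidates = np.linspace(0.5, 6.0, 551)
best_sq  = candidates[np.argmin([expected_loss(c, lambda e: e ** 2) for c in candidates])]
best_abs = candidates[np.argmin([expected_loss(c, np.abs) for c in candidates])]

print("minimizer of squared-error loss:", round(float(best_sq), 2),
      "| posterior mean:", round(float(posterior.mean()), 2))
print("minimizer of absolute loss     :", round(float(best_abs), 2),
      "| posterior median:", round(float(np.median(posterior)), 2))
```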

One challenge in applying this approach, which relies on prior estimates of method prediction reliability, is how to deal with differences between future compounds to be tested and the universe of all compounds on which the collected experience of R&D process effectiveness has been based. If new active compounds fall within the space previously sampled, then knowledge of chemical properties is just another kind of conditioning within a Bayesian network; if they fall outside this space, then the initial model of both outcomes and predictions has an unpredictable error. The use of sampling theory and models of diversity [16] are therefore promising extensions of the above approach. [Pg.271]

We begin with a model for the shape of the SSD. For the sake of argument, we will assume that the SSD of B is approximately normal. That is, the histogram of the LC50 values for pesticide B looks approximately like a normal density with mean μB and variance σB². We may reasonably expect the SSD of A also to be normal, with unknown mean μA but the same variance, σA² = σB². Standard statistical theory tells us how to estimate μA and σA² from the few species that have been tested with A. But Bayesian statistics goes a bit further by telling us also how to use the information about pesticide B. [Pg.80]
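A sketch of how the information about pesticide B might be used, assuming, for illustration only, that the shared variance is treated as known and that B supplies a conjugate normal prior for μA; all numbers are invented:

```python
import numpy as np

# Illustrative log-LC50 values for the few species tested with pesticide A.
y = np.array([1.8, 2.4, 2.1])      # hypothetical data
n, ybar = len(y), y.mean()

sigma2 = 0.50                      # shared SSD variance, assumed known from B
mu_B, tau2 = 2.0, 0.25             # prior on mu_A centred at B's mean (illustrative)

# Conjugate normal-normal update for the unknown mean mu_A:
post_prec = 1.0 / tau2 + n / sigma2
post_mean = (mu_B / tau2 + n * ybar / sigma2) / post_prec
post_var = 1.0 / post_prec
print(f"posterior for mu_A: mean = {post_mean:.3f}, sd = {post_var ** 0.5:.3f}")
```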

Bias The systematic or persistent distortion of an estimate from the true value. From sampling theory, bias is a characteristic of the sample estimator of the sufficient statistics for the distribution of interest. Therefore, bias is not a function of the data, but of the method for estimating the population statistics. For example, the method for calculating the sample mean of a normal distribution is an unbiased estimator of the true but unknown population mean. Statistical bias is not a Bayesian concept, because Bayes' theorem does not rely on the long-term frequency expectations of sample estimators. [Pg.177]
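A short simulation of the point that bias belongs to the estimator, not the data: across repeated samples the sample mean averages to the true mean, while the divide-by-n variance estimator is biased low (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, true_var, n, reps = 10.0, 4.0, 5, 200_000

samples = rng.normal(true_mu, true_var ** 0.5, size=(reps, n))
means = samples.mean(axis=1)
var_biased = samples.var(axis=1)           # divides by n   -> biased low
var_unbias = samples.var(axis=1, ddof=1)   # divides by n-1 -> unbiased

print("E[sample mean]  ≈", round(float(means.mean()), 3), "(true:", true_mu, ")")
print("E[var, /n]      ≈", round(float(var_biased.mean()), 3), "(true:", true_var, ")")
print("E[var, /(n-1)]  ≈", round(float(var_unbias.mean()), 3))
```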

Chapter 3 provides an introduction to the identification of mathematical models for reactive systems and an extensive review of the methods for estimating the relevant adjustable parameters. The chapter opens with a comparison between the Bayesian approach and Popper's falsificationism. The aim is to establish a few fundamental ideas on the reliability of scientific knowledge, which is based on the comparison between alternative models and the experimental results, and is limited by the nonexhaustive nature of the available theories and by the unavoidable experimental errors. [Pg.4]

Snodgrass and Kitanidis [61] also used a probabilistic approach combining Bayesian theory and geostatistical techniques. In their method, the source function to be estimated is discretized into components that are assigned a known stochastic structure with unknown stochastic parameters. The method incor-... [Pg.82]

Bayesian analysis provides another (also computationally intensive) alternative for dealing with very weak signals and avoiding FT artifacts. This method, which uses probability theory to estimate the values of spectral parameters, permits prior information to be included in the analysis, such as the number of spectral lines, when known, or the existence of regular spacings from line splittings due to spin coupling. Commercial software is available for Bayesian analysis, and the technique is useful in certain circumstances. [Pg.75]

Obtaining parametric maps necessarily requires estimating the parameter vector θ from K noisy samples. The general theory of estimation [59,60] provides solutions that can be applied in the domain of quantitative MRI. In practice, the ML approach is the most commonly used, because it concerns the estimation of non-random parameters, unlike the Bayesian approach, which is mostly applied to segment the images [61]. The LS approaches defined by... [Pg.226]
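A minimal sketch of ML parameter estimation in this spirit, assuming a hypothetical mono-exponential signal model and i.i.d. Gaussian noise, under which maximizing the likelihood reduces to nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, s0, T):
    # Hypothetical mono-exponential decay S(t) = S0 * exp(-t / T).
    return s0 * np.exp(-t / T)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 16)
truth = (100.0, 0.6)                                   # invented S0 and decay time T
y = model(t, *truth) + rng.normal(0.0, 2.0, t.size)    # Gaussian-noised samples

# Under i.i.d. Gaussian noise, maximizing the likelihood is equivalent
# to minimizing the sum of squared residuals (nonlinear least squares).
(s0_hat, T_hat), cov = curve_fit(model, t, y, p0=(50.0, 1.0))
print(f"ML/LS estimates: S0 ≈ {s0_hat:.1f}, T ≈ {T_hat:.3f}")
```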

Unfortunately, there is nothing in statistical theory that allows determination of the correct error estimate taking model uncertainty into account. Bayesian model averaging has been advocated as one such solution, but this methodology is not without criticism and is not yet an adequate solution. For now, the only advice that can be given is to be aware of model uncertainty and recognize it as a limitation of iterative model development. [Pg.28]

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Recent developments in this field use particle swarm optimization and other swarm intelligence techniques. [Pg.917]
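A minimal sketch of "training as optimization", assuming an invented one-hidden-layer network, a squared-error cost, and plain gradient descent (sizes, data, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X)                        # illustrative target function

# One hidden layer of tanh units; "training" selects the weights that
# minimize the squared-error cost, here by plain gradient descent.
W1, b1 = rng.normal(0.0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 0.5, (8, 1)), np.zeros(1)
lr = 0.1

for step in range(5000):
    H = np.tanh(X @ W1 + b1)               # forward pass
    pred = H @ W2 + b2
    err = pred - y                         # cost = mean(err**2)

    G = 2.0 * err / len(X)                 # dCost/dpred
    dW2, db2 = H.T @ G, G.sum(axis=0)      # backward pass
    dH = (G @ W2.T) * (1.0 - H ** 2)
    dW1, db1 = X.T @ dH, dH.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final cost:", float((err ** 2).mean()))
```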

The method of maximum likelihood is the standard estimation procedure in statistical inference. Whether one looks at the inference problem from the point of view of classical repeated-sampling theory or Bayesian theory or straightforward likelihood theory, maximizing the likelihood emerges as the preferred procedure. There really is no dispute about this in regular estimation problems, and phylogenetic inference does seem to be unexceptional from a statistical point of view, even though it took a little while for the initial difficulties in the application of maximum likelihood to be sorted out. This was mainly done by Felsenstein (1968) and Thompson (1974) in their Ph.D. dissertations and subsequent publications. [Pg.186]

The lack of data to support claims for failure rates is an issue widely investigated by data uncertainty analyses. For example, Hauptmanns (2008) compares the use of reliability data stemming from different sources in probabilistic safety calculations, and shows that the results do not differ substantially. Wang (2004) discusses and identifies the inputs that may lead to changes in SIL estimation. Propagation-of-error, Monte Carlo, and Bayesian methods (Guerin, 2003) are quite common. Fuzzy set theory is also often used to handle data uncertainties, especially in fault tree analyses (Tanaka, 1983; Singer, 1990). Other approaches are based on evidence, possibility, and interval analyses (Helton, 2004). [Pg.1476]
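A sketch of the Monte Carlo treatment of data uncertainty mentioned above, assuming two hypothetical components in series with lognormally distributed failure rates; all rates, spreads, and the mission time are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100_000

# Hypothetical failure rates [1/h] with lognormal data uncertainty.
lam1 = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=N)
lam2 = rng.lognormal(mean=np.log(3e-6), sigma=0.8, size=N)

# Probability that a series system of the two components fails at least
# once over a mission time t (constant-rate model): 1 - exp(-(l1 + l2) t).
t = 8760.0  # one year in hours
p_fail = 1.0 - np.exp(-(lam1 + lam2) * t)

lo, med, hi = np.percentile(p_fail, [5, 50, 95])
print(f"P(failure over {t:.0f} h): 5% {lo:.3f}, median {med:.3f}, 95% {hi:.3f}")
```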

In mathematical statistics it is well known that the Bayesian method allows a combination of two kinds of information: prior information (for instance, generic statistical data or the subjective opinion of experts) and measurements or observations (Bernardo et al., 2003; Berthold et al., 2003). The Bayesian method allows updating the estimates of all parameters in the model with a single newly obtained observation, i.e. it does not require new information on the values of all factors involved in the model. [Pg.394]
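The single-observation updating property can be illustrated with a conjugate Beta-Binomial sketch; the prior and the stream of 0/1 observations are invented:

```python
# Conjugate Beta-Binomial updating: each new observation updates the
# posterior without re-processing the full data set (illustrative prior,
# e.g. from generic data or expert opinion).
a, b = 2.0, 8.0                      # Beta(a, b) prior on an unknown probability p
observations = [1, 0, 0, 1, 0]       # new 0/1 observations arriving one at a time

for k, obs in enumerate(observations, 1):
    a += obs
    b += 1 - obs
    print(f"after obs {k}: posterior mean of p = {a / (a + b):.3f}")
```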

In order to estimate the information quality (IQ) of a selected system, one should model the uncertainty, the inverse of which can indicate IQ. Uncertainty can be modelled using, among other tools, Bayesian networks. However, in this case the data come from various sources, and to simplify the modelling, and thereby the calculation, the mathematical theory of evidence created by Dempster and Shafer can be used. However, since we assess information from computer systems (modern detectors and...
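A minimal sketch of Dempster's rule of combination for two sources over a small frame of discernment; the frame and mass assignments are invented:

```python
from itertools import product

# Dempster's rule of combination over the frame {'ok', 'fault'}.
# Focal elements are frozensets; masses come from two illustrative sources.
m1 = {frozenset({'ok'}): 0.6, frozenset({'ok', 'fault'}): 0.4}
m2 = {frozenset({'fault'}): 0.5, frozenset({'ok', 'fault'}): 0.5}

combined, conflict = {}, 0.0
for (A, mA), (B, mB) in product(m1.items(), m2.items()):
    C = A & B
    if C:
        combined[C] = combined.get(C, 0.0) + mA * mB
    else:
        conflict += mA * mB          # mass assigned to the empty set

# Normalize by 1 - K, where K is the total conflicting mass.
combined = {A: m / (1.0 - conflict) for A, m in combined.items()}
print("conflict K =", conflict)
for A, m in combined.items():
    print(set(A), round(m, 3))
```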


See other pages where Bayesian estimation theory is mentioned: [Pg.122]    [Pg.122]    [Pg.272]    [Pg.6432]    [Pg.277]    [Pg.6431]    [Pg.120]    [Pg.279]    [Pg.458]    [Pg.9]    [Pg.318]    [Pg.318]    [Pg.131]    [Pg.182]    [Pg.138]    [Pg.154]    [Pg.854]    [Pg.265]    [Pg.2182]    [Pg.2185]    [Pg.86]    [Pg.94]    [Pg.244]    [Pg.104]    [Pg.155]    [Pg.228]    [Pg.231]    [Pg.2]    [Pg.88]   

Bayesian

Bayesian estimation

Bayesian theory

Bayesians

Estimation theory
