Big Chemical Encyclopedia


Bayesian interval estimation

The posterior density function is the key to Bayesian parameter estimation, both for single-response and multiresponse data. Its mode gives point estimates of the parameters, and its spread can be used to calculate intervals of given probability content. These intervals indicate how well the parameters have been estimated; they should always be reported. [Pg.165]

The overall goal of Bayesian inference is to learn the posterior. The fundamental idea behind nearly all statistical methods is that, as the sample size increases, the distribution of a random sample from a population approaches the distribution of the population. Thus, the distribution of a random sample from the posterior will approach the true posterior distribution. Other inferences, such as point and interval estimates of the parameters, can be constructed from the posterior sample. For example, if we had a random sample from the posterior, any parameter could be estimated by the corresponding statistic calculated from that random sample. We could achieve any required level of accuracy for our estimates by making sure our random sample from the posterior is large enough. Existing exploratory data analysis (EDA) techniques can be used on the sample from the posterior to explore the relationships between parameters in the posterior. [Pg.20]
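
The recipe in this passage — point estimate and interval from a posterior sample — can be sketched in a few lines. The posterior draws below are synthetic stand-ins for MCMC output (the Normal(2.5, 0.3) target is a made-up example, not from the source):

```python
import random
import statistics

random.seed(1)

# Hypothetical posterior sample: pretend an MCMC run returned 10,000 draws
# of a parameter theta whose posterior is roughly Normal(2.5, 0.3).
draws = [random.gauss(2.5, 0.3) for _ in range(10_000)]

# Point estimate: the posterior mean (the median works equally well).
theta_hat = statistics.fmean(draws)

# 90% equal-tailed credible interval: 5th and 95th percentiles of the draws.
draws.sort()
lo = draws[int(0.05 * len(draws))]
hi = draws[int(0.95 * len(draws))]

print(f"point estimate: {theta_hat:.2f}")
print(f"90% credible interval: ({lo:.2f}, {hi:.2f})")
```

As the passage notes, enlarging the sample shrinks the Monte Carlo error of both the point estimate and the interval endpoints.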

However, in most cases the handling of the posterior density p(θ|z) and its integration in Equation 18.20 is analytically intractable, and the sophisticated but computationally demanding numerical simulation techniques known as Markov chain Monte Carlo (MCMC) must be used to determine Bayesian parameter estimates and their confidence intervals; see, for example, Pillonetto et al. (2002) and Magni et al. (2001) for two recent applications. [Pg.364]
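
A minimal random-walk Metropolis sampler — the simplest member of the MCMC family this passage refers to — can be sketched as follows. The target density here is a hypothetical stand-in (a Normal shape), not the posterior from the cited papers:

```python
import math
import random

random.seed(0)

# Stand-in for an intractable log posterior log p(theta | z):
# here a Normal(1.0, 0.5) shape, chosen only for illustration.
def log_post(theta):
    return -0.5 * ((theta - 1.0) / 0.5) ** 2

def metropolis(n_steps, step=0.5, theta0=0.0):
    """Random-walk Metropolis: propose, then accept/reject."""
    chain = []
    theta, lp = theta0, log_post(theta0)
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)        # propose a move
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta)                           # keep current state either way
    return chain

chain = metropolis(20_000)[5_000:]   # discard burn-in
chain.sort()
ci = (chain[int(0.05 * len(chain))], chain[int(0.95 * len(chain))])
print(f"posterior mean ~ {sum(chain) / len(chain):.2f}, 90% interval ~ ({ci[0]:.2f}, {ci[1]:.2f})")
```

The retained chain is then summarized exactly like any other posterior sample: its percentiles give the credible intervals discussed throughout this entry.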

The Bayesian equivalent of the frequentist 90% confidence interval is delineated by the 5th and 95th percentiles of the posterior distribution. Bayesian confidence intervals for the SSD (Figures 5.4 to 5.5), the 5th percentile (i.e., HC5), and the fraction affected (FA) (Figures 5.4 to 5.6) were calculated from the posterior distribution. Thus, the uncertainties of both HC and FA are established in one consistent mathematical framework: FA estimates at log10 HC lead to the intended protection percentage, i.e., FA(log10 HCp) = p, where p is the protection level. Further, the full distribution of HC and FA uncertainty can easily be extracted from the posterior distribution for any level of protection and visualized (Figures 5.5 to 5.7). [Pg.83]
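
The identity FA(log10 HCp) = p can be checked directly for a log-normal SSD. The SSD parameters below are invented for illustration; only the relationship between HC5 and the fraction affected comes from the passage:

```python
import math

# Assumed (hypothetical) SSD parameters: mean and sd of log10 species sensitivities.
mu, sigma = 1.2, 0.6

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p, tol=1e-10):
    """Inverse standard normal CDF by bisection (stdlib-only sketch)."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = 0.05
log_hc5 = mu + sigma * norm_ppf(p)      # log10 of the HC5
fa = norm_cdf((log_hc5 - mu) / sigma)   # fraction affected at the HC5

print(f"HC5 = {10 ** log_hc5:.2f}, FA(log10 HC5) = {fa:.3f}")  # FA recovers p = 0.05
```

In the full Bayesian treatment, mu and sigma would themselves be posterior draws, so repeating this calculation per draw yields the uncertainty distributions of HC5 and FA.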

Often, maximum-likelihood and maximum-posterior estimates are quite consistent and provide good predictive models for many practical applications (Needham et al., 2008). The confidence intervals for each parameter can also be obtained from either the maximum-likelihood or the maximum-posterior estimators. Note again that the parameters of Bayesian networks may be learned even when the data are incomplete, using statistical techniques such as the EM algorithm and the Markov chain Monte Carlo (MCMC) technique (Murphy and Mian, 1999). [Pg.265]
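
The distinction between maximum-likelihood and maximum-posterior estimates is easy to see in the conjugate Beta-Binomial case. The counts and the Beta(2, 2) prior below are hypothetical:

```python
# Maximum-likelihood vs maximum-posterior (MAP) point estimates for a
# binomial success probability, with a hypothetical Beta(2, 2) prior.
successes, trials = 7, 10

# MLE: the observed frequency.
mle = successes / trials

# MAP with a Beta(a, b) prior: the mode of the Beta(a + s, b + n - s) posterior.
a, b = 2, 2
map_est = (a + successes - 1) / (a + b + trials - 2)

print(f"MLE = {mle:.2f}, MAP = {map_est:.2f}")  # 0.70 vs 0.67
```

The prior pulls the MAP estimate toward 0.5; as the data grow, the two estimates converge, which is the consistency the passage describes.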

The Bayesian spectral density approach for parametric identification and a model-updating regression analysis are applied. During the monitoring period, four typhoons passed over Macao. The structural behavior under such violent wind excitation is treated as discordant, and the measurements obtained under these events are not taken into account in the analysis. By excluding these fifteen days of measurements, there are 168 pairs of identified squared fundamental frequency and measured temperature in the data set. Figure 2.28(a) shows the variation of the identified squared fundamental frequencies with their associated uncertainties, represented by a confidence interval bounded by plus or minus three standard deviations from the estimated values. Note that this confidence interval contains 99.7% of the probability. Since the confidence intervals are narrow compared with the variation... [Pg.66]
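
The 99.7% figure quoted for a plus/minus three standard deviation band follows from the standard normal CDF and can be verified directly:

```python
import math

# Probability content of a +/- 3 standard deviation band around a
# normally distributed estimate, as used for the frequency intervals above.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

coverage = norm_cdf(3.0) - norm_cdf(-3.0)
print(f"P(-3 < Z < 3) = {coverage:.4f}")  # 0.9973
```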

Chen, M.-H. and Shao, Q.-M. (1999) Monte Carlo estimation of Bayesian credible and HPD intervals, Journal of Computational and Graphical Statistics 8, 69-92. [Pg.37]
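
The Monte Carlo HPD idea cited here can be sketched simply: among all intervals of sorted posterior draws containing the required probability content, take the shortest. The skewed draws below are synthetic, and this is a sketch in the spirit of that approach rather than a reproduction of the paper's algorithm:

```python
import random

random.seed(7)

# Synthetic draws from a hypothetical skewed posterior (Exponential(1)).
draws = sorted(random.expovariate(1.0) for _ in range(20_000))

def hpd(sorted_draws, content=0.95):
    """Shortest interval among sorted draws containing `content` of them."""
    n = len(sorted_draws)
    k = int(content * n)                   # number of draws inside the interval
    widths = [(sorted_draws[i + k] - sorted_draws[i], i) for i in range(n - k)]
    _, i = min(widths)                     # shortest interval wins
    return sorted_draws[i], sorted_draws[i + k]

lo, hi = hpd(draws)
print(f"95% HPD interval: ({lo:.3f}, {hi:.3f})")
```

For a skewed posterior such as this one, the HPD interval hugs the mode (here, zero) and is shorter than the equal-tailed interval, which is why HPD intervals are preferred when the posterior is asymmetric.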

A practical challenge of Bayesian meta-analysis for rare AE data is that noninformative priors may lead to convergence failure because the data are very sparse. Weakly informative priors may be used to solve this issue. In the example of the previous Bayesian meta-analysis with piecewise exponential survival models, the following priors for the log hazard ratio (HR) (see Table 14.1) were considered. Prior 1 assumes a nonzero treatment effect, with a mean log(HR) of 0.7 and a standard deviation of 2. This roughly translates to a 95% confidence interval (CI) for the HR of 0.04 to 110, with an HR estimate of 2.0. Prior 2 assumes a zero treatment effect, with a mean log(HR) of 0 and a standard deviation of 2. This roughly translates to the assumption that we are 95% sure that the HR for the treatment effect is between 0.02 and 55, with a mean hazard estimate of 1.0. Prior 3 assumes a nonzero treatment effect that is more informative than Prior 1, with a mean log(HR) of 0.7 and a standard deviation of 0.7. This roughly translates to the assumption that we are 95% sure that the HR... [Pg.256]
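
The translation from a prior on log(HR) to an approximate HR interval is plain arithmetic: take the mean plus or minus two standard deviations on the log scale, then exponentiate (two standard deviations reproduces the 0.04-110 range quoted for Prior 1):

```python
import math

# Prior 1 from the passage: log(HR) ~ mean 0.7, sd 2.
mean_log_hr, sd_log_hr = 0.7, 2.0

hr_point = math.exp(mean_log_hr)                    # ~2.0
hr_lo = math.exp(mean_log_hr - 2.0 * sd_log_hr)     # ~0.04
hr_hi = math.exp(mean_log_hr + 2.0 * sd_log_hr)     # ~110

print(f"HR estimate ~ {hr_point:.2f}, 95% range ~ ({hr_lo:.2f}, {hr_hi:.0f})")
```

Repeating with mean 0 recovers the (0.02, 55) range quoted for Prior 2.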

The lack of data to support claims for failure rates is an issue widely investigated by data uncertainty analyses. For example, Hauptmanns (2008) compares the use of reliability data from different sources in probabilistic safety calculations, and suggests that the results do not differ substantially. Wang (2004) discusses and identifies the inputs that may lead to SIL estimation changes. Propagation of error, Monte Carlo, and Bayesian methods (Guerin, 2003) are quite common. Fuzzy set theory is also often used to handle data uncertainties, especially in fault tree analyses (Tanaka, 1983; Singer, 1990). Other approaches are based on evidence, possibility, and interval analyses (Helton, 2004). [Pg.1476]
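
Monte Carlo propagation of failure-rate uncertainty, one of the common methods named above, can be sketched as follows. The lognormal uncertainty on the rate and the one-year mission time are hypothetical:

```python
import math
import random
import statistics

random.seed(3)

# Hypothetical lognormal uncertainty on a failure rate lam (per hour),
# propagated through the unavailability 1 - exp(-lam * t).
median_rate, log_sd, t = 1e-5, 0.5, 8760.0   # one year of hours

samples = []
for _ in range(50_000):
    lam = median_rate * math.exp(random.gauss(0.0, log_sd))  # draw a rate
    samples.append(1.0 - math.exp(-lam * t))                 # propagate it

samples.sort()
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"unavailability: median ~ {statistics.median(samples):.3f}, "
      f"90% band ~ ({p05:.3f}, {p95:.3f})")
```

The output band shows how a factor-of-a-few uncertainty in the rate translates directly into the safety figure of interest.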

We consider the first update (i = 1). By the approach in step 3, we estimate the rate of DU-failures by either the empirical estimate λ̂DU,1 or the Bayesian estimate λ̃DU,1. In the following, assume that we have chosen to use the Bayesian estimate λ̃DU,1 (if we use the empirical estimate, we get the same formulas). Next, determine the 90% confidence (or credibility) interval for λ̃DU,1. We then calculate the ratio λDU,0/λ̃DU,1. This ratio indicates the fractional change in failure rate and thus the allowed change of the test interval. By using eq. (1), an updated test interval t can now be calculated as ... [Pg.1627]
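
Eq. (1) itself is not reproduced in this excerpt, so the sketch below ASSUMES it scales the test interval in proportion to the failure-rate ratio; every number is made up for illustration:

```python
# Hypothetical sketch of the test-interval update. The excerpt's eq. (1) is
# not shown, so we ASSUME proportional scaling by the ratio lam0 / lam1.
lam0 = 2.0e-6          # initial DU-failure rate estimate (per hour), invented
lam1 = 3.0e-6          # updated Bayesian estimate after the first interval, invented
tau0 = 8760.0          # initial test interval (hours), invented

ratio = lam0 / lam1    # fractional change in failure rate
tau1 = tau0 * ratio    # allowed updated test interval under the assumption

print(f"ratio = {ratio:.3f}, updated test interval ~ {tau1:.0f} h")
```

Under this reading, a higher updated failure rate (ratio below 1) shortens the allowed test interval, which matches the qualitative logic of the passage.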

In method 1, the probabilities of exceedance shown in the FN-curve are given a frequentist interpretation: they should be interpreted as relative frequencies. The intervals around the probability of exceedance and the number of fatalities are the result of the uncertainty related to the estimates of the relative frequencies and consequences of floods. An important disadvantage of this method is that its underlying philosophy differs from the one that underlies the existing curves (FLORIS-1). In the FLORIS project, probabilities are interpreted in a Bayesian (subjective) sense, rather than as relative frequencies. [Pg.1985]

Galiatsatou (Fig. 38.5) estimates medians and 95% confidence intervals of return-level estimates for wave heights, when the extreme value parameters are estimated with (a) maximum likelihood (ML), (b) Bayesian estimation with noninformative prior distributions, and (c) L-moments (LM) estimation procedures. [Pg.1047]
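
A return-level estimate with a confidence interval can be sketched with a Gumbel model fitted by moments plus a bootstrap — a simpler stand-in for the ML, Bayesian, and L-moment fits compared in the passage. The annual-maximum data are synthetic:

```python
import math
import random
import statistics

random.seed(11)

EULER = 0.5772156649  # Euler-Mascheroni constant

def gumbel_rl(data, T):
    """T-year return level from a Gumbel fit by the method of moments."""
    beta = statistics.stdev(data) * math.sqrt(6.0) / math.pi  # scale
    mu = statistics.fmean(data) - EULER * beta                # location
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual-maximum "wave heights" (metres), purely illustrative.
data = [5.0 + random.gauss(0.0, 1.0) ** 2 for _ in range(50)]

rl100 = gumbel_rl(data, 100.0)                 # 100-year return level
boot = sorted(gumbel_rl(random.choices(data, k=len(data)), 100.0)
              for _ in range(2_000))
ci = (boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))])
print(f"100-yr return level ~ {rl100:.2f} m, 95% CI ~ ({ci[0]:.2f}, {ci[1]:.2f}) m")
```

A Bayesian fit would replace the bootstrap with posterior draws of the Gumbel parameters, but the interval summary at the end is identical in form.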

Pillonetto, G., Sparacino, G., Magni, P., Bellazzi, R., and Cobelli, C. 2002. Minimal model S(I)=0 problem in NIDDM subjects: nonzero Bayesian estimates with credible confidence intervals. Am. J. Physiol. [Pg.177]

So far, the discussion here has centered on retrieving quantified date information for individual seismic events. However, in many instances it is equally important to obtain information about the interval between events, especially when estimating risk. Fortunately, the Bayesian models allow this information to be extracted and summarized. [Pg.2028]
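
If a Bayesian age model returns posterior draws for the dates of two events, the posterior of the inter-event interval is simply the draw-by-draw difference. The event dates below are hypothetical:

```python
import random
import statistics

random.seed(5)

# Hypothetical posterior draws for the dates of two seismic events (years BP).
event_a = [random.gauss(1200.0, 40.0) for _ in range(10_000)]
event_b = [random.gauss(900.0, 60.0) for _ in range(10_000)]

# Posterior of the inter-event interval: difference draw by draw.
gaps = sorted(a - b for a, b in zip(event_a, event_b))
lo = gaps[int(0.05 * len(gaps))]
hi = gaps[int(0.95 * len(gaps))]
print(f"inter-event interval: median ~ {statistics.median(gaps):.0f} yr, "
      f"90% interval ~ ({lo:.0f}, {hi:.0f}) yr")
```

Note that the interval's uncertainty combines the dating uncertainty of both events, which is why it is wider than either date's own credible interval.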

This section discusses the calculation of the uncertainty in the damage quantification stage, by estimating the uncertainty in the value of the damage parameter. The classical statistics-based approach calculates statistical confidence intervals on the value of the damage parameter, while the Bayesian statistics-based approach directly calculates the probability distribution of the value of the damage parameter. [Pg.3831]
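
The two routes can be contrasted on fake measurements of a stiffness-like damage parameter k1 (all values below are hypothetical, not the results quoted later in this entry):

```python
import math
import random
import statistics

random.seed(2)

# Hypothetical noisy measurements of a damage parameter k1 (N/m).
data = [24_560.0 + random.gauss(0.0, 50.0) for _ in range(25)]
n = len(data)
mean = statistics.fmean(data)
sem = statistics.stdev(data) / math.sqrt(n)   # standard error of the mean

# Classical route: a 90% confidence interval on the value of k1.
ci = (mean - 1.645 * sem, mean + 1.645 * sem)

# Bayesian route (flat prior, known-variance approximation): the posterior
# for k1 is a full distribution, roughly Normal(mean, sem) -- draw from it.
posterior = [random.gauss(mean, sem) for _ in range(10_000)]

print(f"90% CI: ({ci[0]:.0f}, {ci[1]:.0f}) N/m; "
      f"posterior sd ~ {statistics.stdev(posterior):.1f} N/m")
```

The classical route ends with a single interval, while the Bayesian posterior can be propagated onward, which is what enables the overall diagnosis uncertainty described next.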

Damage quantification using the classical statistics procedure yields a 90% confidence interval of (24,509 N/m, 24,612 N/m). The Bayesian approach directly calculates the probability distribution of k1, as explained in the section Uncertainty in Damage Estimation. The overall uncertainty in diagnosis can be calculated as in the section Overall Uncertainty in Diagnosis, and the corresponding probability density function is shown in Fig. 4. From Fig. 4, it can be seen that... [Pg.3834]



© 2024 chempedia.info