Big Chemical Encyclopedia


Best estimate approach

The analytical studies of accidents can be performed either by a conservative approach or by a best estimate approach. In the first case, conservative assumptions are adopted for the initial and boundary conditions and for the various elements of the evaluation (correlations, parameters, equipment availability, etc.). Despite its obvious advantages for safety, this approach frequently leads to a completely unrealistic description of the real accident sequence, with a distorted timing of events and the masking of interesting phenomena (see also Chapter 27). Because of these shortcomings, and given the current maturity of best estimate codes, such codes should be used in a safety analysis in combination with a reasonably conservative selection of input data... [Pg.96]

Best estimate approach Best estimate approaches to safety evaluation, or best estimate codes, are those based on a faithful representation of plant behaviour. They should be used in a safety analysis in combination with a reasonably conservative selection of input data and a sufficient evaluation of the uncertainties of the results. This approach is accepted by regulatory bodies; it may also be acceptable to use a combination of a best estimate code and realistic assumptions on initial and boundary conditions. The best estimate approach is the opposite of a conservative approach. [Pg.423]

Conservative approach Conservative approaches to safety evaluation, or conservative code analyses, are those where every assumption is chosen in a conservative way, in the light of the phenomenon to be evaluated. This approach is the opposite of the best estimate approach. [Pg.423]

The PSA should set out to determine all significant contributors to risk from the plant and should evaluate the extent to which the overall system configuration is well balanced, there are no risk outliers, and the design meets basic probabilistic targets. The PSA should preferably use a best estimate approach. [Pg.34]

Guidelines for best estimate approach to accident analysis of WWER NPPs, IAEA, WWER-SC-133, 1995. [Pg.258]

Guidelines for accident analysis of WWER nuclear power plants, IAEA, EBP-WWER-01, 1995. Guidelines for best estimate approach to accident analysis of WWER NPPs, IAEA, WWER-SC-133, 1995. [Pg.260]

ATWS analyses were carried out over a broad range of all credible anticipated operational occurrences in order to tune the functional design of a Diverse Protection System (DPS) which is being implemented at Temelin NPP (WWER-1000 NPP). The ATWS analyses and evaluations were performed in accordance with NUREG-0460 guidelines, using a best-estimate approach. [Pg.267]

This target covers the individual risk of death to a worker on the site, from all on-site accident sequences that result in exposure to ionising radiation. It requires the calculation of the maximum effective dose to the worker potentially most exposed to ionising radiation for each sequence; this will be done using a best estimate approach. [Pg.152]

Severe accidents will be considered on a best estimate basis. Part of this best-estimate approach includes the development of realistic source terms for each advanced design. [Pg.14]

Implicit in the implementation of a bounding best-estimate approach is the acceptance of very limited, probabilistic core damage. The state of knowledge is commensurate with best-estimate modeling of the accident and treatment of uncertainties, although some parameters are conservatively... [Pg.566]

The key characteristic of a RM is that the properties of interest are measured and certified on the basis of accuracy. The means of attaining the true value are varied, and several different philosophies have been utilized in the quest for the best estimate of the true value. The goal of all approaches is arrival at the best possible estimate of the true value: a reliable and unassailable numerical value of the concentration of the chemical constituent, under constraints of economics, state-of-the-art analytical technologies, availability of (new and old) methods, analyst competence, availability of analysts, and RM end-use requirements. The basic requirement for producing reliable data is appropriate methodology, adequately calibrated and properly used. [Pg.51]

Calorimetry. Radioactive decay produces heat, and the rate of heat production can be used to calculate half-life. If the heat production from a known quantity of a pure parent, P, is measured by calorimetry, and the energy released by each decay is also known, the half-life can be calculated in a manner similar to that of the specific activity approach. Calorimetry has been widely used to assess half-lives and works particularly well for pure α-emitters (Attree et al. 1962). As with the specific activity approach, calibration of the measurement technique and purity of the nuclide are the two biggest problems to overcome. Calorimetry provides the best estimates of the half-lives of several U-series nuclides including ²³¹Pa, ²²⁶Ra, ²²⁷Ac, and ²¹⁰Po (Holden 1990). [Pg.15]
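The arithmetic behind the calorimetric method is simple: the measured heat power divided by the energy per decay gives the activity, and dividing by the number of parent atoms gives the decay constant. A minimal sketch (the sample size, power, and decay energy below are illustrative placeholders, not a real measurement):

```python
import math

def half_life_from_calorimetry(power_w, n_atoms, energy_per_decay_j):
    """Estimate half-life (s) from calorimetric heat output.

    power_w            : measured heat production rate (W = J/s)
    n_atoms            : number of atoms of the pure parent nuclide
    energy_per_decay_j : energy released per decay (J)
    """
    activity = power_w / energy_per_decay_j      # decays per second
    decay_const = activity / n_atoms             # lambda (1/s)
    return math.log(2) / decay_const

# Illustrative numbers (not a real measurement): 1 g of a hypothetical
# alpha-emitter with mass number 210, 5 MeV per decay, 30 mW of heat.
MEV_TO_J = 1.602e-13
n = (1.0 / 210.0) * 6.022e23                     # atoms in 1 g sample
t_half_s = half_life_from_calorimetry(0.030, n, 5.0 * MEV_TO_J)
print(f"half-life = {t_half_s:.3e} s ({t_half_s / 86400.0:.3e} days)")
```

Note the inverse relationship: doubling the measured heat power (for the same sample) halves the inferred half-life, which is why calibration of the calorimeter dominates the uncertainty budget.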

Risk assessment pertains to characterization of the probability of adverse health effects occurring as a result of human exposure. Recent trends in risk assessment have encouraged the use of realistic exposure scenarios, the totality of available data, and the uncertainty in the data, as well as their quality, in arriving at a best estimate of the risk to exposed populations. The use of "worst case" and even other single point values is an extremely conservative approach and does not offer realistic characterization of risk. Even the use of arithmetic mean values obtained under maximum use conditions may be considered to be conservative and not descriptive of the range of exposures experienced by workers. Use of the entirety of data is more scientific and statistically defensible and would provide a distribution of plausible values. [Pg.36]
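The contrast between a single "worst case" point value and a distribution of plausible exposures can be made concrete with a small Monte Carlo sketch. The lognormal parameters below are invented for illustration, not taken from any real exposure dataset:

```python
import random
import statistics

random.seed(1)

# Hypothetical daily exposures (mg/kg/day) drawn from a lognormal
# distribution, a common model for worker-exposure data.
samples = [random.lognormvariate(mu=-2.0, sigma=0.8) for _ in range(10_000)]
samples.sort()

worst_case = max(samples)                   # single "worst case" point
mean_val = statistics.mean(samples)         # arithmetic mean
p50 = samples[len(samples) // 2]            # median
p95 = samples[int(0.95 * len(samples))]     # 95th percentile

print(f"worst case {worst_case:.3f}, mean {mean_val:.3f}, "
      f"median {p50:.3f}, 95th percentile {p95:.3f}")
```

For a right-skewed distribution like this, the worst case exceeds even the 95th percentile by a wide margin, and the arithmetic mean sits above the median, which is the sense in which single point values are "extremely conservative" relative to the full distribution.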

This approach discriminates factors to a large extent in order to distinguish between the single adjustments and to separate best estimates from uncertainty. It should be noted that the ECETOC approach does not mention the establishment of an overall factor, and although they mention that all discriminated aspects introduce uncertainties, they do not give guidance on how to account for this. It could also be questioned here whether a nonscientific factor should be discussed in a scientific risk assessment. [Pg.220]

The best estimate σ̂² of σ² is related to the magnitude of the discrepancies v. The value of σ̂² is an average of the weighted squares of the discrepancies, taking into account that the fit will progressively improve as the number of unknowns m approaches the number of observations n. When n = m the solution of the observational equations is exact, but the variances and covariances are indeterminate. The best estimate σ̂² is obtained from... [Pg.77]
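The standard estimator that matches this description divides the weighted sum of squared discrepancies by the degrees of freedom n − m. A small sketch with synthetic data (the model, weights, and noise level are illustrative assumptions):

```python
import numpy as np

# Weighted least-squares fit y ~ X @ beta; the variance of unit weight
# is estimated from the weighted squared residuals divided by (n - m).
rng = np.random.default_rng(0)
n, m = 20, 2
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])           # design matrix (n x m)
w = np.full(n, 1.0)                            # observation weights
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, n)    # synthetic data, sigma = 0.1

# Solve the weighted normal equations
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
v = y - X @ beta                               # discrepancies (residuals)
sigma2_hat = (w * v**2).sum() / (n - m)        # best estimate of sigma^2
print(beta, sigma2_hat)
```

Dividing by n − m rather than n reflects the point in the text: the residuals shrink as m approaches n, and at n = m they vanish entirely, leaving the variance indeterminate (division by zero).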

N. Wiener's solution was originally derived in the frequency domain for time-invariant systems with stationary statistics. In what follows, a matrix solution derived from the same approach but developed in the time domain for time-varying systems and non-stationary statistics will be presented (22-23). An expression for the required transformation H in Equation 7 will be obtained. In all that follows, we shall denote by x̂ the best estimate of x, i.e. an estimate such that ... [Pg.290]
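The time-domain matrix form of such a minimum-mean-square-error estimate can be sketched for a linear model y = A x + n with known covariances. All dimensions and covariance values below are illustrative assumptions, not quantities from the cited derivation:

```python
import numpy as np

# Minimum-mean-square-error linear estimate x_hat = H @ y for the model
# y = A @ x + n, with zero-mean signal x and noise n of known
# covariances. This is a discrete time-domain analogue of the Wiener
# solution; the standard result is H = Cx A^T (A Cx A^T + Cn)^{-1}.
rng = np.random.default_rng(42)
m, k = 4, 6
A = rng.normal(size=(k, m))                 # observation matrix (assumed)
Cx = np.eye(m)                              # signal covariance (assumed)
Cn = 0.1 * np.eye(k)                        # noise covariance (assumed)

H = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)   # optimal transformation

x = rng.normal(size=m)
y = A @ x + rng.normal(scale=np.sqrt(0.1), size=k)
x_hat = H @ y
print(np.linalg.norm(x - x_hat), np.linalg.norm(x))
```

By construction H minimizes the expected squared estimation error over the assumed signal and noise statistics, which is exactly the sense of "best estimate" used in this passage.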

Still unexplained, at least to this writer, is the large distance behind the detonation front at which EMV-measured u approaches uCJ (Fig 7). Fisson and Brochet (Refs 9 & 10) used the EMV technique to determine uCJ in Nitromethane (NM) and Isopropyl Nitrate. For NM their results are shown in Fig 10. Their best estimate for uCJ is 1.70 km/sec, which, somewhat unexpectedly, is a little larger than the uCJ determined by the FSV method. Recall that for solid expls (Refs 1, 3, 5 & 13 and Table 1) the FSV method gives higher values of uCJ than the EMV method. LASL's theoretically computed uCJ for NM (BKW EOS) is 1.78 km/sec... [Pg.238]

Quantification of ORD and CD Data. In principle ORD and CD can be used to calculate the amounts of α, β, and random conformations in protein, but in practice such estimates are subject to large errors. The Moffitt-Yang plot is probably the best estimate of percentage of α-helicity, but it is unable to distinguish between the β and random structures. A detailed analysis of CD bands and their resultant Cotton effects, combined with infrared data, is the most promising approach; even here the limits of error are large (82). Traditional estimates have been based on combinations of α-helix and random coil, and attention has been centered upon estimation of helical content. Consideration of β structure has been introduced more recently. The technique must be calibrated empirically with synthetic polypeptides of known conformation, and the proper choice of reference is not obvious. The β structure seems to be particularly variable in its rotational properties (27, 82). [Pg.281]
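The underlying calculation is a linear decomposition: the observed spectrum is fitted as a weighted sum of reference spectra for the α, β, and random conformations. The sketch below uses synthetic Gaussian curves as stand-ins for real polypeptide calibration spectra, so only the fitting procedure, not the numbers, is meaningful:

```python
import numpy as np

# Decompose an observed CD spectrum into fractional contributions of
# alpha-helix, beta-sheet, and random-coil reference spectra by linear
# least squares. The reference curves here are synthetic placeholders,
# not real polypeptide calibration data.
wavelengths = np.linspace(190.0, 250.0, 61)   # nm

def band(center, width, amp):
    """Synthetic Gaussian CD band (placeholder for a measured curve)."""
    return amp * np.exp(-((wavelengths - center) / width) ** 2)

basis = np.column_stack([
    band(208, 8, -20) + band(222, 8, -18),    # "alpha-helix" shape
    band(217, 9, -10),                        # "beta-sheet" shape
    band(198, 7, -15),                        # "random-coil" shape
])

true_frac = np.array([0.6, 0.3, 0.1])
observed = basis @ true_frac                  # noise-free synthetic spectrum

coef, *_ = np.linalg.lstsq(basis, observed, rcond=None)
fractions = coef / coef.sum()                 # normalize to fractions
print(fractions)
```

With real data the β and coil reference spectra overlap heavily, which is precisely why the text warns that the fit cannot reliably separate them and that the choice of reference polypeptides dominates the error.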

Simulation. One approach is to assume the sample mean and standard deviation are the true population mean and standard deviation, to provide a best estimate of the true probability of passing. This has the advantage that it can provide estimates of the probability of passing at any stage and can handle the nonsymmetric potency shelf life limits in the content uniformity test. The disadvantage is that it does not provide a bound on the probability with high assurance and is not a function of sample size. It can provide a good summary statistic of the content uniformity data, however. [Pg.717]
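A minimal version of this simulation treats the observed batch mean and standard deviation as the true population parameters and counts how often a simulated batch passes. The pass rule below (all 10 units within 85-115 % of label claim) is a deliberate simplification, not the full compendial two-stage content uniformity test, and the batch statistics are invented:

```python
import random

# Monte Carlo estimate of the probability that a batch passes a
# simplified uniformity criterion, taking the sample mean and standard
# deviation as the true population parameters.
random.seed(7)
sample_mean, sample_sd = 100.0, 6.0          # observed batch statistics (% LC)

def batch_passes(n_units=10, lo=85.0, hi=115.0):
    """Simulate one batch of units; pass if every unit is within limits."""
    units = [random.gauss(sample_mean, sample_sd) for _ in range(n_units)]
    return all(lo <= u <= hi for u in units)

n_trials = 20_000
p_pass = sum(batch_passes() for _ in range(n_trials)) / n_trials
print(f"estimated probability of passing: {p_pass:.4f}")
```

This illustrates the point in the text: the simulation yields a best estimate of the pass probability (here about 0.88 for the assumed parameters), but because it plugs in the sample statistics as if they were the truth, it gives no assurance bound that accounts for the sampling error in those statistics.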

There are many tricks and shortcuts to this process. For example, rather than compiling all of the transformation rate equations (or conducting the actual kinetic experiments yourself), there are many sources of typical chemical half-lives based on pseudo-first-order rate expressions. It is usually prudent to begin with these best estimates of half-lives in air, water, soil, and sediment and perform a sensitivity analysis with the model to determine which processes are most important. One can return to the most important processes to assess whether more detailed rate expressions are necessary. An illustration of this mass balance approach is given in Figure 27.5 for benzo[a]pyrene. This approach allows a first-order evaluation of how chemicals enter the environment, what happens to them in the environment, and what the exposure concentrations will be in various environmental media. Thus the chemical mass balance provides information relevant to toxicant exposure to both humans and wildlife. [Pg.498]
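The half-life shortcut and the sensitivity analysis can be sketched together: each medium's half-life converts to a pseudo-first-order rate constant via k = ln 2 / t½, and perturbing one half-life at a time shows which medium dominates the overall loss rate. The half-lives and mass fractions below are illustrative placeholders, not measured values for any specific chemical:

```python
import math

# Pseudo-first-order rate constants from media half-lives, plus a crude
# sensitivity check: how much the overall loss rate changes when each
# half-life is halved. All values are illustrative assumptions.
half_lives_h = {"air": 17.0, "water": 170.0, "soil": 550.0, "sediment": 1700.0}
mass_fraction = {"air": 0.05, "water": 0.30, "soil": 0.50, "sediment": 0.15}

def overall_k(hl):
    """Mass-weighted sum of first-order loss rate constants (1/h)."""
    return sum(mass_fraction[m] * math.log(2) / hl[m] for m in hl)

k_base = overall_k(half_lives_h)
print(f"baseline overall k = {k_base:.5f} 1/h")

for medium in half_lives_h:
    perturbed = dict(half_lives_h)
    perturbed[medium] /= 2.0                  # assume faster degradation
    delta = overall_k(perturbed) / k_base - 1.0
    print(f"halving t1/2 in {medium:8s} raises overall k by {delta:6.1%}")
```

For these assumed numbers the short half-life in air dominates despite air holding the smallest mass fraction, which is the kind of screening result that tells you where a more detailed rate expression would actually pay off.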

