Big Chemical Encyclopedia


Defaults estimation

A factor of 2-10 is often used as a default estimate of upper-bound uncertainties associated with ambient air quality modelling of exposures to particulates or gaseous pollutants. [Pg.32]

This program will give variance estimates for each of the precision components, along with two-sided 95% confidence intervals for the population variance component at each expected mass. SAS PROC MIXED will provide ANOVA estimates, maximum likelihood estimates, and REML estimates; the default estimation method, used here, is REML. [Pg.33]
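
The REML fit itself is left to PROC MIXED, but the simpler ANOVA (method-of-moments) estimates it also reports can be sketched directly. The following is an illustrative Python sketch for a balanced one-way random-effects layout, not a reproduction of the SAS procedure; REML estimates will generally differ.

```python
# ANOVA (method-of-moments) variance-component estimates for a balanced
# one-way random-effects layout. Illustrative sketch only; SAS PROC MIXED's
# default REML estimation generally gives different (non-negative) answers.

def anova_variance_components(groups):
    """groups: list of equal-length lists of replicate measurements,
    one inner list per level (e.g. per expected mass).
    Returns (sigma2_between, sigma2_within)."""
    k = len(groups)            # number of levels
    n = len(groups[0])         # replicates per level (balanced design)
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    # between-level and within-level mean squares
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma2_within = msw
    sigma2_between = max((msb - msw) / n, 0.0)  # truncate negative estimates at zero
    return sigma2_between, sigma2_within
```
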

Crude fibre: Weende methods, based on acid hydrolysis followed by alkaline hydrolysis, such as AFNOR NFV03-40 (1993). Crude fibre is a default estimate of the cell wall content, which is actually 2 to 4 times higher; the crude fibre residue includes variable proportions of different cell wall constituents, such as lignin. [Pg.18]

Second card: FORMAT(8F10.2), control variables for the regression. This program uses a Newton-Raphson-type iteration, which is susceptible to convergence problems with poor initial parameter estimates. Therefore, several features are implemented to help control oscillations, prevent divergence, and determine when convergence has been achieved. These features are controlled by the parameters on this card. The default values are the result of considerable experience and are adequate for the majority of situations; however, convergence may be enhanced in some cases with user-supplied values. [Pg.222]

PRCG (cols 21-30): the maximum allowable change in any of the parameters when LMP = 1; the default value is 1000. Limiting the change in the parameters prevents totally unreasonable values from being attained in the first several iterations when poor initial estimates are used. A value of PRCG comparable to the anticipated magnitude of the parameters is usually appropriate. [Pg.223]
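
The role of PRCG can be illustrated with a minimal sketch, assuming a scalar Newton-Raphson iteration (the original FORTRAN operates on a parameter vector; the clamp applies elementwise there). The function name and signature are hypothetical.

```python
def newton_clamped(f, dfdx, x0, prcg, tol=1e-8, max_iter=100):
    """Newton-Raphson iteration with each step clamped to +/- prcg,
    mimicking the PRCG control: poor initial estimates cannot throw
    the parameter to totally unreasonable values in early iterations."""
    x = x0
    for _ in range(max_iter):
        step = -f(x) / dfdx(x)
        step = max(-prcg, min(prcg, step))  # the PRCG-style clamp
        x += step
        if abs(step) < tol:                 # converged
            return x
    return x
```

With a deliberately bad initial guess the clamp keeps every early step bounded, and ordinary quadratic convergence takes over once the unclamped step falls below PRCG.
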

T: temperature (K) of the isothermal flash. For an adiabatic flash, an estimate of the flash temperature if known; otherwise set to 0 to activate the default initial estimate. [Pg.320]

An estimate of the hybridization state of an atom in a molecule can be obtained from the group of the periodic table in which the atom resides (which gives the number of valence electrons) and the connectivity (coordination) of the atom. The HyperChem default scheme uses this estimate to assign a hybridization state to all atoms from the set (null, s, sp, sp², and sp³). The special... [Pg.207]
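
The idea can be sketched for the simplest case. This is not HyperChem's actual scheme (which also uses the periodic-table group); it is an illustrative mapping from connectivity to hybridization for tetravalent carbon only, with assumed names throughout.

```python
# Illustrative only: guess a carbon atom's hybridization from its
# connectivity (number of attached atoms). A real default scheme, such as
# the one described above, combines this with the valence-electron count
# from the atom's periodic-table group.

CARBON_HYBRIDIZATION = {
    4: "sp3",  # four sigma bonds, e.g. methane
    3: "sp2",  # three sigma bonds + one pi bond, e.g. ethylene
    2: "sp",   # two sigma bonds + two pi bonds, e.g. acetylene
}

def guess_carbon_hybridization(n_neighbors):
    """Return a hybridization label for carbon, or 'unassigned'."""
    return CARBON_HYBRIDIZATION.get(n_neighbors, "unassigned")
```
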

The critical characteristic on each component was analysed, and the value obtained from the analysis was plotted against the process capability indices, Cpk and Cp, for the characteristic in question. See Appendix V for descriptions of the 21 components analysed, including the values of Cp and Cpk from the SPC data supplied. Note that some components studied have a zero process capability index. This is a default value given when the process capability index calculated from the SPC data had a mean outside either one of the tolerance limits, which was the case for some of the components submitted. Although it is recognized that negative process capability indices are used for process improvement purposes, they have little use in the analyses here. A correlation between positive values (or values which are at least within the tolerance limits) will yield a more deterministic relationship between design capability and estimated process capability. [Pg.57]
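
The indices themselves follow the standard definitions, Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ, which a short sketch makes concrete (function name and example numbers are illustrative):

```python
def process_capability(mean, sd, lsl, usl):
    """Standard process capability indices.
    Cp  = (USL - LSL) / (6 * sigma)                 -- potential capability
    Cpk = min(USL - mean, mean - LSL) / (3 * sigma) -- actual capability.
    Cpk goes negative when the process mean lies outside a tolerance
    limit, the case the study above reset to a default value of zero."""
    cp = (usl - lsl) / (6.0 * sd)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sd)
    return cp, cpk
```

A centred process (mean midway between the limits) gives Cpk = Cp; a mean outside a tolerance limit gives a negative Cpk.
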

There are some systems for which the default optimization procedure may not succeed on its own. A common problem with many difficult cases is that the force constants estimated by the optimization procedure differ substantially from the actual values. By default, a geometry optimization starts with an initial guess for the second derivative matrix derived from a simple valence force field. The approximate matrix is improved at each step of the optimization using the computed first derivatives. [Pg.47]
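
The principle — start from a guessed second derivative and improve it at each step from computed first derivatives — can be shown in one dimension. This is a generic quasi-Newton (secant) sketch under assumed names, not the actual optimizer described above, which works with a full matrix from a valence force field.

```python
def minimize_quasi_newton(grad, x0, h0=1.0, tol=1e-8, max_iter=50):
    """One-variable quasi-Newton minimization: begin with a guessed
    second derivative h0 (cf. the simple valence-force-field Hessian)
    and refine it each step from the computed gradients (secant update).
    Assumes the curvature stays positive; a real optimizer guards this."""
    x, h = x0, h0
    g = grad(x)
    for _ in range(max_iter):
        x_new = x - g / h                      # Newton step with current curvature
        g_new = grad(x_new)
        if abs(x_new - x) > 1e-14:
            h = (g_new - g) / (x_new - x)      # secant update of the "Hessian"
        x, g = x_new, g_new
        if abs(g) < tol:                       # stationary point reached
            break
    return x
```

Even with a poor initial curvature guess (h0 = 1 where the true value is 2 below), the update recovers the correct second derivative and the iteration converges; when the guessed force constants differ too drastically, this self-correction is what can fail.
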

There are two practical reasons to err toward correcting the IC50 of an antagonist to estimate the KB. The first is that an overestimation of antagonist potency will only result in a readjustment of values upon rigorous measurement of antagonism in subsequent analysis. More importantly, however, if the correction is not applied there is a risk of failing to detect weak but still useful antagonism, owing to an underestimation of potency (through nonapplication of the correction factor). These reasons support applying the correction in all cases as a default. [Pg.218]
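
A common form of this correction is the Cheng-Prusoff relation, KB = IC50 / (1 + [A]/EC50), where [A] is the agonist concentration used in the assay. The sketch below assumes that form; the text does not specify which correction equation is meant.

```python
def cheng_prusoff_kb(ic50, agonist_conc, agonist_ec50):
    """Cheng-Prusoff-style correction of an antagonist IC50 to an
    estimated equilibrium dissociation constant K_B.
    Uncorrected, the IC50 overstates K_B (makes the antagonist look
    weaker) whenever the agonist concentration is appreciable relative
    to its EC50 -- the underestimation-of-potency risk noted above."""
    return ic50 / (1.0 + agonist_conc / agonist_ec50)
```
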

The Kaplan-Meier survival estimate plots are produced by specifying PLOTS = (S) in the PROC LIFETEST statement. To show just the line itself, CENSOREDSYMBOL = NONE is specified to hide the censored observations in the plot. EVENTSYMBOL = NONE is specified here to hide the event points, although this is the default setting for... [Pg.239]
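
The quantity PROC LIFETEST plots is the product-limit estimate, which is small enough to compute by hand. The following is an illustrative pure-Python reimplementation, not SAS output.

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimates.
    times:  event or censoring time for each subject.
    events: 1 if the event occurred at that time, 0 if censored.
    Returns (time, S(t)) pairs at the distinct event times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = r = 0                          # deaths and removals at time t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            r += 1
            i += 1
        if d:                              # survival drops only at event times
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= r                     # censored subjects leave the risk set
    return curve
```

Censored observations (the points hidden by CENSOREDSYMBOL = NONE) do not change the curve's value; they only shrink the risk set for later times.
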

The K values are partition coefficients. The assumption that these are real constants is seldom completely true, of course, because equilibrium is rarely achieved and because the equilibrium ratios generally are not the same for all concentration levels. Moreover, it is difficult to find the needed information, and one must often accept a single literature value as typical of a given intermedia transfer. When the organic content of the soil is known or can be accurately estimated, one can usually derive Ksw from a compound's aqueous solubility, S, or its octanol/water partition coefficient, Kow (14). Values of Kpa, namely "bioconcentration factors" between feed and meat animals (15,16), can also be derived from S or Kow. Bioconcentration factors between water and fish are well documented (14). A considerable weakness exists in our perception of the proper estimates to use for partition coefficients between soil and edible crop materials. Thus, at one time, two of the present authors used a default value of Ksp = 1 for munitions compounds that are neither very soluble in water nor very insoluble (4); at another time, a different value was assumed for compounds with very low values of Ksw, i.e., polybromobiphenyls (6). [Pg.271]

For the studies summarized in Table I and discussed in the following sections of the text, physicochemical properties (including partition coefficients) were collected from a variety of reference documents or estimated according to available equations (Table III). Acceptable daily doses were calculated from toxicological data (Table III). When more than one equation was available, judgment was used to determine which to apply. Table III excludes those contaminants footnoted in Table I. A default value of 1.0 was adopted for K for the first nine compounds of Table III (4). For PBBs, the value of log Koc was calculated from the solubility in creek water (7.96 × 10⁻⁴ μM), according to the equation (1,3)... [Pg.272]

A worktable that can be used to calculate a cumulative exposure estimate on a site-specific basis is provided in Table 2. To use the table, environmental levels for outdoor air, indoor air, food, water, soil, and dust are needed. In the absence of such data (as may be encountered during health assessment activities), default values can be used. In most situations, default values will be background levels unless data are available to indicate otherwise. Based on the U.S. Food and Drug Administration's (FDA's) Total Diet Study data, lead intake from food for infants and toddlers is about 5 µg/day (Bolger et al. 1991). In some cases, a missing value can be estimated from a known value. For example, EPA (1986) has suggested that indoor air can be considered 0.03 × the level of outdoor air. Suggested default values are listed in Table 3. [Pg.618]
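
The worktable arithmetic can be sketched as a sum of medium-specific intakes. All concentrations and intake rates below are hypothetical placeholders, not site data or the Table 3 defaults; only the food default (5 µg/day) and the indoor-air factor (0.03 × outdoor) come from the text.

```python
# Hypothetical cumulative lead-exposure worktable (all figures illustrative).

def daily_intake(concentration, intake_rate):
    """intake (ug/day) = medium concentration * daily intake of the medium."""
    return concentration * intake_rate

outdoor_air = daily_intake(0.1, 20.0)        # assumed: 0.1 ug/m3 air * 20 m3/day breathed
indoor_air = daily_intake(0.1 * 0.03, 20.0)  # indoor default: 0.03 x outdoor level (EPA 1986)
food = 5.0                                   # ug/day, FDA Total Diet Study default (text)
water = daily_intake(4.0, 1.0)               # assumed: 4 ug/L * 1 L/day drunk
soil_dust = daily_intake(40.0, 0.1)          # assumed: 40 ug/g * 0.1 g/day ingested

total_exposure = outdoor_air + indoor_air + food + water + soil_dust  # ug/day
```
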

Data on additive production: if such data are missing, they must be collected; there is no way around that. In some cases, however, proxy energy consumption and proxy emission data may be estimated if the synthesis route and the default emission factors for the technology used are known. [Pg.20]

As an alternative to these calculations, the registrant may choose to make a generic release estimate. Here, conservative default values are used to identify waste amounts and the fractions entering into the three main waste streams. Furthermore, generic exposure scenarios can be selected containing default release factors and assumptions on implemented risk management in the processes [19]. [Pg.146]

However, many statistical modeling techniques do not, by default, provide a simple and straightforward way to estimate whether a prediction is an interpolation within the model's training domain (rendering the prediction more credible) or an extrapolation beyond it (in which case the prediction must be evaluated with greater care). [Pg.400]
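
A crude version of such a check is easy to bolt onto any model: treat a prediction as an interpolation only if the query point falls inside the range spanned by the training data. This bounding-box sketch (assumed names, list-of-lists data) is the weakest form of applicability-domain test; leverage-based or convex-hull checks are sharper.

```python
def in_training_domain(x_new, X_train):
    """Bounding-box applicability-domain check: True if every feature of
    x_new lies within the min-max range of that feature in the training
    set. Returning False flags the prediction as an extrapolation that
    deserves extra scrutiny. Minimal sketch; a convex-hull or leverage
    test would reject more points that this box test lets through."""
    for j, v in enumerate(x_new):
        col = [row[j] for row in X_train]
        if not (min(col) <= v <= max(col)):
            return False
    return True
```
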

Equilibrium factors were estimated from simultaneous measurements of radon gas and daughters when available. A default equilibrium factor of 0.3 was used when simultaneous gas and daughter values were not available. The default equilibrium factor is based on reported data for Salt Lake City (EPA, 1974) and on data obtained in Edgemont, South Dakota (Jackson et al., 1985), for climatic conditions similar to those in Salt Lake City. [Pg.518]
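
The equilibrium factor F is defined so that the equilibrium-equivalent daughter concentration is F times the radon gas concentration, which makes the default easy to state in code (function name and example value are illustrative):

```python
def equilibrium_equivalent_concentration(radon_conc, f=0.3):
    """Radon-daughter equilibrium-equivalent concentration EEC = F * C_Rn,
    with the default equilibrium factor F = 0.3 used above whenever
    simultaneous gas and daughter measurements were unavailable.
    radon_conc and the result share whatever concentration unit is used."""
    return f * radon_conc
```
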

To address the first problem, reasonable initial estimates of certain quantities are provided. By default, E-Z Solve initially sets all values to one; instead, we use other values for the following ... [Pg.618]

A simple example might make this clearer. Suppose it were known that a 100 mg dose of chemical Z produced an extra 10% incidence of liver tumors in rats. Suppose further that we studied the pharmacokinetics of compound Z and discovered that, at the same 100 mg dose, 10 mg of the carcinogenic metabolite of Z was present in the liver. The usual regulatory default would instruct us to select the 100 mg dose as the point-of-departure for low dose extrapolation, and to draw a straight line to the origin, as in Figure 8.1. We are then further instructed to estimate the upper bound on risk at whatever dose humans are exposed to - let us say 1 mg. If the extra risk is 10% at 100 mg, then under the simple linear no-threshold model the extra risk at 1 mg should be 10%/100 = 0.1% (an extra risk of... [Pg.252]
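
The straight-line-to-the-origin rule reduces to a single proportionality, sketched here with an assumed function name:

```python
def lnt_extra_risk(dose, pod_dose, pod_extra_risk):
    """Linear no-threshold extrapolation: extra risk scales linearly
    along the straight line from the point of departure
    (pod_dose, pod_extra_risk) down to the origin."""
    return pod_extra_risk * dose / pod_dose
```

For the worked example above, a 10% extra risk at the 100 mg point of departure extrapolates to 0.1% extra risk at a 1 mg human dose.
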

Cumulative distributions of the logarithms of NOELs were plotted separately for each of the structural classes. The 5th percentile NOEL was estimated for each structural class, and this was in turn converted to a human exposure threshold by applying the conventional default safety factor of 100 (Section 5.2.1). The structure-based, tiered TTC values established were 1800 µg/person/day (Class I), 540 µg/person/day (Class II), and 90 µg/person/day (Class III). Endpoints covered include systemic toxicity except mutagenicity and carcinogenicity. Later work increased the number of chemicals in the database from 613 to 900 without altering the cumulative distributions of NOELs (Barlow 2005). [Pg.198]
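
The derivation — take the 5th percentile of a class's NOEL distribution, divide by the default safety factor of 100, and express the result per person — can be sketched as follows. The 60 kg body-weight convention is an assumption added for the per-person scaling; it is not stated in the text, and the percentile interpolation rule is likewise illustrative.

```python
def ttc_from_noels(noels_mg_per_kg_day, safety_factor=100.0, body_weight_kg=60.0):
    """Sketch of the TTC derivation: 5th-percentile NOEL of a structural
    class / default safety factor of 100, scaled to ug/person/day.
    body_weight_kg = 60 is an assumed convention, not from the text."""
    xs = sorted(noels_mg_per_kg_day)
    # 5th percentile by linear interpolation between order statistics
    pos = 0.05 * (len(xs) - 1)
    lo = int(pos)
    frac = pos - lo
    p5 = xs[lo] + frac * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])
    # mg/kg/day -> mg/person/day -> ug/person/day
    return p5 / safety_factor * body_weight_kg * 1000.0
```
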

[Q2] is employed when extrapolating data from subchronic studies to estimate risk from lifelong exposures. The default value is 10, indicating great uncertainty in estimating the NOAELchronic from... [Pg.219]

[U] is used to account for residual uncertainty in estimates of [S], [I], and [R]. The default value is 10, indicating very great overall uncertainty that has not already been accounted for in [Q1-3]. [Pg.219]
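
These factors combine multiplicatively: the point of departure is divided by the product of all applicable defaults. A minimal sketch, with assumed names and an assumed NOAEL:

```python
def reference_value(point_of_departure, factors):
    """Divide the point of departure (e.g. a NOAEL) by the product of
    the applicable default uncertainty factors (such as [S], [I], [R],
    [Q1-3], and [U] above, each typically 10) to obtain a health-based
    reference value in the same units."""
    combined = 1.0
    for f in factors:
        combined *= f
    return point_of_departure / combined
```

With two factors of 10, a hypothetical NOAEL of 1000 units yields a reference value of 10; each additional default factor of 10 lowers it tenfold.
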

