Big Chemical Encyclopedia


Confidence intervals transformations

Figure 1.17. The 95% confidence intervals for v and x_mean are depicted. The curves were plotted using the approximations given in Section 5.1.2; the f-axis was logarithmically transformed for a better overview. Note that the solid curves are plotted as if the number of degrees of freedom f could assume any positive value; this was done to show the trend, although f is always a positive integer. The ordinates are scaled in units of the standard deviation.
If this procedure is followed, then a reaction order will be obtained which is not masked by the effects of the error distribution of the dependent variables. If the transformation achieves the four qualities (a-d) listed at the beginning of this section, an unweighted linear least-squares analysis may be used rigorously. The reaction order, a = λ + 1, and the transformed forward rate constant, B, possess all of the desirable properties of maximum likelihood estimates. Finally, the equivalent of the likelihood function can be represented by the plot of the transformed sum of squares versus the reaction order. This provides not only a reliable confidence interval on the reaction order, but also the entire sum-of-squares curve as a function of the reaction order. Then, for example, one could readily determine whether any previously postulated reaction order can be reconciled with the available data. [Pg.160]

If a single reaction order must be selected, an examination of the 95 % confidence intervals (not shown) indicates that the two-thirds order is a reasonable choice. For this order, however, estimates of the forward rate constants deviate somewhat from an Arrhenius relationship. Finally, some trend of the residuals (Section IV) of the transformed dependent variable with time exists for this reaction order. [Pg.161]

We will describe an accurate statistical method that includes a full assessment of error in the overall calibration process, that is, (1) the confidence interval around the graph, (2) an error band around unknown responses, and finally (3) the estimated amount intervals. To use the method properly, the data will be adjusted by general data transformations to achieve constant variance and linearity. The method utilizes a six-step process to calculate amounts or concentration values of unknown samples and their estimated intervals from chromatographic response values, using calibration graphs that are constructed by regression. [Pg.135]

Table IX. Confidence Intervals for the Predicted Response from Inverse Transformed Data. α = 0.025.
The bandwidth is essentially a normalized half confidence band. The confidence interval bandwidths for 9 data sets using inverse transformed data are given in Table X. The bandwidths are approximately the vertical widths of response from the line to either band. The best (narrowest) band was found for chlorpyrifos: 1.5% at the minimum width (located at the mean value of the response) and 4.9% at the lowest point on the graph. Values for fenvalerate and chlorothalonil were slightly higher, 2.1-2.2% at the mean level. The width at the lowest amount for the former was smaller due to a lower scatter of its points. The same reason explains the difference between fenvalerate and Dataset B. Similarly, the lack of points in Dataset A produced a band that was twice as wide when compared to Dataset B. Dataset C gave a much wider band when compared to Dataset B. [Pg.153]

Table X. Confidence Interval Bandwidths from the Regression of Transformed Data Sets. Inverse Transformed Data.
Construction of an Approximate Confidence Interval. An approximate confidence interval can be constructed for an assumed class of distributions, if one is willing to neglect the bias introduced by the spline approximation. This is accomplished by estimating the standard deviation in the transformed domain of y-values from the replicates. The degrees of freedom for this procedure are then diminished by one to account for the empirical search for the proper transformation. If one accepts that the distribution of data can be approximated by a normal distribution, the Student t-distribution gives... [Pg.179]

Although the values obtained for J and K minimize the variance, we gain more insight into the meaning of the numbers in Table I by describing them in terms of an error bound for estimating asbestos level. A 95% confidence interval for the mean of the log-transformed data is Y ± 1.96 SD(Y). In terms of untransformed data the confidence bounds are (exp(Y − 1.96 SD(Y)), exp(Y + 1.96 SD(Y))). These limits determine a confidence interval for the median of the untransformed data. The error bound is calculated as... [Pg.195]
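As an illustrative sketch (not from the cited source), the back-transformed limits described above can be computed as follows; the function name is hypothetical, and SD(Y) is taken here to be the standard deviation of the mean of the log values:

```python
import math
import statistics

def lognormal_median_ci(samples, z=1.96):
    """Back-transformed 95% CI: form Ybar +/- z*SD(Ybar) on the log scale,
    then exponentiate. The resulting limits bracket the median of the
    untransformed data (SD(Ybar) = s/sqrt(n) is an assumption here)."""
    y = [math.log(x) for x in samples]
    n = len(y)
    ybar = statistics.fmean(y)
    sd_mean = statistics.stdev(y) / math.sqrt(n)
    return math.exp(ybar - z * sd_mean), math.exp(ybar + z * sd_mean)
```

Because the back-transform is monotone, the interval applies to the median (geometric mean) rather than the arithmetic mean of the untransformed data.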

ML is the approach most commonly used to fit a distribution of a given type (Madgett 1998; Vose 2000). An advantage of ML estimation is that it is part of a broad framework of likelihood-based statistical methodology, which provides statistical hypothesis tests (likelihood-ratio tests) and confidence intervals (Wald and profile likelihood intervals) as well as point estimates (Meeker and Escobar 1995). MLEs are invariant under parameter transformations (the MLE for some 1-to-1 function of a parameter is obtained by applying the function to the untransformed parameter). In most situations of interest to risk assessors, MLEs are consistent and sufficient (one distribution for which sufficient statistics of dimension smaller than n do not exist, MLEs or otherwise, is the Weibull distribution, which is not an exponential family). When MLEs are biased, the bias ordinarily disappears asymptotically (as data accumulate). ML may or may not require numerical optimization skills (for optimization of the likelihood function), depending on the distributional model. [Pg.42]
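As a minimal illustration of these properties (not tied to any particular source), the normal distribution admits closed-form MLEs, and the invariance property can be seen by transforming the variance estimate:

```python
import math

def normal_mle(xs):
    """Closed-form MLEs for a normal sample: mu_hat is the sample mean and
    sigma2_hat the average squared deviation (biased by (n-1)/n; the bias
    vanishes asymptotically, as noted above)."""
    n = len(xs)
    mu = sum(xs) / n
    sigma2 = sum((x - mu) ** 2 for x in xs) / n
    return mu, sigma2

# Invariance under 1-to-1 transformations: the MLE of sigma is simply the
# square root of the MLE of sigma^2.
mu_hat, s2_hat = normal_mle([1.0, 2.0, 3.0, 4.0])
sigma_hat = math.sqrt(s2_hat)
```

For distributions without closed-form MLEs (e.g. the gamma), the same likelihood would be maximized numerically, as the excerpt notes.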

An approach that is sometimes helpful, particularly for recent pesticide risk assessments, is to use the parameter values that result in best fit (in the sense of LS), comparing the fitted cdf to the cdf of the empirical distribution. In some cases, such as when fitting a log-normal distribution, formulae from linear regression can be used after transformations are applied to linearize the cdf. In other cases, the residual SS is minimized using numerical optimization, i.e., one uses nonlinear regression. This approach seems reasonable for point estimation. However, the statistical assumptions that would often be invoked to justify LS regression will not be met in this application. Therefore the use of any additional regression results (beyond the point estimates) is questionable. If there is a need to provide standard errors or confidence intervals for the estimates, bootstrap procedures are recommended. [Pg.43]
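The percentile bootstrap recommended above might be sketched as follows (a simplified illustration; the function name and defaults are assumptions, and the estimator stands in for any LS or nonlinear-regression fit):

```python
import random
import statistics

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for a point estimator: resample the data
    with replacement, re-estimate on each resample, and read off the
    empirical alpha/2 and 1-alpha/2 quantiles of the estimates."""
    rng = random.Random(seed)
    n = len(data)
    stats = sorted(estimator([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]
```

In a regression setting one would resample whole (x, y) pairs (or residuals) rather than scalar observations, but the quantile logic is the same.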

Confidence intervals using frequentist and Bayesian approaches have been compared for the normal distribution with mean μ and standard deviation σ (Aldenberg and Jaworska 2000). In particular, data on species sensitivity to a toxicant were fitted to a normal distribution to form the species sensitivity distribution (SSD). Fraction affected (FA) and the hazardous concentration (HC), i.e., percentiles and their confidence intervals, were analyzed. Lower and upper confidence limits were developed from t statistics to form 90% 2-sided classical confidence intervals. Bayesian treatment of the uncertainty of μ and σ of a presupposed normal distribution followed the approach of Box and Tiao (1973, chapter 2, section 2.4). Noninformative prior distributions for the parameters μ and σ specify the initial state of knowledge. These were constant c and 1/σ, respectively. Bayes' theorem transforms the prior into the posterior distribution by the multiplication of the classic likelihood function of the data and the joint prior distribution of the parameters, in this case μ and σ (Figure 5.4). [Pg.83]
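A rough sketch of the classical (frequentist) side of this calculation is given below, assuming log10-transformed sensitivity data and a user-supplied t critical value; the exact HC5 confidence limits of Aldenberg and Jaworska require noncentral-t extrapolation constants and are not reproduced here:

```python
import math
import statistics

def ssd_normal(log10_tox, t_crit):
    """Fit a normal SSD to log10 sensitivity data. Returns the HC5 point
    estimate, 10**(mean - 1.645*sd) (the 5th percentile of the fitted
    normal), and a classical two-sided t CI for the mean on the original
    concentration scale. t_crit is the 0.95 t quantile for n-1 df (e.g.
    1.833 for df = 9, giving a 90% two-sided interval for the mean)."""
    n = len(log10_tox)
    m = statistics.fmean(log10_tox)
    s = statistics.stdev(log10_tox)
    hc5 = 10.0 ** (m - 1.645 * s)
    half = t_crit * s / math.sqrt(n)
    return hc5, (10.0 ** (m - half), 10.0 ** (m + half))
```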

While the p-value allows us to judge statistical significance, the clinical relevance of the finding is difficult to evaluate from the calculated confidence interval because this is now on the log scale. It is usual to back-transform the lower and upper confidence limits, together with the difference in the means on the log scale, to give us something on the original data scale which is more readily interpretable. The back-transform for the log transformation is the anti-log. [Pg.164]
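The anti-log back-transform might look like this in code (a trivial sketch; the name is illustrative):

```python
import math

def antilog_ci(diff_log, lo_log, hi_log):
    """Back-transform a difference of means (and its CI) computed on the
    natural-log scale; the anti-log turns the difference into a ratio of
    geometric means on the original data scale."""
    return math.exp(diff_log), math.exp(lo_log), math.exp(hi_log)
```

For example, a log-scale difference of 0.1 with limits (0.0, 0.2) back-transforms to a ratio of about 1.105 with limits (1.0, 1.221).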

Clearly the main advantage of a non-parametric method is that it makes essentially no assumptions about the underlying distribution of the data. In contrast, the corresponding parametric method makes specific assumptions, for example, that the data are normally distributed. Does this matter? Well, as mentioned earlier, the t-tests, even though in a strict sense they assume normality, are quite robust against departures from normality. In other words, you have to be some way off normality for the p-values and associated confidence intervals to become invalid, especially with the kinds of moderate to large sample sizes that we see in our trials. Most of the time in clinical studies we are within those boundaries, particularly when we are also able to transform data to conform more closely to normality. [Pg.170]

If there is no theory available to suggest a suitable transformation, statistical methods can be used to determine one. The Box-Cox transformation [18] is a common approach to determine whether a transformation of a response is needed. With the Box-Cox transformation the response, y, is raised to different powers λ (e.g. −2 ≤ λ ≤ 2) and the transformed response is fitted by a predefined (simple) model. Both an optimal value and a confidence interval for λ can be estimated. The transformation which results in the lowest value for the residual variance is the optimal one; it should give a combination of a homoscedastic error structure and suitability for the predefined model. When λ = 0 the trans-... [Pg.249]
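A minimal sketch of the Box-Cox search, assuming the simplest possible predefined model (a constant mean) and a hardcoded chi-square cutoff, chi2(1, 0.95)/2 ≈ 1.92, for the approximate confidence interval on λ:

```python
import math

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam, with log(y) as the lam -> 0 limit."""
    return [math.log(v) if lam == 0 else (v ** lam - 1.0) / lam for v in y]

def profile_loglik(y, lam):
    """Profile log-likelihood of lam under a constant-mean model:
    -n/2 * ln(sigma2_hat) + (lam - 1) * sum(ln y), where the second term
    is the Jacobian of the transformation."""
    z = boxcox(y, lam)
    n = len(z)
    zbar = sum(z) / n
    s2 = sum((v - zbar) ** 2 for v in z) / n
    return -0.5 * n * math.log(s2) + (lam - 1.0) * sum(math.log(v) for v in y)

def boxcox_lambda_ci(y, grid=None):
    """Grid search for the optimal lam and an approximate 95% CI: all lam
    whose log-likelihood lies within chi2(1, 0.95)/2 = 1.92 of the maximum."""
    if grid is None:
        grid = [i / 10.0 for i in range(-20, 21)]  # -2.0 .. 2.0 in steps of 0.1
    scored = [(profile_loglik(y, lam), lam) for lam in grid]
    ll_max, lam_opt = max(scored)
    inside = [lam for ll, lam in scored if ll >= ll_max - 1.92]
    return lam_opt, (min(inside), max(inside))
```

With a regression model in place of the constant mean, s2 would be the residual variance of that fit, matching the excerpt's description.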

Using log-transformed data, bioequivalence is established by showing that the 90% confidence interval of the ratio of geometric mean responses (usually AUC and Cmax) of the two formulations is contained within the limits of 0.8 to 1.25 [22]. Equivalently, it could be said that bioequivalence is established if the hypothesis that the ratio of geometric means is less than or equal to 0.8 is rejected with... [Pg.199]
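The 90% confidence interval calculation might be sketched as follows for paired ln-scale data (a simplified illustration: a real crossover analysis would use the ANOVA residual error, and the t critical value must be supplied):

```python
import math
import statistics

def be_90ci(log_test, log_ref, t_crit):
    """Average-bioequivalence sketch for paired ln-scale data: the point
    estimate is the anti-log of the mean difference (a geometric-mean
    ratio) and the 90% CI is exp(dbar +/- t_crit * SE). Bioequivalence is
    concluded when the CI lies entirely within 0.80-1.25. t_crit is the
    0.95 t quantile for n-1 df (e.g. 2.015 for df = 5)."""
    d = [t - r for t, r in zip(log_test, log_ref)]
    n = len(d)
    dbar = statistics.fmean(d)
    se = statistics.stdev(d) / math.sqrt(n)
    lo, hi = math.exp(dbar - t_crit * se), math.exp(dbar + t_crit * se)
    return math.exp(dbar), (lo, hi), (lo >= 0.80 and hi <= 1.25)
```

Checking both CI limits against 0.80 and 1.25 is operationally the same as the two one-sided tests at the 5% level mentioned in the excerpt.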

An equivalence approach has been and continues to be recommended for BE comparisons. The recommended approach relies on (1) a criterion to allow the comparison, (2) a confidence interval (CI) for the criterion, and (3) a BE limit. Log-transformation of exposure measures before statistical analysis is recommended. BE studies are performed as single-dose, crossover studies. To compare measures in these studies, data have been analyzed using an average BE criterion. This guidance recommends continued use of an average BE criterion to compare BA measures for replicate and nonreplicate BE studies of both immediate- and modified-release products. [Pg.142]

Figure 5.12 The 95 per cent confidence interval for mean pesticide content calculated (a) directly or (b) via log transformation
If appropriate, pharmacokinetic parameters were compared descriptively between age groups (with/without stratification), between genders (with/without stratification), and between fasted and fed subjects (with/without stratification and individually). Although not intended to show bioequivalence, the 90% confidence intervals (CI) for the differences in the log-transformed exposure measurements were calculated. [Pg.668]

An exploratory analysis was performed using a four-factor ANOVA model, with treatment, period, and sequence as fixed factors and subject within sequence as a random factor. The results from the ANOVA were used to calculate the back-transformed 90% confidence intervals (CI) for the differences between the fed and fasted conditions in the log-transformed exposure measurements (Cmax, AUC0-t and AUC0-∞). For Cmax the difference between fasting and fed conditions was found to be statistically significant, while this was not the case for the AUC parameters. [Pg.670]

Based on the planned analysis of variance on log-transformed data, with 90% confidence intervals for the AUC ratio of ethinylestradiol + Drug XYZ to ethinylestradiol alone, 20 subjects had to complete the study as planned. [Pg.678]

Analysis of variance was performed on the log-transformed AUC0-24 of ethinylestradiol to estimate the intra-subject variability. The intra-subject variability was subsequently used to estimate the 90% confidence interval of the AUC(EE + Drug XYZ)/AUC(EE) ratio. [Pg.678]

The interpretation of the pharmacokinetic variables Cmax, AUCs and MRT of insulin glulisine was based on 95% confidence intervals, after ln-transformation of the data. These 95% confidence intervals were calculated for the respective mean ratios of pair-wise treatment comparisons. In addition, the test treatment was compared to the reference treatment with respect to the pharmacokinetic variables using an ANOVA with subject, treatment and period effects, after ln-transformation of the data. The subject sum of squares was partitioned to give a term for sequence (treatment-by-period interaction) and a term for subject within sequence (a residual term). Due to the explorative nature of the study, no adjustment of the α-level was made for the multiple testing procedure. [Pg.687]

Point estimates and 95% confidence intervals for the ratio oftreatment means, based on ln-transformed data. [Pg.689]

The relationship between age and pharmacokinetics was assessed by an analysis of variance (ANOVA) on AUCs, MRT and Cmax, with adjustments for treatment, period, sequence and subject-within-sequence effects by age class, using the natural-log-transformed values to compare treatments within each age class. Point estimates and 95% confidence intervals were calculated for the treatment ratios per age class. [Pg.705]

1. Point estimates and 95% confidence intervals for the ratio of treatment means, based on ln-transformed data. 2. Point estimates and 95% confidence intervals for the respective median differences from the non-parametric data analysis. [Pg.708]

PK data The PK parameters of ABC4321 in plasma were determined by individual PK analyses. The individual and mean concentrations of ABC4321 in plasma were tabulated and plotted. PK variables were listed and summarized by treatment with descriptive statistics. An analysis of variance (ANOVA) including sequence, subject nested within sequence, period, and treatment effects, was performed on the ln-transformed parameters (except tmax). The mean square error was used to construct the 90% confidence interval for treatment ratios. The point estimates were calculated as a ratio of the antilog of the least square means. Pairwise comparisons to treatment A were made. Whole blood concentrations of XYZ1234 were not used to perform PK analyses. [Pg.712]
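The antilog construction described above might be sketched as follows (illustrative only; the least-squares means, MSE, subject count and t critical value are assumed inputs, not values from the study):

```python
import math

def crossover_ratio_ci(lsm_test_ln, lsm_ref_ln, mse, n_subjects, t_crit):
    """Sketch of the treatment-ratio CI from a crossover ANOVA on the ln
    scale: the point estimate is the anti-log of the least-squares-mean
    difference, and the 90% CI is exp(diff +/- t_crit * SE) with
    SE = sqrt(2 * MSE / n), a simplification of the balanced crossover
    formula using the ANOVA mean square error."""
    diff = lsm_test_ln - lsm_ref_ln
    se = math.sqrt(2.0 * mse / n_subjects)
    return math.exp(diff), (math.exp(diff - t_crit * se),
                            math.exp(diff + t_crit * se))
```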

The primary parameter AUC0-∞ was subjected to an analysis of variance (ANOVA) including sequence, subject nested within sequence (subject (sequence)), period and treatment (non-fasting/fasting) effects. The sequence effect was tested using the subject (sequence) mean square from the ANOVA as an error term. All other main effects were tested against the residual error (error mean square) from the ANOVA. The ANOVA was performed on ln-transformed data. For ratios, 90% confidence intervals were constructed. The point estimates and confidence limits were calculated as antilogs and were expressed as percentages. The... [Pg.718]

The purpose of this section is to show the calculation of the confidence interval for the variance in an actual example. The statistical data used for this example are given in Table 5.3. In this table, the statistically measured real input concentrations and the associated output reactant transformation degrees are given for five proposed concentrations of the limiting reactant in the reactor feed. Table 5.3 also contains the values of the computed variances for each statistical selection. The confidence interval for each mean value from Table 5.3 has to be calculated according to the procedure established in steps 6-10 from the algorithm shown in Section 5.2.2.1. In this example, the number of measurements for each experiment is small, thus the estimation of the mean value is difficult. Therefore, we... [Pg.346]
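A sketch of the classical chi-square confidence interval for a variance, with the chi-square quantiles supplied by hand to keep the example dependency-free (values shown are for n = 5; this illustrates the general formula, not the specific numbers of Table 5.3):

```python
def variance_ci(s2, n, chi2_upper, chi2_lower):
    """95% CI for a population variance from a sample variance s2 based on
    n observations: ((n-1)*s2/chi2_upper, (n-1)*s2/chi2_lower), where
    chi2_upper and chi2_lower are the 0.975 and 0.025 quantiles of the
    chi-square distribution with n-1 df (for n = 5: 11.143 and 0.484)."""
    return (n - 1) * s2 / chi2_upper, (n - 1) * s2 / chi2_lower
```

Note the asymmetry of the interval around s2, a consequence of the skewness of the chi-square distribution at small n.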

