Big Chemical Encyclopedia



Transformed data confidence intervals

If this procedure is followed, then a reaction order will be obtained which is not masked by the effects of the error distribution of the dependent variables. If the transformation achieves the four qualities (a-d) listed at the beginning of this section, an unweighted linear least-squares analysis may be used rigorously. The reaction order, a = λ + 1, and the transformed forward rate constant, β, possess all of the desirable properties of maximum likelihood estimates. Finally, the equivalent of the likelihood function can be represented by the plot of the transformed sum of squares versus the reaction order. This provides not only a reliable confidence interval on the reaction order, but also the entire sum-of-squares curve as a function of the reaction order. Then, for example, one could readily determine whether any previously postulated reaction order can be reconciled with the available data. [Pg.160]
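The sum-of-squares-versus-order plot described above can be sketched numerically. This is a minimal illustration, not the source's procedure: the data are synthetic, the grid of candidate orders is arbitrary, and an unweighted fit is used; the linearizing transformation C^(1-a) applies for orders a ≠ 1.

```python
# Sketch (hypothetical data): scan candidate reaction orders, linearize the
# integrated rate law C^(1-a) = C0^(1-a) + (a-1)*k*t for each trial order a,
# and record the residual sum of squares of an unweighted linear fit. The
# minimum of the SSE-versus-order curve estimates the order; the curve's
# shape plays the role of the likelihood function described in the text.

def linear_fit_sse(x, y):
    """Ordinary least-squares line y = b0 + b1*x; returns (b0, b1, sse)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = ybar - b1 * xbar
    sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    return b0, b1, sse

def sse_vs_order(t, conc, orders):
    """Transformed sum of squares for each candidate order a (a != 1)."""
    curve = {}
    for a in orders:
        y = [c ** (1.0 - a) for c in conc]    # linearizing transformation
        _, slope, sse = linear_fit_sse(t, y)
        curve[a] = (sse, slope / (a - 1.0))   # slope = (a-1)*k, so recover k
    return curve

# Synthetic second-order data (a = 2, k = 0.5, C0 = 1): 1/C = 1 + 0.5*t
t = [0.0, 1.0, 2.0, 4.0, 8.0]
conc = [1.0 / (1.0 + 0.5 * ti) for ti in t]
curve = sse_vs_order(t, conc, [1.5, 2.0, 2.5])
best = min(curve, key=lambda a: curve[a][0])
```

With exact second-order data, the SSE curve bottoms out at a = 2 and the recovered rate constant is 0.5; with real (noisy) data the width of the minimum is what yields the confidence interval on the order.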

We will describe an accurate statistical method that includes a full assessment of error in the overall calibration process, that is, (1) the confidence interval around the graph, (2) an error band around unknown responses, and finally (3) the estimated amount intervals. To properly use the method, data will be adjusted by using general data transformations to achieve constant variance and linearity. It utilizes a six-step process to calculate amounts or concentration values of unknown samples and their estimated intervals from chromatographic response values using calibration graphs that are constructed by regression. [Pg.135]

Table IX. Confidence Intervals for the Predicted Response from Inverse Transformed Data (α = 0.025).
The bandwidth is essentially a normalized half confidence band. The confidence interval bandwidths for 9 data sets using inverse transformed data are given in Table X. The bandwidths are approximately the vertical widths of response from the line to either band. The best band was found for chlorpyrifos: 1.5% at the minimum width (located at the mean value of the response) and 4.9% at the lowest amount on the graph. Values for fenvalerate and chlorothalonil were slightly higher, 2.1-2.2% at the mean level. The width at the lowest amount for the former was smaller due to a lower scatter of its points. The same reason explains the difference between fenvalerate and Dataset B. Similarly, the lack of points in Dataset A produced a band that was twice as wide when compared to Dataset B. Dataset C gave a much wider band when compared to Dataset B. [Pg.153]

Table X. Confidence Interval Bandwidths from the Regression of Transformed Data Sets (Inverse Transformed Data).
Construction of an Approximate Confidence Interval. An approximate confidence interval can be constructed for an assumed class of distributions, if one is willing to neglect the bias introduced by the spline approximation. This is accomplished by estimation of the standard deviation in the transformed domain of y-values from the replicates. The degrees of freedom for this procedure are then diminished by one to account for the empirical search for the proper transformation. If one accepts that the distribution of the data can be approximated by a normal distribution, the Student t-distribution gives... [Pg.179]

Although the values obtained for J and K minimize the variance, we gain more insight into the meaning of the numbers in Table I by describing them in terms of an error bound for estimating asbestos level. A 95% confidence interval for the mean of the log-transformed data is Y ± 1.96 SD(Y). In terms of untransformed data the confidence bounds are exp(Y − 1.96 SD(Y)), exp(Y + 1.96 SD(Y)). These limits determine a confidence interval for the median of the untransformed data. The error bound is calculated as... [Pg.195]
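The back-transformed limits described above can be computed in a few lines. This is a minimal sketch with made-up data; SD(Y) is taken here as the standard deviation of the mean of the logged values, which is an assumption of the illustration.

```python
# Sketch: compute the mean of the log-transformed data, form
# mean +/- 1.96*SD (SD of the mean), and exponentiate to get a
# confidence interval for the median of the untransformed data.
import math

def log_median_ci(data, z=1.96):
    logs = [math.log(x) for x in data]
    n = len(logs)
    ybar = sum(logs) / n
    sd = math.sqrt(sum((y - ybar) ** 2 for y in logs) / (n - 1))
    sd_mean = sd / math.sqrt(n)                 # SD of the mean Ybar
    lo, hi = ybar - z * sd_mean, ybar + z * sd_mean
    return math.exp(lo), math.exp(hi)           # bounds for the median

data = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3]           # hypothetical measurements
lo, hi = log_median_ci(data)
```

Because exp() is monotone, the interval on the log scale maps directly to an interval for the median (not the mean) of the untransformed data, exactly as the excerpt notes.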

ML is the approach most commonly used to fit a distribution of a given type (Madgett 1998; Vose 2000). An advantage of ML estimation is that it is part of a broad statistical framework of likelihood-based statistical methodology, which provides statistical hypothesis tests (likelihood-ratio tests) and confidence intervals (Wald and profile likelihood intervals) as well as point estimates (Meeker and Escobar 1995). MLEs are invariant under parameter transformations (the MLE for some 1-to-1 function of a parameter is obtained by applying the function to the untransformed parameter). In most situations of interest to risk assessors, MLEs are consistent and sufficient (one distribution for which sufficient statistics of dimension less than n do not exist, MLEs or otherwise, is the Weibull distribution, which is not an exponential family). When MLEs are biased, the bias ordinarily disappears asymptotically (as data accumulate). ML may or may not require numerical optimization skills (for optimization of the likelihood function), depending on the distributional model. [Pg.42]

Confidence intervals using frequentist and Bayesian approaches have been compared for the normal distribution with mean μ and standard deviation σ (Aldenberg and Jaworska 2000). In particular, data on species sensitivity to a toxicant were fitted to a normal distribution to form the species sensitivity distribution (SSD). Fraction affected (FA) and the hazardous concentration (HC), i.e., percentiles and their confidence intervals, were analyzed. Lower and upper confidence limits were developed from t statistics to form 90% 2-sided classical confidence intervals. Bayesian treatment of the uncertainty of μ and σ of a presupposed normal distribution followed the approach of Box and Tiao (1973, chapter 2, section 2.4). Noninformative prior distributions for the parameters μ and σ specify the initial state of knowledge; these were constant c and 1/σ, respectively. Bayes' theorem transforms the prior into the posterior distribution by multiplication of the classical likelihood function of the data and the joint prior distribution of the parameters, in this case μ and σ (Figure 5.4). [Pg.83]

While the p-value allows us to judge statistical significance, the clinical relevance of the finding is difficult to evaluate from the calculated confidence interval because this is now on the log scale. It is usual to back-transform the lower and upper confidence limits, together with the difference in the means on the log scale, to give us something on the original data scale which is more readily interpretable. The back-transform for the log transformation is the anti-log. [Pg.164]
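The back-transform step is mechanical and can be sketched directly; the numbers below are hypothetical log-scale results, not taken from the source.

```python
# Back-transformation sketch: a difference of means and its confidence
# limits on the natural-log scale become a ratio of geometric means with
# its limits after taking the anti-log, which is easier to interpret.
import math

def backtransform_log_ci(diff, lower, upper):
    """Anti-log a difference (and its CI) computed on ln-transformed data."""
    return math.exp(diff), math.exp(lower), math.exp(upper)

# Hypothetical ln-scale results: difference 0.10, 95% CI (0.02, 0.18)
ratio, lo, hi = backtransform_log_ci(0.10, 0.02, 0.18)
```

A difference of 0.10 on the log scale corresponds to a ratio of about 1.105, i.e., roughly a 10% increase on the original scale, which is the clinically interpretable quantity.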

Clearly the main advantage of a non-parametric method is that it makes essentially no assumptions about the underlying distribution of the data. In contrast, the corresponding parametric method makes specific assumptions, for example, that the data are normally distributed. Does this matter? Well, as mentioned earlier, the t-tests, even though in a strict sense they assume normality, are quite robust against departures from normality. In other words, you have to be some way off normality for the p-values and associated confidence intervals to become invalid, especially with the kinds of moderate to large sample sizes that we see in our trials. Most of the time in clinical studies, we are within those boundaries, particularly when we are also able to transform data to conform more closely to normality. [Pg.170]

Using log-transformed data, bioequivalence is established by showing that the 90% confidence interval of the ratio of geometric mean responses (usually AUC and Cmax) of the two formulations is contained within the limits of 0.8 to 1.25 [22]. Equivalently, it could be said that bioequivalence is established if the hypothesis that the ratio of geometric means is less than or equal to 0.8 is rejected with... [Pg.199]
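The decision rule in the excerpt above reduces to checking that the back-transformed 90% confidence interval lies inside the 0.80-1.25 limits. A minimal sketch, with hypothetical ln-scale confidence limits as inputs:

```python
# Average-bioequivalence sketch: the 90% CI for the ratio of geometric
# means must lie entirely within [0.80, 1.25]. Inputs are the confidence
# limits computed on the ln scale (hypothetical values here).
import math

def bioequivalent(ci_lower_ln, ci_upper_ln, lo=0.80, hi=1.25):
    """True if the back-transformed CI falls within [lo, hi]."""
    return lo <= math.exp(ci_lower_ln) and math.exp(ci_upper_ln) <= hi

# e.g. ln-scale CI (-0.10, 0.08) back-transforms to about (0.905, 1.083)
ok = bioequivalent(-0.10, 0.08)
```

Note the asymmetric limits 0.80 and 1.25 are symmetric on the log scale (ln 0.8 = -ln 1.25), which is why the criterion is stated for log-transformed data.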

An equivalence approach has been and continues to be recommended for BE comparisons. The recommended approach relies on (1) a criterion to allow the comparison, (2) a confidence interval (CI) for the criterion, and (3) a BE limit. Log-transformation of exposure measures before statistical analysis is recommended. BE studies are performed as single-dose, crossover studies. To compare measures in these studies, data have been analyzed using an average BE criterion. This guidance recommends continued use of an average BE criterion to compare BA measures for replicate and nonreplicate BE studies of both immediate- and modified-release products. [Pg.142]

Based on the planned analysis of variance on log-transformed data and 90% confidence intervals for the AUC ratios of ethinylestradiol + Drug XYZ versus ethinylestradiol alone, 20 subjects had to complete the study as planned. [Pg.678]

The interpretation of the pharmacokinetic variables Cmax, AUCs and MRT of insulin glulisine was based on 95% confidence intervals, after ln-transformation of the data. These 95% confidence intervals were calculated for the respective mean ratios of pair-wise treatment comparisons. In addition, the test treatment was compared to the reference treatment with respect to the pharmacokinetic variables using an ANOVA with subject, treatment and period effects, after ln-transformation of the data. The subject sum of squares was partitioned to give a term for sequence (treatment by period interaction) and a term for subject within sequence (a residual term). Due to the exploratory nature of the study, no adjustment of the α-level was made for the multiple testing procedure. [Pg.687]

Point estimates and 95% confidence intervals for the ratio of treatment means, based on ln-transformed data. [Pg.689]

Point estimates and 95% confidence intervals for the ratio of treatment means, based on ln-transformed data. Point estimates and 95% confidence intervals for the respective median differences from non-parametric data analysis. Median... [Pg.708]

PK data The PK parameters of ABC4321 in plasma were determined by individual PK analyses. The individual and mean concentrations of ABC4321 in plasma were tabulated and plotted. PK variables were listed and summarized by treatment with descriptive statistics. An analysis of variance (ANOVA) including sequence, subject nested within sequence, period, and treatment effects, was performed on the ln-transformed parameters (except tmax). The mean square error was used to construct the 90% confidence interval for treatment ratios. The point estimates were calculated as a ratio of the antilog of the least square means. Pairwise comparisons to treatment A were made. Whole blood concentrations of XYZ1234 were not used to perform PK analyses. [Pg.712]

The primary parameter AUCo-oo was subjected to an analysis of variance (ANOVA) including sequence, subject nested within sequence (subject (sequence)), period and treatment (non-fasting/fasting) effects. The sequence effect was tested using the subject (sequence) mean square from the ANOVA as an error term. All other main effects were tested against the residual error (error mean square) from the ANOVA. The ANOVA was performed on ln-transformed data. For the ratios, 90% confidence intervals were constructed. The point estimates and confidence limits were calculated as antilogs and were expressed as percentages. The... [Pg.718]
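The final step of the analyses above (CI from the error mean square, then antilogs expressed as percentages) can be sketched as follows. The t quantile, the balanced-crossover standard-error formula 2*MSE/n, and all input values are assumptions of this illustration, not figures from the study.

```python
# Sketch: from a treatment difference on the ln scale and the ANOVA error
# mean square, build a 90% CI and express the point estimate and limits
# as percentages via the antilog. SE = sqrt(2*MSE/n) assumes a balanced
# two-period crossover with n subjects in total.
import math

def ratio_ci_percent(diff_ln, mse, n_subjects, t_crit):
    se = math.sqrt(2.0 * mse / n_subjects)
    lo, hi = diff_ln - t_crit * se, diff_ln + t_crit * se
    return tuple(100.0 * math.exp(v) for v in (diff_ln, lo, hi))

# Hypothetical inputs: diff 0.05 on ln scale, MSE 0.02, 24 subjects,
# t(0.95, 22) = 1.717
point, lower, upper = ratio_ci_percent(0.05, 0.02, 24, 1.717)
```

The point estimate here is 100*exp(0.05), about 105%, with the 90% limits symmetric around it on the log scale but asymmetric once back-transformed.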

The purpose of this section is to show the calculation of the confidence interval for the variance in an actual example. The statistical data used for this example are given in Table 5.3. In this table, the statistically measured real input concentrations and the associated output reactant transformation degrees are given for five proposed concentrations of the limiting reactant in the reactor feed. Table 5.3 also contains the values of the computed variances for each statistical selection. The confidence interval for each mean value from Table 5.3 has to be calculated according to the procedure established in steps 6-10 from the algorithm shown in Section 5.2.2.1. In this example, the number of measurements for each experiment is small, thus the estimation of the mean value is difficult. Therefore, we... [Pg.346]
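A confidence interval for a variance, as used above, rests on chi-square quantiles. The following dependency-free sketch uses the Wilson-Hilferty approximation for those quantiles and illustrative data; it is not the worked example from Table 5.3.

```python
# Sketch: a (1-alpha) confidence interval for the variance of a normal
# sample is ((n-1)s^2 / chi2_{1-a/2}, (n-1)s^2 / chi2_{a/2}). Chi-square
# quantiles are approximated via Wilson-Hilferty to stay dependency-free.
import math
from statistics import NormalDist

def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile (k df)."""
    z = NormalDist().inv_cdf(p)
    return k * (1.0 - 2.0 / (9.0 * k) + z * math.sqrt(2.0 / (9.0 * k))) ** 3

def variance_ci(data, alpha=0.05):
    n = len(data)
    m = sum(data) / n
    s2 = sum((x - m) ** 2 for x in data) / (n - 1)   # sample variance
    lo = (n - 1) * s2 / chi2_quantile(1 - alpha / 2, n - 1)
    hi = (n - 1) * s2 / chi2_quantile(alpha / 2, n - 1)
    return s2, lo, hi

# Hypothetical replicate concentration measurements
s2, lo, hi = variance_ci([4.9, 5.1, 5.0, 5.3, 4.8, 5.2, 5.0, 4.9])
```

As the excerpt warns, with few measurements per experiment the interval is wide: the upper limit can be several times the point estimate.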

For regression analysis, a nonlinear relation often can be transformed into a linear one by plotting a simple function, such as the logarithm, square root, or reciprocal, of one or both of the variables. Nonlinear transformations should be used with caution because the transformation will convert a distribution from Gaussian to non-Gaussian. Calculations of confidence intervals usually are based on data having a Gaussian distribution. [Pg.553]
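As a concrete instance of such a linearization, an exponential relation becomes linear after taking logarithms. This sketch uses made-up, noise-free data; with real data, the caution above applies, since confidence intervals computed on the log scale assume the logged data are Gaussian.

```python
# Linearization sketch: y = A*exp(b*x) becomes ln(y) = ln(A) + b*x,
# so ordinary linear least squares applies to (x, ln(y)).
import math

def fit_exponential(x, y):
    """Least-squares fit of ln(y) = ln(A) + b*x; returns (A, b)."""
    ly = [math.log(v) for v in y]
    n = len(x)
    xbar, lbar = sum(x) / n, sum(ly) / n
    b = sum((xi - xbar) * (li - lbar) for xi, li in zip(x, ly)) / \
        sum((xi - xbar) ** 2 for xi in x)
    lnA = lbar - b * xbar
    return math.exp(lnA), b

x = [0.0, 1.0, 2.0, 3.0]
y = [2.0 * math.exp(0.7 * xi) for xi in x]   # exact y = 2*e^(0.7x)
A, b = fit_exponential(x, y)
```

With exact data the fit recovers A = 2 and b = 0.7; with noisy data the fit minimizes squared error on the log scale, which weights small y values more heavily than a direct nonlinear fit would.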

The probit method is perhaps the most widely used method for calculating toxicity vs. concentration or dose. As its name implies, the method uses a probit transformation of the data. A probit is a unit of divergence from the mean of a normal distribution equal to one standard deviation. The central value of a probit would be 5.0, representing the median effect of the toxicity test. A disadvantage of the method is that it requires two sets of partial kills. However, a confidence interval is easily calculated and can then be used to compare toxicity results. There are several programs available for the calculation and, as discussed below, they provide comparable results. [Pg.51]
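The core of the transformation can be sketched in a few lines. This is a simplified, unweighted illustration with hypothetical partial-kill data; the programs mentioned above use iteratively reweighted fits and also produce the confidence interval.

```python
# Probit sketch: convert observed effect fractions to probits (5 plus the
# standard-normal quantile), regress probit on log10(concentration), and
# read off the median effect level (LC50) where the fitted probit is 5.
import math
from statistics import NormalDist

def probit(p):
    return 5.0 + NormalDist().inv_cdf(p)

def lc50(concs, fractions):
    x = [math.log10(c) for c in concs]
    y = [probit(p) for p in fractions]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return 10.0 ** ((5.0 - a) / b)       # concentration where probit = 5

# Hypothetical data with two partial kills on each side of the median
est = lc50([1.0, 2.0, 4.0, 8.0], [0.09, 0.31, 0.69, 0.91])
```

Because the fractions here are symmetric about 0.5 and the concentrations are evenly spaced on the log scale, the estimate lands at the geometric center of the dose range, near 2.8.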

The parametric method is much more complicated than the simple nonparametric method and requires computer software. The method is presented here under separate headings for testing of type of distribution, transformation of data, and the estimation of percentiles and their confidence intervals. [Pg.438]

General estimates for the 100α and 100(1 - α) percentiles and their 0.90 confidence intervals can be determined by the following method, provided that the data (original or transformed) fit the Gaussian distribution ... [Pg.441]
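A parametric percentile estimate of this kind can be sketched as follows. The large-sample standard-error formula for the percentile estimator, and the data, are assumptions of this illustration rather than the method cited above.

```python
# Sketch under the Gaussian assumption: the 100p-th percentile is
# xbar + z_p * s, with an approximate confidence interval built from the
# large-sample standard error s*sqrt(1/n + z_p^2 / (2*(n-1))).
import math
from statistics import NormalDist

def percentile_with_ci(data, p, conf=0.90):
    n = len(data)
    m = sum(data) / n
    s = math.sqrt(sum((x - m) ** 2 for x in data) / (n - 1))
    zp = NormalDist().inv_cdf(p)
    est = m + zp * s                                   # percentile estimate
    se = s * math.sqrt(1.0 / n + zp * zp / (2.0 * (n - 1)))
    zc = NormalDist().inv_cdf(0.5 + conf / 2.0)        # 0.90 -> z = 1.645
    return est, est - zc * se, est + zc * se

# Hypothetical reference data (original or suitably transformed)
data = [4.1, 4.5, 4.8, 5.0, 5.2, 5.4, 5.6, 5.9, 6.1, 6.4]
est, lo, hi = percentile_with_ci(data, 0.975)
```

When a transformation was needed to reach normality, the estimate and both limits are computed on the transformed scale and then back-transformed, as in the earlier excerpts.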

This is an issue where consensus now seems to have been achieved. An ingenious theorem due to Fieller (Fieller, 1940, 1944) enables one to calculate a confidence interval for the ratio of two means. The approach does not require transformation of the original data. (Edgar C. Fieller, 1907-1960, is an early example of a statistician employed in the pharmaceutical industry; he worked for the Boots company in the late 1930s and 1940s.) For many years this was a common approach to making inferences about the ratio of the two mean AUCs in the standard bioequivalence experiment (Locke, 1984). [Pg.368]

The current evaluation criteria are based on the two one-sided test approach, also commonly referred to as the Confidence Interval Approach or Average Bioequivalence, which determines whether the average values for the pharmacokinetic parameters measured after the administration of test and reference products are comparable. This approach involves the calculation of a 90% confidence interval about the ratio of the averages of the T and R products for AUC and Cmax values. To establish bioequivalence, the AUC and Cmax of the T product should not be less than 0.80 (80%) or greater than 1.25 (125%) of the R product based on log-transformed data (i.e., a bioequivalence limit of 80 to 125%). For some time prior to the use of log-transformed data, the nontransformed data were used to assess bioequivalence. In 1989, it was realized that log transformation of the data enables a comparison based on the ratio of the two averages rather than the difference between the averages in an additive manner. Moreover, most biological data correspond to a log-normal distribution rather than to a normal distribution. [Pg.108]
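The two one-sided test formulation itself can be sketched directly. The standard error and t quantile below are hypothetical inputs; in practice both come from the crossover ANOVA on the ln-transformed data.

```python
# Two one-sided tests (TOST) sketch on ln-transformed data: reject both
# H1: ratio <= 0.80 and H2: ratio >= 1.25 at the 5% level. This is
# operationally the same as the 90% CI falling inside 80-125%.
import math

def tost(diff_ln, se, t_crit,
         theta_lo=math.log(0.80), theta_hi=math.log(1.25)):
    """True if both one-sided tests reject at the level implied by t_crit."""
    t_lower = (diff_ln - theta_lo) / se   # tests H1: ratio <= 0.80
    t_upper = (theta_hi - diff_ln) / se   # tests H2: ratio >= 1.25
    return t_lower > t_crit and t_upper > t_crit

# Hypothetical ln-scale difference 0.02, SE 0.06, t(0.95, df) = 1.717
ok = tost(diff_ln=0.02, se=0.06, t_crit=1.717)
```

Each one-sided test is run at α = 0.05, which is why the operating confidence interval is 90% rather than 95%.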

Statistical Analysis. Means and 95% confidence intervals were calculated for the recovered monosaccharides (Table III). Data were treated by analysis of variance, fitting diets and subjects. In some cases transformations were needed to stabilize the variances between diets. Significant differences in the composition of feces as a result of different diets are clearly indicated and are consistent with the previous discussion. [Pg.235]

