Tolerance intervals

The sampling variance of the material determined at a certain mass and the number of repetitive analyses can be used to calculate a sampling constant, K, a homogeneity factor, Hg, or a statistical tolerance interval (m ± Δ) which will cover at least 95 % of the population at a probability level of 1 − α = 0.95, so that the expected result lies in the certified range (Pauwels et al. 1994). The value of Δ is computed as Δ = k₂·s_s, a multiple of s_s, the standard deviation of the homogeneity determination. The value of k₂ depends on the number of measurements, n, the proportion, P, of the total population to be covered (95 %) and the probability level 1 − α (0.95). Factors k₂ for two-sided tolerance limits of a normal distribution can be found in various statistical textbooks (Owen 1962). The overall standard deviation, S, as determined from a series of replicate samples of approximately equal masses, is composed of the analytical error, s_A, and an error due to sample inhomogeneity, s_s. As the variances are additive, one can write (Equation 4.2) ... [Pg.132]
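
A hedged sketch of the additive-variance relation this excerpt refers to; the notation (S, s_A, s_s) follows the reconstructed symbols above, and the source's exact Equation 4.2 is not reproduced in the excerpt:

```latex
% Reconstructed sketch; notation assumed from the excerpt, not quoted from the source.
S^{2} = s_{A}^{2} + s_{s}^{2}
\qquad\text{and}\qquad
\Delta = k_{2}\, s_{s}
% S: overall standard deviation of replicate results
% s_A: analytical error, s_s: sampling (inhomogeneity) error
% k_2 = k_2(n, P, 1-\alpha): two-sided normal tolerance factor (Owen 1962)
```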

A statistical tolerance interval for 1 mg sample mass can be calculated according to Equation (4.6). [Pg.134]

Of course, it would be unreasonable to expect that values of the parameters will be exactly the same in all analyses, even if a taxon really exists. We therefore set tolerance intervals - a predetermined limit on the degree to which estimates can deviate from one another - on each value being tested. If any estimate falls outside the interval, we should conclude that analyses... [Pg.43]

We believe that only large scale simulation studies can truly advance the discipline by helping us establish acceptable tolerance intervals. However, individual (parallel) simulations such as those just described can also be useful. These simulations can serve as suitability tests; that is, they can tell the researcher whether a particular research data set can, in principle, answer the questions of interest. In other words, if taxonic and dimensional data are generated to simulate the research data and the researcher finds few differences between the simulated sets (e.g., they yielded the same number of taxonic plots), then there is little sense in analyzing the research data because it is unlikely to give a clear answer. With suitability testing, a modest simulation study (e.g., 20 taxonic and 20 continuous data sets) is preferred to individual simulations because it would yield clearer and more reliable results. [Pg.45]

The grouping approach accepts a certain error at the monthly planning level compared to the exact operations level. At the operations level, all transportation lanes - including location-internal transfers, e.g. in pipelines - have a transportation time > 0. Conceptually, it is required to make a clear cut between planning and operations and to define a planning tolerance interval, e.g. 10% of the total period time - in this case 3 days - within which transportation times are set equal to 0. Otherwise, the planner would always miss 3% of volume in the same planned period due to the transportation time lag of 3 days, leading to complexity in the plan. [Pg.173]

Relative and absolute MIP gap: mixed integer programming parameters for controlling optimization accuracy, e.g. a MIP gap of 1% leads to an algorithm stop if the objective value cannot be improved within a tolerance interval of 1%. [Pg.210]
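
As a minimal, hedged illustration of such a gap parameter, the sketch below solves a hypothetical toy model with PuLP and its bundled CBC solver, assuming a recent PuLP version where the relative gap is exposed as gapRel; this is not the model or tooling used in the source.

```python
import pulp

# Hypothetical toy MIP: maximize 3x + 2y under two simple resource constraints.
prob = pulp.LpProblem("toy_mip", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, cat="Integer")
y = pulp.LpVariable("y", lowBound=0, cat="Integer")
prob += 3 * x + 2 * y          # objective
prob += 2 * x + y <= 10
prob += x + 3 * y <= 15

# gapRel=0.01: the solver may stop once the incumbent is proven within 1% of the best bound.
solver = pulp.PULP_CBC_CMD(msg=False, gapRel=0.01)
prob.solve(solver)
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```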

The basis of all performance criteria are prediction errors (residuals), y_i − ŷ_i, obtained from an independent test set, or by CV or bootstrap, or sometimes by less reliable methods. It is crucial to document from which data set and by which strategy the prediction errors have been obtained; furthermore, a large number of prediction errors is desirable. Various measures can be derived from the residuals to characterize the prediction performance of a single model or a model type. If enough values are available, visualization of the error distribution gives a comprehensive picture. In many cases, the distribution is similar to a normal distribution and has a mean of approximately zero. Such a distribution can be well described by a single parameter that measures the spread. Other distributions of the errors, for instance a bimodal distribution or a skewed distribution, may occur and can for instance be characterized by a tolerance interval. [Pg.126]

Note that z can be larger than the number of objects, n, if for instance repeated CV or bootstrap has been applied. The bias is the arithmetic mean of the prediction errors and should be near zero; however, a systematic error (a nonzero bias) may appear if, for instance, a calibration model is applied to data that have been produced by another instrument. In the case of a normal distribution, about 95% of the prediction errors are within the tolerance interval ±2 SEP. The measure SEP and the tolerance interval are given in the units of y, and are therefore most useful for model applications. [Pg.127]
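
A minimal sketch of these quantities, assuming a NumPy array of prediction errors collected from some validation strategy; the variable names and numbers are illustrative, not from the source:

```python
import numpy as np

def bias_and_sep(residuals):
    """Bias (mean prediction error) and SEP (standard deviation of the prediction errors)."""
    residuals = np.asarray(residuals, dtype=float)
    bias = residuals.mean()
    sep = residuals.std(ddof=1)     # spread of the errors around the bias
    return bias, sep

# Illustrative residuals, e.g. pooled from repeated cross-validation.
errors = np.array([0.12, -0.05, 0.30, -0.22, 0.08, -0.14, 0.02, 0.19, -0.27, 0.05])
bias, sep = bias_and_sep(errors)

# For roughly normal errors, about 95% of them fall within bias ± 2*SEP.
print(f"bias={bias:.3f}, SEP={sep:.3f}, interval=({bias - 2*sep:.3f}, {bias + 2*sep:.3f})")
```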

Hoffmann, D., Kringle, R. Two-sided tolerance intervals for balanced and unbalanced random effects models. J. Biopharm. Stat., 15, 2005, 283-293. [Pg.41]

Inserting tolerance intervals of the chromatographic responses in this equation results in rugged intervals for the factors. By comparing these intervals with the inaccuracy of the settings of the experimental conditions, a statement about the ruggedness of the method is made. The tolerance intervals of the responses are defined by the experimenter, e.g. a 2.5% difference in the area response between two independent analyses is considered acceptable in reference [16], i.e. a value of 0.025 × 0.307 = 0.0076 for the above mean response. The rugged interval for the injection temperature is then obtained from equation (29) ... [Pg.137]

A tolerance interval is an interval that contains at least a specified proportion P of the population with a specified degree of confidence, 100(1 − α)%. This allows a manufacturer to specify that, at a certain confidence level, at least a fraction of size P of the total items manufactured will lie within a given interval. The form of the equation is... [Pg.704]
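
The usual textbook form of a two-sided normal tolerance interval is sketched below; this is the standard expression, not necessarily the exact equation elided from the excerpt:

```latex
% Two-sided tolerance interval for a normal population (standard textbook form):
\bar{x} \pm k\, s, \qquad k = k(n,\, P,\, 1-\alpha)
% \bar{x}, s : sample mean and standard deviation of n items
% k : tolerance factor chosen so the interval covers at least a proportion P
%     of the population with confidence 100(1-\alpha)\%
```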

[Note: for any stated confidence level, the confidence interval about the mean is the narrowest interval, the prediction interval for a single future observation is wider, and the tolerance interval (to contain 95% of the population) is the widest.]... [Pg.705]
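
A small sketch that illustrates this ordering numerically, using the usual normal-theory formulas and Howe's approximation for the two-sided tolerance factor; the sample size and standard deviation are illustrative, not data from the source:

```python
import numpy as np
from scipy import stats

def interval_half_widths(n, s, conf=0.95, coverage=0.95):
    """Half-widths of the mean CI, single-observation PI, and two-sided tolerance interval."""
    alpha = 1.0 - conf
    t = stats.t.ppf(1.0 - alpha / 2.0, n - 1)
    ci = t * s / np.sqrt(n)                      # confidence interval about the mean
    pi = t * s * np.sqrt(1.0 + 1.0 / n)          # prediction interval for one future value
    # Howe's approximation for the two-sided normal tolerance factor
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(alpha, n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
    ti = k * s                                   # tolerance interval for `coverage` of the population
    return ci, pi, ti

print(interval_half_widths(n=20, s=1.0))         # CI < PI < TI, roughly 0.47 < 2.14 < 2.75
```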

[Table excerpt, partially garbled: the tolerance interval approach is listed as easy to calculate but not tied directly to USP 25.] [Pg.713]

Confidence Interval. Confidence intervals are not recommended for evaluating content uniformity data. An approach that is less restrictive than tolerance intervals for evaluating dissolution data, however, is to base the acceptance limits on meeting the second and third stage of the USP 25 dissolution test. Both the second and third stages require that the sample mean be less than... [Pg.717]

Meeting the foregoing criterion should not be interpreted to mean that an individual composite potency assay will meet the in-house limits with high assurance. If this is desired, a prediction interval for a single future observation, or better yet, a tolerance interval, should be used. The validation specialist should be cautioned that additional composite assays might need to be tested to meet either one of these criteria with high confidence. [Pg.718]

For the tolerance interval approach, a 90% coverage is used, since capsules are being evaluated. (See Sec. III.A.) The 90% two-sided tolerance interval to capture 90% of the individual content uniformity results is 97.76 ± 2.406(2.64) = (91.41, 104.11). Since the interval is completely contained within the 85-115% range, the criterion is met. [Pg.720]

[Note: as mentioned in Sec. III.A, if the coverage level associated with tablets (96.7%) was used instead of the coverage level associated with capsules (90.0%), the tolerance factor would be 3.112 and the tolerance interval would be (89.54, 105.98). This, too, would meet the criterion.]... [Pg.720]
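
A quick numerical check of this worked example; the standard deviation of about 2.64 is back-calculated here from the reported bounds and is an assumption, as are the tabulated tolerance factors 2.406 and 3.112 quoted in the text:

```python
# Reproduce the reported content uniformity tolerance intervals (illustrative check only).
mean, s = 97.76, 2.64   # s back-calculated from the reported bounds, not quoted from the source

for coverage, k in [("90.0% (capsules)", 2.406), ("96.7% (tablets)", 3.112)]:
    lower, upper = mean - k * s, mean + k * s
    inside = 85.0 <= lower and upper <= 115.0
    print(f"{coverage}: ({lower:.2f}, {upper:.2f}) within 85-115%? {inside}")
# Expected: roughly (91.41, 104.11) and (89.54, 105.98); both meet the criterion.
```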

Using a criterion based on passing stage 1 of the USP 25 dissolution test, a lower one-sided 90% tolerance interval to capture 99.1% of the individual dissolution values is 93.84 - 2.930(3.47) = 83.67. Using this criterion, dissolution would fail, since the lower bound is less than Q + 5, which is 90. [Pg.722]

To apply the tolerance interval, SDPI, and CuDAL approaches, it is necessary to compute the following variance components. [Pg.723]

To use the tolerance interval approach, the Satterthwaite approximate degrees of freedom (d.f.) is 21.48. The 90% tolerance interval to capture 90% of the individual capsule content uniformity results is 97.50 ± 2.112(4.356) = (88.30, 106.70). The tolerance factor was determined using linear interpolation. This would meet the criterion, since the interval is completely contained within the interval 85-115%. As mentioned in Sec. III.A, if the coverage level associated with tablets (96.7%) was used instead of the coverage level associated with capsules (90.0%), the tolerance factor would be 2.731 and the tolerance interval would be (85.60, 109.40). This would just barely meet the acceptance criterion. [Pg.723]
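
A hedged sketch of the Satterthwaite approximation referred to here, assuming a one-way random-effects layout with between-location and within-location mean squares; the mean squares, counts, and weights below are illustrative placeholders, not the values behind the 21.48 d.f. in the source:

```python
def satterthwaite_df(mean_squares, dfs, weights):
    """Approximate d.f. of a linear combination of mean squares (Satterthwaite)."""
    num = sum(w * ms for w, ms in zip(weights, mean_squares)) ** 2
    den = sum((w * ms) ** 2 / df for w, ms, df in zip(weights, mean_squares, dfs))
    return num / den

# Illustrative one-way random-effects example: L locations, n units per location.
L, n = 10, 3
ms_between, ms_within = 25.0, 8.0          # placeholder mean squares
# Variance of an individual result estimated as a combination of the two mean squares:
weights = [1.0 / n, (n - 1.0) / n]
dfs = [L - 1, L * (n - 1)]
print(satterthwaite_df([ms_between, ms_within], dfs, weights))
```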

A 90% tolerance interval to capture 90% of the individual content uniformity results using the Satterthwaite approximation of 21.56 d.f. is 98.20 ± 2.111(2.407) = (93.12, 103.28). The tolerance interval indicates that the capsules have good content uniformity. [Pg.725]

