Big Chemical Encyclopedia


Assay acceptance criteria

Experience with CE method transfer in the biotech/pharmaceutical industry over the past 10 to 20 years has demonstrated that training is a key element that requires special attention for CE methods. A training video with troubleshooting examples can be very useful. Tips and hints should also be shared during the method transfer process. Other key elements for a successful transfer include selection of the proper testing strategy and assay acceptance criteria. [Pg.390]

Figure 1.12. Calibration curves for loratadine (SCH 29851) and desloratadine (SCH 34117) obtained under unit- and enhanced-resolution conditions. The precision and accuracy under both unit- and enhanced-resolution conditions met the assay acceptance criteria, although the correlation coefficients at enhanced resolution (0.993) were lower than those obtained at unit resolution (0.999). The lower correlation coefficients under enhanced-resolution conditions might have resulted from a slight mass window shift during the long overnight 17-h run. (Reprinted with permission from Yang et al., 2002.)
Calibrator Precision A calibration curve containing a calibrator with unacceptable replicate precision (e.g., CV >20%) may still pass the overall assay acceptance criteria. If there is an insufficient number of acceptable calibrators from LLOQ to ULOQ, or if adjacent calibrators fail precision, consider whether the curve is too flat at the points of failure. If replicate %CV of the signal (OD, in the case of ELISA) is... [Pg.72]
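The replicate-precision check described above can be sketched as follows. This is a minimal illustration, not the text's own procedure; the 20% CV limit comes from the excerpt, while the function names and OD readings are hypothetical.

```python
import statistics

def replicate_cv(replicates):
    """Percent CV of replicate signal values (e.g., ELISA OD readings)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def flag_calibrators(curve, cv_limit=20.0):
    """Return calibrator levels whose replicate %CV exceeds the limit.

    `curve` maps a nominal concentration to its list of replicate signals.
    The limit and the data below are hypothetical.
    """
    return [conc for conc, reps in sorted(curve.items())
            if replicate_cv(reps) > cv_limit]

curve = {
    1.0:  [0.105, 0.098, 0.101],   # tight replicates -> passes
    10.0: [0.410, 0.630, 0.250],   # scattered replicates -> flagged
}
print(flag_calibrators(curve))  # -> [10.0]
```

A curve may still pass overall acceptance with one flagged level, which is exactly why the excerpt recommends inspecting the flagged points individually.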

Assay acceptance criteria for biomarkers do not depend solely on the deliverable method accuracy and precision, as is the general rule in bioanalytical laboratories. Instead, the consideration should be the assay's suitability for the intended applications, considering three major factors: (a) the intended use of the data during various stages of drug development, (b) the nature of the assay methodology and the type of data that the assay provides, and (c) the biological variability of the biomarker that exists within and between populations. Thus, the first factor helps shape the assay tolerance or acceptance criteria for biomarkers. [Pg.157]

The stability of the analyte(s) in matrix during extended freezer storage should be determined by analyzing stored stability QC samples at appropriately selected time intervals and at temperatures that reflect the intended storage conditions and anticipated storage periods for study samples. For many applications, the compound is determined to be stable as long as the calculated concentration of the analyte is within 15% of the nominal concentration, or the concentration that was established for the same batch of QCs when analyzed immediately after preparation (time zero). Freshly prepared QC samples, or QC samples prepared and stored within an established period of stability, should be used as analytical QCs in the same set that the stability QCs are analyzed to confirm run acceptance. If the analytical QC samples do not meet assay acceptance criteria, the run should be rejected and the stability interval should be repeated. Incurred samples or sample pools may also be analyzed for assessment of stability in a similar manner. [Pg.547]
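The ±15% stability acceptance check is simple enough to express directly. A minimal sketch, assuming the 15% default from the text; the function name and the concentrations are hypothetical:

```python
def stability_ok(measured, reference, tolerance_pct=15.0):
    """Accept a stability QC if its calculated concentration is within
    tolerance_pct of the reference value (nominal concentration, or the
    time-zero concentration established for the same QC batch)."""
    deviation_pct = abs(measured - reference) / reference * 100.0
    return deviation_pct <= tolerance_pct

print(stability_ok(46.1, 50.0))   # -7.8% deviation  -> True
print(stability_ok(41.0, 50.0))   # -18.0% deviation -> False
```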

The details of the assessment of stability data are under intense discussion within the scientific community. A majority of laboratories evaluate data with acceptance criteria relative to the nominal concentration of the spiked sample. The rationale for this is that it is not feasible to introduce more stringent criteria for stability evaluations than the assay acceptance criterion itself. Another common approach is to compare data against a baseline concentration (or day-zero concentration) of a bulk preparation of stability samples established by repeated analysis, either during the accuracy and precision evaluations or by other means. This evaluation eliminates any systematic errors that may have occurred in the preparation of the stability samples. A more statistically rigorous approach to stability data evaluation is to use confidence intervals or to perform trend analysis on the data [24]. In this case, when the observed concentration or response of the stability sample falls below the lower confidence limit (as set a priori), the data indicate a lack of analyte stability under the conditions evaluated. [Pg.102]

For autosampler precision, 10 consecutive 10-μL injections of an ethylparaben solution (20 μg/mL) are used (Figure 6). A Waters Symmetry column packed with 5-μm particles is used. The manufacturer's specification for peak area precision at 0.5% RSD is adopted as the acceptance criterion. This stringent precision criterion is required for precise assay testing of drug substances typically specified at 98-102% purity. The linearity test is performed by single injections of 5, 10, 40, and 80 μL of the... [Pg.296]
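The RSD calculation behind this acceptance criterion can be sketched as below. The 0.5% limit comes from the excerpt; the peak areas are hypothetical values standing in for the 10 replicate injections.

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation (sample stdev / mean) in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical peak areas for 10 consecutive 10-uL ethylparaben injections.
areas = [152340, 152510, 152120, 152460, 152290,
         152380, 152550, 152200, 152430, 152310]

rsd = percent_rsd(areas)
print(f"RSD = {rsd:.3f}%  pass = {rsd <= 0.5}")
```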

The results obtained from both analysts are grouped together to determine whether this additive precision is acceptable. For example, if each analyst prepared two sample preparations of API at the target concentration for intermediate precision, then a total of four values are pooled together (additive precision), as stated in Table 9-3 (Assay: relative precision < 2.0%, n > 4). In addition to the additive precision requirement, some laboratories also include an acceptance criterion for the mean value (for example, an absolute mean difference < 2%). For example, if analysts 1 and 2 prepare three sample preparations each, then additive precision is calculated from a total of six values (three from each analyst). In addition, the mean value obtained by analyst 1 (n = 3) is compared against the mean value obtained by analyst 2 (n = 3), and must pass an absolute difference (between the two means) of <2.0%. [Pg.487]
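The two-part check described above (pooled RSD plus absolute mean difference) can be sketched as follows. The 2.0% limits come from the excerpt; the function name and assay values are hypothetical.

```python
import statistics

def intermediate_precision_ok(analyst1, analyst2,
                              rsd_limit=2.0, mean_diff_limit=2.0):
    """Pool both analysts' assay results (additive precision) and also
    compare the two analysts' means, per the two criteria in the text."""
    pooled = analyst1 + analyst2
    pooled_rsd = 100.0 * statistics.stdev(pooled) / statistics.mean(pooled)
    mean_diff = abs(statistics.mean(analyst1) - statistics.mean(analyst2))
    return pooled_rsd <= rsd_limit and mean_diff <= mean_diff_limit

a1 = [99.8, 100.4, 99.5]     # % label claim, analyst 1 (hypothetical)
a2 = [100.9, 100.2, 101.1]   # % label claim, analyst 2 (hypothetical)
print(intermediate_precision_ok(a1, a2))  # -> True
```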

For each mass observed in the tested range, the total %CV must be less than 10%. [These are hypothetical acceptance criteria. Note that the precision acceptance criterion can be on the standard deviation or some other metric of variability, preferably a metric that is somewhat consistent in value over the range of the assay.]... [Pg.12]

For each assay, the coefficient of determination must be greater than 0.975. [These are hypothetical acceptance criteria. Other metrics for linearity could be used for the acceptance criterion. Note that for precise assays, a significant lack-of-fit may not be a meaningful lack-of-fit, due to a slight but consistent curve or other artifact in the data. If lack-of-fit is used for the acceptance criterion, the requirement should be placed on absolute deviation or percent departure from linearity rather than statistical significance. Often the acceptance criteria are based on design criteria for the development of the method or development data.]... [Pg.12]

During prestudy validation, the performance of the assay with respect to specificity and selectivity is confirmed with the most relevant compounds and matrices. Selectivity is expressed as acceptable recovery, using the same criteria that are applied during the assessment of accuracy. The recommended target acceptance criterion for selectivity is that acceptable recovery (e.g., 80-120% relative to buffer control) is obtained in at least 80% of the matrices evaluated. [Pg.90]
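The selectivity criterion above reduces to a simple proportion test. A minimal sketch using the 80-120% recovery window and 80%-of-matrices threshold from the excerpt; the recovery values themselves are hypothetical.

```python
def selectivity_pass(recoveries_pct, low=80.0, high=120.0, min_fraction=0.8):
    """Selectivity passes if recovery (relative to buffer control) falls
    within [low, high] in at least min_fraction of the matrices tested."""
    in_range = sum(low <= r <= high for r in recoveries_pct)
    return in_range / len(recoveries_pct) >= min_fraction

# Hypothetical % recovery in 10 individual matrix lots.
lots = [95.2, 104.7, 88.1, 117.3, 101.0, 79.4, 110.5, 93.8, 99.9, 85.5]
print(selectivity_pass(lots))  # 9 of 10 in range -> True
```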

There may be situations during sample analysis in which the LLOQ or the ULOQ standard point is removed due to a technical error or another a priori determined acceptance criterion. The practical outcome is a truncated standard curve; that is, in the case where the LLOQ standard is removed, the next highest level standard becomes the LLOQ for that run. This revised standard curve should then be used in the evaluation of the study samples from that particular run, rejecting (for repeat analysis) any study samples that read below the new LLOQ. In general, the study samples that fall between the revised LLOQ and the original assay LLOQ are repeated in a separate run to avoid having samples reported against two LLOQ values in the... [Pg.98]

The properties of such a decision rule need to be examined carefully from the perspective of both the sponsor/laboratory and client [28]. Indeed, the probability of accepting a run with respect to the quality level π of the assay depends on its performance criteria, bias (δM) and precision (σM), as seen in Equation 5.2. As performance of the assay deteriorates, a smaller proportion of results is expected to fall within the pre-specified acceptance limits. Then, from both the sponsor and regulatory perspectives, it would be better to have an acceptance criterion that is more likely to reject the runs when the expected proportion π of results within the acceptance limits... [Pg.123]

As for immunoassays for pharmaceutical proteins, in-study validation of biomarker assays should include one set of calibrators to monitor the standard curve as well as a set of QC samples at three concentrations analyzed in duplicate for the decision to accept or reject a specific run. The recommended acceptance criterion is the 4-6-30 rule, but even more lenient acceptance criteria may be justified based on statistical rationale developed from experimental data [14]. [Pg.625]

The data comparison described above in Figs. 13.2 and 13.3 may be performed statistically in several ways. The statistical approach takes data variability into account when setting limits. Therefore, a single acceptance criterion for OOT identification can be set for different types of assays. Three such procedures for normally distributed data are described in a review by the PhRMA CMC Statistics and Stability Expert Teams [3]. Each of these approaches has its own advantages and disadvantages, a summary of which is provided in the paragraphs below. [Pg.267]

Statistical methods provide an approach that yields quantitative estimates of the random uncertainties in the raw data measurements themselves and also in the conclusions drawn from them. Statistical methods do not detect systematic errors (e.g., bias) present in an assay, nor do they give a clear-cut answer to the question of whether or not a particular experimental result is acceptable. An acceptability criterion must be chosen a priori, based on the underlying assumption that the data follow a Gaussian (normal) distribution. A common acceptability criterion is the 95% confidence level, corresponding to a p-value of 0.05. Because trace quantitative analyses work with small data sets, as opposed to the infinitely large data sets required for idealized statistical theory, use is made of tools and tests based on the t-distribution (Student's t distribution), developed specifically for the statistical analysis of small data sets. [Pg.453]

According to ICH Q1E, an appropriate approach to retest period estimation is to analyze a quantitative attribute (e.g., assay, degradation products) by determining the earliest time at which the 95% confidence limit for the mean intersects the proposed acceptance criterion. For an attribute known to decrease with time, the lower one-sided 95% confidence limit should be compared to the acceptance criterion. For an attribute known to increase with time, the upper one-sided 95% confidence limit should be compared to the acceptance criterion. For an attribute that can either increase or decrease, or whose direction of change is not known, two-sided 95% confidence limits should be calculated and compared to the upper and lower acceptance criteria. If the data show that the batch-to-batch variability is small, it may be worthwhile to combine the data into the overall estimate. The appropriate statistical modeling is used to analyze the data. ... [Pg.489]

Regression analysis is considered an appropriate approach for evaluating the stability data of a quantitative attribute and for establishing a retest period or shelf life. To estimate the retest period or shelf life, it is acceptable to analyze a quantitative attribute (e.g., assay, degradation product) by determining the earliest time at which the 95% confidence limit for the mean intersects the proposed acceptance criterion. [Pg.501]

Another criterion for linearity is that the y-intercept of the calibration curve (after the response of the blank has been subtracted from each standard) should be close to 0. An acceptable degree of closeness to 0 might be ≤2% of the response for the target value of analyte. For the assay of impurities, which are present at concentrations lower than that of the major component, an acceptable value of R2 might be ≥0.98 for the range 0.1 to 2 wt%, and the y-intercept should be ≤10% of the response for the 2 wt% standard. [Pg.84]

Biological assays, in particular potency assays, are meant to demonstrate a relationship between the product and the desired biological effect. The correlation between the activity measured by the potency assay and the resulting clinical effect is an important criterion in both dosing and the determination of efficacy. International or other accepted reference standards, when available, should be incorporated in the assay. [Pg.169]

TEER is usually measured at the beginning (0 h) and end of the assay (1 h). Monolayers with TEER values below an acceptance value should be rejected (for example, TEER < 250 Ω·cm² at 0 h could be used as a criterion to exclude cell monolayers from the experiment). TEER acceptance criteria for the end of the assay should also be defined (for example, a change in TEER at 1 h of more than ±30% from the TEER value at time zero means that the experiment cannot be accepted). [Pg.446]
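Both TEER gates can be combined into one acceptance function. A minimal sketch using the example thresholds from the text (250 Ω·cm² at 0 h, ±30% drift at 1 h); the TEER readings are hypothetical.

```python
def monolayer_ok(teer_0h, teer_1h, min_teer=250.0, max_drift_pct=30.0):
    """Reject a monolayer that starts below min_teer (ohm*cm^2) at 0 h,
    or whose TEER changes by more than max_drift_pct over the assay."""
    if teer_0h < min_teer:
        return False
    drift_pct = abs(teer_1h - teer_0h) / teer_0h * 100.0
    return drift_pct <= max_drift_pct

print(monolayer_ok(410.0, 365.0))   # ~11% drop  -> True (accept)
print(monolayer_ok(410.0, 240.0))   # ~41% drop  -> False (reject)
print(monolayer_ok(180.0, 175.0))   # low at 0 h -> False (reject)
```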

A common criterion for accepting a run/plate is based on the proportion of QC sample results that fall within pre-specified limits. Adequate numbers of QC samples are prepared at different concentration levels, typically at the low, middle, and high levels of the assay dynamic range. These samples are analyzed in each run/plate and the concentration values are estimated. The proportion of results falling within the prespecified acceptance limits is then calculated at each concentration level and also across the entire concentration range. [Pg.122]

Edge effects and other plate trends are investigated by a whole-plate imprecision experiment (Fig. 7.2). This is done by placing the same solution of standard/quality control samples in all the wells of the plate. The standard/QC solution used should produce a reasonable response value in the assay, for example, >1.0 optical density (OD) for a colorimetric method. The OD response values for the whole plate and for the individual columns and rows are averaged and CVs calculated. Trends in the CV data indicate the potential for edge and other plate effects. The acceptance criterion for whole-plate precision will depend on the specific requirements of the study, but a suggested criterion is a CV of less than 5%. [Pg.185]
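The whole-plate, row, and column CVs described above can be computed as below. The 5% suggested criterion comes from the excerpt; the 3x4 miniature plate of OD values is hypothetical (a real plate would be 8x12).

```python
import statistics

def percent_cv(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def plate_cvs(plate):
    """%CV for the whole plate, each row, and each column of a uniform
    plate (same solution in every well). `plate` is a list of rows."""
    whole = percent_cv([od for row in plate for od in row])
    rows = [percent_cv(row) for row in plate]
    cols = [percent_cv(col) for col in zip(*plate)]
    return whole, rows, cols

plate = [[1.02, 1.05, 1.01, 1.04],     # hypothetical OD readings
         [1.03, 1.06, 1.02, 1.05],
         [1.01, 1.04, 1.00, 1.03]]

whole, rows, cols = plate_cvs(plate)
print(f"whole-plate CV = {whole:.2f}%  pass = {whole < 5.0}")
```

A systematic rise in the row or column CVs toward the plate perimeter would indicate an edge effect even when the whole-plate CV passes.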

Values of 20% (25% at the lower limit of quantification [LLOQ]) are recommended as default acceptance criteria for accuracy and inter-batch precision of ligand-binding assays in practical use. Precision and accuracy should be established by analyzing four sets of QC samples at the LLOQ, low, medium, and high levels in duplicate in six different batches during method development. In addition, a second proposed criterion for method acceptance requires that the sum of the inter-batch precision and the absolute value of the accuracy (bias) be <30%. During practical application of such assays, the 4-6-30 rule may be applied - that is, for each batch, four out of six QC samples must be within 30% of the nominal concentration, and the two failing QC samples may not be at the same level. [Pg.1575]
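The 4-6-30 rule described above can be sketched as a run-acceptance function. The 30% tolerance and the "not both at the same level" condition come from the excerpt; the function name and the percent-deviation data are hypothetical.

```python
def run_accepted_4_6_30(qc_deviations_pct, tolerance=30.0):
    """4-6-30 rule sketch: of six QC results (two each at low, mid, high),
    at least four must be within tolerance% of nominal, and two failures
    must not both occur at the same concentration level.

    Input maps a level name -> list of two percent deviations from nominal.
    """
    fails_per_level = {lvl: sum(abs(d) > tolerance for d in devs)
                       for lvl, devs in qc_deviations_pct.items()}
    total_fails = sum(fails_per_level.values())
    return total_fails <= 2 and all(n < 2 for n in fails_per_level.values())

print(run_accepted_4_6_30({"low": [12.0, -35.0],
                           "mid": [8.0, 15.0],
                           "high": [-22.0, 31.0]}))   # two fails, split  -> True
print(run_accepted_4_6_30({"low": [33.0, -35.0],
                           "mid": [8.0, 15.0],
                           "high": [-2.0, 1.0]}))     # both fails at low -> False
```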

Primary assays are used for the first round of testing drugs which may or may not have activity. It is important that secondary, or validation, assays be established. These secondary screens will allow a second criterion (or more) for accepting a compound as active. These assays are often related to the primary assay, but contain more of the entire cellular system. Often, secondary assays utilize live cells in which the enzyme and/ or receptor are crucial for a specific cellular response. [Pg.41]

In any event, a strategy for demonstrating selectivity should be developed and included as a part of every method validation, and the results should be presented and discussed in the final validation report (Section 10.3.3). The amount of tolerable interference will depend on the analytical objectives and on the needs of the end user of the data. For a bioanalytical assay, background interference of less than 20% of the response at the LLOQ is often considered to be acceptable; this criterion is related to the recommendation (Viswanathan 2007) that the analyte response at the LLOQ should be at least five times the response due to blank matrix. Laboratory SOPs will dictate how this is measured in practice. If just a single measurement was made for each, the ratio of the area (or height) of the LLOQ vs. that of the blank is used, but if replicate measurements were made for the calibration curve at each concentration (as in bioanalytical practice), the lowest of the responses at the LLOQ would most likely be used for this purpose. [Pg.542]
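The five-times-blank recommendation can be checked as below, using the lowest replicate LLOQ response per the excerpt. The factor of 5 comes from the text; the function name and the peak-area values are hypothetical.

```python
def lloq_signal_ok(lloq_responses, blank_response, factor=5.0):
    """Check the LLOQ-to-blank response ratio. With replicate LLOQ
    measurements, the lowest response is used, per the text."""
    return min(lloq_responses) >= factor * blank_response

# Hypothetical peak areas: replicate LLOQ injections vs a blank-matrix injection.
print(lloq_signal_ok([5400, 5150, 5600], blank_response=980))   # 5150/980 ~ 5.3 -> True
print(lloq_signal_ok([4300, 4700, 4550], blank_response=980))   # 4300/980 ~ 4.4 -> False
```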

