Big Chemical Encyclopedia

False positive decision

If an analytical test yields a value x < x0, the customer may reject the product as defective. Owing to the variation in analytical results and their evaluation by means of statistical tests, however, a product of good quality may be rejected, or a defective product approved, according to the situations shown in Table 4.2 (see Sect. 4.3.1). Manufacturer and customer therefore have to agree upon statistical limits (critical values) which minimize false-negative decisions (errors of the first kind, which characterize the manufacturer's risk), false-positive decisions (errors of the second kind, which represent the customer's risk), and the cost of testing. In principle, analytical precision and statistical certainty can be increased almost without limit, but this would be reflected in high costs for both manufacturer and customer. [Pg.116]
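The trade-off described above can be made concrete with a small numeric sketch. The helper functions and the example numbers below (x0 = 100, sd = 2, n = 4, 5% manufacturer's risk) are assumptions for illustration, not values from the text: a one-sided critical value x_c is chosen so that a good product with true content x0 is wrongly rejected with probability alpha, and the complementary risk that a defective product is accepted follows from the same normal model.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_ppf(p: float) -> float:
    """Inverse standard normal CDF by bisection on normal_cdf."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def critical_value(x0: float, sigma: float, n: int, alpha: float) -> float:
    """Lower decision limit x_c: a good product with true content x0 is
    wrongly rejected (manufacturer's risk) with probability alpha when
    the mean of n analyses falls below x_c."""
    return x0 + normal_ppf(alpha) * sigma / math.sqrt(n)

def acceptance_prob(x_true: float, x_c: float, sigma: float, n: int) -> float:
    """Probability that a product with true content x_true is accepted,
    i.e. the mean of n analyses falls at or above x_c; for a defective
    product this is the customer's risk."""
    return 1.0 - normal_cdf((x_c - x_true) / (sigma / math.sqrt(n)))

# Assumed example: spec value 100, sd of a single analysis 2,
# n = 4 replicate analyses, manufacturer's risk 5%.
x_c = critical_value(100.0, 2.0, 4, 0.05)   # about 98.36
beta = acceptance_prob(96.0, x_c, 2.0, 4)   # customer's risk for a truly
                                            # defective lot at 96
```

Raising n shrinks sigma/sqrt(n), so both risks can be reduced at the same critical value, but only at the increased test expenditure the excerpt warns about.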

Finally, a third type of argument is possible, especially for disciplines that use and depend on costly experimental techniques (e.g., the 2-year bioassays of toxicology). The concern about false positive decisions implied by the requirement for small values of alpha, regardless of the consequences, probably reflects where toxicologists collectively strike a balance between the need for a broad-front advance and the scarcity of resources. [Pg.246]

Type I error (alpha error) An incorrect decision resulting from rejecting the null hypothesis when the null hypothesis is true. A false positive decision. [Pg.182]
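A quick Monte Carlo check makes this definition tangible: when the null hypothesis really is true, a test run at the 5% level should yield a false positive decision in roughly 5% of repetitions. The setup below (a two-sided z-test with known sigma on simulated normal data) is an illustrative sketch, not from the text.

```python
import math
import random

def z_test_rejects(sample, mu0, sigma):
    """Two-sided z-test at the 5% level with known sigma:
    does it reject H0 (true mean == mu0)?"""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n) / sigma
    return abs(z) > 1.96  # critical value for alpha = 0.05

random.seed(0)
trials = 4000
# H0 is true in every trial, so every rejection is a Type I error.
false_positives = sum(
    z_test_rejects([random.gauss(10.0, 1.0) for _ in range(25)], 10.0, 1.0)
    for _ in range(trials)
)
alpha_hat = false_positives / trials  # should land close to 0.05
```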

False rejection decision error (false positive decision error; Type I decision error): probability α; risk, or error rate, 100 × α %. When the alternative condition is true (Ha: μ < Ca), rejection is the correct decision, and the probability of making a correct decision is (1 − β)... [Pg.28]

The decision as to the significance of the effects should not be left to a mere algorithm. According to Jimidar [18], a 1% probability of error should be selected in order to reduce the possibility of false positive decisions during the determination of the significance of effects. [Pg.662]

For the reasons described, no specific test will be advanced here as being superior, but Huber's model and the classical one for z = 2 and z = 3 are incorporated into program HUBER. The authors are of the opinion that the best recourse is to openly declare all values and do the analysis twice: once with the presumed outliers included, and once with them excluded from the statistical analysis; in the latter case the excluded points should nonetheless be included in tables (in parentheses) and in graphs (different symbol). Outliers should not be labeled as such solely on the basis of a fixed (statistical) rule; the decision should primarily reflect scientific experience. The justification must be explicitly stated in any report; cf. Sections 4.18 and 4.19. If the circumstances demand that a rule be set down, it is best to use a robust model such as Huber's; its sensitivity for the problem at hand, and the typical rate for false positives, should be investigated by, for example, a Monte Carlo simulation. [Pg.59]

A new tumor marker is evaluated using the same criteria used for many diagnostic tests (i.e., sensitivity, specificity, and accuracy). Diagnostic sensitivity and specificity are best represented by a receiver operating characteristic (ROC) curve, constructed by plotting the true-positive rate against the false-positive rate at various decision levels. As a test improves in its diagnostic performance, its curve shifts upward and to the left, as the true-positive rate increases and the false-positive rate decreases. [Pg.186]
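The construction described above can be sketched in a few lines; the function name and the toy data are invented for illustration. Each candidate decision level yields one (false-positive rate, true-positive rate) point of the ROC curve:

```python
def roc_points(scores, labels):
    """One (FPR, TPR) point per candidate decision level.

    scores: marker values (higher = more indicative of disease)
    labels: 1 for diseased, 0 for healthy
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    # Sweep decision levels from strictest to most permissive.
    for cut in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy data: 3 diseased and 2 healthy subjects.
pts = roc_points([9.0, 8.0, 7.0, 6.0, 5.0], [1, 1, 0, 1, 0])
```

A better marker pushes these points toward the upper-left corner (high true-positive rate at low false-positive rate), which is the shift the excerpt describes.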

False positive and false negative decisions result from unreliabilities that can be attributed to the uncertainties of quantitative tests. According to Fig. 4.9, the assignment of test values to the distributions p(y_LSP) or p(y_SCR), respectively, may be affected by the risks of error α and β (see Sect. 4.3.1), which correspond to false positive (α) and false negative (β) test results. [Pg.114]

Decision rule (the number of false positives one will accept). [Pg.122]

Another approach to controlling the false positive rate in carcinogenicity studies was proposed by Haseman (1983). Under this rule, a compound would be declared a carcinogen if it produced an increase significant at the 1% level in a common tumor or an increase significant at the 5% level in a rare tumor. A rare neoplasm was defined as one occurring with a frequency of less than 1% in control animals. The overall false positive rate associated with this decision rule was found to be not more than 7-8%, based on control tumor incidences from NTP studies in rats and mice. This false positive rate compares favorably with the expected rate of 5%, which is the probability at which one would erroneously... [Pg.313]
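Haseman's rule is simple enough to state as code. This is a sketch of the rule as summarized above; the function names and the example p-values are invented for illustration.

```python
def site_positive(p_value: float, control_incidence: float) -> bool:
    """Haseman (1983) rule for one tumor site: significant at the 1%
    level for a common tumor, or at the 5% level for a rare tumor
    (control incidence below 1%)."""
    rare = control_incidence < 0.01
    return p_value < (0.05 if rare else 0.01)

def declared_carcinogen(site_results) -> bool:
    """site_results: iterable of (p_value, control_incidence) pairs,
    one per tumor site examined."""
    return any(site_positive(p, c) for p, c in site_results)

# A p-value of 0.03 flags a rare tumor but not a common one.
flag_rare = site_positive(0.03, 0.005)    # True
flag_common = site_positive(0.03, 0.20)   # False
```

The overall 7-8% rate quoted in the excerpt exceeds the per-site thresholds roughly because many tumor sites are tested in each study, so the individual error rates compound.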

As in any classification problem, there is a tradeoff between the rate of recall, or proportion of correct substructures detected, and the reliability, or avoidance of false positive assertions. It is rather the exception than the rule for an observation to have a single, unequivocal explanation. When reasonable alternative interpretations are possible, a decision must be made about what to report. At one extreme, all possibilities could be asserted, ensuring 100% recall (i.e. no substructure which is actually present will fail to be detected) at the cost of a high rate of false positives. [Pg.352]

Because of the importance of their decisions and the need for statistical justification of their results, monitoring statisticians and chemometricians are being asked by their customers to use hypothesis testing, with its attention to false positives and false negatives. [Pg.184]

Consider with stakeholders the uncertainties in risks, costs and benefits, and the consequences of false positives and false negatives when establishing decision rules. [Pg.167]

The decision about which error is more important is not a scientific question that can be resolved through technical analysis. It is a value choice. Of course, better scientific knowledge reduces the probability of making incorrect inferences about health effects. But even in situations of high certainty, the choice between false-positive and false-negative errors remains. And people invariably weigh the trade-offs differently. [Pg.68]

Finally, we hope to see more validation studies conducted to compare any new search method with the reference exhaustive search (of course, on a smaller validation virtual space of 10^4-10^6). Only through this type of rigorous validation study can one truly probe the rates of false positives and false negatives, as well as the fold increase in search speed. This in turn allows end users to make informed decisions on which search method is the best match for their specific tasks. [Pg.274]

The decision on what type of screen to use in FBDD is affected by many different factors: availability of protein for screening, compound selection, throughput, turnaround time, and the rates of false positives and negatives. The resolution of these questions from target assessment directly impacts the possibilities in this section. If there is not sufficient protein for a biophysical screen, a biochemical screen is the only choice. Protein that is not stable for... [Pg.19]


