Big Chemical Encyclopedia

Equal error rate

The number of features in the maximal feature vector, on the order of hundreds of thousands, is too large to be useful in practice, owing to issues such as data transmission over the network, data storage in databases, and template comparison in smart-card processors or standalone biometric devices. To reduce the number of features, we look for a feature vector that minimizes the sample equal error rate determined on the available database of iris images. [Pg.268]

Keywords— Speaker Verification, Gaussian Mixture Model, Equal Error Rate. [Pg.560]

Imposter Model Client Model Equal Error Rate ... [Pg.562]
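
Since the excerpts above use the equal error rate as a selection or evaluation criterion, here is a minimal sketch (Python, not from any of the cited sources) of how an EER is typically estimated from genuine (client) and impostor scores; the synthetic score distributions and the function name are illustrative assumptions.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the equal error rate: the operating point at which the false
    acceptance rate (FAR) and false rejection rate (FRR) are equal. Higher
    scores are assumed to indicate a genuine (client) trial."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))            # threshold where FAR and FRR cross
    return (far[i] + frr[i]) / 2.0, thresholds[i]

# Toy example with synthetic verification scores (illustrative only)
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 1000)            # client trials
impostor = rng.normal(0.0, 1.0, 1000)           # impostor trials
eer, threshold = equal_error_rate(genuine, impostor)
print(f"EER ~ {eer:.3f} at threshold {threshold:.2f}")
```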

In some instances, distinct polymorphic forms can be isolated that do not interconvert when suspended in a solvent system, but that also do not exhibit differences in intrinsic dissolution rates. One such example is enalapril maleate, which exists in two bioequivalent polymorphic forms of equal dissolution rate [139], and therefore of equal free energy. When solution calorimetry was used to study the system, it was found that the enthalpy difference between the two forms was very small. The difference in heats of solution of the two polymorphic forms obtained in methanol was found to be 0.51 kcal/mol, while the analogous difference obtained in acetone was 0.69 kcal/mol. These results, obtained in two different solvent systems, are probably equal to within experimental error. It may be concluded that the small difference in lattice enthalpies (ΔH) between the two forms is compensated by an almost equal and opposite small difference in the entropy term (−TΔS), so that the difference in free energy (ΔG) is not sufficient to lead to observable differences in either dissolution rate or equilibrium solubility. The bioequivalence of the two polymorphs of enalapril maleate is therefore easily explained thermodynamically. [Pg.369]
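
To make the compensation argument concrete, here is a small illustrative calculation assuming ΔG ≈ 0 near ambient temperature (298 K is an assumed value, not given in the excerpt):

```python
# Illustrative check of the enthalpy-entropy compensation argument above,
# assuming delta_G ~ 0 so that delta_S = delta_H / T.
T = 298.0                                # K (assumed ambient temperature)
for dH in (0.51, 0.69):                  # kcal/mol, from methanol and acetone
    dS = dH * 1000.0 / T                 # cal/(mol*K), from 0 = dH - T*dS
    print(f"dH = {dH} kcal/mol -> compensating dS ~ {dS:.1f} cal/(mol*K)")
```

Entropy differences of one to two cal/(mol·K) are small, consistent with the conclusion that neither dissolution rate nor solubility differs observably between the forms.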

The numerator of Bayes' theorem merely describes cell a (the true-positive results). The probability of being in cell a is equal to the prevalence times the sensitivity, where P(D+) is the prevalence (the probability of being in the affected column) and where P(T+|D+) is the sensitivity (the probability of being in the top row, given the fact of being in the affected column). The denominator of Bayes' theorem consists of two terms, the first of which once again describes cell a (the true-positive results) and the second of which describes cell b (the false-positive results), in which the false-positive error rate, or P(T+|D−), is multiplied by the prevalence of non-affected animals, or... [Pg.954]
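
The positive predictive value implied by this numerator and denominator can be written out directly; the sketch below uses hypothetical prevalence, sensitivity and specificity values chosen only for illustration.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem for a 2x2 diagnostic table:
    P(D+|T+) = P(T+|D+)P(D+) / [P(T+|D+)P(D+) + P(T+|D-)P(D-)].
    The numerator is cell a (true positives); the second denominator term is
    cell b, the false-positive rate times the prevalence of non-affected animals."""
    cell_a = sensitivity * prevalence
    cell_b = (1.0 - specificity) * (1.0 - prevalence)
    return cell_a / (cell_a + cell_b)

# Hypothetical values, for illustration only
print(positive_predictive_value(prevalence=0.10, sensitivity=0.90, specificity=0.95))
```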

The considerations so far are based on the presumption that the type I error rate is divided equally across all of the comparisons. This does not always make sense and indeed it is not a requirement that it be done in this way. For example, with two comparisons there would be nothing to prevent having a 4 per cent type I error rate for one of the comparisons and a 1 per cent type I error rate for the other, providing this methodology is clearly set down in the protocol. We will see a setting below, that of interim analysis, where it is usually advantageous to divide up the error rate unequally. Outside of interim analysis, however, it is rare to see anything other than an equal subdivision. [Pg.152]
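
As a quick check on the arithmetic of such an unequal split, a short sketch (the 4 per cent / 1 per cent split is the example from the paragraph above; independence of the two tests is an added assumption for the exact figure):

```python
# Splitting a 5% type I error rate unequally across two comparisons (4% + 1%)
alphas = [0.04, 0.01]
bonferroni_bound = sum(alphas)                                # 0.05, holds regardless of dependence
exact_if_independent = 1 - (1 - alphas[0]) * (1 - alphas[1])  # ~0.0496 if the tests are independent
print(bonferroni_bound, exact_if_independent)
```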

Pocock (1977) developed a procedure which divides the type I error rate of 5 per cent equally across the various analyses. In the example above, with two interim looks and a final analysis, Bonferroni would suggest using an adjusted significance level of 0.017 (= 0.05/3). The Pocock method, however, gives the correct adjusted significance level as 0.022, and this exactly preserves the overall 5 per cent type I error rate. [Pg.153]

One area that we briefly mentioned was interim analysis, where we are looking at the data in the trial as they accumulate. The method due to Pocock (1977) was discussed to control the type I error rate across the series of interim looks. The Pocock methodology divided up the 5 per cent type I error rate equally across the analyses. So, for example, for two interim looks and a final analysis, the significance level at each analysis is 0.022. For the O'Brien and Fleming (1979) method most of the 5 per cent is left over for the final analysis, while the first analysis is conducted at a very stringent level; the adjusted significance levels are 0.00052, 0.014 and 0.045. [Pg.213]
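
A rough Monte Carlo sketch (not from the source) of why a nominal two-sided level of 0.022 at each of three equally spaced looks preserves an overall type I error rate close to 5 per cent; the per-stage sample size and simulation settings are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_per_stage, n_stages, n_sims = 100, 3, 20000
crit = norm.ppf(1 - 0.022 / 2)                 # Pocock nominal level 0.022, two-sided (~2.29)

rejections = 0
for _ in range(n_sims):
    data = rng.normal(0.0, 1.0, n_per_stage * n_stages)   # null hypothesis true
    for k in range(1, n_stages + 1):
        n = k * n_per_stage
        z = data[:n].mean() * np.sqrt(n)                   # z-statistic at look k
        if abs(z) > crit:
            rejections += 1
            break
print(f"Empirical overall type I error: {rejections / n_sims:.3f} (target 0.05)")
```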

Restriction of the information content due to the condition σm·Qm > 1. The upper limit is given by νmax = ln σm/(1 − qm). It corresponds roughly to the reciprocal of the average error rate (1 − qm), as long as σm is sufficiently larger than unity. If the information content νm of the wild type approaches the upper limit νmax, Qm becomes approximately equal to 1/σm. The proportion of wild type in the total population is then very small ... [Pg.129]
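
Taking the formula at face value, a two-line calculation of the maximum information content; the superiority and accuracy values below are illustrative only.

```python
import numpy as np

def max_information_content(sigma_m, q_m):
    """Upper limit nu_max = ln(sigma_m) / (1 - q_m) on the number of digits
    maintainable at per-digit accuracy q_m with superiority sigma_m."""
    return np.log(sigma_m) / (1.0 - q_m)

# Illustrative values: superiority 10, per-digit error rate 1 - q_m = 1e-4
print(max_information_content(sigma_m=10.0, q_m=1 - 1e-4))   # ~23000 digits
```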

Figure 11. The error threshold of replication and mutation in genotype space. Asexually reproducing populations with sufficiently accurate replication and mutation approach stationary mutant distributions which cover some region in sequence space. The condition of stationarity leads to a (genotypic) error threshold. In order to sustain a stable population, the error rate has to be below an upper limit above which the population starts to drift randomly through sequence space. In the case of selective neutrality, i.e. the case of equal replication rate constants, the superiority becomes unity, σm = 1, and then stationarity is bound to zero error rate, pmax = 0. Polynucleotide replication in nature is confined also by a lower physical limit, which is the maximum accuracy that can be achieved with the given molecular machinery. As shown in the illustration, the fraction of mutants increases with increasing error rate. More mutants, and hence more diversity in the population, imply more variability in optimization. The choice of an optimal mutation rate depends on the environment. In constant environments populations with lower mutation rates do better, and hence they will approach the lower limit. In highly variable environments those populations which approach the error threshold as closely as possible have an advantage. This is observed, for example, with viruses, which have to cope with an immune system or other defence mechanisms of the host.
If the observed error rate in the sample is equal to or less than the predefined acceptable level, no further action is required. Nevertheless, it is recommended that the opportunity be taken to correct any errors found and to investigate any commonalities between the errors, to identify any root cause that might affect the rest of the data population. [Pg.353]

For the calculation of stationary mutant distributions we restrict attention to a uniform error rate per digit (1 − q) and assume equal degradation rate coefficients D1 = D2 = ... = Dn = D. Since the addition of a constant to all diagonal elements of a matrix just shifts the spectrum of eigenvalues and has no influence on the eigenvectors, we need only consider the case D = 0 without loss of generality. Then the elements of the matrix W are determined by the replication rate coefficients (as in Section III.2) and are of the form... [Pg.199]
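
A small sketch of what such a matrix W looks like for binary sequences with uniform per-digit accuracy q, together with the dominant eigenvector giving the stationary mutant distribution; the sequence length and rate values are arbitrary illustrative choices, not taken from the source.

```python
import numpy as np
from itertools import product

def replication_mutation_matrix(replication_rates, q):
    """Matrix W for binary sequences of length nu with uniform per-digit
    accuracy q (error rate 1 - q) and degradation rates set to zero:
    W[i, j] = A_j * q**(nu - d_ij) * (1 - q)**d_ij, with d_ij the Hamming
    distance between sequences i and j."""
    A = np.asarray(replication_rates, dtype=float)
    nu = int(np.log2(len(A)))
    seqs = np.array(list(product([0, 1], repeat=nu)))
    d = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)   # Hamming distances
    return A[None, :] * q ** (nu - d) * (1.0 - q) ** d

# Illustrative example: length-4 sequences, master sequence replicates fastest
A = np.ones(16)
A[0] = 10.0                          # superiority of the master ~10
W = replication_mutation_matrix(A, q=0.95)
vals, vecs = np.linalg.eig(W)
x = np.real(vecs[:, np.argmax(np.real(vals))])
x = x / x.sum()                      # stationary mutant distribution (dominant eigenvector)
print(f"Stationary frequency of the master sequence: {x[0]:.3f}")
```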

This means that the probability of rejecting at least one of c hypotheses is less than or equal to (thus the term "inequality") the sum of the probabilities of rejecting each hypothesis. This inequality is true even if the events, in this case rejecting one of c null hypotheses, are not independent. Recall from Section 6.2 that, when events are not independent, the probability of intersecting events should be subtracted. Using Bonferroni's method, testing each pair of means with an α level of α/c will ensure that the overall type I error rate does not exceed the desired value of α. It follows that the probability of rejecting at least one of c null hypotheses can be expressed as follows ... [Pg.160]
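
A brief simulation (illustrative, not from the source) showing that testing c hypotheses each at level α/c keeps the family-wise error rate at or below α under the global null; independence of the tests is assumed here only to keep the simulation simple, whereas the Bonferroni bound itself does not require it.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
c, alpha, n_sims = 5, 0.05, 20000
crit = norm.ppf(1 - (alpha / c) / 2)            # two-sided critical value at level alpha/c

z = rng.normal(size=(n_sims, c))                # test statistics under the global null
fwer = (np.abs(z) > crit).any(axis=1).mean()    # at least one false rejection
print(f"Family-wise error rate: {fwer:.4f} (bound {alpha})")
```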

The Type I error (rejection of the reduced model in favor of the full model) that would result from the use of the theoretical critical value was assessed for each of the designs considered, and for three alternative NONMEM linearization methods: first-order (FO), first-order conditional estimation (FOCE), and first-order conditional estimation with interaction (FOCEI). Type I error rates were assessed by empirical determination of the probability of rejection of the reduced model, given that the reduced model was the correct model. Data sets were simulated with the reduced model (FO: 1000 data sets; FOCE/FOCEI: 200 data sets) and fitted using the full and reduced models. The empirical Type I error was determined as the percentage of simulated data sets for which a LRT statistic of 3.84 or greater was obtained. The 3.84 critical value for the LRT statistic corresponds to a significance level of 5% for a chi-square distribution with 1 degree of freedom (for the one extra parameter in the full model). The LRT statistic was calculated as the difference between the NONMEM objective function values of the reduced and full models. The results of these simulations were also used to determine an empirical critical value that would result in a Type I error rate equal to the nominal 5% value. [Pg.319]
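
For reference, the 3.84 critical value and the idea of an empirically calibrated critical value can be reproduced as follows; the simulated LRT values below are only a stand-in, since the real calibration would use the statistics from the NONMEM fits described above.

```python
import numpy as np
from scipy.stats import chi2

# Theoretical critical value for the LRT at the 5% level with 1 degree of freedom
print(chi2.ppf(0.95, df=1))                     # ~3.841

# Empirical calibration: with LRT statistics from data sets simulated under the
# reduced (true) model, the empirical 5% critical value is their 95th percentile.
rng = np.random.default_rng(3)
lrt_stats = rng.chisquare(df=1, size=1000)      # stand-in for the simulated LRT values
print(np.percentile(lrt_stats, 95))
```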

Now, if the reason for this rule is simply that we want a type I error rate of 1/1600, we can, in fact, replace it by a more efficient one. This is (assuming the trials are of equal precision) to require that the mean of the two z-statistics is greater than 2.28 (or, equivalently, to require that their sum is greater than 4.56). This mean will have a variance of 1/2 and hence a standard error of 1/√2, and hence the standardized value of the Normal distribution corresponding to the critical value of 2.28 for the mean is 2.28/(1/√2) = 2.28 × √2 = 3.23, which corresponds to a tail area of the Normal distribution of 1/1600. Thus, this test also has the required type I error rate. Let us call a requirement that this test be significant the pooled-trials rule. [Pg.188]
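
The arithmetic of the pooled-trials rule can be checked directly (a minimal sketch, not from the source):

```python
import numpy as np
from scipy.stats import norm

# Two-trials rule: each trial one-sided significant at 0.025
print(0.025 ** 2)                               # 1/1600 = 0.000625

# Pooled-trials rule: mean of the two z-statistics > 2.28
standardized = 2.28 * np.sqrt(2)                # standard error of the mean is 1/sqrt(2)
print(standardized)                             # ~3.22
print(norm.sf(standardized))                    # one-sided tail area, ~1/1600
```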

As an example to help understand the effect of equivalence on sample size, consider a case where we wish to show that the difference in FEV1 between two treatments is not greater than 200 ml and where the standard deviation is 350 ml, for conventional type I and type II error rates of 0.05 and 0.2. If we assume that the drugs are in fact exactly identical, the sample size needed (using a Normal approximation) is 53. If we allow for a true difference of 50 ml this rises to 69. On the other hand, if we wished to demonstrate superiority of one treatment over another for a clinically relevant difference of 200 ml with the other values as before, a sample size of 48 would suffice. Thus, in the best possible case a rise of about 10% in the sample size is needed (from 48 to 53). This difference arises because we have two one-sided tests, each of which must be significant in order to prove efficacy. To have 80% power overall, each must (in the case of exact equality) have approximately 90% power (because 0.9 × 0.9 ≈ 0.8). The relevant sum of z-values for the power calculation is thus 1.2816 + 1.6449 = 2.93, as opposed to 0.8416 + 1.9600 = 2.8 for a conventional trial. The ratio of the square of 2.93 to the square of 2.8 is about 1.1, explaining the 10% increase in sample size. [Pg.242]
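
The quoted figures follow from the usual Normal-approximation formula n = 2σ²(zα + zβ)²/δ² per group; a short sketch reproducing the 53, the 48 and the ratio of about 1.1 (the formula is the standard one, assumed here rather than stated in the excerpt):

```python
from scipy.stats import norm

sigma, delta = 350.0, 200.0      # SD and margin / clinically relevant difference (ml)

def n_per_group(z_alpha, z_beta):
    # Normal-approximation sample size per group for comparing two means
    return 2.0 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2

# Equivalence with exactly identical drugs: two one-sided 5% tests, each ~90% power
n_equiv = n_per_group(norm.ppf(0.95), norm.ppf(0.90))   # 1.6449 + 1.2816 = 2.93
# Conventional superiority trial: two-sided 5% and 80% power
n_sup = n_per_group(norm.ppf(0.975), norm.ppf(0.80))    # 1.9600 + 0.8416 = 2.80
print(n_equiv, n_sup, n_equiv / n_sup)                  # ~52.5, ~48.1, ~1.09
```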

