Big Chemical Encyclopedia


Primality test

L. M. Adleman, M.-D. A. Huang: Primality Testing and Abelian Varieties over Finite Fields. Lecture Notes in Mathematics, Vol. 1512, VII, 142 pages, 1992. [Pg.207]

Within this hierarchy, factoring is in NP (nondeterministic polynomial time): although we do not know of any classical algorithm that solves it in polynomial time, it is easy to verify a proposed solution in polynomial time. Curiously, until about a year ago it had been believed that another problem, primality testing, was not polynomial. But then a polynomial-time algorithm was discovered. Thus, the structure of the polynomial hierarchy is not yet set in stone. [Pg.19]
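The asymmetry described above can be illustrated in a few lines: checking a claimed factorization is a single multiplication, whereas finding the factors in the first place has no known polynomial-time classical algorithm. This is a minimal sketch with made-up numbers; `verify_factorization` is a hypothetical helper, not from the source.

```python
def verify_factorization(n, factors):
    """Check in polynomial time that the claimed factors are nontrivial
    and multiply to n -- the easy direction of the factoring problem."""
    if any(f in (1, n) for f in factors):
        return False          # reject the trivial factorization n = 1 * n
    prod = 1
    for f in factors:
        prod *= f
    return prod == n

# Finding the factors of n is hard; checking a claimed answer is not.
n = 1000003 * 1000033
assert verify_factorization(n, [1000003, 1000033])
assert not verify_factorization(n, [1, n])
```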

If one wants a deterministic polynomial-time algorithm that only outputs prime numbers, and where the corresponding factoring assumption follows from that made above, one has to rely on Cramér's conjecture (see above) and search for each prime from some random number upwards in steps of two, testing each number with the pure Miller primality test [Mill76], which relies on the extended Riemann hypothesis. [Pg.232]

Note that the sets All_σ in Construction 8.22 are useful, although with most of these algorithms gen, testing membership in [gen('1'^σ)] is not infeasible. For instance, consider the algorithm gen that searches for the smallest d such that p = dq + 1 is prime. Then the membership test for All_σ only needs two primality tests, whereas a membership test for [gen('1'^σ)] would have to verify that no value d'q + 1 with d' < d is prime. [Pg.238]

Prekey generation is of the same order of complexity as key generation in ordinary digital signature schemes: it is dominated by the primality tests needed for the generation of two primes, q and p. This means approximately one exponentiation per number tested for primality with the Rabin-Miller test. Hence the number of exponentiations is determined by the density of primes of the chosen size (see Section 8.1.5); however, many numbers can be excluded by trial division as usual. [Pg.303]

Prekey verification: A fixed small number of exponentiations, such as 12 (with 5 iterations of the basic Rabin-Miller primality test for each of q and p). [Pg.303]
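The Rabin-Miller test that dominates both prekey generation and verification can be sketched as follows. This is a standard textbook formulation, not code from the source; each random witness costs essentially one modular exponentiation, matching the cost estimate above, and cheap trial division filters most composites first.

```python
import random

def miller_rabin(n, iterations=5):
    """Probabilistic Rabin-Miller primality test. A composite n survives
    one iteration with probability at most 1/4, so a few iterations
    (the text uses 5) give high confidence of primality."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # trial division, as in the text
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(iterations):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)                # one modular exponentiation per witness
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a witnesses that n is composite
    return True                         # n is probably prime
```

With 5 iterations, a composite slips through with probability at most 4^(-5); implementations that need stronger guarantees simply raise the iteration count.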

Prekey generation is of the same order of complexity as key generation in ordinary digital signature schemes: as in the discrete-logarithm case, it is dominated by the primality tests needed for the generation of two primes. [Pg.309]

CoLe87 Henri Cohen, Arjen K. Lenstra: Implementation of a New Primality Test. Mathematics of Computation 48/177 (1987) 103-121. [Pg.375]

Poll74 John M. Pollard: Theorems on Factorization and Primality Testing. Proceedings of the Cambridge Philosophical Society 76 (1974) 521-528. [Pg.383]

Rabi80 Michael O. Rabin: Probabilistic Algorithm for Testing Primality. Journal of Number Theory 12 (1980) 128-138. [Pg.383]

The key idea in GCD is to make extensive use of phase I (i.e., the primal and dual subproblems) and to limit as much as possible the use of phase II (i.e., the master problem) by applying appropriate convergence tests. This is because the master problem is known to be more difficult and more CPU-time-consuming than the primal and dual subproblems of phase I. [Pg.191]

This section presents the theoretical development of the Generalized Cross Decomposition, GCD. Phase I is discussed first with the analysis of the primal and dual subproblems. Phase II is presented subsequently for the derivation of the master problem, while the convergence tests are discussed last. [Pg.191]

The convergence tests of the GCD make use of the notions of (i) upper bound improvement, (ii) lower bound improvement, and (iii) cut improvement. An upper bound improvement corresponds to a decrease in the upper bound UBD obtained by the primal subproblem P(y^k). A lower bound improvement corresponds to an increase in the lower bound LBD obtained by the dual subproblem D(λ^k). A cut improvement corresponds to generating a new cut which becomes active and hence is not dominated by the cuts generated in previous iterations. If the cut is generated in the relaxed primal master problem (RPM), it is denoted as a primal cut improvement. If the cut is generated in the relaxed Lagrange relaxation master problem, then the improvement is classified as a Lagrange relaxation cut improvement. [Pg.197]

Figure 6.9 presented the generic algorithmic steps of the generalized cross decomposition GCD algorithm, while in the previous section we discussed the primal and dual subproblems, the relaxed primal master problem, the relaxed Lagrange relaxation master problem, and the convergence tests. [Pg.199]

A natural choice of master problem is to use the relaxed primal master if the CTP test is not passed, and to use the relaxed Lagrange relaxation master if the CTD or CTDU test is not passed. Note also that it is not necessary to use both master problems. [Pg.201]
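The phase I / phase II alternation described above can be sketched as a control-flow skeleton. This is a hedged sketch only: the subproblem solvers and convergence tests (`solve_primal`, `solve_dual`, `ctp_passed`, etc.) are hypothetical stand-ins passed in as callables; in practice each is a full NLP/LP solve, and the cut bookkeeping is omitted.

```python
def gcd(solve_primal, solve_dual, primal_master, lagrange_master,
        ctp_passed, ctd_passed, y, lam, eps=1e-6, max_iter=50):
    """Skeleton of the GCD iteration: cheap phase I subproblems update
    the bounds; a master problem (phase II) is called only when a
    convergence test fails, as the text recommends."""
    ubd, lbd = float("inf"), float("-inf")
    for _ in range(max_iter):
        # Phase I: primal subproblem gives an upper bound and multipliers
        obj_p, lam = solve_primal(y)
        ubd = min(ubd, obj_p)
        # Phase I: dual subproblem gives a lower bound and a candidate y
        obj_d, y_d = solve_dual(lam)
        lbd = max(lbd, obj_d)
        if ubd - lbd <= eps:
            return ubd, lbd                 # converged
        # Phase II only on test failure: relaxed primal master if CTP
        # fails, relaxed Lagrange relaxation master if CTD fails
        if not ctp_passed(y_d):
            y = primal_master()
        elif not ctd_passed(lam):
            lam = lagrange_master()
        else:
            y = y_d                         # tests passed: stay in phase I
    return ubd, lbd
```

A trivial instantiation where both subproblems immediately agree on the bound returns after one iteration without ever invoking a master problem, which is exactly the behavior the convergence tests are designed to encourage.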

Hence, the CTP test is passed and we continue with the primal subproblem. For y = y^2 = (1,1,0) we solve the primal subproblem and obtain ... [Pg.207]

The convergence tests, however, need to be modified on the grounds that it is possible to have an infinite number of both primal and dual improvements if Y is continuous, and hence not attain termination in a finite number of steps. To circumvent this difficulty, Holmberg (1990) defined the following stronger ε-improvements ... [Pg.209]

Remark 3 Note that in proving finiteness we need to use the relaxed primal master problem. Also note that it suffices that CTP-ε fails for finiteness. However, we cannot show that the CTD-ε test will fail after a finite number of steps. [Pg.210]

Remark 4 If the primal subproblem has a feasible solution for every y ∈ Y, then the GCD algorithm will attain finite ε-convergence (i.e., UBD - LBD < ε) in a finite number of steps for any given ε > 0. Obviously, in this case the ε-feasibility convergence tests are not needed. [Pg.210]

Different proposals for the group-generation algorithm gen have the following in common: first, q is chosen as a random prime of a certain length, and then the values p = dq + 1 with factors d from a certain range are tested for primality. The proposals vary in the range of d. In some cases, e.g., if only d = 2 is used, the choice of q must also be repeated if none of the possible p's is prime. [Pg.238]
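The common structure of these proposals can be sketched as follows. This is a hedged illustration, not any specific published gen: the bit length and the range of d are arbitrary choices here, and the probabilistic Rabin-Miller test stands in for whichever primality test a concrete proposal prescribes.

```python
import random

def is_probable_prime(n, iters=10):
    """Standard probabilistic Rabin-Miller test (sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(iters):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def generate_pq(q_bits=32, d_range=range(2, 200)):
    """Pick a random prime q, then test p = d*q + 1 for d in a range.
    If no d in the range yields a prime p, repeat with a fresh q --
    the case the text mentions for narrow ranges such as d = 2 only."""
    while True:
        q = random.getrandbits(q_bits) | (1 << (q_bits - 1)) | 1
        if not is_probable_prime(q):
            continue
        for d in d_range:
            p = d * q + 1
            if is_probable_prime(p):
                return q, p, d
```

Note that for odd q only even d can yield a prime p, since dq + 1 is even whenever d is odd; a real implementation would skip odd d outright.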

Mill76 Gary L. Miller: Riemann's Hypothesis and Tests for Primality. Journal of Computer and System Sciences 13 (1976) 300-317. [Pg.381]

Boyle was sceptical because he was no longer willing to accept, blindly, the ancient conclusions that had been deduced from first principles. In particular, Boyle was dissatisfied with ancient attempts to identify the elements of the universe by mere reasoning. Instead, he defined elements in a matter-of-fact, practical way. An element, it had been considered ever since Thales' time (see page 8), was one of the primal simple substances out of which the universe was composed. Well, then, a suspected element must be tested in order to see if it were really simple. If a substance could be broken into... [Pg.41]

Equation (13.7) represents the dual form of the regression model, since in it the activity is predicted by considering similarity measures of a test compound in relation to the training set compounds M. In order to obtain the traditional primal form of the 3D-QSAR model, which involves an explicit consideration of molecular descriptors and regression coefficients, one can substitute Eqs. (13.1) and (13.3)-(13.6) into Eq. (13.7) to obtain ... [Pg.438]
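The dual/primal equivalence described above can be demonstrated concretely for the simplest case, a linear (dot-product) similarity measure. This is an illustrative sketch only: the descriptor values and dual coefficients `alpha` are made-up numbers, not data from the source, and a real 3D-QSAR model would use the specific similarity of Eqs. (13.1)-(13.6).

```python
# Training compounds as descriptor vectors, with hypothetical dual
# regression coefficients alpha (one per training compound).
train = [[1.0, 0.0, 2.0],
         [0.5, 1.5, 0.0],
         [2.0, 1.0, 1.0]]
alpha = [0.3, -0.2, 0.5]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Dual form, as in Eq. (13.7): predict from similarities to the
# training compounds, y(x) = sum_i alpha_i * k(x, x_i)
def predict_dual(x):
    return sum(a * dot(x, xi) for a, xi in zip(alpha, train))

# Primal form: fold alpha into explicit per-descriptor weights w,
# w_j = sum_i alpha_i * x_ij, then predict as y(x) = w . x
w = [sum(a * xi[j] for a, xi in zip(alpha, train)) for j in range(3)]

def predict_primal(x):
    return dot(w, x)

x_test = [1.0, 1.0, 1.0]
assert abs(predict_dual(x_test) - predict_primal(x_test)) < 1e-12
```

For a linear kernel the two forms are algebraically identical; for nonlinear similarity measures the substitution of Eqs. (13.1)-(13.6) plays the role of this folding step.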


See other pages where Primality test is mentioned: [Pg.231]    [Pg.304]    [Pg.67]    [Pg.231]    [Pg.304]    [Pg.67]    [Pg.17]    [Pg.55]    [Pg.191]    [Pg.198]    [Pg.201]    [Pg.201]    [Pg.212]    [Pg.48]    [Pg.260]    [Pg.595]    [Pg.73]    [Pg.73]    [Pg.73]    [Pg.439]    [Pg.182]   



