
PRIMALITY TESTING – its importance for cryptography


Presentation Transcript


  1. PRIMALITY TESTING – its importance for cryptography. Lynn Margaret Batten, Deakin University. Talk at RMIT, May 2003.

  2. Prime numbers have attracted much attention from mathematicians for many centuries. Questions such as • how many are there? • is there a formula for generating them? • how do you tell if a given number is prime? have fascinated people for years.

  3. However, the first actual use of prime numbers in an important area outside the theory of numbers emerged only in the mid-to-late 1900s: the establishment of a technical system for maintaining the secrecy of electronic communications.

  4. [Diagram: a conventional cryptosystem. A message M from the message source is encrypted as C = E(M) using key K1 and sent over the communication channel, where a cryptanalyst may intercept C; the receiver decrypts M = D(C) using key K2. Key source #1 produces a random key K1; key source #2 determines the decryption key K2 from K1, delivered over a separate key channel.] Conventional cryptosystem. The key channel must be secure.

  5. The Diffie-Hellman scheme, proposed in 1976, was a radical departure from what had, until then, been essentially 'private key' schemes. The idea was that everyone would own both a 'private key' and a 'public key'. The public key would be published in a directory, like a telephone book. If A wanted to send B an encrypted message, A simply looked up B's public key, applied it and sent the message. Only B knew B's private key and could use it to decrypt the message. PROBLEM? Diffie and Hellman had no concrete example of an encryption/decryption pair which could pull this off!

  6. Then along came the Rivest, Shamir, Adleman (RSA) solution in 1977: Public information: • n, an integer which is a product of two large primes (p and q, kept secret), and • e, a positive integer less than (p − 1)(q − 1) with gcd(e, (p − 1)(q − 1)) = 1. Secret information: • the two primes p and q such that n = pq, and • d such that ed ≡ 1 (mod (p − 1)(q − 1)).

  7. To encrypt the message/number m: c ≡ m^e (mod n). To decrypt c: c^d ≡ m^(ed) ≡ m (mod n).

  8. Example. Let n = 101 × 107 = 10807 and e = 7. Note 7d ≡ 1 (mod 100 × 106), i.e. 7d ≡ 1 (mod 10600), so d = 4543. To encrypt the message m = 109 we find c = 109^7 mod 10807 = 4836. To decrypt, find c^d = 4836^4543 ≡ 109 (mod 10807).
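This toy example is easy to reproduce in Python, whose built-in three-argument pow performs modular exponentiation (and, from Python 3.8, modular inverses); a minimal sketch, with variable names ours rather than the talk's:
_______________________________________________
# Toy RSA example from slide 8: n = 101 * 107, e = 7.
p, q = 101, 107
n = p * q                   # 10807, the public modulus
phi = (p - 1) * (q - 1)     # 10600, kept secret

e = 7                       # public exponent, gcd(e, phi) = 1
d = pow(e, -1, phi)         # private exponent with e*d ≡ 1 (mod phi): 4543

m = 109                     # the message
c = pow(m, e, n)            # encryption: m^e mod n gives 4836
assert pow(c, d, n) == m    # decryption: c^d mod n recovers 109
print(n, d, c)              # 10807 4543 4836
_______________________________________________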

  9. The security of this scheme depends on the difficulty of factoring n. In fact, it is easy to show that knowing d is equivalent to factoring n. No way of breaking RSA is known other than finding the secret information. Thus the RSA scheme leads to the following two problems: 1. Find a large pool of big (>100 digits) primes. (If very few of these are available, Oscar will easily be able to get his hands on the list and simply try them all in order to break the scheme.) 2. Find a quick (polynomial time) algorithm to factor integers. (There is no known deterministic, polynomial time algorithm for factoring integers.) We take a look at problem 1.

  10. The primes p and q must be of sufficient size that factorization of their product is beyond computational reach. Moreover, they should be random primes in the sense that they be chosen as a function of a random input which defines a pool of candidates of sufficient cardinality that an exhaustive attack is infeasible. In practice, the resulting primes must also be of a pre-determined bitlength, to meet system specifications.

  11. Since finding large primes is very difficult, and since the known primes are usually available in some library or on some website, one of the 'solutions' to problem 1 has been to investigate numbers that are not primes, but simply act like primes.

  12. Generally speaking, we say that a composite integer N is a pseudoprime if it satisfies some condition that a prime must always satisfy. One result for primes is the well-known: FERMAT'S LITTLE THEOREM. Let p be a prime, and gcd(a, p) = 1. Then a^(p−1) ≡ 1 (mod p). [Try a = 2 and p = 7.] The converse of Fermat's theorem is false, as we see by the following example: let N = 2701 = 37 × 73. Then 2^2700 ≡ 1 (mod 2701).
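Both the theorem's instance and the failure of its converse are easy to check by machine; a two-line Python illustration:
_______________________________________________
# Fermat's little theorem with a = 2, p = 7: 2^6 ≡ 1 (mod 7).
print(pow(2, 6, 7))         # 1

# The converse fails: N = 2701 = 37 * 73 is composite, yet
print(pow(2, 2700, 2701))   # 1, so 2701 masquerades as prime to base 2
_______________________________________________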

  13. Now consider the following: Definition. We say that the composite integer N is a base b pseudoprime (written b-psp) if b^(N−1) ≡ 1 (mod N). (*) Thus a b-psp acts like a prime with respect to Fermat's theorem, but it is not a prime. If there were only a few such numbers, this would not improve our situation, but as early as 1903 Malo showed that there exists an infinite number of composite N satisfying (*).

  14. There exists an infinite number of base b pseudoprimes because: Theorem. If p is an odd prime, p ∤ b(b^2 − 1) and N = (b^(2p) − 1)/(b^2 − 1), then N is a b-psp.
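A quick Python check of the theorem's construction, using the smallest admissible case b = 2, p = 5 (so that p does not divide b(b^2 − 1) = 6), which is our illustrative choice:
_______________________________________________
b, p = 2, 5
N = (b**(2 * p) - 1) // (b**2 - 1)   # (2^10 - 1)/3 = 341 = 11 * 31, composite
print(N, pow(b, N - 1, N))           # 341 1, so N is a 2-psp
# Letting p run over the odd primes 5, 7, 11, ... yields infinitely many
# base-2 pseudoprimes, as the theorem promises.
_______________________________________________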

  15. The existence of so many pseudoprimes indicates that the question of deciding whether a given number is prime or composite is a difficult one. This leads us back to RSA and its second problem (factoring), which we now approach from a different angle – that of primality testing.

  16. It was simply very difficult (if not impossible) to prove that a randomly selected 100-digit number was a prime back in 1978. Furthermore, the primality proving methods that were available did not lend themselves to easy implementation in hardware, a necessary condition for RSA to become widely usable. A result of this situation was the refinement and further development of what are called probabilistic primality tests.

  17. Probabilistic methods. Let S be any set. A Monte Carlo algorithm for S is an algorithm which, given an input x and a source of random numbers for making random choices, returns "yes" or "no" with the properties that: if x ∉ S then the answer is always "no"; if x ∈ S then the answer is "yes" with probability at least ½.

  18. Solovay-Strassen test. The Solovay-Strassen probabilistic primality test (1977) was the first such test popularized by the advent of public-key cryptography. There is no longer any reason to use this test, because an alternative is available, the Miller-Rabin test, which is both more efficient and always at least as correct.

  19. Miller-Rabin Test. The probabilistic primality test used most in practice today is the Miller-Rabin test (1980), also known as the strong pseudoprime test. The test is based on a more complex version of Fermat's Little Theorem: a^(p−1) ≡ 1 (mod p), or a^(p−1) − 1 ≡ 0 (mod p), for p prime and gcd(a, p) = 1.

  20. For p odd, of course p − 1 = 2r is even. Then a^(p−1) − 1 = a^(2r) − 1 = (a^r − 1)(a^r + 1). So a^(p−1) − 1 ≡ 0 (mod p) implies that the prime p divides a^r − 1 or a^r + 1, and consequently a^r ≡ 1 (mod p) or a^r ≡ −1 (mod p).

  21. This can be taken even further, by taking all powers of 2 out of p − 1, to obtain the following fact. Fact 1. Let n be an odd prime, and let n − 1 = 2^s r where r is odd. Let a be any integer such that gcd(a, n) = 1. Then either a^r ≡ 1 (mod n) or a^(2^j r) ≡ −1 (mod n) for some j, 0 ≤ j ≤ s − 1.
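A small Python sketch of Fact 1 (the prime n = 101 and base a = 3 are illustrative choices of ours): write n − 1 = 2^s r and list a^(2^j r) mod n; either the first entry is 1 or some entry equals n − 1.
_______________________________________________
n, a = 101, 3               # n an odd prime; gcd(a, n) = 1
s, r = 0, n - 1
while r % 2 == 0:           # peel the powers of 2 out of n - 1
    s += 1
    r //= 2

chain = [pow(a, 2**j * r, n) for j in range(s)]
print(s, r, chain)          # s = 2, r = 25; here chain[1] == 100 == n - 1
_______________________________________________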

  22. Definitions. Let n be an odd composite integer and let n − 1 = 2^s r where r is odd. Let a be an integer in the interval [1, n − 1] relatively prime to n. (i) If a^r ≢ 1 (mod n) and a^(2^j r) ≢ −1 (mod n) for all j, 0 ≤ j ≤ s − 1, then a is called a strong witness (to compositeness) for n. (ii) Otherwise, n is said to be a strong pseudoprime to the base a. The integer a is called a strong liar (to primality) for n.

  23. Example (strong pseudoprime). Consider the composite integer n = 91 = 7 × 13. Try a = 9. Since 91 − 1 = 90 = 2 × 45, s = 1 and r = 45. Since 9^45 ≡ 1 (mod 91), 91 is a strong pseudoprime to the base 9. The set of all strong liars for 91 is {1, 9, 10, 12, 16, 17, 22, 29, 38, 53, 62, 69, 74, 75, 79, 81, 82, 90}. Notice that the number of strong liars for 91 is less than 90/4.
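The liar set can be recomputed by brute force straight from the definition on slide 22; a short Python sketch:
_______________________________________________
from math import gcd

n, s, r = 91, 1, 45                   # 91 - 1 = 2^1 * 45
liars = [a for a in range(1, n) if gcd(a, n) == 1 and
         (pow(a, r, n) == 1 or
          any(pow(a, 2**j * r, n) == n - 1 for j in range(s)))]
print(liars)                          # the 18 values listed above
print(len(liars) < (n - 1) / 4)       # True: 18 < 22.5, consistent with Fact 2
_______________________________________________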

  24. Fact 1 can be used as a basis for a probabilistic primality test due to the following result. Fact 2. If n is an odd composite integer, then at most ¼ of all the numbers a, 1 ≤ a ≤ n − 1, are strong liars for n.

  25. Algorithm: Miller-Rabin probabilistic primality test.
MILLER-RABIN(n, t)
INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
OUTPUT: an answer "prime" or "composite".
1. Write n − 1 = 2^s r such that r is odd.
2. For i from 1 to t do the following:
   2.1 Choose a random integer a, 2 ≤ a ≤ n − 2.
   2.2 Compute y = a^r mod n.
   2.3 If y ≠ 1 and y ≠ n − 1 then do the following:
       j ← 1.
       While j ≤ s − 1 and y ≠ n − 1 do the following:
         Compute y ← y^2 mod n.
         If y = 1 then return("composite").
         j ← j + 1.
       If y ≠ n − 1 then return("composite").
3. Return("prime").
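A direct Python transcription of this pseudocode (for odd n ≥ 5, so that the random draw on step 2.1 is well defined; "prime" here means "probably prime"):
_______________________________________________
import random

def miller_rabin(n, t):
    # Step 1: write n - 1 = 2^s * r with r odd.
    s, r = 0, n - 1
    while r % 2 == 0:
        s += 1
        r //= 2
    # Step 2: t independent rounds.
    for _ in range(t):
        a = random.randint(2, n - 2)         # step 2.1
        y = pow(a, r, n)                     # step 2.2
        if y != 1 and y != n - 1:            # step 2.3
            j = 1
            while j <= s - 1 and y != n - 1:
                y = pow(y, 2, n)
                if y == 1:
                    return "composite"
                j += 1
            if y != n - 1:
                return "composite"
    return "prime"

print(miller_rabin(10807, 20))   # "composite" (10807 = 101 * 107)
print(miller_rabin(10007, 20))   # "prime" (10007 is indeed prime)
_______________________________________________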

  26. If n is actually prime, this algorithm will always declare "prime". However, if n is composite, Fact 2 can be used to deduce the following bound on the probability of the algorithm erroneously declaring "prime".

  27. FACT 3 (Miller-Rabin error-probability bound). For any odd composite integer n, the probability that MILLER-RABIN(n, t) incorrectly declares n to be "prime" is less than (1/4)^t.

  28. To perform the Miller-Rabin test on N to base b, we will need no more than log2(N) (which is the number of bits in the binary representation of N) modular multiplications, each using O((log2 N)^2) bit operations. Hence, the Miller-Rabin test to base b takes O((log2 N)^3) bit operations. We can run this for up to N − 3 different bases b, but the more values of b we run, the slower the algorithm.

  29. In 1983, Adleman, Pomerance and Rumely gave the first deterministic algorithm for primality testing that runs in less than exponential time. For n the number being tested, the time needed is (log n)^O(log log log n).

  30. In 1986, two independent algorithms were developed, by Goldwasser and Kilian and by Atkin, which, under certain assumptions, would guarantee primality (but not necessarily compositeness) in polynomial time.

  31. Then in August 2002, Agrawal, Kayal and Saxena made public their unconditional, deterministic, polynomial-time algorithm for primality testing. For n the number being tested, this algorithm runs in Õ((log n)^12) time. The proof that the algorithm works uses relatively basic mathematics, and we shall outline it here.

  32. The AKS algorithm is based on the following identity for prime numbers p: (x − a)^p ≡ x^p − a (mod p) for any a such that gcd(a, p) = 1. We expand the difference between the polynomials, (x − a)^p − (x^p − a).

  33. Thus, for 0 < i < p, the coefficient of x^i in (x − a)^p − (x^p − a) is (−1)^(p−i) C(p, i) a^(p−i). If p is prime, C(p, i) is divisible by p for all 0 < i < p. If p is not prime, let q be a prime divisor of p and let q^k be the largest power of q dividing p. Then q^k does not divide C(p, q^k), and q^k is coprime to a^(p−q^k). In this case the coefficient of x^(q^k) is not zero modulo p.
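This characterization can be verified coefficient by coefficient in Python; the sign (−1)^(p−i) does not affect divisibility by p, so it is dropped below (the default a = 1 and the function name are ours):
_______________________________________________
from math import comb, gcd

def passes_identity(n, a=1):
    # (x - a)^n ≡ x^n - a (mod n) iff every interior coefficient
    # C(n, i) * a^(n-i), 0 < i < n, and the constant term (-a)^n + a
    # vanish modulo n.
    assert gcd(a, n) == 1
    if any(comb(n, i) * pow(a, n - i, n) % n for i in range(1, n)):
        return False
    return ((-a)**n + a) % n == 0

print([m for m in range(2, 30) if passes_identity(m)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29] -- exactly the primes
_______________________________________________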

  34. So, given n to test, one could choose a value for a and test as above. We would need to evaluate about n coefficients in the worst case, however, which is too slow. The trick used to reduce the run time is a standard one in algebra: we 'mod out' by a polynomial x^r − 1 to obtain (x − a)^n ≡ x^n − a (mod x^r − 1, n), (*) still working modulo n. How is r chosen? Will this work?

  35. In fact, all primes n satisfy (*) for any choice of a and of r. Unfortunately, some composites n may also satisfy (*) for some choices of the pair (a, r). The congruence (*) takes time polynomial in log n to check if Fast Fourier Multiplication (Knuth, 1998) is used. The authors show that a suitable choice of r is: a prime of order O((log n)^6) for which r − 1 contains a prime factor of a certain size. They then verify (*) for a small number of a's.
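A naive Python sketch of checking (*), with schoolbook polynomial arithmetic rather than FFT, and with toy values of r instead of the carefully chosen one (note that x^n reduces to x^(n mod r) modulo x^r − 1):
_______________________________________________
def polymul(f, g, r, n):
    # Multiply coefficient lists of length r modulo (x^r - 1, n).
    h = [0] * r
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                h[(i + j) % r] = (h[(i + j) % r] + fi * gj) % n
    return h

def satisfies_star(n, a, r):
    # Left side: (x - a)^n mod (x^r - 1, n), by square and multiply.
    lhs = [1] + [0] * (r - 1)
    base = [(-a) % n, 1] + [0] * (r - 2)
    e = n
    while e:
        if e & 1:
            lhs = polymul(lhs, base, r, n)
        base = polymul(base, base, r, n)
        e >>= 1
    # Right side: x^(n mod r) - a.
    rhs = [0] * r
    rhs[n % r] = (rhs[n % r] + 1) % n
    rhs[0] = (rhs[0] - a) % n
    return lhs == rhs

print(satisfies_star(13, 2, 5))   # True: primes satisfy (*) for every (a, r)
print(satisfies_star(6, 1, 5))    # False: this composite is caught here
_______________________________________________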

  36. The algorithm
_______________________________________________
Input: Integer n > 1.
1. If (n is of the form a^b, b > 1) output COMPOSITE;
2. r ← 2;
3. While (r < n) {
4.   If (gcd(n, r) ≠ 1) output COMPOSITE;
5.   if (r is prime)
6.     let q be the largest prime factor of r − 1;
7.     if (q ≥ 4·√r·log n) and (n^((r−1)/q) ≢ 1 (mod r))
8.       break;
9.   r ← r + 1;
10. }
11. For a = 1 to 2·√r·log n
12.   if ((x − a)^n ≢ x^n − a (mod x^r − 1, n)) output COMPOSITE;
13. Output PRIME;
_______________________________________________________________
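The inexpensive parts of the algorithm are easy to make concrete. Below is a Python sketch of line 1 and of the first loop's search for r; the congruence on line 12 is the sketch following slide 35. All function names are ours, and the search is capped so the demonstration terminates:
_______________________________________________
import math

def is_perfect_power(n):
    # Line 1: is n = a^b for integers a > 1, b > 1?  (fine for moderate n)
    for b in range(2, n.bit_length() + 1):
        a = round(n ** (1.0 / b))
        if any(c > 1 and c**b == n for c in (a - 1, a, a + 1)):
            return True
    return False

def largest_prime_factor(m):
    # Trial division; m stays small (at most the current r) in this loop.
    p, d = 1, 2
    while d * d <= m:
        while m % d == 0:
            p, m = d, m // d
        d += 1
    return m if m > 1 else p

def find_r(n, cap=10**6):
    # Lines 3-10: search for the r on which the first loop breaks.
    log2n = math.log2(n)
    r = 2
    while r < min(n, cap):
        if math.gcd(n, r) != 1:
            return ("composite", r)                  # line 4
        if largest_prime_factor(r) == r:             # line 5: r is prime
            q = largest_prime_factor(r - 1)          # line 6
            if q >= 4 * math.sqrt(r) * log2n and pow(n, (r - 1) // q, r) != 1:
                return ("break", r)                  # lines 7-8
        r += 1
    return ("no r below cap", None)

print(is_perfect_power(10807), is_perfect_power(3**7))  # False True
print(find_r(10007))   # reports the first r meeting line 7's condition
_______________________________________________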

  37. The first loop in the algorithm tries to find a prime r such that r − 1 has a large prime factor q. The authors show that r, as described in line 7 of the algorithm, must exist, and they are even able to establish bounds on it. They then use these bounds to establish that if n is prime, the algorithm returns PRIME.

  38. In order to show that if n is composite the algorithm returns COMPOSITE, a set of polynomials is constructed: products of powers of binomials x − a of the type appearing on line 12 of the algorithm, taken modulo (x^r − 1, n). There are many such polynomials. Thus, if the algorithm falsely declares PRIME, every one of the incongruences in line 12 must be false. It follows that the constructed set is forced to be both large and small at once, and the authors show that this leads to a contradiction.

  39. Time Complexity
_______________________________________________
Input: Integer n > 1.
1. If (n is of the form a^b, b > 1) output COMPOSITE;   [the perfect-power test costs roughly Õ((log n)^3) bit operations]

  40. [O((log n)^6) iterations]
3. While (r < n) {
4.   If (gcd(n, r) ≠ 1) output COMPOSITE;
5.   if (r is prime)
6.     let q be the largest prime factor of r − 1;
7.     If (q ≥ 4·√r·log n) and (n^((r−1)/q) ≢ 1 (mod r))
8.       break;
9.   r ← r + 1;
10. }
Total: O((log n)^6) iterations of arithmetic on small numbers, or roughly Õ((log n)^9) bit operations.

  41.
11. For a = 1 to 2·√r·log n
12.   if ((x − a)^n ≢ x^n − a (mod x^r − 1, n)) output COMPOSITE;
13. Output PRIME;
Total: about (log n)^4 values of a, each congruence costing Õ((log n)^8) bit operations with fast multiplication, giving Õ((log n)^12) overall, which dominates the running time.
_______________________________________________________________

  42. Implications for future work: there is a good chance that people are already looking at implementing the new idea of using reduction modulo a polynomial to find a polynomial-time algorithm for factoring.

  43. REFERENCES
M. Agrawal, N. Kayal and N. Saxena, 'PRIMES is in P', preprint, August 2002.
R. Crandall and C. Pomerance, 'Prime Numbers: A Computational Perspective', Springer, 2001.
D. Knuth, 'The Art of Computer Programming', Vol. 2, Addison-Wesley, 1998.
H. Williams, 'Édouard Lucas and Primality Testing', CMS Monographs, Wiley, 1998.
