

  1. Advanced Algorithms Course Final Project Slides by Irina Volinsky Evgeny Shulgin Lev Solar Adapted from Dr. Ely Porat’s course lecture notes.

  2. Oblivious Transfer

  3. Oblivious Transfer (OT) • OT is a cornerstone in the Foundations of Cryptography • OT has become the basis for realizing a broad class of interactive protocols, such as bit commitment and zero-knowledge proofs.

  4. Oblivious Transfer (OT) cont. • An OT protocol is a two-party protocol in which Alice (A) transmits a bit to Bob (B); Bob receives it with probability ½, and Alice does not learn whether Bob received the bit. • Equivalently, Alice sends X over some channel X → Y to Bob, where Bob may choose the channel from a previously agreed-on set without revealing his choice to Alice, and/or the channel may add noise to the transmission

  5. Oblivious Transfer (OT) Formal Definition • One-out-of-two String OT • One party A owns two secret k-bit strings w0 and w1 • Another party B wants to learn wc for a secret bit c ∈ {0,1} of his choice • B doesn’t learn any information about w1−c and A cannot obtain any information about c

  6. OT Variations • A natural restriction of the previous definition is One-out-of-two Bit Oblivious Transfer: the case k = 1, in which w0 and w1 are single-bit secrets, generally called b0 and b1 • A natural extension of One-out-of-two String OT is called All-or-Nothing Disclosure of Secrets (ANDOS): one-out-of-t string OT

  7. ANDOS OT • A owns t secret k-bit strings w0, w1, …, wt−1 and B wants to learn wc for a secret integer 0 ≤ c < t of his choice. • Transfer between the two parties is done in an all-or-nothing fashion, which means it must be impossible for B to obtain information on more than one wi, 0 ≤ i < t, and for A to obtain information about which secret B learned. • ANDOS has applications to zero-knowledge proofs, exchange of secrets, identification, etc.

  8. Example of OT This example uses a one-way function ƒ that Alice can invert. Alice has a list of secrets s1, …, sk. • Alice tells Bob about ƒ. • Bob wants to know si. He chooses random numbers x1, …, xk in the domain of ƒ and sends Alice y1, …, yk, where yj = xj if j ≠ i and yj = ƒ(xj) if j = i • Alice computes zj = ƒ−1(yj) for j = 1, …, k and sends Bob zj ⊕ sj for each j • Bob knows zi = ƒ−1(ƒ(xi)) = xi and so can recover si, since (zi ⊕ si) ⊕ xi = si
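The protocol above can be sketched in Python. Since Alice must compute ƒ−1, the sketch stands in a toy RSA trapdoor permutation for the one-way function; the primes, key values, and function names below are illustrative assumptions (and far too small to be secure), not part of the original slides.

```python
import random

# Toy RSA trapdoor permutation standing in for the one-way function f:
# Alice holds the trapdoor exponent d, so only she can invert.
# Illustrative, insecure parameters.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

def f(x):       # public direction: anyone can compute f(x)
    return pow(x, e, n)

def f_inv(y):   # trapdoor direction: only Alice can compute f^{-1}(y)
    return pow(y, d, n)

def ot_example(secrets, i):
    """Bob learns secrets[i]; Alice cannot tell which index he chose."""
    k = len(secrets)
    # Bob: pick random x_j in the domain of f, send y_i = f(x_i)
    # and y_j = x_j for j != i.
    xs = [random.randrange(1, n) for _ in range(k)]
    ys = [f(xs[j]) if j == i else xs[j] for j in range(k)]
    # Alice: compute z_j = f^{-1}(y_j) and send z_j XOR s_j.
    masked = [f_inv(ys[j]) ^ secrets[j] for j in range(k)]
    # Bob: z_i = f^{-1}(f(x_i)) = x_i, so s_i = (z_i XOR s_i) XOR x_i.
    return masked[i] ^ xs[i]
```

For j ≠ i, Alice's zj = ƒ−1(xj) is a random-looking value Bob cannot compute, so the other secrets stay masked; Alice sees only uniformly distributed yj and learns nothing about i.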

  9. Example of OT (cont.) • Notice that Bob can cheat by sending additional yj of the form ƒ(xj) rather than xj, as he is supposed to do, thereby learning more than one secret. This is called active cheating. Passive cheating involves analyzing protocol-compliant data outside the protocol. Cheating is what makes protocol analysis difficult from a mathematical perspective.

  10. Correctness • Formally speaking, OT is a two-party protocol that satisfies the constraint of correctness • Let [P0, P1](a)(b) be a random variable that describes the output obtained by A and B when they execute together the programs P0 and P1 on respective inputs a and b. • Similarly, let [P0, P1]*(a)(b) be the random variable that describes the total¹ information acquired during the execution of protocol [P0, P1] on inputs a and b. ¹Total information includes not only messages received and issued by the parties but also the result of any local random sampling they may have performed

  11. Correctness (cont.) • Let [P0, P1]p(a)(b) and [P0, P1]*p(a)(b) be the marginal random variables obtained by restricting the previous definitions to only one party P (the latter is often called the view of P). • Definition of correctness (ε stands for the empty string): • Protocol [P0, P1] is correct for OT if • 1. [P0, P1](w0w1)(c) = (ε, wc), i.e., an honest execution gives B exactly wc and A nothing • 2. For any program Ã0 there exists a probabilistic program S s.t. a dishonest A running Ã0 induces on B’s output a distribution she could also induce by running the honest P0 on the modified input S(w0w1), for all inputs w and choices c

  12. Correctness (cont.) • Intuitive description: • Condition 1 means that if the protocol is executed as described, it will accomplish the task it was designed for: B receives word wc and A receives nothing. • Condition 2 means that in a situation in which B does not abort, A cannot induce a distribution on B’s output using a dishonest program that she could not induce simply by changing the input words and then being honest (which she can always do without being detected). This condition is called awareness. It is concerned with the future use of the outputs of a protocol. • No correctness condition on A’s output is necessary since A receives no output.

  13. Approximation algorithms

  14. Introduction • Why do we need approximation algorithms? • Many problems of practical importance are NP-complete, and we need to solve them. • If a problem is NP-complete, we are unlikely to find a polynomial-time algorithm for solving it exactly, but it may still be possible to find a near-optimal solution in polynomial time (either in the worst case or on average). In practice, near-optimality is often good enough. • An algorithm that returns near-optimal solutions is called an approximation algorithm.

  15. Definition • An algorithm is a ρ-approximation for an optimization problem P if: • The algorithm runs in polynomial time • The algorithm always produces a solution that is within a factor of ρ (the ratio bound) of the optimal solution

  16. TSP problem statement • Input: Complete undirected graph G = (V, E) that has a nonnegative integer cost c(u,v) associated with each edge (u,v) ∈ E • Goal: To find a Hamiltonian cycle of G with minimum cost

  17. TSP (with triangle inequality) • The cost function c satisfies the triangle inequality if for all vertices u, v, w ∈ V, • c(u,w) ≤ c(u,v) + c(v,w) • Since TSP is NP-complete, an approximation algorithm lets us get a solution close to the optimum in polynomial time • TSP with the triangle inequality has an approximation algorithm with a ratio bound of 2

  18. Approximation algorithm for TSP (with triangle inequality) • Approx-TSP-Tour(G,c) • 1. Select a vertex r ∈ V[G] to be a “root” vertex • 2. Grow a minimum spanning tree T for G from root r using MST-Prim(G,c,r) • 3. Let L be the list of vertices visited in a preorder tree walk of T • 4. Return the Hamiltonian cycle H that visits the vertices in order L
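The four steps above can be sketched in Python, assuming the vertices are 2-D points so that Euclidean distance supplies the triangle inequality; the O(n²) Prim implementation and all names here are illustrative choices, not from the slides.

```python
import math

def approx_tsp_tour(points):
    """2-approximation for metric TSP: grow an MST with Prim from
    root 0, then visit vertices in preorder and close the cycle."""
    n = len(points)
    dist = lambda u, v: math.dist(points[u], points[v])
    # Steps 1-2: minimum spanning tree rooted at vertex 0 (Prim).
    in_tree = [False] * n
    parent = [0] * n
    key = [math.inf] * n
    key[0] = 0.0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist(u, v) < key[v]:
                key[v], parent[v] = dist(u, v), u
    # Step 3: preorder walk of T gives the visit order L.
    order, stack = [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(reversed(children[u]))
    # Step 4: the Hamiltonian cycle H visits vertices in order L.
    return order + [0]
```

On the four corners of a unit square, for instance, the returned tour visits each corner once; the proof on the next slides shows its cost is always at most twice the optimum.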

  19. Approx-TSP-Tour Algorithm • Approx-TSP-Tour is an approximation algorithm with a ratio bound of 2 for the traveling-salesman problem with triangle inequality

  20. Approx-TSP-Tour Algorithm • Let K denote an optimal tour for the given set of vertices. We have to show that • c(H) ≤ 2c(K), where H is the tour returned by Approx-TSP-Tour. • Since we obtain a spanning tree by deleting any edge from a tour, if T is a minimum spanning tree for the given set of vertices, • then c(T) ≤ c(K). (1) • A full walk of T lists the vertices when they are first visited and also whenever they are returned to after a visit to a subtree. Let us call this walk W. Since the full walk traverses every edge of T exactly twice, we have • c(W) = 2c(T). (2) • Equations (1) and (2) imply that c(W) ≤ 2c(K). (3)

  21. Approx-TSP-Tour Algorithm (cont.) • W is generally not a tour, since it visits some vertices more than once. By repeatedly applying the triangle inequality, we can remove from W all but the first visit to each vertex without increasing the cost. This ordering is the same as that obtained by a preorder walk of the tree T. • Let H be the cycle corresponding to this preorder walk. It is a Hamiltonian cycle, since every vertex is visited exactly once, and in fact it is the cycle computed by Approx-TSP-Tour. Since H is obtained by deleting vertices from the full walk W, • we have c(H) ≤ c(W). (4) • Combining inequalities (3) and (4) gives c(H) ≤ 2c(K), which completes the proof.

  22. Bin Packing problem statement • Input: Set of n items. Item i has size si, where each si is a rational number, 0 ≤ si ≤ 1 • Goal: Minimize the number of bins of size 1 such that all the items can be packed into them

  23. Bin Packing • Since Bin Packing is NP-hard, an approximation algorithm lets us get a solution close to the optimum in polynomial time • Bin Packing has an approximation algorithm with a ratio bound of 2

  24. First Fit approximation algorithm for Bin Packing • Consider a fixed ordering on an unlimited supply of (initially empty) bins. • First-Fit • 1. For i = 1 to n • 2. Let j be the first bin such that item i fits in bin j • 3. Put i in bin j • 4. End
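A minimal Python sketch of the loop above, with bin capacity 1; rather than a fixed infinite supply of bins, it opens a new bin when no existing bin fits, which is equivalent. Function and variable names are illustrative.

```python
def first_fit(sizes):
    """First-Fit bin packing: place each item in the first bin that
    has room for it; open a new bin when none does.
    Uses at most 2*OPT + 1 bins."""
    bins = []        # bins[j] = remaining capacity of bin j
    assignment = []  # assignment[i] = bin index of item i
    for s in sizes:
        for j, free in enumerate(bins):
            if s <= free:           # first bin where item i fits
                bins[j] -= s
                assignment.append(j)
                break
        else:                       # no existing bin fits: open a new one
            bins.append(1.0 - s)
            assignment.append(len(bins) - 1)
    return len(bins), assignment
```

For example, first_fit([0.5, 0.7, 0.5, 0.3]) packs the items into two bins: {0.5, 0.5} and {0.7, 0.3}.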

  25. First-Fit Algorithm • First-Fit is an approximation algorithm with a ratio bound of 2 for the Bin Packing problem • We have to show that First-Fit(I) ≤ 2·OPT(I) + 1 for all instances I. • Let SIZE(I) denote the sum of all si. • Then: SIZE(I) ≤ OPT(I).

  26. First-Fit Algorithm (cont.) • At most one bin can be less than half full in the output of First-Fit(I), because if there were two such bins, then the last item added to the latter bin (of size less than ½) would have fit in the first bin, and First-Fit would have put it there. Thus, • ½(First-Fit(I) − 1) ≤ SIZE(I) • which implies • First-Fit(I) ≤ 2·SIZE(I) + 1 • i.e., • First-Fit(I) ≤ 2·OPT(I) + 1 • The last inequality completes the proof.

  27. Remarks • Unless P = NP, there is no polynomial-time approximation algorithm with any constant ratio bound ρ ≥ 1 for the general traveling-salesman problem (without the triangle inequality) • Likewise, there is no polynomial-time approximation algorithm with any constant ratio bound for the clique problem

  28. Definition of PTAS • Definition: A polynomial-time approximation scheme (PTAS) for a minimization problem is a family of algorithms {Aε : ε > 0} such that for each ε > 0, Aε is a (1 + ε)-approximation algorithm which runs in time polynomial in the input size for fixed ε. For a maximization problem, we require that Aε is a (1 − ε)-approximation algorithm. • Some problems which have a PTAS are knapsack and some scheduling problems.

  29. Dynamic programming: Knapsack • Here we consider the “knapsack problem”, and show that the technique of dynamic programming is useful in designing approximation algorithms. • Knapsack: • Input: Set of items {1, …, n}. Item i has a value vi and size si. Total “capacity” is B. vi, si, B ∈ Z+ • Goal: Find a subset of items S that maximizes the value Σi∈S vi subject to the constraint Σi∈S si ≤ B.

  30. Dynamic programming: Knapsack (cont.) • We assume that si ≤ B for each item i, since an item with si > B can never be included in any feasible solution. • We now show that dynamic programming can be used to solve the knapsack problem exactly.

  31. Definition • Definition: Let A(i, v) be the size of the “smallest” subset of {1, …, i} with value exactly v (A(i, v) = ∞ if no such subset exists). • Now consider the following dynamic programming algorithm. Note that if V = maxi vi then nV is an upper bound on the value of any solution.

  32. DynProg Algorithm • Fill in the table A(i, v) for i = 1, …, n and v = 0, …, nV using the recurrence • A(i + 1, v) = min{ A(i, v), si+1 + A(i, v − vi+1) } • and output the largest v such that A(n, v) ≤ B.
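The DP can be sketched in Python. The table is indexed by value, as in the definition of A(i, v); keeping one realizing item set per entry (so the solution can be recovered) is an implementation choice of this sketch, not something the slides specify.

```python
import math

def knapsack_exact(values, sizes, B):
    """Exact knapsack DP over values.  A[v] = (smallest total size
    achieving value exactly v, one item set realizing it).
    Time O(n^2 * V) where V = max value."""
    n, V = len(values), max(values)
    A = [(math.inf, ())] * (n * V + 1)
    A[0] = (0, ())                      # value 0 needs no items
    for i in range(n):
        # Sweep values downward so each item is used at most once.
        for v in range(n * V, values[i] - 1, -1):
            cand = A[v - values[i]][0] + sizes[i]
            if cand < A[v][0]:
                A[v] = (cand, A[v - values[i]][1] + (i,))
    # Largest value whose minimal size fits in capacity B.
    best = max(v for v in range(n * V + 1) if A[v][0] <= B)
    return best, A[best][1]
```

For instance, knapsack_exact([60, 100, 120], [10, 20, 30], 50) returns value 220 by taking items 1 and 2.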

  33. Have we proven that P = NP? • It is known that the knapsack problem is NP-hard, but the running time of the algorithm, O(n²V), seems to be polynomial. Have we proven that P = NP? No: the input is usually represented in binary, so it takes only O(log V) bits to write down V. Since the running time is polynomial in V, it is exponential in the size of the input. We could instead write the input in unary (i.e., V bits to encode V), in which case the running time would be polynomial in the size of the input.

  34. Definitions • Definition: An algorithm for a problem whose running time is polynomial in the size of the input when it is encoded in unary is called pseudopolynomial. • Definition: A polynomial-time approximation scheme (PTAS) is a family of algorithms {Aε : ε > 0} for a problem such that for each ε > 0, Aε is a (1 + ε)-approximation algorithm (for minimization problems) or a (1 − ε)-approximation algorithm (for maximization problems). If the running time is also polynomial in 1/ε, then {Aε} is a fully polynomial-time approximation scheme (FPAS, FPTAS).

  35. DynProg2 Algorithm • Set μ = εV/n and v′i = ⌊vi/μ⌋ for each item i. • Run DynProg on the scaled instance (v′, s, B) and return the set of items it selects.
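A self-contained Python sketch of DynProg2: scale the values by μ = εV/n as above, solve the scaled instance with the exact value-indexed DP (repeated inline here so the sketch runs on its own), and report the chosen set's true value. All names are illustrative.

```python
import math

def knapsack_fptas(values, sizes, B, eps):
    """DynProg2: (1 - eps)-approximation for knapsack in time
    polynomial in n and 1/eps."""
    n, V = len(values), max(values)
    mu = eps * V / n
    scaled = [int(v // mu) for v in values]   # v'_i = floor(v_i / mu)
    # Exact DP on the scaled instance:
    # A[v] = (min size achieving scaled value exactly v, one item set).
    top = n * max(scaled)
    A = [(math.inf, ())] * (top + 1)
    A[0] = (0, ())
    for i in range(n):
        for v in range(top, scaled[i] - 1, -1):
            cand = A[v - scaled[i]][0] + sizes[i]
            if cand < A[v][0]:
                A[v] = (cand, A[v - scaled[i]][1] + (i,))
    best = max(v for v in range(top + 1) if A[v][0] <= B)
    items = A[best][1]
    # Report the true (unscaled) value of the chosen set.
    return sum(values[i] for i in items), items
```

On [60, 100, 120] with sizes [10, 20, 30], B = 50 and ε = 0.1, the scaled DP still picks items 1 and 2, so the returned value meets the (1 − ε)·OPT guarantee proven on the next slides.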

  36. Theorem: • Theorem: DynProg2 is an FPAS for knapsack. • Proof: • Let S be the set of items found by DynProg2 and let O be the optimal set, of value OPT. We know OPT ≥ V, since one possible knapsack is to simply take the most valuable item. • We also know, by the definition of v′i, that μv′i ≤ vi < μ(v′i + 1),

  37. Proof (cont.) • which implies • vi − μ < μv′i ≤ vi • Then, since S is optimal for the scaled values, • Σi∈S vi ≥ μ Σi∈S v′i ≥ μ Σi∈O v′i > Σi∈O (vi − μ) ≥ OPT − nμ = OPT − εV ≥ (1 − ε)·OPT

  38. Proof (cont.) • The scaled values are at most V/μ = n/ε, so the running time of the DP is O(n² · n/ε) = O(n³/ε), which is polynomial in n and 1/ε, so DynProg2 is an FPAS.

  39. Bibliography • Dr. Ely Porat’s course lecture notes • Introduction to Algorithms, T. H. Cormen, C. E. Leiserson, R. L. Rivest • Lecture Notes on Approximation Algorithms, Fall 1998, D. P. Williamson • Oblivious Transfer and Intersecting Codes, Gilles Brassard, Claude Crépeau, Miklós Sántha
