
Recoverable Encryption through Noised Secret over Large Cloud


Presentation Transcript


  1. Recoverable Encryption through Noised Secret over Large Cloud Welcome message Sushil Jajodia1, W. Litwin2 & Th. Schwarz3 1George Mason University, Fairfax, VA {jajodia@gmu.edu} 2Presenter, Université Paris Dauphine, Lamsade {witold.litwin@dauphine.fr} 3Thomas Schwarz, UCU, Montevideo {tschwarz@ucu.edu.uy}

  2. What ? • New schemes for encryption key backup at Escrow’s site • Backup of high-quality encryption keys • E.g. AES (256b) • Brute-force recovery intentionally possible • In the absence of key creator (owner) • But only over a large cloud • 10K+ nodes • Thus not at Escrow’s own site

  3. What ? • Legal recovery can be fast • E.g. max 10 min on a 10K-node cloud • Once all nodes are available • Unwelcome recovery is unlikely • E.g. 70 days at escrow’s processor • Illegal use of 10K nodes is implausible • Cloud providers do everything they can against it • Easily traceable if it happens

  4. Why • High-quality key loss danger is the Achilles’ heel of modern crypto • Makes many folks refrain from any encryption • Others shed many tears if the unthinkable happens • The schemes should benefit numerous applications

  5. How • Key owner chooses inhibitive timing of 1-node (brute-force) recovery • Presumably unwelcome at escrow’s own site • E.g. 70 days • Consequently, fixes a large integer M called backup decryption complexity • Creates the backup as a 2-share noised secret • One actual share of a noised secret is hidden among a large number of fake noise shares • Sends the backup to Escrow

  6. How • Key requestor asks Escrow to recover data in acceptable max recovery time R • Once all requested nodes are available • E.g. R = 10 min • Escrow sends R and the noised share (within the noise) to the cloud • The cloud cannot disclose the key • The RENS scheme at the cloud partitions the recovery calculation over the cloud • To fit the timing for sure • Say partitions it over 10K nodes

  7. How • The cloud reports back to Escrow the actual share(s) • Escrow recovers the key from both shares • Classical XORing • Sends the recovered key to Requestor • Not forgetting the bill • E.g. 150$ at Amazon for 10K-node wide EMR calculus (mid-2012)

  8. What Else ? • Client Side Encryption • Server Side Recovery • Static Scheme • Scalable Scheme • Related Work • Conclusion • Details in Res. Rep.: http://www.lamsade.dauphine.fr/~litwin/Recoverable%20Encryption_10.pdf

  9. Client Side Encryption • Client X backs up key S • X estimates 1-node inhibitive time D • Say 70 days • D depends on trust to Escrow • D also determines minimal cloud size N for future recovery in any acceptable time R • Fixed by recovery requestor • E.g. 10 min • X expects N > D / R
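As a quick check of these figures (a sketch only, using the example numbers from the slides), N follows directly from D / R:

```python
# Minimal cloud size from the slides' example figures: D = 70 days, R = 10 min.
D = 70 * 24 * 60   # inhibitive 1-node recovery time, in minutes
R = 10             # acceptable recovery time, in minutes

N = D / R
print(N)           # 10080.0 -> roughly the 10K nodes quoted throughout the talk
```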

  10. Client Side Key Encryption • X creates a shared secret for S • Basically a 2-share secret with share s0 random and • s1 = s0 XOR S • Common knowledge • S = s0 XOR s1 • X transforms the secret into a noised one • X makes s0 a noised share in noise space I = 0,1…M-1 • For some M that X chooses as follows • M is Encryption Complexity
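A minimal sketch of the 2-share XOR secret in Python (function and variable names are illustrative, not from the paper):

```python
import os

def make_shares(S: bytes):
    """Split key S into two XOR shares: s0 random, s1 = s0 XOR S."""
    s0 = os.urandom(len(S))
    s1 = bytes(a ^ b for a, b in zip(s0, S))
    return s0, s1

def recover(s0: bytes, s1: bytes) -> bytes:
    """Common knowledge: S = s0 XOR s1."""
    return bytes(a ^ b for a, b in zip(s0, s1))

S = os.urandom(32)          # e.g. a 256-bit AES key
s0, s1 = make_shares(S)
assert recover(s0, s1) == S
```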

  11. Shared Secret / Noised (Shared) Secret • [Figure: in the shared secret, S = s0 XOR s1; in the noised secret, the noised share hides s0 among the noise shares of I, identified only by the hint H(s0)] • H is a one-way hash, SHA-256 by default

  12. Client Side Encryption: Choice of Encryption Complexity • X determines throughput T • # of match attempts H(s) ?= h = H(s0) per time unit • 1 sec by default • X sets M to M = Int (D T) • M = 2^40 – 2^50 in practice
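For illustration only, assuming a single node performs about 10^6 match attempts per second (an assumed throughput, not a figure from the slides), M = Int (D T) indeed lands in that range:

```python
import math

T = 10**6              # assumed match attempts per second on one node
D = 70 * 24 * 3600     # inhibitive 1-node recovery time, in seconds (70 days)

M = int(D * T)         # encryption (backup decryption) complexity
print(M, math.log2(M)) # about 6.0e12 attempts, i.e. roughly 2**42.5
```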

  13. Client Side Encryption • X randomly chooses m ∈ I = [0,1…M[ • Calculates base noise share f = s0 – m • Creates noised share s0n = (f, M, h) • Sends backup S’ = (s0n, s1) to Escrow
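A sketch of this noising step under the above definitions (SHA-256 as the default hint, per the slides; the share is treated as an integer, and hashing its decimal string is a simplification for the sketch):

```python
import hashlib
import secrets

def H(x: int) -> str:
    """One-way hash of a share; SHA-256 by default (decimal-string encoding keeps the sketch simple)."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def make_noised_share(s0: int, M: int):
    """Hide s0 in noise space I = [0, M[: choose m, keep f = s0 - m and the hint h = H(s0)."""
    m = secrets.randbelow(M)   # m itself is discarded; only f, M and h are kept
    f = s0 - m
    h = H(s0)
    return (f, M, h)           # the noised share s0n; the backup S' = (s0n, s1) goes to Escrow

s0 = secrets.randbits(256)               # the random share from the previous slide, as an integer
s0n = make_noised_share(s0, M=2**20)     # tiny M, for the example only
```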

  14. Escrow Side Recovery • Escrow E receives a legitimate request of S recovery in time R at most • E chooses either the static or the scalable scheme • E sends data S” = (s0n, R) to some cloud node with request for processing accordingly • Keeps s1 out of the cloud

  15. Static Scheme • Node load Ln : # of noises among M assigned to node n for match attempts • Throughput Tn : # of match attempts node n can process / sec • Bucket (node) capacity Bn : # of match attempts node n can process / time R • Bn = R Tn • Load factor λn = Ln / Bn
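With these definitions, and reusing the assumed throughput of 10^6 attempts per second from the earlier example, the load factor works out as follows (a sketch only):

```python
R = 10 * 60      # requested recovery time, in seconds (10 min)
Tn = 10**6       # assumed throughput of node n: match attempts per second
M = 6 * 10**12   # backup decryption complexity from the earlier example
N = 10_000       # nodes in the cloud

Bn = R * Tn      # bucket capacity: attempts node n can process within R
Ln = M // N      # node load: noises among M assigned to node n
print(Ln / Bn)   # load factor; node n respects R iff it is <= 1 (here: 1.0)
```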

  16. Static Scheme • Notice our data storage oriented vocabulary • Observe that node n respects R iff λn ≤ 1 • Observe that the cloud respects R iff for every n we have λn ≤ 1 • This is true for both the static and the scalable scheme presented later on

  17. Static Scheme • Init Phase • Node C that got S” from E becomes coordinator • Calculates λ(M) = L(M) / B(C) • Usually λ(M) >> 1 • Defines N as λ(M) • Implicitly considers the cloud as homogeneous

  18. Static Scheme • Intended for a homogeneous cloud • All nodes provide the same throughput

  19. StaticScheme • Map Phase • Node C asks for allocation of N-1 nodes • Associates logical address n = 1, 2…N-1 with each node & 0 for itself • Sends out for each node n data (n, a0, P) • a0 is its own physical address, e.g., IP • P specifies Reduce phase

  20. StaticScheme • Reduce Phase • P requests node n to attempt matches for every noise share s = (f + m) such that n = m mod N • In practice, e.g., while m < M: • Node 0 loops over m = 0, N, 2N… • Node 1 loops over m = 1, N+1, 2N+1… • ….. • Node N – 1 loops over m = (you guess)
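A sketch of this per-node Reduce loop (H, f, h as on the client side; names are illustrative):

```python
import hashlib

def H(x: int) -> str:
    """Same one-way hash as on the client side (SHA-256 of the decimal string, for this sketch)."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def reduce_phase(n: int, N: int, f: int, M: int, h: str):
    """Node n attempts matches for every noise share f + m such that n = m mod N."""
    for m in range(n, M, N):   # node 0: 0, N, 2N...; node 1: 1, N+1, 2N+1...; etc.
        if H(f + m) == h:
            return f + m       # the actual share s0; node n reports it to coordinator C
    return None                # no match on this node: enter Termination

# Toy check: with s0 = 123456 noised by m = 789, only node m mod N (= 5) finds s0.
s0, m, M, N = 123456, 789, 2**12, 8
f, h = s0 - m, H(s0)
print([n for n in range(N) if reduce_phase(n, N, f, M, h) == s0])   # [5]
```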

  21. StaticScheme • Node n that gets the successful match sends s to C • Otherwise node n enters Termination • C asks every node to terminate • Details depend on actual cloud • C forwards s as s0 to E

  22. Static Scheme • E discloses the secret S and sends S to Requestor • Bill included • E.g., up to 400$ on CloudLayer for • D = 70 days • R = 10 min • Both implied N = 10K with private option

  23. Static Scheme • Observe N ≥ D / R and N ≈ D / R • If the initial estimate of T by the key owner holds • Average recovery time on the lucky node is R / 2 • Since every noise is equally likely to be the lucky one • Individual cost can be offset by a key insurance service • Perhaps 4$/y per key per subscriber in our ex., i.e., peanuts • Assuming 1% of clients performs actual recovery per year

  24. Static Scheme • See Res. Report for details, i.e., • Numerical examples • Correctness • The scheme really partitions I • Whatever N and s0 are, one and only one node finds s0 • Average recovery time is R / 2

  25. Static Scheme • Safety • No disclosure method can in practice be faster than the scheme • Dictionary attack, inverted file of hints… • Other properties

  26. Scalable Scheme • Intended for heterogeneous clouds • Different node throughputs • Basically only locally known • E.g. • Private or hybrid cloud • Public cloud without so-called private node option

  27. ScalableScheme • Heterogeneous cloud • Node throughputs may differ

  28. Scalable Scheme • Init phase similar up to λ(M) calculus • Basically λ(M) >> 1 • Also we note it now λ0 • If λ0 > 1 we say that the node overflows • Node 0 then sets its node level j to j = 0 and splits • Requests node 2^j = 1 • Sets j to j = 1 • Sends to node 1 (S”, j, a0)

  29. Scalable Scheme • As a result • There are N = 2 nodes • Node 0 and node 1 each have M / 2 match attempts to process • Iff both load factors are no more than 1 • Usually it would not be the case

  30. Scalable Scheme • Recursive distributed rule • Each node n splits until λn ≤ 1 • Each split increases node level jn to jn + 1 • Each new node n’ gets jn’ = jn initially • Node 0 thus splits perhaps into 1, 2, 4… until λ0 ≤ 1 • Node 1 starts with j = 1 and splits into 3, 5, 9… until λ1 ≤ 1 • Node 2 starts with j = 1, splitting into 4, 6, 10… until λ2 ≤ 1 • Your general rule here • Node with smaller T splits more times and vice versa
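A rough simulation of this split rule (one possible reading of the slides: the new node continues at the splitting node's incremented level, and each split roughly halves the node's share of the M attempts; the names and the toy throughput function are assumptions, not part of the scheme):

```python
def scalable_split(M, R, throughput):
    """Simulate recursive splits: an overflowing node n at level j requests node n + 2**j,
    then both continue at level j + 1, until every node's load factor is <= 1."""
    level = {0: 0}   # node address -> level j
    load = {0: M}    # match attempts currently assigned to each node
    overflowing = True
    while overflowing:
        overflowing = False
        for n in sorted(level):
            if load[n] > R * throughput(n):   # load factor > 1: node n overflows
                j = level[n]
                new = n + 2**j                # address of the requested node
                level[n] = level[new] = j + 1
                load[new] = load[n] // 2      # the split halves n's remaining range
                load[n] -= load[new]
                overflowing = True
    return level

# Example: heterogeneous cloud where even-numbered nodes are twice as fast.
levels = scalable_split(M=6 * 10**12, R=600,
                        throughput=lambda n: 2_000_000 if n % 2 == 0 else 1_000_000)
print(len(levels), "nodes, levels between", min(levels.values()), "and", max(levels.values()))
```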

  31. Scalable Scheme • If the cloud is homogeneous, the address space is contiguous • Otherwise, it is not • No problem • Unlike for an extensible or linear hash data structure

  32. Scalable Scheme • Reduce phase • Every node n at level j attempts matches for every k ∈ [0, M-1] such that n = k mod 2^j • If node 0 split three times, in the Reduce phase it will attempt to match noised shares (f + k) with k = 0, 8, 16… • If node 1 split four times, it will attempt to match noised shares (f + k) with k = 1, 17, 33… • Etc
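The corresponding per-node loop, a sketch in the same style as the static one (names illustrative; H, f, h as before):

```python
import hashlib

def H(x: int) -> str:
    return hashlib.sha256(str(x).encode()).hexdigest()   # same hint hash as on the client side

def scalable_reduce(n: int, j: int, f: int, M: int, h: str):
    """Node n at level j attempts matches for every k in [0, M-1] with n = k mod 2**j."""
    step = 2**j
    for k in range(n % step, M, step):   # e.g. node 0 after 3 splits (j = 3): k = 0, 8, 16...
        if H(f + k) == h:
            return f + k                 # the actual share s0
    return None
```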

  33. Scalable Scheme • N ≥ D / R • If the S owner’s initial estimate holds • For a homogeneous cloud it is 30% greater on average and twice as big at worst • Cloud cost may still be cheaper • No need for the private option • Versatility may still make it preferable besides • Average recovery time remains R / 2

  34. ScalableScheme • See again Res. Report for • Numerical ex. • Correctness • Safety • … • Details of perf. analysis remain future work

  35. Related Work • RE for outsourced LH* files • CSCP for outsourced LH* records sharing • SharePoint • Crypto puzzles • One-way hash with trapdoor • 30-year old excitement around the Clipper chip • Botnets

  36. Conclusion • Key safety is the Achilles’ heel of cryptography • Key loss or key disclosure? That is The Question • RENS schemes alleviate the dilemma • Future work • Deeper formal analysis • Experiments • Variants • Especially the one called « multiple noising » • Raising average recovery time towards R • Other consequences of the principle « Big Calculations » = « Big Data » ?

  37. Witold LITWIN & al.* • * Early stage discussions with J. Katz, UMD, helped to shape the noised secret idea • Thanks for Your Attention
