
Microsoft Cloud Computing Research Centre


Presentation Transcript


  1. Microsoft Cloud Computing Research Centre Cloud Panopticon: Technical History 1st Annual Symposium, Cambridge 2014 Jon Crowcroft jon.crowcroft@cl.cam.ac.uk

  2. Brief History of Surveillance Immune System • We’ve been here before • In the mid-1990s, lawful-intercept agencies pressured the Internet community to weaken its tech • The response was the (aptly numbered) RFC 1984 • http://tools.ietf.org/html/rfc1984 • IAB/IESG/Internet Society/IETF • Attacks included • weakened keys, key escrow • Weaknesses of the proposals included • conflicting international policies • use of multiple layered encryption

  3. What happened next? The IETF “won” • TLS/HTTPS started to become routine • DNSSEC & certificates • Cryptography • Better securing of infrastructure

  4. Surveillance and DPI • Tech for deep packet inspection, e.g. Endace • Initially developed for traffic engineering • to reveal popular application sets and the traffic matrix • Became widely used for full packet capture at IXPs • A port mirror copies all the data to the security agency • Response: accelerate default use of HTTPS/TLS • Together with NATs, this makes network intercept worthless • Even for “metadata”

  5. What happens next? • Around this time, the dominant traffic became • Mobile device (many) <-> Cloud provider (few) • Key changes are: • even more obfuscated (and secure) end points, but • far, far fewer, highly visible end points • Instead of 100M NATed desktops talking to 100M websites, • we have a billion smartphones talking to a dozen cloud providers, almost all of the latter in the US • The attack surface is very, very obvious

  6. Surveillance on Cloud • Was easy because: • Easy to find cloud data centers • Data stored in the plain, so that analytics can work • Data between cloud machines was transferred in the plain • Data is processed in the plain, so that targeted adverts can work • i.e. the main (two-sided) business model of the cloud makes it ideal to be weaponised

  7. What happened next • Those revelations… • Embarrassed & annoyed “libertarian” tech cloudsters • The Vancouver IETF plenary response was vehement • Tech “solutions” • Encrypt data between data centers (Google) • Encrypt data in storage (most) • Client-side decryption (Apple) • Research into processing encrypted data is ongoing

  8. Future • Securing key distribution (see RFC 1984) • Viable solutions for cloud services on encrypted data • Search, targeted ads: solutions exist • Analytics: could use a trusted 3rd party now • Later, we’ll see

  9. What happens to lawful intercept? • Two extremes • They lose: • they have to do their job properly and • have probable cause • get warrants • do intelligence… • Or: law mandates client-side trapdoors (against RFC 1984)

  10. Conclusion • The arms race between • security agencies and bad guys on the one hand • and the public on the other • is not new • is not over • is not transparent • or informed by good cost-benefit analysis; • see for example this Cato report: Responsible Counterterrorism Policy • http://www.cato.org/publications/policy-analysis/

  11. Microsoft Cloud Computing Research Centre Regional clouds: technical considerations 1st Annual Symposium, Cambridge 2014 Jon Crowcroft jon.crowcroft@cl.cam.ac.uk Jat Singh jatinder.singh@cl.cam.ac.uk

  12. Regional Clouds • Hard to define; many outstanding issues • Management and control underpin the rhetoric • Who has the power (capability)? Who is trusted? • Technical mechanisms for management • Offerings in a regional-cloud context • Implications: does this make sense? • Research into improving industrial ‘best practices’

  13. Outline Explore different levels of the technical stack Focus: • Network-level routing • Cloud provisioning • Cryptography • Flow controls (‘data tagging’)

  14. Internet & Routing Controls • Autonomous Systems (AS): ‘sections’ of the network • Internet exchange points: exchange between AS • Border Gateway Protocol encapsulates the routing policy between networks • In practice, routing policy reflects peering/service/business arrangements

  15. Internet & routing controls (regional clouds) • Cloud providers manage their infrastructure • Many already account for geography for better service provisioning (performance, latency, etc.) • Bigger providers already involved in peering arrangements • Technically feasible with right incentives to ensure that data is routed within a geographical boundary E.g. economic benefits, regulation, … • But such an approach is blunt • applies to all traffic, regardless

  16. Cloud provisioning: service levels • The provider manages what sits below the service interface; tenants manage what sits above • Different management concerns for each service offering

  17. Cloud provisioning: service offerings • There is already work on tailoring services to particular constraints • Differential privacy: tailor query results so as not to reveal too much private information • Services are already offered based on user/tenant locale • Not only for performance, but also security, rights management, etc. (e.g. iPlayer) • Providers already manage their infrastructure • customising service and content for regional concerns • Thus the capability already exists to tailor services for particular regional and/or jurisdictional concerns
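
As an illustration of the differential-privacy point above, a minimal sketch of the standard Laplace mechanism for a counting query (sensitivity 1). The scenario and epsilon value are illustrative, not a production mechanism.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a counting-query result with Laplace(1/epsilon) noise added.
    The difference of two iid Exp(epsilon) variates is Laplace-distributed
    with scale 1/epsilon, which suffices for a sensitivity-1 count."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Illustrative: report roughly how many tenants are in the EU zone
# without revealing the exact count.
print(dp_count(1042, epsilon=0.5))
```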

  18. Cloud provisioning: Unikernels • Cloud exists to leverage shared infrastructure • Isolation is important: • VMs: separate per tenant, a complete OS, managed by the hypervisor • Containers: shared OS, isolated users • Deployment-heavy, isolation overheads, … • The future? Unikernels: • a library OS; build/compile a VM with only what is required • hypervisor-managed, removing user-space isolation concerns

  19. Cloud provisioning: Unikernels (2) • Very small, lightweight, easily deployed VMs: • easily moved around the infrastructure • deploy in locales/jurisdictions when/where relevant • Facilitates customised services • specific unikernels for particular services • encapsulating specific jurisdictional requirements? • Transparency: a natural audit trail • “Pulls” what is required to build, on demand
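
A hedged sketch of the “deploy in locales/jurisdictions when/where relevant” idea: a placement function that only offers data centres in the tenant’s required jurisdiction. The site names and the table are hypothetical stand-ins for a provider’s real inventory.

```python
# Hypothetical datacentre-to-jurisdiction table, invented for illustration.
DATACENTRES = {
    "eu-west":  "EU",
    "eu-north": "EU",
    "us-east":  "US",
}

def eligible_sites(required_jurisdiction: str) -> list[str]:
    """Sites where a unikernel serving this tenant may be booted."""
    return [site for site, region in DATACENTRES.items()
            if region == required_jurisdiction]

# Because unikernels boot quickly, placement can be decided per request and
# each boot event logged, supporting the natural audit trail noted above.
print(eligible_sites("EU"))   # ['eu-west', 'eu-north']
```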

  20. Data-centric controls

  21. Cryptography • Range of purposes: • Data protection: storage, transit, comm. channels • Authentication, certification, attestation, etc. • Encryption • unintelligible, except to those with the keys • encrypt(plaintext, key) => ciphertext • decrypt(ciphertext, key) => plaintext • Regional question: who can (potentially) access the keys?
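
The encrypt/decrypt pair above, made concrete with the Fernet recipe from the Python `cryptography` package (symmetric, authenticated encryption). This is one possible instantiation for illustration, not the scheme the slides assume.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # whoever holds this key can read the data
f = Fernet(key)

ciphertext = f.encrypt(b"tenant record")   # encrypt(plaintext, key) => ciphertext
plaintext = f.decrypt(ciphertext)          # decrypt(ciphertext, key) => plaintext
assert plaintext == b"tenant record"
```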

  22. Client-side encryption • The client encrypts data before it reaches the cloud provider, so the provider holds only ciphertext • Cloud services • computation is generally on plaintext • fully homomorphic encryption is not practicable (yet) • encrypted search and privacy-preserving targeted ads exist
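
A minimal sketch of client-side encryption under the same assumptions: the client encrypts before upload, so the provider stores and returns only ciphertext. `CloudStore` and its `store`/`fetch` methods are hypothetical stand-ins for any blob-storage API.

```python
from cryptography.fernet import Fernet

class CloudStore:
    """Hypothetical blob store: it never sees plaintext or keys."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def store(self, name: str, blob: bytes) -> None:
        self._blobs[name] = blob
    def fetch(self, name: str) -> bytes:
        return self._blobs[name]

client_key = Fernet.generate_key()   # held client-side; never uploaded
cloud = CloudStore()

cloud.store("doc1", Fernet(client_key).encrypt(b"private notes"))
print(Fernet(client_key).decrypt(cloud.fetch("doc1")))   # b'private notes'
```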

  23. Encryption and keys • Who could access the keys? • Trust and the legal regime(s) • Client-held keys • Cloud providers holding client keys • Providers now (internally) use crypto provisioning • Trusted third parties: CAs, key escrows • Key management isn’t easy • Vulnerabilities: compromised keys, broken schemes and/or implementations • Transparency: when was data decrypted?

  24. Flow controls: data tagging • ‘Tag’ data to • track, and • control where it flows • Metadata ‘stuck’ to data to effect data-management policy • Cloud benefits: • management within the provider’s realm • control and/or assurance, transparency • Various approaches • e.g. CSN @ Imperial: tenants collaborate to find leaks • Information Flow Control (IFC)

  25. IFC: Regional isolation at application-level • Entities run in a ‘security context’ (tagged) • Tags: <concern, specifier>, e.g. <zone, EU> • [Diagram: Application 1 <zone, EU> may flow to Service X <zone, EU> and Database <zone, EU> ✔, but not to Service X <zone, US> ✖] • All contexts and flows are audited • Mechanisms exist for EU->US flows, but they are trusted, privileged (and audited!)
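
A hedged sketch of the flow check the diagram describes: a flow is allowed only when the destination context carries (at least) the source’s tags, so EU-tagged data cannot flow to a US-tagged service. Tag values and entity names are illustrative.

```python
def flow_allowed(source_tags: set, dest_tags: set) -> bool:
    """Permit a flow only if every source tag also labels the destination."""
    return source_tags <= dest_tags

eu = {("zone", "EU")}
us = {("zone", "US")}

print(flow_allowed(eu, eu))   # True:  <zone, EU> -> <zone, EU>  (the checkmarks)
print(flow_allowed(eu, us))   # False: <zone, EU> -> <zone, US>  (the cross)
```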

  26. IFC: Ongoing work • Experimenting at the OS level, covering all application-level I/O • System calls within a machine, messaging across machines • Requires a trusted computing base • protects at the levels above enforcement • Much more to do! • Enforcement: as small as possible, verifiable, in hardware • Policy specification • Privilege management • Tag specification and naming

  27. IFC in the cloud • Control and transparency • within the realm of the cloud provider • Data-centric, fine-grained isolation • Enforcement naturally leads to audit • Aims at compliance/assurance, generally not spooks • Potential for a virtual jurisdiction? • The cloud isolates/offers services for specific jurisdictions

  28. Conclusion • Regional cloud issues concern data management • Technical mechanisms exist for control and transparency • Different mechanisms at different technical levels • different capabilities, visibility • Developments in this space • improve cloud best practice • may address concerns underpinning the balkanisation rhetoric

  29. To summarise • Nearly 20 years ago, CALEA asked us to collude in weaker security for everyone • This would be bad for civil society, so • we said no! • Now Snowden reveals that we were ignored • Tit-for-tat is the optimal strategy: • this time we will again say no!

  30. Technical workshop • CLaw: Legal and technical issues in cloud computing • IC2E: IEEE International Conference on Cloud Engineering (Mar 2015) • http://conferences.computer.org/IC2E/2015

  31. Information Flow Control: Regional example (1) • Only privileged, trusted entities may change context • Encrypt EU data before sending • [Diagram: Application 1 <zone, EU> undergoes an audited context change to Application 1 <zone, US>, then sends to Application 2 <zone, US> ✔]

  32. Information Flow Control: Regional example (2) • More accurately… • [Diagram: a privileged zone-mixer <zone, EU> takes EU-tagged input and, via an audited context change, emits <zone, US>-tagged output to Application 2 <zone, US> ✔] • NB this isn’t “application 1”, as its previous outputs would have had both US- and EU-tagged outputs…
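
A hedged sketch of the privileged, audited context change the zone-mixer performs: the tag swap succeeds only if the entity holds the privilege for the tag being dropped, and every successful change is logged. The names and the privilege model are illustrative simplifications of IFC declassification.

```python
audit_log: list[str] = []

def change_context(entity: str, tags: set, privileges: set,
                   old: tuple, new: tuple) -> set:
    """Swap one tag for another iff the entity is privileged for the old tag;
    record every successful change in the audit log."""
    if old not in privileges:
        raise PermissionError(f"{entity} may not declassify {old}")
    audit_log.append(f"{entity}: {old} -> {new}")
    return (tags - {old}) | {new}

# The zone-mixer is trusted to move (e.g. already-encrypted) EU data
# into the US zone; the change is recorded for later audit.
tags = change_context("zone-mixer", {("zone", "EU")},
                      privileges={("zone", "EU")},
                      old=("zone", "EU"), new=("zone", "US"))
print(tags, audit_log)
```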
