

  1. Introduction to Information Security 0368-3065, Spring 2014. Lecture 9: Virtual machine confinement, trusted computing architecture. Eran Tromer. Slides credit: Dan Boneh, Stanford

  2. Confinement using Virtual Machines

  3. Virtual machines. US patent 6,922,774 (NSA NetTop). Security benefits: • Confinement (isolation, sandboxing) • Management • Monitoring • Recovery • Forensics (replay) [Diagram: an "Untrusted Code" VM and a "Key" VM, each with its own processes, OS services and OS kernel, running on a hypervisor over the host OS (if Type 2) and the shared CPU, memory and devices.]

  4. VMM security assumption • Malware may infect the guest OS and guest apps • But malware cannot escape from the infected VM • Cannot infect the host OS • Cannot infect other VMs on the same hardware • Requires that the VMM protects itself and is not buggy • The VMM is much simpler than a full OS • The VMM API is much simpler than the OS API • (but the host OS still has device drivers)

  5. Example of VM security application: VMM introspection, protecting the anti-virus system

  6. Example: intrusion detection / anti-virus • Runs as part of the OS kernel and a user-space process • A kernel rootkit can shut down the protection system • Common practice for modern malware • Standard solution: run the IDS in the network • Problem: insufficient visibility into the user's machine • Better: run the IDS as part of the VMM (protected from malware) • The VMM can monitor the virtual hardware for anomalies • VMI: Virtual Machine Introspection • Allows the VMM to check GuestOS internals

  7. Sample checks. Stealth malware: • Creates processes that are invisible to "ps" • Opens sockets that are invisible to "netstat" 1. Lie detector check • Goal: detect stealth malware that hides processes and network activity • Method: • The VMM lists the processes running in the GuestOS • The VMM requests the GuestOS to list its processes (e.g. via ps) • If there is a mismatch, kill the VM (see the sketch below)
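
A minimal sketch of the lie-detector cross-view comparison, assuming the VMM has already walked the guest kernel's task list and separately collected the guest's own ps output; the helper name and the process lists are hypothetical illustration values.

    # Hypothetical helper: compare the VMM's view of guest processes with the
    # list the guest itself reports (e.g. via "ps"); a real VMI tool would
    # obtain the first view by walking guest kernel data structures.
    def lie_detector(vmm_view: set, guest_view: set) -> set:
        # Processes the VMM sees but the guest hides from its own tools.
        return vmm_view - guest_view

    vmm_view = {"init", "sshd", "apache2", "stealth_helper"}   # from the VMM
    guest_view = {"init", "sshd", "apache2"}                   # from "ps" in the guest

    hidden = lie_detector(vmm_view, guest_view)
    if hidden:
        print("mismatch, hidden processes:", hidden)           # policy: kill the VM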

  8. Sample checks (cont.) 2. Application code integrity detector • The VMM computes a hash of the user app code running in the VM • Compares it to a whitelist of hashes • Kills the VM if an unknown program appears 3. Ensure GuestOS kernel integrity • example: detect changes to sys_call_table 4. Virus signature detector • Run a virus signature detector on GuestOS memory 5. Detect if the GuestOS puts the NIC in promiscuous mode
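
A sketch of check 2, under the assumption that the VMM can already read the guest application's code pages as raw bytes; the whitelist digest is a made-up illustration value.

    import hashlib

    # Whitelist of SHA-1 digests of approved application code (illustrative value).
    WHITELIST = {"356a192b7913b04c54574d18c28d46e6395428ab"}

    def app_code_is_known(code_pages: bytes) -> bool:
        # Hash the code the VMM extracted from guest memory and check the whitelist.
        return hashlib.sha1(code_pages).hexdigest() in WHITELIST

    if not app_code_is_known(b"<bytes of the running application>"):
        print("unknown program detected")        # policy: kill the VM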

  9. Virtualization – covert channels and side channels • Covert channel: an unintended communication channel between isolated but cooperating components (sender and receiver) • Can be used to leak classified data from a secure component to a public component • Side channel: an unintended channel that lets an attacker component retrieve information from a victim component without the latter's cooperation • Often induced by low-level resource contention [Diagram: the "Untrusted Code" and "Key" VMs again, each with its own processes, OS services and OS kernel, sharing a hypervisor, CPU, memory and devices.]

  10. An example covert channel • Both VMs use the same underlying hardware • To send a bit b ∈ {0,1} the malware does: • b = 1: at midnight, do a CPU-intensive calculation • b = 0: at midnight, do nothing • At midnight, the listener does a CPU-intensive calculation and measures the completion time • Now b = 1 ⇔ completion-time > threshold • Many covert channels exist in a running system: • File lock status, cache contents, interrupts, … • Very difficult to eliminate (a toy model follows below)
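
A toy model of this CPU-load channel, assuming the sender and listener run concurrently on the same physical CPU; the slot length, workload size and threshold are arbitrary illustration values, not tuned parameters.

    import time

    def burn_cpu(seconds):
        # Sender for b = 1: keep the shared CPU busy for the whole slot.
        end = time.perf_counter() + seconds
        while time.perf_counter() < end:
            pass

    def send_bit(b, slot=0.5):
        if b == 1:
            burn_cpu(slot)        # CPU-intensive calculation at the agreed time
        else:
            time.sleep(slot)      # do nothing

    def receive_bit(threshold=0.2):
        # Listener: run a fixed CPU-intensive calculation and time it.
        start = time.perf_counter()
        sum(i * i for i in range(10**6))
        elapsed = time.perf_counter() - start
        return 1 if elapsed > threshold else 0   # slow completion means the sender sent 1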

  11. Cache-based side channels in cloud computing. Demonstrated, using Amazon EC2 as a case study: • Cloud cartography: mapping the structure of the "cloud" and locating a target on the map • Placement vulnerabilities: an attacker can place his VM on the same physical machine as a target VM (40% success for a few dollars) • Cross-VM exfiltration: once VMs are co-resident, secret information can be exfiltrated across the VM boundary (simulated: theft of decryption keys!)

  12. Motivation. Virtual machine confinement: a blessing or a curse?

  13. SubVirt [King, Chen, Wang, Verbowski, Wang, Lorch 2006] • Virus idea: • Once on the victim machine, install a malicious VMM • The virus hides in the VMM • Invisible to a virus detector running inside the VM [Diagram: before the attack, the anti-virus runs in an OS directly on the hardware; after the attack, the anti-virus and OS run on top of the VMM-and-virus layer on the hardware.]

  14. The MATRIX

  15. VM-based malware (blue pill virus) [Rutkowska 2006] • A virus that installs a malicious VMM (hypervisor) on the fly, under the running OS • Uses SVM/VT-x to create the VM • Microsoft Security Bulletin (Oct. 2006): suggests disabling hardware virtualization features by default for client-side systems http://www.microsoft.com/whdc/system/platform/virtual/CPUVirtExt.mspx • VMBRs are easy to defeat • A guest OS can detect that it is running on top of a VMM

  16. VMM Detection • Can an OS detect it is running on top of a VMM? • Applications: • Virus detector can detect VMBR • Normal virus (non-VMBR) can detect VMM • refuse to run to avoid reverse engineering • Software that binds to hardware (e.g. MS Windows) can refuse to run on top of VMM • DRM systems may refuse to run on top of VMM

  17. VMM detection (red pill techniques) 1. VM platforms often emulate simple hardware • VMware emulates an ancient i440BX chipset … but reports 8GB RAM, dual Opteron CPUs, etc. 2. VMM introduces timing latency variances • Memory cache behavior differs in the presence of a VMM • Results in timing variations between any two operations 3. The VMM shares the TLB with the GuestOS • The GuestOS can detect the reduced TLB size 4. Deduplication (the VMM keeps a single copy of identical pages) … and many more methods [GAWF'07] (a crude timing sketch of technique 2 follows below)
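
A crude illustration of the timing-variance idea (technique 2), assuming nothing about the platform: time the same short operation many times and look at the relative spread. The workload and the 0.5 threshold are arbitrary; a real detector would compare against a calibrated bare-metal baseline.

    import statistics
    import time

    def timing_spread(samples=1000):
        # Relative spread (coefficient of variation) of a fixed short operation.
        times = []
        for _ in range(samples):
            start = time.perf_counter_ns()
            sum(range(1000))                    # the same small operation each time
            times.append(time.perf_counter_ns() - start)
        return statistics.pstdev(times) / statistics.mean(times)

    if timing_spread() > 0.5:                   # arbitrary threshold, for illustration only
        print("high timing variance: possibly running under a VMM")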

  18. VMM detection. Bottom line: the perfect VMM does not exist • VMMs today (e.g. VMware) focus on: • Compatibility: ensure off-the-shelf software works • Performance: minimize virtualization overhead • VMMs do not provide transparency • Anomalies reveal the existence of a VMM

  19. Trusted Computing Architecture

  20. Background • TCG consortium, founded in 1999 as TCPA • Main players (promoters, out of >200 members): AMD, HP, IBM, Infineon, Intel, Lenovo, Microsoft, Sun • Goals: • Hardware-protected (encrypted) storage: • Only "authorized" software can decrypt data • e.g.: protecting the key for decrypting the file system • Secure boot: a method to "authorize" software • Attestation: prove to a remote server what software is running on my machine

  21. Secure boot. History of BIOS/EFI malware: • CIH (1998): the CIH virus corrupts the system BIOS • Heasman (2007): System Management Mode (SMM) "rootkit" via EFI • Sacco, Ortega (2009): infect the BIOS LZH decompressor • CoreBOOT: generic BIOS flashing tool. Main point: the BIOS runs before any defenses (e.g. antivirus). Proposed defense: lock the system configuration (BIOS + OS). Today: the TCG approach

  22. TCG: changes to the PC • Extra hardware: the Trusted Platform Module (TPM) chip • Single 33MHz clock • TPM chip vendors (~$0.30): Atmel, Infineon, National, STMicro • Intel D875GRH motherboard • Software changes: • BIOS, EFI (UEFI) • OS and apps

  23. TPMs in the real world • TPMs widely available on laptops, desktops and some servers • Software using TPMs: • File/disk encryption: BitLocker, IBM, HP, Softex • Attestation for enterprise login: Cognizance, Wave • Client-side single sign on: IBM, Utimaco, Wave

  24. TPM basics: what the TPM does, and how to use it

  25. Components on the TPM chip: • Non-volatile storage (>1280 bytes) • PCR registers (16 registers) • Crypto engine: RSA, SHA-1, HMAC, RNG (RSA: 1024/2048-bit modulus; SHA-1: outputs a 20-byte digest) • I/O (API calls) over the LPC bus • other junk

  26. Non-volatile storage 1. Endorsement Key (EK) (2048-bit RSA) • Created at manufacturing time. Cannot be changed. • Used for "attestation" (described later) 2. Storage Root Key (SRK) (2048-bit RSA) • Used for implementing encrypted storage • Created after running TPM_TakeOwnership(OwnerPassword, …) • Can be cleared later with TPM_ForceClear from the BIOS 3. OwnerPassword (160 bits) and persistent flags. The private EK, SRK, and OwnerPwd never leave the TPM

  27. PCR: the heart of the matter • PCR: Platform Configuration Registers • Lots of PCR registers on chip (at least 16) • Register contents: a 20-byte SHA-1 digest (+ junk) • Updating PCR #n: • TPM_Extend(n, D): PCR[n] ← SHA-1(PCR[n] || D) • TPM_PcrRead(n): returns the value of PCR[n] • PCRs are initialized to a default value (e.g. 0) at boot time • The TPM can be told to restore PCR values from NVRAM via TPM_SaveState and TPM_Startup(ST_STATE), for system suspend/resume
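
A small Python model of the PCR extend operation, following the formula above; the 20-byte registers and the all-zero initial value match the slide's description, but the function names are only stand-ins for the real TPM commands.

    import hashlib

    PCR_SIZE = 20                                   # bytes: one SHA-1 digest

    def tpm_startup_clear(num_pcrs=16):
        # All PCRs start at the default value (0) at boot time.
        return [b"\x00" * PCR_SIZE for _ in range(num_pcrs)]

    def tpm_extend(pcrs, n, digest):
        # TPM_Extend(n, D): PCR[n] <- SHA-1(PCR[n] || D), where D is a 20-byte measurement.
        pcrs[n] = hashlib.sha1(pcrs[n] + digest).digest()

    def tpm_pcr_read(pcrs, n):
        return pcrs[n]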

  28. Using PCRs: the TCG boot process • BIOS boot block executes • Calls TPM_Startup (ST_CLEAR) to initialize PCRs to 0 • Calls PCR_Extend( n, <BIOS code> ) • Then loads and runs BIOS post boot code • BIOS executes: • Calls PCR_Extend( n, <MBR code> ) • Then runs MBR (master boot record), e.g. GRUB. • MBR executes: • Calls PCR_Extend( n, <OS loader code, config> ) • Then runs OS loader … and so on
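
A self-contained sketch of that measured-boot chain, with placeholder byte strings standing in for the real BIOS, MBR and OS-loader images; for brevity everything is extended into a single PCR.

    import hashlib

    def extend(pcr, data):
        # PCR <- SHA-1(PCR || SHA-1(data)): measure the stage, then fold it into the chain.
        return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

    pcr = b"\x00" * 20                              # TPM_Startup(ST_CLEAR)
    for stage in (b"<BIOS code>", b"<MBR code>", b"<OS loader code, config>"):
        pcr = extend(pcr, stage)                    # each stage is measured before it runs

    print("PCR after boot:", pcr.hex())             # hash chain of all booted software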

  29. In a diagram. [Diagram: hardware → BIOS boot block → BIOS → MBR → OS loader → OS → application; each component measures the next and extends a PCR in the TPM. The boot-block side is the root of trust in integrity measurement; the TPM is the root of trust in integrity reporting.] • After boot, the PCRs contain a hash chain of the booted software • Collision resistance of SHA-1 ensures commitment

  30. Example: Trusted GRUB (IBM '05). Which PCR # to use and what to measure are specified in the GRUB config file

  31. Using PCR values after boot • Application 1: encrypted (a.k.a. sealed) storage • Step 1: TPM_TakeOwnership(OwnerPassword, …) • Creates the 2048-bit RSA Storage Root Key (SRK) on the TPM • Cannot run TPM_TakeOwnership again without the OwnerPwd: • Ownership Enabled Flag ← False • Done once by the IT department or the laptop owner • (optional) Step 2: TPM_CreateWrapKey / TPM_LoadKey • Create more RSA keys on the TPM, protected by the SRK • Each key is identified by a 32-bit keyhandle

  32. Protected storage • Main step: encrypt data using an RSA key on the TPM • TPM_Seal (some) arguments: • keyhandle: which TPM key to encrypt with • KeyAuth: password for using key `keyhandle' • PcrValues: PCRs to embed in the encrypted blob • data block: at most 256 bytes (2048 bits) • Used to encrypt a symmetric key (e.g. AES) • Returns an encrypted blob • Main point: the blob can only be decrypted with TPM_Unseal when PCR-reg-vals = PCR-vals in the blob • TPM_Unseal will fail otherwise (a toy model follows below)
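
A toy model of that seal/unseal policy, assuming nothing about the real blob format: the "blob" here simply records the PCR values it was sealed against, and unseal refuses to release the payload unless the current PCR values match. The RSA encryption under the TPM-resident key is abstracted away.

    def tpm_seal(data, pcr_selection):
        # pcr_selection: {pcr_index: expected 20-byte value}; data: at most 256 bytes.
        assert len(data) <= 256
        return {"pcrs": dict(pcr_selection), "payload": data}   # stand-in for the encrypted blob

    def tpm_unseal(blob, current_pcrs):
        for n, sealed_value in blob["pcrs"].items():
            if current_pcrs.get(n) != sealed_value:
                raise PermissionError("PCR %d mismatch: unseal fails" % n)
        return blob["payload"]

    # Seal a symmetric key against PCR 0; unsealing only works while PCR 0
    # still holds the same value it had at sealing time.
    good_pcrs = {0: b"\x11" * 20}
    blob = tpm_seal(b"AES key bytes", good_pcrs)
    assert tpm_unseal(blob, good_pcrs) == b"AES key bytes"
    # After e.g. a modified MBR changes PCR 0, tpm_unseal raises PermissionError.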

  33. Protected Storage • Embedding PCR values in blob ensures that only certain apps can decrypt data. • e.g.: Messing with MBR or OS kernel will change PCR values.

  34. Sealed storage: applications • Lock software to a machine: • OS and apps sealed with the MBR's PCR • Any change to the MBR (to load another OS) will prevent the locked software from loading • Prevents tampering and reverse engineering • Web server: seal the server's SSL private key • Goal: only an unmodified Apache can access the SSL key • Problem: updates to Apache or the Apache config • General problem with software upgrades/patches: the upgrade process must re-seal all blobs with the new PCRs

  35. Security? • Resetting the TPM after boot • An attacker can disable the TPM until after boot, then extend the PCRs arbitrarily (a one-byte change to the boot block) [Kauer07] • Software attack: sending TPM_Init on the LPC bus allows calling TPM_Startup again (to reset the PCRs) • Simple hardware attack: use a wire to connect the TPM reset pin to ground • Once the PCRs are reset, they can be extended to reflect a fake configuration (see the sketch below) • Rollback attack on encrypted blobs • e.g. undo security patches without being noticed • Can be mitigated using Data Integrity Registers (DIRs) • Need the OwnerPassword to write a DIR
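
A short sketch of why the PCR reset is so damaging, reusing the hash-chain model from earlier: a PCR value depends only on the starting value and the sequence of extends, so after a reset the attacker can replay the legitimate measurements and reproduce the "trusted" PCR value under a completely different running configuration.

    import hashlib

    def extend(pcr, data):
        return hashlib.sha1(pcr + hashlib.sha1(data).digest()).digest()

    legit_chain = (b"<BIOS code>", b"<MBR code>", b"<OS loader code, config>")

    # Honest measured boot.
    honest = b"\x00" * 20
    for stage in legit_chain:
        honest = extend(honest, stage)

    # Attacker: boot arbitrary code, reset the TPM (TPM_Init on the LPC bus or
    # the reset pin), then replay the legitimate measurements into the cleared PCR.
    forged = b"\x00" * 20
    for stage in legit_chain:
        forged = extend(forged, stage)

    assert forged == honest      # sealed blobs now unseal under the fake configuration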
