
Enhancing Trusted Platform Modules with Hardware-Based Virtualization Techniques

Frederic Stumpf and Claudia Eckert, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany.


Presentation Transcript


  1. Enhancing Trusted Platform Modules with Hardware-Based Virtualization Techniques Frederic Stumpf, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany; Claudia Eckert, Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany. Distributed System Lab

  2. OUTLINE • INTRODUCTION • RELATED WORK • VIRTUALIZING THE TPM • TPM ARCHITECTURE • DIRECT NATIVE EXECUTION • SECURE TPM CONTEXT MIGRATION • TPM CREDENTIALS • CONCLUSIONS

  3. INTRODUCTION • The TPM can be used to enhance system security, since it offers a means of storing cryptographic keys in a protected manner. • Berger et al. propose to virtualize the hardware TPM by equipping every VM with its own software TPM that only uses the underlying hardware TPM for certain operations. This approach does not provide the same security level as a hardware TPM.

  4. • Our goal is to provide life-time protection of cryptographic secrets and the possibility of using the functionalities of a hardware-based TPM inside the VMs. To achieve this, we use a TPM that is capable of supporting virtualization with hardware measures.

  5. RELATED WORK • Microsoft’s Next Generation Secure Computing Base is an approach that aims at establishing a small trusted computing base. • NGSCB depends on hardware virtualization technology and on trusted computing technology. • Since in this approach only one VM can access the TPM, virtualizing the TPM is not necessary.

  6. • Intel and AMD have recently introduced the Trusted Execution Technology (TXT) and the Secure Virtual Machine (SVM) technology. • These architectures implement virtualization technology and are thus capable of efficiently supporting different VMs with hardware measures.

  7. • Our approach uses functionalities of the Intel VT-x architecture. This architecture augments the x86 processor with two new forms of CPU operation: • VMX root operation: the VMM runs here. • VMX non-root operation: the guest systems run here. • The processor additionally provides a special-purpose structure called the Virtual Machine Control Structure (VMCS). • State information of the virtual machine is stored into and loaded from this structure whenever a state transition is performed. • A state transition into a VM is called a vmentry, and the transition back to the VMM is called a vmexit.
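The vmentry/vmexit mechanics on this slide can be sketched in a minimal Python model. All class and field names here are illustrative, not Intel's; the point is only the save/restore role of the VMCS:

```python
class VMCS:
    """Per-VM structure holding the saved guest processor state."""
    def __init__(self, guest_state):
        self.guest_state = dict(guest_state)

class CPU:
    def __init__(self, vmm_state):
        self.state = dict(vmm_state)   # currently active register state
        self.mode = "vmx-root"         # the VMM runs in VMX root operation

    def vmentry(self, vmcs):
        # Save VMM state, load the guest's state from the VMCS.
        self.saved_vmm_state = dict(self.state)
        self.state = dict(vmcs.guest_state)
        self.mode = "vmx-non-root"     # guests run in VMX non-root operation

    def vmexit(self, vmcs):
        # Store guest state back into the VMCS, restore the VMM.
        vmcs.guest_state = dict(self.state)
        self.state = dict(self.saved_vmm_state)
        self.mode = "vmx-root"
```

On a vmexit the guest's modified state survives inside its VMCS, so the next vmentry resumes the VM exactly where it left off.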

  8. VIRTUALIZING THE TPM • We propose a multi-context TPM that is completely realized in hardware. • Requirements: We present the design goals for using a TPM in virtual environments: • Performance • Compatibility • Simplicity • Security • Minimal modifications to the specification

  9. • Our approach: The main challenge in providing hardware-enhanced virtualization of TPMs is determining how to handle TPM data, since this data cannot be shared across all VMs. • It is thus necessary to provide every VM with its own instance of a hardware TPM, including its own owner password, PCRs, SRK, and EK.

  10. • The multi-context TPM operates on a Control Structure that is loaded into the TPM each time a particular VM operates on its TPM. • A VMM is responsible for providing an abstract interface to the underlying hardware TPM and for isolating the different TPM instances. • If a VM wants to issue a command to the hardware TPM, the VMM loads the corresponding TPM Control Structure (TPMCS) into the TPM.
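The dispatch step described here can be sketched as follows. This is a hypothetical model, not the paper's implementation: the VMM keeps one TPMCS per VM and swaps in the issuing VM's context before forwarding its command.

```python
class MultiContextTPM:
    """Toy multi-context TPM that executes commands against the
    currently loaded TPM Control Structure (TPMCS)."""
    def __init__(self):
        self.loaded = None                 # currently active TPMCS

    def load_context(self, tpmcs):
        self.loaded = tpmcs

    def execute(self, command):
        assert self.loaded is not None, "no TPM context loaded"
        # Return which VM's state the command operated on.
        return (self.loaded["vm_id"], command)

class VMM:
    """The VMM isolates TPM instances by controlling context loads."""
    def __init__(self, tpm):
        self.tpm = tpm
        self.contexts = {}                 # vm_id -> TPMCS (one per VM)

    def register_vm(self, vm_id):
        self.contexts[vm_id] = {"vm_id": vm_id,
                                "pcrs": [b"\x00" * 20] * 24}

    def issue(self, vm_id, command):
        # Swap in the issuing VM's context first, then forward the command.
        self.tpm.load_context(self.contexts[vm_id])
        return self.tpm.execute(command)
```

Because only the VMM calls `load_context`, a VM's command can never touch another VM's TPM state.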

  11. • To provide a hardware protection mechanism, we introduce another privilege level to the TPM. A virtual machine runs in a lower TPM privilege level and thus can only operate on its own TPM Control Structure.

  12. TPM ARCHITECTURE • TPMprovides anon-volatile storage where the active TPM ControlStructure is loaded. The data of the TPM Control Structurecan only be loaded and unloaded into the TPM. • When a TPM Control Structure is unloaded and written back to disk, it is alwaysprotected by secrets that are stored in the root-data structure. • Only TPM commands that are issued by a VMM can operate on the root data structure. Distributed System Lab

  13. • To provide the TPM with a hardware-based protection architecture and to isolate one VM's TPM context from another, we introduce protection rings into the TPM. • We introduce a 1-bit control register (CR) that reflects the current TPM privilege level and is updated every time a context switch occurs.

  14. • Our proposed architecture requires direct interaction with the CPU in order to establish the necessary protection domains and to ensure that context switches are enforced. • The authorization data of the TPM are kept secure inside the VM, so the VMM cannot inspect the communication between VM and TPM. • All communication between the TPM and the software stack is encrypted with a session key that is derived from the authorization data.
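A minimal sketch of such a protected channel follows. The KDF, pad cipher, and message format are assumptions for illustration, not the TPM specification's own constructions; the point is that VM and TPM derive the same key from the shared authorization data, while the VMM, which only relays opaque bytes, can neither read nor forge commands.

```python
import hashlib
import hmac

def session_key(auth_data: bytes, nonce_even: bytes, nonce_odd: bytes) -> bytes:
    """Both endpoints derive the session key from the shared secret."""
    return hmac.new(auth_data, nonce_even + nonce_odd, hashlib.sha256).digest()

def protect(key: bytes, command: bytes) -> bytes:
    """Encrypt (illustrative XOR pad) and authenticate a short command."""
    assert len(command) <= 32, "toy cipher: commands up to 32 bytes"
    pad = hashlib.sha256(key + b"pad").digest()
    ct = bytes(a ^ b for a, b in zip(command, pad[:len(command)]))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def unprotect(key: bytes, blob: bytes) -> bytes:
    """Verify the HMAC tag, then decrypt; raises on tampering/wrong key."""
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("tampered blob or wrong session key")
    pad = hashlib.sha256(key + b"pad").digest()
    return bytes(a ^ b for a, b in zip(ct, pad[:len(ct)]))
```

A VMM without the authorization data derives a different key, so `unprotect` fails on anything it tries to forge.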

  15. • The TPM back-end device driver is integrated into the VMM and therefore runs in ring 0 of the CPU's VMX root mode. • Its main purposes are isolating the different TPM interfaces and scheduling the TPM commands invoked by the VMs.

  16. • The white areas of the figure show the operations of a VMM that set up the environment for every VM and its corresponding TPM context. • The gray areas represent two different VMs with their corresponding TPM contexts.

  17. • The VMM assigns to every TPM context and every VM instance an execution time in which the VM is allowed to execute operations on the TPM. • The execution time covers both how long a VM is allowed to use the TPM and how long it is allowed to use the CPU. • Once the execution time has elapsed, the VMM regains control of the TPM and activates the next TPM context.
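Under these assumptions the policy is essentially round-robin time slicing. A small illustrative sketch (names hypothetical; the paper does not prescribe a particular scheduling algorithm):

```python
from collections import deque

class TPMScheduler:
    """Round-robin over VM/TPM contexts: the active VM keeps its TPM
    context loaded until its slice expires."""
    def __init__(self, vm_ids):
        self.queue = deque(vm_ids)
        self.active = None

    def tick(self):
        """Called when the current execution time has elapsed; returns
        the VM whose TPM context the VMM should load next."""
        if self.active is not None:
            self.queue.append(self.active)   # expired context goes to the back
        self.active = self.queue.popleft()
        return self.active
```

Each `tick` models the VMM regaining control of the TPM and handing it to the next context in line.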

  18. • A transition from one TPM context to another is controlled by the VMM through a TPM Control Structure (TPMCS). • The VMM associates with every TPM context its own TPM Control Structure. Only the TPM can operate on this structure. • Each time the VMM closes a TPM context and spawns a new one, the old Control Structure is saved to disk and the new Control Structure is loaded into the TPM.

  19. • To protect this TPM structure from unauthorized modifications, we propose using the Storage Root Key (SRK) of the TPM to seal this structure to the current platform. • The SRK is stored inside the TPM root structure and is only accessible by the TPM in the TPM's root mode.

  20. • To realize the management of the TPM, we introduce additional TPM commands that manage the TPM state and are used to control the different TPM contexts. • All commands are executable only by the VMM, since they must be executed in ring 0 of the TPM. • The TPM also supports some sensitive instructions that allow a TPM context to be migrated to another TPM, e.g. TPM_Xon/TPM_Xoff, TPM_Launch, TPM_Resume, TPM_Exit.

  21. DIRECT NATIVE EXECUTION • A processor is shared by several processes or VMs, and every process is allowed to use the resource for a certain time period. • The main advantage of this sharing technique is that it enables direct native execution and thus efficient virtualization. • In order to achieve direct native execution, every sensitive instruction executed by a VM must trap into the VMM.

  22. Handling sensitive instructions • If a sensitive instruction, such as TPM_Exit, is executed in ring 1, the TPM must trap into the VMM. • Sensitive commands must be executable only by the VMM; otherwise, a VM could overwrite values of a TPM Control Structure which belongs to another TPM context. • A VMM cannot inspect all commands sent by a VM, so the TPM itself must decide whether or not a command is to be executed.
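The trap rule above can be sketched as a simple check against the TPM's 1-bit control register (the structure and the exact command set are illustrative):

```python
# Commands listed on the previous slide as VMM-only ("sensitive").
SENSITIVE = {"TPM_Exit", "TPM_Launch", "TPM_Resume", "TPM_Xon", "TPM_Xoff"}

class TrapToVMM(Exception):
    """Raised when a VM (ring 1) attempts a VMM-only TPM command."""

def execute_command(command: str, cr: int) -> str:
    """cr models the TPM's 1-bit control register: 0 = VMM, 1 = VM.
    Sensitive commands from ring 1 trap instead of executing."""
    if command in SENSITIVE and cr != 0:
        raise TrapToVMM(command)
    return f"executed {command} in ring {cr}"
```

The TPM, not the VMM, performs this check, which is why the VMM never needs to inspect the (encrypted) command stream.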

  23. (Figure slide; no transcript text.)

  24. Scheduling the TPM • If the TPM is not directly integrated into the CPU, a vm_exit does not directly set the control register of the TPM to zero. • The CPU performs a vm_exit that is caused by the interval timer and sets the program counter to a specific entry point of the VMM. • The VMM then determines which VM is next in line to use the processor. • The VMM then closes the TPM context by sending a TPM_Exit command to the TPM.

  25. • The TPM resets its control register to zero, issues an interrupt, and delivers the corresponding interrupt vector number to the CPU via an interrupt controller. • The CPU again jumps to a specific entry point of the VMM. • The VMM then resumes the next TPM context by sending the TPM_Launch command together with the stored and encrypted TPMCS to the TPM.
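The TPM side of this hand-off (slides 24-25) can be modeled in a few lines; the class and method names are illustrative:

```python
class HardwareTPM:
    """Toy model of the TPM's role in a context switch."""
    def __init__(self):
        self.cr = 0          # 1-bit control register: 0 = root mode (VMM)
        self.context = None  # currently loaded TPMCS

    def tpm_exit(self):
        """TPM_Exit: close the active context, reset CR to zero, and
        return the (conceptually encrypted) TPMCS for storage on disk."""
        saved = self.context
        self.context = None
        self.cr = 0          # TPM is back under VMM control
        return saved

    def tpm_launch(self, tpmcs):
        """TPM_Launch: load the next stored TPMCS and leave root mode."""
        self.context = tpmcs
        self.cr = 1
```

Between `tpm_exit` and the next `tpm_launch`, CR is zero, which is exactly the window in which the real TPM raises its interrupt and the VMM selects the next context.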

  26. (Figure slide; no transcript text.)

  27. SECURE TPM CONTEXT MIGRATION • To prevent a TPMCS from being migrated to multiple contexts, or an old TPMCS from being replayed into the system, we propose using the monotonic counter of the TPM to synchronize all existing TPM contexts with the root structure of the TPM. • Each TPMCS and the root TPM structure hold a register, and both are incremented each time a TPMCS is migrated. This register is called the migration counter.

  28. • Before a TPMCS can be migrated, the TPM internally verifies whether both migration counters have the same value. • If an already migrated TPMCS is loaded into the TPM again, its migration counter differs from the one stored inside the TPM, and the TPM will refuse to operate on this structure. • The migration counter cannot be reset and can only be incremented.
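The replay-protection rule on these two slides can be sketched as follows (the data layout is a hypothetical simplification; real counters would live in protected TPM storage):

```python
class MigrationError(Exception):
    pass

class TPMRoot:
    """Models the TPM root structure's monotonic migration counter."""
    def __init__(self):
        self.migration_counter = 0   # can only be incremented, never reset

    def load(self, tpmcs):
        """Refuse any TPMCS whose counter no longer matches the TPM's."""
        if tpmcs["migration_counter"] != self.migration_counter:
            raise MigrationError("stale TPMCS: replay refused")
        return tpmcs

    def migrate_out(self, tpmcs):
        """Verify the counters match, then advance both so any copy kept
        from before the migration becomes unloadable."""
        self.load(tpmcs)
        self.migration_counter += 1
        tpmcs["migration_counter"] += 1
        return dict(tpmcs)           # package handed to the destination
```

An attacker who keeps a copy of the TPMCS from before the migration holds a structure with the old counter value, and `load` rejects it.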

  29. • The TPMCS is directly bound to random nonces generated on the source and the destination platform, to prevent a TPMCS from being migrated to multiple TPMs. • The TPM generates a non-migratable TPM key KA and certifies this key to provide assurance that it is held in the protected storage of a genuine TPM.

  30. (Figure slide; no transcript text.)

  31. • KA is transferred together with a nonce (NA) and the AIK certificate to the destination platform, where the import is initialized. • The destination TPM initializes the import procedure by generating and certifying its own non-migratable TPM key (KB). • The TPM delivers the encrypted package consisting of NA and NB to the migration interface.

  32. • The migration interface collects all certificates that are necessary to prove that KB is a TPM-protected key and transfers the package back to the source TPM. • The source migration interface then provides owner authorization to the TPM and delivers the received encrypted package. • The TPM verifies the authorization of the command and checks internally whether the migration counter conforms to the migration counter of the TPMCS that is to be migrated.

  33. • The TPM then encrypts the TPMCS and the nonces with the delivered TPM key of the destination platform and transfers the package to the destination platform. • The TPM of the destination platform imports the package and activates the TPMCS if the nonces and the asserted authorization data are correct.
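The exchange on slides 29-33 can be sketched end to end. The cryptography here is deliberately simplified (a hash tag stands in for encryption under the destination's non-migratable key KB, and key certification / AIK checks are elided), so this is a protocol-shape sketch, not the paper's construction:

```python
import hashlib
import json

def seal_to_key(key: bytes, payload: dict) -> bytes:
    """Stand-in for encryption under K_B: tag + JSON body."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hashlib.sha256(key + body).digest()
    return tag + body

def open_with_key(key: bytes, package: bytes) -> dict:
    tag, body = package[:32], package[32:]
    if hashlib.sha256(key + body).digest() != tag:
        raise ValueError("package not bound to this TPM key")
    return json.loads(body)

def source_export(tpmcs: dict, n_a: str, n_b: str, k_b: bytes) -> bytes:
    """Source TPM: bind the TPMCS to both platforms' nonces and to K_B."""
    return seal_to_key(k_b, {"tpmcs": tpmcs, "n_a": n_a, "n_b": n_b})

def destination_import(package: bytes, n_a: str, n_b: str, k_b: bytes) -> dict:
    """Destination TPM: activate the TPMCS only if the nonces match."""
    data = open_with_key(k_b, package)
    if (data["n_a"], data["n_b"]) != (n_a, n_b):
        raise ValueError("nonce mismatch: not this migration session")
    return data["tpmcs"]
```

Binding the package to the session nonces is what prevents the same export from being imported into a second TPM under a different session.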

  34. TPM CREDENTIALS • In order to provide full-fledged TPMs for every TPM context, every context needs to possess its own EK that is generated and protected by a hardware TPM. • We propose to establish a certificate chain rooted in the EK, which is held in the root TPM structure. • This process allows us to obtain AIKs for every TPM context simply by verifying the certificate chain of the EK.

  35. Spawning a TPM context

  36. • A TPM Object-Specific Authorization Protocol (OSAP) session is created between the TPM and the VMM. This command computes a cryptographic secret and a handle (H) to this session. • The returned handle is then used together with the authentication data (owner password and SRK password) to issue the TPM_Launch command. • After the TPMCS has been created, the VM's integrity is measured and the obtained measurements are added to the PCR values of the currently loaded TPMCS.
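The measurement step at the end of this slide uses the TPM's standard extend operation, PCR_new = SHA-1(PCR_old || measurement). A minimal sketch (the helper name is ours):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold a measurement into a 20-byte PCR value, TPM 1.2 style."""
    return hashlib.sha1(pcr + measurement).digest()
```

Because each new value hashes over the old one, the PCR records the whole measurement history, and the order of measurements matters.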

  37. Exiting a TPM context • To prevent an attacker from being able to inject corrupt data structures, a TPMCS is sealed and signed with keys that are protected by the TPM. • The TPM creates a special-purpose data structure Dn in which the current values of a TPMCS are stored. This data structure is then sealed to the actual platform.

  38. (Figure slide; no transcript text.)

  39. • We denote the sealing of the structure Dn+1 at a specific time t with Seal(Dn+1, PCRInfo, H). • H is a key handle for the Storage Root Key, and PCRInfo is a TPM_PCR_INFO structure that contains the information to which PCRs Dn+1 will be bound. • The operation to unseal is denoted as Unseal({Dn+1}SRK), where for simplicity {Dn+1}SRK also includes the structure of the platform configuration registers.
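The Seal/Unseal notation above can be given a toy implementation. The cipher (a SHA-256 keystream) and the blob layout are illustrative stand-ins for the TPM's real sealing; what the sketch preserves is the semantics: D is encrypted under the key named by handle H (the SRK) and bound to selected PCR values, and unsealing succeeds only when the platform's current PCRs match.

```python
import hashlib

def _stream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (illustrative cipher)."""
    out = b""
    i = 0
    while len(out) < n:
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:n]

def seal(d: bytes, pcr_info: bytes, srk: bytes) -> bytes:
    """Seal(D, PCRInfo, H): encrypt D under a key bound to the PCRs;
    the sealed blob carries PCRInfo, as {D}SRK does on the slide."""
    k = hashlib.sha256(srk + pcr_info).digest()
    ct = bytes(a ^ b for a, b in zip(d, _stream(k, len(d))))
    return len(pcr_info).to_bytes(2, "big") + pcr_info + ct

def unseal(blob: bytes, current_pcrs: bytes, srk: bytes) -> bytes:
    """Unseal({D}SRK): refuse if the platform configuration changed."""
    n = int.from_bytes(blob[:2], "big")
    pcr_info, ct = blob[2:2 + n], blob[2 + n:]
    if pcr_info != current_pcrs:
        raise ValueError("platform configuration changed: unseal refused")
    k = hashlib.sha256(srk + pcr_info).digest()
    return bytes(a ^ b for a, b in zip(ct, _stream(k, len(ct))))
```

This is why a TPMCS sealed on one platform state cannot be resurrected after the platform (or an attacker) has changed the measured configuration.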

  40. CONCLUSIONS • The TPM is a hardware module that protects cryptographic secrets and is capable of acting as a trust anchor. • Our approach extends the TPM specification and shows how a hardware TPM can support virtualization with hardware measures. • We introduced a TPM privilege level and a TPM Control Structure. The combination of both concepts allows a virtual environment to operate directly on the TPM.

  41. • We are currently working on the implementation of our proposed TPM architecture and are planning to extend the TPM emulator and to integrate it into the Xen hypervisor.
