NoHype: Virtualized Cloud Infrastructure without the Virtualization

Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee

Princeton University

IBM Cloud Computing Student Workshop (ISCA 2010 + ongoing work)


Virtualized Cloud Infrastructure

  • Run virtual machines on a hosted infrastructure

  • Benefits…

    • Economies of scale

    • Dynamically scale (pay for what you use)


Without the Virtualization

  • Virtualization used to share servers

    • Software layer running under each virtual machine

  • Malicious software can run on the same server

    • Attack hypervisor

    • Access/Obstruct other VMs

[Figure: Guest VM1 and Guest VM2, each running apps on an OS, sit on top of the hypervisor, which runs on the shared servers' physical hardware]


Are these vulnerabilities imagined?

  • No headlines… doesn’t mean it’s not real

    • Not enticing enough to hackers yet? (small market size, lack of confidential data)

  • Virtualization layer is huge and growing

    • 100 thousand lines of code in the hypervisor

    • 1 million lines in the privileged virtual machine

  • Derived from existing operating systems

    • Which have security holes


NoHype

  • NoHype removes the hypervisor

    • There’s nothing to attack

    • Complete systems solution

    • Still meets the needs of a virtualized cloud infrastructure

[Figure: Guest VM1 and Guest VM2, each running apps on an OS, run directly on the physical hardware with no hypervisor underneath]


Virtualization in the Cloud

  • Why does a cloud infrastructure use virtualization?

    • To support dynamically starting/stopping VMs

    • To allow servers to be shared (multi-tenancy)

  • Does not need the full power of modern hypervisors

    • Emulating diverse (potentially older) hardware

    • Maximizing server consolidation


Roles of the Hypervisor

  • Isolating/Emulating resources

    • CPU: Scheduling virtual machines

    • Memory: Managing memory

    • I/O: Emulating I/O devices

  • Networking

  • Managing virtual machines

[Slide callouts mapping the roles above to the NoHype approach: push to HW / pre-allocation, remove, push to the side]

NoHype has a double meaning… “no hype”


Scheduling Virtual Machines (Today)

  • Scheduler called each time the hypervisor runs (periodically, I/O events, etc.)

    • Chooses what to run next on given core

    • Balances load across cores

[Figure: timeline of a core alternating between VMs and the hypervisor, which is invoked on timer ticks and I/O events and switches which VM runs next]


Dedicate a Core to a Single VM (NoHype)

  • Ride the multi-core trend

    • 1 core on 128-core device is ~0.8% of the processor

  • Cloud computing is pay-per-use

    • During high demand, spawn more VMs

    • During low demand, kill some VMs

    • Customers maximize each VM's work, which minimizes the opportunity for over-subscription (core pinning is sketched below)
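To illustrate what dedicating a core looks like, here is a minimal sketch (my illustration, not code from the NoHype system): it pins the calling thread, standing in for a guest's single vCPU, to one core using Linux's sched_setaffinity. In a NoHype-style host this assignment would be made once, at VM creation time.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int core = (argc > 1) ? atoi(argv[1]) : 0;   /* core to dedicate */

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);

    /* Bind this thread (standing in for the guest's one vCPU) to exactly
     * one core; no scheduler decision is ever needed again. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to core %d\n", core);
    /* ... the guest's work would run here, never sharing the core ... */
    return 0;
}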


Managing Memory (Today)

  • Goal: system-wide optimal usage

    • i.e., maximize server consolidation

  • Hypervisor controls allocation of physical memory


Pre-allocate Memory (NoHype)

  • In cloud computing: charged per unit

    • e.g., VM with 2GB memory

  • Pre-allocate a fixed amount of memory

    • Memory is fixed and guaranteed

    • Guest VM manages its own physical memory (deciding what pages to swap to disk)

  • Processor support for enforcing allocation and bus utilization
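As a rough illustration of pre-allocation (my sketch, not the authors' mechanism), the following program grabs and pins a fixed 2 GB region up front, so the memory is fixed and guaranteed from the start rather than handed out on demand:

#include <stdio.h>
#include <sys/mman.h>

#define GUEST_MEM_BYTES (2UL * 1024 * 1024 * 1024)   /* e.g., a 2 GB VM */

int main(void)
{
    /* MAP_POPULATE backs the whole region with real pages immediately,
     * and mlock keeps them resident: the allocation is made once and
     * never changes while the guest runs. */
    void *mem = mmap(NULL, GUEST_MEM_BYTES, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (mlock(mem, GUEST_MEM_BYTES) != 0) {
        perror("mlock");   /* typically needs a raised RLIMIT_MEMLOCK */
        return 1;
    }

    printf("pre-allocated %lu bytes\n", GUEST_MEM_BYTES);
    /* The guest would manage this region itself, deciding what to swap. */

    munlock(mem, GUEST_MEM_BYTES);
    munmap(mem, GUEST_MEM_BYTES);
    return 0;
}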


Emulate I/O Devices (Today)

  • Guest sees virtual devices

    • Access to a device’s memory range traps to hypervisor

    • Hypervisor handles interrupts

    • Privileged VM emulates devices and performs I/O

[Figure: Guest VM1 and Guest VM2 access virtual devices; the accesses trap to the hypervisor, which hypercalls into a privileged VM that runs the device emulation and the real drivers against the physical hardware]
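To make this path concrete, here is a toy sketch of a guest MMIO access that traps and is forwarded to device-emulation code. All names and addresses are hypothetical; the "hypercall" to the privileged VM is just a function call here, not Xen's actual interface.

#include <stdint.h>
#include <stdio.h>

#define NIC_MMIO_BASE 0xfebc0000u   /* hypothetical device memory range */
#define NIC_MMIO_SIZE 0x1000u

/* Stand-in for the device-emulation code that lives in the privileged VM. */
static uint32_t emulate_nic_register(uint32_t offset)
{
    printf("privileged VM emulates NIC register read at offset 0x%x\n", offset);
    return 0;                        /* emulated register value */
}

/* Stand-in for the hypervisor's handler invoked when a guest access to the
 * device's memory range traps. */
static uint32_t mmio_trap(uint32_t guest_phys_addr)
{
    if (guest_phys_addr >= NIC_MMIO_BASE &&
        guest_phys_addr < NIC_MMIO_BASE + NIC_MMIO_SIZE)
        return emulate_nic_register(guest_phys_addr - NIC_MMIO_BASE);

    printf("unhandled access at 0x%x\n", guest_phys_addr);
    return 0xffffffffu;
}

int main(void)
{
    uint32_t val = mmio_trap(NIC_MMIO_BASE + 0x8);   /* guest reads a register */
    printf("guest sees 0x%x\n", val);
    return 0;
}

Every such access costs a trap plus a round trip through the privileged VM, which is exactly the overhead NoHype removes by dedicating devices to guests.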


Dedicate Devices to a VM (NoHype)

  • In cloud computing, only networking and storage devices are needed

  • Static memory partitioning enforces access

    • The processor enforces accesses to the device; the IOMMU enforces accesses from the device

[Figure: Guest VM1 and Guest VM2 access their dedicated devices on the physical hardware directly; no hypervisor layer remains in the path]


Virtualize the Devices (NoHype)

  • Per-VM physical device doesn’t scale

  • Multiple queues on device

    • Multiple memory ranges mapping to different queues

[Figure: a network card with a classifier, MUX, and MAC/PHY exposes multiple queues over the peripheral bus to the processor, chipset, and memory]
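Today's NICs already expose such per-VM queues as SR-IOV virtual functions (the deck returns to SR-IOV later). A minimal sketch, assuming a Linux host, root privileges, and an SR-IOV-capable NIC named eth0, that asks the driver for four virtual functions, each of which could then be handed to one guest:

#include <stdio.h>

int main(void)
{
    /* Assumed NIC name; requires root and an SR-IOV-capable device/driver. */
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";

    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);                /* no such NIC, or SR-IOV unsupported */
        return 1;
    }
    fprintf(f, "4\n");               /* expose 4 per-VM virtual functions */
    if (fclose(f) != 0) {
        perror("fclose");
        return 1;
    }
    printf("requested 4 virtual functions on eth0\n");
    return 0;
}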


Networking (Today)

  • Ethernet switches connect servers

[Figure: servers connected by hardware Ethernet switches]


Networking in a Virtualized Server (Today)

  • Software Ethernet switches connect VMs

[Figure: Guest VM1 and Guest VM2 connected through a software switch in the privileged VM, running on top of the hypervisor]


Do Networking in the Network (NoHype)

  • Co-located VMs communicate through software

    • Performance penalty for VMs that are not co-located

    • A special case in cloud computing

    • An artifact of going through the hypervisor anyway

  • Instead: utilize hardware switches in the network

    • Modification to support hairpin turnaround


Removing the Hypervisor: Summary

  • Scheduling virtual machines

    • One VM per core

  • Managing memory

    • Pre-allocate memory with processor support

  • Emulating I/O devices

    • Direct access to virtualized devices

  • Networking

    • Utilize hardware Ethernet switches

  • Managing virtual machines

    • Decouple the management from operation


NoHype Double Meaning

  • Means no hypervisor, and also “no hype”: the needed building blocks exist in today's hardware

  • Multi-core processors

  • Extended Page Tables

  • SR-IOV and Directed I/O (VT-d)

  • Virtual Ethernet Port Aggregator (VEPA)

Current Work: Implement it on today’s HW


Xen as a Starting Point

  • Management tools

  • Pre-allocate resources

    • i.e., configure virtualized hardware

  • Launch VM

[Figure: xm in the privileged VM launches Guest VM1 under Xen, one core each; the EPT mapping is pre-filled to partition memory]
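The "pre-fill the EPT mapping to partition memory" step can be pictured with a simple sketch: a static guest-frame-to-host-frame table filled once at VM creation. This is only an illustration with a plain array standing in for the hardware page-table structures, and host_base_pfn is a made-up value a management tool might choose.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define GUEST_PAGES (512UL * 1024)   /* a 2 GB guest in 4 KB pages */

int main(void)
{
    /* Host frame where this guest's contiguous slice of memory starts. */
    uint64_t host_base_pfn = 0x100000;

    uint64_t *gfn_to_hfn = malloc(GUEST_PAGES * sizeof(uint64_t));
    if (!gfn_to_hfn)
        return 1;

    /* One entry per guest page, filled once when the VM is created.
     * Nothing changes afterwards, so no hypervisor involvement is needed
     * while the guest runs. */
    for (uint64_t gfn = 0; gfn < GUEST_PAGES; gfn++)
        gfn_to_hfn[gfn] = host_base_pfn + gfn;

    printf("guest page 0 maps to host frame 0x%llx\n",
           (unsigned long long)gfn_to_hfn[0]);
    free(gfn_to_hfn);
    return 0;
}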


Network Boot

  • gPXE in hvmloader

    • Added support for igbvf (Intel 82576)

  • Allows us to remove disks

    • which are not virtualized yet

[Figure: Guest VM1 runs hvmloader with gPXE and boots over the network from DHCP/gPXE servers; xm in the privileged VM manages it under Xen, one core each]


Allow Legacy Bootup Functionality

  • Known good kernel + initrd (our code)

    • PCI reads return “no device” except for the NIC

    • HPET reads to determine clock freq.

[Figure: the known-good kernel now runs in Guest VM1, still booting from the DHCP/gPXE servers, with xm, Xen, and one core per VM underneath]
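The "PCI reads return 'no device' except for the NIC" behavior can be sketched as below. The bus/device/function numbers, the helper name, and the NIC ID are hypothetical; the all-ones value is the standard answer for a PCI configuration read of an empty slot.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical location of the passed-through NIC. */
#define NIC_BUS 0x01
#define NIC_DEV 0x00
#define NIC_FN  0x00

/* Config-space read as seen by the known-good kernel during device
 * discovery: every slot except the NIC looks empty. */
static uint32_t handle_pci_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn,
                                    uint8_t reg)
{
    if (bus == NIC_BUS && dev == NIC_DEV && fn == NIC_FN)
        return (reg == 0) ? 0x10ca8086u : 0x0u;   /* illustrative NIC ID */
    return 0xffffffffu;   /* all-ones means "no device present" */
}

int main(void)
{
    printf("slot 01:00.0 reg 0 -> 0x%08x\n", handle_pci_cfg_read(1, 0, 0, 0));
    printf("slot 02:00.0 reg 0 -> 0x%08x\n", handle_pci_cfg_read(2, 0, 0, 0));
    return 0;
}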


Use Device Level Virtualization

  • Pass through Virtualized NIC

  • Pass through Local APIC (for timer)

[Figure: the kernel in Guest VM1 now talks to the passed-through NIC and local APIC directly; DHCP/gPXE servers, xm, Xen, and one core per VM as before]


Block All Hypervisor Access

  • Mount iSCSI drive for user disk

  • Before jumping to user code, switch off the hypervisor

    • Any VM exit then causes a Kill VM (sketched after the figure below)

    • The user can load kernel modules and run any applications

[Figure: the customer's code runs in Guest VM1 on its own core; its user disk is mounted from iSCSI servers; any exit to Xen triggers Kill VM]
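The "any VM exit causes a Kill VM" rule amounts to a one-branch policy in the exit handler. A self-contained sketch of that policy follows; vcpu, handle_vmexit, and kill_guest are hypothetical stand-ins, not Xen interfaces.

#include <stdbool.h>
#include <stdio.h>

/* All names here are hypothetical stand-ins, not Xen interfaces. */
struct vcpu {
    int  id;
    bool hypervisor_disengaged;   /* set just before jumping to user code */
};

static void kill_guest(struct vcpu *v)
{
    printf("VM on vCPU %d killed\n", v->id);   /* stand-in for destroying the VM */
}

static void handle_vmexit(struct vcpu *v, unsigned int exit_reason)
{
    if (v->hypervisor_disengaged) {
        /* After the switch-off point no exit is legitimate, so the only
         * response is to kill the offending VM. */
        kill_guest(v);
        return;
    }
    /* During bootup, exits are still serviced normally. */
    printf("servicing exit reason %u during bootup\n", exit_reason);
}

int main(void)
{
    struct vcpu v = { .id = 0, .hypervisor_disengaged = false };
    handle_vmexit(&v, 18);            /* bootup-time exit: serviced */
    v.hypervisor_disengaged = true;   /* switch off the hypervisor */
    handle_vmexit(&v, 18);            /* the same exit now kills the VM */
    return 0;
}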


Timeline

[Figure: timeline of the stages: set up, hvmloader, kernel (device discovery), and customer code, split between VMX root mode and guest VM space]


Next Steps

  • Assess needs for future processors

  • Assess OS modifications

    • to eliminate the need for a golden image (e.g., push configuration instead of discovery)


Conclusions

  • Trend towards hosted and shared infrastructures

  • A significant security issue threatens adoption

  • NoHype solves this by removing the hypervisor

  • Performance improvement is a side benefit


Questions?

Contact info:

[email protected]

http://www.princeton.edu/~ekeller

[email protected]

http://www.princeton.edu/~szefer

