
NoHype: Virtualized Cloud Infrastructure without the Virtualization

Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee

Princeton University

IBM Cloud Computing Student Workshop

(ISCA 2010 + Ongoing work)

Virtualized Cloud Infrastructure
  • Run virtual machines on a hosted infrastructure
  • Benefits…
    • Economies of scale
    • Dynamically scale (pay for what you use)
Without the Virtualization
  • Virtualization used to share servers
    • Software layer running under each virtual machine
  • Malicious software can run on the same server
    • Attack hypervisor
    • Access/Obstruct other VMs

[Diagram: Guest VM1 and Guest VM2 (Apps, OS) run on the Hypervisor, which runs on the servers' Physical Hardware.]

Are these vulnerabilities imagined?
  • No headlines… but that doesn’t mean it’s not real
    • Not enticing enough to hackers yet? (small market size, lack of confidential data)
  • The virtualization layer is huge and growing
    • ~100 thousand lines of code in the hypervisor
    • ~1 million lines in the privileged virtual machine
  • Derived from existing operating systems
    • Which have security holes
NoHype
  • NoHype removes the hypervisor
    • There’s nothing to attack
    • Complete systems solution
    • Still provides everything a virtualized cloud infrastructure needs

[Diagram: Guest VM1 and Guest VM2 (Apps, OS) run directly on the Physical Hardware; there is no hypervisor underneath.]

Virtualization in the Cloud
  • Why does a cloud infrastructure use virtualization?
    • To support dynamically starting/stopping VMs
    • To allow servers to be shared (multi-tenancy)
  • Do not need full power of modern hypervisors
    • Emulating diverse (potentially older) hardware
    • Maximizing server consolidation
Roles of the Hypervisor
  • Isolating/Emulating resources
    • CPU: Scheduling virtual machines
    • Memory: Managing memory
    • I/O: Emulating I/O devices
  • Networking
  • Managing virtual machines

[Slide annotations: these roles are addressed by pre-allocation, pushing to hardware, pushing to the side (the network), or removal. NoHype has a double meaning… “no hype”.]

Scheduling Virtual Machines (Today)
  • Scheduler called each time the hypervisor runs (periodically, I/O events, etc.)
    • Chooses what to run next on given core
    • Balances load across cores

[Timeline diagram: over time, timer and I/O events invoke the hypervisor, which switches the core between VMs.]

Dedicate a Core to a Single VM (NoHype)
  • Ride the multi-core trend
    • 1 core on 128-core device is ~0.8% of the processor
  • Cloud computing is pay-per-use
    • During high demand, spawn more VMs
    • During low demand, kill some VMs
    • Customers maximize each VM’s work, which minimizes the opportunity for over-subscription
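To make the one-VM-per-core policy concrete, here is a minimal host-side sketch, assuming a Linux host where each vCPU is backed by an ordinary thread: a (hypothetical) thread ID is pinned to a single dedicated core with sched_setaffinity. NoHype's own enforcement happens below the OS, so this only illustrates the policy, not the paper's mechanism.

```c
/* Minimal sketch: pin a VM's vCPU thread to one dedicated physical core.
 * Assumes a Linux host where each vCPU is a host thread whose TID we
 * already know (the TID below is a hypothetical example); NoHype itself
 * enforces the core assignment statically in hardware, not via the OS. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

static int pin_vcpu_to_core(pid_t vcpu_tid, int core)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);              /* allow exactly one core */
    if (sched_setaffinity(vcpu_tid, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    /* Example: VM 1's single vCPU thread (TID 4242, hypothetical) gets core 3. */
    return pin_vcpu_to_core(4242, 3) ? 1 : 0;
}
```

With every vCPU bound to its own core, there is nothing left for a run-time scheduler to decide.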
Managing Memory (Today)
  • Goal: system-wide optimal usage
    • i.e., maximize server consolidation
  • Hypervisor controls allocation of physical memory
Pre-allocate Memory (NoHype)
  • In cloud computing: charged per unit
    • e.g., VM with 2GB memory
  • Pre-allocate a fixed amount of memory
    • Memory is fixed and guaranteed
    • Guest VM manages its own physical memory (deciding what pages to swap to disk)
  • Processor support for enforcing allocation and bus utilization
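As an illustration of "memory is fixed and guaranteed", the sketch below reserves a VM-sized region up front on a Linux host with mmap(MAP_POPULATE) and mlock, so every page is allocated and pinned before the guest starts. The 2 GB figure mirrors the slide's example; the processor-level enforcement of allocation and bus utilization that NoHype relies on is not something user code can show.

```c
/* Minimal sketch: reserve a VM's fixed memory allotment up front.
 * mmap(MAP_POPULATE) faults every page in immediately and mlock() keeps
 * the region resident, mimicking "memory is fixed and guaranteed".
 * Needs a sufficient RLIMIT_MEMLOCK (or root); error handling is minimal. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t vm_mem_bytes = 2UL * 1024 * 1024 * 1024;   /* e.g., a 2 GB VM */

    void *region = mmap(NULL, vm_mem_bytes,
                        PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
                        -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    if (mlock(region, vm_mem_bytes) != 0) {           /* pin: host never swaps it */
        perror("mlock");
        return 1;
    }
    /* The guest would now manage this fixed region itself,
     * deciding on its own which pages to swap to its disk. */
    printf("Reserved %zu bytes at %p\n", vm_mem_bytes, region);
    munmap(region, vm_mem_bytes);
    return 0;
}
```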
Emulate I/O Devices (Today)
  • Guest sees virtual devices
    • Access to a device’s memory range traps to hypervisor
    • Hypervisor handles interrupts
    • Privileged VM emulates devices and performs I/O

[Diagram: device accesses from Guest VM1 and Guest VM2 (Apps, OS) trap into the Hypervisor; hypercalls reach a privileged VM whose Device Emulation code and Real Drivers perform the I/O on the Physical Hardware.]

Dedicate Devices to a VM (NoHype)
  • In cloud computing, only networking and storage devices are needed
  • Static memory partitioning enforces access
    • The processor checks accesses to the device; the IOMMU checks accesses from the device (see the sketch after the diagram)

[Diagram: Guest VM1 and Guest VM2 (Apps, OS) access their dedicated devices on the Physical Hardware directly.]
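On today's Linux hosts, handing a device (or an SR-IOV virtual function) to a guest with IOMMU protection is typically done by rebinding it to the vfio-pci driver through sysfs. The sketch below shows that flow as an illustration of direct, IOMMU-protected assignment; the PCI address is hypothetical, and this is commodity tooling rather than NoHype's own mechanism.

```c
/* Minimal sketch: detach a PCI device from its host driver and hand it to
 * vfio-pci so a guest can access it directly under IOMMU protection.
 * The PCI address 0000:03:10.0 is hypothetical; run as root. Uses standard
 * Linux sysfs files (driver/unbind, driver_override, drivers_probe). */
#include <stdio.h>

static int write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int ok = (fputs(value, f) >= 0);
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    const char *dev = "0000:03:10.0";          /* hypothetical NIC / VF address */
    char path[256];

    /* 1. Unbind the device from whatever host driver currently owns it. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver/unbind", dev);
    write_file(path, dev);                     /* may fail if already unbound */

    /* 2. Force the next probe to bind it to vfio-pci. */
    snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/driver_override", dev);
    if (write_file(path, "vfio-pci") != 0) return 1;

    /* 3. Re-probe: the device is now owned by vfio-pci and its DMA is
     *    confined by the IOMMU when mapped into a guest. */
    if (write_file("/sys/bus/pci/drivers_probe", dev) != 0) return 1;

    printf("%s is ready for direct assignment\n", dev);
    return 0;
}
```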

Virtualize the Devices (NoHype)
  • Per-VM physical device doesn’t scale
  • Multiple queues on device
    • Multiple memory ranges mapping to different queues (see the SR-IOV sketch below)

[Diagram: a Network Card containing classify/MUX logic and the MAC/PHY, attached over the peripheral bus to the processor, chipset, and memory; different memory ranges map to different queues.]
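The multiplexing in the diagram is what SR-IOV provides on commodity NICs: the card exposes several virtual functions, each with its own queues and memory-mapped registers. A minimal sketch of enabling that through the standard Linux sriov_numvfs attribute follows; the interface name and VF count are assumptions.

```c
/* Minimal sketch: ask an SR-IOV capable NIC to expose virtual functions,
 * one per guest, via the standard Linux sysfs attribute sriov_numvfs.
 * The interface name eth0 and the VF count are hypothetical; requires
 * root and an SR-IOV capable device/driver. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    int num_vfs = 4;                         /* e.g., four guests on this NIC */

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "%d\n", num_vfs);             /* the driver creates the VFs */
    fclose(f);

    /* Each VF now appears as its own PCI function with its own queues and
     * memory ranges, and can be dedicated to a different guest. */
    return 0;
}
```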

Networking (Today)
  • Ethernet switches connect servers

[Diagram: servers connected by an Ethernet switch.]

Networking in a Virtualized Server (Today)
  • Software Ethernet switches connect VMs

[Diagram: Guest VM1 and Guest VM2 (Apps, OS) on a virtualized server are connected by a software switch running in the privileged VM, on top of the Hypervisor.]

Do Networking in the Network (NoHype)
  • Today, co-located VMs communicate through software
    • Performance penalty for VMs that are not co-located
    • Co-location is only a special case in cloud computing
    • An artifact of going through the hypervisor anyway
  • Instead: utilize hardware switches in the network
    • Requires a modification to support hairpin turnaround (see the sketch below)
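The hardware change the last bullet asks for is hairpin (reflective-relay) forwarding: the adjacent switch must be willing to send a frame back out the port it arrived on, so two VMs on the same server can reach each other without a software switch. Linux exposes the same knob per bridge port, which the sketch below uses purely as a software illustration; br0 and vnet0 are hypothetical names, and the paper targets hardware switches, not a Linux bridge.

```c
/* Minimal sketch: turn on hairpin forwarding for one switch port, i.e. the
 * knob that lets frames be reflected back out the port they arrived on
 * (the behaviour the slide needs from the adjacent hardware switch).
 * Shown on a Linux software bridge only as an illustration; br0/vnet0 are
 * hypothetical and this is not the paper's hardware mechanism. Run as root. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/br0/brif/vnet0/hairpin_mode";

    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fputs("1\n", f);          /* 1 = reflect frames back out this port */
    fclose(f);
    return 0;
}
```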
Removing the Hypervisor: Summary
  • Scheduling virtual machines
    • One VM per core
  • Managing memory
    • Pre-allocate memory with processor support
  • Emulating I/O devices
    • Direct access to virtualized devices
  • Networking
    • Utilize hardware Ethernet switches
  • Managing virtual machines
    • Decouple the management from operation
NoHype Double Meaning
  • Means no hypervisor, and also “no hype”: the enabling hardware already exists
  • Multi-core processors
  • Extended Page Tables
  • SR-IOV and Directed I/O (VT-d)
  • Virtual Ethernet Port Aggregator (VEPA)

Current Work: Implement it on today’s HW

Xen as a Starting Point
  • Management tools
  • Pre-allocate resources
    • i.e., configure virtualized hardware
  • Launch VM

[Diagram: the privileged VM's xm tool and Xen pre-fill the EPT mapping to partition memory; Guest VM1 and the privileged VM each run on their own core.]

Network Boot
  • gPXE in hvmloader
    • Added support for igbvf (Intel 82576)
  • Allows us to remove disks
    • Which are not virtualized yet

[Diagram: Guest VM1 runs hvmloader with gPXE and boots from DHCP/gPXE servers on the network; the privileged VM's xm and Xen run alongside, each VM on its own core.]

Allow Legacy Bootup Functionality
  • Known good kernel + initrd (our code)
    • PCI reads return “no device” except for the NIC (see the sketch below)
    • HPET reads to determine the clock frequency

[Diagram: Guest VM1 now runs the known-good kernel, with DHCP/gPXE servers, the privileged VM's xm, and Xen as before, each VM on its own core.]
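A sketch of the PCI filtering described above: intercepted configuration-space reads return the all-ones "no device present" value for every slot except the assigned NIC, which is passed through. The hook, its signature, and the NIC's location are hypothetical stand-ins, not Xen's actual interfaces.

```c
/* Minimal sketch of the PCI filtering the slide describes: config reads
 * return all-ones ("no device here") for every bus/device/function except
 * the one NIC assigned to this guest. All names are hypothetical. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NIC_BUS   0x00
#define NIC_DEV   0x03            /* hypothetical location of the assigned NIC */
#define NIC_FUNC  0x00

/* Hypothetical hook called for each emulated PCI config-space read. */
static uint32_t filtered_pci_config_read(uint8_t bus, uint8_t dev, uint8_t func,
                                         uint8_t offset,
                                         uint32_t (*real_read)(uint8_t, uint8_t,
                                                               uint8_t, uint8_t))
{
    bool is_nic = (bus == NIC_BUS && dev == NIC_DEV && func == NIC_FUNC);

    if (!is_nic)
        return 0xFFFFFFFFu;       /* PCI convention: no device present */

    return real_read(bus, dev, func, offset);    /* pass the NIC through */
}

/* Dummy backend so the sketch runs standalone. */
static uint32_t fake_real_read(uint8_t b, uint8_t d, uint8_t f, uint8_t off)
{
    (void)b; (void)d; (void)f; (void)off;
    return 0x12345678u;           /* pretend register contents */
}

int main(void)
{
    /* 00:02.0 is hidden, 00:03.0 is the NIC. */
    printf("00:02.0 -> %08" PRIx32 "\n", filtered_pci_config_read(0, 2, 0, 0, fake_real_read));
    printf("00:03.0 -> %08" PRIx32 "\n", filtered_pci_config_read(0, 3, 0, 0, fake_real_read));
    return 0;
}
```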

Use Device Level Virtualization
  • Pass through Virtualized NIC
  • Pass through Local APIC (for timer)

[Diagram: same setup as before; Guest VM1's kernel now uses the passed-through NIC and local APIC directly, with the privileged VM's xm and Xen alongside, each VM on its own core.]

Block All Hypervisor Access
  • Mount an iSCSI drive for the user disk
  • Before jumping to user code, switch off the hypervisor
    • Any VM exit causes the VM to be killed (see the sketch below)
    • The user can load kernel modules and any applications

[Diagram: as before, plus iSCSI servers for the user disk; once user code runs, any VM exit from Guest VM1 triggers Xen's “Kill VM” path.]
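The hand-off policy in the bullets fits in a few lines: once the guest has been switched over to customer code, the VM-exit path stops emulating anything and simply destroys the VM. The types and helpers below are hypothetical stand-ins, not Xen's real interfaces.

```c
/* Minimal sketch of the policy on the slide: after the guest is handed off
 * to customer code, any VM exit kills the VM instead of being emulated.
 * All types and helpers are hypothetical stand-ins. */
#include <stdbool.h>
#include <stdio.h>

struct vm {
    bool handed_off_to_customer;   /* set just before jumping to user code */
};

/* Hypothetical stand-ins for the hypervisor's real routines. */
static void destroy_vm(struct vm *vm)                 { (void)vm; puts("VM killed"); }
static void legacy_exit_handler(struct vm *vm, int r) { (void)vm; printf("serviced exit %d\n", r); }

static void vmexit_handler(struct vm *vm, int exit_reason)
{
    if (vm->handed_off_to_customer) {
        /* NoHype mode: the guest should never trap back here,
         * so any exit is treated as an attack or a bug. */
        destroy_vm(vm);
        return;
    }
    /* During boot/setup, exits are still serviced normally. */
    legacy_exit_handler(vm, exit_reason);
}

int main(void)
{
    struct vm guest = { .handed_off_to_customer = false };
    vmexit_handler(&guest, 12);        /* setup phase: still handled */
    guest.handed_off_to_customer = true;
    vmexit_handler(&guest, 12);        /* after hand-off: kill the VM */
    return 0;
}
```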

Timeline

[Timeline diagram: set up, then hvmloader, then the kernel (device discovery), then customer code, shown over time and split between guest VM space and VMX root mode.]

Next Steps
  • Assess needs for future processors
  • Assess OS modifications
    • to eliminate the need for a golden image (e.g., push configuration instead of discovery)
Conclusions
  • Trend towards hosted and shared infrastructures
  • Significant security issue threatens adoption
  • NoHype solves this by removing the hypervisor
  • Performance improvement is a side benefit
Questions?

Contact info:

[email protected]

http://www.princeton.edu/~ekeller

[email protected]

http://www.princeton.edu/~szefer
