
  1. Microkernels, virtualization, exokernels Tutorial 1 – CSC469

  2. Monolithic kernel vs Microkernel • What was the main idea? • What were the problems? • Monolithic OS kernel: the application runs in user mode and enters the kernel through a system call; VFS, IPC, the file system, the scheduler, virtual memory, device drivers and the dispatcher all live together in kernel mode, directly above the hardware • Microkernel: the application, Unix server, device drivers and file server all run in user mode and talk to each other via IPC; only IPC and virtual memory remain in kernel mode above the hardware
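To make the contrast concrete, here is a minimal C sketch, assuming a hypothetical ipc_call() primitive and message format (neither is from the slides): in the microkernel design, read() is not a kernel service but a message exchange with the user-mode file server, and the kernel's only job is delivering the message.

#include <string.h>
#include <sys/types.h>

typedef struct {
    int    op;            /* request type, e.g. OP_READ */
    int    fd;            /* file handle known to the file server */
    size_t len;           /* bytes requested */
    char   payload[4096]; /* reply data travels back in the message */
} fs_msg_t;

enum { OP_READ = 1 };

/* ipc_call() stands in for the microkernel's only real service here:
   deliver the message to the server's port and block for the reply. */
extern int ipc_call(int server_port, fs_msg_t *msg);

ssize_t my_read(int fs_port, int fd, void *buf, size_t len)
{
    fs_msg_t msg = { .op = OP_READ, .fd = fd,
                     .len = len < sizeof msg.payload ? len : sizeof msg.payload };
    int n = ipc_call(fs_port, &msg);  /* user-mode file server does the work */
    if (n > 0)
        memcpy(buf, msg.payload, (size_t)n);
    return n;
}

The extra message copies and mode switches on this path are exactly where the classic microkernel performance problem comes from.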

  3. Exokernels • Motivation • OSes hide machine info behind abstractions (processes, files, address spaces, IPC) • These abstractions are hardcoded => restrictive • Idea • Separate protection from management • Application-level (untrusted) resource management • VM, IPC implemented at application level • Library OSes implement all the abstractions • Basically, a minimal kernel that securely multiplexes the hardware resources
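A sketch of the protection/management split, with invented syscall names (loosely in the spirit of Aegis/XOK, not their real API): the exokernel only allocates and protects raw pages, while the untrusted library OS decides policy, here demand-zero paging.

/* Hypothetical exokernel primitives: */
extern int exo_alloc_page(void);                              /* -> physical page #, or -1 */
extern int exo_map(unsigned long vaddr, int ppage, int prot); /* install a translation */

/* Page-fault upcall delivered by the exokernel to the application: */
void libos_page_fault(unsigned long fault_vaddr)
{
    int ppage = exo_alloc_page();       /* protection: kernel tracks page ownership */
    if (ppage >= 0)
        exo_map(fault_vaddr & ~0xFFFUL, ppage, 3 /* RW */);
    /* management: a different libOS could instead page to disk, compress,
       or share the page -- the kernel does not dictate the policy */
}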

  4. Virtual Machine Monitors (VMM) • Definitions: • A VMM is the software implementing hardware virtualization, allowing multiple operating systems, termed guests, to run concurrently on a host computer • A VMM is a software layer that runs on a host platform and provides an abstraction of a complete computer system to higher-level software • Also called a hypervisor

  5. VMM types • Type 1: runs directly on the host's hardware (Disco, VMware ESX Server, Xen) • Type 2: runs within a conventional operating system environment (VMware Workstation, VirtualPC, User-Mode Linux, UMLinux)

  6. Disco • Goals • Extend modern OSes to run efficiently on shared-memory multiprocessors without large changes to the OS • The VMM runs multiple copies of the Silicon Graphics IRIX operating system on a Stanford FLASH shared-memory multiprocessor

  7. Problem • Commodity OSes not well-suited for ccNUMA (1997) • Do not scale: lock contention, memory architecture • Do not isolate/contain faults: more processors => more failures • Customized operating systems • Take time to build, lag behind hardware • Cost a lot of money

  8. Solution • Add a virtual machine monitor (VMM) • Commodity OSes run in their own virtual machines (VMs) • Communicate through distributed protocols • VMM uses global policies to manage resources • Moves memory between VMs to avoid paging (see the sketch below) • Schedules virtual processors to balance load
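Illustrative only (all names invented, not Disco's actual code): the kind of global policy the VMM can apply because it sees every VM at once -- moving spare pages from one VM to another that would otherwise start paging.

struct vm {
    long free_pages;    /* pages this VM currently has spare */
    long short_pages;   /* pages this VM is short of (0 if none) */
};

/* Hypothetical helper that remaps machine pages between VMs: */
extern void transfer_pages(struct vm *from, struct vm *to, long n);

void balance_memory(struct vm *vms, int nvms)
{
    for (int i = 0; i < nvms; i++) {
        if (vms[i].short_pages == 0)
            continue;                             /* this VM is fine */
        for (int j = 0; j < nvms && vms[i].short_pages > 0; j++) {
            long spare = vms[j].free_pages;
            if (j == i || spare == 0)
                continue;
            long n = spare < vms[i].short_pages ? spare : vms[i].short_pages;
            transfer_pages(&vms[j], &vms[i], n);  /* avoids paging inside VM i */
            vms[j].free_pages  -= n;
            vms[i].short_pages -= n;
        }
    }
}

No single guest OS could do this, because each guest only sees its own (virtual) memory.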

  9. Advantages • Scalability • Flexibility • Hide NUMA effect • Fault Containment • Compatibility with legacy applications

  10. Disco

  11. VM challenges • Overheads • Instruction execution, exception processing, I/O • Memory • Code and data of hosted operating systems • Replicated buffer caches • Resource management • Lack of information • Idle loop, lock busy-waiting • Page usage • Communication and sharing • Not really a problem anymore because of distributed protocols

  12. Disco interface • VCPUs provide the abstraction of a MIPS R10000 processor • Emulates all instructions, the MMU, and the trap architecture • Enabling/disabling interrupts and accessing privileged registers go through a memory-based interface to the VMM • Physical memory • Contiguous address space, starting at address 0 • Physical-to-machine address translation, with a second-level (software) TLB
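A sketch of this extra translation level in C (structure and helper names are invented, but the mechanism follows the slide): the guest OS manages virtual-to-"physical" mappings, while the VMM keeps a per-VM pmap from physical to machine pages and fills the real TLB, so the guest never sees machine addresses.

#define NPPAGES 65536

struct vm_state {
    long pmap[NPPAGES];   /* physical page -> machine page, -1 if not yet backed */
};

extern long alloc_machine_page(void);
extern void tlb_insert(unsigned long vaddr, long mpage, int flags);

/* Called on a hardware TLB miss, after walking the guest's page table
   has produced the guest mapping (vaddr -> ppage). */
void vmm_tlb_miss(struct vm_state *vm, unsigned long vaddr, long ppage, int flags)
{
    long mpage = vm->pmap[ppage];
    if (mpage < 0) {                    /* first touch: back the page lazily */
        mpage = alloc_machine_page();
        vm->pmap[ppage] = mpage;
    }
    tlb_insert(vaddr, mpage, flags);    /* hardware TLB holds vaddr -> machine */
}

The second-level software TLB mentioned above caches recent vaddr-to-machine translations so most misses avoid this full path.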

  13. Disco interface (cont’d) • I/O devices • Each VM assumes exclusive access to I/O devices • Virtual devices exclusive to VM • Physical devices multiplexed between virtual ones • Special interface to SCSI disks and network devices • Interpose on DMA calls • Disk: • Set of virtualized disks to be mounted by VMs • Copy-on-write disks; for persistent disks, uses NFS • Network: • Virtual subnet across all virtual machines • Uses copy-on-write mappings => reduces copying, allows sharing
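A sketch of the copy-on-write disk idea (helper names invented; assumes whole-block writes for simplicity): reads are served from the shared base image until a VM writes a block, at which point the block is copied into that VM's private store.

#include <string.h>

#define BLKSZ 4096

extern const unsigned char *base_block(unsigned blk);      /* shared read-only image */
extern unsigned char *private_block(int vm, unsigned blk); /* NULL if not yet copied */
extern unsigned char *alloc_private_block(int vm, unsigned blk);

void cow_write(int vm, unsigned blk, const void *data)
{
    unsigned char *p = private_block(vm, blk);
    if (!p) {                               /* first write: break the sharing */
        p = alloc_private_block(vm, blk);
        memcpy(p, base_block(blk), BLKSZ);  /* the copy in copy-on-write */
    }
    memcpy(p, data, BLKSZ);
}

const void *cow_read(int vm, unsigned blk)
{
    const unsigned char *p = private_block(vm, blk);
    return p ? p : base_block(blk);         /* unmodified blocks stay shared */
}

The same trick applied to network transfers is what lets VMs share page frames instead of copying data between them.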

  14. Xen virtualization • Technically, two kinds • Paravirtualization • Guests run a modified OS • High performance on x86 • Hardware-assisted virtualization • CPUs that support virtualization • Unmodified guest OSes
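A sketch of what "guests run a modified OS" means in practice (x86, GCC inline asm; the hypercall() entry point and HC_SET_PT_BASE operation code are invented for illustration, not the real Xen ABI): a privileged instruction in the native kernel is replaced by an explicit request to the hypervisor.

/* Native kernel: writes the page-table base register directly (privileged). */
static inline void native_set_pt_base(unsigned long pa)
{
    __asm__ volatile("mov %0, %%cr3" :: "r"(pa) : "memory");
}

/* Paravirtualized kernel: asks the hypervisor to do it instead.
   hypercall() stands in for the real entry mechanism (e.g. Xen's
   hypercall page); the operation code here is made up. */
extern long hypercall(int op, unsigned long arg);
#define HC_SET_PT_BASE 1

static inline void pv_set_pt_base(unsigned long pa)
{
    hypercall(HC_SET_PT_BASE, pa);   /* hypervisor validates, then installs */
}

The real Xen interface batches such requests and exposes them through hypercalls (e.g. its MMU-update operations); the point is that the guest source is edited once, so no runtime trapping or binary rewriting is needed on the hot path.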

  15. Xen infrastructure

  16. How does it compare to Disco? • Three main differences • Less complete virtualization • Domain0 to initialize/manage VMs, including setting policies • Strong performance isolation • Other • Interface is pretty close to the hardware and enables low-overhead, high-performance virtualization • Need to change more OS code than in Disco • All the cool details • “Xen and the art of virtualization” – SOSP’03

  17. Questions • What’s the difference between a hypervisor and an exokernel?

  18. Questions • What about an exokernel and a microkernel? • Performance? • What about fault isolation?

  19. Questions • What is the difference between a hypervisor (VMM) and a microkernel?

  20. Questions • Can a microkernel be used to implement a hypervisor?

  21. Questions • Can a hypervisor be used to implement a microkernel?

  22. Hypervisors for servers • Type 1 or Type 2? • Hyper-V: “MicroKernelized” Hypervisor Design • VMWare ESX Server: “Monolithic” hypervisor architecture

  23. Hypervisor design • Monolithic hypervisor: VM1 (Admin), VM2 … VMn run above a fat hypervisor that contains the virtualization stack and all device drivers, which in turn runs on the hardware • Microkernel hypervisor: a thin hypervisor runs on the hardware; the virtualization stack lives in the admin VM (VM1), and each VM (VM1 … VMn) carries its own drivers • Both are true Type 1 hypervisors – no host OS • The hardware is the physical machine; the OSes are all virtual