emulab.net: An Emulation Testbed for Networks and Distributed Systems

Jay Lepreau and many others
University of Utah
Intel IXA University Workshop
June 21, 2001

The Main Players
  • Undergrads
    • Chris Alfeld, Chad Barb, Rob Ricci
  • Grads
    • Dave Andersen, Shashi Guruprasad, Abhijeet Joglekar, Indrajeet Kumar, Mac Newbold
  • Staff
    • Mike Hibler, Leigh Stoller
  • Alumni
    • Various

(Red: here at Intel today)

What?
  • A configurable Internet emulator in a room
    • Today: 200 nodes, 500 wires, 2x BFS (switch)
    • virtualizable topology, links, software
  • Bare hardware with lots of tools
  • An instrument for experimental CS research
  • Universally available to any remote experimenter
  • Simple to use
What’s a Node?
  • Physical hardware: PCs, StrongARMs
  • Virtual node:
    • Router (network emulation)
    • Host, middlebox (distributed system)
  • Future physical hardware: IXP1200 +
Why?
  • “We evaluated our system on five nodes.” (job talk from a university with a 300-node cluster)
  • “We evaluated our Web proxy design with 10 clients on 100Mbit ethernet.”
  • “Simulation results indicate ...”
  • “Memory and CPU demands on the individual nodes were not measured, but we believe will be modest.”
  • “The authors ignore interrupt handling overhead in their evaluation, which likely dominates all other costs.”
  • “Resource control remains an open problem.”
Why? (2)
  • “You have to know the right people to get access to the cluster.”
  • “The cluster is hard to use.”
  • “<Experimental network X> runs FreeBSD 2.2.x.”
  • “October’s schedule for <experimental network Y> is…”
  • “<Experimental network Z> is tunneled through the Internet”
Complementary to Other Experimental Environments
  • Simulation
  • Small static testbeds
  • Live networks
  • Maybe someday, a large scale set of distributed small testbeds (“Access”)
[Slide diagram: users reach the testbed over the Internet via a control switch/router; Web/DB/SNMP servers, switch management, power control, and serial lines sit on the control side, while the experimental PCs and Shark nodes (40 + 160) hang off a switched “backplane”.]
Fundamental Leverage:
  • Extremely Configurable
  • Easy to Use
Key Design Aspects
  • Allow experimenter complete control
  • … but provide fast tools for common cases
    • OS’s, disk loading, state mgmt tools, IP, traffic generation, batch, ...
  • Virtualization
    • of all experimenter-visible resources
    • node names, network interface names, network addresses
    • Allows swapin/swapout
Design Aspects (cont’d)
  • Flexible, extensible, powerful allocation algorithm
  • Persistent state maintenance:
    • none on nodes
    • all in database
    • leverage node boot time: only known state!
  • Separate control network
  • Familiar, powerful, extensible configuration language: ns
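
For a flavor of what such a configuration looks like, here is a minimal sketch of an Emulab-style ns experiment file. It assumes the testbed-specific extensions (such as tb_compat.tcl and tb-set-node-os) layered on top of standard ns syntax; treat the exact extension names as illustrative.

    # Minimal sketch of an Emulab-style ns experiment file
    # (testbed-extension command names are illustrative).
    source tb_compat.tcl
    set ns [new Simulator]

    # Two virtual nodes joined by a shaped 10 Mb, 20 ms link.
    set client [$ns node]
    set server [$ns node]
    set link0  [$ns duplex-link $client $server 10Mb 20ms DropTail]

    # Testbed extension: choose the OS image loaded onto each node.
    tb-set-node-os $client FBSD-STD
    tb-set-node-os $server FBSD-STD

    $ns rtproto Static
    $ns run

Because the whole experiment is captured in this one script plus database state, it can be torn down and recreated later, which is what makes swapin/swapout possible.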
Some Unique Characteristics
  • User-configurable control of “physical” characteristics: shaping of link latency/bandwidth/drops/errors (via invisibly interposed “shaping nodes”), router processing power, buffer space, …
  • Node breakdown today:
    • 40 core, 160 edge
More Unique Characteristics
  • Capture of low-level node behavior such as interrupt load and memory bandwidth
  • User-replaceable node OS software
  • User-configurable physical link topology
  • Completely configurable and usable by external researchers, including node power cycling
A Few Research Issues and Challenges
  • Network management of unknown entities
  • Security
  • Scheduling of experiments
  • Calibration, validation, and scaling
  • Artifact detection and control
  • NP-hard virtual --> physical mapping problem
  • Providing a reasonable user interface
  • ….
An “Experiment”
  • emulab’s central operational entity
  • Directly generated by an ns script,
  • … then represented entirely by database state
  • Steps: Web, compile ns script, map, allocate, provide access, assign IP addrs, host names, configure VLANs, load disks, reboot, configure OS’s, run, report
Automatic mapping of desired topologies and characteristics to physical resources
  • Algorithm goals:
    • minimize likelihood of experimental artifacts (bottlenecks)
    • “optimal” packing of multiple simultaneous experiments
    • extensible to heterogeneous hardware, software, and new features
  • Randomized heuristic algorithm: simulated annealing
  • May move to genetic algorithm
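
As a toy illustration of the randomized approach (this is not the actual mapping algorithm, and the topology data below is invented), a simulated-annealing loop over placements of virtual nodes onto physical nodes might look like this:

    # Toy sketch of the randomized mapping idea: place virtual nodes on
    # physical nodes so that as few virtual links as possible cross
    # between switches (a stand-in for "avoid bottlenecks"). The
    # topology and cost function are invented for illustration.
    import math
    import random

    virt_nodes = ["a", "b", "c", "d"]
    virt_links = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
    phys_nodes = {"pc1": "switch0", "pc2": "switch0",
                  "pc3": "switch1", "pc4": "switch1"}

    def cost(assignment):
        # Number of virtual links whose endpoints land on different switches.
        return sum(1 for u, v in virt_links
                   if phys_nodes[assignment[u]] != phys_nodes[assignment[v]])

    def anneal(steps=5000, temp=2.0, cooling=0.999):
        pcs = list(phys_nodes)
        random.shuffle(pcs)
        assignment = dict(zip(virt_nodes, pcs))   # one virtual node per PC
        cur = cost(assignment)
        best, best_cost = dict(assignment), cur
        for _ in range(steps):
            u, v = random.sample(virt_nodes, 2)   # propose swapping two placements
            assignment[u], assignment[v] = assignment[v], assignment[u]
            new = cost(assignment)
            # Always accept improvements; accept worse moves with a
            # probability that shrinks as the temperature cools.
            if new <= cur or random.random() < math.exp((cur - new) / temp):
                cur = new
                if new < best_cost:
                    best, best_cost = dict(assignment), new
            else:
                assignment[u], assignment[v] = assignment[v], assignment[u]  # undo
            temp *= cooling
        return best, best_cost

    print(anneal())

The real mapper must also model per-node features, shared switch bandwidth, and packing of simultaneous experiments; the underlying virtual-to-physical mapping problem is NP-hard, as noted earlier.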
Mapping Results
  • < 1 second for first solution, 40 nodes
  • “Good” solution within 5 seconds
  • Apparently insensitive to number of node “features”
Disk Loading
  • 13 GB generic IDE 7200 rpm drives
  • Was 20 minutes for 6 GB image
  • Now 88 seconds
How?
  • Do obvious compression
  • … and a little less obvious: zero the filesystem’s free space
  • Disk writes become the bottleneck
  • Hack the disk driver
  • Carefully overlap I/O and decompression
  • ==> 6 minutes
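
A minimal sketch of the overlap idea, assuming a zlib-compressed image and placeholder file names (the real loader works on the raw disk and is considerably more careful):

    # Producer/consumer sketch: one thread decompresses while the other
    # writes already-decompressed chunks, so disk I/O and decompression
    # overlap instead of alternating. Paths are placeholders.
    import queue
    import threading
    import zlib

    CHUNK = 1 << 20   # hand roughly 1 MB to the writer at a time

    def load_image(compressed_path, target_path):
        chunks = queue.Queue(maxsize=8)    # bounded buffer between the stages

        def decompressor():
            d = zlib.decompressobj()
            with open(compressed_path, "rb") as src:
                while True:
                    buf = src.read(CHUNK)
                    if not buf:
                        break
                    chunks.put(d.decompress(buf))
                chunks.put(d.flush())
            chunks.put(None)               # end-of-stream marker for the writer

        def writer():
            with open(target_path, "wb") as dst:
                while True:
                    data = chunks.get()
                    if data is None:
                        break
                    dst.write(data)

        producer = threading.Thread(target=decompressor)
        producer.start()
        writer()
        producer.join()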
Last Step
  • Domain-specific compression
  • Type the filesystem blocks:
    • allocated
    • free
  • Never write the free ones
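
In spirit, the block-typing step looks like the sketch below; the block size, allocation bitmap, and data are invented inputs, and the real tool presumably reads allocation information from the filesystem’s own metadata.

    # Toy sketch of domain-specific compression: save (and later write)
    # only blocks the filesystem marks as allocated; free blocks are
    # skipped entirely. Inputs are invented for illustration.
    import zlib

    BLOCK = 4096

    def save_allocated(image_bytes, allocated):
        """Return (block_number, compressed_block) pairs for allocated blocks."""
        saved = []
        for blkno, in_use in enumerate(allocated):
            if not in_use:
                continue                   # free block: never read, never written
            data = image_bytes[blkno * BLOCK:(blkno + 1) * BLOCK]
            saved.append((blkno, zlib.compress(data)))
        return saved

    def restore_allocated(saved, disk):
        """Write only the saved blocks back at their original offsets."""
        for blkno, comp in saved:
            disk.seek(blkno * BLOCK)
            disk.write(zlib.decompress(comp))

    # Fake 4-block image with blocks 0 and 2 allocated.
    image = bytes(4 * BLOCK)
    print(len(save_allocated(image, [1, 0, 1, 0])))   # -> 2

Because only allocated blocks are ever read or written, the work scales with the data actually in the filesystem rather than with the size of the disk, which is what brings the load time down to the 88 seconds quoted earlier.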
Ongoing and Future Work
  • Multicast disk images
  • IXP1200 nodes, tools, code fragments
  • Routers, high-capacity shapers
  • Event system
  • Scheduling system
  • Topology generation tools and GUI
  • Simulation/emulation transparency
  • Linked testbeds
  • Wireless nodes, mobile nodes
  • Logging and visualization tools
  • Microsoft OSs, high-speed links, more nodes!
Final Remarks
  • 18 projects have used it (14 external)
    • Plus several class projects
  • Two OSDI’00 and three SOSP’01 papers
    • 20% SOSP general acceptance rate
    • 60% SOSP acceptance rate for emulab users!
  • More emulabs under construction:
    • Yes: Univ. of Kentucky, Stuttgart
    • Maybe: WUSTL, Duke
  • Sponsors (red: major ones, current or expected)
    • NSF, DARPA, University of Utah
    • Cisco, Intel, Compaq, Microsoft, Novell, Nortel