
The EverLab Project



Presentation Transcript


  1. The EverLab Project. Danny Bickson, Elliot Jaffe, The Hebrew University of Jerusalem, Israel. 2nd European PlanetLab Meeting, EPFL, Oct. 2005

  2. The Evergrow EU Project • About 28 participants from academia and industry. • Project goal: predict the Internet 25 years from now. • 4-year project.

  3. [Map of the eight EverLab sites] A = Aston University (UK), B = Orsay (FR), C = Louvain (BE), D = Magdeburg (DE), E = Rome (IT), F = SICS (SE), G = Colbud (HU), H = TAU (IL)

  4. Problem Statement • 8 European sites of 14 IBM eServer blades each: Tel Aviv, Orsay, Stockholm, Rome, Budapest, Magdeburg, Louvain-la-Neuve and Aston. • Would like to run distributed applications, measurements and simulations. • Problems of user and node management: no system-wide login, 8 different sys admins, no common OS, 8 different firewalls, no system-wide monitor (see the sketch below).
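
A hedged sketch of the missing system-wide monitor: a small poller that checks whether sshd answers on each blade across the eight sites. The hostnames are hypothetical; in practice the node list would come from the PLC proposed on the next slide.

```python
# Minimal cross-site liveness check: does sshd answer on each blade?
# Hostnames are illustrative assumptions, not the project's real names.
import socket

NODES = [
    "blade01.tau.everlab.example",
    "blade01.sics.everlab.example",
]

def is_up(host, port=22, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in NODES:
    print(f"{host}: {'up' if is_up(host) else 'DOWN'}")
```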

  5. Proposed Solution • Install a private PlanetLab network across all clusters. • Maintain one PLC for all user and node management (sketched below). • Move existing software from PlanetLab to our clusters. Examples: P-Grid, Mozart/OZ, Julia, etc. • Educate users to work in the PlanetLab environment.
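
A minimal sketch of what one PLC for all user and node management looks like from a script, assuming the private PLC exposes the standard PLCAPI over XML-RPC as public PlanetLab does. The PLC URL, credentials, site id and hostname below are illustrative, not from the talk.

```python
# Talk to the single private PLC over its XML-RPC API (PLCAPI).
import xmlrpc.client

plc = xmlrpc.client.ServerProxy(
    "https://plc.everlab.example/PLCAPI/",  # hypothetical private PLC URL
    allow_none=True,
)

# Password-based authentication structure used by PLCAPI.
auth = {
    "AuthMethod": "password",
    "Username": "admin@everlab.example",  # hypothetical account
    "AuthString": "secret",
}

# Register a new blade at one of the eight sites; site_id 8 is made up.
node_id = plc.AddNode(auth, 8, {
    "hostname": "blade01.tau.everlab.example",
    "boot_state": "inst",
})

# One system-wide view of every node, instead of asking 8 sys admins.
for node in plc.GetNodes(auth, {}, ["hostname", "boot_state"]):
    print(node["hostname"], node["boot_state"])
```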

  6. Project Challenges • Hardware/software issues • Security issues • Political issues

  7. Hardware Challenges • Different hardware drivers / kernel modules needed. • IBM eServer blades have a common USB bus: one CD-ROM and floppy drive shared by 14 blades, and the keyboard/mouse are on the USB bus. • PlanetLab does not support USB/SATA, and booting from a USB disk-on-key is not supported. • Currently a PlanetLab node boots from a fixed CD-ROM drive and reads its config file from the floppy drive (see the sketch after this slide).
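
To make the floppy dependency concrete, here is a hedged sketch of what overriding it could look like: instead of assuming the node configuration sits on the floppy, probe a list of candidate locations. The plnode.txt filename matches PlanetLab's node config file; the candidate paths are assumptions for illustration.

```python
# Look for the PlanetLab node config in several places, not just the floppy.
import os

CANDIDATES = [
    "/mnt/floppy/plnode.txt",  # the stock location
    "/mnt/usb/plnode.txt",     # USB disk-on-key (assumed mount point)
    "/mnt/boot/plnode.txt",    # dedicated partition on the blade's disk (assumed)
]

def find_node_config():
    """Return the first candidate path that exists, or raise."""
    for path in CANDIDATES:
        if os.path.isfile(path):
            return path
    raise FileNotFoundError("no plnode.txt found on any candidate device")

print("using node config:", find_node_config())
```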

  8. Current Project Status • PLC installed at HUJI, Jerusalem. • Managed to install on one node at HUJI and one blade at TAU (11.10.05). • Major changes relative to the PlanetLab software: modified the BootCD kernel to support the hardware, modified the install scripts to load additional hardware modules (sketched below), and changed the installation to override the floppy and CD-ROM drives.
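
An illustrative sketch of the "load additional hardware modules" change: force-load the USB and SATA drivers that the stock BootCD does not bring up on the blades. The module names are examples, not the project's actual list.

```python
# Load extra kernel modules during install; names are blade-specific guesses.
import subprocess

EXTRA_MODULES = ["usb-storage", "ehci-hcd", "sata_svw"]  # assumed examples

def load_extra_modules():
    for module in EXTRA_MODULES:
        # modprobe resolves dependencies; a failure for one module should
        # not abort the whole install, so only warn and continue.
        result = subprocess.run(["modprobe", module])
        if result.returncode != 0:
            print(f"warning: could not load {module}")

if __name__ == "__main__":
    load_extra_modules()
```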

  9. Remaining Issues • How to securely reboot nodes that have no read-only media: options include CD-ROM, USB, DHCP/BootP, or a BootCD image on an unmounted partition (one possible check is sketched below). • Blades have significant disk space. Can we share it across the system? Candidates: NFSv4, AFS, GFS, GPFS.
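
One possible direction for the secure-reboot question, as a sketch only: if the BootCD image must live on a writable, normally unmounted partition, verify it against a known-good digest before handing control to it, approximating the integrity guarantee a read-only CD gives. The device path, image size and digest source are all assumptions.

```python
# Verify an on-disk boot image against a pinned digest before booting.
import hashlib

BOOT_IMAGE = "/dev/sda3"       # hypothetical partition holding the BootCD image
IMAGE_SIZE = 64 * 1024 * 1024  # assumed image size in bytes
TRUSTED_DIGEST = "..."         # would be pinned in firmware or fetched over TLS

def image_digest(device, size, chunk=1024 * 1024):
    """SHA-256 over the first `size` bytes of the block device."""
    h = hashlib.sha256()
    with open(device, "rb") as f:
        remaining = size
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                break
            h.update(data)
            remaining -= len(data)
    return h.hexdigest()

if image_digest(BOOT_IMAGE, IMAGE_SIZE) != TRUSTED_DIGEST:
    raise SystemExit("boot image failed integrity check; refusing to boot")
```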

  10. Security Issues • Blades are in a highly secure environment (closed computer rooms). • Local system admins are very nervous about exposed machines on their networks (no firewall, etc.). • EverLab nodes share a common switching fabric with other blades, so they will be separated onto a different VLAN.

  11. Political Issues • Each computer room is managed by a different administration. • We must convince each site that it will benefit from the project. • The hardware has required multiple service calls. • Users are asked to change their software and working habits.

  12. Future Work • Install several dozen nodes in the coming two months. • Merge our changes back into PLC. • Start distributed experiments. • Create a diskless PlanetLab node.
