
Tactical Storage System: Separating Abstractions from Resources

This paper discusses the Tactical Storage System (TSS), which allows users to create, reconfigure, and tear down storage abstractions without administrator involvement. It addresses the problems and limitations of the standard storage model and outlines the components and applications of TSS.


Presentation Transcript


  1. Separating Abstractions from Resources in a Tactical Storage System Douglas Thain University of Notre Dame http://www.nd.edu/~ccl

  2. Abstract • Users of distributed systems encounter many practical barriers between their jobs and the data they wish to access. • Problem: Users have access to many resources (disks), but are stuck with the abstractions (cluster NFS) provided by administrators. • Solution: Tactical Storage Systems allow any user to create, reconfigure, and tear down abstractions without bugging the administrator.

  3. The Standard Model (diagram): a transparent distributed filesystem backed by a shared disk.

  4. The Standard Model (diagram): several clusters, each running its own transparent distributed filesystem over a shared disk; individual machines have private disks; data moves between clusters only by ad-hoc tools such as FTP, SCP, RSYNC, and HTTP.

  5. Problems with the Standard Model • Users encounter partitions in the WAN. • Easy to access data inside a cluster, hard outside. • Must use different mechanisms on different links. • Difficult to combine resources together. • Resources go unused. • Disks on each node of a cluster. • Unorganized resources in a department/lab. • Unnecessary cross-talk between users. • User A demands async NFS for performance. • User B demands sync NFS for consistency. • A global file system is not possible!

  6. What if... • Users could easily access any storage? • I could borrow an unused disk for NFS? • An entire cluster could be used as storage? • Multiple clusters could be combined? • I could reconfigure structures without root? • (Or bugging the administrator daily.) • Solution: Tactical Storage System (TSS)

  7. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Dynamic Linking in HEP Simulation • Remote Database Access in HEP Simulation • Expandable Filesystem for Astrophysics Data • Expandable Database for Mol. Dynamics Simulation • Final Thoughts

  8. Tactical Storage Systems (TSS) • A TSS allows any node to serve as a file server or as a file system client. • All components can be deployed without special privileges – but with security. • Users can build up complex structures. • Filesystems, databases, caches, ... • Two Independent Concepts: • Resources – The raw storage to be used. • Abstractions – The organization of storage.

  9. (diagram) Applications reach three different configurations through adapters: a central filesystem, a distributed filesystem abstraction, and a distributed database abstraction, each built from ordinary file servers running on UNIX machines. The cluster administrator controls policy on all storage in the cluster; workstation owners control policy on each of their own machines.

  10. Components of a TSS: 1 – File Servers 2 – Catalogs 3 – Abstractions 4 – Adapters

  11. 1 – File Servers • Unix-Like Interface • open/close/read/write • getfile/putfile to stream whole files • opendir/stat/rename/unlink • Complete Independence • choose friends • limit bandwidth/space • evict users? • Trivial to Deploy • run server + setacl • no privilege required • can be thrown into a grid system • Flexible Access Control (diagram: file servers A and B speak the Chirp protocol; each serves its own file system, and each owner controls his or her own server.)
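The getfile/putfile calls stream whole files in a single round trip. As a rough illustration only (the framing below is invented for this sketch, not the real Chirp wire protocol), a toy server and client might look like:

```python
import socket
import threading

FILES = {}  # path -> bytes, the server's in-memory "disk"

def handle(conn):
    # Invented framing: "putfile <path> <size>\n<data>" stores a file;
    # "getfile <path>\n" answers "<size>\n<data>".
    f = conn.makefile("rb")
    words = f.readline().decode().split()
    if words and words[0] == "putfile":
        path, size = words[1], int(words[2])
        FILES[path] = f.read(size)
        conn.sendall(b"ok\n")
    elif words and words[0] == "getfile":
        data = FILES.get(words[1], b"")
        conn.sendall(str(len(data)).encode() + b"\n" + data)
    conn.close()

def accept_loop(srv):
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def serve():
    # Bind to an ephemeral localhost port; no privilege required.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(5)
    threading.Thread(target=accept_loop, args=(srv,), daemon=True).start()
    return srv.getsockname()[1]

def putfile(port, path, data):
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(("putfile %s %d\n" % (path, len(data))).encode() + data)
    ok = s.makefile("rb").readline() == b"ok\n"
    s.close()
    return ok

def getfile(port, path):
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(("getfile %s\n" % path).encode())
    f = s.makefile("rb")
    data = f.read(int(f.readline()))
    s.close()
    return data

port = serve()
putfile(port, "/data.txt", b"hello world")
print(getfile(port, "/data.txt"))  # -> b'hello world'
```

Because the whole file travels in one message, there is no per-block round trip, which is why whole-file streaming beats open/read loops over a WAN.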

  12. Access Control in File Servers • Unix Security is not Sufficient • No global user database possible/desirable. • Mapping external credentials to Unix gets messy. • Instead, Make External Names First-Class • Perform access control on remote, not local, names. • Types: Globus, Kerberos, Unix, Hostname, Address • Each directory has an ACL, for example:
      globus:/O=NotreDame/CN=DThain  RWLA
      kerberos:dthain@nd.edu         RWL
      hostname:*.cs.nd.edu           RL
      address:192.168.1.*            RWLA
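For illustration, a minimal sketch of matching a remote subject name against such an ACL (the assumption that patterns use shell-style globbing is mine; the rights letters R/W/L/A are from the slide):

```python
import fnmatch

# One ACL per directory: subject pattern -> rights string, as on the slide.
# Subject names carry their credential type as a prefix.
ACL = {
    "globus:/O=NotreDame/CN=DThain": "RWLA",
    "kerberos:dthain@nd.edu": "RWL",
    "hostname:*.cs.nd.edu": "RL",
    "address:192.168.1.*": "RWLA",
}

def check_access(acl, subject, right):
    """Grant `right` if any ACL pattern matches the remote subject name."""
    return any(
        fnmatch.fnmatchcase(subject, pattern) and right in rights
        for pattern, rights in acl.items()
    )

print(check_access(ACL, "hostname:wizard.cs.nd.edu", "R"))  # -> True
print(check_access(ACL, "hostname:wizard.cs.nd.edu", "W"))  # -> False
print(check_access(ACL, "address:192.168.1.40", "A"))       # -> True
```

The point of the design is that the server never maps these names to local Unix accounts; the external name itself is the principal.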

  13. Problem: Shared Namespace (diagram). A file server with the single ACL entry globus:/O=NotreDame/* RWLAX gives every Notre Dame user full rights in one shared directory, so everyone's files (test.c, test.dat, a.out, cms.exe) land in the same namespace.

  14. Solution: The Reservation (V) Right (diagram). The file server's ACL grants O=NotreDame/CN=* V(RWLA): mkdir only! When /O=NotreDame/CN=Monk and /O=NotreDame/CN=Ted each mkdir, the new directory's ACL grants that one user RWLA, giving each a private directory for files such as test.c and a.out.
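A sketch of how the V right behaves (the class and method names here are invented for illustration; the real mechanism lives inside the Chirp server):

```python
import fnmatch

class Directory:
    """Toy directory with a per-directory ACL: pattern -> rights string."""
    def __init__(self, acl):
        self.acl = acl          # e.g. {"O=NotreDame/CN=*": "V(RWLA)"}
        self.subdirs = {}

    def mkdir(self, subject, name):
        # A subject matching a V(...) entry may only mkdir; the new
        # directory's ACL then grants that subject the rights inside V(...).
        for pattern, rights in self.acl.items():
            if fnmatch.fnmatchcase(subject, pattern) and rights.startswith("V("):
                granted = rights[2:-1]   # the rights named inside V(...)
                self.subdirs[name] = Directory({subject: granted})
                return self.subdirs[name]
        raise PermissionError(subject)

root = Directory({"O=NotreDame/CN=*": "V(RWLA)"})
monk = root.mkdir("O=NotreDame/CN=Monk", "monk")
print(monk.acl)  # -> {'O=NotreDame/CN=Monk': 'RWLA'}
```

Each user thus carves a private subtree out of a shared server without the server owner enumerating users in advance.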

  15. 2 – Catalogs (diagram). File servers send periodic UDP updates to catalog servers; clients query a catalog over HTTP and receive listings as XML, TXT, or ClassAds.
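The update path can be sketched as a single UDP datagram on localhost (the JSON payload and its field names are invented stand-ins for the XML/TXT/ClassAd formats; the real system repeats this heartbeat periodically):

```python
import json
import socket

# Catalog side: listen for status datagrams on an ephemeral port.
catalog = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
catalog.bind(("127.0.0.1", 0))
port = catalog.getsockname()[1]

# File server side: one heartbeat describing this server's state.
update = json.dumps({"type": "chirp", "name": "server1.nd.edu", "avail_mb": 4096})
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(update.encode(), ("127.0.0.1", port))

# Catalog side: receive the datagram and index it by server name,
# ready to be served back to clients.
data, _ = catalog.recvfrom(65536)
entry = json.loads(data)
index = {entry["name"]: entry}
print(index["server1.nd.edu"]["avail_mb"])  # -> 4096
```

UDP suits this role: a lost heartbeat is simply replaced by the next one, and stale entries age out of the catalog.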

  16. 3 - Abstractions • An abstraction is an organizational layer built on top of one or more file servers. • End Users choose what abstractions to employ. • Working Examples: • CFS: Central File System • DSFS: Distributed Shared File System • DSDB: Distributed Shared Database • Others Possible? • Distributed Backup System • Striped File System (RAID/Zebra)

  17. CFS: Central File System (diagram). Several applications, each running over its own adapter and CFS layer, store their files on a single shared file server.

  18. DSFS: Distributed Shared File System (diagram). Applications run over adapters and the DSFS layer. DSFS first looks up a file's location, then accesses the data directly on the file server that holds it; files and file pointers are spread across several file servers.
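The two-step access pattern (lookup file location, then access data) can be sketched with in-memory stand-ins for the directory and file servers; every host name and path below is made up:

```python
# Step-1 table: the directory server maps a DSFS name to (host, replica path).
LOCATION = {
    "/dsfs/results.dat": ("fs1.nd.edu", "/spool/0042"),
    "/dsfs/calib.dat":   ("fs2.nd.edu", "/spool/0007"),
}

# Step-2 tables: each file server's local storage.
SERVERS = {
    "fs1.nd.edu": {"/spool/0042": b"result bytes"},
    "fs2.nd.edu": {"/spool/0007": b"calibration bytes"},
}

def dsfs_read(path):
    host, replica = LOCATION[path]   # step 1: lookup file location
    return SERVERS[host][replica]    # step 2: access data on that server

print(dsfs_read("/dsfs/results.dat"))  # -> b'result bytes'
```

Because data access goes straight to the owning server, the directory server is off the data path and does not become a bandwidth bottleneck.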

  19. DSDB: Distributed Shared Database (diagram). Applications use adapters and the DSDB layer to insert and query records. A database server keeps the index; to insert, a client creates a file on some file server and then registers it with the database; queries return locations that the client then accesses directly on the file servers.

  20. 4 – Adapter (diagram: ordinary processes such as tcsh, cat, and vi have their system calls trapped via the ptrace interface and handled by the Parrot adapter, which keeps its own process table and file table and connects to the CFS, DSFS, and DSDB abstractions.) • Like an OS Kernel • Tracks processes, files, etc. • Adds new capabilities. • Enforces owner’s policies. • Delegated Syscalls • Trapped via ptrace interface. • Action taken by Parrot. • Resources charged to Parrot. • User Chooses Abstraction • Appears as a filesystem. • Option: Timeout tolerance. • Option: Consistency semantics. • Option: Servers to use. • Option: Auth mechanisms.
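The adapter's name resolution can be sketched as a mount table in the spirit of parrot -M (the table layout and the resolve function are invented for illustration; Parrot does this inside the trapped system calls):

```python
# Mount table: path prefix -> (service, optional fixed redirection target).
# The last entry mimics "parrot -M /=/chirp/fileserver.nd.edu/rootdir".
MOUNTS = [
    ("/ftp/",   ("ftp",   None)),    # /ftp/<host>/<path>   -> ftp server
    ("/chirp/", ("chirp", None)),    # /chirp/<host>/<path> -> chirp server
    ("/",       ("chirp", "fileserver.nd.edu:/rootdir")),
]

def resolve(path):
    """Map a path the application sees to (service, server, remote path)."""
    for prefix, (service, fixed) in MOUNTS:
        if path.startswith(prefix):
            if fixed:  # whole tree redirected to one server
                server, root = fixed.split(":", 1)
                return service, server, root + path
            host, _, rest = path[len(prefix):].partition("/")
            return service, host, "/" + rest
    raise FileNotFoundError(path)

print(resolve("/ftp/server.name/libs/liba.so"))
# -> ('ftp', 'server.name', '/libs/liba.so')
print(resolve("/etc/passwd"))
# -> ('chirp', 'fileserver.nd.edu', '/rootdir/etc/passwd')
```

Because the mapping happens below the application, unmodified binaries see ordinary filesystem paths while every access is redirected to the chosen servers.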

  21. (diagram, repeated from slide 9: applications reach the central filesystem, distributed filesystem, and distributed database abstractions through adapters, built on file servers whose owners set policy.)

  22. Performance Summary • Nothing comes for free! • System calls: order of magnitude slower. • Memory bandwidth overhead: extra copies. • TSS can drive network/switch to limits. • Compared to NFS Protocol: • TSS slightly better on small operations. (no lookup) • TSS much better in network bandwidth. (TCP) • NFS caches, TSS doesn’t (today), mixed blessing. • On real applications: • Measurable slowdown • Benefit: far more flexible and scalable.

  23. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Dynamic Linking in HEP Simulation • Remote Database Access in HEP Simulation • Expandable Filesystem for Astrophysics Data • Expandable Database for Mol. Dynamics Simulation • Final Thoughts

  24. Remote Dynamic Linking (Credit: Igor Sfiligoi @ Fermi National Lab) • Modular Simulation Needs Many Libraries • Developed on workstations, then ported to the grid. • Selection of library depends on analysis technique. • Solution: Dynamic Link with TSS and FTP: set LD_LIBRARY_PATH=/ftp/server.name/libs and send the adapter along with the job. (diagram: the application's ld.so, through the adapter's FTP driver and an anonymous login, selects several MB of libraries (liba.so, libb.so, libc.so) out of 60 GB on an FTP server across the WAN.)

  25. Related Work • Lots of file services for the Grid: • GridFTP, FreeLoader, NeST, IBP, SRB, RFIO, ... • Adapter interfaces with many of these! • Why have another file server? • Reason 1: Must have precise Unix semantics! • Apps distinguish ENOENT vs EACCES vs EISDIR. • FTP always returns error 550, regardless of the error. • Reason 2: TSS focused on easy deployment. • No privilege required, no config files, no rebuilding, flexible access control, ...

  26. Remote Database Access (Credit: Sander Klous @ NIKHEF) • HEP Simulation Needs Direct DB Access • App linked against Objectivity DB. • Objectivity accesses the filesystem directly. • How to distribute the application securely? • Solution: Remote Root Mount via TSS: parrot –M /=/chirp/fileserver/rootdir, so DB code can read/write/lock files directly. (diagram: sim.exe and libdb.so run over the adapter's CFS layer, authenticating via GSI to a TSS file server across the WAN, where a GSI script and the DB data live on the server's file system.)

  27. Performance on EDG Testbed

  28. Expandable Filesystem for Experimental Data: Project GRAND, http://www.nd.edu/~grand (Credit: John Poirier @ Notre Dame Astrophysics Dept.) (diagram: daily tapes from a 25-year archive, growing at 10 GB/day today and possibly much more, feed a buffer disk read by the analysis code; only the most recent data can be analyzed.)

  29. Expandable Filesystem for Experimental Data: Project GRAND, http://www.nd.edu/~grand (Credit: John Poirier @ Notre Dame Astrophysics Dept.) (diagram: the buffer disk and daily tapes now feed a distributed shared filesystem spread over several file servers; the analysis code reaches it through an adapter and can analyze all data over large time scales.)

  30. Appl: Distributed MD Database • State of Molecular Dynamics Research: • Easy to run lots of simulations! • Difficult to understand the “big picture” • Hard to systematically share results and ask questions. • Desired Questions and Activities: • “What parameters have I explored?” • “How can I share results with friends?” • “Replicate these items five times for safety.” • “Recompute everything that relied on this machine.” • GEMS: Grid Enabled Molecular Sims • Distributed database for MD simulation at Notre Dame. • XML database for indexing, TSS for storage/policy.

  31. GEMS Distributed Database (Credit: Jesus Izaguirre and Aaron Striegel, Notre Dame CSE Dept.) (diagram: a query such as Temp>300K, Mol==CH4 goes to the database server, whose XML records map metadata to replicated data files, e.g. one record points to replicas host6:fileX, host2:fileY, host5:fileZ and another to host1:fileA, host7:fileB, host3:fileC, all located via catalog servers.)
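The query path can be sketched with the example from the slide (the record fields and predicate style are invented; GEMS actually stores XML metadata and resolves hosts through the catalog):

```python
# Toy metadata records: simulation parameters plus the replicas of each
# result file, using the host:file names from the diagram.
RECORDS = [
    {"Temp": 350, "Mol": "CH4",
     "replicas": ["host6:fileX", "host2:fileY", "host5:fileZ"]},
    {"Temp": 280, "Mol": "H2O",
     "replicas": ["host1:fileA", "host7:fileB", "host3:fileC"]},
]

def query(pred):
    """Return the replica lists of every record matching the predicate."""
    return [r["replicas"] for r in RECORDS if pred(r)]

# "Temp>300K and Mol==CH4" from the slide:
hits = query(lambda r: r["Temp"] > 300 and r["Mol"] == "CH4")
print(hits)  # -> [['host6:fileX', 'host2:fileY', 'host5:fileZ']]
```

Keeping several replicas per record is what makes activities like "replicate these items five times for safety" and "recompute everything that relied on this machine" answerable from the index alone.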

  32. Active Recovery in GEMS

  33. GEMS and Tactical Storage • Dynamic System Configuration • Add/remove servers, discovered via catalog • Policy Control in File Servers • Groups can Collaborate within Constraints • Security Implemented within File Servers • Direct Access via Adapters • Unmodified Simulations can use Database • Alternate Web/Viz Interfaces for Users.

  34. Outline • Problems with the Standard Model • Tactical Storage Systems • File Servers, Catalogs, Abstractions, Adapters • Applications: • Remote Dynamic Linking in HEP Simulation • Remote Database Access in HEP Simulation • Expandable Filesystem for Astrophysics Data • Expandable Database for Mol. Dynamics Simulation • Final Thoughts

  35. Tactical Storage Systems • Separate Abstractions from Resources • Components: • Servers, catalogs, abstractions, adapters. • Completely user level. • Performance acceptable for real applications. • Independent but Cooperating Components • Owners of file servers set policy. • Users must work within policies. • Within policies, users are free to build.

  36. Ongoing Work • Malloc() for the Filesystem • Resource owners want to limit users. (quota) • End users need space assurance. (alloc) • Need per-user allocations, not just global limits. • Dynamic System Management • Add a node, delete a node, reconfigure. • Need tools that allow rebalancing as needed. • Distributed Access Control • ACLs refer to group definitions elsewhere. • What’s new? Fault tolerance / policy management. • Processing in Storage (PINS) • Move computation to data. • Needs new programming (scripting) model.
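The "malloc() for the filesystem" idea can be sketched as follows (the class and method names are invented; the point is reserving space per user up front, rather than only enforcing a global limit):

```python
class SpaceManager:
    """Toy per-user space allocator inside one server's global limit."""
    def __init__(self, total):
        self.free = total
        self.allocs = {}  # user -> reserved bytes not yet consumed

    def alloc(self, user, size):
        """Reserve space up front, so the user is assured it later."""
        if size > self.free:
            raise MemoryError("no space left to reserve")
        self.free -= size
        self.allocs[user] = self.allocs.get(user, 0) + size

    def write(self, user, size):
        """Writing consumes previously reserved space, never free space."""
        if self.allocs.get(user, 0) < size:
            raise PermissionError("write exceeds reservation")
        self.allocs[user] -= size

mgr = SpaceManager(total=100)
mgr.alloc("alice", 60)   # alice is now assured 60 units
mgr.write("alice", 50)   # consumes her reservation
print(mgr.free, mgr.allocs["alice"])  # -> 40 10
```

This gives the resource owner a hard limit (quota) and the end user a space assurance (alloc) at the same time, which a global limit alone cannot do.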

  37. Acknowledgments • Science Collaborators: • Jesus Izaguirre • Sander Klous • Peter Kunszt • Erwin Laure • John Poirier • Igor Sfiligoi • Aaron Striegel • CSE Graduate Students: • Paul Brenner • James Fitzgerald • Jeff Hemmes • Paul Madrid • Chris Moretti • Phil Snowberger • Justin Wozniak

  38. For more information... Cooperative Computing Lab http://www.cse.nd.edu/~ccl Cooperative Computing Tools http://www.cctools.org Douglas Thain • dthain@cse.nd.edu • http://www.cse.nd.edu/~dthain

  39. Extra Slides • Different sized disks • Check contents • Black stinks

  40. Performance – System Calls

  41. Performance – Applications (chart; includes a parrot-only configuration)

  42. Performance – I/O Calls

  43. Performance – Bandwidth
