
Self-Certifying File Systems (SFS)


Presentation Transcript


  1. Self-Certifying File Systems (SFS) Presented by Vishal Kher January 29, 2004

  2. References • Self-certifying file system (SFS) • David Mazières and M. Frans Kaashoek. Escaping the evils of centralized control with self-certifying pathnames. In Proceedings of the 8th ACM SIGOPS European Workshop, 1998 • D. Mazières, M. Kaminsky, M. F. Kaashoek, and E. Witchel. Separating key management from file system security. In Proceedings of the 17th ACM Symposium on Operating Systems Principles (SOSP), December 1999 • SFS-based read-only file system • K. Fu, M. F. Kaashoek, and D. Mazières. Fast and secure distributed read-only file system. In Proceedings of the 4th Symposium on Operating Systems Design and Implementation (OSDI), October 2000

  3. Introduction (1) • File systems like NFS and AFS do span the Internet • But they do not provide seamless file access • Why is global file sharing (gfs) difficult? • Files are shared across administrative realms • Can you seamlessly access files on the CS file server from a machine outside the CS administrative realm? • The scale of the Internet makes management a nightmare • Every realm might follow its own policy

  4. Introduction (2) • Is there anything else that hinders gfs? • No one controls the name space • Users cannot trust a single centralized server • Further, who will manage the keys? • A centralized authority cannot manage all the keys • Scale of the Internet • No single key management mechanism satisfies everyone • Expensive • Centralized control

  5. SFS Goals • Provide a global file system image • The FS looks the same from every client machine • No notion of administrative realm • Servers grant access to users, not clients • Separate key management from file system security • Various key management policies can co-exist • Key management does not hinder setting up new servers • Security • Authentication • Confidentiality and integrity of client-server communication • Versatility and modularity

  6. SFS Overview (1) • Key idea: self-certifying pathnames • Every SFS file system is accessible as • /sfs/Location:HostID • Location is the location of the file server, e.g., a DNS hostname or IP address • HostID = H(“HostInfo”, Location, PublicKey), a cryptographic hash binding the location to the server's public key • Every pathname thus effectively embeds the server's public key (sketched in code below) • Self-certifying pathname • /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35/foo • Access file foo located on sfs.umn.cs.edu
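
  To make the structure of a self-certifying pathname concrete, here is a minimal Python sketch of deriving a HostID as a hash over the server's location and public key. The hash choice (SHA-1), the field separators, and the base32 output encoding are assumptions for illustration; the real SFS encoding differs in detail.

    import base64
    import hashlib

    def host_id(location: str, public_key: bytes) -> str:
        # Hash a "HostInfo" record binding the server's location to its public key.
        digest = hashlib.sha1(b"HostInfo," + location.encode() + b"," + public_key).digest()
        # SFS prints the digest in a compact ASCII alphabet; base32 stands in here.
        return base64.b32encode(digest).decode().lower().rstrip("=")

    # The client can then form the self-certifying pathname itself:
    # "/sfs/" + location + ":" + host_id(location, server_public_key) + "/foo"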

  7. SFS Overview (2) • Starting a file server is quite easy • Pick an IP address (or hostname), generate a public key and the HostID, and run the SFS software • Automatic mounting • If a user references a non-existent pathname in /sfs, the SFS client automatically mounts the remote file system • Symbolic link • /umn → /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35 • Authentication • SFS provides server and user authentication (details later)

  8. Key Management (1) • SFS itself does not mandate any particular key management policy • But it provides some useful key management techniques • Manual key distribution • The admin installs the pathname on the local disk • Certification authorities (CAs) • CAs are just SFS servers serving symbolic links • /verisign/umn → /sfs/sfs.umn.cs.edu:vefsdfa345474sfs35 • If a user trusts Verisign's public key, he will trust the path to the umn.cs.edu file server (see the sketch below)
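
  Since a CA in SFS is just another file server exporting symbolic links, publishing a certification amounts to creating a symlink whose target is a self-certifying pathname. A minimal sketch, assuming a hypothetical directory /var/sfs/ca-export that the CA serves:

    import os

    ca_export = "/var/sfs/ca-export"  # hypothetical directory the CA exports over SFS
    target = "/sfs/sfs.umn.cs.edu:vefsdfa345474sfs35"

    os.makedirs(ca_export, exist_ok=True)
    os.symlink(target, os.path.join(ca_export, "umn"))
    # A user who already trusts the CA's own self-certifying pathname (mounted,
    # say, as /verisign) then resolves /verisign/umn to the umn file server
    # without any further key-distribution step.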

  9. Key Management (2) • Using passwords • The user stores a hash of his password with the server (e.g., UMN) • The server authenticates the user based on the password, and the server's self-certifying pathname is installed in the user's local /sfs directory • File access does not involve a central administrator
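
  A toy sketch of the idea on this slide: the server keeps only a salted hash of the user's password and checks it at login. The real SFS uses a dedicated password protocol (SRP) rather than a plain hash comparison, so this is an illustration of the concept, not the actual mechanism.

    import hashlib
    import secrets

    def enroll(password):
        # Server-side record: salt plus salted hash, never the password itself.
        salt = secrets.token_bytes(16)
        return salt, hashlib.sha256(salt + password.encode()).digest()

    def authenticate(password, salt, stored_hash):
        return hashlib.sha256(salt + password.encode()).digest() == stored_hash

    # After a successful login, the server's self-certifying pathname can be
    # installed in the user's local /sfs directory, e.g., as a symbolic link.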

  10. System Components • Agents and the authserver interact for user authentication • Both are modular and can be replaced • The client program handles server authentication and other activities • Revocation, automounting, etc. • [Diagram: on the client, a user program goes through the kernel NFS client to the user-level SFS client, which holds per-user agents; the SFS client talks to the SFS server over a MACed, encrypted TCP connection; on the server, the SFS server sits in front of the kernel NFS server and consults the authserver.]

  11. Protocols: Key Negotiation (1) • The SFS client initiates the following exchange every time it sees a new self-certifying pathname • Pc and Ps denote the public keys of the client and the server, respectively • Client → Server: Location, HostID • Server → Client: Ps • Client → Server: Pc, Ps(Kc1, Kc2) • Server → Client: Pc(Ks1, Ks2) • Session keys • Kcs = H(“KCS”, Ps, Ks1, Pc, Kc1) • Ksc = H(“KSC”, Ps, Ks2, Pc, Kc2)
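
  The session-key derivation on the slide can be written directly as code. A minimal sketch, assuming SHA-1 for the hash H and simple concatenation of the fields (the real protocol encodes them more carefully):

    import hashlib

    def H(*parts):
        return hashlib.sha1(b"".join(parts)).digest()

    def session_keys(Ps, Pc, Ks1, Ks2, Kc1, Kc2):
        # Each side contributes half of each key; Ps and Pc bind the session
        # keys to the public keys used in the exchange.
        Kcs = H(b"KCS", Ps, Ks1, Pc, Kc1)  # protects client-to-server traffic
        Ksc = H(b"KSC", Ps, Ks2, Pc, Kc2)  # protects server-to-client traffic
        return Kcs, Ksc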

  12. Key Negotiation (2) • Only the server can compute Ksc, Kcs • It alone possesses the private key corresponding to Ps (needed to recover Kc1, Kc2) • Ksc, Kcs are used to encrypt and MAC all subsequent communication • Pc is changed frequently (every hour) • Forward secrecy • Is this susceptible to replay?

  13. User Authentication • Performed on a user's first access to a new file system • The SFS server has a database mapping user public keys to credentials • Message flow: SFS client → Agent: AuthInfo, SeqNo; Agent → SFS client: AuthMsg; SFS client → SFS server: SeqNo, AuthMsg; SFS server → Authserver: SeqNo, AuthMsg; Authserver → SFS server: AuthID, SeqNo, Credentials; SFS server → SFS client: AuthNo • AuthInfo = {Location, HostID, SessionID} • SessionID = H(Ksc, Kcs) • Req = {H(AuthInfo), SeqNo} • AuthMsg = PU, SigU(Req) • All communication is secure
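
  A minimal sketch of the agent's side of this exchange, following the definitions on the slide. The field encodings are assumptions, and an HMAC keyed with a hypothetical user secret stands in for the real public-key signature SigU:

    import hashlib
    import hmac
    import struct

    def H(*parts):
        return hashlib.sha1(b"".join(parts)).digest()

    def build_auth_msg(location, host_id, Ksc, Kcs, seq_no, user_pub, user_secret):
        session_id = H(Ksc, Kcs)                        # SessionID = H(Ksc, Kcs)
        auth_info = b",".join([location, host_id, session_id])
        req = H(auth_info) + struct.pack(">Q", seq_no)  # Req = {H(AuthInfo), SeqNo}
        sig = hmac.new(user_secret, req, hashlib.sha1).digest()  # stand-in for SigU(Req)
        return user_pub + sig                           # AuthMsg = PU, SigU(Req)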

  14. Revocation • How to revoke a server’s public key? • Revocation certificate • CR = SigK(Location, PK) • CA stores these certificates • /verisign/revocation/HostID • File named by HostID contains revocation certificate for that HostID • Revocation certificates are self-authenticating • CA need not check the identity of submitters
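
  A revocation certificate is self-authenticating because it is signed with the very key being revoked, so anyone can verify it against PK without trusting whoever submitted it. A minimal sketch, again with HMAC standing in for the real signature and an illustrative field layout:

    import hashlib
    import hmac

    def make_revocation_cert(location, public_key, private_key):
        # CR = SigK(Location, PK): whoever holds the matching private key can revoke it.
        body = b"revoke," + location + b"," + public_key
        return body + hmac.new(private_key, body, hashlib.sha1).digest()

    # The CA simply stores the certificate under /verisign/revocation/<HostID>;
    # clients that find it there stop trusting pathnames containing that HostID.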

  15. Performance • End-to-end performance • SFS is roughly 11–16% slower than NFS 3 over UDP • Sprite LFS small-file benchmark • Small file create, read, and unlink • Read is 3 times slower • Large file (~40 MB) • Sequential write is 44% slower than NFS 3 over UDP • Sequential read is 145% slower than NFS 3 over UDP

  16. Summary • SFS separates key management from the file system • Provides a secure, global file system by using self-certifying pathnames • Any user can start a file server without going through a central authority • The implementation is quite modular • But there is significant performance overhead

  17. References • Self-certifying file system (SFS) • David Mazières and M. Frans Kaashoek. Escaping the evils of centralized control with self-certifying pathnames. In Proceedings of the 8th ACM SIGOPS European Workshop, 1998 • D. Mazières, M. Kaminsky, M. F. Kaashoek, and E. Witchel. Separating key management from file system security. In Proceedings of the 17th ACM Symposium on Operating Systems Principles (SOSP), December 1999 • SFS-based read-only file system • K. Fu, M. F. Kaashoek, and D. Mazières. Fast and secure distributed read-only file system. In Proceedings of the 4th Symposium on Operating Systems Design and Implementation (OSDI), October 2000

  18. Motivation • Internet users rely heavily on publicly available data • Software installation • Secure data distribution is hard • Replication and mirror sites are not secure • Security is expensive to verify • Security holes • Poor or no revocation support

  19. Solution • Consider a subset of the problem → read-only data distribution • Apply SFS • Result → a secure, high-performance, scalable read-only file system

  20. Assumptions • Untrusted distribution servers • Trusted clients • Public, read-only data

  21. System Components • sfsrodb • Database generator—creation and updates • sfsrosd • Server—data distribution • Runs on the server • The server is a self-certifying file system • sfsrocd • Client—data retrieval and verification • Runs on the client

  22. System Overview • [Diagram: the publisher runs sfsrodb over the file system with its private key to produce a signed replica database; the database is copied to several sfsrosd servers; on the client, user applications go through the NFS client to sfsrocd, which fetches data from an sfsrosd over a TCP connection.]

  23. Recursive Hashing (1) • Each data block is hashed • The fixed-size hash is the block's handle • Used to look up the block in the database • Handles are stored in the file's inode • Directories store <name, handle> pairs • Directories and inodes are hashed in turn • rootfh is the hash of the root directory's inode (see the sketch below)
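
  A minimal sketch of the recursive hashing scheme, assuming SHA-1, an 8 KB block size, and toy inode/directory encodings (the real sfsrodb structures carry more metadata and use indirect blocks for large files):

    import hashlib

    BLOCK = 8192  # illustrative block size

    def H(data):
        return hashlib.sha1(data).digest()

    def file_handle(data):
        # Hash every block to a handle, store the handles in a toy inode,
        # and hash the inode to obtain the file's handle.
        handles = [H(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
        inode = b"inode," + b"".join(handles)
        return H(inode)

    def dir_handle(entries):
        # A directory is a sorted list of <name, handle> pairs, hashed the same way;
        # applying this all the way up to "/" yields rootfh, which is then signed.
        body = b"".join(name.encode() + handle for name, handle in sorted(entries.items()))
        return H(b"dir," + body)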

  24. Recursive Hashing (2) • [Diagram: file blocks B0, B1, …, B7, B8 are each hashed (H(B0), H(B1), …); the handles are collected in the file's metadata, whose hash is the file handle, with indirect handles such as H(H(B7)…) covering later blocks; directories hold <name, handle> pairs; the root directory's metadata at / is hashed and signed.]

  25. Features • Data verification by default • Data has an expiry date • struct FSINFO stores {date, duration, rootfh} • Directories are sorted lexicographically • Reduces search time • Opaque directories
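
  The expiry fields make staleness checkable on the client. A minimal sketch of the check implied by FSINFO = {date, duration, rootfh}, assuming Unix timestamps in seconds:

    import time

    def is_fresh(fsinfo_date, duration, now=None):
        # The signed database is trusted only for `duration` seconds past `date`;
        # a replica serving an older database is rejected by the client.
        now = time.time() if now is None else now
        return now <= fsinfo_date + duration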

  26. Limitations • Database updates are inefficient • Handles must be re-computed • Clients must keep up with updates • Verification • Must walk up the tree to the root

  27. Conclusions • Read-only data integrity • Content verification costs offloaded to clients • No confidentiality promise! • High availability, performance, scalability
