Other File Systems: AFS, Napster
Recap
  • NFS:
    • Server exposes one or more directories
      • Client accesses them by mounting the directories
    • Stateless server
      • Has problems with cache consistency, locking protocol
    • Mounting protocol
      • Automounting
  • P2P File Systems:
    • PAST, CFS
    • Both rely on DHTs for routing
Andrew File System (AFS)
  • Named after Andrew Carnegie and Andrew Mellon
    • Transarc Corp. and later IBM took over development of AFS
    • In 2000 IBM made OpenAFS available as open source
  • Features:
    • Uniform name space
    • Location independent file sharing
    • Client side caching with cache consistency
    • Secure authentication via Kerberos
    • Server-side caching in the form of replicas
    • High availability through automatic switchover to replicas
    • Scalability: designed to span 5,000 workstations
AFS Overview
  • Based on the upload/download model (sketched below)
    • Clients download and cache files
    • Server keeps track of clients that cache the file
    • Clients upload files at end of session
  • Whole-file caching is the central idea behind AFS
    • Later amended to support block-level operations
    • Simple, effective
  • AFS servers are stateful
    • Keep track of clients that have cached files
    • Recall files that have been modified
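A minimal sketch of this upload/download model in Python (the class names and server interface are illustrative, not the real AFS client code): open downloads the whole file into the local cache, reads and writes touch only the local copy, and close uploads the file if it changed.

# Sketch of whole-file caching: download everything on open,
# serve reads/writes locally, upload on close if modified.

class ToyServer:
    """In-memory stand-in for a file server (illustrative only)."""
    def __init__(self):
        self.files = {}          # fid -> file contents

    def download(self, fid):
        return self.files.get(fid, b"")

    def upload(self, fid, data):
        self.files[fid] = data

class Session:
    def __init__(self, server, fid):
        self.server, self.fid = server, fid
        self.data = bytearray(server.download(fid))   # whole-file download
        self.dirty = False

    def read(self, offset, length):
        # Served from the local copy; no server round trip.
        return bytes(self.data[offset:offset + length])

    def write(self, offset, payload):
        self.data[offset:offset + len(payload)] = payload
        self.dirty = True

    def close(self):
        # Upload at the end of the session, only if the file changed.
        if self.dirty:
            self.server.upload(self.fid, bytes(self.data))

srv = ToyServer()
srv.files["fid-1"] = b"hello"
s = Session(srv, "fid-1")
s.write(0, b"H")
s.close()                        # server now holds b"Hello"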
AFS Details
  • Has dedicated server machines
  • Clients have a partitioned name space:
    • Local name space and shared name space (distinguished in the sketch below)
    • A cluster of dedicated servers (Vice) presents the shared name space
    • Clients run the Virtue protocol to communicate with Vice
  • Clients and servers are grouped into clusters
    • Clusters connected through the WAN
  • Other issues:
    • Scalability, client mobility, security, protection, heterogeneity
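A toy illustration of the partitioned name space (the helper below is hypothetical; /afs is the conventional root of the shared space): a path is resolved locally unless it falls under the shared root, in which case it is served through Vice.

SHARED_ROOT = "/afs"   # conventional root of the shared (Vice) name space

def is_shared(path: str) -> bool:
    """True if the path lives in the shared name space served by Vice."""
    return path == SHARED_ROOT or path.startswith(SHARED_ROOT + "/")

assert is_shared("/afs/cs.cmu.edu/user/alice")   # shared: goes through Vice
assert not is_shared("/tmp/scratch")             # local: handled by local FS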
AFS: Shared Name Space
  • AFS’s storage is arranged in volumes
    • Usually associated with files of a particular client
  • Each AFS dir entry maps a Vice file/dir to a 96-bit fid (unpacked in the sketch below):
    • Volume number
    • Vnode number: index into i-node array of a volume
    • Uniquifier: allows reuse of vnode numbers
  • Fids are location transparent
    • File movements do not invalidate fids
  • Location information kept in volume-location database
    • Volumes migrated to balance available disk space, utilization
    • Volume movement is atomic; operation aborted on server crash
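The fid's three 32-bit fields can be packed into and unpacked from a single 96-bit integer. A minimal sketch (the 32/32/32 split follows the slide; the class itself is illustrative):

from typing import NamedTuple

class Fid(NamedTuple):
    """96-bit AFS file identifier: three 32-bit fields."""
    volume: int        # which volume holds the file
    vnode: int         # index into the volume's i-node array
    uniquifier: int    # bumped on vnode reuse so stale fids are detectable

    def pack(self) -> int:
        # Concatenate the three fields into one 96-bit integer.
        return (self.volume << 64) | (self.vnode << 32) | self.uniquifier

    @classmethod
    def unpack(cls, value: int) -> "Fid":
        mask = (1 << 32) - 1
        return cls((value >> 64) & mask, (value >> 32) & mask, value & mask)

fid = Fid(volume=3, vnode=17, uniquifier=1)
assert Fid.unpack(fid.pack()) == fid
# Note the fid carries no server address; the volume-location database
# maps fid.volume to whichever server currently holds that volume.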
AFS: Operations and Consistency
  • AFS caches entire files from servers
    • Client interacts with servers only during open and close
  • The client OS intercepts file system calls and passes them to Venus
    • Venus is a client process that caches files from servers
    • Venus contacts Vice only on open and close
      • Does not contact the server if the file is already cached and not invalidated
    • Reads and writes bypass Venus
  • Works due to callbacks (sketched below):
    • Server updates state to record caching
    • Server notifies client before allowing another client to modify
    • Clients lose their callback when someone writes the file
  • Venus caches dirs and symbolic links for path translation
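A hedged sketch of the callback bookkeeping (class names are illustrative): Vice records which clients cache a file and breaks their callbacks before accepting a new version, so Venus only re-contacts the server after its callback is broken.

class ViceServer:
    def __init__(self):
        self.callbacks = {}   # fid -> set of Venus instances holding a callback

    def fetch(self, fid, venus):
        # open(): hand out the file along with a callback promise.
        self.callbacks.setdefault(fid, set()).add(venus)

    def store(self, fid, writer):
        # close() after a write: break every other client's callback first.
        for venus in self.callbacks.get(fid, set()) - {writer}:
            venus.break_callback(fid)
        self.callbacks[fid] = {writer}

class Venus:
    def __init__(self):
        self.valid = set()    # fids cached with an unbroken callback

    def break_callback(self, fid):
        self.valid.discard(fid)

    def open(self, fid, server):
        if fid not in self.valid:        # miss, or callback was broken
            server.fetch(fid, self)
            self.valid.add(fid)
        # subsequent reads/writes hit the local copy without contacting Vice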
AFS Implementation
  • Client cache is a local directory on UNIX FS
    • Venus and server processes access files directly by UNIX i-node
  • Venus maintains two caches, one for status and one for data
    • Uses LRU replacement to keep them bounded in size (see the sketch below)
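A minimal bounded LRU cache of the kind Venus keeps twice over, once for status entries and once for data (the capacity value is an arbitrary illustration):

from collections import OrderedDict

class LRUCache:
    """Bounded cache that evicts the least recently used entry."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)          # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used

status_cache = LRUCache()   # file status (metadata) entries
data_cache = LRUCache()     # file data entries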
Napster
  • Flat FS: single-level FS with no hierarchy
    • Multiple files can have the same name
  • All storage done at edges:
    • Hosts export set of files stored locally
    • Host is registered with centralized directory
      • Uses keepalive messages to check for connectivity
    • Centralized directory notified of file names exported by the host
  • File lookup: client sends a request to the central directory (sketched below)
    • The directory server sends up to 100 files matching the request to the client
    • Client pings each host, computes RTTs, and displays the results
    • Client transfers files from the closest host
  • File transfers are peer-to-peer; the central directory is not involved
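A sketch of the client side of this lookup, assuming a hypothetical directory handle whose search returns host records with an address and a fetch method; RTT is approximated here by timing a TCP connect:

import socket
import time

def rtt(address, port=6699, timeout=1.0):
    """Rough RTT estimate: time a TCP connect to the peer (illustrative)."""
    start = time.monotonic()
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")    # unreachable peers sort last

def lookup_and_fetch(directory, name):
    hits = directory.search(name)                  # up to 100 matching hosts
    hits.sort(key=lambda hit: rtt(hit.address))    # ping each, closest first
    return hits[0].fetch(name)                     # direct peer-to-peer transfer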
Napster Architecture

[Diagram: hosts H1, H2, and H3 connect over the IP network, through a firewall, to Napster.com, where an IP sprayer/redirector spreads requests across Napster Directory Servers 1–3]
Napster Protocol

[Diagram, built up over four slides: (1) H1 registers with a directory server: "I have 'metallica / enter sandman'"; (2) H3 asks "who has metallica?" and is told "check H1, H2"; (3) H3 pings H1 and H2 to measure round-trip times; (4) H3 transfers the file directly from the closest host]

Napster Discussion
  • Issues:
    • Centralized file location directory
    • Load balancing
    • Relies on keepalive messages
    • Scalability an issue!
  • Success: ability to create and foster an online community
    • Built-in ethics
    • Built-in faults
    • Communication medium
  • Had around 640,000 users in November 2000!
Other P2P File Systems
  • Napster has a central database!
    • Removing it will make regulating file transfers harder
  • Freenet, Gnutella, Kazaa … all are decentralized
  • Freenet: anonymous, files encrypted
    • So nodes do not know which files they store locally or which files are being searched for
  • Kazaa: allows parallel downloads
  • Torrents for faster download