
Distributed Anemone: Transparent Low-Latency Access to Remote Memory





Presentation Transcript


  1. High Performance Distributed System. Distributed Anemone: Transparent Low-Latency Access to Remote Memory. Michael R. Hines, Jian Wang, Kartik Gopalan. Reporter: Min-Jyun Chen. Submission year: 2006

  2. Abstract Performance of large memory applications degrades rapidly once the system hits the physical memory limit and starts paging to local disk. Distributed Anemone (Adaptive Network Memory Engine) is a lightweight, distributed system that pools together the collective memory resources of multiple machines across a gigabit Ethernet LAN. Our kernel-level prototype features fully distributed resource management, low-latency paging, resource discovery, load balancing, soft-state refresh, and support for 'jumbo' Ethernet frames.

  3. Outline Introduction Design and Implementation Performance Conclusions

  4. Introduction • Memory resource management is distributed across the whole cluster. There is no single control node. • Clients can perform load-balancing across multiple memory servers, taking into account their memory usage and paging load.

  5. Introduction(cont.) • A distributed resource discovery mechanism enables clients to discover newly available servers and track memory usage across the cluster. • A soft-state refresh mechanism enables memory servers to track the liveness of clients and their pages.

  6. Design and Implementation Client and Server Modules Transparent Virtualization Distributed Resource Discovery Soft-State Refresh Server Load Balancing Fault-tolerance

  7. Client and Server Modules (1/6)

  8. Transparent Virtualization (2/6) • To enable large memory applications (LMAs) to transparently access remote memory, the client module exports a block device interface (BDI) to the pager. • Any single client can transparently access memory from multiple servers as one pool via the BDI. • Any single server can share its unused memory pool among multiple clients simultaneously.
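
The slides show no code, but a minimal user-space sketch of the idea behind the BDI may help: the client keeps a per-page table mapping block-device page indices to (server, remote slot) locations, so one device can span several servers. All names here (page_map_entry, anemone_lookup, the table size) are hypothetical illustrations, not the actual kernel module.

```c
/* Sketch (not the actual Anemone code): a client-side table that lets one
 * block device interface span pages stored on multiple memory servers. */
#include <stdint.h>
#include <stdio.h>

#define MAX_PAGES 1024            /* pages the pseudo block device exposes */

struct page_map_entry {
    uint32_t server_id;           /* which memory server holds the page    */
    uint64_t remote_slot;         /* slot/offset in that server's pool     */
    int      valid;               /* 0 until the page is first swapped out */
};

static struct page_map_entry page_map[MAX_PAGES];

/* Translate a block-device page index into a (server, slot) location. */
static const struct page_map_entry *anemone_lookup(uint32_t page_idx)
{
    if (page_idx >= MAX_PAGES || !page_map[page_idx].valid)
        return NULL;              /* page never written out yet */
    return &page_map[page_idx];
}

int main(void)
{
    /* Pretend page 42 was swapped out to server 3, slot 17. */
    page_map[42] = (struct page_map_entry){ .server_id = 3,
                                            .remote_slot = 17, .valid = 1 };

    const struct page_map_entry *e = anemone_lookup(42);
    if (e)
        printf("page 42 -> server %u, slot %llu\n",
               e->server_id, (unsigned long long)e->remote_slot);
    return 0;
}
```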

  9. Distributed Resource Discovery (3/6) Two goals: seamlessly absorb increases and decreases in cluster-wide memory capacity, insulating LMAs from resource fluctuations, and allow any server to reclaim part or all of its contributed memory.
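
A sketch of how a client might track discovery announcements, assuming a simple (server id, free pages) message format that the slides do not specify; resource_announce, on_announce, and the table size are illustrative names only.

```c
/* Sketch: servers periodically announce how much memory they offer; the
 * client keeps a small table so it can discover new servers and notice
 * capacity changes. Message format and names are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct resource_announce {        /* hypothetical announcement payload */
    uint32_t server_id;
    uint64_t free_pages;          /* pages the server currently offers */
};

#define MAX_SERVERS 16

struct server_info {
    struct resource_announce last;
    time_t  last_seen;
    int     in_use;
};

static struct server_info servers[MAX_SERVERS];

/* Update (or add) a server entry when an announcement arrives. */
static void on_announce(const struct resource_announce *msg)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_SERVERS; i++) {
        if (servers[i].in_use && servers[i].last.server_id == msg->server_id) {
            servers[i].last = *msg;
            servers[i].last_seen = time(NULL);
            return;
        }
        if (!servers[i].in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {         /* newly discovered server */
        servers[free_slot].last = *msg;
        servers[free_slot].last_seen = time(NULL);
        servers[free_slot].in_use = 1;
    }
}

int main(void)
{
    struct resource_announce a = { .server_id = 7, .free_pages = 65536 };
    on_announce(&a);
    printf("server %u offers %llu pages\n",
           servers[0].last.server_id,
           (unsigned long long)servers[0].last.free_pages);
    return 0;
}
```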

  10. Soft-State Refresh (4/6) Soft-state refresh permits servers to track the liveness of clients whose pages they store. Each client periodically transmits a Session Refresh message, carrying a client-specific session ID, to every server that hosts its pages.
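
A sketch of the server side of this mechanism, under the assumption that a changed session ID means the client has rebooted (so its old pages can be dropped) and that liveness is judged by a fixed timeout; the message layout and timeout value are illustrative, not from the paper.

```c
/* Sketch of soft-state refresh handling on a memory server.
 * Field names and the timeout are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define REFRESH_TIMEOUT_SEC 30    /* assumed liveness timeout */

struct session_refresh {          /* hypothetical message layout */
    uint32_t client_id;
    uint64_t session_id;          /* changes each time the client reboots */
};

struct client_state {
    uint64_t session_id;
    time_t   last_refresh;
};

/* Returns 1 if the client's stored pages should be discarded because the
 * session ID changed (new session), 0 otherwise. */
static int on_refresh(struct client_state *st, const struct session_refresh *m)
{
    int stale = (st->session_id != 0 && st->session_id != m->session_id);
    st->session_id   = m->session_id;
    st->last_refresh = time(NULL);
    return stale;
}

/* A client that has not refreshed within the timeout is treated as dead. */
static int client_is_dead(const struct client_state *st)
{
    return time(NULL) - st->last_refresh > REFRESH_TIMEOUT_SEC;
}

int main(void)
{
    struct client_state st = { 0 };
    struct session_refresh m = { .client_id = 1, .session_id = 0xabcd };

    on_refresh(&st, &m);
    printf("client dead? %d\n", client_is_dead(&st));
    return 0;
}
```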

  11. Server Load Balancing (5/6) Load balancing considers two metrics for each active server: the number of pages it currently stores and the number of paging requests it has serviced.
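
A sketch of a selection policy built only on the two metrics the slide lists; the equal weighting and the function names are assumptions, not the paper's algorithm.

```c
/* Sketch: pick the least-loaded server using the two signals named on the
 * slide (pages already stored there, paging requests it has served). */
#include <stdint.h>
#include <stdio.h>

struct server_load {
    uint32_t server_id;
    uint64_t pages_stored;        /* client's pages resident on this server */
    uint64_t requests_served;     /* paging requests it has handled so far  */
};

/* Lower score = less loaded; equal weights are an arbitrary choice here. */
static uint64_t load_score(const struct server_load *s)
{
    return s->pages_stored + s->requests_served;
}

/* Choose the server to receive the next swapped-out page. */
static uint32_t pick_server(const struct server_load *srv, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (load_score(&srv[i]) < load_score(&srv[best]))
            best = i;
    return srv[best].server_id;
}

int main(void)
{
    struct server_load srv[] = {
        { .server_id = 1, .pages_stored = 4000, .requests_served = 9000 },
        { .server_id = 2, .pages_stored = 1500, .requests_served = 2000 },
    };
    printf("send next page to server %u\n", pick_server(srv, 2));
    return 0;
}
```

A real policy would likely weight the two signals differently (for example, favoring recent request rate over raw page counts), but the slide names only these two inputs.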

  12. Fault-tolerance (6/6) Two options: maintain a local disk-based copy of every memory page swapped out over the network, or keep redundant copies of each page on multiple remote servers.
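
A sketch of the first option (a write-through, disk-based backup of each page sent to remote memory); the backing-file name and helper names are assumptions.

```c
/* Sketch: before a page goes to a remote server, write a copy at the
 * matching offset of a local backing file so the page survives a server
 * failure. File name and function names are hypothetical. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Write-through backup of one page to the local backing file. */
static int backup_page_to_disk(FILE *backing, uint64_t page_idx,
                               const void *page)
{
    if (fseek(backing, (long)(page_idx * PAGE_SIZE), SEEK_SET) != 0)
        return -1;
    return fwrite(page, PAGE_SIZE, 1, backing) == 1 ? 0 : -1;
}

int main(void)
{
    FILE *backing = fopen("anemone_backup.img", "w+b"); /* assumed path */
    if (!backing)
        return 1;

    char page[PAGE_SIZE];
    memset(page, 0x5a, sizeof(page));

    if (backup_page_to_disk(backing, 42, page) == 0)
        puts("page 42 backed up locally; remote copy can now be sent");

    fclose(backing);
    return 0;
}
```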

  13. Performance Paging Latency Application Speedup Tuning the Client RMAP Protocol

  14. Paging Latency (1/3)

  15. Application Speedup (2/3)

  16. Tuning the Client RMAP Protocol (3/3)

  17. Conclusions We are incorporating fault-tolerance mechanisms into Anemone using page replication across servers as well as local disk.

  18. Thank You!
