
An Efficient Process Live Migration Mechanism for Load Balanced Distributed Virtual Environments



Presentation Transcript


  1. An Efficient Process Live Migration Mechanism for Load Balanced Distributed Virtual Environments. Balazs Gerofi, Hajime Fujita, Yutaka Ishikawa. Yutaka Ishikawa Laboratory, The University of Tokyo. IEEE Cluster2010

  2. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  3. Motivation • In Distributed Virtual Environments (DVE): • Massively Multi-player Online Games (MMPOG) • Networked Virtual Environments (NVE) • Distributed simulations such as the High-Level Architecture (HLA) • 10,000 to 100,000 clients may be involved • A cluster of servers is used to provide services at large scale • Zoning (i.e., partitioning the virtual space among servers) • Main limitations of application-level load balancing: • Client migrations are heavy: server state needs to be transferred, clients must reconnect, etc. • A physical machine is limited to neighboring zones • Is operating system level load balancing feasible? • Server processes are highly interactive • They maintain a massive number of network connections (clients) • They maintain connections with other in-cluster components • How can such processes be migrated? IEEE Cluster2010

  4. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  5. Cluster Server Architecture • Each DVE server is equipped with a public and a private interface; the same IP address is assigned to all public interfaces • The router broadcasts incoming packets to all DVE server nodes • Migrating zone server processes does not require any work on the router! • Zone server processes are distinguished by separate port numbers (as opposed to separate IP addresses) IEEE Cluster2010
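
To illustrate the port-based demultiplexing described on this slide, here is a minimal, hypothetical sketch (not part of the original deck) of a zone server binding its own port on the shared public IP; the address and port values are invented for the example.

```c
/* Minimal sketch: a zone server binds its own well-known port on the
 * shared public IP, so the router's broadcast can be demultiplexed by
 * port number alone.  The address and port are illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int port = (argc > 1) ? atoi(argv[1]) : 7001;   /* zone-specific port */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    /* Shared public IP assigned to every server node's public interface. */
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 128);
    printf("zone_serv listening on shared IP, port %d\n", port);
    /* ... accept() loop would follow ... */
    close(fd);
    return 0;
}
```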

  6. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  7. Server Node Software Components • mig_mod: migration module with live migration and socket support (extension of the Berkeley C/R module) • cap_trans_mod: packet capturing and address translation kernel module (details in paper) • transd: translation daemon • migd: migration daemon • cond: load monitor and load balancer • zone_serv: zone server processes [diagram: zone_serv1 … zone_servn, migd, transd, cond in user space; mig_mod and cap_trans_mod in the Linux kernel] IEEE Cluster2010

  8. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  9. Process Live Migration [diagram: process image with network sockets on the source host; destination host reachable over the network] IEEE Cluster2010

  10. Process Live Migration: Transfer the whole process image in the background without stopping the execution. IEEE Cluster2010

  11. Process Live Migration: Track dirty pages for a certain period; the process is still being executed. IEEE Cluster2010

  12. Process Live Migration: Stop process (freeze phase), transfer dirty memory, export network connections and transfer data to destination. IEEE Cluster2010

  13. Process Live Migration: Apply changes and resume execution. Note: main goal is short process freeze time! IEEE Cluster2010
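
Slides 9 to 13 describe a pre-copy style live migration. The following runnable sketch (not from the paper; all helpers are toy stand-ins for what mig_mod would do, and the thresholds are invented) shows the shape of that loop: background copy, dirty-page tracking rounds, then a short freeze phase.

```c
/* A schematic, user-space sketch of the pre-copy loop on slides 10-13.
 * The helpers below merely print what the real kernel module (mig_mod)
 * would do; their names and thresholds are illustrative, not the
 * paper's API. */
#include <stdio.h>

static int image_pages = 4096;   /* pretend size of the whole image        */
static int dirty_pages = 1024;   /* pretend pages dirtied, shrinking below */

static int  collect_dirty_pages(void) { int d = dirty_pages; dirty_pages /= 4; return d; }
static void transfer_pages(int n, const char *dest) { printf("send %d pages to %s\n", n, dest); }
static void freeze_process(void) { printf("freeze process\n"); }
static void export_sockets(const char *dest) { printf("export sockets to %s, resume there\n", dest); }

#define DIRTY_LIMIT 64           /* illustrative threshold  */
#define MAX_ROUNDS  16           /* bound the tracking loop */

static void live_migrate(const char *dest)
{
    /* 1. Background copy of the whole image; the process keeps running. */
    transfer_pages(image_pages, dest);

    /* 2. Dirty-log phase: repeatedly re-send pages dirtied meanwhile. */
    for (int round = 0; round < MAX_ROUNDS; round++) {
        int dirty = collect_dirty_pages();
        if (dirty < DIRTY_LIMIT)
            break;               /* few enough dirty pages: stop tracking */
        transfer_pages(dirty, dest);
    }

    /* 3. Freeze phase (kept as short as possible): stop the process,
     *    send the last dirty pages and the exported connections, resume. */
    freeze_process();
    transfer_pages(collect_dirty_pages(), dest);
    export_sockets(dest);
}

int main(void) { live_migrate("destination-host"); return 0; }
```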

  14. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  15. Iterative socket migration (during process freeze phase): Incoming packet loss prevention! IEEE Cluster2010

  16. Iterative socket migration (during process freeze phase): Extract remote IP and port number, set up a filter at the destination node to capture incoming packets and disable socket. IEEE Cluster2010

  17. Iterative socket migration (during process freeze phase): Migrate socket data to destination node. IEEE Cluster2010

  18. Iterative socket migration (during process freeze phase): Inject any packets that were captured on the destination node and attach socket to the process. IEEE Cluster2010

  19. Iterative socket migration (during process freeze phase) [diagram only] IEEE Cluster2010

  20. Iterative socket migration (during process freeze phase) [diagram only] IEEE Cluster2010

  21. Iterative socket migration (during process freeze phase). Note: requires several synchronization steps with short writes following each other! IEEE Cluster2010
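
Slides 15 to 21 walk through the per-socket scheme. The sketch below (hypothetical helper names; it only prints the steps) shows why it is costly: every connection needs its own filter setup, disable, state transfer, and attach exchange inside the freeze phase.

```c
/* User-space sketch of the per-socket ("iterative") migration of slides
 * 15-21.  Each helper stands in for a request/acknowledgement exchanged
 * between the migration daemons; names and values are illustrative. */
#include <stdio.h>

struct sock_id { const char *remote_ip; int remote_port; };

static void dest_install_filter(struct sock_id s)   /* capture packets for s at destination */
{ printf("dest: capture %s:%d\n", s.remote_ip, s.remote_port); }
static void src_disable_socket(struct sock_id s)    /* stop delivering packets at source */
{ printf("src:  disable %s:%d\n", s.remote_ip, s.remote_port); }
static void transfer_socket_state(struct sock_id s) /* sequence numbers, options, buffered data */
{ printf("xfer: state of %s:%d\n", s.remote_ip, s.remote_port); }
static void dest_inject_and_attach(struct sock_id s)/* replay captured packets, attach to process */
{ printf("dest: attach %s:%d\n", s.remote_ip, s.remote_port); }

int main(void)
{
    struct sock_id socks[] = { {"198.51.100.1", 40001},
                               {"198.51.100.2", 40002},
                               {"198.51.100.3", 40003} };
    int n = sizeof(socks) / sizeof(socks[0]);

    /* One full round of small synchronizing messages per connection, all
     * inside the process freeze phase -- this is what makes the iterative
     * scheme slow for a large number of sockets. */
    for (int i = 0; i < n; i++) {
        dest_install_filter(socks[i]);   /* so no incoming packet is lost */
        src_disable_socket(socks[i]);
        transfer_socket_state(socks[i]);
        dest_inject_and_attach(socks[i]);
    }
    return 0;
}
```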

  22. Collective socket migration (during process freeze phase) [diagram only] IEEE Cluster2010

  23. Collective socket migration (during process freeze phase): Extract remote IP and port number for all sockets, set up filters to capture incoming packets and disable sockets. IEEE Cluster2010

  24. Collective socket migration (during process freeze phase): Extract socket data into one unified buffer and transfer everything in one go. IEEE Cluster2010

  25. Collective socket migration (during process freeze phase): Attach sockets, inject packets. Note: the amount of socket data transferred can still be large! IEEE Cluster2010
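
For comparison, a sketch of the collective variant from slides 22 to 25, again with invented buffer sizes and helpers: filters are installed for all sockets up front, and every socket's state is packed into one buffer and sent in a single transfer.

```c
/* Sketch of the "collective" variant (slides 22-25): filters for all
 * sockets are installed first, then every socket's state is packed into
 * one buffer and sent in a single transfer.  Buffer layout and helper
 * names are illustrative. */
#include <stdio.h>
#include <string.h>

#define NSOCKS       3
#define STATE_BYTES  512                 /* pretend per-socket state size */

static char batch[NSOCKS * STATE_BYTES]; /* one unified buffer */

static void pack_socket_state(int i, char *dst)
{ memset(dst, 0, STATE_BYTES); snprintf(dst, STATE_BYTES, "state-of-socket-%d", i); }

static void send_buffer(const char *buf, size_t len)
{ printf("single transfer of %zu bytes to destination\n", len); (void)buf; }

int main(void)
{
    /* 1. Capture filters and socket disabling for all connections up
     *    front (one batched exchange instead of one per socket). */
    printf("install filters + disable all %d sockets\n", NSOCKS);

    /* 2. Extract every socket's representation into the unified buffer. */
    for (int i = 0; i < NSOCKS; i++)
        pack_socket_state(i, batch + i * STATE_BYTES);

    /* 3. One transfer, then attach sockets and inject captured packets. */
    send_buffer(batch, sizeof(batch));
    printf("attach sockets, inject captured packets\n");
    return 0;
}
```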

  26. Incremental collective socket migration (during dirty-log phase) [diagram only] IEEE Cluster2010

  27. Incremental collective socket migration (during dirty-log phase): All socket data are transferred asynchronously and tracking structures are allocated for each connection. IEEE Cluster2010

  28. Incremental collective socket migration (during dirty-log phase): Some pages are dirtied and some sockets' state changes are detected. IEEE Cluster2010

  29. Incremental collective socket migration (during dirty-log phase): Dirty pages are transferred, modified sockets' state is updated, and the tracking loop timeout is decreased. IEEE Cluster2010

  30. Incremental collective socket migration (during dirty-log phase): When the number of dirty pages or the tracking timeout goes below a pre-defined limit, enter the process freeze phase. IEEE Cluster2010

  31. Incremental collective socket migration (during dirty-log phase): Transfer dirty pages and set up packet capture filter. IEEE Cluster2010

  32. Incremental collective socket migration (during dirty-log phase): Update sockets that have changed in the last iteration and disable sockets on the source machine. Note: transferred socket data in freeze phase is much less than the overall socket representation! IEEE Cluster2010

  33. Incremental collective socket migration (during dirty-log phase): Inject packets and re-enable sockets on the destination machine. IEEE Cluster2010
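
Slides 26 to 33 add the incremental idea. The toy sketch below (invented tracking structure and socket count) moves the bulk of the socket state during the dirty-log phase and re-sends only the connections that changed, so the freeze phase stays short.

```c
/* Sketch of the incremental variant (slides 26-33): socket state is sent
 * asynchronously during the dirty-log phase and only the connections that
 * changed since then are re-sent in the freeze phase.  The change-tracking
 * structure is illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NSOCKS 4

static bool changed[NSOCKS];             /* per-connection tracking structure */

static void send_socket_state(int i)     { printf("send state of socket %d\n", i); }
static void mark_activity(void)          { changed[1] = true; changed[3] = true; } /* pretend */

int main(void)
{
    /* Dirty-log phase: push all socket state in the background. */
    for (int i = 0; i < NSOCKS; i++) send_socket_state(i);

    /* Track which connections see traffic / state changes afterwards. */
    mark_activity();

    /* Freeze phase: only the changed sockets are re-sent, so the data
     * moved while the process is stopped stays small. */
    printf("freeze: install filters, disable sockets\n");
    for (int i = 0; i < NSOCKS; i++)
        if (changed[i]) send_socket_state(i);
    printf("resume at destination: inject packets, re-enable sockets\n");
    return 0;
}
```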

  34. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  35. Dynamic Load Balancing • Decentralized middleware • Load balancing is sender-initiated, performing a handshake with the receiver • Transfer policy: • Threshold driven (act if the load exceeds a certain value) • Location policy: • Based on knowledge of the load on the rest of the nodes, preferring a node whose load is on the opposite side of the cluster load average • Selection policy: • Prefers a process whose CPU consumption is close to the difference between the given node's load and the cluster load average • Information policy: • Periodic; nodes broadcast their load IEEE Cluster2010
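
A compact sketch of how the four policies on this slide could fit together; the load values, thresholds, and node/process counts are illustrative, not taken from the paper.

```c
/* Sketch of the four load-balancing policies on slide 35.  Thresholds,
 * node counts and the load metric are illustrative. */
#include <math.h>
#include <stdio.h>

#define NNODES      5
#define NPROCS      4
#define LOAD_THRESH 0.8                      /* transfer policy threshold */

int main(void)
{
    /* Information policy: loads learned from periodic broadcasts. */
    double load[NNODES] = {1.3, 0.5, 0.4, 0.6, 0.7};
    int self = 0;                            /* this node is overloaded   */

    double avg = 0;
    for (int i = 0; i < NNODES; i++) avg += load[i];
    avg /= NNODES;

    /* Transfer policy: act only if our load exceeds the threshold. */
    if (load[self] <= LOAD_THRESH) return 0;

    /* Location policy: pick a node on the opposite side of the average. */
    int target = -1;
    for (int i = 0; i < NNODES; i++)
        if (i != self && load[i] < avg && (target < 0 || load[i] < load[target]))
            target = i;

    /* Selection policy: pick the process whose CPU use is closest to the
     * gap between our load and the cluster average. */
    double cpu[NPROCS] = {0.1, 0.35, 0.6, 0.25}, gap = load[self] - avg;
    int victim = 0;
    for (int p = 1; p < NPROCS; p++)
        if (fabs(cpu[p] - gap) < fabs(cpu[victim] - gap)) victim = p;

    printf("migrate process %d (cpu %.2f) to node %d (load %.2f)\n",
           victim, cpu[victim], target, load[target]);
    return 0;
}
```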

  36. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  37. Evaluation: experimental framework • Dedicated cluster sharing a single public IP address • 5 DVE server nodes + a MySQL server • 2.4 GHz dual-core AMD Opteron • 2 GB RAM • Gigabit Ethernet for both the in-cluster and the public network IEEE Cluster2010

  38. Evaluation: OpenArena server • OpenArena is an open-source multi-player online game based on the Quake III engine [1] • Uses UDP for client-server communication • ~20 messages (updates) per second • Live migrated while 24 clients were participating in a session • Based on tcpdump results on the client machines, ~25 ms of service downtime due to migration [1] http://openarena.ws/smfnews.php IEEE Cluster2010

  39. Evaluation: DVE simulation • DVE simulation with communication characteristics resembling real-world MMORPGs, using TCP connections • Client state updates: 20 msgs/sec, 256~512 byte messages [2] • DVE server processes maintain MySQL connections to a local DB server • CPU consumption grows proportionally with the number of clients in a given zone; 10,000 clients involved • The virtual space consists of 10x10 zones; each DVE server node is initially assigned 20 zones • 15-minute simulation during which clients are instructed to move to the upper-left and bottom-right corners of the virtual space • Files are assumed to be available on each node [2] Traffic characteristics of a massively multi-player online role playing game, NetGames’05 IEEE Cluster2010
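
For concreteness, a hypothetical client loop matching the workload parameters on this slide (20 updates per second, 256 to 512 bytes each, over TCP); the server address, port, and duration are invented.

```c
/* Sketch of a simulated client's traffic pattern (slide 39): roughly 20
 * state-update messages per second, 256-512 bytes each, over TCP.  The
 * server address, port and loop bound are illustrative. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(7001) };
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);
    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return 1;
    }

    char msg[512];
    for (int i = 0; i < 20 * 60; i++) {             /* ~one minute of updates */
        size_t len = 256 + rand() % 257;            /* 256-512 byte update    */
        memset(msg, 'u', len);
        if (send(fd, msg, len, 0) < 0) break;
        usleep(50 * 1000);                          /* 20 messages per second */
    }
    close(fd);
    return 0;
}
```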

  40. Live migration process downtime IEEE Cluster2010

  41. Socket data transferred during process freeze phase IEEE Cluster2010

  42. Load distribution during simulation without load balancing • node1, node2 and node5 become overloaded when clients move to zones maintained by these nodes IEEE Cluster2010

  43. Load distribution during simulation with load balancing • Load stays balanced throughout the simulation IEEE Cluster2010

  44. Number of zone server processes on each node during the simulation • Lighter processes are migrated over to node3 and node4 in order to balance the overall load of the system IEEE Cluster2010

  45. Outline • Motivation • Cluster Server Architecture • DVE Software Components • Process Live Migration • Multiple Socket Migration Optimizations • Dynamic Load Balancing • Evaluation • Conclusion IEEE Cluster2010

  46. Conclusion • Process live migration • Optimizations for migrating a massive number of network connections • No modifications to the TCP protocol or to the client-side network stack • Dynamic load balancing engine exploiting process live migration • DVE simulation demonstrating the load balancer and live migration • Other possible scenarios: • Fault tolerance (IEEE NCA2010) • Power management IEEE Cluster2010

  47. Thank you for your attention! Questions? IEEE Cluster2010

  48. Related Work • Connection migration: • NEC’s distributed Web server architecture: each session has its own virtual IP address • SockMi, Tcpcp: TCP migration with IP-layer forwarding; they don’t decouple the process from the source machine • TCP Migrate option: an extension to the TCP protocol • Process migration and incremental checkpointing: • V-System, Amoeba, Mach, Sprite, MOSIX: limited connection migration support • BLCR: no support for connection migration or incremental checkpointing • Zap’s VNAT: support required on the client side as well • Load balancing DVEs: • Several studies addressing application-level solutions • MOSIX: home-node approach leaves residual dependencies IEEE Cluster2010
