

Peer-to-Peer Support for Massively Multiplayer Games [INFOCOM 2004]: Bjorn Knutsson, Honghui Lu, Wei Xu, Bryan Hopkins, UPenn
Zone Federation of Game Servers: a Peer-to-Peer Approach to Scalable Multi-player Online Games [SIGCOMM 2004]: Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi, Nara Institute of Science and Technology





Presentation Transcript


  1. Peer-to-Peer Support for Massively Multiplayer Games [INFOCOM 2004]: Bjorn Knutsson, Honghui Lu, Wei Xu, Bryan Hopkins, UPenn • Zone Federation of Game Servers: a Peer-to-Peer Approach to Scalable Multi-player Online Games [SIGCOMM’04]: Takuji Iimura, Hiroaki Hazeyama, Youki Kadobayashi, Nara Institute of Science and Technology

  2. Outline • One-line summary • Motivation • Solution Approach 1 • Peer-to-Peer Support for Massively Multiplayer Games • Solution Approach 2 • Zone Federation of Game Servers • Experiment • Critique

  3. One-line comment • These papers present a peer-to-peer overlay network for Massively Multiplayer Games (MMGs) that solves the scalability problem by exploiting locality of interest.

  4. Motivation • Massively Multiplayer Games (MMG) • Most MMGs are RPGs • Ex> In “The Lord of the Rings”, you are Legolas >_< • Ex> Lineage, World of Warcraft, etc. • 2M players, 180K concurrent players

  5. Motivation • Existing clustered client-server architecture • Zone-based partitioning • Single point of failure • Flash crowds • Over-provisioning • Lack of flexibility

  6. Motivation • Characteristics of MMG • Large shared game world • Immutable landscape information (terrain) • Mutable objects (food, tools, NPCs) • Locality of interest • Limited vision & sensing capabilities • Limited movement • Interaction with nearby objects and players • Self-organizing groups by location • Party play in RPGs • Ex> the Fellowship of the Ring in “The Lord of the Rings”

  7. Solution Approach [Figure: game world divided into Regions 1–3 on a Pastry P2P overlay, with Scribe multicast trees connecting players, objects (NPCs or food), and a per-region coordinator via direct connections] • P2P overlay • Scales up and down dynamically • Self-organizing decentralized system • Divide the entire game into several regions (peer groups) • Hash region names into the P2P key space • A coordinator manages each region • Coordination of shared objects • Root of the multicast tree • Distribution server for the map • Also one of the players

  8. Scenario: mapping regions to coordinators [Figure: circular key space with node keys (e.g. 3, 10, 14) and region keys (e.g. 1, 5, 12) for regions A, B, C] • Regions and player machines are mapped into the key space • A region is managed by its successor machine in the key space
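The successor mapping on this slide can be sketched as a consistent-hashing lookup. This is an illustrative sketch, not the papers' implementation: the tiny key space, `dht_key`, and the example node keys 3, 10, 14 are assumptions chosen to mirror the ring in the figure (Pastry actually uses 128-bit keys).

```python
import bisect
import hashlib

KEY_SPACE = 2 ** 16  # small key space for illustration; Pastry uses 128-bit keys

def dht_key(name: str) -> int:
    """Hash a region name (or node id) into the circular key space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % KEY_SPACE

def successor(node_keys, key):
    """Return the first node key clockwise from `key`, wrapping around the ring."""
    keys = sorted(node_keys)
    i = bisect.bisect_left(keys, key)
    return keys[i % len(keys)]

# Example node keys as in the slide's ring: nodes at 3, 10, 14
nodes = [3, 10, 14]
assert successor(nodes, 1) == 3    # region hashed to key 1 -> node 3
assert successor(nodes, 5) == 10   # region hashed to key 5 -> node 10
assert successor(nodes, 12) == 14  # region hashed to key 12 -> node 14
assert successor(nodes, 15) == 3   # wraps around past the largest node key
```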

  9. Scenario: interaction between nodes [Figure: the same key-space ring, with blue arrows between players and coordinators] • Blue arrows denote communication between a player and a coordinator • Except nodes E and F, every node is both a coordinator and a player

  10. Scenario: node join [Figure: key-space ring; player D on node 7 joins between the existing nodes] • Player D on node 7 joins • Rely on the DHT to relocate the peer-server

  11. Scenario: node leave [Figure: key-space ring; a node leaves and its region moves to the succeeding node] • Node leaves • The peer-server is relocated to the succeeding node
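Both join and leave fall out of the same successor rule: re-running the lookup over the changed node set relocates the peer-server automatically. A minimal sketch, assuming a `successor` helper and the slide's example keys (node 7 joining and leaving, a region hashed to key 5); none of this is the papers' code.

```python
import bisect

def successor(node_keys, key):
    """First node key clockwise from `key` on the circular key space."""
    keys = sorted(node_keys)
    i = bisect.bisect_left(keys, key)
    return keys[i % len(keys)]

nodes = [3, 10, 14]
region = 5                            # region hashed to key 5
assert successor(nodes, region) == 10  # node 10 is the coordinator

nodes.append(7)                        # node 7 joins the ring
assert successor(nodes, region) == 7   # DHT relocates the peer-server to node 7

nodes.remove(7)                        # node 7 leaves
assert successor(nodes, region) == 10  # the succeeding node takes over again
```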

  12. Scenario: replica and coordinator migration [Figure: key-space ring with nodes 1–14; game state replicated at succeeding nodes] • Game states are replicated at replicas (the succeeding nodes) • The coordinator keeps consistency on every update • The new coordinator forwards updates to the old one until the game-state transfer is completed • Recovery time depends on both the size of the game state and the network
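Replicating state at the succeeding nodes means the next node on the ring already holds a copy when the coordinator fails, which is exactly what makes the migration on this slide possible. A sketch under assumed example keys (replication degree 2, nodes 3, 7, 10, 14), not the papers' code:

```python
import bisect

def replica_set(node_keys, key, n_replicas=2):
    """Coordinator (successor of `key`) plus the next n_replicas nodes clockwise."""
    keys = sorted(node_keys)
    i = bisect.bisect_left(keys, key) % len(keys)
    return [keys[(i + j) % len(keys)] for j in range(n_replicas + 1)]

nodes = [3, 7, 10, 14]
group = replica_set(nodes, 5)       # region hashed to key 5
assert group == [7, 10, 14]         # coordinator 7, state replicated at 10 and 14

nodes.remove(7)                     # coordinator fails or leaves
group = replica_set(nodes, 5)
assert group[0] == 10               # the first replica already holds the state
```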

  13. Solution Approach • Division of the coordinator role • Zone owner • Sends state changes to zone members • Aggregates modifications from zone members • Keeps changes consistent • Data holder • Stores the zone name, zone owner, and zone data

  14. Solution Approach • Separation of the zoning layer from the DHT network • Flexibility of zone ownership • One node can own several data holders (really?) • Enables dynamic zone allocation • Direct connections • Reduce latency • No crossing of several hops on the DHT

  15. Scenario: zone owner and data holder [Figure: key-space ring; owner lookup and data updates flow between players, data holders, and the zone owner] • Data holder • Same location as the coordinator • Zone owner • The node that first updates the data holder’s data
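The owner/holder split above can be sketched as follows; the `Zone` class and node names are hypothetical, but the rule matches the slide: the first node to update a zone's data becomes its owner. The same sketch also shows why the later critique flags this as a cheating vector.

```python
class Zone:
    """Data holder record; the owner is whichever node writes to the zone first."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.owner = None           # assigned on the first update

    def update(self, writer, key, value):
        if self.owner is None:
            self.owner = writer     # first updater becomes the zone owner
        self.data[key] = value

z = Zone("forest-1")
z.update("node-A", "npc_hp", 80)
assert z.owner == "node-A"          # node-A grabbed ownership by writing first
z.update("node-B", "npc_hp", 60)    # later writers update data but not ownership
assert z.owner == "node-A"
assert z.data["npc_hp"] == 60
```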

  16. Experimental assumptions • Prototype implementation of “SimMud” • Game states • RPG games modeled to generate their own trace • Position changes: multicast in the group (region) every 150 ms (cf. Quake 2: every 50 ms) • Player-object interaction (coordinator–player): eat every 20 sec • Player-player interaction: fight every 20 sec • Region (multicast group) changes: every 40 sec • Region: 200x300 grid • Map and object size: 2 × 6 KB • Maximum simulation size constrained by memory to 4000 virtual nodes (players) • No player join & leave; no optimization
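The trace parameters above imply a per-player event rate, dominated by the 150 ms position updates. This is just arithmetic over the intervals listed on the slide, not a figure from the papers:

```python
# Per-player event rates implied by the trace parameters (seconds per event)
position_interval = 0.150        # position multicast every 150 ms
eat_interval = 20.0              # player-object interaction every 20 s
fight_interval = 20.0            # player-player interaction every 20 s
region_change_interval = 40.0    # region (multicast group) change every 40 s

events_per_sec = (1 / position_interval + 1 / eat_interval
                  + 1 / fight_interval + 1 / region_change_interval)
print(round(events_per_sec, 2))  # ~6.79 events generated per player per second
```

This is consistent with the roughly 7 position updates per second quoted in the results slides.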

  17. Experimental results • At the same density, population growth makes no difference • Delay also increases slowly, O(log n) • Around 6 hops • Maximum delay about 400 ms • Average 70 messages/sec • ≈ 10 position updates × 7/sec

  18. Experimental results • Effect of population density • Ran with 1000 players, 25 regions • Position updates increase linearly per node • Non-uniform player distribution hurts performance • Message rate of object updates is higher than that of player-player interaction • Object updates are multicast in the region • Object updates are sent to replicas • Player-player interaction affects only the players involved • 99% of messages are position updates • Region changes take the most bandwidth

  19. Experimental result

  20. Summary of results • Feasibility of the design • Average delay: 150 ms • Bandwidth requirements • 7.2 KB/sec average • 22.34 KB/sec peak

  21. Critique – first paper • Strong points • P2P approach to MMG architecture design • Good evaluation • SimMud • Real surveys of several game players • Estimation of features of RPG games • Weak points • Scalability • Static zone partitioning • The coordinator node bears too much burden • Heterogeneous peer nodes

  22. Critique – second paper • Strong points • Enables dynamic zone allocation • One node can own several data holders (zones) • Reduced latency between the zone owner and members • Weak points • Robustness • Owner failure: state is lost • Replicas? • Lacking experiments • Zone-owner change • Old owner’s update cost • New owner’s download cost • Time delay of succession • Cheating problem • The node that updates zone data becomes the zone owner

  23. Critique • New idea • Dynamic division of zones • Data holder coverage is statically assigned • Data owner coverage is dynamic • Sometimes users crowd into a specific place • Ex> a Thrall attack group of 300 users • Ex> a defending group of 120 users • One zone needs more than one owner • Split the data holder (zone) • Allocate several zone owners to one zone • Several instances of one zone • No world-server division • A crowded zone may split into two parallel zones • Several “Caribbean Bay” instances!! • Already present in “Guild Wars” by ArenaNet • However, it is not a P2P approach!
