
Mecha Zeta Project Title: Next-Generation Real Time Internet Game (Self-proposed)


Presentation Transcript


  1. Mecha Zeta. Project Title: Next-Generation Real Time Internet Game (Self-proposed). Supervisor(s): Dr. C.L. Wang, Dr. W. Wang and Dr. A.T.C. Tam. 2nd Examiner: Dr. K.S. Lui. Project Members (CE): Cheung Hiu Yeung, Patrick; Sin Pak Fung, Lester; Wong Tin Chi, Ivan; Ho King Hang, Tabris; Yuen Man Long, Sam

  2. Project description • Motivation • The majority of present large-capacity interactive Internet games rely on a client-server model, which becomes a bottleneck under frequent communications, and settle for pre-computed shadowing and approximate collision detection • Project goal • To test the feasibility of developing an interactive, realistic, large-capacity real-time multiplayer game over unreliable Internet communication in a P2P architecture • P2P network architecture over the Internet • Partitioning and P2P synchronization • Real-time shadowing • Accurate collision detection

  3. Mecha Zeta: P2P Network Architecture (Cheung Hiu Yeung, Patrick)

  4. Network Architecture [Diagram of the whole network architecture: a server at the top; below it several coordinators, each also a client, and each coordinator serving its own group of clients]

  5. Communication Subsystem • Communication needed in the game • Fetch the data stored on the server when starting the game • Each client needs to learn the current state of the game • Broadcast controls and positions to peers

  6. Communication Subsystem • TCP and UDP are available in Java • TCP: reliable, connection-oriented transfer; no loss, in-order delivery • UDP: unreliable, connectionless transfer • Which one to choose depends on its purpose in the game

  7. Communication Subsystem • Test result on measuring RTT of the TCP and UDP implementations in Java:

  Trial | TCP Connection RTT (ms) | UDP Datagram RTT (ms)
  1     | 261                     | 40
  2     | 311                     | 100
  3     | 300                     | 120
  4     | 891                     | 60
  5     | 300                     | timeout
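As a rough illustration of how such an RTT measurement can be made, the sketch below times a UDP round trip in Java. The peer address, port and the assumption that the peer echoes the datagram back are hypothetical, not part of the original test code.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class UdpRttProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical peer running a UDP echo service.
        InetAddress peer = InetAddress.getByName("192.168.0.2");
        int port = 9000;

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000); // give up on a reply after 3 s
            byte[] payload = "ping".getBytes();

            for (int trial = 1; trial <= 5; trial++) {
                long start = System.nanoTime();
                socket.send(new DatagramPacket(payload, payload.length, peer, port));
                try {
                    socket.receive(new DatagramPacket(new byte[64], 64));
                    long rttMs = (System.nanoTime() - start) / 1_000_000;
                    System.out.println("Trial " + trial + ": " + rttMs + " ms");
                } catch (SocketTimeoutException e) {
                    System.out.println("Trial " + trial + ": timeout");
                }
            }
        }
    }
}
```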

  8. Client-Server communication • Login and get data when starting the game • Send each peer group's game state • TCP is acceptable here, as a connection is needed

  9. Client-Server communication [Diagram: the server linked to each coordinator, each coordinator also being a client] • The server is aware of each coordinator • If a coordinator is found to have left, another client is taken as the new coordinator • 'Ping' messages are sent to determine this

  10. Peers Communication • Flow of one command: a command comes from the keyboard → the game system formats it → the formatted command is sent out by the network methods → the receiver gets the packet and updates the graphics

  11. Peers Communication [Diagram: a coordinator, also a client, broadcasting to the clients in its group] • Broadcast controls to all other peers • No connection set-up is needed, which suits dynamic grouping • UDP is employed

  12. Peers Communication • Recognize the status of a peer • Use 'ping' messages, customized over UDP

  No. of times a ping message is sent | Time until a ping failure (min)
  1                                   | 3
  2                                   | 14
  3                                   | 29
  4                                   | no failure within the testing time

  Testing time: 30 minutes; timeout for one ping: 3 s; retry period: 4 s
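A sketch of how such a customized UDP ping with the quoted parameters (3 s timeout, 4 s retry period) might look; the peer address, message format and all names are hypothetical.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

// Sketch of a UDP 'ping' used to recognize a peer's status.
public class PeerLiveness {
    public static boolean isAlive(InetAddress peer, int port, int maxRetries)
            throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setSoTimeout(3000); // timeout for one ping: 3 s
            byte[] ping = "ping".getBytes();
            for (int i = 0; i < maxRetries; i++) {
                socket.send(new DatagramPacket(ping, ping.length, peer, port));
                try {
                    socket.receive(new DatagramPacket(new byte[16], 16));
                    return true;        // reply received: peer is alive
                } catch (SocketTimeoutException e) {
                    Thread.sleep(1000); // 3 s timeout + 1 s wait = 4 s retry period
                }
            }
            return false; // every retry timed out: treat the peer as gone
        }
    }
}
```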

  13. Peers Communication • Reliable transfer is still needed between peers, e.g. for being attacked or firing • Reliable protocol design • The stop-and-wait protocol is employed, with timeout and re-transmission • A sliding window protocol is not: message sizes are small, so parallel sending to one recipient is seldom needed
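A minimal sketch of a stop-and-wait sender over UDP with timeout and retransmission, assuming a 1-bit alternating sequence number and a one-byte ACK; the original protocol's exact framing is not specified, so these details are illustrative.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

// Hypothetical sender side of a stop-and-wait protocol over UDP.
public class StopAndWaitSender {
    private static final int TIMEOUT_MS = 3000; // retransmission timeout (assumed)
    private static final int MAX_RETRIES = 4;   // give up after this many tries (assumed)

    private final DatagramSocket socket;
    private final InetAddress peer;
    private final int port;
    private byte seq = 0; // 1-bit alternating sequence number

    public StopAndWaitSender(DatagramSocket socket, InetAddress peer, int port)
            throws Exception {
        this.socket = socket;
        this.peer = peer;
        this.port = port;
        socket.setSoTimeout(TIMEOUT_MS);
    }

    /** Sends one game message reliably; returns false if all retries fail. */
    public boolean send(byte[] message) throws Exception {
        byte[] framed = new byte[message.length + 1];
        framed[0] = seq;                       // prepend the sequence number
        System.arraycopy(message, 0, framed, 1, message.length);
        DatagramPacket out = new DatagramPacket(framed, framed.length, peer, port);
        DatagramPacket ack = new DatagramPacket(new byte[1], 1);

        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            socket.send(out);
            try {
                socket.receive(ack);
                if (ack.getData()[0] == seq) { // ACK echoes the sequence number
                    seq ^= 1;                  // flip for the next message
                    return true;
                }
            } catch (SocketTimeoutException e) {
                // timeout: fall through and retransmit
            }
        }
        return false;
    }
}
```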

  14. Peers Communication • Challenge: bursts of controls from a client • Congestion control prevents a congested channel • A simple rate-based scheme can be employed • Define a rate limit on the send channel • If over the limit, reduce the number of packets sent in the next frame
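A minimal sketch of such a per-frame send budget; the class, method names and limit value are hypothetical.

```java
// Hypothetical rate limiter for outgoing control packets.
public class SendRateLimiter {
    private final int maxPacketsPerFrame; // rate limit on the send channel
    private int sentThisFrame = 0;

    public SendRateLimiter(int maxPacketsPerFrame) {
        this.maxPacketsPerFrame = maxPacketsPerFrame;
    }

    /** Returns true if one more packet may be sent in the current frame. */
    public boolean trySend() {
        if (sentThisFrame >= maxPacketsPerFrame) {
            return false; // over the limit: defer to the next frame
        }
        sentThisFrame++;
        return true;
    }

    /** Called once per game frame to reset the budget. */
    public void nextFrame() {
        sentThisFrame = 0;
    }
}
```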

  15. Peers Communication • Some other congestion control schemes • AIMD congestion control • Used in TCP • Halves the rate each time congestion occurs • Equation-based congestion control • Involves complicated calculation • Would add to the workload of an already complicated game engine
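For comparison, the AIMD behaviour described above can be sketched in a few lines; the constants and names are assumptions for illustration, since this scheme was not adopted.

```java
// Hypothetical AIMD-style adjustment of the send rate.
public class AimdRate {
    private double rate = 10.0; // packets per frame (assumed starting rate)

    public void onAck()        { rate += 0.5; }                    // additive increase
    public void onCongestion() { rate = Math.max(1.0, rate / 2); } // halve on congestion

    public int currentLimit()  { return (int) rate; }
}
```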

  16. Mecha Zeta: Partitioning and Sound Engine (Sin Pak Fung, Lester)

  17. Network Architecture • Peer-to-peer architecture [Diagram: the server linked to each client by client-server communication, with the clients also linked among themselves by peer-to-peer communication]

  18. Partitioning system • Challenge: as the number of clients increases, the amount of network traffic grows quadratically, e.g. • 10 players: 10 × (10 − 1) = 90 messages • 100 players: 100 × (100 − 1) = 9,900 messages <110 times> • 1,000 players: 1,000 × (1,000 − 1) = 999,000 messages <11,100 times>

  19. Partitioning system • Idea: send a message only to those who need it • Theory: the game world is partitioned into different regions; each region is called a partitioned area, or a cell

  20. Partitioning system [Diagram: the clients of one cell communicating peer-to-peer among themselves, with client-server communication linking them to the server]

  21. Dynamic vs Static • A dynamic system • Initiates the game with one cell • As the number of players in a cell increases, splits the cell • Can control the maximum number of players in a cell • A static system • Partitions the world at compile time • Does not deal with runtime cell calculation • Can thus minimize calculation at runtime

  22. Dynamic vs Static • Static partitioning is selected, because • We use a P2P architecture, so every client should know whom it needs to communicate with • A dynamic system would require updating all players upon every change of cells (that is, the server updating n clients) • In a static system, the partitioned world can be pre-calculated and loaded into every client at compile time • In short, static partitioning is preferred in a P2P architecture, while dynamic partitioning is preferred in a client-server architecture • The cost: we sacrifice control of the maximum number of players in each cell

  23. Design and construction • The blue area: the area without any overlap; robots here send messages only to their own cell • The red area: the area overlapped by other cells; robots here send messages to their own cell and the other associated cells • The green area: the area where this cell overlaps its adjacent cells; robots there send their messages not only to their own cell, but also to this cell. A sketch of the resulting cell-membership computation follows.
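The sketch below computes which cells a robot must message under a static grid partitioning with overlapping borders, which is one way the blue/red/green behaviour above could be realized. The grid dimensions, cell size and overlap width are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of static partitioning with overlapping cells.
public class StaticPartition {
    static final int CELL_SIZE = 100;       // world units per cell (assumed)
    static final int OVERLAP = 10;          // width of the overlap border (assumed)
    static final int GRID_W = 10, GRID_H = 10;

    /** Returns the IDs of every cell a robot at (x, y) must send messages to. */
    static List<Integer> targetCells(double x, double y) {
        List<Integer> cells = new ArrayList<>();
        for (int cy = 0; cy < GRID_H; cy++) {
            for (int cx = 0; cx < GRID_W; cx++) {
                // Each cell's receiving region is its rectangle grown by OVERLAP,
                // so a robot in a border (red) area matches neighbouring cells too.
                double left   = cx * CELL_SIZE - OVERLAP;
                double right  = (cx + 1) * CELL_SIZE + OVERLAP;
                double top    = cy * CELL_SIZE - OVERLAP;
                double bottom = (cy + 1) * CELL_SIZE + OVERLAP;
                if (x >= left && x < right && y >= top && y < bottom) {
                    cells.add(cy * GRID_W + cx); // cell ID
                }
            }
        }
        return cells; // interior (blue) positions match exactly one cell
    }
}
```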

  24. Working principle • Robot joining • Add the robot to a cell • Send the cell information to the robot • Update the other robots in that cell • Robot movement • According to its position at that time, a robot sends messages to the particular cell(s)

  25. Working principle • Cell transition • Monitored by the server • If detected, update the cell ID of the robot • Update the robots in the new cell • Remove the robot from the original cell • Robot exiting • Remove the robot from the original cell
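A sketch of the server-side transition check; the Robot type, grid geometry and method names are hypothetical stand-ins for whatever the game server actually keeps.

```java
// Hypothetical server-side cell-transition detection.
public class CellTracker {
    static final int CELL_SIZE = 100, GRID_W = 10; // assumed grid geometry

    static class Robot { double x, y; int cellId; }

    /** Maps a position to its home cell ID. */
    static int cellOf(double x, double y) {
        return (int) (y / CELL_SIZE) * GRID_W + (int) (x / CELL_SIZE);
    }

    /** Detects a cell transition and updates the robot's cell ID. */
    static boolean onMove(Robot r) {
        int now = cellOf(r.x, r.y);
        if (now == r.cellId) return false;
        // Transition detected: the server would now update the robots in the
        // new cell and remove this robot from the original cell.
        r.cellId = now;
        return true;
    }
}
```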

  26. Result • For 100 players without the partitioning system • Every update of position costs 100 × (100 − 1) = 9,900 messages • Assume 10 cells are added, with 10 players per cell: 100 × (10 − 1) = 900 messages • Network traffic can thus be reduced substantially

  27. Discussion • Determination of cell transitions • Involves heavy computation • A coordinator could help monitor it • However • The server would still be involved in updating the other cells • And the extra load would be unfair to that coordinator • Dynamic vs Static • Static partitioning sacrifices control of the maximum number of players in each cell • If that control is to be stressed in a particular game, a dynamic system should be used

  28. Sound Engine • Using JDK 1.2 (making use of java.applet.AudioClip) • Using Java Sound API • Using Java Media Framework (JMF)

  29. Sound Engine • Little trick: pre-loading the audio clip • Response time is shortened • Play, loop or stop at the suitable moment • A sound engine is implemented this way
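A sketch of the pre-loading trick using java.applet.AudioClip (the JDK 1.2 option listed above); the clip file name and class name are assumptions.

```java
import java.applet.Applet;
import java.applet.AudioClip;
import java.net.URL;

// Sketch of pre-loading an audio clip so play() responds immediately.
public class SoundBank {
    private final AudioClip explosion;

    public SoundBank() {
        // Loading the clip once, up front, is what shortens the response time.
        // Assumes the resource exists on the classpath.
        URL url = SoundBank.class.getResource("/sounds/explosion.wav");
        explosion = Applet.newAudioClip(url);
    }

    public void playExplosion() { explosion.play(); } // one-shot
    public void loopExplosion() { explosion.loop(); } // continuous
    public void stopExplosion() { explosion.stop(); }
}
```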

  30. Mecha Zeta: Synchronization Mechanism (Wong Tin Chi, Ivan)

  31. Introduction • The role of synchronization mechanism • Current design trend • Synchronization mechanism in Mecha Zeta • Conclusion

  32. The role of Synchronization Mechanism • Minimize the adverse effect of network delay on the simulation. • Prediction and Correction • 2 sub-systems • Consistency protocol • Synchronization algorithm

  33. Focuses • Performance of synchronization algorithms on the two aspects • Responsiveness vs consistency • Minimal disturbance to the simulation • Computation and storage overhead of error recovery

  34. Current design trend • Consistency protocol • State-based • Command-based • Synchronization algorithm • Conservative • Lockstep • Chandy-Misra • Optimistic • TimeWarp • Breathing • Bucket

  35. Challenges in Mecha Zeta • Frequent P2P communications • Requires fast response • Responsiveness and consistency • Disturbance to the simulation • Large game state • Computational and storage overhead

  36. Synchronization mechanism in Mecha Zeta • Overall Architecture [Diagram: the Game Engine, Synchronization Engine and GameState Engine exchanging commands + events (E0) and game states (G0, G1), with a rollback path back into the Game Engine] • Semantic remark: 'Command' hereafter means command or event • Game clock: NTP • Command classification

  37. Protocol & Algorithm • Consistency Protocol • Command-based • Synchronization algorithm • Optimistic • Bucket Synchronization (Hybrid) • Multi-States Synchronization

  38. Bucket Synchronization • Idea: employ the bucket mechanism to buffer the incoming events and commands, but execute them optimistically; this also reduces the number of game states kept • Inherits • Buffering (from Bucket) • Commands and game states archived for future rollback (from TimeWarp) • Threshold (from Breathing) • Advances • Lower storage overhead than TimeWarp • Faster response than Bucket [Timeline diagram: simulation time on the local host, marked at 200 ms, 400 ms and 600 ms]
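A schematic sketch of the bucket buffering step only, assuming 200 ms buckets as in the diagrams; the Command type and class names are hypothetical, and the optimistic execution and rollback logic are omitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Schematic bucket mechanism: incoming commands are grouped into fixed
// 200 ms buckets by timestamp and handed to the engine together.
public class BucketBuffer {
    interface Command { long timestamp(); }

    private static final long BUCKET_MS = 200;
    private final Map<Long, List<Command>> buckets = new HashMap<>();

    void add(Command c) {
        long bucket = c.timestamp() / BUCKET_MS; // which 200 ms slot it belongs to
        buckets.computeIfAbsent(bucket, k -> new ArrayList<>()).add(c);
    }

    /** Drains the bucket that closed just before the current simulation time. */
    List<Command> drain(long nowMs) {
        List<Command> due = buckets.remove(nowMs / BUCKET_MS - 1);
        return due != null ? due : new ArrayList<>();
    }
}
```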

  39.–43. Bucket Synchronization • Example [Timeline diagrams, one per slide: the local host's simulation time runs in 200 ms buckets; commands from Host 2 and Host 3 stamped 420, 470, 580, 430 and 600 fall into the current bucket and are executed optimistically; a late command stamped 260, arriving at 680, falls into an already-executed bucket and triggers a rollback]

  44. Multi-States Synchronization • Idea: instead of locating the error point when there is a mistake, get the data for rollback from a parallel execution of the game • Inherits • Buffering (from Bucket) • Commands and game states archived for future rollback (from TimeWarp) • Threshold (from Breathing) • Advances • Lower storage overhead than TimeWarp • Faster response than Bucket • Lower computational overhead

  45.–46. Multi-States Synchronization • Example [Timeline diagrams: each host keeps a saved state S0 together with its lists of executed and pending commands; at current time 500, commands stamped 450 and 430 are shown; at current time 630, a late command forces a rollback to S0]
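A very schematic sketch of the multi-states idea: a trailing, known-consistent copy of the game state is kept some delay behind the optimistically executed leading copy, so a rollback restores the trailing copy instead of locating the exact error point. GameState, Command and all names here are hypothetical stand-ins, and out-of-order arrival handling is omitted.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Schematic multi-states (parallel-execution) synchronization.
public class MultiStateSync {
    interface Command { long timestamp(); void applyTo(GameState s); }
    static class GameState {
        GameState copy() { return new GameState(); } // a real engine would deep-copy
    }

    private GameState leading = new GameState();   // executed optimistically
    private GameState trailing = new GameState();  // delayed, known-consistent
    private final Deque<Command> pending = new ArrayDeque<>(); // not yet in trailing
    private final long delayMs;

    MultiStateSync(long delayMs) { this.delayMs = delayMs; }

    void onCommand(Command c, long now) {
        c.applyTo(leading); // optimistic execution on the leading state
        pending.addLast(c);
        // Commands older than the delay are assumed safe for the trailing state.
        while (!pending.isEmpty()
                && pending.peekFirst().timestamp() <= now - delayMs) {
            pending.removeFirst().applyTo(trailing);
        }
    }

    void rollback() {
        // Restore from the parallel execution instead of locating the error point.
        leading = trailing.copy();
        for (Command c : pending) c.applyTo(leading); // re-execute pending commands
    }
}
```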

  47. Bucket vs MSS

  48. Evaluation • Number of rollbacks vs frequency of commands • Measures the capacity of the synchronization algorithms • PI = no. of rollbacks at the same frequency (70 ms) • Number of rollbacks vs synchronization delay • Optimizes consistency against responsiveness • PI = no. of rollbacks at the same delay (100 ms) • Rollback cost • Computational overhead • PI = mean rollback cost (ms)

  49. Conclusion • The synchronization delay determines • Responsiveness vs consistency • Storage and computational overhead • Further study: adjusting the synchronization delay dynamically to network conditions

  50. Mecha Zeta: Graphic Engine and Collision Detection (Ho King Hang, Tabris)
