
Scheduling for Very Large Virtual Environments Using Visibility and Priorities



  1. Scheduling for Very Large Virtual Environments Using Visibility and Priorities • Chris Faisstnauer, Dieter Schmalstieg, Werner Purgathofer • Vienna University of Technology

  2. Introduction • Distributed Virtual Environments / networked games • Contention for limited resources (CPU, rendering pipeline) • Network bandwidth limitations • Degradation of the system’s performance • Popular approach: client-server setup • Scene managed by the server / replicated by the clients • Repeatedly transmit update messages to the clients • Timely delivery is essential → late updates cause visual error • Message filtering: visibility culling • Overhead: examine all objects for each client

  3. Problem • Discarding updates of invisible objects • Each client has its own point of view • Examination of all objects for each client • Assume: n = number of objects = number of clients → effort: O(n²) → poor scalability • Filtering techniques do not schedule the remaining objects • If they exceed the network bandwidth → bottleneck

  4. Solution • Technique to manage the transmission of update messages • Constant overhead: O(k) per connected client • Overall computational cost reduced to linear effort: O(k·n) = O(n) for n connected clients • Prioritized scheduling: Priority Round-Robin algorithm • Employing visibility information (culling) • Activity monitoring: copes with unpredictable behavior • Virtual environments / networked games of any size • Output sensitive → scalability • Server-controlled objects / user-controlled avatars

  5. Related Work • Filtering techniques: propagation on need-to-know basis • Area of Interest (exploit communication locality) • Regular subdivision (NPSNET-IV), proximity (DIVE) • Pre-determined inter-cell occlusion (SPLINE) • View cone (AVIARY), visibility culling (RING) • Explicitly registering interest (NPSNET-IV, AVIARY) • Dead Reckoning (NPSNET, PARADISE, NETEFFECT) • Network topology: IP-Multicast + Message filtering

  6. Background: Priority Round-Robin • Scheduling technique that combines the advantages of • Round-Robin (output sensitive, starvation free) • Multi-Level Feedback Queue (enforces priorities) • Elements compete for resources → accumulate error • Priorities based on an error metric → Error Per Unit (EPU) • Goal: minimize the cumulative error • No traditional sorting • Approximate sorting in multiple levels (FIFO queues) • Elements assigned to a level using their EPU • Level priority reflects the scheduling frequency
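
A minimal Python sketch of the scheme described above, for illustration only: elements carry an Error-Per-Unit value, are assigned to one of several FIFO levels by that EPU, and the scheduler cycles over the levels, serving one element per visited level and re-appending it so that nothing starves. The class name `PRRScheduler`, the number of levels and the uniform split of the EPU range into levels are assumptions, not the authors' implementation.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    epu: float  # Error Per Unit: error accumulated per scheduling step the element waits

class PRRScheduler:
    """Sketch of Priority Round-Robin: approximate sorting of elements into
    FIFO levels by EPU instead of a full sort; level 0 holds the highest-EPU
    (highest-priority) elements."""

    def __init__(self, num_levels=3, epu_min=0.0, epu_max=10.0):
        self.levels = [deque() for _ in range(num_levels)]
        self.epu_min, self.epu_max = epu_min, epu_max
        self.current = 0  # level visited by the next scheduling step

    def _level_of(self, epu):
        # Assumption: the covered EPU interval is split uniformly across the levels.
        span = (self.epu_max - self.epu_min) / len(self.levels)
        idx = int((self.epu_max - epu) // span)
        return min(max(idx, 0), len(self.levels) - 1)

    def insert(self, element):
        self.levels[self._level_of(element.epu)].append(element)

    def next(self):
        # Visit the levels round-robin; serve the oldest element of the visited
        # level and re-append it (FIFO), so no element can starve.
        for _ in range(len(self.levels)):
            level = self.levels[self.current]
            self.current = (self.current + 1) % len(self.levels)
            if level:
                element = level.popleft()
                level.append(element)
                return element
        return None  # all levels are empty
```

In the full algorithm the levels are additionally traversed at different rates (see slides 8 and 9); this sketch visits each level once per cycle, which already matches the selection pattern shown on the next slide.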

  7. Priority Round-Robin (2/3) • Example with three levels (i = 0, 1, 2) • Repetition Count_i = NoElements_i × NoLevels • Predicted Error = ErrorPerUnit × Repetition Count • Selected elements: A,C,G - B,D,G - A,E,G - B,F,G
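
As a worked instance of the two formulas, using the level contents suggested by the selection sequence above (level 0 = {A, B}, level 1 = {C, D, E, F}, level 2 = {G}); the concrete EPU value in the last comment is made up for illustration:

```python
# Repetition Count_i = NoElements_i * NoLevels  (scheduling steps between two
# selections of the same element); Predicted Error = ErrorPerUnit * Repetition Count.
no_levels = 3
levels = {0: ["A", "B"], 1: ["C", "D", "E", "F"], 2: ["G"]}  # read off the example above

for i, elems in levels.items():
    repetition_count = len(elems) * no_levels
    print(f"level {i}: repetition count = {repetition_count} steps")

# Level 0: 2 * 3 = 6 steps (A recurs every second round of three selections),
# level 1: 4 * 3 = 12 steps, level 2: 1 * 3 = 3 steps, matching the sequence above.
# An element with EPU = 0.5 in level 1 thus has a predicted error of 0.5 * 12 = 6.
```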

  8. Priority Round-Robin (3/3) • Assignment of elements to levels according to the average EPU • Variable-size levels • Dynamic VE → dynamic error distribution • Varying traversal rate for each level i
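
The slide does not spell out how the variable-size levels follow a shifting error distribution; the sketch below simply re-partitions the elements by splitting the currently observed EPU range into equal sub-intervals, so the level sizes vary with the distribution. That particular splitting rule is an assumption for illustration, not the authors' exact rule.

```python
def repartition(elements, num_levels=3):
    """Re-assign elements to levels after the error distribution of the
    dynamic VE has shifted. Splitting the observed EPU range into equal
    sub-intervals is an illustrative assumption; the resulting level sizes
    vary with the distribution ("variable size levels")."""
    epu_min = min(e.epu for e in elements)
    epu_max = max(e.epu for e in elements)
    span = (epu_max - epu_min) / num_levels or 1.0  # guard against a zero range
    levels = [[] for _ in range(num_levels)]
    for e in elements:
        idx = min(int((epu_max - e.epu) / span), num_levels - 1)  # level 0 = highest EPU
        levels[idx].append(e)
    return levels
```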

  9. Optimum Traversal Rate • ne_i … number of elements in level i • av_i … average EPU of level i • tr_i … traversal rate of level i • Optimum traversal rates derived from ne_i and av_i (formula given on the slide)

  10. Using Visibility Information (1/2) • Occlusion: indoor (rooms, buildings), outdoor (‘fog of war’) • Visibility culling carried out with: • ‘Potentially visible sets’ of cells (pre-computed) • Temporal Bounding Volumes, Update Free Regions
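
As a rough illustration of cell-based culling with pre-computed potentially visible sets, the sketch below looks up the cell the client is in and keeps only the objects located in cells of that cell's PVS. The dictionary-shaped PVS and the `cell` attribute are assumed data layouts, not those of the systems cited above.

```python
def visible_objects(objects, client_cell, pvs):
    """Keep only the objects whose cell belongs to the pre-computed
    potentially visible set (PVS) of the client's current cell.
    `pvs` maps a cell id to the set of cell ids visible from it."""
    visible_cells = pvs.get(client_cell, set())
    return [obj for obj in objects if obj.cell in visible_cells]

# Hypothetical usage: cells 0 and 1 see each other, cell 2 is occluded from both.
# pvs = {0: {0, 1}, 1: {0, 1}, 2: {2}}
```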

  11. Using Visibility Information (2/2) • Traditional: visibility culling → RR/FIFO scheduling • New approach: PRR scheduling → visibility culling • Repeatedly schedule k elements per client • Effort O(k·n) = O(n) for n connected clients • Priority: • Visible objects: object velocity • Invisible objects: shortest path to the visible area (TBV)
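
A minimal sketch of that priority rule: visible objects get an EPU proportional to their velocity, invisible ones an EPU that grows as their shortest path to the visible area shrinks. The exact mapping for invisible objects (the 1 / (1 + distance) term) is an assumption; the slide only names the quantities involved.

```python
def error_per_unit(obj, is_visible, path_to_visible_area):
    """Illustrative EPU assignment following the slide: visible objects are
    prioritized by velocity, invisible objects by how close they are to the
    visible area (shorter path -> visible sooner -> higher priority)."""
    if is_visible:
        return obj.velocity
    return 1.0 / (1.0 + path_to_visible_area)
```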

  12. Activity Monitoring • Scheduling frequency determined by the relation of the EPU values • Unpredictable / rapidly changing behavior → EPU becomes invalid • Penalty = error caused by a change in EPU • Benefit = error advantage of PRR over plain RR • Switching: fall back to RR behavior (ignore priorities) • Damping: maximum difference between traversal rates • MaxDiff = EPU interval covered / number of levels
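
One way to read the damping rule as a sketch: limit how much the traversal rates of neighbouring levels may differ, with the bound MaxDiff computed as on the slide. Using that bound directly on the rates, and applying the clamp from the highest-priority level downwards, are assumptions about details the slide leaves open.

```python
def damp_traversal_rates(rates, epu_min, epu_max):
    """Illustrative damping: neighbouring levels' traversal rates may differ
    by at most MaxDiff = (EPU interval covered) / (number of levels).
    Clamping from level 0 downwards is an assumed application of the rule."""
    max_diff = (epu_max - epu_min) / len(rates)
    damped = [rates[0]]
    for rate in rates[1:]:
        prev = damped[-1]
        damped.append(max(prev - max_diff, min(rate, prev + max_diff)))
    return damped

# Hypothetical usage: damp_traversal_rates([8.0, 2.0, 1.0], 0.0, 9.0) -> [8.0, 5.0, 2.0]
```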

  13. Evaluation • Client-server system • Environment generated from a triangulated floorplan • Server translates objects along randomized paths • Client visualizes the scene • Transmit a subset of the position updates → PRR scheduling • Visual error: distance between an object’s position on the server and on the client • Evaluation of PRR (Priority Round-Robin) • Comparison PRR vs. plain RR • Performance of PRR with and without activity monitoring
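
The visual error metric named above can be sketched as the distance between an object's current position on the server and the last position transmitted to the client; the tuple-shaped positions and the example numbers are assumptions.

```python
import math

def visual_error(server_pos, client_pos):
    """Visual error of one object: distance between its current position on
    the server and the (possibly stale) position the client last received."""
    return math.dist(server_pos, client_pos)

# Hypothetical usage: the object moved 1 unit since the last transmitted update.
# visual_error((10.0, 4.0), (9.0, 4.0))  -> 1.0
```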

  14. Evaluation Testbed

  15. Example: uniform error distribution • Scheduling 1000 out of 10000 simulated cars (10%) • Velocities (in units): 10000 cars with velocity ∈ [1, 10] • Overall error of PRR is 92% lower than RR

  16. Example: clustered error distribution • Scheduling 1000 out of 10000 simulated cars (10%) • Velocities (in units): 500 cars with velocity ∈ [9, 10] • 200 cars with velocity ∈ [3, 4] • 7500 cars with velocity ∈ [0.1, 0.5] • Overall error of PRR is 307% lower than RR

  17. Example: activity monitoring on • Scheduling 1000 out of 10000 simulated cars (10%) • Random movement with velocity ∈ [0.1, 10], shuffled every 10 loops

  18. Example: activity monitoring off • Scheduling 1000 out of 10000 simulated cars (10%) • Random movement with velocity ∈ [0.1, 10], shuffled every 10 loops

  19. Conclusions • Enhanced Priority Round-Robin algorithm • Handles the transmission of update messages server → client • Constant effort per connected client • Update frequency (priorities) determined from • Object behavior • Visibility information • Scalable / graceful degradation • Substitute for plain Round-Robin • Suitable for very large Distributed Virtual Environments • Networked games • Handles unpredictable behavior (user-controlled avatars)

  20. Future Work • Scheduling of hierarchical human models (avatars) • Employing Multiple Levels of Detail (LOD) • Construction of an extended environment • Large number of rooms, buildings, open landscapes • Large number of avatars (hierarchical human models) • Evaluate perceptual error metrics • Evaluate motion data from very large online games (e.g. “EverQuest”, “Ultima Online”)
