Towards Self-Optimizing Frameworks for Collaborative Systems

Presentation Transcript


  1. Towards Self-Optimizing Frameworks for Collaborative Systems Sasa Junuzovic (Advisor: Prasun Dewan) University of North Carolina at Chapel Hill

  2. Collaborative Systems. Shared Checkers Game: User1 Enters Command ‘Move Piece’; User1 Sees ‘Move’; User2 Sees ‘Move’.

  3. Performance in Collaborative Systems. Performance is Important! Example: Candidates’ Day at UNC. A UNC Professor is Demoing the Game to a Candidate Student at Duke; Poor Interactivity Causes the Candidate to Quit the Game.

  4. Improving Performance in Everyday Life. (Map: Chapel Hill and Raleigh.)

  5. Window of Opportunity for Improving Performance Requirements Window of Opportunity [18] Insufficient Resources Sufficient but Scarce Resources Always Poor Performance Improve Performance from Poor to Good Abundant Resources Always Good Performance Resources [18] Jeffay, K. Issues in Multimedia Delivery Over Today’s Internet. IEEE Conference on Multimedia Systems. Tutorial. 1998.

  6. Window of Opportunity in Collaborative Systems Requirements Window of Opportunity Focus Resources

  7. Thesis For certain classes of applications, it is possible to meet performance requirements better than existing systems through a new collaborative framework without requiring hardware, network, or user-interface changes.

  8. Performance Improvements: Actual Result. (Chart: performance over time, with and without optimization.) Initially No Performance Improvement; then the Self-Optimizing System Improves Performance; then the Self-Optimizing System Improves Performance Again.

  9. What Do We Mean by Performance? (Same chart: with and without optimization.) What Aspects of Performance are Improved?

  10. Performance Metrics Focus Performance Metrics: Local Response Times [20] Remote Response Times [12] Jitter [15] Throughput [13] Task Completion Time [10] Bandwidth [16] [10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002. [12] Ellis, C.A. and Gibbs, S.J. Concurrency control in groupware systems. ACM SIGMOD Record. Vol. 18 (2). Jun 1989. pp: 399-407. [13] Graham, T.C.N., Phillips, W.G., and Wolfe, C. Quality Analysis of Distribution Architectures for Synchronous Groupware. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2006. pp: 1-9. [15] Gutwin, C., Dyck, J., and Burkitt, J. Using Cursor Prediction to Smooth Telepointer Actions. ACM Conference on Supporting Group Work (GROUP). 2003. pp: 294-301. [16] Gutwin, C., Fedak, C., Watson, M., Dyck, J., and Bell, T. Improving network efficiency in real-time groupware with general message compression. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 119-128. [20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.

  11. Local Response Time: Some User Enters a Command; That Same User Sees the Output for the Command. Example: User1 Enters ‘Move Piece’; User1 Sees ‘Move’.

  12. Remote Response Time: Some User Enters a Command; Another User Sees the Output for the Command. Example: User1 Enters ‘Move Piece’; User2 Sees ‘Move’.
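
  The two metrics can be written compactly; the timestamps below are notation introduced here for illustration, not symbols from the slides.

  ```latex
  % t_i^{in}(c):  time at which user i enters command c
  % t_j^{out}(c): time at which user j sees the output of command c
  \begin{align*}
    \text{Local response time:}  &\quad R_{\mathrm{local}}(c)  = t_i^{\mathrm{out}}(c) - t_i^{\mathrm{in}}(c)\\
    \text{Remote response time:} &\quad R_{\mathrm{remote}}(c) = t_j^{\mathrm{out}}(c) - t_i^{\mathrm{in}}(c), \qquad j \neq i
  \end{align*}
  ```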

  13. Noticeable Performance Difference? (Chart: the with- and without-optimization response times differ, e.g. 21ms vs. 80ms and 180ms vs. 300ms.) When are Performance Differences Noticeable?

  14. Noticeable Response Time Thresholds. Local Response Times: < 50ms [20][23]. Remote Response Times: < 50ms [17]. [17] Jay, C., Glencross, M., and Hubbold, R. Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 14 (2). Aug 2007. Article 8. [20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997. [23] Youmans, D.M. User requirements for future office workstations with emphasis on preferred response times. IBM United Kingdom Laboratories. Sep 1981.

  15. Self-Optimizing System Noticeably Improves Response Times. (Chart: the differences between the two cases, e.g. 80ms vs. 21ms and 300ms vs. 180ms, exceed the 50ms threshold.) Noticeable Improvements!

  16. Simple Response Time Comparison: Average Response Times. User1 / User2 response times: Optimization A 100ms / 200ms; Optimization B 200ms / 100ms. By average response time (150ms each), Optimization A and Optimization B are equally good.

  17. Simple Response Time Comparison May not Give the Correct Answer. Response Times are More Important for Some Users (here User2’s importance exceeds User1’s). With the same response times (Optimization A: 100ms / 200ms; Optimization B: 200ms / 100ms), the comparison flips: Optimization B is better than Optimization A even though their averages are equal.
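
  A small worked example of this point; the importance weights are illustrative, not from the slides.

  ```latex
  % Unweighted averages tie:
  %   (100 + 200)/2 = (200 + 100)/2 = 150 ms.
  % If User2's response time is weighted twice as heavily as User1's:
  \mathrm{cost}(A) = 1\cdot 100 + 2\cdot 200 = 500\,\mathrm{ms}, \qquad
  \mathrm{cost}(B) = 1\cdot 200 + 2\cdot 100 = 400\,\mathrm{ms},
  % so Optimization B is preferred even though the plain averages are equal.
  ```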

  18. User’s Response Time Requirements. Is Optimization A <, =, or > Optimization B? External Criteria From Users are Needed to Decide. External Criteria and the Required Data: Favor Important Users requires the Identity of Users; Favor Local or Remote Response Times requires the Identity of Users Who Input and Users Who Observe; Arbitrary criteria require Arbitrary data. Users Must Provide a Response Time Function that Encapsulates the Criteria; the Self-Optimizing System Provides the Predicted Response Times and the Required Data.
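
  One way to picture the response time function the slide asks users to supply is as a pluggable cost function; the interface below is a hypothetical sketch, not the framework's actual API.

  ```java
  import java.util.Map;

  // Hypothetical sketch of a user-supplied response time function: the
  // self-optimizing system hands it the predicted response times plus the
  // required data (user identities), and picks the configuration whose cost
  // is lowest. Names and signature are illustrative only.
  public interface ResponseTimeFunction {
      double cost(Map<String, Double> predictedLocalMs,   // predicted local response time per user
                  Map<String, Double> predictedRemoteMs,  // predicted remote response time per user
                  String inputtingUser);                  // identity of the user who entered the command
  }
  ```

  For example, the "favor important users" criterion from the table becomes a weighted sum over the predicted times, with larger weights for the important users.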

  19. Main Contributions Better Meet Response Time Requirements than Other Systems! Important Response Time Factors Collaboration Architecture Multicast Scheduling Policy Studied the Impact on Response Times Chung [10] Automated Maintenance Wolfe et al. [22] [10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002. [22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.

  20. Illustrating Contribution Details: Scheduling Policy Better Meet Response Time Requirements than Other Systems! Important Response Time Factors Illustration of Contributions Collaboration Architecture Multicast Scheduling Policy Single-Core Studied the Impact on Response Times Automated Maintenance

  21. Scheduling Collaborative Systems Tasks. Scheduling Requires a Definition of Tasks: Collaborative Systems Tasks (Defined by the Collaboration Architecture) and External Application Tasks. The Typical Working Set of Applications is not Known, so We Use an Empty Working Set.

  22. Collaboration Architectures. Program Component (P): Manages the Shared State; May or May not Run on Each User’s Machine. User Interface (U): Allows Interaction with the Shared State; Runs on Each User’s Machine.

  23. Collaboration Architectures. Each User Interface must be Mapped to a Program Component [10]. The User Interface Sends Input Commands to the Program Component; the Program Component Sends Outputs to the User Interface. [10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.
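
  In code, the mapping described on slides 22-23 might look like the following interfaces; the names are illustrative, not the actual framework types.

  ```java
  // Hypothetical sketch of the program component (P) and user interface (U) roles.
  interface InputCommand {}                    // e.g. "Move Piece"
  interface Output {}                          // e.g. the updated board

  interface ProgramComponent {                 // P: manages the shared state
      Output process(InputCommand command);    // compute the output for an input
  }

  interface UserInterface {                    // U: runs on each user's machine
      void mapTo(ProgramComponent program);    // each U is mapped to some P [10]
      void sendInput(InputCommand command);    // U -> P: input commands
      void showOutput(Output output);          // P -> U: outputs
  }
  ```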

  24. Popular Mappings: Centralized Mapping. One Program Component (P1) is Mapped to All User Interfaces (U1, U2, U3): the Computer Running P1 is the Master; the Others are Slaves.

  25. Popular Mappings: Replicated Mapping. Each User Interface (U1, U2, U3) is Mapped to its Own Program Component (P1, P2, P3): Every Computer is a Master.

  26. Communication Architecture. In Both Mappings, the Masters Perform the Majority of the Communication Task: a Centralized Master Sends Output to All Computers; Replicated Masters Send Input to All Computers. Which Communication Model? Push-Based (the Focus here), Pull-Based, or Streaming; and Unicast or Multicast?

  27. Unicast vs. Multicast. (Diagram: a source computer delivering a command to computers 2 through 10.) Unicast: Transmission Is Performed Sequentially; If the Number of Users Is Large, Transmission Takes a Long Time. Multicast: Transmission is Performed in Parallel; it Relieves a Single Computer From Performing the Entire Transmission Task.

  28. Collaborative Systems Tasks. Unicast: Only the Source Has to Both Process and Transmit Commands. Multicast: Any Computer May Have to Both Process and Transmit Commands.
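
  The difference between the two transmission schemes on slides 27-28 can be sketched as follows; the Node type and overlay tree are assumptions for illustration, not the framework's API.

  ```java
  import java.util.List;

  // Unicast: the source transmits to every destination itself.
  // Application-level multicast: each computer forwards to only its children
  // in an overlay tree, so the transmission work is shared and parallelized.
  final class Transmission {
      // Unicast: sequential sends; transmission time grows with the number of users.
      static void unicast(Node source, List<Node> destinations, byte[] command) {
          for (Node dest : destinations) {
              source.send(dest, command);
          }
      }

      // Multicast: every node (not just the source) may have to forward the
      // command, so any computer may both process and transmit it.
      static void forward(Node self, byte[] command) {
          for (Node child : self.children()) {
              self.send(child, command);
          }
      }
  }

  interface Node {
      void send(Node to, byte[] payload);
      List<Node> children();   // this node's children in the multicast overlay
  }
  ```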

  29. Processing and Transmission Tasks. Mandatory Tasks (the Focus): Processing of User Commands and Transmission of User Commands. Optional Tasks Related to: Concurrency Control [14], Consistency Maintenance [21], and Awareness [9]. [9] Begole, J., Rosson, M.B., and Shaffer, C.A. Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 6 (2). Jun 1999. pp: 95-132. [14] Greif, I., Seliger, R., and Weihl, W. Atomic Data Abstractions in a Distributed Collaborative Editing System. Symposium on Principles of Programming Languages. 1986. pp: 160-172. [21] Sun, C. and Ellis, C. Operational transformation in real-time group editors: issues, algorithms, and achievements. ACM Conference on Computer Supported Cooperative Work (CSCW). 1998. pp: 59-68.

  30. CPU vs. Network Card Transmission. The Transmission of a Command Involves Two Tasks. CPU Transmission Task: Schedulable with Respect to CPU Processing. Network Card Transmission Task (the Processing Task in Networking Terms): Follows the CPU Transmission; Runs in Parallel with CPU Processing (non-blocking); Not Schedulable.
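
  A rough way to see the split is a non-blocking send: the CPU part (serializing the command and handing the bytes to the network stack) is what the scheduler can reorder against processing, while the network card completes the transfer on its own. The helper below is an assumed sketch, not the framework's code.

  ```java
  import java.io.IOException;
  import java.nio.ByteBuffer;
  import java.nio.channels.SocketChannel;

  final class CommandSender {
      // CPU transmission task: runs on the CPU, so it is schedulable with
      // respect to the command-processing task.
      static ByteBuffer serialize(String command) {
          return ByteBuffer.wrap(command.getBytes());
      }

      // Hand-off to a non-blocking channel: the call returns once the bytes
      // are queued; the network card transmits them in parallel with whatever
      // the CPU does next, so this part is not schedulable.
      static void transmit(SocketChannel channel, ByteBuffer data) throws IOException {
          channel.write(data);
      }
  }
  ```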

  31. Impact of Scheduling on Local Response Times. User1 Enters Command ‘Move Piece’. Intuitively, to Minimize Local Response Times, the CPU should Process the Command First and Transmit the Command Second. Reason: the Local Response Time does not Include the CPU Transmission Time on User1’s Computer.

  32. Impact of Scheduling on Remote Response Times. User1 Enters Command ‘Move Piece’. Intuitively, to Minimize Remote Response Times, the CPU should Transmit the Command First and Process the Command Second. Reason: Remote Response Times do not Include the Processing Time on User1’s Computer.

  33. Intuitive Choice of Single-Core Scheduling Policy. Local Response Times Important: Use Process-First Scheduling. Remote Response Times Important: Use Transmit-First Scheduling. Both Important: Use Concurrent Scheduling?
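
  The three intuitive policies from slides 31-33 can be sketched on a single core as follows; the class and method names are mine, and this is a simplification, not the dissertation's scheduler.

  ```java
  final class SingleCoreScheduler {
      private final Runnable processTask;    // process the command locally
      private final Runnable transmitTask;   // CPU part of transmitting the command

      SingleCoreScheduler(Runnable processTask, Runnable transmitTask) {
          this.processTask = processTask;
          this.transmitTask = transmitTask;
      }

      void processFirst() {                  // favors local response times
          processTask.run();
          transmitTask.run();
      }

      void transmitFirst() {                 // favors remote response times
          transmitTask.run();
          processTask.run();
      }

      void concurrent() throws InterruptedException {
          Thread p = new Thread(processTask);
          Thread t = new Thread(transmitTask);
          p.start();                         // on a single core the OS time-slices
          t.start();                         // the two tasks, delaying both
          p.join();
          t.join();
      }
  }
  ```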

  34. Scheduling Policy Response Time Tradeoff. Local Response Times: Process-First < Concurrent < Transmit-First. Remote Response Times: Process-First > Concurrent > Transmit-First. A Tradeoff.

  35. Process-First: Good Local, Poor Remote Response Times. Local Response Times: Process-First < Concurrent < Transmit-First. Remote Response Times: Process-First > Concurrent > Transmit-First.

  36. Transmit-First: Good Remote, Poor Local Response Times. Local Response Times: Process-First < Concurrent < Transmit-First. Remote Response Times: Process-First > Concurrent > Transmit-First.

  37. Concurrent: Poor Local, Poor Remote Response Times. Local Response Times: Process-First < Concurrent < Transmit-First. Remote Response Times: Process-First > Concurrent > Transmit-First.

  38. New Scheduling Policy. The Current Scheduling Policies (Transmit-First, Process-First, Concurrent) take a Systems Approach and Tradeoff Local and Remote Response Times in an All-or-Nothing Fashion (CollaborateCom 2008 [6]). A New Scheduling Policy is Needed that also Draws on Psychology: the Lazy Policy (ACM GROUP 2009 [8]). [6] Junuzovic, S. and Dewan, P. Serial vs. Concurrent Scheduling of Transmission and Processing Tasks in Collaborative Systems. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2008. [8] Junuzovic, S. and Dewan, P. Lazy scheduling of processing and transmission tasks in collaborative systems. ACM Conference on Supporting Group Work (GROUP). 2009. pp: 159-168.

  39. Controlling the Scheduling Policy Response Time Tradeoff. Local Response Times: an Unnoticeable Increase (< 50ms). Remote Response Times: a Noticeable Decrease (> 50ms).

  40. Lazy Scheduling Policy Implementation. User1 Enters Command ‘Move Piece’. Basic Idea: (1) Temporarily Delay Processing and Transmit during the Delay, Keeping the Delay Below the Noticeable Threshold; (2) Process; (3) Complete Transmitting. Benefit: Compared to Process-First, User2’s Remote Response Time Improved and the Others did not Notice a Difference in Response Times.
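
  A minimal sketch of the basic idea, assuming a 50ms noticeable threshold and one send task per destination; this is an illustration, not the actual implementation.

  ```java
  import java.util.Queue;

  // Lazy policy, slide 40: delay processing while transmitting, but never let
  // the accumulated delay become noticeable; then process, then finish the
  // remaining transmissions.
  final class LazyScheduler {
      private static final long NOTICEABLE_DELAY_MS = 50;   // threshold from slide 14

      static void handleCommand(Runnable processTask, Queue<Runnable> sendsPerDestination) {
          long start = System.currentTimeMillis();

          // 1. Transmit during the delay, one destination at a time.
          while (!sendsPerDestination.isEmpty()
                  && System.currentTimeMillis() - start < NOTICEABLE_DELAY_MS) {
              sendsPerDestination.poll().run();
          }

          // 2. Process before the delay becomes noticeable.
          processTask.run();

          // 3. Complete the remaining transmissions.
          while (!sendsPerDestination.isEmpty()) {
              sendsPerDestination.poll().run();
          }
      }
  }
  ```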

  41. Evaluating the Lazy Scheduling Policy. Goal: Improve the Response Times of Some Users Without Noticeably Degrading the Response Times of Others. Local Response Times: the Difference Between Process-First and Lazy is Unnoticeable (By Design). Remote Response Times: Process-First > Lazy (a Free Lunch).

  42. Analytical Equations Mathematical Equations (in Thesis) Rigorously Show Benefits of Lazy Policy Model Supports Concurrent Commands and Type-Ahead Flavor: Capturing Tradeoff Between Lazy and Transmit-First

  43. Analytical Equations: Lazy vs. Transmit-First. (Diagram: a command travels along an overlay path of computers 1 through 6.) The Remote Response Time Decomposes into the Sum of Network Latencies on the Path (Scheduling Policy Independent), the Sum of Intermediate Computer Delays, and the Destination Computer Delay.
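
  In symbols, the decomposition on this slide reads roughly as follows; the notation is mine, a hedged reconstruction of the thesis equations rather than a quotation of them.

  ```latex
  % Command entered at computer c_1 and observed at destination d = c_n along
  % the overlay path c_1, c_2, ..., c_n.
  R_{\mathrm{remote}}(d) \;=\;
      \underbrace{\sum_{k=1}^{n-1} L_{c_k, c_{k+1}}}_{\substack{\text{network latencies on path}\\ \text{(scheduling policy independent)}}}
    + \underbrace{\sum_{k=1}^{n-1} D_{c_k}}_{\text{intermediate computer delays}}
    + \underbrace{D_{c_n}}_{\text{destination computer delay}}
  ```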

  44. Intermediate Delays. Compare the Transmit-First Intermediate Computer Delay with the Lazy Intermediate Computer Delay. Case 1: Transmit Before Processing. Case 2: Transmit After Processing.

  45. Intermediate Delay Equation Derivation: Transmit-First Intermediate Computer Delay vs. Lazy Intermediate Computer Delay.

  46. Transmit-First Intermediate Delays. The Transmit-First Intermediate Computer Delay is the Time Required to Transmit to the Next Computer on the Path.
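
  In the same assumed notation, with T denoting the time a computer needs to transmit the command to the next computer on the path, this slide says:

  ```latex
  D^{\mathrm{TF}}_{c_k} \;=\; T_{c_k}
  ```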

  47. Lazy Intermediate Delays: Transmit Before Processing. If the Transmission Happens Before Processing, the Lazy Intermediate Computer Delay equals the Time Required to Transmit to the Next Computer on the Path, i.e., the Transmit-First Intermediate Computer Delay. Processing Was Delayed iff the Sum of Intermediate Delays so Far < the Noticeable Threshold.

  48. Lazy Intermediate Delays: From “Transmit Before” To “Transmit After” Processing. Processing is Delayed iff the Sum of Intermediate Delays so Far < the Noticeable Threshold. As the Command Travels Along the Path, the Delays Accumulate; Eventually, at Some Computer k, the Sum of Intermediate Delays so Far Exceeds the Threshold, and From That Point On Processing is No Longer Delayed, so the Command is Processed Before it is Transmitted.

  49. Lazy Intermediate Delays: Transmit After Processing. If the Transmission Happens After Processing, the Lazy Intermediate Computer Delay is the Time Required to Process the Command plus the Time Required to Transmit to the Next Computer on the Path, which is Greater than the Transmit-First Intermediate Computer Delay.
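
  Combining slides 47 and 49 in the same assumed notation, with P denoting the time to process the command on a computer, the lazy intermediate delay is:

  ```latex
  D^{\mathrm{Lazy}}_{c_k} \;=\;
    \begin{cases}
      T_{c_k}             & \text{transmit before processing (delay budget not yet exhausted)}\\
      P_{c_k} + T_{c_k}   & \text{transmit after processing (budget exhausted)}
    \end{cases}
    \;\;\geq\;\; D^{\mathrm{TF}}_{c_k}
  ```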

  50. Transmit-First and Lazy Intermediate Delay Comparison. Lazy Intermediate Computer Delay ≥ Transmit-First Intermediate Computer Delay: the Transmit-First Intermediate Delays Dominate the Lazy Intermediate Delays.
