Towards Self-Optimizing Frameworks for Collaborative Systems

Sasa Junuzovic

(Advisor: Prasun Dewan)

University of North Carolina at Chapel Hill

Collaborative Systems

Shared Checkers Game: User 1 enters the command 'Move Piece'; User 1 sees the move locally, and User 2 sees the move remotely.

Performance in Collaborative Systems

Performance is important! Example: Candidates' Day at UNC. A UNC professor demos the game to a candidate student at Duke; poor interactivity makes the student quit the game.

Window of Opportunity for Improving Performance

On a requirements-vs-resources scale there are three regions [18]:

Insufficient resources: always poor performance.

Sufficient but scarce resources: the window of opportunity, where performance can be improved from poor to good.

Abundant resources: always good performance.

[18] Jeffay, K. Issues in Multimedia Delivery Over Today’s Internet. IEEE Conference on Multimedia Systems. Tutorial. 1998.

Window of Opportunity in Collaborative Systems

This work focuses on the window of opportunity region of the requirements-vs-resources space in collaborative systems.

Thesis

For certain classes of applications, it is possible to meet performance requirements better than existing systems through a new collaborative framework without requiring hardware, network, or user-interface changes.

Performance Improvements: Actual Result

A performance-over-time plot, with and without optimization, shows: initially there is no performance improvement; the self-optimizing system then improves performance, and later improves it again.

What Do We Mean by Performance?

The same plot raises the question: what aspects of performance are improved?

Performance Metrics

Performance metrics (this work focuses on local and remote response times):

Local Response Times [20]

Remote Response Times [12]

Jitter [15]

Throughput [13]

Task Completion Time [10]

Bandwidth [16]

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

[12] Ellis, C.A. and Gibbs, S.J. Concurrency control in groupware systems. ACM SIGMOD Record. Vol. 18 (2). Jun 1989. pp: 399-407.

[13] Graham, T.C.N., Phillips, W.G., and Wolfe, C. Quality Analysis of Distribution Architectures for Synchronous Groupware. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2006. pp: 1-9.

[15] Gutwin, C., Dyck, J., and Burkitt, J. Using Cursor Prediction to Smooth Telepointer Actions. ACM Conference on Supporting Group Work (GROUP). 2003. pp: 294-301.

[16] Gutwin, C., Fedak, C., Watson, M., Dyck, J., and Bell, T. Improving network efficiency in real-time groupware with general message compression. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 119-128.

[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.

Local Response Time

Local response time: the time from when a user enters a command until that same user sees the output for the command. Example: User 1 enters 'Move Piece' and User 1 sees the move.

Remote Response Time

Remote response time: the time from when a user enters a command until another user sees the output for the command. Example: User 1 enters 'Move Piece' and User 2 sees the move.

Noticeable Performance Difference?

In the example plot, the response time differences between the optimized and unoptimized systems range from 21 ms and 80 ms up to 180 ms and 300 ms. When are such differences noticeable?

Noticeable Response Time Thresholds

Local response time differences become noticeable above 50 ms [20][23]; remote response time differences become noticeable above 50 ms [17].

[17] Jay, C., Glencross, M., and Hubbold, R. Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 14 (2). Aug 2007. Article 8.

[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.

[23] Youmans, D.M. User requirements for future office workstations with emphasis on preferred response times. IBM United Kingdom Laboratories. Sep 1981.

Self-Optimizing System Noticeably Improves Response Times

Most of the differences in the example plot (80 ms, 180 ms, 300 ms) exceed the 50 ms threshold: noticeable improvements!

Simple Response Time Comparison: Average Response Times

Optimization A: User 1 = 100 ms, User 2 = 200 ms.
Optimization B: User 1 = 200 ms, User 2 = 100 ms.

By average response time, Optimization A and Optimization B look identical.

Simple Response Time Comparison May not Give Correct Answer

Response times are more important for some users than for others. If User 2's response times matter more, then Optimization B (User 2 = 100 ms) is better than Optimization A (User 2 = 200 ms), even though their averages are equal.

User's Response Time Requirements

Is Optimization A better than, equal to, or worse than Optimization B? External criteria from users are needed to decide.

External criteria and the data they require:

Favor important users — requires the identity of users.
Favor local or remote response times — requires the identity of users who input and users who observe.
Arbitrary criteria — require arbitrary data.

Users must provide a response time function that encapsulates their criteria; the self-optimizing system provides the predicted response times and the required data.
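To make the response time function concrete, here is a minimal sketch (the names, weights, and interface are illustrative assumptions, not the thesis's actual API) of a criterion that favors important users, applied to the Optimization A/B example above:

```python
# Hypothetical user-supplied response time function: weight each
# user's predicted response time by that user's importance.
def weighted_cost(predicted, importance):
    """Score a candidate optimization; lower is better.

    predicted  -- dict: user id -> predicted response time (ms)
    importance -- dict: user id -> weight (external criterion)
    """
    return sum(importance[u] * t for u, t in predicted.items())

# User 2 is twice as important as User 1 in this example.
importance = {"user1": 1.0, "user2": 2.0}
opt_a = {"user1": 100, "user2": 200}  # cost: 100 + 400 = 500
opt_b = {"user1": 200, "user2": 100}  # cost: 200 + 200 = 400
best = min(("A", opt_a), ("B", opt_b),
           key=lambda p: weighted_cost(p[1], importance))
print(best[0])  # B: equal averages, but the criterion breaks the tie
```

Under equal averages, the externally supplied weights are exactly what lets the system prefer one optimization over the other.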

Main Contributions

Goal: better meet response time requirements than existing systems. The important response time factors are the collaboration architecture (impact studied by Chung [10]), multicast, and the scheduling policy; the other dimension is automated maintenance (Wolfe et al. [22]).

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Interactive Computing Systems. 2009. pp: 275-284.

Illustrating Contribution Details: Scheduling Policy

The contributions are illustrated with the single-core scheduling policy: its impact on response times and its automated maintenance.

Scheduling Collaborative Systems Tasks

Scheduling requires a definition of tasks. Collaborative systems tasks are defined by the collaboration architecture. External application tasks depend on the working set of applications, which is typically not known, so we use an empty working set.

Collaboration Architectures

Program component (P): manages the shared state; may or may not run on each user's machine.

User interface (U): allows interaction with the shared state; runs on each user's machine.

Collaboration Architectures

Each user interface must be mapped to a program component [10]. The program component sends outputs to the user interface; the user interface sends input commands to the program component.

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

Popular Mappings

Centralized mapping: a single program component P1 serves all user interfaces. U1 is the master; U2 and U3 are slaves that send their inputs to P1 and receive its outputs.

Popular Mappings

Replicated mapping: each user interface U1, U2, U3 is served by its own master program component P1, P2, P3.

Communication Architecture

In both the replicated and centralized mappings, the masters perform the majority of the communication task: sending inputs to all computers (replicated) or outputs to all computers (centralized). Which communication model? The focus is on push-based communication (rather than pull-based or streaming). Unicast or multicast?

Unicast vs. Multicast

Unicast: the source (computer 1) transmits to each of the other computers (2 through 10) sequentially; if the number of users is large, transmission takes a long time.

Multicast: transmission is performed in parallel along a forwarding tree, relieving a single computer from performing the entire transmission task.
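A back-of-the-envelope model shows why sequential unicast scales poorly compared to a parallel forwarding tree. All numbers and the fixed fan-out are illustrative assumptions, not measurements from the thesis:

```python
def unicast_time(n, per_send_ms):
    # The single source sends to each of the n-1 destinations in turn.
    return (n - 1) * per_send_ms

def multicast_time(n, per_send_ms, fanout):
    # Each round, every computer that already has the command forwards
    # it to `fanout` new computers, so coverage grows geometrically
    # while each round still costs only fanout sequential sends.
    covered, rounds = 1, 0
    while covered < n:
        covered += covered * fanout
        rounds += 1
    return rounds * fanout * per_send_ms

print(unicast_time(10, 25))       # 225 ms: 9 sequential sends
print(multicast_time(10, 25, 3))  # 150 ms: 2 parallel rounds of 3 sends
```

The gap widens quickly: with 100 computers, unicast takes 2475 ms under this model while the fan-out-3 tree needs only 4 rounds.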

Collaborative Systems Tasks

With unicast, only the source has to both process and transmit commands. With multicast, any computer may have to both process and transmit commands.

Processing and Transmission Tasks

Mandatory tasks (the focus of this work): processing of user commands and transmission of user commands. Optional tasks relate to concurrency control [14], consistency maintenance [21], and awareness [9].

[9] Begole, J., Rosson, M.B., and Shaffer, C.A. Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 6 (2). Jun 1999. pp: 95-132.

[14] Greif, I., Seliger, R., and Weihl, W. Atomic Data Abstractions in a Distributed Collaborative Editing System. Symposium on Principles of Programming Languages. 1986. pp: 160-172.

[21] Sun, C. and Ellis, C. Operational transformation in real-time group editors: issues, algorithms, and achievements. ACM Conference on Computer Supported Cooperative Work (CSCW). 1998. pp: 59-68.

CPU vs. Network Card Transmission

Transmission of a command has two parts. The CPU transmission task is schedulable with respect to CPU processing. The network card transmission task (the "processing task" in networking terms) follows CPU transmission, runs in parallel with CPU processing (non-blocking), and is therefore not schedulable.

Impact of Scheduling on Local Response Times

When User 1 enters 'Move Piece', intuitively, to minimize local response times, the CPU should process the command first and transmit it second. Reason: the local response time does not include the CPU transmission time on User 1's computer.

Impact of Scheduling on Remote Response Times

When User 1 enters 'Move Piece', intuitively, to minimize remote response times, the CPU should transmit the command first and process it second. Reason: remote response times do not include the processing time on User 1's computer.

Intuitive Choice of Single-Core Scheduling Policy

Only local response times important: use process-first scheduling.
Only remote response times important: use transmit-first scheduling.
Both important: use concurrent scheduling?

Scheduling Policy Response Time Tradeoff

Local response times: Process-First < Concurrent < Transmit-First.
Remote response times: Process-First > Concurrent > Transmit-First.

The policies trade off local against remote response times.

Process-First: Good Local, Poor Remote Response Times

In the tradeoff, Process-First yields the smallest local response times but the largest remote response times.

Transmit-First: Good Remote, Poor Local Response Times

In the tradeoff, Transmit-First yields the smallest remote response times but the largest local response times.

Concurrent: Poor Local, Poor Remote Response Times

Concurrent falls between the two: it yields neither the best local nor the best remote response times.

New Scheduling Policy

Current scheduling policies trade off local and remote response times in an all-or-nothing fashion (CollaborateCom 2008 [6]): the existing systems approaches are Transmit-First, Process-First, and Concurrent. A new scheduling policy is needed; Lazy, motivated by the psychology thresholds, is the answer (ACM GROUP 2009 [8]).

[6] Junuzovic, S. and Dewan, P. Serial vs. Concurrent Scheduling of Transmission and Processing Tasks in Collaborative Systems. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2008.

[8] Junuzovic, S. and Dewan, P. Lazy scheduling of processing and transmission tasks in collaborative systems. ACM Conference on Supporting Group Work (GROUP). 2009. pp: 159-168.

Controlling Scheduling Policy Response Time Tradeoff

Accept an unnoticeable increase (< 50 ms) in local response times in exchange for a noticeable decrease (> 50 ms) in remote response times.

Lazy Scheduling Policy Implementation

Basic idea, when User 1 enters 'Move Piece':

1. Temporarily delay processing and transmit during the delay, keeping the delay below the noticeable threshold.
2. Process the command.
3. Complete transmitting.

Benefit: compared to Process-First, User 2's remote response time improves, and no user notices a difference in response times.
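The steps above boil down to a single scheduling decision per computer. A minimal sketch (names and the budget handling are illustrative assumptions, not the framework's actual code):

```python
NOTICEABLE_MS = 50  # response time threshold from the human-factors studies

def lazy_task_order(delay_so_far_ms):
    """Order in which this computer runs its two tasks for a command.

    While the delay accumulated so far stays below the noticeable
    threshold, processing is delayed and transmission goes first;
    once the budget is spent, the computer behaves like process-first.
    """
    if delay_so_far_ms < NOTICEABLE_MS:
        return ["transmit", "process"]
    return ["process", "transmit"]

print(lazy_task_order(10))  # ['transmit', 'process']: still unnoticeable
print(lazy_task_order(60))  # ['process', 'transmit']: budget exhausted
```

The cap is what makes the local degradation unnoticeable "by design" while the early transmission buys remote users real time.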

Evaluating Lazy Scheduling Policy

Lazy improves the response times of some users without noticeably degrading those of others. Local response times: Lazy is at most unnoticeably worse than Process-First (by design). Remote response times: Lazy beats Process-First (a free lunch).

Analytical Equations

Mathematical Equations (in Thesis) Rigorously Show Benefits of Lazy Policy

Model Supports Concurrent Commands and Type-Ahead

Flavor: Capturing Tradeoff Between Lazy and Transmit-First

Analytical Equations: Lazy vs. Transmit-First

For a command traveling along a forwarding path of computers (1 through 6 in the example), a user's remote response time decomposes as:

remote response time = (sum of network latencies on path) + (sum of intermediate computer delays) + (destination computer delay)

The network latency term is scheduling-policy independent.
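The decomposition reads directly as executable arithmetic; the values below are illustrative, not from the thesis:

```python
def remote_response_time(path_latencies_ms, intermediate_delays_ms,
                         destination_delay_ms):
    # Only the two delay terms depend on the scheduling policy;
    # the network latency term is identical for every policy.
    return (sum(path_latencies_ms)
            + sum(intermediate_delays_ms)
            + destination_delay_ms)

# Two network hops, two intermediate computers, one destination delay.
print(remote_response_time([30, 40], [25, 25], 75))  # 195 (ms)
```

Comparing policies therefore reduces to comparing their intermediate and destination delay terms, which is exactly what the following slides do.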

Intermediate Delays

The intermediate computer delays differ between Transmit-First and Lazy. For Lazy there are two cases: (1) the computer transmits before processing; (2) the computer transmits after processing.
Intermediate Delay Equation Derivation

We derive the Transmit-First and Lazy intermediate computer delays in turn.

Transmit-First Intermediate Delays

Transmit-First intermediate computer delay = time required to transmit to the next computer on the path.
Lazy Intermediate Delays: Transmit Before Processing

If the computer transmits before processing, the Lazy intermediate computer delay equals the Transmit-First delay: the time required to transmit to the next computer on the path. Processing is delayed iff the sum of intermediate delays so far is below the noticeable threshold.

Lazy Intermediate Delays: From "Transmit Before" To "Transmit After" Processing

Each hop adds delay, so eventually, at some computer k on the path, the sum of intermediate delays so far exceeds the noticeable threshold. From computer k on, processing is no longer delayed: the command is processed before it is transmitted.

Lazy Intermediate Delays: Transmit After Processing

If the computer transmits after processing, the Lazy intermediate computer delay = time required to transmit to the next computer on the path + time required to process the command, which is greater than the Transmit-First delay.

Transmit-First and Lazy Intermediate Delay Comparison

At every intermediate computer, the Transmit-First delay is never larger than the Lazy delay: Transmit-First intermediate delays dominate Lazy intermediate delays.

Transmit-First Destination Delays

Transmit-First destination computer delay = total CPU transmission time (for all the computers the destination must forward to) + processing time.

Lazy Destination Delays

Lazy destination computer delay = processing delay (capped at the noticeable threshold) + processing time, regardless of the number of computers the destination must forward to.

Transmit-First and Lazy Delay Comparison

User j's remote response time for command i = sum of network latencies on the path + sum of intermediate computer delays + destination computer delay. Transmit-First wins on the intermediate delays; Lazy wins on the destination delay. Hence Transmit-First and Lazy do not dominate each other.
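A numeric illustration of the non-dominance at the destination (all values made up for illustration): transmit-first pays the full CPU transmission cost before processing, while lazy's extra processing delay is capped at the noticeable threshold:

```python
NOTICEABLE_MS = 50  # cap on lazy's processing delay

def transmit_first_destination_delay(cpu_tx_ms, proc_ms, n_forward):
    # Transmit to every remaining computer first, then process.
    return n_forward * cpu_tx_ms + proc_ms

def lazy_destination_delay(proc_ms, delay_budget_used_ms):
    # The processing delay never exceeds the noticeable threshold,
    # no matter how many computers remain to forward to.
    return min(delay_budget_used_ms, NOTICEABLE_MS) + proc_ms

print(transmit_first_destination_delay(25, 75, 4))  # 175 ms
print(lazy_destination_delay(75, 120))              # 125 ms: lazy wins here
```

With few computers to forward to, transmit-first's destination delay shrinks and can beat lazy's capped delay, which is why neither policy dominates overall.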

Theoretical Example

User 1 enters all commands in a 12-computer forwarding topology. [Figure: parameter values for CPU transmission time, network card transmission time, processing time, network latencies, and the noticeable threshold, on a 0-75 ms axis.]

Theoretical Example: Lazy Noticeably Better than Process-First For User 8

[Figure: per-user response times under Process-First and Lazy on a 0-450 ms axis.] For User 8, Lazy is noticeably (by more than the 50 ms threshold) better than Process-First.

Theoretical Example: Lazy Advantage Additive

Lazy's improvement in remote response times over Process-First is additive: the more computers that delay processing, the more noticeable the improvement.

Theoretical Example: Lazy Noticeably Better than Transmit-First For User 2

[Figure: per-user response times under Transmit-First and Lazy on a 0-450 ms axis.] Lazy may also be noticeably better than Transmit-First, as it is here for User 2.

Simulations

The theoretical example was carefully constructed to illustrate the benefit of Lazy. Are response time differences noticeable in realistic scenarios? We simulate performance using the analytical model, so realistic simulation parameters are needed.

Simulation Scenario

Distributed PowerPoint presentation, with the presenter using a netbook (standing in for a next-generation mobile device) and single-core machines (netbook, P3 desktop, P4 desktop). Realistic parameter values are needed: processing and transmission times were measured from logs of actual PowerPoint presentations. No concurrent commands; no type-ahead.

Simulation Scenario

Distributed PowerPoint presentation: presenting to 600 people using P4 desktops and next-generation mobile devices, with six forwarders each sending messages to 99 users. Latencies between forwarders are low; the other users are spread around the world. We used a subset of latencies measured between 1740 computers around the world [19][24].

[19] p2pSim: a simulator for peer-to-peer protocols. http://pdos.csail.mit.edu/p2psim/kingdata. Mar 4, 2009.

[24] Zhang, B., Ng, T.S.E, Nandi, A., Riedi, R., Druschel, P., and Wang, G. Measurement-based analysis, modeling, and synthesis of the internet delay space. ACM Conference on Internet Measurement. 2006. pp: 85-98.

Simulation Results: Lazy vs. Process-First

Local response time: Lazy 107 ms vs. Process-First 58 ms; Lazy is unnoticeably worse (by 49 ms).

Lazy remote response times compared to Process-First:
Equal: 0
Noticeably better: 598 (by as much as 604 ms)
Noticeably worse: 0
Not noticeably different: 1 (Lazy better by 36 ms)

Lazy dominates Process-First!

Simulation Results: Lazy vs. Transmit-First and Concurrent

Local response time: Lazy 107 ms, Transmit-First 177 ms, Concurrent 118 ms. Lazy is noticeably better (by 70 ms) than Transmit-First; it can also be noticeably better than Concurrent (in a different scenario).

Lazy remote response times compared to Transmit-First and to Concurrent (same counts for both):
Equal: 407
Noticeably better: 5 (by as much as 158 ms)
Noticeably worse: 187 (by as much as 240 ms)
Not noticeably different: 0

None of Transmit-First, Concurrent, and Lazy dominates the others!

Simulation Results: Lazy vs. Transmit-First and Concurrent

Of the 5 remote response times that Lazy noticeably improves (by as much as 158 ms), four belong to forwarding users: Lazy provides better remote response times to four of the five forwarding users, making it more "fair" than Transmit-First and Concurrent.

Automatically Selecting Scheduling Policy that Best Meets Response Time Requirements

User's response time requirement → scheduling policy used:

Improve as many response times as possible → Concurrent or Transmit-First.
Improve as many remote response times as possible without noticeably degrading local response times → Lazy.
Improve as many remote response times as possible without noticeably degrading local response times or the remote response times of forwarders → Lazy.
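The requirement-to-policy mapping above can be sketched as a small dispatch function. This is an illustrative encoding of the table, not the framework's actual selection API:

```python
def select_policy(may_degrade_local):
    """Pick a policy for 'improve as many remote response times as
    possible' under the user's stated constraint.

    may_degrade_local -- True if local response times may be
    noticeably degraded in exchange for remote improvements.
    """
    if may_degrade_local:
        # Nothing to protect: the aggressive policies apply.
        return "concurrent-or-transmit-first"
    # Local (and possibly forwarder) response times must stay
    # unnoticeably degraded: lazy satisfies both remaining rows.
    return "lazy"

print(select_policy(True))   # concurrent-or-transmit-first
print(select_policy(False))  # lazy
```

In the actual system, this decision is driven by the user-provided response time function applied to the predicted response time matrix rather than by a boolean flag.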

Self-Optimizing Collaborative Framework

The framework combines collaboration functionality (centralized and replicated architectures, unicast and multicast) with self-optimizing functionality (the Process-First, Transmit-First, Concurrent, and Lazy scheduling policies).

Sharing Applications

In both the centralized and replicated architectures, a client-side component C on each user's machine sits between the user interface U and the program component P, relaying inputs and outputs.

Multicast

Centralized-multicast architecture: the program components on all but one machine are inactive; the client-side components forward inputs and outputs among the users' machines along the multicast tree.

Scheduling Policies

In the centralized-multicast architecture, Transmit-First, Process-First, and Concurrent are realized by assigning high, low, or equal priorities to the processing and transmission tasks on each computer.

Lazy Policy Implementation

In the centralized-multicast architecture, a forwarder delays processing while the delay is not noticeable. To do so it must know the delay so far: the source provides the input time of the command, and the forwarder computes the time elapsed since that input time. This requires synchronized clocks; a simple clock synchronization scheme is used.
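The forwarder-side check can be sketched as follows. The names and the single-offset clock model are illustrative assumptions standing in for the simple clock-sync scheme:

```python
NOTICEABLE_MS = 50  # threshold below which the delay stays unnoticeable

def may_delay_processing(input_time_ms, now_ms, clock_offset_ms):
    """May this forwarder keep delaying processing of a command?

    input_time_ms   -- command's input time, stamped on the source's clock
    now_ms          -- current time on the forwarder's clock
    clock_offset_ms -- converts the source's clock to the forwarder's
                       clock (obtained from the clock-sync scheme)
    """
    elapsed = now_ms - (input_time_ms + clock_offset_ms)
    return elapsed < NOTICEABLE_MS

print(may_delay_processing(1000, 1030, 0))  # True: only 30 ms elapsed
print(may_delay_processing(1000, 1080, 0))  # False: 80 ms exceeds threshold
```

Any clock-sync error directly shifts the computed elapsed time, which is why even a simple synchronization scheme suffices as long as its error is small relative to the 50 ms threshold.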

Self-Optimization Framework: Server-Side Component

A parameter collector measures parameters dynamically (keeping previously measured values). For each scheduling policy, the analytical model (the same one used in the simulations) predicts the response times of all users for commands by each user, producing a predicted response time matrix. The system manager applies the user-provided response time function to this matrix.

Client-Side Component: Measuring Parameters

The client-side component, between the program component and the user interface, handles both sharing and optimization, including parameter measurement.

Switching Scheduling Policies

Switching among Transmit-First, Process-First, and Concurrent amounts to changing the task priorities on each computer; switching to Lazy additionally activates the lazy algorithm.

Performance of Commands Entered During Switch

Switching scheduling policies on all computers takes time, so computers may temporarily use a mix of old and new policies (for example, some running Transmit-First while others already run Lazy). This is not a semantic issue, but it may temporarily degrade performance; eventually all computers switch to the new policy and performance improves.

Main Contributions: Scheduling

For single-core scheduling, we studied the impact on response times, designed the Lazy policy, built an analytical model, and automated maintenance in the self-optimizing system. What about multi-core?

Intuitive Choice of Multi-Core Scheduling Policy

Whether local or remote response times are important, the intuitive multi-core choice is the same: transmit and process in parallel on separate cores.
Intuitive Choice of Multi-Core Scheduling Policy

Should the processing task itself be parallelized? The processing task is application-defined.

Intuitive Choice of Multi-Core Scheduling Policy

Should the transmission task be parallelized? The system-defined transmission task could be divided among multiple cores, but this gives no benefit to remote response times and makes remote response times difficult to predict.

Automating Switch to Parallel Policy

The analytical model predicts, simulations show, and experiments confirm that for arbitrary response time requirements, the parallel policy is the right choice.

Main Contributions: Scheduling

For both single-core and multi-core scheduling policies, we studied the impact on response times and designed Lazy scheduling, which required the analytical model, and automated maintenance, which required the self-optimizing system.

Main Contributions: Multicast

Next factor: multicast — its impact on response times and its automated maintenance.

Multicast can Hurt Remote Response Times

Multicast performs transmission in parallel, distributing commands more quickly, which improves response times. But multicast paths are longer than unicast paths, which degrades response times. Multicast may therefore improve or degrade response times.

Multicast and Scheduling Policies

Traditional multicast ignores collaboration-specific parameters such as the scheduling policy. Consider Process-First: with unicast, a response time includes the processing time of only the source and the destination; with multicast, it includes the processing time of all computers on the path from source to destination. This is another reason why multicast may hurt response times.

Multicast and Scheduling Policies

With multicast and Transmit-First, a computer must transmit before processing; if transmission costs are high, remote response times suffer compared to Process-First. This is yet another reason why multicast may hurt response times, so a system must support both unicast and multicast.

Supporting Multicast

Traditional collaboration architectures couple the processing and transmission tasks: in both replicated and centralized architectures, the masters perform the majority of the communication task. To support multicast, these tasks must be decoupled, yielding a bi-architecture model of collaborative systems (IEEE CollaborateCom 2007 [3]) with separate processing and communication architectures.

[3] Junuzovic, S. and Dewan, P. Multicasting in groupware? IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 168-177.

Main Contributions: Multicast

For multicast, we studied the impact on response times, introduced the bi-architecture model and an analytical model, and automated maintenance in the self-optimizing system.

Performance of Commands Entered During Communication Architecture Switch

A communication architecture switch takes time, and the old architecture cannot simply be stopped because messages are still in transit. The old architecture is therefore used during the switch while the new one is deployed in the background; once all computers have deployed it, the system switches over. Commands entered during the switch temporarily experience non-optimal (previous) performance.

Main Contributions: Multicast

Recap: for multicast, we studied the impact on response times and automated its maintenance.

Impact of Mapping on Response Times

Which mapping, replicated or centralized, favors local response times? Intuition suggests an answer, but that intuition has been empirically shown wrong [10]. Relying on experimental results alone is impractical: there are infinitely many collaboration scenarios.

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

Main Contributions: Multicast

For the collaboration architecture, we studied the impact on response times with the analytical model and automated maintenance in the self-optimizing system.

Commands Entered During Switch

Option 1: pause input during the switch. Simple, but it hurts response times — though not if the switch is done during a break. This is the approach used in the self-optimizing system.

Option 2: run the old and new configurations in parallel [10]. Good performance, but not simple, and inconsistent with the replicated and centralized architectures.

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

Contributions

Our contribution targets the sufficient-but-scarce-resources region of the requirements-vs-resources space: meeting response time requirements within the window of opportunity.

Main Contributions

Recap: the important response time factors are the collaboration architecture (Chung [10]), multicast, and the scheduling policy, together with automated maintenance (Wolfe et al. [22]).

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Engineering Interactive Computing Systems (EICS). 2009. pp: 275-284.

Main Contributions

Better Meet Response Time Requirements than Other Systems!

Important Response Time Factors

Collaboration Architecture

Multicast

Scheduling Policy

New

New

Studied the Impact on Response Times

Chung [10]

Bi-architecture

Lazy Policy

Support for Multicast

Process-First Obsolete

ECSCW 2005 [1]

ACM CSCW 2006 [2]

Automated Maintenance

Wolfe et al. [22]

[1] Junuzovic, S., Chung, G., and Dewan, P. Formally analyzing two-user centralized and replicated architectures. European Conference on Computer Supported Cooperative Work (ECSCW). 2005. pp: 83-102.

[2] Junuzovic, S. and Dewan, P. Response time in N-user replicated, centralized, and proximity-based hybrid architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 129-138.

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Engineering Interactive Computing Systems (EICS). 2009. pp: 275-284.

Main Contributions

Better Meet Response Time Requirements than Other Systems!

Important Response Time Factors

Collaboration Architecture

Multicast

Scheduling Policy

New

Studied the Impact on Response Times

Analytical Model

Automated Maintenance

Wolfe et al. [22]

[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Engineering Interactive Computing Systems (EICS). 2009. pp: 275-284.

Main Contributions

Analytical Model

Locked into a Configuration

Choice of Configuration at Start Time

Choice of Configuration at Runtime

OR

Help Users Decide

Main Contributions

Better Meet Response Time Requirements than Other Systems!

Important Response Time Factors

Collaboration Architecture

Multicast

Scheduling Policy

Studied the Impact on Response Times

Analytical Model

New

Automated Maintenance

Self-Optimizing System

New

Implementation Issues

Main Contributions

Better Meet Response Time Requirements than Other Systems!

Important Response Time Factors

Collaboration Architecture

Multicast

Scheduling Policy

Studied the Impact on Response Times

Proof of Thesis:

For certain classes of applications, it is possible to meet performance requirements better than existing systems through a new collaborative framework without requiring hardware, network, or user-interface changes.

Proof of Thesis:

It is possible to meet performance requirements of collaborative applications better than existing systems through a new collaborative framework without requiring hardware, network, or user-interface changes.

Automated Maintenance

Classes of Applications Impacted

Three Driving Problems

Collaborative Games

Distributed Presentations

Instant Messaging

Popular

Entire Industries Built Around It

Pervasive

Webex, Live Meeting, etc.

Classes of Applications Impacted

Three Driving Problems

Collaborative Games

Distributed Presentations

Instant Messaging

Push-Based Communication

No Concurrency Control, Consistency Maintenance, or Awareness Commands

Support Centralized and Replicated Semantics

Impact

Complex Prototype Shows Benefit of Automating Maintenance of

Processing Architecture

Communication Architecture

Scheduling Policy

Recommend to Software Designers?

Added Complexity Worth It?

Initial Costs

Replicated Semantics

Yes if Performance of Current Systems is an Issue

Window of Opportunity Exists?

Yes In Multimedia Networking

Further Analysis Needed for Collaborative Systems

Window of Opportunity in Future:Impact of Processing Architecture

Choice of Processing Architecture Not Important

Processing Costs

Processor Power

Choice of Processing Architecture Important

Processing Costs

Demand for More Complex Applications

Some Text Editing Operations Still Slow

Window of Opportunity in Future:Impact of Communication Architecture

Transmission Costs

Choice of Communication Architecture Not Important

Network Speed

Transmission Costs

Choice of Communication Architecture Important

Demand for More Complex Applications

Cellular Networks Still Slow

High Definition Video (Telepresence)

Fast Links Consume Too Much Power on Mobile Devices!
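Why the communication architecture matters when transmission costs are high can be seen from a back-of-the-envelope comparison of unicast against an idealized multicast tree. This is a sketch with made-up parameters, not the dissertation's model: `per_send` is the cost of pushing one copy onto a sender's link, `latency` a uniform one-way network delay:

```python
def unicast_last_delivery(n_peers, per_send, latency):
    # The source pushes every copy out of its own link, one after another;
    # the last peer waits for all n transmissions plus the network delay.
    return n_peers * per_send + latency

def multicast_last_delivery(n_peers, per_send, latency, fanout=2):
    # Idealized fanout-limited tree: in each round, every node that already
    # has the command forwards it to `fanout` new nodes, so delivery depth
    # grows logarithmically in the number of peers instead of linearly.
    have, depth = 1, 0
    while have - 1 < n_peers:        # -1: the source itself is not a peer
        have += have * fanout
        depth += 1
    return depth * (fanout * per_send + latency)

if __name__ == "__main__":
    print(unicast_last_delivery(100, per_send=5, latency=30))    # 530
    print(multicast_last_delivery(100, per_send=5, latency=30))  # 200
```

With fast links (`per_send` near zero) the two converge, which is the "choice not important" half of the slide; slow cellular links and power-constrained mobile radios keep `per_send` large, which is the window of opportunity.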

Window of Opportunity in Future:Impact of Scheduling Policy

Always Use Parallel Policy

Choice of Scheduling Policy Not Important

Number of Cores

Choice of Scheduling Policy Important

Energy Costs

Cell Phones, PDAs, and Netbooks still Single Core

High Definition Video Requires Multiple Cores
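On a single-core host, the order in which processing and transmission tasks are scheduled trades local response time against remote response time, which is what makes the choice of policy matter. The toy model below illustrates the trade-off; it is not the lazy policy of [8], and all numbers are invented:

```python
def process_first(proc, send_costs, latency):
    # Single core: process the command locally, then transmit to peers.
    local_response = proc
    last_remote_receipt = proc + sum(send_costs) + latency
    return local_response, last_remote_receipt

def transmit_first(proc, send_costs, latency):
    # Single core: push all transmissions out before processing locally.
    local_response = sum(send_costs) + proc
    last_remote_receipt = sum(send_costs) + latency
    return local_response, last_remote_receipt

if __name__ == "__main__":
    proc, sends, lat = 40, [5, 5, 5], 30   # ms, illustrative
    print(process_first(proc, sends, lat))   # (40, 85): favors local user
    print(transmit_first(proc, sends, lat))  # (55, 45): favors remote users
```

On a multi-core host both orders can run in parallel and the trade-off disappears, matching the "always use parallel policy" point above; on single-core phones and netbooks, or when energy costs rule out spare cores, the choice remains important.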

Other Contributions

Performance Simulator

~ ns

Large-Scale Experiment Setup

Impractical

Large-Scale Simulations are Simple to Setup!

Teaching

With Simulations, Students Learn about Response Time Factors

With System, Students Experience Response Times
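A large-scale simulation really can be small to set up. The sketch below is a minimal discrete-event loop for a centralized configuration, illustrative only (not the actual simulator; parameters are made up):

```python
import heapq

def simulate_centralized(proc, send_cost, latencies):
    """One user's command reaches a central host at t=0. The host processes
    it (proc), then transmits the output to each observer in turn over a
    single link (send_cost each); observer i also waits its own one-way
    latency. Returns (delivery_time, observer) pairs in delivery order."""
    events = []
    t = proc
    for i, lat in enumerate(latencies):
        t += send_cost                      # transmissions are serialized
        heapq.heappush(events, (t + lat, i))
    return [heapq.heappop(events) for _ in range(len(latencies))]

if __name__ == "__main__":
    # Three observers with different one-way latencies (ms).
    print(simulate_centralized(proc=40, send_cost=5, latencies=[30, 10, 50]))
    # [(60, 1), (75, 0), (105, 2)]
```

Scaling to thousands of simulated users only means a longer `latencies` list, which is the contrast with physically deploying a large experiment.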

Other Contributions – Industry Research

Dissertation Research

Theory

Systems

Microsoft Research

Applications and User Studies

Awareness in Multi-User Editors

Framework For Interactive Web Apps

Meeting Replay Systems

Telepresence

IEEE CollaborateCom 2007 [4]

US Patent Application

US Patent Awarded

MSR Tech Talk

ACM MM 2008 [7]

US Patent Application

Papers in Submission

Patent Applications in Preparation

[4] Junuzovic, S., Dewan, P., and Rui, Y. Read, Write, and Navigation Awareness in Realistic Multi-View Collaborations. IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 494-503.

[7] Junuzovic, S., Hegde, R., Zhang, Z., Chou, P., Liu, Z., Zhang, C. Requirements and Recommendations for an Enhanced Meeting Viewer Experience. ACM Conference on Multimedia (MM). 2008. pp: 539-548.

Future Work

User Requirement Driven Performance Goals

Discover User Performance Requirements

Adjust Configuration Based on User Activity

Attention Level

Facial Expressions

Future Work

Extension to Other Scenarios

Meetings in Virtual Worlds (e.g., Second Life)

Performance Issues with as few as 10 Users

Dynamically Deploy Multicast and Scheduling Policy

Step Closer to Solving Performance Issues

Future Work

Simulations and Experiments

Evaluate Benefit of Simulator for Allocating Resources in Large Online Systems

Can Administrators Make Better Decisions with Simulator?

Large Experimental Test-Bed

Current Public Clusters not Sufficient for Performance Experiments

Acknowledgements

Committee:

James Anderson (UNC Chapel Hill)

Nick Graham (Queen’s University)

Saul Greenberg (University of Calgary)

Jasleen Kaur (UNC Chapel Hill)

Ketan Mayer-Patel (UNC Chapel Hill)

Advisor: Prasun Dewan (UNC Chapel Hill)

Microsoft and Microsoft Research:

Kori Inkpen, Zhengyou Zhang, Rajesh Hegde … many others

Scholars For Tomorrow Fellow (2004-2005)

Microsoft Research Fellow (2008-2010)

NSERC PGS D Scholarship (2008-2010)

Acknowledgements

Professors and Everyone Else in the Department

Emily Tribble, Russ Gayle, Avneesh Sud, Todd Gamblin, Stephen Olivier, Keith Lee, Bjoern Brandenburg, Jamie Snape, Srinivas Krishnan, and others

Sister, Mom, and Dad

Thanks Everyone!

Publications

[1] Junuzovic, S., Chung, G., and Dewan, P. Formally analyzing two-user centralized and replicated architectures. European Conference on Computer Supported Cooperative Work (ECSCW). 2005. pp: 83-102.

[2] Junuzovic, S. and Dewan, P. Response time in N-user replicated, centralized, and proximity-based hybrid architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 129-138.

[3] Junuzovic, S. and Dewan, P. Multicasting in groupware? IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 168-177.

[4] Junuzovic, S., Dewan, P., and Rui, Y. Read, Write, and Navigation Awareness in Realistic Multi-View Collaborations. IEEE Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2007. pp: 494-503.

[5] Dewan, P., Junuzovic, S., and Sampathkuman, G. The Symbiotic Relationship Between the Virtual Computing Lab and Collaboration Technology. International Conference on the Virtual Computing Initiative (ICVCI). 2007.

[6] Junuzovic, S. and Dewan, P. Serial vs. Concurrent Scheduling of Transmission and Processing Tasks in Collaborative Systems. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2008.

[7] Junuzovic, S., Hegde, R., Zhang, Z., Chou, P., Liu, Z., and Zhang, C. Requirements and Recommendations for an Enhanced Meeting Viewer Experience. ACM Conference on Multimedia (MM). 2008. pp: 539-548.

[8] Junuzovic, S. and Dewan, P. Lazy scheduling of processing and transmission tasks in collaborative systems. ACM Conference on Supporting Group Work (GROUP). 2009. pp: 159-168.

References

[9] Begole, J., Rosson, M.B., and Shaffer, C.A. Flexible collaboration transparency: supporting worker independence in replicated application-sharing systems. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 6(2). Jun 1999. pp: 95-132.

[10] Chung, G. Log-based collaboration infrastructure. Ph.D. dissertation. University of North Carolina at Chapel Hill. 2002.

[11] Chung, G. and Dewan, P. Towards dynamic collaboration architectures. ACM Conference on Computer Supported Cooperative Work (CSCW). 2004. pp: 1-10.

[12] Ellis, C.A. and Gibbs, S.J. Concurrency control in groupware systems. ACM SIGMOD Record. Vol. 18(2). Jun 1989. pp: 399-407.

[13] Graham, T.C.N., Phillips, W.G., and Wolfe, C. Quality Analysis of Distribution Architectures for Synchronous Groupware. Conference on Collaborative Computing: Networking, Applications, and Worksharing (CollaborateCom). 2006. pp: 1-9.

[14] Greif, I., Seliger, R., and Weihl, W. Atomic Data Abstractions in a Distributed Collaborative Editing System. Symposium on Principles of Programming Languages. 1986. pp: 160-172.

[15] Gutwin, C., Dyck, J., and Burkitt, J. Using Cursor Prediction to Smooth Telepointer Actions. ACM Conference on Supporting Group Work (GROUP). 2003. pp: 294-301.

[16] Gutwin, C., Fedak, C., Watson, M., Dyck, J., and Bell, T. Improving network efficiency in real-time groupware with general message compression. ACM Conference on Computer Supported Cooperative Work (CSCW). 2006. pp: 119-128.

[17] Jay, C., Glencross, M., and Hubbold, R. Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment. ACM Transactions on Computer-Human Interaction (TOCHI). Vol. 14(2). Aug 2007. Article 8.

[18] Jeffay, K. Issues in Multimedia Delivery Over Today's Internet. IEEE Conference on Multimedia Systems. Tutorial. 1998.

[19] p2pSim: a simulator for peer-to-peer protocols. http://pdos.csail.mit.edu/p2psim/kingdata. Mar 4, 2009.

[20] Shneiderman, B. Designing the user interface: strategies for effective human-computer interaction. 3rd ed. Addison Wesley. 1997.

[21] Sun, C. and Ellis, C. Operational transformation in real-time group editors: issues, algorithms, and achievements. ACM Conference on Computer Supported Cooperative Work (CSCW). 1998. pp: 59-68.

[22] Wolfe, C., Graham, T.C.N., Phillips, W.G., and Roy, B. Fiia: user-centered development of adaptive groupware systems. ACM Symposium on Engineering Interactive Computing Systems (EICS). 2009. pp: 275-284.

[23] Youmans, D.M. User requirements for future office workstations with emphasis on preferred response times. IBM United Kingdom Laboratories. Sep 1981.

[24] Zhang, B., Ng, T.S.E., Nandi, A., Riedi, R., Druschel, P., and Wang, G. Measurement-based analysis, modeling, and synthesis of the internet delay space. ACM Conference on Internet Measurement. 2006. pp: 85-98.
