Presentation Transcript

NPS: A Non-interfering Web Prefetching System

Ravi Kokku, Praveen Yalagandula,

Arun Venkataramani, Mike Dahlin

Laboratory for Advanced Systems Research

Department of Computer Sciences

University of Texas at Austin


Summary of the Talk

Prefetching should be done aggressively, but safely

 Safe: Non-interference with demand requests

  • Contributions:

    • A self-tuning architecture for web prefetching

      • Aggressive when abundant spare resources

      • Safe when scarce resources

    • NPS: A prototype prefetching system

      • Immediately deployable

Department of Computer Sciences, UT Austin


Outline

Prefetch aggressively as well as safely

  • Motivation

  • Challenges/principles

  • NPS system design

  • Conclusion



What is Web Prefetching?

  • Speculatively fetch data that will be accessed in the future

  • Typical prefetch mechanism [PM96, MC98, CZ01]

[Diagram: the client sends demand requests; the server returns responses plus hint lists; the client then sends prefetch requests and receives prefetch responses.]

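The hint-list exchange on this slide can be sketched in a few lines of Python; every name here (serve_demand, client_fetch, the URLs) is illustrative, not part of any real prefetching API:

```python
# Sketch of the typical prefetch mechanism: demand responses carry a hint
# list of predicted next objects, which the client then fetches speculatively.

def serve_demand(url, predictions):
    """Return the demand object plus a hint list of predicted next URLs."""
    body = f"<content of {url}>"      # stand-in for the real object
    hints = predictions.get(url, [])  # predictor output for this URL
    return body, hints

def client_fetch(url, predictions, cache):
    """Fetch a demand object, then prefetch every hinted object."""
    body, hints = serve_demand(url, predictions)
    cache[url] = body
    for hinted in hints:              # speculative (prefetch) requests
        if hinted not in cache:
            cache[hinted] = serve_demand(hinted, predictions)[0]
    return body

predictions = {"/index.html": ["/news.html", "/sports.html"]}
cache = {}
client_fetch("/index.html", predictions, cache)
assert set(cache) == {"/index.html", "/news.html", "/sports.html"}
```

The rest of the talk is about deciding how aggressively the prefetch loop in `client_fetch` should run.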


Why Web Prefetching?

  • Benefits [GA93, GS95, PM96, KLM97, CB98, D99, FCL99, KD99, VYKSD01, …]

    • Reduces response times seen by users

    • Improves service availability

  • Encouraging trends

    • Numerous web applications getting deployed

      • News, banking, shopping, e-mail…

    • Technology is improving rapidly

      • Growing capacities and falling prices of disks and networks

Prefetch Aggressively



Why doesn’t everyone prefetch?

  • Extra resources on servers, network and clients

  • Interference with demand requests

    • Two types of interference

      • Self-interference – applications hurt themselves

      • Cross-interference – applications hurt others

    • Interference at various components

      • Servers – Demand requests queued behind prefetch

      • Networks – Demand packets queued or dropped

      • Clients – Caches polluted by displacing more useful data



Example: Server Interference

  • Common load vs. response curve

    • Constant-rate prefetching reduces server capacity

[Figure: Avg. demand response time (s) vs. demand connection rate (conns/sec), 100–800 conns/sec; curves for Demand only, Pfrate=1, and Pfrate=5.]

Prefetch Aggressively, BUT SAFELY



Outline

Prefetch aggressively as well as safely

  • Motivation

  • Challenges/principles

    • Self-tuning

    • Decoupling prediction from resource management

    • End-to-end resource management

  • NPS system design

  • Conclusion



Goal 1: Self-tuning System

  • Proposed solutions use “magic numbers”

    • Prefetch thresholds [D99, PM96, VYKSD01, …]

    • Rate limiting [MC98, CB98]

  • Limitations of manual tuning

    • Difficult to determine “good” thresholds

      • Good thresholds depend on spare resources

    • “Good” threshold varies over time

    • Sharp performance penalty when mistuned

  • Principle 1: Self-tuning

    • Prefetch according to spare resources

    • Benefit: Simplifies application design



Goal 2: Separation of Concerns

  • Prefetching has two components

    • Prediction – Which objects are beneficial to prefetch?

    • Resource management – How many can we actually prefetch?

  • Traditional techniques do not differentiate

    • Prefetch if prob(access) > 25%

    • Prefetch only top 10 important URLs

    • Wrong Way! We lose the flexibility to adapt

  • Principle 2: Decouple prediction from resource management

    • Prediction: Application identifies all useful objects

      • In decreasing order of importance

    • Resource management: Uses Principle 1

      • Aggressive – when abundant resources

      • Safe – when no resources

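A minimal sketch of Principle 2: the predictor ranks all beneficial objects, and a separate resource manager decides how many are actually prefetched. Function names and probabilities are illustrative, not NPS's interfaces:

```python
# Decoupling prediction from resource management: the predictor never
# truncates its output; only the resource manager does, based on budget.

def predict(access_probs):
    """Prediction: return every candidate URL in decreasing importance."""
    ranked = sorted(access_probs.items(), key=lambda kv: kv[1], reverse=True)
    return [url for url, _ in ranked]

def resource_manage(candidates, budget):
    """Resource management: prefetch only as many as spare capacity allows."""
    return candidates[:budget]

probs = {"/a": 0.9, "/b": 0.5, "/c": 0.1}
assert resource_manage(predict(probs), budget=3) == ["/a", "/b", "/c"]  # abundant
assert resource_manage(predict(probs), budget=0) == []                  # scarce: safe
```

Contrast this with a hard-wired threshold like `prob > 0.25`, which cannot adapt when spare resources change.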


Goal 3: Deployability

  • Ideal resource management vs. deployability

    • Servers

      • Ideal: OS scheduling of CPU, Memory, Disk…

      • Problem: Complexity – N-Tier systems, Databases, …

    • Networks

      • Ideal: Use differentiated services/ router prioritization

      • Problem: Requires support at every router

    • Clients

      • Ideal: OS scheduling, transparent informed prefetching

      • Problem: Millions of deployed browsers

  • Principle 3: End-to-end resource management

    • Server – External monitoring and control

    • Network – TCP-Nice

    • Client – Javascript tricks



Outline

Prefetch aggressively as well as safely

  • Motivation

  • Principles for a prefetching system

    • Self-tuning

    • Decoupling prediction from resource management

    • End-to-end resource management

  • NPS prototype design

    • Prefetching mechanism

    • External monitoring

    • TCP-Nice

    • Evaluation

  • Conclusion



Prefetch Mechanism

[Diagram: one server machine hosts the Munger, Demand Server, Hint Server, and Prefetch Server, all sharing the fileset; the client sends demand requests to the Demand Server, receives hint lists from the Hint Server, and sends prefetch requests to the Prefetch Server.]

1. Munger adds Javascript to html pages

2. Client fetches html page

3. Javascript on html page fetches hint list

4. Javascript on html page prefetches objects

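Step 1 of the mechanism could look roughly like the following sketch; the script URL and the rewrite rule are assumptions for illustration, not the actual NPS munger:

```python
# Sketch of the munger: rewrite each demand HTML page so that it loads a
# prefetching script, which later fetches the hint list and the hinted objects.

PREFETCH_SCRIPT = '<script src="/prefetch.js"></script>'  # hypothetical script URL

def munge(html):
    """Insert the prefetch script just before </body> of a demand page."""
    if "</body>" in html:
        return html.replace("</body>", PREFETCH_SCRIPT + "</body>", 1)
    return html + PREFETCH_SCRIPT  # fallback for pages without a body tag

page = "<html><body>news of the day</body></html>"
assert munge(page).count("/prefetch.js") == 1
```

Because the rewrite happens in front of the demand server, unmodified browsers and an unmodified server can participate, which is what makes the system immediately deployable.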


End-to-end Monitoring and Control

[Diagram: a Monitor sits alongside the Demand Server and Hint Server; the client talks to both servers.]

  • Principle: Low response times ⇒ server not loaded

    • Periodic probing for response times (GET http://repObj.html → 200 OK…)

    • Estimation of spare resources (budget) at server – AIMD

    • Distribution of budget

      • Controls the number of clients allowed to prefetch

  • Client loop: while (1) { getHint( ); prefetchHint( ); }

  • Hint server: if (budgetLeft) send(hints); else send(“return later”);

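The monitor's AIMD budget estimation might be sketched as follows; the threshold and constants are illustrative, not the paper's values:

```python
# AIMD budget update driven by probe response times: grow the prefetch
# budget additively while probes come back fast, halve it when they are slow.

def update_budget(budget, probe_time_s, threshold_s=0.1,
                  increase=1, decrease_factor=0.5):
    """Return the new prefetch budget after one response-time probe."""
    if probe_time_s < threshold_s:                 # server lightly loaded
        return budget + increase                   # additive increase
    return max(0, int(budget * decrease_factor))   # multiplicative decrease

budget = 8
budget = update_budget(budget, probe_time_s=0.02)  # fast probe -> grow
assert budget == 9
budget = update_budget(budget, probe_time_s=0.5)   # slow probe -> back off
assert budget == 4
```

The hint server then grants hint lists to at most `budget` clients per interval, which is how the budget is distributed.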


Monitor Evaluation (1)

  • End-to-end monitoring makes prefetching safe

[Figure: Avg. demand response time (s) vs. demand connection rate (conns/sec), 0–800 conns/sec; curves for Manual tuning (Pfrate=5), Manual tuning (Pfrate=1), No-Prefetching, and Monitor.]



Monitor Evaluation (2)

  • Manual tuning is too damaging at high load

[Figure: Bandwidth (Mbps) vs. demand connection rate (conns/sec), 0–800 conns/sec; curves for No-Prefetching and for demand and prefetch bandwidth with pfrate=1.]



Monitor Evaluation (2)

  • Manual tuning is either too timid or too damaging

  • End-to-end monitoring is both aggressive and safe

[Figure: Bandwidth (Mbps) vs. demand connection rate (conns/sec), 0–800 conns/sec; curves for No-Prefetching, Demand:Monitor, Prefetch:Monitor, Demand:pfrate=1, and Prefetch:pfrate=1.]



Network Resource Management

  • Demand and prefetch on separate connections

  • Why is this required?

    • HTTP/1.1 persistent connections

    • In-order delivery of TCP

    • So prefetch responses can queue ahead of demand responses

  • How to ensure separation?

    • Prefetching on a separate server port

  • How to use the prefetched objects?

    • Javascript tricks – In the paper

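The separation on this slide could be sketched as routing prefetch requests to a dedicated server port, so prefetch bytes never share a persistent demand connection (the evaluation later runs a second Apache instance on port 8085; exact ports here are illustrative):

```python
# Demand and prefetch on separate connections by using separate server ports,
# so TCP's in-order delivery on the demand connection is never blocked by
# prefetch data.

DEMAND_PORT = 80
PREFETCH_PORT = 8085  # hypothetical second server instance for prefetch traffic

def request_url(host, path, prefetch=False):
    """Build the URL for a request, routing prefetches to their own port."""
    port = PREFETCH_PORT if prefetch else DEMAND_PORT
    return f"http://{host}:{port}{path}"

assert request_url("example.com", "/a.html") == "http://example.com:80/a.html"
assert request_url("example.com", "/a.html", prefetch=True) == \
       "http://example.com:8085/a.html"
```

Distinct connections also give the transport layer a clean place to treat prefetch traffic differently, which is where TCP Nice comes in.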


Network Resource Management

  • Prefetch connections use TCP Nice

  • TCP Nice

    • A mechanism for background transfers

    • End-to-end TCP congestion control

    • Monitors RTTs and backs off when congestion is detected

    • Previous study [OSDI 2002]

      • Provably bounds self- and cross-interference

      • Utilizes significant spare network capacity

    • Server-side deployable

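TCP Nice's core idea can be sketched in user-level pseudocode; the real mechanism is in-kernel congestion control, and the trigger fraction here is illustrative:

```python
# Nice-style backoff: treat round-trip times well above the observed minimum
# as early congestion, and shrink the window before demand traffic suffers.

def nice_window(window, rtt_s, min_rtt_s, max_rtt_s, fraction=0.2):
    """Return the new congestion window after one RTT sample."""
    # Congestion signal: RTT crossed a fraction of the min..max RTT range.
    congested = rtt_s > min_rtt_s + fraction * (max_rtt_s - min_rtt_s)
    if congested:
        return max(1, window // 2)  # back off aggressively
    return window + 1               # otherwise grow like normal TCP

w = nice_window(16, rtt_s=0.25, min_rtt_s=0.05, max_rtt_s=0.30)
assert w == 8    # high RTT -> halved
w = nice_window(16, rtt_s=0.06, min_rtt_s=0.05, max_rtt_s=0.30)
assert w == 17   # low RTT -> additive growth
```

Because the signal is rising RTT rather than packet loss, background flows yield before they inflict queuing delay on demand flows.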


End-to-end Evaluation

  • Measure avg. response times for demand reqs.

    • Compare with No-Prefetching and Hand-tuned

  • Experimental setup

    • Servers: Apache on port 80 (demand), Apache on port 8085 (prefetch), hint server running PPM prediction

    • Fileset and trace: IBM server

    • Network: cable modem, Abilene

    • Client: httperf



Prefetching with Abundant Resources

  • Both Hand-tuned and NPS give benefits

    • Note: Hand-tuned uses the best-performing setting



Tuning the No-Avoidance Case

  • Hand-tuning takes effort

  • NPS is self-tuning



Prefetching with Scarce Resources

  • Hand-tuned inflates demand response times by 2–8x

  • NPS causes little damage to demand



Conclusions

  • Prefetch aggressively, but safely

  • Contributions

    • A prefetching architecture

      • Self-tuning

      • Decouples prediction from resource management

      • Deployable – few modifications to existing infrastructure

    • Benefits

      • Substantial improvements with abundant resources

      • No damage with scarce resources

    • NPS prototype

      http://www.cs.utexas.edu/~rkoku/RESEARCH/NPS/



Thanks





Client Resource Management

  • Resources – CPU, memory and disk caches

  • Heuristics to control cache pollution

    • Limit the space prefetch objects take

    • Short expiration time for prefetched objects

  • Mechanism to avoid CPU interference

    • Start prefetching after all demand done

      • Handles self-interference – the more common case

    • What about cross-interference?

      • Client modifications might be necessary

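The two cache-pollution heuristics on this slide can be sketched together; the space limit, TTL, and entry layout are illustrative, not the prototype's values:

```python
# Client-cache heuristics: cap the space prefetched objects may occupy
# (evicting the oldest prefetched entry, never a demand entry) and give
# prefetched objects a short expiration time.

PREFETCH_TTL_S = 60          # short expiry so stale speculative data leaves fast
PREFETCH_SPACE_LIMIT = 2     # max number of prefetched entries in the cache

def add_prefetched(cache, url, body, now):
    """Insert a prefetched object, enforcing the space limit and TTL."""
    prefetched = [u for u, e in cache.items() if e["prefetched"]]
    if len(prefetched) >= PREFETCH_SPACE_LIMIT:
        oldest = min(prefetched, key=lambda u: cache[u]["added"])
        del cache[oldest]                     # evict only prefetched entries
    cache[url] = {"body": body, "prefetched": True,
                  "added": now, "expires": now + PREFETCH_TTL_S}

cache = {}
add_prefetched(cache, "/a", "A", now=0)
add_prefetched(cache, "/b", "B", now=1)
add_prefetched(cache, "/c", "C", now=2)   # exceeds limit -> evicts "/a"
assert "/a" not in cache and set(cache) == {"/b", "/c"}
```

Bounding the prefetched fraction limits how much useful demand data speculation can displace, which is the cache analogue of the budget used at the server.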

