
Improving Performance Isolation on Chip Multiprocessors via an Operating System Scheduler






  1. Improving Performance Isolation on Chip Multiprocessors via an Operating System Scheduler Quinn Gaumer ECE 259/CPS 221

  2. Outline • Definitions • Motivation • Algorithm • Evaluation

  3. Definitions • Fair X: the value of metric X under fair cache allocation • Slow Schedule: thread run with high-miss-rate co-runners • Fast Schedule: thread run with low-miss-rate co-runners

  4. Motivation • The performance of programs on multiprocessors depends on their co-runners • Shared caches aren't necessarily shared fairly • Does it really matter? If one process suffers, then another gains…

  5. Cache Fair Algorithm • Guarantees a program runs as fast as it would if cache resources were split equally • Does not actually change the cache allocation • Threads with less cache space will have lower IPC • Threads with IPC higher than their Fair IPC should run for less time • Threads with IPC lower than their Fair IPC should be kept on the processor longer
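The compensation rule on this slide can be sketched as a simple timeslice scaler. This is an illustrative model, not the paper's exact scheduler bookkeeping: the function name and the proportional-scaling policy are assumptions.

```python
def adjust_timeslice(base_slice_ms, actual_ipc, fair_ipc):
    """Scale a Cache Fair thread's CPU slice so its observed IPC
    converges toward its Fair IPC (illustrative sketch only).

    - actual_ipc > fair_ipc: the thread got more than its fair cache
      share, so shorten its slice.
    - actual_ipc < fair_ipc: the thread was squeezed in the cache,
      so lengthen its slice to compensate.
    """
    if fair_ipc <= 0 or actual_ipc <= 0:
        return base_slice_ms  # no information; leave slice unchanged
    return base_slice_ms * (fair_ipc / actual_ipc)
```

For example, a thread running at twice its Fair IPC would see its slice halved, while one running at half its Fair IPC would see it doubled.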

  6. Cache Fair Algorithm • What does it actually need to do? • Maintain approximate fair IPC • Keep track of current IPC • Scheduler Compensation

  7. Cache Fair Algorithm • Two classes of threads: • Cache Fair: threads regulated so that their IPC matches their Fair IPC • Best Effort: threads that absorb the compensatory timeslice adjustments made for Cache Fair threads

  8. Fair IPC Model • For each Cache Fair thread, determine the Fair Cache Miss Rate • Run with several co-runners • Determine cache miss rates • Use linear regression • Fair Cache Miss Rate → Fair IPC • Done online
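The estimation step above can be sketched with an ordinary least-squares fit: sample (miss rate, IPC) pairs under different co-runners, fit a line, and evaluate it at the fair miss rate. This is a minimal sketch assuming a simple linear miss-rate-to-IPC relationship; the function names and the single-variable regression are simplifications of the paper's model.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def estimate_fair_ipc(miss_rates, ipcs, fair_miss_rate):
    """Fit IPC as a linear function of cache miss rate from samples
    taken with several co-runners, then evaluate the fit at the
    thread's fair (equal-share) miss rate."""
    a, b = fit_line(miss_rates, ipcs)
    return a * fair_miss_rate + b
```

Because sampling happens online, the estimate can be refreshed periodically as the thread's behavior changes.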

  9. Implementation • Sampling • Run thread with various co-runners • Determine Cache Miss Rate for all threads • Use Linear Regression to determine Fair IPC • Scheduling • Checks IPC and Fair IPC • Modifies Cache Fair Thread CPU slice • Adjusts Best Effort Thread to compensate

  10. Evaluation • What should our performance metric be? • Raw IPC can't be used on its own, since only the scheduler is being changed • Performance variability • Difference between running with high- and low-contention co-runner threads • Absolute performance • Difference between the normal scheduler and the Cache Fair scheduler
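The variability metric above can be expressed as the percent difference between a program's slow-schedule and fast-schedule runtimes, relative to the fast schedule. The exact normalization is an assumption for illustration.

```python
def variability_pct(fast_runtime, slow_runtime):
    """Performance variability: how much slower a program runs under
    the slow schedule (high-contention co-runners) than under the
    fast schedule, as a percentage of the fast-schedule runtime."""
    return 100.0 * (slow_runtime - fast_runtime) / fast_runtime
```

Under this metric, a program taking 104 s on the slow schedule versus 100 s on the fast schedule has 4% variability, matching the bound reported for the Cache Fair scheduler on the next slide.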

  11. Program Isolation • Difference between programs run on the fast and slow schedules • Cache-Fair variability is always less than Default • Cache-Fair variability is always less than 4%

  12. Absolute Performance • Normalized to the fast schedule under the Default scheduler • High-IPC programs experience speedup • Low-IPC programs experience slowdown • What causes this? • Overall absolute performance is competitive

  13. Aggregate IPC • All programs are run on the slow schedule • When they do not meet their Fair IPC, they are compensated • The slow schedule means co-runners occupy more of the cache

  14. Side Effects • Best Effort threads are also affected • Side effects can be limited by increasing the number of Cache Fair and Best Effort threads

  15. Questions?
