
Scheduling for parallelism



  1. Scheduling for parallelism Rika Nakahara, Yixin Luo

  2. Agenda
  • What is Scheduling?
  • Contention for resources
    • Core/CPU time
    • LLC
    • I/Os and Prefetcher
    • Locks and Shared Data
  • Metric
    • Throughput
    • Quality of Service

  3. What is Scheduling?
  • "In computer science, scheduling is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communications bandwidth)." - Wikipedia
  • Why scheduling for parallelism? [Figure: a single-core processor time-slices task1 … taskn, while a multicore processor (Core 1, Core 2) runs tasks in parallel, one queue per core]
  Reference: http://en.wikipedia.org/wiki/Scheduling_(computing)
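To make the definition concrete, here is a minimal sketch of one classic single-core policy, round-robin time slicing: each task runs for at most one quantum, then goes to the back of the queue. The task names and work units are illustrative, not from the slides.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Run each task for at most `quantum` work units per turn, cycling
    until all finish. `tasks` maps task name -> remaining work units.
    Returns the order in which tasks complete."""
    queue = deque(tasks.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum          # this task gets one quantum of CPU time
        if remaining > 0:
            queue.append((name, remaining))  # not done: back of the queue
        else:
            order.append(name)               # finished within this quantum
    return order
```

For example, with a quantum of 2, a 1-unit task finishes on its first turn while a 3-unit task needs two turns, so short tasks complete ahead of long ones.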

  4. Contention for resources
  • Core/CPU time
  • LLC
  • I/Os and Prefetcher
  • Locks and Shared Data
  • Solution?
  [Figure: Core 1 and Core 2, each with private L1$ and L2$, sharing the LLC, memory controller, and memory; both contend for Lock A]

  5. How to schedule for…
  • 100 threads contending for 64 cores and locks?
    • A running thread spinning on a lock? Put some threads to longer sleep.
    • A sleeping thread holding the lock?
  • Multiple cores sharing the LLC and memory bus?
    • Cache pollution? How to pick programs to run together?
  • An SMT processor?
    • How to determine which threads to co-schedule?
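The spinning-vs-sleeping tension above can be sketched as a spin-then-block lock: spin briefly (cheap if the holder releases soon), then fall back to a blocking acquire so an oversubscribed thread yields its core instead of burning CPU time. The spin count is a tuning knob assumed here for illustration, not a value from the papers.

```python
import threading

class SpinThenBlockLock:
    """Try the lock a bounded number of times without blocking; if it is
    still held, fall back to a blocking acquire (the thread sleeps until
    the holder releases, freeing the core for other threads)."""
    SPIN_TRIES = 100  # illustrative tuning knob

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        for _ in range(self.SPIN_TRIES):
            if self._lock.acquire(blocking=False):
                return                 # got the lock while spinning
        self._lock.acquire()           # give up spinning; block until free

    def release(self):
        self._lock.release()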

  6. Metric
  • There is usually a tradeoff between throughput and QoS.
  • Quality of service:
    • Latency/turnaround time
    • Fairness
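One standard way to make this tradeoff measurable (these are common definitions from the scheduling literature, not taken from the slides): compute each program's slowdown under co-scheduling, then report weighted speedup as a throughput metric and the max/min slowdown ratio as an unfairness metric.

```python
def qos_metrics(solo_times, shared_times):
    """For each program, slowdown = time when co-run / time when run alone.
    Throughput here is weighted speedup (sum of 1/slowdown); unfairness is
    max slowdown over min slowdown (1.0 = perfectly fair)."""
    slowdowns = [shared / solo for solo, shared in zip(solo_times, shared_times)]
    throughput = sum(1.0 / sd for sd in slowdowns)
    unfairness = max(slowdowns) / min(slowdowns)
    return throughput, unfairness
```

A schedule can raise throughput while hurting fairness, e.g. by always favoring the cache-friendly program; these two numbers expose that tradeoff.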

  7. Addressing Shared Resource Contention in Multicore Processors via Scheduling
  • Scheduling = classification & algorithm
  • Classification: SDC, Animal, Miss Rate, Pain
  • Algorithm: distribute programs with high memory intensity across caches, i.e. determine which two applications to pair together on one shared cache
  Sergey Zhuravlev, Sergey Blagodurov, and Alexandra Fedorova. 2010. "Addressing shared resource contention in multicore processors via scheduling." In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10). ACM, New York, NY, USA, 129-142. DOI=10.1145/1736020.1736036 http://doi.acm.org/10.1145/1736020.1736036
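The "distribute intensity" idea behind the algorithm can be sketched as follows: rank applications by a contention proxy such as LLC miss rate, then pair the most memory-intensive with the least intensive, so no shared cache receives two heavy miss streams. The application names and miss-rate values below are illustrative assumptions, and this is a simplification of the paper's classification schemes.

```python
def pair_by_miss_rate(apps):
    """Pair applications so high- and low-miss-rate programs share a cache.
    `apps` maps application name -> LLC miss rate (e.g. misses per 1000
    instructions). Returns (heavy, light) pairs; with an odd count, the
    median application is left unpaired."""
    ranked = sorted(apps, key=apps.get)   # lightest first, heaviest last
    pairs = []
    while len(ranked) >= 2:
        pairs.append((ranked.pop(), ranked.pop(0)))  # heaviest with lightest
    return pairs
```

Pairing extremes keeps total miss pressure per shared cache roughly balanced, which is the intuition the paper's more refined Pain and Miss Rate classifiers build on.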

  8. Probabilistic Job Symbiosis Modeling for SMT Processor Scheduling
  • Co-scheduling: running multiple threads together
  • Model-driven scheduling: estimate a model from wait cycles and dynamically recompute the co-schedule
  Stijn Eyerman and Lieven Eeckhout. 2010. "Probabilistic job symbiosis modeling for SMT processor scheduling." In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10). ACM, New York, NY, USA, 91-102. DOI=10.1145/1736020.1736033 http://doi.acm.org/10.1145/1736020.1736033
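A heavily simplified sketch of the model-driven idea: estimate each job's progress from its base (useful) cycles versus cycles spent waiting on shared resources, then pick the co-schedule that maximizes total estimated progress. The field names, the linear progress estimate, and the cycle counts below are assumptions for illustration; the paper's actual model is probabilistic and more detailed.

```python
from itertools import combinations

def estimated_progress(base_cycles, wait_cycles):
    # Fraction of single-run speed a job retains when co-run:
    # cycles doing useful work / (useful work + cycles stalled waiting).
    return base_cycles / (base_cycles + wait_cycles)

def best_coschedule(cycle_counts):
    """Pick the job pair with the highest summed estimated progress.
    `cycle_counts[(a, b)]` holds measured (base, wait) cycles for job a
    when co-run with job b; the matrix here is assumed data."""
    jobs = sorted({job for pair in cycle_counts for job in pair})
    best, best_score = None, -1.0
    for a, b in combinations(jobs, 2):
        score = (estimated_progress(*cycle_counts[(a, b)])
                 + estimated_progress(*cycle_counts[(b, a)]))
        if score > best_score:
            best, best_score = (a, b), score
    return best
```

Because wait cycles are measured online, the scheduler can re-run this selection periodically and adapt the co-schedule as job behavior changes, which is the "dynamically recompute" point on the slide.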

  9. Decoupling Contention Management from Scheduling
  • Increasing load diminishes the effectiveness of both spinning and blocking.
  • Decouple load management from scheduling.
  F. Ryan Johnson, Radu Stoica, Anastasia Ailamaki, and Todd C. Mowry. 2010. "Decoupling contention management from scheduling." In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10).

  10. Flexible Architectural Support for Fine-Grain Scheduling
  • Bypass shared resource contention by message passing.
  Daniel Sanchez, Richard M. Yoo, and Christos Kozyrakis. 2010. "Flexible architectural support for fine-grain scheduling." In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10).
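A software analogy of the message-passing idea (the paper itself proposes hardware support): instead of every worker contending on one shared, lock-protected task pool, a dispatcher sends each task to a per-worker inbox, so workers never touch shared mutable state. The round-robin dispatch policy and queue sizes here are assumptions for illustration.

```python
import threading
import queue

def run_fine_grain(tasks, n_workers=2):
    """Dispatch callables to per-worker inboxes (message passing) rather
    than a single shared work pool; collect results via a result queue.
    Returns the sorted list of results."""
    inboxes = [queue.Queue() for _ in range(n_workers)]
    results = queue.Queue()

    def worker(inbox):
        while True:
            task = inbox.get()       # receive a message
            if task is None:         # sentinel: no more work for this worker
                return
            results.put(task())

    threads = [threading.Thread(target=worker, args=(box,)) for box in inboxes]
    for t in threads:
        t.start()
    for i, task in enumerate(tasks):          # round-robin dispatch
        inboxes[i % n_workers].put(task)
    for box in inboxes:
        box.put(None)                         # tell each worker to stop
    for t in threads:
        t.join()
    return sorted(results.queue)
```

Each inbox has a single consumer, so workers dequeue without contending with each other; the only shared point is the dispatch step, mirroring how hardware task queues sidestep contention on shared memory structures.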
