System Software for Big Data Computing


Presentation Transcript


  1. System Software for Big Data Computing Cho-Li Wang The University of Hong Kong

  2. CS Gideon-II & CC MDRP Clusters (HKU High-Performance Computing Lab.) • Total # of cores: 3004 CPU + 5376 GPU cores • RAM size: 8.34 TB • Disk storage: 130 TB • Peak computing power: 27.05 TFlops • GPU cluster (Nvidia M2050, "Tianhe-1a"): 7.62 TFlops • 31.45 TFlops (×12 in 3.5 years) [Figure: cluster capacities 2.6T, 3.1T, 20T]

  3. Big Data: The "3Vs" Model • High Volume (amount of data) • High Velocity (speed of data in and out) • High Variety (range of data types and sources) In 2010 the world held about 800,000 petabytes of data (a stack of DVDs reaching from the earth to the moon and back); by 2020 that pile of DVDs would stretch halfway to Mars. Roughly 2.5 × 10^18 bytes of data are created every day.

  4. Our Research • Heterogeneous Manycore Computing (CPUs + GPUs) • Big Data Computing on Future Manycore Chips • Multi-granularity Computation Migration

  5. (1) Heterogeneous Manycore Computing (CPUs + GPUs) JAPONICA: Java with Auto-Parallelization ON GraphIcs Coprocessing Architecture

  6. Heterogeneous Manycore Architecture (CPUs + GPU)

  7. New GPU & Coprocessors

  8. #1 in Top500 (11/2012): Titan @ Oak Ridge National Lab. • 18,688 AMD Opteron 6274 16-core CPUs (32 GB DDR3 each) • 18,688 Nvidia Tesla K20X GPUs • Total RAM size: over 710 TB • Total storage: 10 PB • Peak performance: 27 Petaflop/s • GPU : CPU = 1.311 TF/s : 0.141 TF/s = 9.3 : 1 • Linpack: 17.59 Petaflop/s • Power consumption: 8.2 MW Titan compute board: 4 AMD Opteron + 4 NVIDIA Tesla K20X GPUs; NVIDIA Tesla K20X (Kepler GK110) GPU: 2688 CUDA cores
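
A quick sanity check on those figures (our arithmetic, not from the slide): 18,688 nodes × (1.311 + 0.141) TF/s ≈ 18,688 × 1.452 TF/s ≈ 27.1 PF/s, matching the quoted 27 Petaflop/s peak, and 1.311 / 0.141 ≈ 9.3, the stated GPU:CPU ratio.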

  9. Design Challenge: GPU Can't Handle Dynamic Loops GPU = SIMD/vector execution; data dependency issues (RAW, WAW). Solutions? • Static loop: for(i=0; i<N; i++) { C[i] = A[i] + B[i]; } • Dynamic loop: for(i=0; i<N; i++) { A[ w[i] ] = 3 * A[ r[i] ]; } • Non-deterministic data dependencies inhibit exploitation of inherent parallelism; only DO-ALL loops or embarrassingly parallel workloads get admitted to GPUs.
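
A minimal sketch (ours, not from the slides) of why the dynamic loop above resists naive parallelization: when the index arrays w[] and r[] overlap, the result depends on iteration order, a RAW dependence that only becomes visible at run time.

    // If w[] and r[] overlap, iteration 1 must see the value written by
    // iteration 0; running the iterations concurrently may instead read the
    // stale value, so the loop cannot be blindly handed to the GPU.
    public class DynamicLoopDemo {
        public static void main(String[] args) {
            double[] A = {1, 2, 3, 4};
            int[] w = {1, 2};   // iteration 0 writes A[1]
            int[] r = {0, 1};   // iteration 1 reads A[1]  -> runtime RAW on A[1]

            for (int i = 0; i < w.length; i++) {
                A[w[i]] = 3 * A[r[i]];
            }
            // Sequential result: [1.0, 3.0, 9.0, 4.0]. A concurrent run that
            // reads the old A[1] = 2 would produce A[2] = 6 instead of 9.
            System.out.println(java.util.Arrays.toString(A));
        }
    }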

  10. Dynamic loops are common in scientific and engineering applications, and many of them are dynamic DO-ALL loops. [Chart: proportion of dynamic vs. static DO-ALL loops] Source: Z. Shen, Z. Li, and P. Yew, "An Empirical Study on Array Subscripts and Data Dependencies"

  11. GPU-TLS: Thread-Level Speculation on GPU [Figure: dependence cases: intra-thread RAW (valid), inter-thread RAW in GPU, true inter-thread RAW; the GPU executes in lock-step within a warp (32 threads per warp)] • Incremental parallelization: sliding-window style execution. • Efficient dependency-checking schemes. • Deferred update: speculative updates are stored in the write buffer of each thread until commit time. • Three phases of execution.
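
A hedged, CPU-side sketch of the deferred-update idea (GPU-TLS itself lives inside CUDA kernels; the class and method names below are ours, not the project's): each speculative iteration buffers its writes and records its reads, and a commit-time check against earlier iterations' write sets catches true inter-thread RAW violations.

    import java.util.*;

    // One speculative loop iteration with a private write buffer and read set.
    class SpecIteration {
        final Map<Integer, Double> writeBuf = new HashMap<>(); // deferred updates
        final Set<Integer> readSet = new HashSet<>();

        double read(double[] A, int idx) {
            readSet.add(idx);
            // intra-thread RAW is valid: read our own buffered write if present
            return writeBuf.getOrDefault(idx, A[idx]);
        }

        void write(int idx, double v) {
            writeBuf.put(idx, v);   // invisible to other iterations until commit
        }

        // Commit in original iteration order; return false (misspeculation) if
        // an earlier iteration wrote a location this iteration already read.
        boolean commit(double[] A, List<SpecIteration> earlier) {
            for (SpecIteration e : earlier)
                if (!Collections.disjoint(e.writeBuf.keySet(), readSet))
                    return false;   // true inter-thread RAW: discard and re-execute
            writeBuf.forEach((idx, v) -> A[idx] = v);
            return true;
        }
    }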

  12. JAPONICA: Profile-Guided Work Dispatching • Inter-iteration dependences: Read-After-Write (RAW), Write-After-Read (WAR), Write-After-Write (WAW). • Dynamic profiling estimates each loop's dependency density, and the scheduler dispatches work accordingly: high density → multi-core CPU (8 high-speed x86 cores, parallel); medium density → many-core coprocessor (64 x86 cores, highly parallel); low/none → GPU (2880 cores, massively parallel).
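
An illustrative sketch of this dispatching rule; the thresholds are made-up placeholders, not JAPONICA's actual values.

    // Map a loop's measured dependency density (DD) to a target device,
    // following the slide's high -> CPU, medium -> many-core coprocessor,
    // low/none -> GPU rule. Threshold values here are arbitrary.
    enum Target { CPU_MULTI_CORE, MANYCORE_COPROCESSOR, GPU_MANY_CORE }

    class DispatchPolicy {
        static Target dispatch(double dependencyDensity) {
            if (dependencyDensity > 0.5)  return Target.CPU_MULTI_CORE;        // high DD
            if (dependencyDensity > 0.05) return Target.MANYCORE_COPROCESSOR;  // medium DD
            return Target.GPU_MANY_CORE;                                       // low / no DD
        }
    }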

  13. JAPONICA: System Architecture • Input: sequential Java code with user annotation (one loop), translated by JavaR. • Static dependence analysis builds the Program Dependence Graph (PDG); uncertain cases go to the profiler (on GPU) for dependency density analysis with intra-warp and inter-warp dependence checks. • Profiling results: no dependence → DO-ALL parallelizer; WAW/WAR → privatization; RAW → speculator (GPU-TLS). • Generated code: CUDA kernels with GPU-TLS / privatization plus a single CPU thread, or CUDA kernels plus CPU multi-threads. • Task scheduler: CPU-GPU co-scheduling with task sharing and task stealing; tasks are assigned to CPU and GPU according to their dependency density (DD): high DD → CPU single core; low DD → CPU + GPU-TLS; zero DD → CPU multithreads + GPU (CPU queue: low, high, 0; GPU queue: low, 0). • Communication between CPU and GPU.
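
A rough sketch of the co-scheduling idea (illustrative only, not JAPONICA's API): tasks land on a CPU or GPU queue according to dependency density, and an idle device steals from the other queue so neither side starves.

    import java.util.concurrent.ConcurrentLinkedDeque;

    class CoScheduler {
        private final ConcurrentLinkedDeque<Runnable> cpuQueue = new ConcurrentLinkedDeque<>();
        private final ConcurrentLinkedDeque<Runnable> gpuQueue = new ConcurrentLinkedDeque<>();

        // High/medium-DD work goes to the CPU queue, low/zero-DD work to the GPU queue.
        void submit(Runnable task, double dependencyDensity) {
            if (dependencyDensity > 0.05) cpuQueue.addLast(task);
            else gpuQueue.addLast(task);
        }

        // Each device prefers its own queue and steals from the other when idle,
        // which keeps both devices busy near the tail of a loop (task stealing).
        Runnable nextForCpu() {
            Runnable t = cpuQueue.pollFirst();
            return (t != null) ? t : gpuQueue.pollLast();
        }

        Runnable nextForGpu() {
            Runnable t = gpuQueue.pollFirst();
            return (t != null) ? t : cpuQueue.pollLast();
        }
    }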

  14. (2) Crocodiles: Cloud Runtime with Object Coherence On Dynamic tILES

  15. “General Purpose” Manycore: a tile-based architecture in which cores are connected through a 2D network-on-chip.
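
For concreteness, a small sketch of how cores on a 2D mesh network-on-chip can be addressed; the mesh width is a parameter of the sketch, not a property of any particular chip.

    // Derive a core's (x, y) tile coordinates from its id and compute the
    // XY (dimension-ordered) routing distance between two tiles.
    class Mesh {
        final int width;
        Mesh(int width) { this.width = width; }
        int x(int core) { return core % width; }
        int y(int core) { return core / width; }
        int hops(int a, int b) {
            return Math.abs(x(a) - x(b)) + Math.abs(y(a) - y(b));
        }
    }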

  16. Crocodiles (鳄鱼) @ HKU (01/2013-12/2015) • Crocodiles: Cloud Runtime with Object Coherence On Dynamic tILES, for future 1000-core tiled processors. [Chip diagram: four zones (ZONE 1-4), each with RAM, a memory/DRAM controller, GbE and PCI-E interfaces]

  17. Dynamic Zoning • Multi-tenant cloud architecture: the partitioning varies over time, mimicking a "data center on a chip". • Performance isolation • On-demand scaling • Power efficiency (high flops/watt)

  18. Design Challenge: the “Off-chip Memory Wall” Problem • Memory density has doubled nearly every two years, while performance has improved slowly (a memory access still costs 100+ core clock cycles). • DRAM performance (latency) has improved only slowly over the past 40 years.

  19. Lock Contention in Multicore Systems [Profile: physical memory allocation performance, sorted by function] As more cores are added, more processing time is spent contending for locks.
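
A self-contained Java illustration of the same contention effect (not the Linux allocator itself): eight threads hammering a single shared atomic serialize on it, while a striped per-thread counter, the analogue of per-CPU free lists, scales much better.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    class ContentionDemo {
        static final AtomicLong shared = new AtomicLong();  // every core fights over one location
        static final LongAdder striped = new LongAdder();   // per-thread cells, combined on read

        public static void main(String[] args) throws InterruptedException {
            run(() -> { for (int i = 0; i < 1_000_000; i++) shared.incrementAndGet(); }, "shared atomic");
            run(() -> { for (int i = 0; i < 1_000_000; i++) striped.increment(); },      "striped adder");
        }

        static void run(Runnable body, String label) throws InterruptedException {
            Thread[] ts = new Thread[8];
            long start = System.nanoTime();
            for (int i = 0; i < ts.length; i++) (ts[i] = new Thread(body)).start();
            for (Thread t : ts) t.join();
            System.out.printf("%s: %.1f ms%n", label, (System.nanoTime() - start) / 1e6);
        }
    }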

  20. Exim on Linux: collapse [Chart: kernel CPU time (milliseconds/message)]

  21. Challenges and Potential Solutions • Cache-aware design: data locality / working set becomes critical; compiler or runtime techniques to improve data reuse (see the tiling sketch below). • Stop multitasking: context switching breaks data locality; move from time sharing to space sharing. • "Macedonian Phalanx" many-core OS (马其顿方阵众核操作系统): a next-generation operating system for 1000-core processors.
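
As one example of the data-reuse point above, a standard cache-blocking (tiling) sketch for matrix multiply; the tile size T is a tunable placeholder, and C is assumed to be zero-initialized.

    // Process one T x T tile at a time so blocks of A, B and C stay cache-resident,
    // improving data reuse compared with the naive triple loop.
    class TiledMatMul {
        static void multiply(double[][] A, double[][] B, double[][] C, int n, int T) {
            for (int ii = 0; ii < n; ii += T)
                for (int kk = 0; kk < n; kk += T)
                    for (int jj = 0; jj < n; jj += T)
                        for (int i = ii; i < Math.min(ii + T, n); i++)
                            for (int k = kk; k < Math.min(kk + T, n); k++) {
                                double a = A[i][k];
                                for (int j = jj; j < Math.min(jj + T, n); j++)
                                    C[i][j] += a * B[k][j];
                            }
        }
    }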

  22. Thanks! For more information: C.L. Wang’s webpage: http://www.cs.hku.hk/~clwang/

  23. http://i.cs.hku.hk/~clwang/recruit2012.htm

  24. Multi-granularity Computation Migration [Chart: granularity (coarse to fine) vs. system scale / size of state (large to small): WAVNet Desktop Cloud, G-JavaMPI, JESSICA2, SOD]

  25. WAVNet: Live VM Migration over WAN • A P2P cloud with live VM migration over WAN: a "virtualized LAN" over the Internet. • High penetration via NAT hole punching: establish direct host-to-host connections, free from proxies, able to traverse most NATs. Zheming Xu, Sheng Di, Weida Zhang, Luwei Cheng, and Cho-Li Wang, "WAVNet: Wide-Area Network Virtualization Technique for Virtual Private Cloud," 2011 International Conference on Parallel Processing (ICPP2011).
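
A hedged sketch of the UDP hole-punching step that this kind of NAT traversal relies on (illustrative, not WAVNet's code): once a rendezvous server has told each peer the other's public address and port, both sides send probes at roughly the same time so that each NAT installs a mapping that admits the peer's traffic.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;

    class HolePunchSketch {
        // peerPublic is the peer's public address:port as reported by the rendezvous server.
        static void punch(DatagramSocket sock, InetSocketAddress peerPublic) throws Exception {
            byte[] probe = "punch".getBytes();
            for (int i = 0; i < 5; i++) {
                // outgoing probes open/refresh the mapping on our own NAT
                sock.send(new DatagramPacket(probe, probe.length, peerPublic));
                Thread.sleep(200);
            }
            byte[] buf = new byte[1500];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            sock.receive(in);   // once both sides have punched, direct traffic flows
            System.out.println("direct path established with " + in.getSocketAddress());
        }
    }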

  26. WAVNet: Experiments at Pacific Rim Sites IHEP, Beijing; AIST, Japan; SDSC, San Diego; Academia Sinica, Taiwan; SIAT, Shenzhen; Providence University, Taiwan; HKU, Hong Kong

  27. JESSICA2: Distributed Java Virtual Machine (Java-Enabled Single-System-Image Computing Architecture) [Diagram: a multithreaded Java program; thread migration of portable Java frames from a master JESSICA2 JVM to worker JESSICA2 JVMs, all running in JIT compiler mode]
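
For context, the programming model a distributed JVM targets: ordinary multithreaded Java with no cluster-specific API. Under JESSICA2 the runtime may transparently migrate such threads to worker nodes; the snippet below is ours, not from the project.

    // A plain multithreaded parallel sum; a single-system-image JVM can spread
    // these threads across cluster nodes without any change to the source.
    public class ParallelSum {
        static final int THREADS = 8;
        static final long[] partial = new long[THREADS];

        public static void main(String[] args) throws InterruptedException {
            Thread[] ts = new Thread[THREADS];
            for (int t = 0; t < THREADS; t++) {
                final int id = t;
                ts[t] = new Thread(() -> {
                    for (long i = id; i < 100_000_000L; i += THREADS) partial[id] += i;
                });
                ts[t].start();
            }
            long sum = 0;
            for (int t = 0; t < THREADS; t++) { ts[t].join(); sum += partial[t]; }
            System.out.println("sum = " + sum);
        }
    }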

  28. History and Roadmap of the JESSICA Project Past members: Chenggang Zhang, King Tin Lam, Ricky Ma, Kinson Chan. J1 and J2 received a total of 1,107 source code downloads. • JESSICA V1.0 (1996-1999): interpreter execution mode; JVM kernel modification (Kaffe JVM); global heap built on top of TreadMarks (lazy release consistency + homeless). • JESSICA V2.0 (2000-2006): JIT-compiler execution mode; JVM kernel modification; lazy release consistency + migrating-home protocol. • JESSICA V3.0 (2008-2010): built above the JVM (via JVMTI); supports a large object space. • JESSICA v4 (2010~): Japonica, automatic loop parallelization and speculative execution on GPU and multicore CPU; TrC-DC, a software transactional memory system on clusters with distributed clocks (not discussed).

  29. Stack-on-Demand (SOD) [Diagram: the top stack frame (local variables, program counter) of a mobile node's thread is shipped to a cloud node, where the stack is rebuilt in the method area and execution resumes; heap objects are (pre-)fetched from the mobile node on demand]

  30. Elastic Execution Model via SOD (a) "Remote method call" (b) Mimicking thread migration (c) "Task roaming": like a mobile agent roaming over the network or a workflow. With such flexible and composable execution paths, SOD enables agile and elastic exploitation of distributed resources (storage): a Big Data solution! Lightweight, portable, adaptable.

  31. eXCloud: Integrated Solution for Multi-granularity Migration [Diagram: stack-on-demand (SOD) shipping stack segments, partial heap and code from a small-footprint mobile client JVM (iOS); thread migration (JESSICA2) between JVMs running a multithreaded Java process on guest OSes; live migration of Xen VMs from an overloaded desktop PC with a Xen-aware host OS; a load balancer at the cloud service provider duplicates VM instances for scaling; communication over the Internet] Ricky K. K. Ma, King Tin Lam, Cho-Li Wang, "eXCloud: Transparent Runtime Support for Scaling Mobile Applications," 2011 IEEE International Conference on Cloud and Service Computing (CSC2011). (Best Paper Award)
