
Middleware Solutions for Data-Intensive (Scientific) Computing on Clouds


Presentation Transcript


  1. Middleware Solutions for Data-Intensive (Scientific) Computing on Clouds Gagan Agrawal Ohio State University (Joint Work with Tekin Bicer, David Chiu, Yu Su, ..)

  2. Motivation • Cloud Resources • Pay-as-you-go • Elasticity • Black boxes from a performance view-point • Scientific Data • Specialized formats, like NetCDF, HDF5, etc. • Very Large Scale

  3. Ongoing Work at Ohio State • MATE-EC2: Middleware for Data-Intensive Computing on EC2 • Alternative to Amazon Elastic MapReduce • Data Management Solutions for Scientific Datasets • Target NetCDF and HDF5 • Accelerating Data Mining Computations Using Accelerators • Resource Allocation Problems on Clouds

  4. MATE-EC2: Motivation • MATE – MapReduce with an Alternate API • MATE-EC2: Implementation for AWS Environments • Cloud resources are black boxes • Need for services and tools that can… • get the most out of cloud resources • help their users with easy APIs

  5. MATE vs. Map-Reduce Processing Structure • Reduction Object represents the intermediate state of the execution • Reduction function is commutative and associative • Sorting and grouping overheads are eliminated by the reduction function/object
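
  As a rough illustration of the reduction-object style (the class and member names below are invented for this sketch and are not the actual MATE/MATE-EC2 API), each input element is folded directly into a reduction object, and partial objects from different threads or nodes are combined with a commutative, associative merge, which is why no intermediate (key, value) pairs need to be sorted or grouped:

      #include <cstddef>
      #include <vector>

      // Illustrative reduction object for K-Means; the names are invented for this sketch.
      struct KMeansReduction {
          std::vector<double> sum;    // flattened k x dim per-cluster partial sums
          std::vector<long>   count;  // number of points assigned to each cluster
          size_t k, dim;

          KMeansReduction(size_t k_, size_t dim_)
              : sum(k_ * dim_, 0.0), count(k_, 0), k(k_), dim(dim_) {}

          // Local reduction: fold one point into the object; no (key, value) pairs are emitted.
          void accumulate(const double* point, const std::vector<double>& centroids) {
              size_t best = 0;
              double bestDist = -1.0;
              for (size_t c = 0; c < k; ++c) {
                  double d = 0.0;
                  for (size_t j = 0; j < dim; ++j) {
                      double diff = point[j] - centroids[c * dim + j];
                      d += diff * diff;
                  }
                  if (bestDist < 0.0 || d < bestDist) { bestDist = d; best = c; }
              }
              for (size_t j = 0; j < dim; ++j) sum[best * dim + j] += point[j];
              ++count[best];
          }

          // Global reduction: commutative, associative merge, so partial objects from
          // threads or slave nodes can be combined in any order without sorting or grouping.
          void merge(const KMeansReduction& other) {
              for (size_t i = 0; i < sum.size(); ++i) sum[i] += other.sum[i];
              for (size_t c = 0; c < k; ++c) count[c] += other.count[c];
          }
      };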

  6. MATE-EC2 Design • Data organization • Three levels: Buckets, Chunks and Units • Metadata information • Chunk Retrieval • Threaded Data Retrieval • Selective Job Assignment • Load Balancing and handling heterogeneity • Pooling mechanism
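
  A loose sketch of how the three-level data organization and its metadata could be represented; the struct and field names are guesses for illustration, not MATE-EC2's actual metadata format:

      #include <cstdint>
      #include <string>
      #include <vector>

      // Illustrative model of the Bucket / Chunk / Unit hierarchy described above.
      struct Unit  { uint64_t offset; uint64_t size; };            // smallest retrievable piece
      struct Chunk { uint64_t id;     std::vector<Unit> units; };  // unit of job assignment
      struct Bucket {                                              // one S3 data object
          std::string s3Key;
          std::vector<Chunk> chunks;
      };

      // Metadata consulted by the job scheduler when assigning chunks to slave nodes.
      struct DatasetMetadata {
          std::vector<Bucket> buckets;
          uint64_t chunkSizeBytes;   // e.g., 128 MB in the experiments below
      };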

  7. MATE-EC2 Processing Flow • Data chunks (C0 … Cn) reside in S3 data objects and are described by a metadata file • A job scheduler on the EC2 master node assigns chunks as jobs (e.g., C0, then C5) • An EC2 slave node requests a job from the master node, retrieves the chunk pieces with multiple retrieval threads and writes them into a buffer, passes the retrieved chunk to the computing layer for processing, and then requests another job
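
  The slave-side loop implied by this flow might look roughly like the sketch below; the Job struct, requestJobFromMaster, s3ReadRange, and processChunk are stand-ins invented for illustration (the real middleware talks to the EC2 master node and S3):

      #include <cstdint>
      #include <cstring>
      #include <thread>
      #include <vector>

      // Hypothetical stand-ins for the master node and S3 (not the real MATE-EC2 API).
      struct Job { int chunkId; uint64_t offset; uint64_t size; bool valid; };

      Job requestJobFromMaster() {                 // stub: pretend the scheduler holds two chunks
          static int next = 0;
          Job job{next, 0, 1u << 20, next < 2};
          ++next;
          return job;
      }
      void s3ReadRange(int /*chunk*/, uint64_t /*off*/, uint64_t len, char* dst) {
          std::memset(dst, 0, len);                // stub: a real reader would issue an S3 range GET
      }
      void processChunk(const char*, uint64_t) {}  // stub: computing layer / local reduction

      // Slave-node loop sketched from the flow above: request a job, retrieve the chunk
      // pieces with several threads writing into one buffer, process the chunk, repeat.
      void slaveLoop(int numRetrievalThreads) {
          for (Job job = requestJobFromMaster(); job.valid; job = requestJobFromMaster()) {
              std::vector<char> buffer(job.size);
              std::vector<std::thread> readers;
              uint64_t piece = job.size / numRetrievalThreads;
              for (int t = 0; t < numRetrievalThreads; ++t) {
                  uint64_t off = static_cast<uint64_t>(t) * piece;
                  uint64_t len = (t == numRetrievalThreads - 1) ? job.size - off : piece;
                  readers.emplace_back([&, off, len] {
                      s3ReadRange(job.chunkId, job.offset + off, len, buffer.data() + off);
                  });
              }
              for (auto& r : readers) r.join();
              processChunk(buffer.data(), buffer.size());   // pass chunk to the computing layer
          }
      }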

  8. Experiments • Goals: • Finding the most suitable settings for AWS • Performance of MATE-EC2 on heterogeneous and homogeneous environments • Performance comparison of MATE-EC2 and Map-Reduce • Applications: KMeans and PCA • Used Resources: • 4 Large EC2 instances for processing, 1 Large instance for the Master • 16 Data objects on S3 (8.2 GB total dataset for both applications)

  9. Different Data Chunk Sizes • KMeans • 16 retrieval threads • Performance increase, 8 MB vs. other chunk sizes: 1.13x to 1.30x • 1-thread vs. 16-thread versions: 1.24x to 1.81x

  10. Different Numbers of Threads • 128 MB chunk size • Performance increase in the figure (KMeans): 1.37x to 1.90x • Performance increase for PCA: 1.38x to 1.71x

  11. Selective Job Assignment • Performance increase in the figure (KMeans): 1.01x to 1.14x • For PCA: 1.19x to 1.68x

  12. Heterogeneous Environments • L: Large instances, S: Small instances • 128 MB chunk size • Overheads in the figure (KMeans): under 1% • Overheads for PCA: 1.1% to 11.7%

  13. MATE-EC2 vs. Map-Reduce • Scalability (MATE) • Efficiency: 90% • Scalability (MR) • Efficiency: 74% • Speedups, MATE vs. MR: 3.54x to 4.58x

  14. MATE-EC2: Continuing Directions • Cloud Bursting • Cloud as a Complement or On-Demand Alternative to Local Resources • Autotuning for a New Cloud Environment • Data storage can be a black box • Data-Intensive Applications on Clusters of GPUs • Programming Model, System Design

  15. Outline • MATE-EC2: Middleware for Data-Intensive Computing on EC2 • Alternative to Amazon Elastic MapReduce • Data Management Solutions for Scientific Datasets • Target NetCDF and HDF5 • Accelerating Data Mining Computations Using Accelerators • Resource Allocation Problems on Clouds

  16. Data Management: Motivation • Datasets are becoming extremely large • Scientific datasets are in formats like NetCDF and HDF5 • Existing database solutions are not scalable • Can’t help with native data formats

  17. Data Management: Use Scenarios • Data Dissemination Efforts • Support User-Defined Subsetting and Data Aggregation • Implementing Data Processing Applications • Higher-level API than NetCDF/HDF5 libraries • Visualization Tools (ParaView etc.) • Data format Conversion on Large Datasets

  18. Initial Prototype: Data Subsetting With a Relational View on NetCDF • Parse the SQL expression • Consult the metadata for the NetCDF dataset • Filter dimensions • Generate data access code • Partition tasks and assign them to slave processes • Execute the query • Filter on variable values
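
  The "generate data access code" and filtering steps ultimately boil down to hyperslab reads over the NetCDF file; a minimal hand-written equivalent using the standard NetCDF C API is sketched below. The file name, variable name, and index ranges are made up for illustration, and this is not the code the prototype actually generates:

      #include <netcdf.h>
      #include <cstdio>
      #include <vector>

      // Minimal example of dimension-based subsetting with the NetCDF C API.
      // File name, variable name, and index ranges are illustrative only.
      int main() {
          int ncid = 0, varid = 0, rc = 0;
          if ((rc = nc_open("gcrm_sample.nc", NC_NOWRITE, &ncid)) != NC_NOERR) {
              std::fprintf(stderr, "nc_open: %s\n", nc_strerror(rc)); return 1;
          }
          if ((rc = nc_inq_varid(ncid, "temperature", &varid)) != NC_NOERR) {
              std::fprintf(stderr, "nc_inq_varid: %s\n", nc_strerror(rc)); return 1;
          }
          // Dimension-based filtering: read only time steps 0..9 and layers 0..3,
          // expressed as a hyperslab (start/count) instead of scanning the whole variable.
          size_t start[3] = {0, 0, 0};        // time, layer, cell
          size_t count[3] = {10, 4, 1000};
          std::vector<float> buf(10 * 4 * 1000);
          if ((rc = nc_get_vara_float(ncid, varid, start, count, buf.data())) != NC_NOERR) {
              std::fprintf(stderr, "nc_get_vara_float: %s\n", nc_strerror(rc));
          }
          // Value-based filtering would scan buf here, e.g. keep only temperature > 273.15.
          nc_close(ncid);
          return 0;
      }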

  19. Metadata descriptor • Dataset Storage Description • Lists the nodes and the directories where the data resides • Dataset Layout Description • Taken from the header of each NetCDF file • Naturally included in the NetCDF dataset, which saves the effort of generating the metadata separately • Describes the layout of each NetCDF file
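
  The two parts of the descriptor could be modeled along these lines; the struct and field names are illustrative only, not the prototype's actual format:

      #include <cstddef>
      #include <string>
      #include <vector>

      // Rough shape of the metadata descriptor described above.
      struct StorageDescription {                 // Dataset Storage Description: where data lives
          std::string node;                       // host name
          std::vector<std::string> directories;   // directories holding NetCDF files on that node
      };

      struct VariableLayout {                     // taken directly from each NetCDF file header
          std::string name;                       // e.g. "temperature"
          std::vector<std::string> dimensions;    // e.g. {"time", "layers", "cells"}
          std::vector<size_t> shape;
      };

      struct NetcdfDatasetMetadata {
          std::vector<StorageDescription> storage;       // Dataset Storage Description
          std::vector<VariableLayout>     layoutPerFile; // Dataset Layout Description
      };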

  20. Pre-filter and Post-filter • Pre-filter: • Takes the SQL grammar and metadata as input • Filters based on the dimensions of the variable • Supports both direct dimensions and coordinate variables • Post-filter: • Filters based on the variable's values
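
  A minimal sketch of the two filtering stages, assuming the query has already been parsed into per-dimension index ranges and a value predicate; the types and function names are invented for this example:

      #include <algorithm>
      #include <cstddef>
      #include <vector>

      // Illustrative pre-/post-filter split; not the prototype's actual API.
      struct DimRange { size_t lo, hi; };   // inclusive index range along one dimension

      // Pre-filter: intersect the query's dimension predicates with the variable's extents
      // using only metadata, so the region to be read shrinks before any I/O happens.
      std::vector<DimRange> prefilter(const std::vector<DimRange>& varExtent,
                                      const std::vector<DimRange>& queryRanges) {
          std::vector<DimRange> out(varExtent.size());
          for (size_t d = 0; d < varExtent.size(); ++d) {
              out[d].lo = std::max(varExtent[d].lo, queryRanges[d].lo);
              out[d].hi = std::min(varExtent[d].hi, queryRanges[d].hi);
          }
          return out;
      }

      // Post-filter: applied after the subset has been read, on the variable's values.
      std::vector<float> postfilter(const std::vector<float>& values, float threshold) {
          std::vector<float> kept;
          for (float v : values)
              if (v > threshold) kept.push_back(v);   // e.g. WHERE temperature > threshold
          return kept;
      }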

  21. Query Partition • Partition the current query into several sub-queries and assign each sub-query to a slave process • Two partition criteria: • Consider the contiguity of data in memory • Consider data aggregation (future work)
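
  One simple way to honor the contiguity criterion is to split the query along its outermost (slowest-varying) dimension, as in this sketch; the actual partitioning logic in the prototype may differ:

      #include <cstddef>
      #include <vector>

      // Illustrative partitioning of a subsetting query across slave processes.
      // Splitting along the outermost dimension keeps each sub-query's data contiguous.
      struct SubQuery { size_t start0, count0; };   // range along the outermost dimension

      std::vector<SubQuery> partitionQuery(size_t outerStart, size_t outerCount, size_t numSlaves) {
          std::vector<SubQuery> parts;
          size_t base = outerCount / numSlaves, extra = outerCount % numSlaves;
          size_t pos = outerStart;
          for (size_t p = 0; p < numSlaves; ++p) {
              size_t len = base + (p < extra ? 1 : 0);   // spread the remainder evenly
              if (len > 0) parts.push_back({pos, len});
              pos += len;
          }
          return parts;
      }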

  22. Experiment Setup • Application: • Global Cloud Resolving Model and Data (GCRM) • Environment: • Glenn System in Ohio Supercomputer Center

  23. SQL queries

  24. Scalability with different data sizes • 8 processes • Execution time scaled almost linearly within each query

  25. Time improvement from using the pre-filter • 4 processes • SQL5 (queries only 1% of the data) • The pre-filter effectively decreases the query size and improves performance

  26. Scalability with Increasing No. of Sources • 4 GB dataset • SQL1 (full scan of the data table) • Execution time scaled almost linearly

  27. Data Management: Continuing Work • Similar Prototype with HDF5 under Implementation • Consider processing, not just subsetting/aggregation • Map-Reduce like Processing for NetCDF/HDF5 datasets? • Consider Format Conversion for Existing Tools

  28. Outline • MATE-EC2: Middleware for Data-Intensive Computing on EC2 • Alternative to Amazon Elastic MapReduce • Data Management Solutions for Scientific Datasets • Target NetCDF and HDF5 • Accelerating Data Mining Computations • Resource Allocation Problems on Clouds

  29. System for Mapping to Heterogeneous Configurations • User input: simple C code with annotations, written by the application developer • Compilation phase: a code generator produces GPU code for CUDA and maps the computation to the CPU and GPU through a multi-core middleware API • Run-time system: worker thread creation and management, and dynamic work distribution between CPU and GPU
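
  As an illustration of what the annotated user input might look like, the sketch below marks a K-Means assignment loop for heterogeneous execution; the pragma syntax is entirely invented here, since the talk does not show the actual annotation language:

      #include <stddef.h>

      // Hypothetical example of annotated C input; the pragma below is invented for
      // illustration and is not the system's real annotation syntax.
      #pragma heterogeneous reduction(kmeans_assign) combine(sum:+, count:+)
      void kmeans_assign(const float* points, size_t n, size_t dim,
                         const float* centroids, size_t k,
                         float* sum /* k*dim */, long* count /* k */) {
          for (size_t i = 0; i < n; ++i) {                 // data-parallel over points
              size_t best = 0;
              float bestDist = -1.0f;
              for (size_t c = 0; c < k; ++c) {
                  float d = 0.0f;
                  for (size_t j = 0; j < dim; ++j) {
                      float diff = points[i * dim + j] - centroids[c * dim + j];
                      d += diff * diff;
                  }
                  if (bestDist < 0.0f || d < bestDist) { bestDist = d; best = c; }
              }
              for (size_t j = 0; j < dim; ++j) sum[best * dim + j] += points[i * dim + j];
              ++count[best];
          }
      }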

  30. K-Means on GPU + Multi-Core CPUs

  31. Summary • Dataset sizes are increasing • Clouds add many new challenges • Data processing on clouds poses many open problems
