
MapReduce over snapshots


Presentation Transcript


  1. MapReduce over snapshots HBASE-8369 Enis Soztutar Enis [at] apache [dot] org @enissoz

  2. About Me • In the Hadoop space since 2007 • Committer and PMC member in Apache HBase and Hadoop • Working at Hortonworks as a member of Technical Staff • Twitter: @enissoz

  3. Snapshots • Currently a snapshot is a bunch of reference files together with some metadata • A table's snapshot can contain • Table descriptor • List of regions • References to files in the regions • References to WALs for region servers • Current snapshot impl is flush based • Forces a flush on all regions, so that in-memory data is written to disk
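
For reference, taking such a flush-based snapshot is a one-liner against the admin API. The sketch below uses the HBaseAdmin client of that era; the table name "usertable" and snapshot name "usertable_snapshot" are only examples.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class TakeSnapshot {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
          // Flush-based snapshot: regions are flushed first, so in-memory
          // (MemStore) data is written to HFiles before references are taken.
          admin.snapshot("usertable_snapshot", "usertable");
        } finally {
          admin.close();
        }
      }
    }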

  4. MR over Snapshots • Idea is to do scans on the client side, bypassing region servers • Use snapshots since they are immutable • Similar to short-circuit HDFS reads • TableSnapshotInputFormat works similarly to TableInputFormat • TableMapReduceUtil methods to configure the job
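
A minimal sketch of how such a job could be wired up through TableMapReduceUtil, assuming a snapshot named "usertable_snapshot" and a scratch restore directory; the row-counting mapper is purely illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

    public class SnapshotRowCounter {

      // Counts rows read directly from the snapshot's HFiles.
      static class RowCountMapper extends TableMapper<NullWritable, NullWritable> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context) {
          context.getCounter("snapshot", "rows").increment(1);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "SnapshotRowCounter");
        job.setJarByClass(SnapshotRowCounter.class);

        Scan scan = new Scan();      // add start/stop rows to prune regions if needed
        scan.setCacheBlocks(false);  // block cache does not help a one-shot MR scan

        // Sets up TableSnapshotInputFormat: the snapshot is restored into the
        // given temp directory and one split is created per region in scan range.
        TableMapReduceUtil.initTableSnapshotMapperJob(
            "usertable_snapshot",               // snapshot to read
            scan,
            RowCountMapper.class,
            NullWritable.class,                 // mapper output key class
            NullWritable.class,                 // mapper output value class
            job,
            true,                               // add HBase dependency jars
            new Path("/tmp/snapshot_restore")); // temp restore dir on the same FS as hbase.rootdir

        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }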

  5. Deployment Options HBase online • Take snapshot while HBase is running • Run MR job over the snapshot HBase offline • Take snapshot while HBase is running • Export snapshot using ExportSnapshot to a different HDFS • Run MR job over the snapshot with or without HBase running
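
For the offline route, the ExportSnapshot tool (itself a MapReduce job) copies the snapshot metadata and referenced files to the target filesystem. A sketch, assuming a backup cluster NameNode at backup-nn and the example snapshot name from before:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.snapshot.ExportSnapshot;
    import org.apache.hadoop.util.ToolRunner;

    public class ExportSnapshotToBackupCluster {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Copies the snapshot's metadata and referenced HFiles/WALs to the
        // target HDFS, where an MR job can read it without a live HBase.
        int exitCode = ToolRunner.run(conf, new ExportSnapshot(), new String[] {
            "-snapshot", "usertable_snapshot",
            "-copy-to", "hdfs://backup-nn:8020/hbase",
            "-mappers", "16"
        });
        System.exit(exitCode);
      }
    }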

  6. TableSnapshotInputFormat • Gets a Scan representing the query • Restores the snapshot to a temporary directory • For each region in the snapshot: • Determine whether the region should be scanned (falls between scan start row and stop row) • Create one split per region in the scan range (= # of map tasks) • Each RecordReader will open the region (HRegion) as in HRegionServer • An internal RegionScanner is used for running the scan
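
The region-pruning decision can be pictured with the following sketch; it is not the actual TableSnapshotInputFormat code, just the overlap check it implies (empty keys mean unbounded):

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RegionPruningSketch {

      // Keeps only the regions whose [startKey, endKey) range overlaps the
      // scan's [startRow, stopRow) range; each surviving region becomes one
      // input split, and therefore one map task.
      static List<HRegionInfo> selectRegions(List<HRegionInfo> regions, Scan scan) {
        List<HRegionInfo> selected = new ArrayList<HRegionInfo>();
        byte[] scanStart = scan.getStartRow();
        byte[] scanStop = scan.getStopRow();
        for (HRegionInfo region : regions) {
          byte[] regionStart = region.getStartKey();
          byte[] regionEnd = region.getEndKey();
          boolean beginsBeforeScanStops = scanStop.length == 0
              || regionStart.length == 0
              || Bytes.compareTo(regionStart, scanStop) < 0;
          boolean endsAfterScanStarts = scanStart.length == 0
              || regionEnd.length == 0
              || Bytes.compareTo(scanStart, regionEnd) < 0;
          if (beginsBeforeScanStops && endsAfterScanStarts) {
            selected.add(region);
          }
        }
        return selected;
      }
    }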

  7. API

  8. Timeline • Will (hopefully) be committed to trunk in the next week or so • Interest in bringing this to the 0.94 and 0.96 code bases as well • Will come in HDP-2.1, which will be based on the 0.96 line

  9. Security Aspects • The HBase user owns the files in the filesystem • Snapshot files are also owned by the HBase user • The MapReduce job should be able to read the files in the snapshot + actual data files • HDFS only has POSIX-like perms based on user/group/other • The user running the MR job has to be either the HBase user, or have group perms • HDFS does not have ACLs, so there is no easy way to grant read access at the filesystem layer • Idea: similar to the current short-circuit impl, we can implement an FD transfer • User will submit jobs under her own user credentials • Ask HBase daemons to open the files, and pass a handle / token

  10. Performance ScanTest: • Scan: open a scanner, do a full table scan • SnapshotScan: open a client-side scanner, do a full table scan • ScanMR: parallel full table scan from MR • SnapshotScanMR: parallel full table scan from MR over the snapshot • 8 region servers, 6 disks each • HBase trunk • Hadoop 2.2 (HDP-2.0.7.0-12) • Load data with IntegrationTestBulkLoad • Evenly distributed rows, created as bulk-loaded HFiles. 3 column families • # of store files per region varies: 3, 6, 9, and 12 (1, 2, 3, 4 files per store) • Data sizes: 6.6G, 13.2G, 19.8G, 26.4G

  11. Scan speed

  12. API • We do not want to limit snapshot scanning only to MapReduce • Allow client-side scanners over snapshot files

  13. ResultScanner is the main scan API

  14. API (caution: not final yet)
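
Since the API was still being finalized at the time, the following is only a sketch of what a client-side snapshot scan looks like through the ResultScanner interface, using the TableSnapshotScanner that came out of this work; the snapshot name and restore directory are examples.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.TableSnapshotScanner;

    public class ClientSideSnapshotScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Scan scan = new Scan();  // optionally set start/stop rows, families, filters

        // Reads the snapshot's HFiles directly from HDFS; no region server is involved.
        // The restore directory holds the temporary region layout for the scan.
        ResultScanner scanner = new TableSnapshotScanner(
            conf, new Path("/tmp/snapshot_restore"), "usertable_snapshot", scan);
        try {
          for (Result result : scanner) {
            System.out.println(result);  // process each row on the client
          }
        } finally {
          scanner.close();
        }
      }
    }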

  15. To the future and beyond • HBASE-8691 High-Throughput Streaming Scan API • Can we bypass region servers without taking snapshots? • Bypass memstore data, or stream memstore data, but read directly from HFiles • Secure reading from snapshots • Keep up with the updates at https://issues.apache.org/jira/browse/HBASE-8369

  16. Thanks Questions? Enis Söztutar enis [at] apache [dot] org @enissoz
