Overview of Spark project


  1. Overview of Spark project
     Presented by Yin Zhu (yinz@ust.hk)
     Materials from http://spark-project.org/documentation/ and Hadoop in Practice by A. Holmes
     My demo code: https://bitbucket.org/blackswift/spark-example
     25 March 2013

  2. Outline
     • Review of MapReduce (15')
     • Going through the Spark (NSDI '12) slides (30')
     • Demo (15')

  3. Review of MapReduce
     • PageRank implemented in Scala and Hadoop
     • Why PageRank?
       • More complicated than the "hello world" WordCount
       • Widely used in search engines, with very large-scale input (the whole web graph!)
       • An iterative algorithm (the typical style of most numerical algorithms for data mining)

  4. PageRank update example (graph figure omitted from the transcript): new score = 0.15 + 0.85 × 0.5 = 0.575
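     The arithmetic on slide 4 is the standard damped PageRank update: a page's new score is (1 − d) plus d times the sum of the contributions arriving over its in-links, with damping factor d = 0.85. A minimal sketch in Scala (the variable names are mine, not from the slide):

       val d = 0.85
       val incoming = 0.5                     // sum of contributions from in-links in the example
       val newScore = (1 - d) + d * incoming  // 0.15 + 0.85 * 0.5 = 0.575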

  5. A functional implementation

     // Scala PageRank with the Map and Reduce steps made explicit
     def pagerank(links: Array[(UrlId, Array[UrlId])], numIters: Int): Map[UrlId, Double] = {
       var ranks = links.map { case (url, _) => (url, 1.0) }.toMap  // init: each node has rank 1.0
       for (iter <- 1 to numIters) {
         val contrib =
           links
             .flatMap { case (out_url, in_urls) =>                  // Map step
               val score = ranks(out_url) / in_urls.size            // the score each linked page receives
               in_urls.map(in_url => (in_url, score))
             }
             .groupBy(_._1)                                         // Reduce step: group the (url, score) pairs by url
             .map { case (url, scores) =>                           // sum the scores for each unique url
               (url, scores.foldLeft(0.0)((sum, p) => sum + p._2))
             }
         ranks = ranks.map { case (url, rank) =>
           (url, if (contrib.contains(url)) 0.85 * contrib(url) + 0.15 else rank)
         }
       }
       ranks
     }
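     The function above has no Spark dependency, so it can be checked locally. A minimal usage sketch, assuming UrlId is an Int alias (the alias and the toy graph are mine, not from the demo code):

       type UrlId = Int

       // a toy 3-page graph: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0
       val links: Array[(UrlId, Array[UrlId])] = Array(
         (0, Array(1, 2)),
         (1, Array(2)),
         (2, Array(0))
       )
       val ranks = pagerank(links, 20)
       ranks.foreach { case (url, rank) => println(url + ": " + rank) }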

  6. Hadoop implementation • https://github.com/alexholmes/hadoop-book/tree/master/src/main/java/com/manning/hip/ch7/pagerank

  7. Hadoop/MapReduce implementation
     • Fault tolerance: the result after each iteration is saved to disk
     • Speed/disk I/O: disk I/O is proportional to the number of iterations, yet ideally the link graph would be loaded only once

  8. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing
     Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica
     UC Berkeley

  9. Motivation
     • MapReduce greatly simplified "big data" analysis on large, unreliable clusters
     • But as soon as it got popular, users wanted more:
       • More complex, multi-stage applications (e.g., iterative machine learning & graph processing)
       • More interactive ad-hoc queries
     • Response: specialized frameworks for some of these apps (e.g., Pregel for graph processing)

  10. Motivation
     • Complex apps and interactive queries both need one thing that MapReduce lacks: efficient primitives for data sharing
     • In MapReduce, the only way to share data across jobs is stable storage → slow!

  11. Examples
     [Diagram: an iterative job alternates HDFS read → iter. 1 → HDFS write → HDFS read → iter. 2 → …, while queries 1-3 each re-read the same input from HDFS to produce results 1-3.]
     Slow due to replication and disk I/O, but necessary for fault tolerance.

  12. Goal: In-Memory Data Sharing
     [Diagram: the same iterative job and queries, but with one-time processing of the input and intermediate data shared in memory.]
     10-100× faster than network/disk, but how do we get fault tolerance?

  13. Challenge • How to design a distributed memory abstraction that is both fault-tolerant and efficient?

  14. Solution: Resilient Distributed Datasets (RDDs)
     • Restricted form of distributed shared memory
       • Immutable, partitioned collections of records
       • Can only be built through coarse-grained deterministic transformations (map, filter, join, …)
     • Efficient fault recovery using lineage (see the sketch below)
       • Log one operation to apply to many elements
       • Recompute lost partitions on failure
       • No cost if nothing fails
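     A minimal sketch of what lineage looks like in practice, assuming a running SparkContext named sc; toDebugString is the RDD method that prints the chain of parent RDDs (its exact output varies across Spark versions):

       val messages = sc.textFile("hdfs://...")
         .filter(_.contains("error"))
         .map(_.split('\t')(2))

       // prints the lineage (MappedRDD <- FilteredRDD <- HadoopRDD):
       // exactly the graph Spark replays to recompute a lost partition
       println(messages.toDebugString)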

  15. Three core concepts of RDD (a combined sketch follows below)
     • Transformations: define a new RDD from an existing RDD
     • Cache and Partitioner: put the dataset into memory or other persistent media, and specify the locations of the sub-datasets
     • Actions: carry out the actual computation
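     A minimal sketch tying the three concepts together, assuming a running SparkContext named sc (the file path and partition count are placeholders):

       import org.apache.spark.HashPartitioner   // was `spark.HashPartitioner` in 2013-era Spark

       val pairs = sc.textFile("hdfs://...")
         .map(line => (line.split('\t')(0), 1))  // transformation: defines a new RDD, computes nothing yet

       val partitioned = pairs
         .partitionBy(new HashPartitioner(8))    // partitioner: fixes the layout of the sub-datasets
         .cache()                                // cache: keep the RDD in memory once computed

       val total = partitioned.count()           // action: triggers the actual computation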

  16. RDD and its lazy transformations
     • RDD[T]: a sequence of objects of type T
     • Transformations are lazy: https://github.com/mesos/spark/blob/master/core/src/main/scala/spark/PairRDDFunctions.scala

  17. groupBy and groupByKey in Spark's source:

     /** Return an RDD of grouped items. */
     def groupBy[K: ClassManifest](f: T => K, p: Partitioner): RDD[(K, Seq[T])] = {
       val cleanF = sc.clean(f)
       this.map(t => (cleanF(t), t)).groupByKey(p)
     }

     /**
      * Group the values for each key in the RDD into a single sequence. Allows controlling the
      * partitioning of the resulting key-value pair RDD by passing a Partitioner.
      */
     def groupByKey(partitioner: Partitioner): RDD[(K, Seq[V])] = {
       def createCombiner(v: V) = ArrayBuffer(v)
       def mergeValue(buf: ArrayBuffer[V], v: V) = buf += v
       def mergeCombiners(b1: ArrayBuffer[V], b2: ArrayBuffer[V]) = b1 ++= b2
       val bufs = combineByKey[ArrayBuffer[V]](
         createCombiner _, mergeValue _, mergeCombiners _, partitioner)
       bufs.asInstanceOf[RDD[(K, Seq[V])]]
     }
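     A quick usage sketch of groupBy (assuming a SparkContext named sc):

       val nums = sc.parallelize(1 to 10)
       val byParity = nums.groupBy(n => n % 2)  // RDD[(Int, Seq[Int])], built lazily
       byParity.collect().foreach(println)      // e.g. (0, ...evens...) and (1, ...odds...)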

  18. Actions carry out the actual computation: see runJob in https://github.com/mesos/spark/blob/master/core/src/main/scala/spark/SparkContext.scala

  19. Example

       lines = spark.textFile("hdfs://...")
       errors = lines.filter(_.startsWith("ERROR"))
       messages = errors.map(_.split('\t')(2))
       messages.persist()  // or .cache()

       messages.filter(_.contains("foo")).count
       messages.filter(_.contains("bar")).count

  20. Task Scheduler for actions (a code sketch of stage boundaries follows below)
     • Dryad-like DAGs
     • Pipelines functions within a stage
     • Locality & data-reuse aware
     • Partitioning-aware, to avoid shuffles
     [Diagram: a DAG of RDDs A-G; A → B via groupBy forms Stage 1; C → D via map, E, and a join form Stage 2; a union forms Stage 3; cached data partitions are marked.]
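     A minimal sketch of what becomes one stage versus two, assuming a SparkContext named sc:

       val pairs = sc.textFile("hdfs://...")
         .flatMap(_.split(" "))
         .map(word => (word, 1))            // flatMap and map are pipelined into a single stage

       val counts = pairs.reduceByKey(_ + _) // needs a shuffle, so the scheduler cuts a stage boundary here

       counts.collect()                      // only this action makes the scheduler build and run the DAG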

  21. RDD Recovery
     [Diagram: the in-memory data-sharing picture from slide 12, revisited to show how lost data is recovered.]

  22. Fault Recovery
     • RDDs track the graph of transformations that built them (their lineage) to rebuild lost data
     E.g.:
       messages = textFile(...).filter(_.contains("error")).map(_.split('\t')(2))
     Lineage: HadoopRDD (path = hdfs://...) → FilteredRDD (func = _.contains(...)) → MappedRDD (func = _.split(...))

  23. Fault Recovery Results
     [Chart: iteration times, with a marker where the failure happens.]

  24. Generality of RDDs
     • Despite their restrictions, RDDs can express surprisingly many parallel algorithms
       • These naturally apply the same operation to many items
     • Unify many current programming models
       • Data flow models: MapReduce, Dryad, SQL, …
       • Specialized models for iterative apps: BSP (Pregel), iterative MapReduce (HaLoop), bulk incremental, …
     • Support new apps that these models don't

  25. Tradeoff Space
     [Chart: granularity of updates (fine ↔ coarse) vs. write throughput (low ↔ high). Fine-grained K-V stores, databases, and RAMCloud are best for transactional workloads and are bounded by network bandwidth; coarse-grained HDFS and RDDs are best for batch workloads and are bounded by memory bandwidth.]

  26. Outline
     • Spark programming interface
     • Implementation
     • Demo
     • How people are using Spark

  27. Spark Programming Interface
     • DryadLINQ-like API in the Scala language
     • Usable interactively from the Scala interpreter
     • Provides:
       • Resilient distributed datasets (RDDs)
       • Operations on RDDs: transformations (build new RDDs), actions (compute and output results)
       • Control of each RDD's partitioning (layout across nodes) and persistence (storage in RAM, on disk, etc.)

  28. Example: Log Mining
     Load error messages from a log into memory, then interactively search for various patterns:

       lines = spark.textFile("hdfs://...")         // base RDD
       errors = lines.filter(_.startsWith("ERROR")) // transformed RDD
       messages = errors.map(_.split('\t')(2))
       messages.persist()

       messages.filter(_.contains("foo")).count     // actions: the master ships tasks to workers,
       messages.filter(_.contains("bar")).count     // which cache blocks of messages and return results

     [Diagram: a master sends tasks to three workers; each worker builds and caches one block of messages (Msgs. 1-3) from input blocks 1-3 and returns results.]
     Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data); scaled to 1 TB of data in 5-7 sec (vs 170 sec for on-disk data).

  29. Example: PageRank
     1. Start each page with a rank of 1
     2. On each iteration, update each page's rank to Σ_{i ∈ neighbors} rank_i / |neighbors_i|

       links = // RDD of (url, neighbors) pairs
       ranks = // RDD of (url, rank) pairs
       for (i <- 1 to ITERATIONS) {
         ranks = links.join(ranks).flatMap {
           case (url, (links, rank)) =>
             links.map(dest => (dest, rank / links.size))
         }.reduceByKey(_ + _)
       }
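     The slide's loop leaves out the damping term. A sketch of the commonly used damped variant, consistent with the 0.15/0.85 constants on slide 4 (this completion is mine, not from the slide):

       for (i <- 1 to ITERATIONS) {
         ranks = links.join(ranks).flatMap {
           case (url, (neighbors, rank)) =>
             neighbors.map(dest => (dest, rank / neighbors.size))
         }.reduceByKey(_ + _)
          .mapValues(sum => 0.15 + 0.85 * sum)
       }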

  30. Optimizing Placement
     • links and ranks are repeatedly joined
     • Can co-partition them (e.g., hash both on URL) to avoid shuffles
     • Can also use app knowledge, e.g., hash on DNS name (a hypothetical sketch of such a partitioner follows below):

       links = links.partitionBy(new URLPartitioner())

     [Diagram: Links (url, neighbors) and Ranks0 (url, rank) are joined into Contribs0, reduced into Ranks1, joined again into the next contribs, reduced into Ranks2, and so on.]
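     The URLPartitioner on the slide is only named, never shown. A hypothetical sketch following the slide's "hash on DNS name" hint (the class body and constructor argument are mine):

       import java.net.URL
       import org.apache.spark.Partitioner   // was `spark.Partitioner` in 2013-era Spark

       class URLPartitioner(partitions: Int) extends Partitioner {
         def numPartitions: Int = partitions
         def getPartition(key: Any): Int = {
           val host = new URL(key.toString).getHost                   // co-locate pages from the same site
           ((host.hashCode % partitions) + partitions) % partitions   // non-negative partition index
         }
         // equal partitioners let Spark skip the shuffle on co-partitioned joins
         override def equals(other: Any): Boolean = other match {
           case p: URLPartitioner => p.numPartitions == numPartitions
           case _ => false
         }
       }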

  31. PageRank Performance

  32. Implementation
     • Runs on Mesos [NSDI '11] to share clusters with Hadoop
     • Can read from any Hadoop input source (HDFS, S3, …)
     • No changes to the Scala language or compiler
     • Reflection + bytecode analysis to correctly ship code
     • www.spark-project.org
     [Diagram: Spark, Hadoop, MPI, and other frameworks run side by side on Mesos across the cluster's nodes.]

  33. Behavior with Insufficient RAM

  34. Scalability
     [Charts: scaling results for Logistic Regression and K-Means.]

  35. Breaking Down the Speedup

  36. Spark Operations

  37. Demo

  38. Open Source Community
     • 15 contributors, 5+ companies using Spark, 3+ application projects at Berkeley
     • User applications:
       • Data mining 40× faster than Hadoop (Conviva)
       • Exploratory log analysis (Foursquare)
       • Traffic prediction via EM (Mobile Millennium)
       • Twitter spam classification (Monarch)
       • DNA sequence analysis (SNAP)
       • …

  39. Related Work
     • RAMCloud, Piccolo, GraphLab, parallel DBs: fine-grained writes requiring replication for resilience
     • Pregel, iterative MapReduce: specialized models; can't run arbitrary / ad-hoc queries
     • DryadLINQ, FlumeJava: language-integrated "distributed dataset" API, but cannot share datasets efficiently across queries
     • Nectar [OSDI '10]: automatic expression caching, but over a distributed FS
     • PacMan [NSDI '12]: memory cache for HDFS, but writes still go to network/disk

  40. Conclusion
     • RDDs offer a simple and efficient programming model for a broad range of applications
     • They leverage the coarse-grained nature of many parallel algorithms for low-overhead recovery
     • Try it out at www.spark-project.org
