
Spark

Spark: Fast, Interactive, Language-Integrated Cluster Computing. Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica. UC Berkeley.


Presentation Transcript


  1. Spark Fast, Interactive, Language-Integrated Cluster Computing Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael Franklin, Scott Shenker, Ion Stoica UC Berkeley

  2. Background • MapReduce and its variants greatly simplified big data analytics by hiding scaling and faults • However, these systems provide a restricted programming model • Can we design similarly powerful abstractions for a broader class of applications?

  3. Motivation • Most current cluster programming models are based on acyclic data flow from stable storage to stable storage [Diagram: input flowing through map and reduce stages to output]

  4. Motivation • Most current cluster programming models are based on acyclic data flow from stable storage to stable storage • Benefits of data flow: runtime can decide where to run tasks and can automatically recover from failures [Diagram: input flowing through map and reduce stages to output]

  5. Motivation • Acyclic data flow is inefficient for applications that repeatedly reuse a working set of data: • Iterative algorithms (machine learning, graphs) • Interactive data mining tools (R, Excel, Python) • With current frameworks, apps reload data from stable storage on each query

  6. Spark Goal • Efficiently support apps with working sets • Let them keep data in memory • Retain the attractive properties of MapReduce: • Fault tolerance (for crashes & stragglers) • Data locality • Scalability Solution: extend data flow model with “resilient distributed datasets” (RDDs)

  7. Outline • Spark programming model • Applications • Implementation • Demo

  8. Programming Model • Resilient distributed datasets (RDDs) • Immutable, partitioned collections of objects • Created through parallel transformations (map, filter, groupBy, join, …) on data in stable storage • Can be cached for efficient reuse • Actions on RDDs • count, reduce, collect, save, …
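The lazy-transformation / eager-action split described on slide 8 can be mimicked with a small local sketch. This is a Python illustration only (plain lists standing in for distributed partitions); `LocalRDD` and its methods are hypothetical names, not Spark's API:

```python
# Local analogue of the RDD model: transformations build a recipe lazily;
# actions force evaluation; cache() keeps a materialized copy for reuse.

class LocalRDD:
    def __init__(self, compute):
        self._compute = compute      # zero-arg function producing the data
        self._cache_on = False
        self._cached = None

    # Transformations: return a new LocalRDD, no work happens yet
    def map(self, f):
        return LocalRDD(lambda: [f(x) for x in self._collect()])

    def filter(self, p):
        return LocalRDD(lambda: [x for x in self._collect() if p(x)])

    def cache(self):
        self._cache_on = True
        return self

    def _collect(self):
        if self._cached is not None:
            return self._cached
        data = self._compute()
        if self._cache_on:
            self._cached = data      # materialize once, reuse afterwards
        return data

    # Actions: actually run the pipeline
    def count(self):
        return len(self._collect())

    def collect(self):
        return list(self._collect())

lines = LocalRDD(lambda: ["ERROR a", "INFO b", "ERROR c"])
errors = lines.filter(lambda l: l.startswith("ERROR")).cache()
print(errors.count())    # 2
print(errors.collect())  # ['ERROR a', 'ERROR c']
```

Nothing runs when `filter` is called; only the `count` and `collect` actions trigger evaluation, which is the property that lets Spark plan and place work lazily.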

  9. Example: Log Mining • Load error messages from a log into memory, then interactively search for various patterns

    lines = spark.textFile("hdfs://...")             // base RDD
    errors = lines.filter(_.startsWith("ERROR"))     // transformed RDD
    messages = errors.map(_.split('\t')(2))
    cachedMsgs = messages.cache()

    cachedMsgs.filter(_.contains("foo")).count       // action
    cachedMsgs.filter(_.contains("bar")).count

  [Diagram: driver dispatches tasks to workers; each worker reads its block of the file and caches its partition of messages]
  • Result: full-text search of Wikipedia in <1 sec (vs 20 sec for on-disk data) • Result: scaled to 1 TB data in 5-7 sec (vs 170 sec for on-disk data)

  10. RDD Fault Tolerance • RDDs maintain lineage information that can be used to reconstruct lost partitions • Ex:

    cachedMsgs = textFile(...).filter(_.contains("error"))
                              .map(_.split('\t')(2))
                              .cache()

  [Lineage chain: HdfsRDD (path: hdfs://…) → FilteredRDD (func: contains(...)) → MappedRDD (func: split(…)) → CachedRDD]
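Slide 10's lineage idea can be sketched in miniature: each derived dataset records only its parent and the function that produced it, so a lost in-memory partition is rebuilt by replaying the chain from stable storage rather than by replication. A hypothetical Python illustration (not Spark's implementation):

```python
# Each Dataset remembers either its source (stable storage) or its
# parent plus the deriving function -- i.e. its lineage.

class Dataset:
    def __init__(self, parent=None, func=None, source=None):
        self.parent, self.func, self.source = parent, func, source
        self.partitions = None           # in-memory data; may be lost

    def _recompute(self):
        if self.source is not None:      # base dataset: re-read storage
            return list(self.source)
        return self.func(self.parent.get())

    def get(self):
        if self.partitions is None:      # lost (or never materialized)
            self.partitions = self._recompute()
        return self.partitions

base = Dataset(source=["ERROR\ta\tdisk full", "INFO\tb\tok"])
errors = Dataset(parent=base,
                 func=lambda rows: [r for r in rows if "ERROR" in r])
msgs = Dataset(parent=errors,
               func=lambda rows: [r.split("\t")[2] for r in rows])

print(msgs.get())            # ['disk full']
msgs.partitions = None       # simulate a node failure losing the cache
errors.partitions = None
print(msgs.get())            # ['disk full']  (rebuilt via lineage)
```

The recovery cost is recomputation of the lost partitions only, which is why lineage is cheaper than checkpointing or logging every write.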

  11. Example: Logistic Regression • Goal: find best line separating two sets of points [Plot: two classes of points (+ and -), a random initial line, and the target separating line]

  12. Example: Logistic Regression

    val data = spark.textFile(...).map(readPoint).cache()
    var w = Vector.random(D)
    for (i <- 1 to ITERATIONS) {
      val gradient = data.map(p =>
        (1 / (1 + exp(-p.y * (w dot p.x))) - 1) * p.y * p.x
      ).reduce(_ + _)
      w -= gradient
    }
    println("Final w: " + w)

  13. Logistic Regression Performance • Hadoop: 127 s / iteration • Spark: 174 s for the first iteration, 6 s per further iteration (working set cached in memory)

  14. Spark Applications • Twitter spam classification (Monarch) • In-memory analytics on Hive data (Conviva) • Traffic prediction using EM (Mobile Millennium) • K-means clustering • Alternating Least Squares matrix factorization • Network simulation

  15. Conviva GeoReport • Aggregations on many group keys w/ same WHERE clause • Gains come from: • Not re-reading unused columns • Not re-reading filtered records • Not repeating decompression • In-memory storage of deserialized objects [Chart: report generation time in hours, Hive vs Spark]

  16. Generality of RDDs • RDDs can efficiently express many proposed cluster programming models • MapReduce => map and reduceByKey operations • Dryad => Spark runs general DAGs of tasks • Pregel iterative graph processing => Bagel • SQL => Hive on Spark (Hark?) • Can also express apps that these models cannot
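The "MapReduce => map and reduceByKey" claim can be sketched with a local word count. Plain Python functions stand in for the RDD operations here; `flat_map` and `reduce_by_key` are illustrative names, not Spark's API:

```python
# Word count, the canonical MapReduce job, expressed as a map over
# records followed by a per-key reduction.

def flat_map(data, f):
    # map phase: each record expands to zero or more (key, value) pairs
    return [y for x in data for y in f(x)]

def reduce_by_key(pairs, combine):
    # reduce phase: combine all values sharing a key
    acc = {}
    for k, v in pairs:
        acc[k] = combine(acc[k], v) if k in acc else v
    return sorted(acc.items())

lines = ["to be or", "not to be"]
pairs = flat_map(lines, lambda l: [(w, 1) for w in l.split()])
counts = reduce_by_key(pairs, lambda a, b: a + b)
print(counts)  # [('be', 2), ('not', 1), ('or', 1), ('to', 2)]
```

Because these two operations compose with the rest of the RDD interface (joins, caching, general DAGs), the same substrate also covers the Dryad- and Pregel-style workloads listed above.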

  17. Implementation • Spark runs on the Mesos cluster manager, letting it share resources with Hadoop & other apps • Can read from any Hadoop input source (e.g. HDFS) • ~7000 lines of code; no changes to Scala compiler [Diagram: Spark, Hadoop, MPI, and other frameworks running side by side on Mesos across cluster nodes]

  18. Language Integration • Scala closures are Serializable Java objects • Serialize on master, load & run on workers • Not quite enough • Nested closures may reference entire outer scope • May pull in non-Serializable variables not used inside • Solution: bytecode analysis + reflection • Other tricks using custom serialized forms (e.g. “accumulators” as syntactic sugar for counters)
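The closure-shipping problem on slide 18 has a direct Python analogue: the standard library's pickle refuses to serialize a lambda by value at all, which is the kind of gap Spark's bytecode analysis and custom serialized forms work around for Scala closures. A minimal sketch:

```python
# Plain pickle serializes functions by reference (module + name), so an
# anonymous closure cannot be shipped to a worker this way at all.

import pickle

f = lambda x: x + 1

try:
    pickle.dumps(f)
    shipped = True
except Exception:
    shipped = False

print(shipped)  # False: plain pickle refuses lambdas
```

Scala closures avoid this first hurdle because they compile to Serializable objects, but they hit the second problem the slide names: a nested closure may capture its whole enclosing scope, including non-Serializable values it never uses, hence the bytecode analysis to prune the captured environment.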

  19. Interactive Spark • Modified Scala interpreter to allow Spark to be used interactively from the command line • Required two changes: • Modified wrapper code generation so that each line typed has references to objects for its dependencies • Distribute generated classes over the network • Enables in-memory exploration of big data

  20. Demo

  21. Conclusion • Spark’s resilient distributed datasets are a simple programming model for a wide range of apps • Download our open source release at www.spark-project.org • Contact: matei@berkeley.edu

  22. Related Work • DryadLINQ • Build queries through language-integrated SQL operations on lazy datasets • Cannot have a dataset persist across queries • Relational databases • Lineage/provenance, logical logging, materialized views • Piccolo • Parallel programs with shared distributed hash tables; similar to distributed shared memory • Iterative MapReduce (Twister and HaLoop) • Cannot define multiple distributed datasets, run different map/reduce pairs on them, or query data interactively

  23. Related Work • Distributed shared memory (DSM) • Very general model allowing random reads/writes, but hard to implement efficiently (needs logging or checkpointing) • RAMCloud • In-memory storage system for web applications • Allows random reads/writes and uses logging like DSM • Nectar • Caching system for DryadLINQ programs that can reuse intermediate results across jobs • Does not provide caching in memory, explicit control over which data is cached, or control over partitioning • SMR (functional Scala API for Hadoop)
