
Large-Scale Data Processing / Cloud Computing: Lecture 3 – Hadoop Environment



Presentation Transcript


  1. Large-Scale Data Processing / Cloud Computing, Lecture 3 – Hadoop Environment. Peng Bo (彭波), School of Electronics Engineering and Computer Science, Peking University. 4/23/2011. http://net.pku.edu.cn/~course/cs402/ Based on course materials by Jimmy Lin, University of Maryland. Course development: SEWM Group. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

  2. Install Hadoop

  3. Installation • Prerequisites • Java SDK 6 • Linux OS (the only supported production platform) • Installation • Download hadoop-x.y.z.tar.gz (we use version 0.20.2) • tar xzvf hadoop-x.y.z.tar.gz • Setup environment variables (consolidated in the sketch below) • export HADOOP_HOME=/home/hadoop • export HADOOP_CONF_DIR=/home/hadoop/hadoop-config • export PATH=$HADOOP_HOME/bin:$PATH • export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
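Put together, an install session might look like the following sketch. The archive name and paths follow the slide's examples (hadoop-0.20.2, /home/hadoop, the OpenJDK 6 location) and may differ on your machine.

    # Unpack the release (0.20.2 in this course)
    cd /home/hadoop
    tar xzvf hadoop-0.20.2.tar.gz

    # Environment variables (e.g. in ~/.bashrc)
    export HADOOP_HOME=/home/hadoop
    export HADOOP_CONF_DIR=/home/hadoop/hadoop-config
    export PATH=$HADOOP_HOME/bin:$PATH
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk

    # Sanity check: the hadoop script should now be on the PATH
    hadoop version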

  4. Configuration • Directory: $HADOOP_CONF_DIR • core-site.xml • hdfs-site.xml • mapred-site.xml
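The slide names the files but not their contents. As an illustration, a minimal single-machine (pseudo-distributed) setup for Hadoop 0.20 could look like the following; the localhost addresses and ports are assumptions for this sketch, not course-mandated values.

    <!-- core-site.xml: default file system URI -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml: one replica suffices on a single node -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    <!-- mapred-site.xml: address of the jobtracker -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>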

  5. Test • HDFS • hadoop fs -ls / • MapReduce • hadoop jar $HADOOP_HOME/hadoop-0.20.2-examples.jar pi 10 10000 • Web Administration • http://jobtracker:50030

  6. Develop Environment • Eclipse 3.3.2 works well with the plugin distributed with Hadoop 0.20.2 • Newer Eclipse versions are best avoided with this plugin • Online knowledge base • Common-errors digest (错误汇总) • "Hadoop 0.20 Development – Eclipse Plugin + Makefile" (《hadoop 0.20 程式開發 – eclipse plugin + Makefile》)

  7. “Hello world”

  8. “Hello World”: Word Count

    Map(String docid, String text):
      for each word w in text:
        Emit(w, 1);

    Reduce(String term, Iterator<Int> values):
      int sum = 0;
      for each v in values:
        sum += v;
      Emit(term, sum);

  9. Basic Hadoop API* • Mapper • void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter) • void configure(JobConf job) • void close() throws IOException • Reducer/Combiner • void reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter) • void configure(JobConf job) • void close() throws IOException • Partitioner • int getPartition(K2 key, V2 value, int numPartitions) *Note: forthcoming API changes…
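A custom partitioner plugs in through the last of these interfaces. Below is a hypothetical example against the 0.20 API; the class name and partitioning scheme are invented for illustration.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Hypothetical: route keys by their first character, so all words
    // sharing an initial letter end up at the same reducer.
    public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {
      public void configure(JobConf job) { }  // no per-job setup needed

      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String s = key.toString();
        int c = s.isEmpty() ? 0 : s.charAt(0);  // chars are non-negative
        return c % numPartitions;
      }
    }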

  10. Data Types in Hadoop
  • Writable: defines a de/serialization protocol. Every data type in Hadoop is a Writable.
  • WritableComparable: defines a sort order. All keys must be of this type (but not values).
  • Concrete classes exist for the usual data types: IntWritable, LongWritable, Text, …
  • SequenceFiles: binary encoding of a sequence of key/value pairs.
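For types the built-ins don't cover, you implement Writable yourself. Here is a hypothetical value type sketched against the 0.20 API; the class name and fields are invented for illustration.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    // Hypothetical value type: a (count, sum) pair, e.g. for computing means.
    public class CountSumWritable implements Writable {
      private int count;
      private long sum;

      public void write(DataOutput out) throws IOException {
        out.writeInt(count);   // serialize fields in a fixed order
        out.writeLong(sum);
      }

      public void readFields(DataInput in) throws IOException {
        count = in.readInt();  // deserialize in the same order
        sum = in.readLong();
      }
    }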

  11. WordCount::Mapper
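The original slide shows the mapper as a code screenshot. A sketch of the standard WordCount mapper in the 0.20 API follows; the class name WordCountMapper is ours.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCountMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      private final static IntWritable one = new IntWritable(1);
      private final Text word = new Text();

      // Emit (word, 1) for every token in the input line.
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
          word.set(itr.nextToken());
          output.collect(word, one);
        }
      }
    }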

  12. WordCount::Reducer
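Likewise for the reducer slide: a sketch of the standard WordCount reducer in the 0.20 API, with our class name.

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class WordCountReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      // Sum the counts for each word and emit (word, total).
      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
      }
    }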

  13. Basic Cluster Components • One of each: • Namenode (NN) • Jobtracker (JT) • Set of each per slave machine: • Tasktracker (TT) • Datanode (DN)

  14. Putting everything together… [cluster diagram] The namenode machine runs the namenode daemon, and the job submission node runs the jobtracker; every slave node runs a tasktracker and a datanode daemon on top of its local Linux file system.

  15. Anatomy of a Job • MapReduce program in Hadoop = Hadoop job • Jobs are divided into map and reduce tasks • An instance of running a task is called a task attempt • Multiple jobs can be composed into a workflow • Job submission process • Client (i.e., driver program) creates a job, configures it, and submits it to job tracker • JobClient computes input splits (on client end) • Job data (jar, configuration XML) are sent to JobTracker • JobTracker puts job data in shared location, enqueues tasks • TaskTrackers poll for tasks • Off to the races…
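The driver program from the first bullet might look like this under the 0.20 API. It wires up the WordCount classes sketched above; the input and output paths are taken from the command line.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);      // types of the final output
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(WordCountMapper.class);
        conf.setCombinerClass(WordCountReducer.class);  // reducer doubles as combiner
        conf.setReducerClass(WordCountReducer.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);  // submits to the jobtracker and waits
      }
    }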

  16. [map-side data-flow diagram] An InputFormat breaks the input files into InputSplits and supplies a RecordReader for each; the RecordReaders parse their splits into records, which are fed to the Mappers, and each Mapper produces a set of intermediates. Source: redrawn from a slide by Cloudera, cc-licensed.

  17. [shuffle data-flow diagram; combiners omitted here] Each Mapper's intermediates pass through a Partitioner, which assigns every key to one Reducer; the re-grouped intermediates are then collected by the Reducers. Source: redrawn from a slide by Cloudera, cc-licensed.

  18. [reduce-side data-flow diagram] Each Reducer writes its results through a RecordWriter supplied by the OutputFormat, yielding one output file per Reducer. Source: redrawn from a slide by Cloudera, cc-licensed.

  19. Input and Output • InputFormat: • TextInputFormat • KeyValueTextInputFormat • SequenceFileInputFormat • … • OutputFormat: • TextOutputFormat • SequenceFileOutputFormat • …
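Both choices are made on the JobConf. For example, continuing the driver sketch above (the particular combination shown is illustrative):

    conf.setInputFormat(KeyValueTextInputFormat.class);    // tab-separated key/value lines
    conf.setOutputFormat(SequenceFileOutputFormat.class);  // binary key/value output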

  20. Shuffle and Sort in Hadoop • Probably the most complex aspect of MapReduce! • Map side • Map outputs are buffered in memory in a circular buffer • When buffer reaches threshold, contents are “spilled” to disk • Spills merged in a single, partitioned file (sorted within each partition): combiner runs here • Reduce side • First, map outputs are copied over to reducer machine • “Sort” is a multi-pass merge of map outputs (happens in memory and on disk): combiner runs here • Final merge pass goes directly into reducer
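The map-side buffer and its threshold are tunable: in 0.20 the relevant properties are io.sort.mb (buffer size in MB) and io.sort.spill.percent (fill fraction that triggers a spill). The values below are the 0.20 defaults.

    conf.setInt("io.sort.mb", 100);                 // circular buffer: 100 MB
    conf.setFloat("io.sort.spill.percent", 0.80f);  // spill at 80% full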

  21. Shuffle and Sort [diagram] On the mapper, outputs fill a circular buffer in memory; spills (with the combiner applied) go to disk and are merged into partitioned intermediate files on disk, which are served to this job's reducers and to other reducers alike. On the reducer, spills fetched from this and other mappers are merged on disk (combiner again applicable) before feeding the Reducer.

  22. Hadoop Workflow (you ↔ the Hadoop cluster)
  1. Load data into HDFS
  2. Develop code locally
  3. Submit MapReduce job (3a. go back to Step 2 and iterate)
  4. Retrieve data from HDFS
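In commands, one turn of the loop might look like this; the paths, jar name, and class name are placeholders following the WordCount sketches above.

    hadoop fs -put local-input.txt input/                    # 1. load data into HDFS
    # 2. edit, compile, and package your job into wordcount.jar locally
    hadoop jar wordcount.jar WordCountDriver input output    # 3. submit the job
    hadoop fs -get output/part-00000 .                       # 4. retrieve results from HDFS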

  23. Debugging Hadoop • First, take a deep breath • Start with unit tests, then run locally • Strategies • Learn to use the webapp • Where does println go? • Don't use println, use logging • Throw RuntimeExceptions to make failures visible
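Hadoop bundles Apache Commons Logging, so the "use logging" advice typically looks like the sketch below; messages written this way end up in the per-task-attempt logs, browsable through the jobtracker webapp.

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class DebugExample {
      private static final Log LOG = LogFactory.getLog(DebugExample.class);

      public static void main(String[] args) {
        // Unlike println, log output is captured per task attempt
        // and carries a level, timestamp, and class name.
        LOG.info("use logging instead of println");
      }
    }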

  24. Recap
  • Hadoop data types
  • Anatomy of a Hadoop job
  • Hadoop jobs, end to end
  • Software development workflow

  25. Q&A? Thank you!
