
Java @ Google

JavaZone 2005

Knut Magne Risvik

Google Inc.

September 14, 2005


Presentation Outline

  • Background: Google’s mission and computing platform.

  • GFS and MapReduce: Ebony and Ivory of our Infrastructure

  • Java for computing: Coupling infrastructure and Java

  • Java in Google products: apps and middle-tiers

  • The Java expertise at Google: We host Java leadership.

  • Giving it back: Google contributions to Java

  • Closing notes and Q&A: Swags for good questions.


Google’s Mission

To organize the world’s information and make it universally accessible and useful


Explosive Computational Requirements

Every Google service sees continuing growth in computational needs

  • More queries: More users, happier users

  • More data: Bigger web, mailbox, blog, etc.

  • Better results: Find the right information, and find it faster








A Simple Challenge For Our Computing Platform

  • Create world’s largest computing infrastructure

  • Make sure we can afford it

    Need to drive efficiency of the computing infrastructure to unprecedented levels


Many Interesting Challenges

  • Server design and architecture

  • Power efficiency

  • System software

  • Large scale networking

  • Performance tuning and optimization

  • System management and repairs automation


Design Philosophy

Single-machine performance does not matter

  • The problems we are interested in are too large for any single system

  • Can partition large problems, so throughput beats peak performance

    Stuff Breaks

  • If you have one server, it may stay up three years (1,000 days)

  • If you have 1,000 servers, expect to lose one a day

    “Ultra-reliable” hardware makes programmers lazy

  • A reliable platform will still fail – software still needs to be fault-tolerant

  • Fault-tolerant software beats fault-tolerant hardware


Why Use Commodity PCs?

  • Single high-end 8-way Intel server:

    • IBM eServer xSeries 440

    • 8 × 2-GHz Xeons, 64 GB RAM, 8 TB of disk

    • $758,000

  • Commodity machines:

    • Rack of 88 machines

    • 176 × 2-GHz Xeons, 176 GB RAM, ~7 TB of disk

    • $278,000

  • 1/3X price, 22X CPU, 3X RAM, 1X disk

    Sources: TPC-C performance results, both from late 2002


When Ultra-reliable Machines Won’t Help…


Take-home lesson: Murphy was right

google.stanford.edu (circa 1997)


Lego Disk Case

google.com (1999)


Google Data Center (circa 2000)

google.com new data center (2001)

google.com (3 days later)


When Servers Sleep… (2004)

Google Query Serving Infrastructure

[Diagram: the Google Web Server fans each query out to index servers (over many index shards), doc servers (over many doc shards), and misc. servers such as the spell checker and the ad server.]

Elapsed time: 0.25s, machines involved: 1000+


Reliable Building Blocks

  • Need to store data reliably

  • Need to run jobs on pools of machines

  • Need to make it easy to apply lots of computational resources to problems

    In-house solutions:

  • Storage: Google File System (GFS)

  • Job scheduling: Global Work Queue (GWQ)

  • MapReduce: simplified large-scale data processing

Google File System - GFS

[Diagram: a GFS master (shown with a second master alongside it), chunkservers 1 through N, and misc. servers.]

  • Master manages metadata

  • Data transfers happen directly between clients/chunkservers

  • Files broken into chunks (typically 64 MB)

  • Chunks triplicated across three machines for safety
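The bullets above pin down the two numbers that matter for sizing: 64 MB chunks and three replicas per chunk. Here is a small stand-alone sketch (not Google code; the class name and the figures in main are purely illustrative) of how that arithmetic plays out for a large file:

    // Illustrative only: the chunk arithmetic implied by the GFS bullets above
    // (64 MB chunks, each chunk stored on three machines). Not Google code.
    public final class ChunkMath {
        static final long CHUNK_SIZE = 64L * 1024 * 1024; // 64 MB

        // Index of the chunk that holds a given byte offset within a file.
        static long chunkIndex(long byteOffset) {
            return byteOffset / CHUNK_SIZE;
        }

        // Number of chunks needed to hold a file of the given size.
        static long chunkCount(long fileSizeBytes) {
            return (fileSizeBytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
        }

        public static void main(String[] args) {
            long oneTerabyte = 1L << 40;
            System.out.println(chunkCount(oneTerabyte));        // 16384 chunks
            System.out.println(3 * chunkCount(oneTerabyte));    // 49152 replicas spread over chunkservers
            System.out.println(chunkIndex(200L * 1024 * 1024)); // byte offset 200 MB lands in chunk 3
        }
    }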


GoogleFile API access to GFS

  • GoogleFile. Public API with two roles:

    • Creational class. Static methods to obtain InputStream, OutputStream and GoogleChannel on top of a Google file.

    • File manipulation. A subset of the methods provided by the class.

  • GoogleInputStream. Implements the read method.

  • GoogleOutputStream. Extends OutputStream; implements the write method.

  • GoogleChannel. A public class that implements the ByteChannel interface and a subset of the methods in the FileChannel class; this is what provides random access.

  • GoogleFile.Stats.

  • The JNI layer is implemented by the class FileImpl and a set of SWIG-generated JNI wrappers created during the build process.
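Pulling the bullets above together, here is a compilable skeleton of what that API surface looks like from the caller's side. Only the class names (GoogleFile, GoogleChannel, GoogleFile.Stats) and their roles come from the slide; every method name and signature below is an illustrative guess, not the real internal API.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.channels.ByteChannel;

    // Sketch of the GoogleFile API shape described above. Method names and
    // signatures are guesses for illustration; in the real library the calls
    // bottom out in the FileImpl JNI layer mentioned in the last bullet.
    interface GoogleChannelSketch extends ByteChannel {
        // Implements ByteChannel plus a subset of FileChannel-style methods;
        // positioning is what provides the random access the slide mentions.
        GoogleChannelSketch position(long newPosition) throws IOException;
        long size() throws IOException;
    }

    final class GoogleFileSketch {
        // Creational role: static methods that hand back a stream or channel
        // layered on top of a file stored in GFS.
        static InputStream openInputStream(String gfsPath) throws IOException {
            throw new UnsupportedOperationException("internal to Google");
        }

        static OutputStream createOutputStream(String gfsPath) throws IOException {
            throw new UnsupportedOperationException("internal to Google");
        }

        static GoogleChannelSketch openChannel(String gfsPath) throws IOException {
            throw new UnsupportedOperationException("internal to Google");
        }

        // File-manipulation role: a subset of java.io.File-style operations
        // (the exact subset is not spelled out in the slide).
        static boolean delete(String gfsPath) {
            throw new UnsupportedOperationException("internal to Google");
        }

        // Stats: per-file metadata; the fields here are illustrative only.
        static final class Stats {
            long lengthBytes;
            long chunkCount;
        }
    }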


GFS Usage at Google

  • 30+ Clusters

  • Clusters as large as 2000+ chunkservers

  • Petabyte-sized filesystems

  • 2000+ MB/s sustained read/write load

  • All in the presence of HW failures

  • More information can be found in:

    The Google File System. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. 19th ACM Symposium on Operating Systems Principles.



MapReduce: Large Scale Data Processing

  • Many tasks: Process lots of data to produce other data

  • Want to use hundreds or thousands of CPUs, and it has to be easy

  • MapReduce provides, for programs following a particular programming model:

    • Automatic parallelization and distribution

    • Fault-tolerance

    • I/O scheduling

    • Status and monitoring


Example: Word Frequencies in Web Pages

A typical exercise for a new engineer in his or her first week

  • Have files with one document per record

  • Specify a map function that takes a key/value pair:
    key = document name
    value = document text

  • Output of the map function is (potentially many) key/value pairs. In our case, output (word, “1”) once per word in the document:

“document1”, “to be or not to be”

“to”, “1”

“be”, “1”

“or”, “1”

Example Continued: Word Frequencies in Web Pages

Intermediate pairs, gathered by key:

key = “be”, values = “1”, “1”

key = “or”, values = “1”

key = “not”, values = “1”

key = “to”, values = “1”, “1”

  • MapReduce library gathers together all pairs with the same key

  • The reduce function combines the values for a key. In our case, compute the sum.

  • The output of reduce (usually 0 or 1 value) is paired with the key and saved:

“be”, “2”

“not”, “1”

“or”, “1”

“to”, “2”


Example: Pseudo-code

    map(String input_key, String input_value):
      // input_key: document name
      // input_value: document contents
      for each word w in input_value:
        EmitIntermediate(w, "1");

    reduce(String key, Iterator intermediate_values):
      // key: a word, same for input and output
      // intermediate_values: a list of counts
      int result = 0;
      for each v in intermediate_values:
        result += ParseInt(v);
      Emit(AsString(result));

Total 80 lines of code
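The slides only show the pseudo-code; for comparison, here is a single-machine, plain-Java sketch of the same word count. It is not Google's MapReduce API: map and reduce simply mirror the pseudo-code above, and a TreeMap stands in for the framework's shuffle/group-by-key step.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Local sketch of the word-count MapReduce above (not Google's API).
    public class WordCountSketch {

        // map: (document name, document text) -> emit (word, "1") per word.
        static void map(String inputKey, String inputValue,
                        Map<String, List<String>> intermediate) {
            for (String w : inputValue.split("\\s+")) {
                intermediate.computeIfAbsent(w, k -> new ArrayList<>()).add("1");
            }
        }

        // reduce: (word, ["1", "1", ...]) -> the sum of the counts.
        static String reduce(String key, Iterator<String> values) {
            int result = 0;
            while (values.hasNext()) {
                result += Integer.parseInt(values.next());
            }
            return Integer.toString(result);
        }

        public static void main(String[] args) {
            // The TreeMap plays the role of the shuffle: it groups every
            // emitted value under its key before reduce runs.
            Map<String, List<String>> intermediate = new TreeMap<>();
            map("document1", "to be or not to be", intermediate);

            for (Map.Entry<String, List<String>> e : intermediate.entrySet()) {
                System.out.println(e.getKey() + ", " + reduce(e.getKey(), e.getValue().iterator()));
            }
            // Prints: be, 2   not, 1   or, 1   to, 2
        }
    }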


Typical Google Cluster

  • 100s/1000s of 2-CPU x86 machines, 2-4 GB of memory

  • Limited bisection bandwidth

  • Storage: local IDE disks and Google File System (GFS)

  • GFS running on the same machines provides reliable, replicated storage of input and output data

  • Job scheduling system: jobs made up of tasks, scheduler assigns tasks to machines



[Data-flow diagram: input is read from GFS (Google File System) by map tasks 1-3, flows through shuffle and sort to reduce tasks 1 and 2, and the output is written back to GFS.]



  • Shuffle stage is pipelined with mapping

  • Many more tasks than machines, for load balancing

  • Locality: map tasks scheduled near the data they read

  • Backup copies of map & reduce tasks (avoids stragglers)

  • Compress intermediate data

  • Re-execute tasks on machine failure

MapReduce status: MR_Indexer-beta6-large-2003_10_28_00_03

[Sequence of screenshots of the MapReduce status page for this indexing job, captured at successive points during the run.]



Using 1800 machines:

  • MR_Grep scanned 1 terabyte in 100 seconds

  • MR_Sort sorted 1 terabyte of 100-byte records in 14 minutes

    Rewrote Google's production indexing system

  • a sequence of 7, 10, 14, 17, 21, 24 MapReductions

  • simpler

  • more robust

  • faster

  • more scalable


Usage in March 2005


Widely applicable at Google

  • Implemented as a C++ library linked to user programs

  • Java JNI interface similar to GFS API.

  • Can read and write many different data types

    Example uses:

  • distributed grep (sketched below)

  • distributed sort

  • term-vector per host

  • document clustering

  • machine learning

  • web access log stats

  • web link-graph reversal

  • inverted index construction

  • statistical machine translation

  • ...
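As a second, much smaller illustration of the model, here is a conceptual plain-Java sketch of the distributed grep mapper from the list above (again not Google's API): the map function emits a line only when it matches the pattern, and the reduce function is simply the identity.

    import java.util.function.BiConsumer;
    import java.util.regex.Pattern;

    // Conceptual sketch of "distributed grep" in the MapReduce model.
    public class GrepMapperSketch {

        // map: emit (location, line) only for lines matching the pattern.
        static void map(String location, String line, Pattern pattern,
                        BiConsumer<String, String> emit) {
            if (pattern.matcher(line).find()) {
                emit.accept(location, line);
            }
        }

        public static void main(String[] args) {
            Pattern p = Pattern.compile("GFS");
            BiConsumer<String, String> emit = (k, v) -> System.out.println(k + "\t" + v);
            map("doc1:42", "GFS provides reliable, replicated storage", p, emit); // emitted
            map("doc1:43", "an unrelated line", p, emit);                         // filtered out
        }
    }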



  • MapReduce has proven to be a useful abstraction

  • Greatly simplifies large-scale computations at Google

  • Fun to use: focus on problem, let library deal with messy details

    MapReduce: Simplified Data Processing on Large Clusters. Jeffrey Dean and Sanjay Ghemawat. OSDI'04: Sixth Symposium on Operating System Design and Implementation.

    (Search Google for “MapReduce”)

Java in Google Applications

  • AdWords FE
    - Millions of ads
    - Billions of transactions
    - Extreme rates

  • Gmail middle tier
    - UI and storage brokerage
    - Content searching, analysis
    - Tagging


Java expertise @ Google

  • Joshua Bloch - Collections Framework, Java 5.0 language enhancements, java.math; author of "Effective Java," coauthor of "Java Puzzlers."

  • Neal Gafter - Lead developer of javac, implementor of Java 5.0 language enhancements, coauthor of "Java Puzzlers."

  • Robert Griesemer - Architect and technical lead of the HotSpot JVM.

  • Doug Kramer - Javadoc architect, Java platform documentation lead.

  • Tim Lindholm - Original member of the Java project, key contributor to the Java programming language, implementor of the classic JVM, coauthor of "The Java Virtual Machine Specification."

  • Michael "madbot" McCloskey - Designer and implementer of java.util.regex.

  • Srdjan Mitrovic - Co-implementor of the HotSpot JVM.

  • David Stoutamire - Technical lead for Java performance, designer and implementer of parallel garbage collection.

  • Frank Yellin - Original member of the Java project, co-implementor of the classic JVM, KVM, and CLDC; coauthor of "The Java Virtual Machine Specification."


Giving it back – JCP Expert Groups

  • Executive Committee for J2SE/J2EE

  • JSR 166X: Concurrency Utilities (continuing)

  • JSR 199: Java Compiler API

  • JSR 220: Enterprise JavaBeans 3.0

  • JSR 250: Common Annotations for the Java Platform

  • JSR 260: Javadoc Tag Technology Update

  • JSR 269: Pluggable Annotation Processing API

  • JSR 270: J2SE 6.0 ("Mustang") Release Contents (representative: Neal Gafter)

  • JSR 273: Design-Time API for JavaBeans JBDT

  • JSR 274: The BeanShell Scripting Language

  • JSR 277: Java Module System


Closing Notes

  • Google = Computing infrastructure

  • Java is becoming a first-class citizen at Google

  • Essential native interfaces being built

  • API design is extremely important at our scale; the Java expertise is driving general API work

  • Google brings high-scale industrial experience into JCP expert groups.



Knut Magne Risvik

Google Inc.

September 14, 2005
