Distributed Parallel architecture for "Big Data"

Presentation Transcript


  1. Distributed Parallel architecture for "Big Data" Hiba Almasri, Dr. Amer Badarneh

  2. Outline • Abstract • Introduction • Distributed approach • Google File System (GFS) • Hadoop Distributed File System (HDFS) • Proposed architecture • Extract, transform, load (ETL) • MapReduce • Parallel database systems • Comparison between MR and parallel DBMS • Conclusion

  3. Abstract • This paper analyzes the problem of storing, processing and retrieving meaningful insight from petabytes of data. • It provides a survey of current distributed and parallel data processing technologies. • It proposes an architecture that can be used to solve the analyzed problem.

  4. Introduction • A look back at recent history shows that • data storage capacity is continuously increasing, • while the cost per GB stored is decreasing.

  5. Introduction • The reduced cost of data storage is the main premise of the current data age: • it is possible to record and store almost everything, from business to personal data. • The availability and speed of Internet connections around the world have also generated increased data traffic.

  6. Introduction: examples • The Facebook social network hosts more than 140 billion photos, more than double the 60 billion pictures hosted at the end of 2010. • In terms of storage, all these snapshots take up more than 14 petabytes. • In the research field, the SETI project records around 30 terabytes of data each month, • which are processed by over 250,000 computers each day [1].

  7. Introduction • The problem that arises is: how do we process all of these data? • This issue is constrained by • available computing power • algorithm complexity • access speeds

  8. Introduction • Professor Anand Rajaraman argued that more data usually beats better algorithms [2]. • More data is important because it provides a more accurate description of the analyzed phenomenon.

  9. Introduction • With more data, data mining algorithms are able to extract a wider set of influence factors and more subtle influences. • The problems are: • is it possible to have a single drive of that size? • the long time required to access it.

  10. Introduction: rapid evolution of drive capacity

  11. Introduction • Despite this rapid evolution, • large datasets of up to one petabyte can only be stored on multiple disks. • Example: • using 4 TB drives requires 250 of them to store 1 PB of data. Now the question is • how easy/hard is it to read that data? • Considering an optimal transfer rate of 300 MB/s, the entire dataset is read in about 38.5 days.

  12. Introduction • A simple solution is to read from all the disks at once. • For our example, the entire dataset is then read in only about 3.7 hours (see the sketch below). • Other characteristics of large data sets add supplementary levels of difficulty: • many input sources • redundancy • lack of normalization or data representation standards • different degrees of integrity and consistency • To meet all of these characteristics, the distributed approach is a solution.
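
As a quick sanity check of the figures quoted above, the small calculation below reproduces both estimates; it assumes decimal units (1 PB = 10^9 MB), a sustained 300 MB/s per drive, and 250 drives.

```python
# Back-of-the-envelope check of the read times quoted above.
# Assumptions: 1 PB = 1,000,000,000 MB (decimal units), 300 MB/s per drive, 250 drives.

PETABYTE_MB = 1_000_000_000   # 1 PB expressed in MB
RATE_MB_S = 300               # optimal sustained transfer rate per drive
DRIVES = 250                  # 250 x 4 TB drives hold 1 PB

serial_seconds = PETABYTE_MB / RATE_MB_S
parallel_seconds = serial_seconds / DRIVES

print(f"serial read:   {serial_seconds / 86_400:.1f} days")    # ~38.6 days
print(f"parallel read: {parallel_seconds / 3_600:.1f} hours")  # ~3.7 hours
```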

  13. Distributed approach • In a distributed approach, the file system aims to achieve the following goals: • it should be scalable • it should offer high performance • it should be reliable • it should have high availability • Google has published a paper, "The Google File System", describing the file system it uses for its business.

  14. GFS • In this model we avoid having the GFS master as a bottleneck in the data transmission.

  15. Hadoop Distributed File System (HDFS) • What GFS identifies as the "Master" is called the NameNode, • and the GFS "Chunk Servers" can be found as "DataNodes" in HDFS. • Hadoop is a distributed computing platform and an open source implementation of the MapReduce framework proposed by Google. • HDFS is the primary storage system used by Hadoop applications. • HBase is an open source, distributed, versioned, column-oriented database.

  16. Proposed architecture • The architecture, described in figure 6, has three layers: • the input layer • the data layer • the user layer

  17. ETL • The Extract, Transform and Load (ETL) process provides an intermediary transformation layer between outside sources and the end target database. • The ETL process [32] is based on three elements: • Extract – the data is read from multiple source systems into a single format. • Transform – the source data is transformed into a format relevant to the solution. • Load – the transformed data is written into the warehouse.

  18. ETL Layer • The ETL intermediary layer, placed between the first two main layers, • collects data from the data crawler and harvester component, • converts it into a new form, • normalizes data, • discards unneeded or inconsistent information, • and loads it into the parallel DBMS data store (a minimal sketch follows below).
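
A minimal sketch of what this ETL layer could look like; the field names, the inline sample record, and the plain list standing in for the parallel DBMS store are illustrative assumptions, not part of the proposed architecture.

```python
# Minimal ETL sketch (hypothetical field names and targets, for illustration only).

def extract(sources):
    """Read raw records from the crawler/harvester component into one format."""
    for source in sources:
        for record in source:
            yield dict(record)

def transform(records):
    """Normalize records and discard unneeded or inconsistent ones."""
    for record in records:
        if not record.get("id"):                                  # discard inconsistent rows
            continue
        record["name"] = record.get("name", "").strip().lower()   # normalize a field
        yield record

def load(records, target):
    """Write the transformed records into the data store."""
    for record in records:
        target.append(record)            # stand-in for a real bulk-load call

warehouse = []
load(transform(extract([[{"id": 1, "name": " Ada "}]])), warehouse)
print(warehouse)   # [{'id': 1, 'name': 'ada'}]
```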

  19. ETL Layer • It is important that the ETL server also supports parallel processing, allowing it to transform large data sets in a timely manner. • If the ETL server does not support parallel processing, then it should just define the transformations and push the processing to the target parallel DBMS.

  20. MapReduce • MapReduce (MR) is a programming model and an associated implementation for processing and generating large data sets. • The large data set is split into smaller subsets, • which are processed in parallel by a large cluster of commodity machines. • The Map function takes input data and produces a set of intermediate subsets. • The MapReduce library groups together all intermediate subsets associated with the same intermediate key and sends them to the Reduce function.

  21. MapReduce • The Reduce function • accepts an intermediate key and its subsets. • It merges these subsets to form a possibly smaller set of values; • normally just zero or one output value is produced per Reduce invocation. • The entire framework manages how data is split among nodes and how intermediate results are aggregated (see the word-count sketch below).
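
The classic illustration of this model is word counting. The sketch below runs Map, the grouping ("shuffle") step, and Reduce in a single process purely to show the data flow; a real MapReduce framework would distribute these steps across the cluster.

```python
# Minimal single-process MapReduce sketch (word count).
from collections import defaultdict

def map_fn(document):
    """Map: emit an intermediate (word, 1) pair for every word in the input."""
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: merge all values for one intermediate key into one output value."""
    return word, sum(counts)

documents = ["big data beats better algorithms", "big clusters process big data"]

# "Shuffle": group intermediate values by key, as the MapReduce library would.
groups = defaultdict(list)
for doc in documents:
    for word, one in map_fn(doc):
        groups[word].append(one)

results = dict(reduce_fn(word, values) for word, values in groups.items())
print(results["big"])   # 3
```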

  22. Parallel database systems • A distributed database (DDB) is a collection of multiple, logically interconnected databases distributed over a computer network. • A distributed DBMS • is the software system that permits the management of the distributed database • and makes the distribution transparent to the users. • A parallel DBMS implements the concept of horizontal partitioning by distributing parts of a large relational table across multiple nodes to be processed in parallel (illustrated below).
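
As an illustration of horizontal partitioning, the sketch below hash-partitions rows of one logical table across a small number of nodes; the node count and the partitioning key are assumptions chosen for the example, not details taken from the paper.

```python
# Hash-based horizontal partitioning sketch: rows of one logical table
# are spread across several nodes so each node scans only its own slice.

NODES = 4   # assumed cluster size for the example

def node_for(row_key, nodes=NODES):
    """Route a row to a node by hashing its partitioning key."""
    return hash(row_key) % nodes

partitions = {n: [] for n in range(NODES)}
for row in [{"id": i, "value": i * 10} for i in range(12)]:
    partitions[node_for(row["id"])].append(row)

# Each node now holds roughly 1/NODES of the table and can be scanned in parallel.
for node, rows in partitions.items():
    print(node, len(rows))
```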

  23. Parallel database systems • There are different strategies to implement a parallel DBMS: • shared memory • shared disks • shared nothing • In a massively parallel processing (MPP) architecture, adding more hardware allows for more storage capacity and increases query speeds, • reduces the implementation effort, • and reduces the administration effort.

  24. The user layer • The user will submit inquiries through a front-end application server, • which will convert them into queries • and submit them to the parallel DBMS for processing. • The front-end application server will include • a user-friendly metadata layer • that will allow the user to query the data ad hoc, • plus canned reports and dashboards. • For the end user the architecture is completely transparent; all they will experience is the look and feel of the front-end application.

  25. Proposed architecture: objective • The objective of the proposed architecture is to • separate the layers that are data-processing intensive • and to link them by ETL services that act as data buffers or cache zones. • The ETL will also support the transformation effort, • yielding a combined MR-parallel DBMS solution. • The question is: why do we need a combined MR-parallel DBMS?

  26. Comparison between MR and parallel DBMS • There is no solution that is good for all scenarios of large-scale data analysis. • Both solutions can be used for the same processing task; • the two approaches are complementary. • The process of configuring and loading data into the parallel DBMS is more complex and time consuming than setting up the MR architecture.

  27. Comparison between MR and parallel DBMS • DBMSs provide high-level query languages, like SQL; MR uses code fragments, providing a rich interface that allows programmers to reuse code (contrasted below). • Parallel DBMSs have been proved to be more efficient in terms of speed, but they are more vulnerable to node failures. • DBMSs use B-tree indexes to achieve fast search times; the MR framework depends on the programmers, who control the data-fetching mechanism.
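
To make the interface difference concrete: an aggregation that a parallel DBMS expresses as one declarative SQL statement has to be written as explicit map and reduce fragments in MR. The "sales" data and column names below are hypothetical.

```python
# The same aggregation in the two styles (hypothetical 'sales' data).
# Parallel DBMS: one declarative statement, optimized by the engine, e.g.
#   SELECT region, SUM(amount) FROM sales GROUP BY region;
# MapReduce: the programmer writes the map and reduce fragments explicitly.
from collections import defaultdict

def map_sale(sale):
    yield sale["region"], sale["amount"]   # emit (group key, value)

def reduce_sales(region, amounts):
    return region, sum(amounts)            # aggregate per group

sales = [{"region": "EU", "amount": 5}, {"region": "US", "amount": 7},
         {"region": "EU", "amount": 3}]

grouped = defaultdict(list)
for sale in sales:
    for region, amount in map_sale(sale):
        grouped[region].append(amount)

print(dict(reduce_sales(r, a) for r, a in grouped.items()))  # {'EU': 8, 'US': 7}
```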

  28. Conclusion • A combined MR-parallel DBMS solution is a possibility, as it benefits from the advantages of each approach. • Including the ETL as an intermediate layer will support the transformation effort.

  29. Any questions?

  30. References • [1] SETI@home, SETI project, http://setiathome.berkeley.edu/ • [2] A. Rajaraman, "More data usually beats better algorithms", parts I and II, 2008, http://anand.typepad.com/datawocky/2008/03/more-data-usual.html
