
SQL on Hadoop



Presentation Transcript


  1. SQL on Hadoop: RAM not Disk. Paul Groom

  2. Behind the numbers

     create external script LM_PRODUCT_FORECAST environment rsint
     receives ( SALEDATE DATE, DOW INTEGER, ROW_ID INTEGER, PRODNO INTEGER, DAILYSALES INTEGER )
     partition by PRODNO order by PRODNO, ROW_ID
     sends ( R_OUTPUT varchar )
     isolate partitions
     script S'endofr(
       # Simple R script to run a linear fit on daily sales
       prod1 <- read.csv(file = file("stdin"), header = FALSE, row.names = 1)
       colnames(prod1) <- c("DOW", "ID", "PRODNO", "DAILYSALES")
       dim1 <- dim(prod1)
       # Median sales per day-of-week, normalised to fractions of weekly sales
       daily1 <- aggregate(prod1$DAILYSALES, list(DOW = prod1$DOW), median)
       daily1[, 2] <- daily1[, 2] / sum(daily1[, 2])
       # De-seasonalise daily sales by the day-of-week factor
       basesales <- array(0, c(dim1[1], 2))
       basesales[, 1] <- prod1$ID
       basesales[, 2] <- prod1$DAILYSALES / daily1[prod1$DOW + 1, 2]
       colnames(basesales) <- c("ID", "BASESALES")
       # Linear fit of de-seasonalised sales against time
       fit1 <- lm(BASESALES ~ ID, as.data.frame(basesales))
       # Frame to hold actuals plus a 28-day-ahead forecast
       forecast <- array(0, c(dim1[1] + 28, 4))
       colnames(forecast) <- c("ID", "ACTUAL", "PREDICTED", "RESIDUALS")

     select sum(sales)
     from sales_history
     where year = 2006 and month = 5 and region = 1;

     select total_sales
     from summary
     where year = 2006 and month = 5 and region = 1;

     select Trans_Year,
            Num_Trans,
            count(distinct Account_ID) Num_Accts,
            sum(count(distinct Account_ID)) over (partition by Trans_Year order by Num_Trans) Total_Accts,
            cast(sum(total_spend)/1000 as int) Total_Spend,
            cast(sum(total_spend)/1000 as int) / count(distinct Account_ID) Avg_Yearly_Spend,
            rank() over (partition by Trans_Year order by count(distinct Account_ID) desc) Rank_by_Num_Accts,
            rank() over (partition by Trans_Year order by sum(total_spend) desc) Rank_by_Total_Spend
     from (
       select Account_ID,
              extract(year from Effective_Date) Trans_Year,
              count(Transaction_ID) Num_Trans,
              sum(Transaction_Amount) Total_Spend,
              avg(Transaction_Amount) Avg_Spend
       from Transaction_fact
       where extract(year from Effective_Date) < 2009
         and Trans_Type = 'D'
         and Account_ID <> 9025011
         and actionid in (select actionid from DEMO_FS.V_FIN_actions where actionoriginid = 1)
       group by Account_ID, extract(year from Effective_Date)
     ) Acc_Summary
     group by Trans_Year, Num_Trans
     order by Trans_Year desc, Num_Trans;

     select dept, sum(sales)
     from sales_fact
     where period between date '01-05-2006' and date '31-05-2006'
     group by dept
     having sum(sales) > 50000;
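     A script defined this way is then called from ordinary SQL. The following is a sketch, assuming Kognitio's external-script FROM-clause invocation syntax and an illustrative sales_history source table:

     select PRODNO, R_OUTPUT
     from (external script LM_PRODUCT_FORECAST
           from (select SALEDATE, DOW, ROW_ID, PRODNO, DAILYSALES
                 from sales_history)) fc;
     -- each PRODNO partition is fed to its own R process on stdin;
     -- whatever the script writes to stdout comes back as R_OUTPUT rows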

  3. Faster, deeper, insight Campaign Management Machine learning algorithms Dynamic Simulation Behaviour modelling Clustering Dynamic Interaction Analytical Complexity Statistical Analysis Fraud detection Reporting & BPM Technology/Automation

  4. Time to influence
     Reaction – what? – potential value
     Action – opportunity – interaction
     BI is becoming democratized

  5. Innovate Consolidate

  6. I need….

  7. Data Discovery tools Dynamic access Drill unlimited

  8. Business [Intelligence] Desires: More timely. Lower latency. Richer data model. More granularity. More user interactions. Self service.

  9. “What percentage of business-pertinent data is in your Hadoop today? How will you improve that percentage?”

  10. But… Are you just Hadumping data?
      Oliver Ratzesberger @ratesberger: Too much talk about #Hadoop being the end of ETL and then turned into the corporate #BigData dumpster. 8:40 PM - 12 Mar 13
      Merv Adrian @merv: @ratesberger mindless #Hadumping is IT's equivalent of fast food - and just as well-balanced. Forethought and planning still matter. 8:43 PM - 12 Mar 13

  11. [Diagram: Value vs. investigative effort, maturing from Hadumping through Data, Data Lake, Awareness & Planning, and Structured Access to Enterprise Integration]

  12. So… engage with that data

  13. …but Hadoop is still too slow for interactive BI …loss of train-of-thought

  14. Business [Intelligence] Desires in relation to Big Data: More timely. Lower latency. Richer data model. More granularity. More user interactions. Self service.

  15. Complex Analytics & Data Science …more math …a lot more math

  16. It’s all about getting work done. Tasks are evolving, and the bottlenecks shift with them: it used to be a simple fetch of a value, then dynamic aggregation, now complex algorithms! The sketch below illustrates the three generations.
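      A minimal sketch of those three generations of task, using hypothetical customer and sales_fact tables:

      -- 1. Simple fetch of a value
      select credit_limit from customer where customer_id = 42;

      -- 2. Dynamic aggregation
      select dept, sum(sales) from sales_fact group by dept;

      -- 3. Complex algorithm: a moving average computed over the aggregates
      select dept, period, sum(sales) monthly_sales,
             avg(sum(sales)) over (partition by dept order by period
                                   rows between 2 preceding and current row) moving_3m
      from sales_fact
      group by dept, period;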

  17. Must get more out of Hadoop! Need better SQL integration

  18. SQL support …degrees of. BI users want a lot more than just ANSI ’89 or ’92 support. What about ’99, 2003, 2006, 2008 and now 2011? What about ad-hoc, on-demand now…not batch! The contrast below shows why the later standards matter.
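      For instance, a running total is a single pass with a SQL:2003 window function, but forces a self-join in SQL-92. The monthly_sales table here is illustrative:

      -- SQL-92 style: running total via a correlated self-join
      select a.period, sum(b.sales) running_sales
      from monthly_sales a
      join monthly_sales b on b.period <= a.period
      group by a.period;

      -- SQL:2003 style: running total via a window function, one scan
      select period, sum(sales) over (order by period) running_sales
      from monthly_sales;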

  19. SQL performance …degrees of

  20. Are you thinking about lots of these?

  21. …when you should be thinking about lots of these!

  22. Problem

  23. RAM

  24. Let’s talk about: Flash is not RAM

  25. Let’s talk about: in-memory vs cache

  26. In-memory misunderstood
      select count(*) from T1;
      ; conceptual inner scan loop over rows held in DRAM
      mov ebx, base(T1)        ; ebx -> first row of T1 in memory
      mov ecx, num             ; ecx = number of rows to scan
      top:  mov eax, const     ; value being tested
            cmp eax, *ebx      ; compare against the current row
            jne next
            inc count          ; match: bump the counter
      next: add ebx, len(row)  ; step to the next row
            loop ecx, top
      DRAM: Dynamic Random Access
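      Read as SQL, the loop above counts the rows whose scanned column matches a constant; the column name and literal below are illustrative:

      select count(*) from T1 where col1 = 42;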

  27. Let’s talk about: scale-out vs scale-up. Larger RAM with few cores does not help. Scale-out with a consistent RAM-to-core ratio, as the worked example below shows.
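      A worked example with assumed figures: a node with 256 GB of RAM and 16 cores gives 16 GB per core. Scaling up that node to 1 TB of RAM without adding cores leaves 64 GB per core, so each core must scan four times as much data per query. Scaling out to four identical 256 GB nodes instead quadruples both RAM and cores, holding the ratio at 16 GB per core and keeping scan times flat.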

  28. Optimizer
      13 We fetch rows back into an internal interpreter structure.
      14 We drop the temporary table TT2.
      15 We prepare the interpreter to execute another query.
      16 We get values from a lookup table to prequalify the loading of EDW_RESPD_EXPSR_QHR_FACT. This is performed by the following steps, up to 'We fetch rows back into an internal interpreter structure'.
      17 We create an empty temporary table TT3 in RAM which will be randomly distributed.
      18 We select rows from the replicated table EDW_SRVC_MKT_SEG_DIM(6490) with local conditions applied. From these rows, a result set will be generated containing 2 columns. The results will be inserted into the randomly distributed temporary table TT3 in RAM only. Approximately 14 rows will be in the result set with an estimated cost of 0.011.
      19 We select rows from the randomly distributed temporary table TT3. From these rows, a result set will be generated containing 1 column. The results will be prepared to be fetched by the interpreter. Approximately 14 rows will be in the result set with an estimated cost of 0.023.
      20 We fetch rows back into an internal interpreter structure.
      21 We drop the temporary table TT3.
      22 We prepare the interpreter to execute another query.
      23 We create an empty temporary table TT4 in RAM which will be randomly distributed.
      24 We select 6 columns from disk table EDW_RESPD_EXPSR_QHR_FACT(6501) with local conditions. The results are inserted into the randomly distributed temporary table TT4. The result set will contain

  29. Good News: The Price of RAM [Chart: price of RAM on a log10 scale, falling from 1987 through 1995, 2000, 2005 and 2010]

  30. DDR4 Greater throughput to feed more CPU cores …and thus do more analysis

  31. Pertinence comes through analytics; Analytics comes through processing …and not just occasional batch runs. So leave no core idling – query from RAM

  32. So remember: in-memory is about lots of these

  33. Business Integration - Analytical Platform
      Application & Client Layer: All BI Tools, All OLAP Clients, Excel
      Analytical Platform Layer: Near-line Storage (optional), Reporting
      Persistence Layer: Kognitio Storage, Cloud Storage, Hadoop Clusters, Enterprise Data Warehouses, Legacy Systems

  34. Building corporate information architecture. “Information Anywhere”: acquire all data.
      Structured Hadoop repository
      In-memory analytical platform
      Business Intelligence tools
      Analytical tools
      Functional SQL interconnects
      Building blocks for information discovery and extraction

  35. Epilogue

  36. Inevitable commoditization

  37. “vendors always commoditize storage platforms …again and again”
      In 2013 the first Kinetic hard drives launched: direct access over Ethernet, direct object access via key-value pairs. The HDFS versions followed a few years later. …now map-reduce going into firmware?

  38. Innovate Consolidate

  39. connect, contact
      Paul Groom, Chief Innovation Officer - paul.groom@kognitio.com
      Michael Hiskey, VP, Marketing & Business Development - michael.hiskey@kognitio.com
      Steve Friedberg, press contact, MMI Communications - steve@mmicomm.com
      kognitio.com | kognitio.tel | kognitio.com/blog
      twitter.com/kognitio | linkedin.com/companies/kognitio | tinyurl.com/kognitio | youtube.com/kognitio
      Kognitio is an Exabyte Sponsor of Strata Hadoop World - see us at booth #409
