Business Intelligence
Straddle IT and Business
Numbers - tables, charts, indicators
Time - history, lag
Access - to view (portal), to data, to depth
Control/Secure
Consumption - digestion
More-connected users?
According to one estimate, mankind created 150 exabytes (billion gigabytes) of data in 2005.
Data flow
Data Variety
What?
New value comes from your existing data
Respondents were asked to choose up to two descriptions of how their organizations view big data. Choices have been abbreviated, and selections have been normalized to total 100%. n = 1144
Source: IBM Institute for Business Value/Said Business School Survey
Dynamic Simulation
Technology/Automation
loss of train of thought
[Diagram: Reporting sits on a Persistence Layer of Hadoop Clusters, Cloud Storage, and Legacy Systems]
The Future: Predictive Analytics, Advanced Analytics, Big Data, In-memory, Data Scientists
connect: www.kognitio.com | linkedin.com/companies/kognitio | tinyurl.com/kognitio | twitter.com/kognitio | youtube.com/kognitio
NA: +1 855 KOGNITIO | EMEA: +44 1344 300 770
Hadoop meets Mature BI: Where the rubber meets the road for Data Scientists
The key challenge for Data Scientists is not the proliferation of their roles, but the ability to graduate key Big Data projects out of the Data Science Lab and productionize them across their broader organizations. Over the next 18 months, "Big Data" will become just "Data"; this means everyone (even business users) will need a way to use it, without reinventing the way they interact with their current reporting and analysis. Doing this requires interactive analysis with existing tools and massively parallel code execution, tightly integrated with Hadoop. Your Data Warehouse is dying: Hadoop will drive a material shift away from price per TB in persistent data storage.
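As a sketch of what "interactive analysis with existing tools" over Hadoop can look like (illustrative only, not from the deck; the table name, column layout, and HDFS path are assumptions), a SQL-on-Hadoop engine such as Hive can expose files already sitting in HDFS as an ordinary table that existing BI and reporting tools can query with familiar SQL:

-- Hypothetical example: table name, columns, and HDFS location are assumptions
CREATE EXTERNAL TABLE sales_history_hdfs (
  saledate   DATE,
  prodno     INT,
  dailysales DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION 'hdfs:///data/sales_history/';

-- Reporting tools then run standard SQL against Hadoop-resident data
SELECT prodno, SUM(dailysales) AS total_sales
FROM sales_history_hdfs
GROUP BY prodno;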
@Kognitio @mphnyc #MPP_R
Wanted
Dead or Alive
SQL
Tasks are evolving: it used to be a simple fetch of a value; then it was calculating a dynamic aggregate.
Simple fetch of a value:

select sum(sales) total_sales
from sales_fact
where year = 2006 and month = 05 and region = 1;

Dynamic aggregate (window functions over a derived table):

select Trans_Year, Num_Trans, dept,
       count(distinct Account_ID) Num_Accts,
       sum(count(distinct Account_ID)) over (partition by Trans_Year),
       Total_Spend,
       cast(sum(total_spend)/1000 as int),
       cast(sum(total_spend)/1000 as int) / count(distinct Account_ID),
       rank() over (partition by Trans_Year order by count(distinct Account_ID)),
       rank() over (partition by Trans_Year order by sum(total_spend))
from ( select Account_ID,
              Extract(Year from Effective_Date) Trans_Year,
              count(Transaction_ID) Num_Trans,
              ...   -- remaining inner-select columns not shown on the slide
       from sales_history
       where period between date '2006-05-01' and date '2006-05-31' )
group by dept
having sum(sales) > 50000;

Now: in-database analytics, a simple R linear fit on daily sales run as a massively parallel external script:

create external script ... environment rsint   -- script name not shown on the slide; rsint is the R script environment
receives ( SALEDATE DATE, DOW INTEGER, ROW_ID INTEGER, PRODNO INTEGER, DAILYSALES INTEGER )   -- DAILYSALES type truncated on the slide; INTEGER assumed
partition by PRODNO order by PRODNO, ROW_ID
sends ( R_OUTPUT varchar )
isolate partitions
script S'endofr(
# Simple R script to run a linear fit on daily sales
prod1<-read.csv(file=file("stdin"), header=FALSE, row.names=1)   # row.names truncated on the slide; 1 assumed (SALEDATE used as row names)
colnames(prod1)<-c("DOW","ID","PRODNO","DAILYSALES")
dim1<-dim(prod1)
daily1<-aggregate(prod1$DAILYSALES, list(DOW = prod1$DOW), sum)  # aggregation function truncated on the slide; sum assumed
daily1[,2]<-daily1[,2]/sum(daily1[,2])
basesales<-array(0,c(dim1[1],2))
basesales[,1]<-prod1$ID
basesales[,2]<-(prod1$DAILYSALES/daily1[prod1$DOW+1,2])
colnames(basesales)<-c("ID","BASESALES")
fit1=lm(BASESALES ~ ID,as.data.frame(basesales))
...   # writing the fit results to R_OUTPUT not shown on the slide
)endofr';
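A query along the following lines could then invoke that script, with each PRODNO partition handled by its own R process across the MPP nodes. This is a sketch assuming Kognitio's external-script invocation syntax; the script name daily_sales_fit is a placeholder, not from the deck:

-- Placeholder script name; the inner select must match the receives(...) clause
select R_OUTPUT
from (external script daily_sales_fit
      from (select SALEDATE, DOW, ROW_ID, PRODNO, DAILYSALES
            from sales_history));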
Hadoop is lots of these (commodity servers and their local disks)
Hadoop is inherently disk-oriented
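As a hedged illustration of why that matters (dept_dim is a hypothetical dimension table, not from the deck): a HiveQL join plus aggregation like the one below typically compiles to several MapReduce stages, and each stage reads its input from and writes its intermediate results back to disk (HDFS), whereas an in-memory engine keeps the working set in RAM between steps.

-- Each underlying MapReduce stage spills intermediate output to HDFS
-- before the next stage can read it
SELECT d.dept, SUM(f.sales) AS total_sales
FROM sales_fact f
JOIN dept_dim d ON f.dept = d.dept
GROUP BY d.dept;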