
Hive Principle Introduction & Shark Introduction

zhuguangbin

July 08, 2013

Transcript

  1. What we'll talk about today
     • What is Hive?
     • Why we use Hive
     • Hive implementation principles
     • Our work on Hive
     • Next steps
  2. Why Hive users should know the principles
     • How does Hive generate MR jobs?
     • How many MR jobs does Hive generate?
     • How to optimize HQL to make queries faster?
     • How to find the root cause when a query fails?
     • Why are Hive jobs so slow?
  3. What is Hive?
     • A data warehouse
     • A top-level Apache project, originally developed by Facebook
     • An offline batch-processing system on MapReduce
     • HQL: a SQL-like query language that reduces the difficulty of MapReduce programming
  4. What Hive is NOT
     • Not OLTP
     • Not suitable for OLAP (not "online")
     • No extensive writes
     • Not a real RDBMS
     • Not real-time analytics: queries are batch, because of MapReduce
  5. Why do we use Hive?
     • What do we actually need? A data warehouse for analytic jobs and reporting.
     • If not Hive, what else? GreenPlum, Teradata, Oracle Exadata, Sybase IQ, AsterData, Vertica, InfoBright ……
  6. EDW & Hive comparison
     • EDW: expensive; low scalability
     • Hive: free; SQL; high scalability; fault-tolerant; open source
  7. Hive Data Model
     • Hive is an abstract table-like model on HDFS
     • Example:
       USE tmp;
       CREATE TABLE page_view(viewTime INT, userid BIGINT,
            page_url STRING, referrer_url STRING,
            ip STRING COMMENT 'IP Address of the User')
       COMMENT 'This is the page view table'
       PARTITIONED BY(dt STRING, country STRING)
       CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
       ROW FORMAT DELIMITED
            FIELDS TERMINATED BY '\001'
            COLLECTION ITEMS TERMINATED BY '\002'
            MAP KEYS TERMINATED BY '\003'
       STORED AS SEQUENCEFILE;
  8. Hive Data Model (cont.): the example on HDFS
     • database tmp → /user/hive/warehouse/tmp.db/
     • table page_view → /user/hive/warehouse/tmp.db/page_view/
     • partition dt='2013-07-07', country='CN' → /user/hive/warehouse/tmp.db/page_view/dt=2013-07-07/country=CN/
     • bucket part-00000 → /user/hive/warehouse/tmp.db/page_view/dt=2013-07-07/country=CN/part-00000/
     • row records (in /user/hive/warehouse/tmp.db/page_view/dt=2013-07-07/country=CN/part-00000/a.txt):
       1373126400 1 http://www.dianping.com http://t.dianping.com 202.87.12.206
     • row: STORED AS SEQUENCEFILE, so SequenceFileRecordReader reads each line as a row
     • field: ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' COLLECTION ITEMS TERMINATED BY '\002' MAP KEYS TERMINATED BY '\003', so LazySimpleSerDe inspects each field value of a row
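     To make this layout concrete, here is a minimal sketch of loading data into one partition of page_view. The local file name is hypothetical, and since the table is STORED AS SEQUENCEFILE the file must already be in that format (LOAD DATA does not convert it):
       -- hypothetical local file, already a sequence file
       LOAD DATA LOCAL INPATH '/tmp/pv_2013-07-07_CN.seq'
       INTO TABLE page_view PARTITION (dt='2013-07-07', country='CN');
       -- the file then lands under
       -- /user/hive/warehouse/tmp.db/page_view/dt=2013-07-07/country=CN/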
  9. Hive Architecture Detail
     • User interface:
       – CLI: Hive command line interface
       – HWI: web interface
       – JDBC/ODBC server: JDBC/ODBC-compatible interface based on a Thrift server
     • Driver:
       – Query compiler: compiles HQL into a DAG QueryPlan
       – Optimizer: optimizes M/R tasks on the DAG QueryPlan
       – Execution engine: executes the QueryPlan (submits MR jobs, reads/writes HDFS)
     • Metastore: holds all the information about the tables and partitions in the warehouse
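     Since the metastore holds the table and partition information, standard HQL can surface it; a minimal sketch against the page_view table above:
       USE tmp;
       DESCRIBE FORMATTED page_view;   -- table-level metadata from the metastore
       SHOW PARTITIONS page_view;      -- partitions registered in the metastore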
  10. A QueryPlan is a list of Task DAGs
     • HQL:
       insert overwrite table rec_searched_keyword
       select t.userid as userid, t.cityid as cityid, t.keyword as keyword,
              tt.shopid as shopid, tt.count as count
       from keyword_refer_count tt
       inner join user_keyword_tfidf t
         on t.cityid = tt.cityid and tt.referer = t.keyword
       distribute by userid, cityid
       sort by userid, cityid, count desc;
     • Output QueryPlan:
       [Diagram: a single Task DAG. Stage-7 ConditionalTask is the root task; Stage-8 MapRedLocalTask is a child task of Stage-7, with one of the MapRedTasks as its backup task; the remaining stages are Stage-1, Stage-6, and Stage-2 MapRedTasks, Stage-0 MoveTask, and Stage-3 StatsTask.]
     • Note: this QueryPlan has only one Task DAG; when a plan has multiple DAGs, Hive can run them concurrently (the relevant settings are sketched below).
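     A minimal sketch of the settings that enable that concurrent execution (both are standard Hive settings; the thread count is illustrative):
       set hive.exec.parallel=true;              -- run independent stages concurrently
       set hive.exec.parallel.thread.number=8;   -- cap on concurrently running stages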
  11. Hive Tasks
     • MapRedTask: runs a MapReduce job on the cluster. Two submission modes:
       – viachild: spawns a separate local JVM to submit the job
       – jc.submitJob: submits via RPC to the JobTracker
     • MapredLocalTask: runs the MapReduce job locally
     • DDLTask: DDL operations, e.g. create database/table/index, drop database/table/index, alter database/table/index, etc.
     • FunctionTask: CREATE/DROP TEMPORARY FUNCTION statements
     • ExplainTask: explain [formatted|extended] <hql>, used to inspect the physical execution plan generated from the HQL
     • ConditionalTask: at runtime, chooses which branch of the execution plan to execute based on a condition. A typical example: joining table a with table b generates three plan branches: a reduce-side common join, a MapJoin with a as the small table, and a MapJoin with b as the small table. The parent of these three branches is a ConditionalTask, which picks the appropriate branch at execution time (the relevant switch is sketched after this list).
     • FetchTask: any query whose results are displayed on the terminal (e.g. select * from ...) ends with a FetchTask, which reads data from the given path into an ArrayList result set
     • MoveTask: as the name implies, moves data from a source directory to a target directory
     • CopyTask: as the name implies, copies data from a source directory to a target directory
     • BlockMergeTask: for RCFile, launches a map-only MapReduce job to merge small files. Usage: alter table tbl_name [partition ()] merge files
     • IndexMetadataChangeTask: when an index table changes, updates the index table's hive.index.basetbl.dfs.lastModifiedTime property
     • StatsTask: updates a Hive table's statistics (numFiles/numRows/numPartitions/totalSize/rawDataSize) and writes them to the metastore (currently HBase/JDBC are supported)
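     A minimal sketch of the switch behind that ConditionalTask: with automatic join conversion enabled, the compiler emits the common-join and MapJoin branches and picks one at runtime. Both settings are standard Hive settings of this era; the threshold value is illustrative:
       set hive.auto.convert.join=true;                -- emit the ConditionalTask branches
       set hive.mapjoin.smalltable.filesize=25000000;  -- "small table" cutoff in bytes (illustrative)
       select t.userid, tt.shopid
       from user_keyword_tfidf t
       join keyword_refer_count tt on t.cityid = tt.cityid;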
  12. Operator: the essential processing unit in Hive
     • Each Task is a tree of Operators
     • Example (the MapRedTask for the join query above):
       Map side:    TableScanOperator (t)  → ReduceSinkOperator
                    TableScanOperator (tt) → ReduceSinkOperator
       Reduce side: JoinOperator → SelectOperator → FileSinkOperator
  13. Hive Operators
     • Input-related:
       – TableScanOperator: scans Hive table data
       – SelectOperator: selects the output columns
       – FilterOperator: filters the input rows
     • Output-related:
       – TerminalOperator: an abstract class with no implemented or abstract methods; it only marks an Operator as an output. Three concrete implementations:
         • FileSinkOperator: builds the result data and writes it to files
         • HashTableSinkOperator: during a MapJoin, the MapRedLocalTask builds the small table into a HashTable, dumps it to a file, and uploads it to the DistributedCache
         • ReduceSinkOperator: on the map side, creates the <Key,Value> pairs to be sent to the reducers
       – LimitOperator: implements the limit clause, capping the number of rows in the result set
       – HashTableDummyOperator: a MapJoin optimization; details unclear
     • Computation-related:
       – CommonJoinOperator: abstract class for join operations; currently two families of implementations:
         • Map-side join: defined by AbstractMapJoinOperator, with two implementations: MapJoinOperator (MapJoin) and SMBMapJoinOperator (sorted-merge join); see the MAPJOIN sketch after this list
         • Reduce-side join: JoinOperator, which implements the reduce-side join logic
       – GroupByOperator: the group by clause
       – UnionOperator: the union clause
       – ScriptOperator: the using clause; executes a local shell script
       – UDTFOperator / LateralViewForwardOperator / LateralViewJoinOperator
     • Framework execution-flow related (these Operators have a null Type):
       – MapOperator: a member of ExecMapper (the Mapper implementation class of MapRedTask); in effect the parent Operator of the whole map-side Operator tree. Each map() invocation corresponds to one MapOperator.process() call
       – ForwardOperator: does no processing; just hands the current row to the next operator
       – CollectOperator: buffers rows emitted by other Operators into an ArrayList
       – ExtractOperator: extracts part of the current row; mainly used for complex data structures
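     As referenced above, a MAPJOIN hint forces the map-side plan, so the operator tree uses MapJoinOperator instead of the reduce-side JoinOperator; a minimal sketch using the tables from the earlier join query:
       select /*+ MAPJOIN(tt) */ t.userid, tt.shopid
       from user_keyword_tfidf t
       join keyword_refer_count tt on t.cityid = tt.cityid;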
  14. Use EXPLAIN to analyze the QueryPlan
     • Usage: explain [extended|formatted] <hql>
     • Output:
       – ABSTRACT SYNTAX TREE
       – STAGE DEPENDENCIES
       – STAGE PLANS
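     For example, running EXPLAIN on the join query from the QueryPlan slide prints the three sections above; a sketch:
       explain
       select t.userid, tt.shopid
       from user_keyword_tfidf t
       join keyword_refer_count tt on t.cityid = tt.cityid;
       -- output: ABSTRACT SYNTAX TREE, STAGE DEPENDENCIES (the Task DAG),
       -- and STAGE PLANS (the Operator tree of each stage)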
  15. Our work on Hive
     • Bugfix: https://github.com/dianping/hive/tree/cosmos-hadoop-0.9.0
     • Integration: ACL-Hive; Wormhole Hive Writer/Reader; HiveWeb
     • New feature: JDBCStorageHandler
     • Performance improvement: configuration; daily maintenance work
  16. Furthermore
     • Hive users complain that Hive runs slowly:
       – job submission is slow
       – MR jobs run slowly
     • Why?
       – HQL: complex jobs; ugly SQL
       – DW model: too many partitions; small files
       – Hive itself: rule-based compiler; MapReduce (disk IO / network IO, heartbeat, JVM launch time); SerDe (a few tuning knobs are sketched below)
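     A minimal sketch of settings that address the small-file and JVM-launch pain points above (standard Hive/Hadoop settings of this era; values are illustrative):
       set hive.merge.mapfiles=true;           -- merge small outputs of map-only jobs
       set hive.merge.mapredfiles=true;        -- merge small outputs of full MR jobs
       set mapred.job.reuse.jvm.num.tasks=8;   -- reuse JVMs to amortize launch time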
  17. Data processing models
     • Batch: data size TB~PB; periodic whole-dataset analytic jobs; large latency (minutes to hours). At DP: offline Hive jobs for reporting or large-size processing.
     • Ad hoc query: data size GB~TB; one-click queries; low latency (seconds to minutes).
     • Streaming: live data; event-driven; low latency (seconds). Not implemented at DP, but the most valuable.
     • Actually, we launch more ad hoc queries than reporting batch jobs, but Hive is too heavyweight for ad hoc queries.
  18. Shark introduction
     • A data analysis (warehouse) system that:
       – builds on Spark (a MapReduce-like engine with deterministic, idempotent tasks)
       – scales out and is fault-tolerant
       – supports low-latency, interactive queries through in-memory computation
       – supports both SQL and complex analytics such as machine learning
       – is compatible with Apache Hive (storage, SerDes, UDFs, types, metadata)
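     A minimal sketch of that in-memory computation, assuming the Shark 0.x convention that a table whose name ends in _cached is kept in cluster memory (the table name is illustrative):
       -- materialize a table in memory (the _cached suffix marks it as cached)
       CREATE TABLE dp_userprice_cached AS SELECT * FROM tmp.dp_userprice;
       -- subsequent queries are served from memory
       SELECT count(*) FROM dp_userprice_cached;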
  19. Our test result
     • Test environment:
       – Hive (Hadoop): 39 nodes; 12 map slots + 12 reduce slots per node; 460 map slots + 460 reduce slots in total
       – Shark (Spark): 3 nodes; 16 cores + 16 GB memory per node (48 cores + 48 GB memory in total); 3 cores + 2 GB memory per application
     [Diagram: the Hadoop cluster (NN, JT, and DN/TT on 39 nodes) beside the Spark cluster (Spark Master and 3 Spark Workers), queried through a Shark client.]
  20. Test 1
     • Query: select count(*) from tmp.dp_userprice;
     • Data size: 46,830,512 rows, 1.25 GB
     • Test result:
       – Hive: job_201306041727_223455, 57.736 seconds
       – Shark (disk): 13 seconds
       – Shark (cached): 5.215 seconds
  21. Test 2
     • Query: select cal_dt, train_id, source, count(1) from bi.dpmid_mb_deviceid where cal_dt >= '2013-06-28' and cal_dt <= '2013-07-03' group by cal_dt, train_id, source
     • Data size: 4.5 GB
     • Test result:
       – Hive: job_201306041727_204186, 2 min 2 sec
       – Shark (disk): 46.477 seconds
  22. Next steps
     • Migrate ad hoc queries to Shark
     • Hive: for report analysis, ETL, and large-size batch jobs; performance tuning
     • Shark: for ad hoc queries; HiveWeb integration