Spark integration with Hive
1. Hive's class libraries must be present on every Spark worker node.
2. Copy three files to spark/conf: core-site.xml (hdfs) + hdfs-site.xml (hdfs) + hive-site.xml (hive).
If spark-env.sh already configures the Hadoop path, core-site.xml and hdfs-site.xml need not be copied.
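A minimal sketch of these copies, assuming Hadoop, Hive, and Spark are all installed under /soft (the /soft/hadoop path is an assumption; adjust to your layout):
$>cp /soft/hadoop/etc/hadoop/core-site.xml /soft/spark/conf/
$>cp /soft/hadoop/etc/hadoop/hdfs-site.xml /soft/spark/conf/
$>cp /soft/hive/conf/hive-site.xml /soft/spark/conf/
#alternatively, skip the first two copies by pointing spark-env.sh at the Hadoop conf dir:
#export HADOOP_CONF_DIR=/soft/hadoop/etc/hadoop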
3. Copy the MySQL JDBC driver to /soft/spark/jars.
...
The driver jar can be found among Hive's jar files.
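For example, assuming the connector already sits in hive/lib (the exact version in the file name will vary):
$>cp /soft/hive/lib/mysql-connector-java-*.jar /soft/spark/jars/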
4. Start spark-shell, specifying the launch mode:
spark-shell --master local[4]
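Once the shell is up, a quick way to confirm that Hive support was picked up (i.e. hive-site.xml was found) is to list the databases:
$scala>spark.sql("show databases").show()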
$scala>spark.sql("""create table mydb.tt(id int, name string, age int)
        row format delimited fields terminated by ','
        lines terminated by '\n'
        stored as textfile""")
//load data into the hive table
$scala>spark.sql("load data local inpath 'file:///home/centos/data.txt' into table mydb.tt");
spark-sql reading a Hive external table mapped onto HBase data
1. Copy the following jar files to ${spark_home}/jars (before Spark 2.0: ${spark_home}/lib); a copy sketch follows the list:
From HBase's jars:
- hbase-protocol-1.2.0.jar
- hbase-client-1.2.0.jar
- hbase-common-1.2.0.jar
- hbase-server-1.2.0.jar
From Hive's jars:
- hive-hbase-handler-2.1.0.jar
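A minimal sketch of these copies, assuming HBase and Hive are installed under /soft:
$>cp /soft/hbase/lib/hbase-{protocol,client,common,server}-1.2.0.jar /soft/spark/jars/
$>cp /soft/hive/lib/hive-hbase-handler-2.1.0.jar /soft/spark/jars/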
2. Copy the metrics jar files from hive/lib to spark/jars:
$>cd /soft/hive/lib
$>ls | grep metrics | xargs -I{} cp {} /soft/spark/jars
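Verify the jars landed (the grep pattern mirrors the copy above):
$>ls /soft/spark/jars | grep metrics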
3. Start spark-shell in local mode to test:
$>spark-shell --master local[4]
$scala>spark.sql("select * from mydb.ext_calllogs_in_hbase").show();
$scala>spark.sql("select count(*) ,substr(calltime,1,6) from ext_calllogs_in_hbase where caller = '15778423030' and substr(calltime,1,4) == '2017' group by substr(calltime,1,6)").show();