Deploying spark-sql to Interact with Hive

spark-sql Deployment

Versions

Hadoop-2.5.0-cdh5.3.2 

Hive-0.13.1-cdh5.3.2

Spark-1.5.1

Using node CNSH001 as the example:

Spark master on CNSH001: spark://CNSH001:7077

Spark HistoryServer on CNSH001: CNSH001:8032

Spark eventLog on HDFS: hdfs://testenv/spark/eventLog

Step-by-step Guide

  

1. Copy $HIVE_HOME/conf/hive-site.xml and hive-log4j.properties to the $SPARK_HOME/conf/ directory, as sketched below.
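A minimal sketch of this step (plain cp, run on the node where both Hive and Spark are installed; paths are the ones used throughout this guide):

cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/
cp $HIVE_HOME/conf/hive-log4j.properties $SPARK_HOME/conf/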

  

2. Edit spark-defaults.conf

spark.eventLog.enabled true
spark.eventLog.dir hdfs://testenv/spark/eventLog
spark.eventLog.compress true
spark.yarn.historyServer.address=CNSH001:8032
spark.sql.hive.metastore.version=0.13.1
spark.port.maxRetries=100

spark.sql.hive.metastore.jars=/opt/apps/hadoop/share/hadoop/mapreduce/*:/opt/apps/hadoop/share/hadoop/mapreduce/lib/*:/opt/apps/hadoop/share/hadoop/common/*:/opt/apps/hadoop/share/hadoop/common/lib/*:/opt/apps/hadoop/share/hadoop/hdfs/*:/opt/apps/hadoop/share/hadoop/hdfs/lib/*:/opt/apps/hadoop/share/hadoop/yarn/*:/opt/apps/hadoop/share/hadoop/yarn/lib/*:/opt/apps/hive/lib/*:/opt/apps/spark/lib/*
spark.driver.extraLibraryPath=/opt/apps/hadoop/share/hadoop/mapreduce/*:/opt/apps/hadoop/share/hadoop/mapreduce/lib/*:/opt/apps/hadoop/share/hadoop/common/*:/opt/apps/hadoop/share/hadoop/common/lib/*:/opt/apps/hadoop/share/hadoop/hdfs/*:/opt/apps/hadoop/share/hadoop/hdfs/lib/*:/opt/apps/hadoop/share/hadoop/yarn/*:/opt/apps/hadoop/share/hadoop/yarn/lib/*:/opt/apps/hive/lib/*:/opt/apps/spark/lib/*
spark.executor.extraLibraryPath=/opt/apps/hadoop/share/hadoop/mapreduce/*:/opt/apps/hadoop/share/hadoop/mapreduce/lib/*:/opt/apps/hadoop/share/hadoop/common/*:/opt/apps/hadoop/share/hadoop/common/lib/*:/opt/apps/hadoop/share/hadoop/hdfs/*:/opt/apps/hadoop/share/hadoop/hdfs/lib/*:/opt/apps/hadoop/share/hadoop/yarn/*:/opt/apps/hadoop/share/hadoop/yarn/lib/*:/opt/apps/hive/lib/*:/opt/apps/spark/lib/*
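Note: spark.driver.extraLibraryPath and spark.executor.extraLibraryPath set the native library search path; jar lists like the ones above are normally supplied via spark.driver.extraClassPath and spark.executor.extraClassPath, so verify which property your deployment actually relies on. Also, Spark expects the event log directory to exist before the first application writes to it; a minimal sketch using the path from the config above:

hdfs dfs -mkdir -p hdfs://testenv/spark/eventLog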


 

3. Edit spark-env.sh

# set Hadoop paths
export HDFS_YARN_LOGS_DIR=/data1/hadooplogs
export HADOOP_PREFIX=/opt/apps/hadoop
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_MAPRED_PID_DIR=$HADOOP_HOME/pids
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LOG_DIR=$HDFS_YARN_LOGS_DIR/logs
export HADOOP_PID_DIR=$HADOOP_HOME/pids
export HADOOP_SECURE_DN_PID_DIR=$HADOOP_PID_DIR
export YARN_HOME=$HADOOP_HOME
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_LOG_DIR=$HDFS_YARN_LOGS_DIR/logs
export YARN_PID_DIR=$HADOOP_HOME/pids
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_CONF_DIR:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*:$HADOOP_COMMON_HOME/share/hadoop/common/*:$HADOOP_COMMON_HOME/share/hadoop/common/lib/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/*:$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*:$HADOOP_YARN_HOME/share/hadoop/yarn/*:$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
export CLASSPATH=$HADOOP_CLASSPATH:$CLASSPATH
### sparkSQL and hive
export HIVE_HOME=/opt/apps/hive
export SPARK_CLASSPATH=$SPARK_HOME/lib:$HIVE_HOME/lib:$HADOOP_CLASSPATH
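A quick sanity check after editing (a sketch; SPARK_HOME=/opt/apps/spark is an assumption matching the paths used elsewhere in this guide):

export SPARK_HOME=/opt/apps/spark   # assumed install location
source $SPARK_HOME/conf/spark-env.sh
echo $SPARK_CLASSPATH               # should list the Spark, Hive, and Hadoop entries above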

4. Fixing "[ERROR] Terminal initialization failed; falling back to unsupported" (java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected)

Delete /opt/apps/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar, add export HADOOP_USER_CLASSPATH_FIRST=true to /etc/profile, then source it; see the sketch after the reference link below.

See: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark:+Getting+Started
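A minimal sketch of the fix (run as a user with write access to both paths; the jar path comes from the error described above):

rm /opt/apps/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar      # drop the old jline that YARN ships
echo 'export HADOOP_USER_CLASSPATH_FIRST=true' >> /etc/profile  # let user-supplied jars win on the Hadoop classpath
source /etc/profile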

 

5. Using spark-shell

// Wrap the existing SparkContext in a HiveContext to query the Hive metastore
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
// Run a HiveQL statement and print the first two rows of table `test`
sqlContext.sql("select * from test limit 2").collect().foreach(println)

6. Launching spark-sql in local mode

Inside the CLI you can type HiveQL statements directly and execute them, as in the sketch below.
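A sketch of a local session (the `test` table is the same example used in step 5; local[2] is an illustrative choice):

$SPARK_HOME/bin/spark-sql --master local[2]
spark-sql> show tables;
spark-sql> select * from test limit 2;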


7. Launching spark-sql against the standalone cluster

spark-sql --master spark://CNSH001:7077

(The master URL must point to the ALIVE Spark master node.)
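Standard spark-submit resource flags pass through, and -e runs a single statement non-interactively as in the Hive CLI; a sketch (resource sizes are illustrative):

spark-sql --master spark://CNSH001:7077 --executor-memory 2g --total-executor-cores 4 -e "select * from test limit 2"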


8. Launching spark-sql in Spark-on-YARN mode

spark-sql --master yarn-client
or
spark-sql --master yarn-cluster (not supported yet)
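In yarn-client mode the usual YARN resource flags apply; a sketch (executor count and memory are illustrative):

spark-sql --master yarn-client --num-executors 4 --executor-memory 2g -e "select * from test limit 2"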





 
