In the previous article we covered accessing MySQL from Spark; in this section we will look at accessing Hive from Spark.
1 System, software, and prerequisites
- A CentOS 7 64-bit workstation. The author's machine has IP 192.168.100.200 and hostname danji; adjust these to your own environment.
- Spark access to MySQL has already been set up:
https://www.jianshu.com/p/2b4471c03fea
- To rule out permission issues, all operations are performed as root.
2 Steps
- Copy Hive's configuration file into Spark's conf directory:
cp /root/apache-hive-0.14.0-bin/conf/hive-site.xml /root/spark-2.2.1-bin-hadoop2.7/conf
- Edit /root/spark-2.2.1-bin-hadoop2.7/conf/hive-site.xml
Add the following:
<property>
<name>hive.metastore.uris</name>
<value>thrift://danji:9083</value>
<description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
- Edit /root/spark-2.2.1-bin-hadoop2.7/conf/spark-env.sh
Add the following:
export SPARK_DIST_CLASSPATH=$(/root/hadoop-2.5.2/bin/hadoop classpath)
export JAVA_HOME=/root/jdk1.8.0_152
export SPARK_HOME=/root/spark-2.2.1-bin-hadoop2.7
export SPARK_MASTER_IP=danji
export SPARK_EXECUTOR_MEMORY=1G
export SCALA_HOME=/root/scala-2.12.2
export HADOOP_HOME=/root/hadoop-2.5.2
export HIVE_CONF_DIR=/root/apache-hive-0.14.0-bin/conf
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/root/spark-2.2.1-bin-hadoop2.7/bin/mysql-connector-java-5.1.47.jar
- Restart Spark:
cd /root/spark-2.2.1-bin-hadoop2.7/sbin
./stop-all.sh
./start-all.sh
- Start Hive's metastore service (this command runs in the foreground, so use a separate terminal for the following steps):
cd /root/apache-hive-0.14.0-bin/bin
./hive --service metastore
- Connect with spark-shell:
cd /root/spark-2.2.1-bin-hadoop2.7/bin
./spark-shell
# Run the following Scala statement to list the tables
scala> spark.sql("show tables").show()
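Inside spark-shell, the `spark` session is created for you with Hive support already enabled. For a standalone application, the equivalent session must be built explicitly. The sketch below shows the general shape, assuming the hive-site.xml configured above is on the classpath; the app name and the table name `some_table` are placeholders, and the code needs a running metastore and Spark cluster to execute:

```scala
// Minimal sketch of a standalone Spark application that talks to Hive,
// mirroring what spark-shell sets up implicitly.
import org.apache.spark.sql.SparkSession

object SparkHiveDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-hive-demo")     // arbitrary application name
      .enableHiveSupport()            // picks up hive-site.xml and connects to the metastore
      .getOrCreate()

    // Same as the spark-shell step above: list tables in the current database
    spark.sql("show tables").show()

    // An ordinary query against a Hive table (table name is hypothetical)
    spark.sql("select * from some_table limit 10").show()

    spark.stop()
  }
}
```

`enableHiveSupport()` is what makes the session read hive.metastore.uris from hive-site.xml; without it, Spark falls back to its built-in catalog and will not see the Hive tables.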
- Alternatively, connect with spark-sql:
cd /root/spark-2.2.1-bin-hadoop2.7/bin
./spark-sql
# Run the following SQL statement to list the tables
spark-sql> show tables;
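Besides the interactive prompt, the spark-sql CLI also accepts one-off statements and script files, using the same options as the hive CLI. A sketch, assuming the directory layout used in this article (the script path /root/queries.sql is hypothetical):

```shell
# Run a single statement without entering the interactive prompt
cd /root/spark-2.2.1-bin-hadoop2.7/bin
./spark-sql -e "show tables"

# Run all statements from a script file (path is a placeholder)
./spark-sql -f /root/queries.sql
```

This form is convenient for cron jobs or wrapping queries in larger shell scripts.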
This completes the process of accessing Hive from Spark.