1. Add the hive-site.xml content to Spark's conf directory; only the metastore connection information is needed:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://master-centos:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>
Then distribute the file to every node in the cluster.
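The distribution step can be sketched with scp, assuming passwordless SSH is set up; the worker hostnames and SPARK_HOME path below are assumptions, not taken from this document:

```shell
# Distribute hive-site.xml to each worker's Spark conf directory.
# SPARK_HOME and the hostnames (slave1, slave2) are assumptions;
# adjust them to match your cluster.
SPARK_HOME=/opt/spark
for host in slave1 slave2; do
  scp "$SPARK_HOME/conf/hive-site.xml" "$host:$SPARK_HOME/conf/"
done
```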
2. If the Hive metastore is backed by MySQL, place mysql-connector-java-5.1.25-bin.jar under spark/lib.
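Copying the JDBC driver into place might look like the following; the jar's source path and SPARK_HOME are assumptions for illustration:

```shell
# Copy the MySQL JDBC driver into Spark's library directory so the
# metastore client can reach the MySQL-backed Hive metastore.
# SPARK_HOME and the jar's current location are assumptions.
SPARK_HOME=/opt/spark
cp /tmp/mysql-connector-java-5.1.25-bin.jar "$SPARK_HOME/lib/"
```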
3. Edit the spark-defaults.conf configuration file:

spark-defaults.conf
spark.master            spark://192.168.130.140:7077
spark.driver.memory     512m
spark.executor.memory   512m
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://192.168.130.140:8020/user/spark/logs
(the event-log directory must be created in HDFS beforehand)
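The event-log directory can be created, and the Hive metastore connection sanity-checked, roughly as follows; this is a sketch under the addresses configured above:

```shell
# Create the Spark event-log directory in HDFS before starting Spark
# (spark.eventLog.dir must exist or event logging will fail at startup).
hdfs dfs -mkdir -p hdfs://192.168.130.140:8020/user/spark/logs

# Quick sanity check: if the metastore is reachable, spark-sql should
# list the databases known to Hive.
spark-sql -e "SHOW DATABASES;"
```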