I. Installing and Configuring Hive
1. Make sure your Hadoop cluster is working correctly. In my cluster, hadoop1 and hadoop3 are the NameNodes (NN) and hadoop2 is the ResourceManager (RM).
2. Stop the cluster (HDFS and YARN; ZooKeeper can be left running):
hadoop1: sbin/stop-dfs.sh
hadoop2: sbin/stop-yarn.sh
3. On every node of the Hadoop cluster, add the following to core-site.xml:
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
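These two properties let the `root` user impersonate other users from any host and any group, which HiveServer2 relies on. As a quick sanity check, the snippet can be parsed programmatically; a minimal Python sketch (the XML is inlined here rather than read from a real core-site.xml):

```python
import xml.etree.ElementTree as ET

# The proxy-user snippet from above, wrapped in a <configuration> root
# so it parses as a standalone document.
SNIPPET = """
<configuration>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
"""

def load_properties(xml_text):
    """Parse Hadoop-style <property><name>/<value> pairs into a dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = load_properties(SNIPPET)
# "*" means root may impersonate users from any host and any group.
assert props["hadoop.proxyuser.root.hosts"] == "*"
assert props["hadoop.proxyuser.root.groups"] == "*"
```

The same `load_properties` helper works on any Hadoop `*-site.xml`, since they all share the `<configuration>/<property>` layout.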
The complete core-site.xml is then:
<configuration>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/programs/hadoop-2.6.0/data/tmp</value>
  </property>
</configuration>
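Since the same edit has to be made on every node, the repetitive `<property>` blocks are also easy to generate rather than hand-type; a minimal sketch (the names and values are exactly the ones from the configuration above):

```python
import xml.etree.ElementTree as ET

def build_configuration(props):
    """Render a dict of name->value pairs as a Hadoop-style
    <configuration> document."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

# The seven entries of the merged core-site.xml above.
core_site = build_configuration({
    "hadoop.proxyuser.root.hosts": "*",
    "hadoop.proxyuser.root.groups": "*",
    "dfs.ha.fencing.methods": "sshfence",
    "dfs.ha.fencing.ssh.private-key-files": "/root/.ssh/id_rsa",
    "fs.defaultFS": "hdfs://ns1",
    "ha.zookeeper.quorum": "hadoop1:2181,hadoop2:2181,hadoop3:2181",
    "hadoop.tmp.dir": "/opt/programs/hadoop-2.6.0/data/tmp",
})

# fs.defaultFS points at the logical nameservice ns1, not a single
# NameNode, so clients fail over transparently between hadoop1 and hadoop3.
assert "<value>hdfs://ns1</value>" in core_site
```

The generated file can then be copied to each node's Hadoop conf directory (e.g. with scp) instead of editing seven blocks by hand on every machine.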
4. On every node of the cluster, add the following to yarn-site.xml:
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/opt/programs/hadoop-2.6.0/data/tmp</value>
</property>
The complete yarn-site.xml is as follows (the Spark-related settings are also in there):
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_sh