Java executing Hive: error from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Hive 2 is installed. When running Hive from Java, a plain select * from table works fine, but any statement that triggers a MapReduce job throws this error.
Running the same statements through a beeline connection showed it was a version mismatch; switching to 1.2 made it work.
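For context, a minimal sketch of the kind of Java JDBC calls involved. The driver class and JDBC calls are standard Hive JDBC; the host, port, database, user, and table names are placeholders, not values from this setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcDemo {
    public static void main(String[] args) throws Exception {
        // Hive JDBC driver (hive-jdbc must be on the classpath)
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Placeholder host/port/database; point this at your HiveServer2 instance
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hs2-host:10000/default", "abc", "");
             Statement stmt = conn.createStatement()) {

            // Plain fetch: no MapReduce job is launched, so this worked
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 10")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }

            // An aggregation compiles to a MapReduce job; this is where the
            // MapRedTask error showed up with the mismatched client version
            try (ResultSet rs = stmt.executeQuery("SELECT count(1) FROM some_table")) {
                while (rs.next()) {
                    System.out.println("rows = " + rs.getLong(1));
                }
            }
        }
    }
}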
org.apache.hive.jdbc.ZooKeeperHiveClientException: Unable to read HiveServer2 uri from ZooKeeper
I spent a long time debugging this error. Eventually I noticed by chance that the hive-site.xml property hive.server2.thrift.bind.host was set to 0.0.0.0, and the address registered in ZooKeeper was likewise 0.0.0.0:10000.
Changing that property to the host's actual IP and restarting HiveServer2 fixed it.
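To verify what HiveServer2 has registered, you can open the ZooKeeper CLI (zkCli.sh) and run ls /hiveserver2_zk (assuming the namespace configured below); after the fix the registered entries should show the real host:port instead of 0.0.0.0:10000.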
HA configuration for registering HiveServer2 with ZooKeeper (reference):
http://lxw1234.com/archives/2016/05/675.htm
<property>
  <name>hive.server2.support.dynamic.service.discovery</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.zookeeper.namespace</name>
  <value>hiveserver2_zk</value>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>zkNode1:2181,zkNode2:2181,zkNode3:2181</value>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>0.0.0.0</value> <!-- change this to the host's actual IP -->
</property>
<property>
  <name>hive.server2.thrift.port</name>
  <value>10001</value> <!-- the two HiveServer2 instances should use the same port -->
</property>
<property>
  <name>hadoop.proxyuser.abc.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.abc.hosts</name>
  <value>*</value>
</property>
After changing the hadoop.proxyuser.* settings (these live in Hadoop's core-site.xml), refresh them on the ResourceManager and NameNode so they take effect without a restart:
yarn rmadmin -refreshSuperUserGroupsConfiguration
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
Connecting with beeline: !connect jdbc:hive2://yun01:2181,yun02:2181,yun03:2181,yun04:2181,yun05:2181/weather_eva_db;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk abc ""
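The same service-discovery URL can also be used from Java JDBC; a minimal sketch reusing the quorum, namespace, database, and user from the beeline command above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveZkDiscoveryDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Same ZooKeeper quorum, namespace, and database as the beeline example
        String url = "jdbc:hive2://yun01:2181,yun02:2181,yun03:2181,yun04:2181,yun05:2181/"
                + "weather_eva_db;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2_zk";

        try (Connection conn = DriverManager.getConnection(url, "abc", "");
             Statement stmt = conn.createStatement();
             // Trivial probe query just to confirm the discovered HiveServer2 answers
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}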