--session 1:
hive --service hiveserver2    # start the HiveServer2 Thrift service (port 10000 by default)
netstat -nplt | grep 10000 shows the Thrift port bound to 127.0.0.1, so remote clients cannot connect. Change the bind host:
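The diagnosis above boils down to reading the Local Address column of the netstat output. A minimal sketch of that check (the sample line below stands in for real netstat output, since only the field layout matters):

```shell
# Extract the bind address for port 10000 from a "netstat -nplt"-style line.
# Field 4 is the Local Address (addr:port); split it on ":" to get the address.
sample="tcp   0   0 127.0.0.1:10000   0.0.0.0:*   LISTEN   12345/java"
bind_addr=$(printf '%s\n' "$sample" | awk '{split($4, a, ":"); print a[1]}')
echo "$bind_addr"   # 127.0.0.1 means only local clients can connect
```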
[hadoop@node1 conf]$ vi hive-site.xml
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>node1</value>
  <description>Bind host on which to run the HiveServer2 Thrift service.</description>
</property>
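A sketch of the follow-up steps, assuming the same environment as the session above (a running cluster is required, so these commands are illustrative rather than verifiable here): after saving hive-site.xml, HiveServer2 must be restarted for the new bind host to take effect, and the listening address re-checked.

```shell
# Restart HiveServer2 so the new bind host takes effect (adjust to how
# the service was started in your setup), then re-check the binding.
hive --service hiveserver2 &
netstat -nplt | grep 10000    # expect node1's address instead of 127.0.0.1
```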
Now test the connection locally. Hive ships a remote client, beeline, which is essentially a remote sqlplus; its output is the same as running hive directly:
--session 2:
[hadoop@node1 ~]$ hadoop fs -chmod 777 /tmp    # Hive's scratch dir lives under HDFS /tmp; the connecting user needs write access
[hadoop@node1 conf]$ beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://node1:10000/default
Connecting to jdbc:hive2://node1:10000/default
Enter username for jdbc:hive2://node1:10000/default: root
Enter password for jdbc:hive2://node1:10000/default:
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node1:10000/default> show tables;
+---------------+--+
| tab_name |
+---------------+--+
| t_ansi |
| t_dept_count |
| t_emp |
| t_utf |
+---------------+--+
4 rows selected (5.204 seconds)
0: jdbc:hive2://node1:10000/default> desc t_emp;
+-----------+------------+----------+--+
| col_name | data_type | comment |
+-----------+------------+----------+--+
| id | int | |
| name | string | |
| age | int | |
| dept | string | |
+-----------+------------+----------+--+
0: jdbc:hive2://node1:10000/default> select count(*) from t_emp;
INFO : Number of reduce tasks determined at compile time: 1
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
INFO : number of splits:1
INFO : Submitting tokens for job: job_1447681295674_0002
INFO : The url to track the job: http://node1:8088/proxy/application_1447681295674_0002/
INFO : Starting Job = job_1447681295674_0002, Tracking URL = http://node1:8088/proxy/application_1447681295674_0002/
INFO : Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job -kill job_1447681295674_0002
INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2015-11-17 09:23:01,010 Stage-1 map = 0%, reduce = 0%
INFO : 2015-11-17 09:24:01,912 Stage-1 map = 0%, reduce = 0%
INFO : 2015-11-17 09:25:02,665 Stage-1 map = 0%, reduce = 0%
INFO : 2015-11-17 09:26:11,228 Stage-1 map = 0%, reduce = 0%
INFO : 2015-11-17 09:26:27,671 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.17 sec
INFO : 2015-11-17 09:27:28,482 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 6.28 sec
INFO : 2015-11-17 09:28:23,379 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 8.22 sec
INFO : 2015-11-17 09:28:29,053 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 9.75 sec
INFO : MapReduce Total cumulative CPU time: 9 seconds 750 msec
INFO : Ended Job = job_1447681295674_0002
+------+--+
| _c0 |
+------+--+
| 3 |
+------+--+
1 row selected (451.259 seconds)
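The interactive session above can also be driven from the command line; beeline accepts the JDBC URL, user name, and query as options. A sketch assuming the same node1:10000 service and root user as in the session:

```shell
# Non-interactive equivalent of the beeline session above:
# -u = JDBC URL, -n = user name, -e = query to run.
url="jdbc:hive2://node1:10000/default"
echo "$url"
beeline -u "$url" -n root -e "select count(*) from t_emp;"
```

This form is convenient for scripting, since the query result goes to stdout and beeline exits when the statement finishes.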
Installing hive2 and beeline, step by step