Setting up HiveServer2 and beeline, step by step


--session 1:
hive --service hiveserver2
netstat -nplt|grep 10000   # by default the service binds to 127.0.0.1, so remote clients cannot connect
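The bind address is the fourth field of the netstat line. A minimal sketch of extracting it (the line is hardcoded here, since producing real output needs a running HiveServer2):

```shell
# Example netstat line for HiveServer2 (hardcoded sample; on a live node
# you would pipe the real output of `netstat -nplt | grep 10000`).
line="tcp  0  0 127.0.0.1:10000  0.0.0.0:*  LISTEN  1234/java"

# Field 4 is the Local Address; strip the port to get the bind address.
addr=$(echo "$line" | awk '{print $4}' | cut -d: -f1)
echo "$addr"   # 127.0.0.1 means only local clients can reach port 10000
```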

[hadoop@node1 conf]$ vi hive-site.xml 
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>node1</value>
    <description>Bind host on which to run the HiveServer2 Thrift service.</description>
  </property>
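Related settings live in the same file. For example, if port 10000 is already taken, HiveServer2's Thrift port can be changed alongside the bind host (10000 shown below is simply the default):

```xml
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
    <description>Port number of HiveServer2 Thrift interface.</description>
  </property>
```

Note that HiveServer2 must be restarted for changes to hive-site.xml to take effect.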


Now test locally whether the connection works. Hive ships with a remote client, beeline, which is roughly the remote equivalent of sqlplus; its output looks the same as running hive directly:

--session 2:
[hadoop@node1 ~]$ hadoop fs -chmod 777 /tmp   # the HDFS scratch directory used by Hive must be writable for the connecting user
[hadoop@node1 conf]$ beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://node1:10000/default
Connecting to jdbc:hive2://node1:10000/default
Enter username for jdbc:hive2://node1:10000/default: root
Enter password for jdbc:hive2://node1:10000/default: 
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://node1:10000/default> show tables;
+---------------+--+
|   tab_name    |
+---------------+--+
| t_ansi        |
| t_dept_count  |
| t_emp         |
| t_utf         |
+---------------+--+
4 rows selected (5.204 seconds)
0: jdbc:hive2://node1:10000/default> desc t_emp;
+-----------+------------+----------+--+
| col_name  | data_type  | comment  |
+-----------+------------+----------+--+
| id        | int        |          |
| name      | string     |          |
| age       | int        |          |
| dept      | string     |          |
+-----------+------------+----------+--+
0: jdbc:hive2://node1:10000/default> select count(*) from t_emp;
INFO  : Number of reduce tasks determined at compile time: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1447681295674_0002
INFO  : The url to track the job: http://node1:8088/proxy/application_1447681295674_0002/
INFO  : Starting Job = job_1447681295674_0002, Tracking URL = http://node1:8088/proxy/application_1447681295674_0002/
INFO  : Kill Command = /home/hadoop/hadoop-2.7.1/bin/hadoop job  -kill job_1447681295674_0002
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2015-11-17 09:23:01,010 Stage-1 map = 0%,  reduce = 0%
INFO  : 2015-11-17 09:24:01,912 Stage-1 map = 0%,  reduce = 0%
INFO  : 2015-11-17 09:25:02,665 Stage-1 map = 0%,  reduce = 0%
INFO  : 2015-11-17 09:26:11,228 Stage-1 map = 0%,  reduce = 0%
INFO  : 2015-11-17 09:26:27,671 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.17 sec
INFO  : 2015-11-17 09:27:28,482 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.28 sec
INFO  : 2015-11-17 09:28:23,379 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 8.22 sec
INFO  : 2015-11-17 09:28:29,053 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 9.75 sec
INFO  : MapReduce Total cumulative CPU time: 9 seconds 750 msec
INFO  : Ended Job = job_1447681295674_0002
+------+--+
| _c0  |
+------+--+
| 3    |
+------+--+
1 row selected (451.259 seconds)
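The interactive session above can also be driven non-interactively: beeline accepts the JDBC URL, user name, and a query directly on the command line via -u, -n, and -e. The sketch below only assembles the command string, since actually running it requires the live HiveServer2 from session 1:

```shell
# Non-interactive equivalent of the session above (assembled as a string
# here; run it on a node that can reach node1:10000).
cmd='beeline -u jdbc:hive2://node1:10000/default -n root -e "select count(*) from t_emp;"'
echo "$cmd"
```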


