(3) Big Data Environment Preparation: Installing Hive (depends on Hadoop)

[b]1. Unpack the archive[/b]
# tar -zxvf hive-0.9.0.tar.gz
[b]2. Rename the directory[/b]
# mv hive-0.9.0 hive
[b]3. Configure environment variables: edit the global profile /etc/profile and add /opt/hive/bin to PATH[/b]
JAVA_HOME=/opt/jdk1.6.0_24
HADOOP_HOME=/opt/hadoop
HBASE_HOME=/opt/hbase
HIVE_HOME=/opt/hive
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$PATH
export JAVA_HOME HADOOP_HOME HBASE_HOME HIVE_HOME PATH
--Re-log in (or run source /etc/profile) so the new variables take effect
# su -
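A quick sanity check (not part of the original transcript) that the variables are visible in the new shell:
# echo $HIVE_HOME
/opt/hive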
[b]4. Test run to verify the installation[/b]
# hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201509250619_148272494.txt
hive> show tables;
FAILED: Error in metadata: MetaException(message:Got exception: java.net.ConnectException Call to hadoop0/192.168.46.129:9000 failed on connection exception: java.net.ConnectException: Connection refused)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
--Fix: Hive stores its data in HDFS, so make sure Hadoop is running first
# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out
localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
--At this point the minimal Hive setup is complete
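As an optional check (not in the original transcript), jps should now list the Hadoop 1.x daemons started above; the process IDs below are illustrative only:
# jps
2482 NameNode
2586 DataNode
2701 SecondaryNameNode
2790 JobTracker
2903 TaskTracker
2998 Jps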

[b]5. Create a table[/b]
hive> show tables;
OK
Time taken: 5.619 seconds
hive> create table stu(name String,age int);
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.ipc.RemoteException
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /user/hive/warehouse/stu.
Name node is in safe mode.
The reported blocks 18 has reached the threshold 0.9990 of total blocks 17. Safe mode will be turned off automatically in 15 seconds.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2204)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2178)
at org.apache.hadoop.hdfs.server.namenode.NameNode.mkdirs(NameNode.java:857)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
--Fix: the NameNode is still in safe mode shortly after startup; the message above even says it will leave safe mode automatically in 15 seconds. Simply wait and retry, or force it out of safe mode:
# hadoop dfsadmin -safemode leave
hive> create table stu(name String,age int);
OK
Time taken: 0.229 seconds
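Optional check (not in the original transcript): the new table appears as a directory under Hive's warehouse path in HDFS; the listing details below are illustrative:
# hadoop fs -ls /user/hive/warehouse
drwxr-xr-x - root supergroup 0 2015-09-25 06:30 /user/hive/warehouse/stu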
[b]6. Insert data: this version of Hive does not support the INSERT statement[/b]
hive> insert into stu values('MengMeng',24);
FAILED: Parse Error: line 1:12 mismatched input 'stu' expecting TABLE near 'into' in insert clause
hive> show tables;
OK
stu
Time taken: 0.078 seconds
hive> desc stu;
OK
name string
age int
Time taken: 0.255 seconds


--Fix: Hive 0.9 does not support row-level INSERT; load the data from a file with LOAD DATA instead
hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu;
Copying data from file:/opt/stu.txt
Copying file: file:/opt/stu.txt
Loading data to table default.stu
Deleted hdfs://hadoop0:9000/user/hive/warehouse/stu
OK
Time taken: 0.643 seconds
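The contents of /opt/stu.txt are not shown in the original; judging from the query output in the next step, it presumably held one record per line, something like:
JieJie 26
MM 24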
[b]7. Query the data just loaded[/b]
hive> select name, age from stu;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201509250620_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509250620_0001
Kill Command = /opt/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=hadoop0:9001 -kill job_201509250620_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-09-25 06:37:55,535 Stage-1 map = 0%, reduce = 0%
2015-09-25 06:37:58,565 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec
2015-09-25 06:37:59,595 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 0.59 sec
2015-09-25 06:38:00,647 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 0.59 sec
MapReduce Total cumulative CPU time: 590 msec
Ended Job = job_201509250620_0001
MapReduce Jobs Launched:
Job 0: Map: 1 Cumulative CPU: 0.59 sec HDFS Read: 221 HDFS Write: 22 SUCCESS
Total MapReduce CPU Time Spent: 590 msec
OK
--The query results are displayed
JieJie 26 NULL
MM 24 NULL
Time taken: 12.812 seconds
Open question: why is there a NULL value? To be investigated next time.
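Editor's note, a likely explanation rather than a verified one: the table was created without specifying a field delimiter, so Hive falls back to its default \001 (Ctrl-A). If stu.txt separates fields with a tab or space instead, the whole line lands in the name column and age parses as NULL, which matches the output above. Declaring the delimiter at table-creation time should fix it; a minimal sketch, assuming the file is tab-separated:
hive> drop table stu;
hive> create table stu(name String, age int) row format delimited fields terminated by '\t';
hive> LOAD DATA LOCAL INPATH '/opt/stu.txt' OVERWRITE INTO TABLE stu;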