(4) Big Data Environment: Hive Command Operations (Part 1)

[b]1. Prepare a text file and start Hadoop[/b]
# cat /opt/test.txt
JieJie
MengMeng
NingNing
JingJing
FengJie
# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-namenode-hadoop0.out
localhost: starting datanode, logging to /opt/hadoop/libexec/../logs/hadoop-root-datanode-hadoop0.out
localhost: starting secondarynamenode, logging to /opt/hadoop/libexec/../logs/hadoop-root-secondarynamenode-hadoop0.out
starting jobtracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-jobtracker-hadoop0.out
localhost: starting tasktracker, logging to /opt/hadoop/libexec/../logs/hadoop-root-tasktracker-hadoop0.out
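Before moving on to Hive, it can be worth confirming that all five Hadoop 1.x daemons actually came up; a minimal check with the JDK's jps tool (process IDs will vary per host):
# jps
(expected to list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker)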
[b]2. Enter the Hive command line[/b]
# hive
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-0.9.0.jar!/hive-log4j.properties
Hive history file=/tmp/root/hive_job_log_root_201509252001_1674268419.txt
[b]3. Query the table created yesterday[/b]
hive> select * from stu;
OK
JieJie 26 NULL
MM 24 NULL
Time taken: 17.05 seconds
[b]4. Show databases[/b]
hive> show databases;
OK
default
Time taken: 0.237 seconds
[b]5. Create a database[/b]
hive> create database test;
OK
Time taken: 0.259 seconds
hive> show databases;
OK
default
test
Time taken: 0.119 seconds
[b]6. Use a database[/b]
hive> use test;
OK
Time taken: 0.03 seconds
[b]7. Create tables (storage formats)[/b]
TEXTFILE is the default format: data is stored uncompressed, so both disk usage and parsing overhead are high.
It can be combined with Gzip or Bzip2 (the codec is detected automatically and decompressed at query time), but Hive cannot split files stored this way, so the data cannot be processed in parallel.
SequenceFile is a binary file format provided by the Hadoop API; it is convenient to use, splittable, and compressible.
SequenceFile supports three compression options: NONE, RECORD, and BLOCK. RECORD compresses poorly, so BLOCK compression is generally recommended.
RCFile combines row and column storage. First, it partitions the data into row groups, guaranteeing that a whole record stays within one block, so reading a record never requires reading multiple blocks. Second, within each row group the data is stored column by column, which aids compression and fast column access.
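To make the Gzip point above concrete: a compressed file can be loaded into a TEXTFILE table unchanged, and Hive will decompress it during queries at the cost of input splitting. A minimal sketch, assuming a hypothetical gzipped copy of the sample file at /opt/test.txt.gz:
hive> create table test_gz(str STRING) STORED AS TEXTFILE;
hive> LOAD DATA LOCAL INPATH '/opt/test.txt.gz' INTO TABLE test_gz;
hive> select * from test_gz;   -- decompressed transparently, but read by a single mapper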
hive> create table test1(str STRING) STORED AS TEXTFILE;
OK
Time taken: 0.598 seconds
-- Load the data
hive> LOAD DATA LOCAL INPATH '/opt/test.txt' INTO TABLE test1;
Copying data from file:/opt/test.txt
Copying file: file:/opt/test.txt
Loading data to table test.test1
OK
Time taken: 1.657 seconds
hive> select * from test1;
OK
JieJie
MengMeng
NingNing
JingJing
FengJie
Time taken: 0.388 seconds
hive> select count(*) from test1;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201509252000_0001, Tracking URL = http://hadoop0:50030/jobdetails.jsp?jobid=job_201509252000_0001
Kill Command = /opt/hadoop/libexec/../bin/hadoop job -Dmapred.job.tracker=hadoop0:9001 -kill job_201509252000_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-09-25 20:09:55,796 Stage-1 map = 0%, reduce = 0%
2015-09-25 20:10:19,806 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.67 sec
2015-09-25 20:10:53,218 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.95 sec
2015-09-25 20:10:54,223 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 6.95 sec
MapReduce Total cumulative CPU time: 6 seconds 950 msec
Ended Job = job_201509252000_0001
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 Cumulative CPU: 6.95 sec HDFS Read: 258 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 950 msec
OK
5
Time taken: 77.515 seconds
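The hints printed in the job log above are real settings. A plain count(*) compiles to a single reducer (as the log shows), but for a query with a GROUP BY the reducer count could be steered like this; the values are illustrative:
hive> set hive.exec.reducers.bytes.per.reducer=256000000;   -- target roughly 256 MB of input per reducer
hive> set mapred.reduce.tasks=2;                            -- or pin an exact reducer count
hive> select str, count(*) from test1 group by str;         -- the next job picks these up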


-- For comparison, the table declarations side by side (test2 falls back to the TEXTFILE default):
create table test1(str STRING) STORED AS TEXTFILE;
create table test2(str STRING);
hive> create table test3(str STRING) STORED AS SEQUENCEFILE;
OK
Time taken: 0.112 seconds

hive> create table test4(str STRING) STORED AS RCFILE;
OK
Time taken: 0.502 seconds
[b]8. Import data from the old table into a new one[/b]
hive> INSERT OVERWRITE TABLE test4 SELECT * FROM test1;
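INSERT OVERWRITE replaces whatever the target table already holds and runs as a MapReduce job; the SequenceFile table is populated the same way. A sketch:
hive> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;   -- fill the SequenceFile table
hive> select * from test4;                                -- rows read back from RCFile storage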
[b]9. Set Hive parameters[/b]
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
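These settings only apply to jobs launched after they are issued, so to actually write BLOCK-compressed SequenceFile output they must precede the INSERT. A sketch, with the compression codec left at the cluster default:
hive> SET hive.exec.compress.output=true;
hive> SET io.seqfile.compression.type=BLOCK;
hive> INSERT OVERWRITE TABLE test3 SELECT * FROM test1;   -- output now written as compressed SequenceFile blocks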
[b]10. View Hive parameters[/b]
hive> SET;
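SET with no argument lists every Hive variable. A single parameter can also be inspected by naming it, and -v additionally prints the Hadoop configuration:
hive> SET hive.exec.compress.output;   -- print just this one parameter
hive> SET -v;                          -- also include Hadoop configuration values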