Requirement:
A large amount of log data is produced every day. The requirement is: after midnight, a scheduled job should load the day's log data into a Hive table so that it is available for analysis the next day. The table must be a partitioned table, partitioned by date. The data files are named after the date on which they were generated, e.g. 2015-01-09.txt.
1. Create the partitioned table, with the date as the partition field
- hive> CREATE TABLE storebydate(
- > name STRING,
- > age INT,
- > address STRING
- > )
- > PARTITIONED BY(date STRING)
- > ROW FORMAT DELIMITED
- > FIELDS TERMINATED BY ','
- > STORED AS TEXTFILE;
- OK
- Time taken: 0.093 seconds
Because my test data files are comma-delimited, the table above uses a comma as the field delimiter; change it to whichever delimiter your own data uses.
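To sanity-check the table definition (and, once data has been loaded, the partitions it contains), you can run a quick query from the shell. This is not part of the original walkthrough; it assumes the table lives in the same hive database that the load script below uses:
- hive -e "USE hive; DESCRIBE storebydate; SHOW PARTITIONS storebydate;"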
2. Create the load script
- #!/bin/sh
- # The script runs on the following day, so this is the previous day's date, e.g. 2015-01-09
- todaydate=`date -d -1days +%Y-%m-%d`
- # Load that day's file into its own partition; OVERWRITE replaces any data already in the partition
- hive -e "USE hive; LOAD DATA LOCAL INPATH '/home/hadoopUser/data/test/$todaydate.txt' OVERWRITE INTO TABLE storebydate PARTITION (date='$todaydate')"
Note that the scheduled load runs only after a full day's data has been collected, i.e. on the following day: the data for the 9th is processed on the 10th, so todaydate actually evaluates to the previous day's date. The hive -e command is used to run the HQL statement directly from the shell.
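As a slightly more defensive sketch of the same script (the path and database name come from the version above; the file-existence check is an addition of mine), you could skip the load when the previous day's file was never produced:
- #!/bin/sh
- # Compute the previous day's date, e.g. 2015-01-09
- todaydate=`date -d -1days +%Y-%m-%d`
- datafile="/home/hadoopUser/data/test/$todaydate.txt"
- # If the log file was not generated, exit instead of failing inside Hive
- if [ ! -f "$datafile" ]; then
-     echo "No data file for $todaydate, nothing to load" >&2
-     exit 1
- fi
- hive -e "USE hive; LOAD DATA LOCAL INPATH '$datafile' OVERWRITE INTO TABLE storebydate PARTITION (date='$todaydate')"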
3. Test the script
1. Use the file 2015-01-09.txt as test data; its contents are as follows:
- Tom,19,HuaiAn
- Jack,21,HuaiAn
- HaoNing,12,HuaiAn
- Hadoop,20,AnHui
- Rose,23,NanJing
- [hadoopUser@secondmgt test]$ ./test.sh
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
- 15/01/10 20:43:05 INFO Configuration.deprecation: mapred.committer.job.setup.cleanup.needed is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
- Logging initialized using configuration in file:/home/hadoopUser/cloud/hive/apache-hive-0.13.1-bin/conf/hive-log4j.properties
- OK
- Time taken: 0.565 seconds
- Copying data from file:/home/hadoopUser/data/test/2015-01-09.txt
- Copying file: file:/home/hadoopUser/data/test/2015-01-09.txt
- Loading data to table hive.storebydate partition (date=2015-01-09)
- rmr: DEPRECATED: Please use 'rm -r' instead.
- Deleted hdfs://secondmgt:8020/hive/warehouse/hive.db/storebydate/date=2015-01-09
- Partition hive.storebydate{date=2015-01-09} stats: [numFiles=1, numRows=0, totalSize=79, rawDataSize=0]
- OK
- Time taken: 1.168 seconds
- hive> select * from storebydate;
- OK
- Tom 19 HuaiAn 2015-01-09
- Jack 21 HuaiAn 2015-01-09
- HaoNing 12 HuaiAn 2015-01-09
- Hadoop 20 AnHui 2015-01-09
- Rose 23 NanJing 2015-01-09
- Time taken: 0.058 seconds, Fetched: 5 row(s)
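Because the table is partitioned by date, a typical next-day analysis query filters on the partition column so Hive only scans that day's directory. A minimal example against the data just loaded:
- hive> SELECT name, age, address FROM storebydate WHERE date='2015-01-09';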
4. Schedule the script
This part is straightforward and relies on crontab, so it is not covered in detail here; see my other post: Crontab: schedule a shell script (running every ten minutes as an example).
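For completeness, a crontab entry along the following lines would run the load shortly after midnight. The script path and log path here are assumptions based on the test above; adjust them to your environment:
- # Run at 00:30 every day and append output to a log file for troubleshooting
- 30 0 * * * /home/hadoopUser/data/test/test.sh >> /home/hadoopUser/logs/loaddata.log 2>&1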