My configuration
1. Place the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/
2. Sync hadoop-lzo-0.4.20.jar to the other machines in the cluster (hadoop103, hadoop104)
3. Add the following to core-site.xml to enable LZO compression
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>io.compression.codecs</name>
        <value>
            org.apache.hadoop.io.compress.GzipCodec,
            org.apache.hadoop.io.compress.DefaultCodec,
            org.apache.hadoop.io.compress.BZip2Codec,
            org.apache.hadoop.io.compress.SnappyCodec,
            com.hadoop.compression.lzo.LzoCodec,
            com.hadoop.compression.lzo.LzopCodec
        </value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>
4. Sync core-site.xml to hadoop103 and hadoop104
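The two sync steps (the jar in step 2 and core-site.xml in step 4) can be sketched as a small scp loop. The host names and jar path come from these notes; the etc/hadoop location for core-site.xml is the standard Hadoop layout, and a cluster-wide sync script (e.g. an rsync-based helper) would work just as well:

```shell
#!/bin/sh
# Dry-run sketch: prints the scp commands that would copy the LZO jar and
# core-site.xml to each worker node; remove the leading "echo" to actually copy.
JAR=/opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar
CONF=/opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml

for host in hadoop103 hadoop104; do
    echo scp "$JAR"  "$host:${JAR%/*}/"   # jar into the same common/ dir
    echo scp "$CONF" "$host:${CONF%/*}/"  # config into etc/hadoop/
done
```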
5. Start the Hadoop cluster and manually build an index for each lzo file; without an index, an lzo file is not splittable and produces only a single split
6. Test step 5
1. Create the table in Hive
create table bigtable(id bigint, time bigint, uid string, keyword string,
url_rank int, click_num int, click_url string)
row format delimited fields terminated by '\t'
STORED AS
INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
2. Load data into the table; bigtable.lzo is 140 MB
# Run the following command in the hive CLI
load data local inpath '/opt/module/datas/bigtable.lzo' into table bigtable;
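The 140 MB figure matters: with the default 128 MB HDFS block size, an indexed bigtable.lzo should yield two splits, while the unindexed file stays at one. The expected count is a simple ceiling division (assuming the split size equals the 128 MB block size):

```shell
# ceil(140 / 128) = expected number of splits once the index exists
FILE_MB=140
BLOCK_MB=128
echo $(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))   # prints 2
```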
3. Build the index
# Run the following command in a Linux shell
hadoop jar /opt/module/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /user/hive/warehouse/bigtable
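To confirm the indexer worked, it should have written a bigtable.lzo.index file next to the data, and a query over the table should now launch two map tasks. A verification sketch, using the paths above; the hive.input.format setting is needed because Hive's default CombineHiveInputFormat can merge splits back together:

```shell
# The index file sits next to the data file in the warehouse directory.
hadoop fs -ls /user/hive/warehouse/bigtable
# Expect to see both bigtable.lzo and bigtable.lzo.index

# Force per-file splitting, then watch the job output for 2 map tasks.
hive -e "set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
select id, count(*) from bigtable group by id limit 10;"
```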