Download link for the pre-built hadoop-lzo-0.4.20.jar and bigtable.lzo:
Link: https://caiyun.139.com/m/i?185CjwRJ1lzT0
Access code: icUc
1. Hadoop does not support LZO compression natively, so we use the open-source hadoop-lzo component published by Twitter. hadoop-lzo must be compiled against hadoop and lzo; the compilation steps are omitted here.
2. Place the compiled hadoop-lzo-0.4.20.jar into hadoop-2.7.2/share/hadoop/common/.
3. Sync hadoop-lzo-0.4.20.jar to all the other nodes.
4. Add the following properties to core-site.xml to enable LZO compression:
<property>
    <name>io.compression.codecs</name>
    <value>
        org.apache.hadoop.io.compress.GzipCodec,
        org.apache.hadoop.io.compress.DefaultCodec,
        org.apache.hadoop.io.compress.BZip2Codec,
        org.apache.hadoop.io.compress.SnappyCodec,
        com.hadoop.compression.lzo.LzoCodec,
        com.hadoop.compression.lzo.LzopCodec
    </value>
</property>

<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
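To get a feel for what the `io.compression.codecs` list does: Hadoop's CompressionCodecFactory picks a codec for each input file by matching the file's suffix against the registered codecs' default extensions (e.g. `.gz` for GzipCodec, `.lzo` for LzopCodec). Below is a minimal Python sketch of that lookup; the extension table is hand-written for illustration, not read from the real codec classes.

```python
# Illustrative only: mimics how Hadoop's CompressionCodecFactory matches a
# codec to a file by its suffix. The extension table is hand-written here.
CODEC_EXTENSIONS = {
    ".gz":          "org.apache.hadoop.io.compress.GzipCodec",
    ".deflate":     "org.apache.hadoop.io.compress.DefaultCodec",
    ".bz2":         "org.apache.hadoop.io.compress.BZip2Codec",
    ".snappy":      "org.apache.hadoop.io.compress.SnappyCodec",
    ".lzo_deflate": "com.hadoop.compression.lzo.LzoCodec",
    ".lzo":         "com.hadoop.compression.lzo.LzopCodec",
}

def codec_for(path):
    """Return the codec class registered for this file's suffix, or None."""
    # Try longer extensions first so ".lzo_deflate" wins over ".lzo".
    for ext, codec in sorted(CODEC_EXTENSIONS.items(), key=lambda kv: -len(kv[0])):
        if path.endswith(ext):
            return codec
    return None

print(codec_for("/input/bigtable.lzo"))  # com.hadoop.compression.lzo.LzopCodec
print(codec_for("/input/plain.txt"))     # None: the file is read uncompressed
```

Note that `.lzo` files map to LzopCodec, not LzoCodec; this is why the commands below upload and index a `.lzo` file.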
The splittability of an LZO-compressed file depends on its index, so we must build the index for the LZO file manually. Without the index, the LZO file produces only a single split.
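To see why the index enables splitting: an .lzo file is a sequence of independently compressed blocks, and the index records the byte offset of each block. With those offsets, the input format can cut the file at block boundaries near the desired split size; without them, there is no safe place to cut, so the whole file becomes one split. A rough Python sketch of that boundary logic follows; the function and the numbers are illustrative, not hadoop-lzo's actual code.

```python
def lzo_splits(file_size, block_offsets, target_split_size):
    """Cut [0, file_size) into splits whose boundaries fall on indexed
    LZO block offsets. With no index, the whole file is one split."""
    if not block_offsets:              # no .index file available
        return [(0, file_size)]
    splits, start = [], 0
    for off in block_offsets:
        if off - start >= target_split_size:
            splits.append((start, off))
            start = off
    splits.append((start, file_size))  # remainder becomes the last split
    return splits

MB = 2**20
# A ~150 MB file (like bigtable.lzo) with a block offset every ~16 MB,
# and a 128 MB target split size:
offsets = [i * 16 * MB for i in range(1, 10)]
print(lzo_splits(150 * MB, offsets, 128 * MB))  # two splits
print(lzo_splits(150 * MB, [], 128 * MB))       # one split without an index
```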
- Upload bigtable.lzo (150 MB) to the cluster:
[hadoop@hadoop201 hadoop]$ hadoop fs -mkdir /input
[hadoop@hadoop201 hadoop]$ hadoop fs -put bigtable.lzo /input
- Build an index for the uploaded LZO file:
[hadoop@hadoop201 hadoop]$ hadoop jar /opt/modules/hadoop-2.7.2/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
- Run the WordCount program again:
[hadoop@hadoop201 hadoop]$ hadoop jar /opt/modules/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output2