Put an LZO-compressed file into HDFS
An LZO file supports splitting only after an index has been created for it.
[root@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /input
[root@hadoop102 hadoop-3.1.3]$ hadoop fs -put /opt/software/bigtable.lzo /input
2022-07-31 15:04:29,330 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-07-31 15:04:30,487 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
[root@hadoop102 hadoop-3.1.3]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.job.inputformat.class=com.hadoop.mapreduce.LzoTextInputFormat /input /output1
The input format here is specified as LzoTextInputFormat.
The job ran with a single split: the file is larger than 128 MB and smaller than 256 MB, yet it still was not split, because LZO does not support splitting by default.
2022-07-31 15:10:38,055 INFO mapreduce.JobSubmitter: number of splits:1
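If you want to double-check the input size yourself, a plain listing of the uploaded file makes the 128 MB-256 MB claim easy to verify (this uses the same /input path as above; the exact size shown depends on your own bigtable.lzo):
[root@hadoop102 hadoop-3.1.3]$ hadoop fs -ls -h /input/bigtable.lzo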
To make it splittable, an index should be built for the LZO file after it is uploaded.
[root@hadoop102 hadoop-3.1.3]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
Once this finishes, the index has been created and an extra index file appears in the same directory as the original file.
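You can confirm this with another listing (the hadoop-lzo indexer appends a .index suffix, so the new file should show up as something like bigtable.lzo.index; output omitted here):
[root@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /input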
Run wordcount again and observe the number of splits (note: the output path must be changed, because a MapReduce output path must not already exist).
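If you would rather reuse an output path, remove the old directory first, for example (this deletes /output1 from the earlier run, so only do it if you no longer need that result):
[root@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /output1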
[root@hadoop102 hadoop-3.1.3]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.job.inputformat.class=com.hadoop.mapreduce.LzoTextInputFormat /input /output2
The split count is now two; the index file is not counted as a split, and the original file itself is what got split.
2022-07-31 15:29:05,250 INFO mapreduce.JobSubmitter: number of splits:2