Download link:
http://pan.baidu.com/s/1sj4Sbwx
The compression algorithms commonly used in Hadoop are bzip2, gzip, LZO, and Snappy; LZO and Snappy require native libraries to be installed on the operating system.
The table below is a fairly official benchmark; different scenarios call for different codecs. bzip2 and gzip are CPU-intensive and achieve the highest compression ratios, but gzip cannot be split for parallel processing. Snappy and LZO are roughly comparable, with Snappy slightly ahead, and both use less CPU than gzip.
In practice, when you want to balance CPU against I/O, Snappy and LZO are the common choices.
Comparison between compression algorithms
Algorithm   % remaining   Encoding    Decoding
GZIP        13.4%         21 MB/s     118 MB/s
LZO         20.5%         135 MB/s    410 MB/s
Snappy      22.2%         172 MB/s    409 MB/s
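Rough numbers like those in the table can be reproduced locally. This is a minimal sketch using only the codecs shipped with the Python standard library (gzip and bz2; LZO and Snappy need third-party native bindings), run on synthetic data, so the absolute figures will differ from the table:

```python
import bz2
import gzip
import time

# Synthetic, fairly repetitive input (real log files compress similarly).
data = b"2013-12-19 10:00:00 GET /index.html 200 1234\n" * 200_000

for name, compress in [("gzip", gzip.compress), ("bz2", bz2.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    ratio = 100.0 * len(out) / len(data)   # "% remaining" as in the table
    print(f"{name}: {ratio:.1f}% remaining, "
          f"{len(data) / elapsed / 1e6:.0f} MB/s encoding")
```

On repetitive data you should see bz2 leave a smaller percentage remaining than gzip while encoding more slowly, mirroring the CPU-vs-ratio trade-off described above.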
Files in TextFile, SequenceFile, and other user-defined formats can all be compressed with the algorithms above.
A compressed TextFile cannot be split: when the compressed data is used as job input, each file is handled by a single map task. A SequenceFile, however, is internally divided into blocks; combined with LZO compression, the file supports LZO-style splitting. Compression can be applied per record or per block, and block compression is generally more efficient.
---------------------------------------------
A brief overview of the pros and cons of Hadoop's compression algorithms
When deciding how to compress data that will be processed by MapReduce, it is important to consider whether the compression format supports splitting. Consider an uncompressed file stored in HDFS whose size is 1 GB, with an HDFS block size of 64 MB: the file is stored as 16 blocks, and a MapReduce job using this file as input creates 16 input splits (a "split" here is the MapReduce input split; for HDFS blocks we consistently say "block"). Each split is processed independently as the input of its own map task.
Now suppose the file is a gzip-compressed file whose compressed size is 1 GB. As before, HDFS stores it as 16 blocks. However, creating a split per block is useless, because it is impossible to start reading at an arbitrary point in a gzip stream, so a map task cannot read its own split independently of the others. gzip uses DEFLATE to store the compressed data, and DEFLATE stores the data as a series of compressed blocks. The problem is that the start of each block is not marked in any way that would let a reader positioned at an arbitrary point in the stream locate the beginning of the next block and synchronize with the stream. For this reason, gzip does not support splitting.
In this case MapReduce does not split the gzip file, because it knows the input is gzip-compressed (from the file extension) and that gzip does not support splitting. This works, but at the cost of locality: a single map task processes all 16 HDFS blocks, most of which are not local to that map. At the same time, with fewer map tasks the job is split at a coarser granularity, so it may take longer to run.
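The claim that a reader cannot synchronize with a DEFLATE stream from an arbitrary offset can be demonstrated with Python's standard library. This is just an illustrative sketch: slicing the compressed bytes in half stands in for handing a map task the second HDFS block of a gzip file:

```python
import gzip
import zlib

data = b"some log line\n" * 100_000
compressed = gzip.compress(data)

# Reading from the beginning of the stream works fine.
assert gzip.decompress(compressed) == data

# Starting from the middle (as a map task assigned the second "block"
# would have to) fails: DEFLATE has no sync markers to latch onto.
middle = compressed[len(compressed) // 2:]
try:
    zlib.decompressobj(wbits=31).decompress(middle)  # wbits=31: gzip framing
    print("unexpectedly succeeded")
except zlib.error:
    print("cannot decode starting from an arbitrary offset")
```

bzip2, by contrast, inserts a magic marker between blocks, which is what makes it splittable, as described below.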
In our hypothetical example, an LZO file would run into the same problem, because the underlying compression format gives the reader no way to synchronize with the stream. A bzip2-compressed file, however, does provide a synchronization marker between blocks (a 48-bit approximation of pi), so it does support splitting.
For collections of files the situation is slightly different. ZIP is an archive format, so it can combine multiple files into a single ZIP archive. Each file is compressed separately, and the locations of all the entries are stored at the end of the ZIP file. This property means a ZIP file supports splitting at file boundaries, with each split containing one or more of the files in the archive.
Which compression format should we use in MapReduce?
It depends on the application. Do you want the fastest compression speed, or the best space savings? In general, try the different strategies and benchmark them with a representative dataset to find the best approach. For large, unbounded files such as log files, the options are:
Store the files uncompressed.
Use a compression format that supports splitting, such as bzip2.
Split the file into several large chunks in the application, then compress each chunk separately with any supported format (whether the format supports splitting no longer matters). Choose the chunk size so that each compressed chunk is roughly the size of an HDFS block.
Use a SequenceFile, which supports both compression and splitting.
For large files, do not use a non-splittable compression format on the whole file: you lose the locality advantage, and the performance of the MapReduce application suffers.
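The third option above (chunking in the application, then compressing each chunk independently) can be sketched in Python. Here a 64 KB chunk stands in for the 64 MB HDFS block, and gzip stands in for whichever codec you pick:

```python
import gzip

def compress_in_chunks(data: bytes, chunk_size: int) -> list:
    """Split data into fixed-size chunks and compress each one
    independently, so every chunk can later be decompressed (and
    processed) in parallel, regardless of codec splittability."""
    return [gzip.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

data = b"log line with some repeated content\n" * 50_000
chunks = compress_in_chunks(data, 64 * 1024)

# Each chunk round-trips on its own; together they rebuild the input.
assert b"".join(gzip.decompress(c) for c in chunks) == data
print(f"{len(chunks)} independently decompressible chunks")
```

In a real pipeline each chunk would become its own HDFS file (or SequenceFile block), so a non-splittable codec no longer forces a single map task.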
Hadoop supports splittable LZO compression
Using LZO compression in Hadoop reduces data size and disk read/write time. Storing compressed data in HDFS lets the cluster hold more data and extends its useful life. Better still, since MapReduce jobs are usually I/O-bound, storing compressed data means fewer I/O operations and more efficient jobs.
Compression on Hadoop has two awkward aspects, though. First, some formats cannot be split and processed in parallel, such as gzip. Second, other formats do support splitting but decompress very slowly, shifting the job bottleneck to the CPU, such as bzip2.
----------------------------------------------
So we choose LZO compression with the SequenceFile data format. This gives us both compression and splitting, so in MapReduce a large file can be divided among multiple map tasks.
1. Install the dependencies (required on all three nodes; give the hadoop user sudo privileges):
sudo yum -y install lzo-devel zlib-devel gcc autoconf automake libtool
Here we go!
(1) Install LZO
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
tar -zxvf lzo-2.06.tar.gz
cd lzo-2.06
./configure --enable-shared --prefix=/usr/local/hadoop/lzo/
make && sudo make install
(2) Install lzop
wget http://www.lzop.org/download/lzop-1.03.tar.gz
tar -zxvf lzop-1.03.tar.gz
cd lzop-1.03
./configure --enable-shared --prefix=/usr/local/hadoop/lzop
make && sudo make install
(3) Copy the lzop binary to /usr/bin/:
sudo cp /usr/local/hadoop/lzop/bin/lzop /usr/bin/
(4) Test lzop:
lzop /home/hadoop/data/access_20131219.log
If this produces a compressed file with the .lzo suffix (/home/hadoop/data/access_20131219.log.lzo), the preceding steps were done correctly.
(5) Install Hadoop-LZO
One more prerequisite: Maven must be set up, plus SVN or Git (I used SVN).
I used SVN with https://github.com/twitter/hadoop-lzo (or download the zip directly from GitHub and upload it to the Linux machine).
Check out the code from https://github.com/twitter/hadoop-lzo/trunk with SVN, then modify part of the pom.xml file.
++++++++++++++++++++++++++++
Installing and configuring an SVN client on Linux
Errors you may hit (the packages can be installed in advance):
1. configure: error: could not find library containing RSA_new
Fix:
# yum install openssl-devel
2. configure: error: no XML parser was found: expat or libxml 2.x required
Fix:
# yum install expat-devel
Installation:
SVN client installation and environment configuration.
If the Linux machine does not already have an SVN client installed, install one first:
1. Download subversion-1.4.3.tar.bz2 and subversion-deps-1.4.3.tar.bz2.
2. Unpack the two files (in this order):
tar xvfj subversion-1.4.3.tar.bz2
tar xvfj subversion-deps-1.4.3.tar.bz2
A subversion-1.4.3 directory now appears in the current directory.
3. Enter the subversion-1.4.3 directory:
# cd subversion-1.4.3
# ./configure --with-libs=/usr/lib --enable-shared --with-ssl=openssl
(--with-ssl defaults to openssl; specify it explicitly)
# make
(compile; if errors are reported you may need root privileges)
# sudo make install
(install; root privileges may be needed here)
After installation the default install directory is /usr/local/subversion.
4. Test:
svn help
If a list of commands appears, everything is linked correctly; the SVN client is installed successfully.
++++++++++++++++++++++++++++
Check out the code from https://github.com/twitter/hadoop-lzo/trunk with SVN and modify part of the pom.xml file. (The exact change was lost from the original post; typically it means pointing the Hadoop version property in pom.xml at your own Hadoop release, 2.2.0 here.)
++++++++++++++++++++++++++
Install Maven:
Maven can be built from source from the official download page, but here we simply download the prebuilt binary:
wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.1.1/binaries/apache-maven-3.1.1-bin.zip
After unpacking, configure the environment variables in /etc/profile as usual:
export MAVEN_HOME=/opt/maven3.1.1
export PATH=$PATH:$MAVEN_HOME/bin
Verify the configuration: mvn -version
Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2013-09-17 23:22:22+0800)
Maven home: /opt/maven3.1.1
Java version: 1.7.0_45, vendor: Oracle Corporation
Java home: /opt/jdk1.7/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-358.el6.x86_64", arch: "amd64", family: "unix"
Configure a domestic mirror for dependencies; the official repository is painfully slow from here.
In the Maven directory, edit conf/settings.xml and add a mirror entry inside the <mirrors> element, leaving the existing content untouched.
With that in place, downloads are much faster.
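The mirror snippet itself was lost from the original post; a typical <mirror> entry inside the <mirrors> element of conf/settings.xml looks like the following (the id and URL below are placeholders, not the author's original values; substitute whichever domestic mirror you use):

```xml
<mirror>
  <id>cn-mirror</id>
  <mirrorOf>central</mirrorOf>
  <name>Domestic mirror of Maven central</name>
  <url>http://example-mirror.cn/content/groups/public/</url>
</mirror>
```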
Then run, in order:
cd hadoop-lzo-master
(this directory contains the pom.xml that mvn builds from)
mvn clean package -Dmaven.test.skip=true
(on a 64-bit OS, if the build fails, set two shell variables first:
export CFLAGS=-m64
export CXXFLAGS=-m64)
tar -cBf - -C target/native/Linux-i386-32/lib . | tar -xBvf - -C /cloud/hadoop-2.2.0/lib/native/
cp target/hadoop-lzo-0.4.20-SNAPSHOT.jar /cloud/hadoop-2.2.0/share/hadoop/common/
Next, sync /cloud/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar and the /cloud/hadoop-2.2.0/lib/native/ directory to all other Hadoop nodes:
scp -r /cloud/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar slave1:/cloud/hadoop-2.2.0/share/hadoop/common/
scp -r /cloud/hadoop-2.2.0/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar slave2:/cloud/hadoop-2.2.0/share/hadoop/common/
scp -r /cloud/hadoop-2.2.0/lib/native/* slave1:/cloud/hadoop-2.2.0/lib/native/
scp -r /cloud/hadoop-2.2.0/lib/native/* slave2:/cloud/hadoop-2.2.0/lib/native/
Note: make sure the user that runs Hadoop has execute permission on the libraries under /cloud/hadoop-2.2.0/lib/native/.
(6) Configure Hadoop
Append the following to $HADOOP_HOME/etc/hadoop/hadoop-env.sh:
export LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib
Append the LZO codec configuration to $HADOOP_HOME/etc/hadoop/core-site.xml.
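The original core-site.xml snippet did not survive in the post. The configuration typically registers the LZO codecs along the following lines (verify the class names against the hadoop-lzo README for your version):

```xml
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```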
Append the map-output compression configuration to $HADOOP_HOME/etc/hadoop/mapred-site.xml.
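The mapred-site.xml snippet is likewise missing from the original. A common choice is to LZO-compress the intermediate map output, roughly as follows (property names are the Hadoop 2.x ones; check your distribution's mapred-default.xml):

```xml
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```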
(7) Test whether the installation succeeded
Generate test data:
java -jar createDataWC.jar rolin.txt 1000000
lzop rolin.txt
Size before and after compression:
[hadoop@master ~]$ ll -lsh r*
597M -rw-rw-r--. 1 hadoop hadoop 597M Jun 13 02:07 rolin.txt
163M -rw-rw-r--. 1 hadoop hadoop 163M Jun 13 02:07 rolin.txt.lzo
Upload to HDFS:
hadoop fs -mkdir /lzo
hadoop fs -mkdir /lzo1
hadoop fs -put rolin.txt.lzo /lzo/
hadoop fs -put rolin.txt /lzo1
Build the LZO file index.
MapReduce way (DistributedLzoIndexer runs the indexing as a MapReduce job):
$HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar \
com.hadoop.compression.lzo.DistributedLzoIndexer \
/lzo/rolin.txt.lzo
Local-program way (LzoIndexer builds the index in-process):
$HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar \
com.hadoop.compression.lzo.LzoIndexer \
/lzo/rolin.txt.lzo
Output of building the LZO index the MapReduce way:
[hadoop@master ~]$ $HADOOP_HOME/bin/hadoop jar \
> $HADOOP_HOME/share/hadoop/common/hadoop-lzo-0.4.20-SNAPSHOT.jar \
> com.hadoop.compression.lzo.DistributedLzoIndexer \
> /lzo/rolin.txt.lzo
14/06/15 18:43:14 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
14/06/15 18:43:14 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev e8c11c2be93b965abb548411379b203dabcbce79]
14/06/15 18:43:16 INFO lzo.DistributedLzoIndexer: Adding LZO file /lzo/rolin.txt.lzo to indexing list (no index currently exists)
14/06/15 18:43:16 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/06/15 18:43:16 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.1.150:8032
14/06/15 18:43:17 INFO input.FileInputFormat: Total input paths to process : 1
14/06/15 18:43:17 INFO mapreduce.JobSubmitter: number of splits:1
14/06/15 18:43:17 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/06/15 18:43:17 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/06/15 18:43:17 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/06/15 18:43:17 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/06/15 18:43:17 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/06/15 18:43:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1402882236152_0001
14/06/15 18:43:18 INFO impl.YarnClientImpl: Submitted application application_1402882236152_0001 to ResourceManager at master/192.168.1.150:8032
14/06/15 18:43:19 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1402882236152_0001/
14/06/15 18:43:19 INFO lzo.DistributedLzoIndexer: Started DistributedIndexer job_1402882236152_0001 with 1 splits for [/lzo/rolin.txt.lzo]
14/06/15 18:43:19 INFO mapreduce.Job: Running job: job_1402882236152_0001
14/06/15 18:43:29 INFO mapreduce.Job: Job job_1402882236152_0001 running in uber mode : false
14/06/15 18:43:29 INFO mapreduce.Job:  map 0% reduce 0%
14/06/15 18:43:43 INFO mapreduce.Job:  map 100% reduce 0%
14/06/15 18:43:43 INFO mapreduce.Job: Job job_1402882236152_0001 completed successfully
14/06/15 18:43:43 INFO mapreduce.Job: Counters: 28
The indexing completes entirely in the map phase.
After either program runs successfully, an index file rolin.txt.lzo.index is created under the HDFS directory /lzo/.
Run the wordcount test program.
The official (uncompressed) wordcount program:
package org.apache.hadoop.examples;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {

  // The class body was missing from the original post; restored from the
  // standard Hadoop WordCount example that these imports belong to.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Results:
[hadoop@master ~]$ hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /lzo1 /out1
14/06/15 23:40:38 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.1.150:8032
14/06/15 23:40:39 INFO input.FileInputFormat: Total input paths to process : 1
14/06/15 23:40:39 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
14/06/15 23:40:39 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev e8c11c2be93b965abb548411379b203dabcbce79]
14/06/15 23:40:39 INFO mapreduce.JobSubmitter: number of splits:5
14/06/15 23:40:39 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/06/15 23:40:39 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/06/15 23:40:39 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/06/15 23:40:39 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/06/15 23:40:39 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/06/15 23:40:39 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1402882236152_0002
14/06/15 23:40:40 INFO impl.YarnClientImpl: Submitted application application_1402882236152_0002 to ResourceManager at master/192.168.1.150:8032
14/06/15 23:40:40 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1402882236152_0002/
14/06/15 23:40:40 INFO mapreduce.Job: Running job: job_1402882236152_0002
14/06/15 23:40:48 INFO mapreduce.Job: Job job_1402882236152_0002 running in uber mode : false
14/06/15 23:40:48 INFO mapreduce.Job:  map 0% reduce 0%
14/06/15 23:41:52 INFO mapreduce.Job:  map 8% reduce 0%
14/06/15 23:41:55 INFO mapreduce.Job:  map 10% reduce 0%
14/06/15 23:41:58 INFO mapreduce.Job:  map 12% reduce 0%
14/06/15 23:42:01 INFO mapreduce.Job:  map 13% reduce 0%
14/06/15 23:42:20 INFO mapreduce.Job:  map 18% reduce 0%
14/06/15 23:42:23 INFO mapreduce.Job:  map 21% reduce 0%
14/06/15 23:42:26 INFO mapreduce.Job:  map 23% reduce 0%
14/06/15 23:42:29 INFO mapreduce.Job:  map 24% reduce 0%
14/06/15 23:42:47 INFO mapreduce.Job:  map 30% reduce 0%
14/06/15 23:42:50 INFO mapreduce.Job:  map 33% reduce 0%
14/06/15 23:42:53 INFO mapreduce.Job:  map 34% reduce 0%
14/06/15 23:42:56 INFO mapreduce.Job:  map 35% reduce 0%
14/06/15 23:43:13 INFO mapreduce.Job:  map 36% reduce 0%
14/06/15 23:43:15 INFO mapreduce.Job:  map 37% reduce 0%
14/06/15 23:43:16 INFO mapreduce.Job:  map 41% reduce 0%
14/06/15 23:43:19 INFO mapreduce.Job:  map 44% reduce 0%
14/06/15 23:43:22 INFO mapreduce.Job:  map 45% reduce 0%
14/06/15 23:43:25 INFO mapreduce.Job:  map 46% reduce 0%
14/06/15 23:43:40 INFO mapreduce.Job:  map 48% reduce 0%
14/06/15 23:43:43 INFO mapreduce.Job:  map 52% reduce 0%
14/06/15 23:43:46 INFO mapreduce.Job:  map 53% reduce 0%
14/06/15 23:43:49 INFO mapreduce.Job:  map 54% reduce 0%
14/06/15 23:43:50 INFO mapreduce.Job:  map 61% reduce 0%
14/06/15 23:44:04 INFO mapreduce.Job:  map 62% reduce 7%
14/06/15 23:44:06 INFO mapreduce.Job:  map 63% reduce 7%
14/06/15 23:44:07 INFO mapreduce.Job:  map 65% reduce 7%
14/06/15 23:44:10 INFO mapreduce.Job:  map 67% reduce 7%
14/06/15 23:44:12 INFO mapreduce.Job:  map 68% reduce 7%
14/06/15 23:44:13 INFO mapreduce.Job:  map 69% reduce 7%
14/06/15 23:44:25 INFO mapreduce.Job:  map 70% reduce 7%
14/06/15 23:44:28 INFO mapreduce.Job:  map 72% reduce 7%
14/06/15 23:44:31 INFO mapreduce.Job:  map 73% reduce 7%
14/06/15 23:44:36 INFO mapreduce.Job:  map 87% reduce 7%
14/06/15 23:44:37 INFO mapreduce.Job:  map 93% reduce 13%
14/06/15 23:44:38 INFO mapreduce.Job:  map 100% reduce 100%
14/06/15 23:44:40 INFO mapreduce.Job: Job job_1402882236152_0002 completed successfully
14/06/15 23:44:41 INFO mapreduce.Job: Counters: 43
Results:
hello 24994174
legend 25006579
rolin 25000767
world 24998480
Time:
real 3m18.074s
user 0m0.736s
sys 0m2.493s
The wordcount program using compression:
To use LZO-compressed input in a MapReduce program, set the input format to LzoTextInputFormat:
job.setInputFormatClass(LzoTextInputFormat.class);
package youling.studio.lzo;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import com.hadoop.mapreduce.LzoTextInputFormat;
public class WordCount {

  // The class body was missing from the original post; restored following
  // the standard WordCount example, with the input format switched to
  // LzoTextInputFormat as the surrounding text describes.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    Job job = Job.getInstance(conf, "word count lzo");
    job.setJarByClass(WordCount.class);
    // The key difference from the stock example: read LZO files with
    // their indexes so a large .lzo file is split across map tasks.
    job.setInputFormatClass(LzoTextInputFormat.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Note: for an LZO file that has been indexed, if the input format is not set to LzoTextInputFormat, the index file will also be treated as an input file.
Also, hadoop-lzo-0.4.20-SNAPSHOT.jar must be on the classpath when compiling. If you build with Maven, add the dependency to the pom file.
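The dependency block itself was missing from the original post; for the jar built above, the Maven coordinates are roughly the following (groupId as published by the twitter/hadoop-lzo project; adjust the version to whatever your build produced):

```xml
<dependency>
  <groupId>com.hadoop.gplcompression</groupId>
  <artifactId>hadoop-lzo</artifactId>
  <version>0.4.20-SNAPSHOT</version>
</dependency>
```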
Test results:
[hadoop@master ~]$ hadoop jar lzoTest.jar youling.studio.lzo.WordCount /lzo /out
14/06/15 23:57:44 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.1.150:8032
14/06/15 23:57:45 INFO input.FileInputFormat: Total input paths to process : 2
14/06/15 23:57:45 INFO mapreduce.JobSubmitter: number of splits:2
14/06/15 23:57:45 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/06/15 23:57:45 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/06/15 23:57:45 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/06/15 23:57:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1402882236152_0003
14/06/15 23:57:45 INFO impl.YarnClientImpl: Submitted application application_1402882236152_0003 to ResourceManager at master/192.168.1.150:8032
14/06/15 23:57:45 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1402882236152_0003/
14/06/15 23:57:45 INFO mapreduce.Job: Running job: job_1402882236152_0003
14/06/15 23:57:51 INFO mapreduce.Job: Job job_1402882236152_0003 running in uber mode : false
14/06/15 23:57:51 INFO mapreduce.Job:  map 0% reduce 0%
14/06/15 23:58:03 INFO mapreduce.Job:  map 7% reduce 0%
14/06/15 23:58:07 INFO mapreduce.Job:  map 8% reduce 0%
14/06/15 23:58:13 INFO mapreduce.Job:  map 11% reduce 0%
14/06/15 23:58:16 INFO mapreduce.Job:  map 14% reduce 0%
14/06/15 23:58:22 INFO mapreduce.Job:  map 15% reduce 0%
14/06/15 23:58:25 INFO mapreduce.Job:  map 19% reduce 0%
14/06/15 23:58:28 INFO mapreduce.Job:  map 21% reduce 0%
14/06/15 23:58:34 INFO mapreduce.Job:  map 25% reduce 0%
14/06/15 23:58:37 INFO mapreduce.Job:  map 27% reduce 0%
14/06/15 23:58:43 INFO mapreduce.Job:  map 30% reduce 0%
14/06/15 23:58:46 INFO mapreduce.Job:  map 33% reduce 0%
14/06/15 23:58:52 INFO mapreduce.Job:  map 34% reduce 0%
14/06/15 23:58:55 INFO mapreduce.Job:  map 38% reduce 0%
14/06/15 23:58:58 INFO mapreduce.Job:  map 40% reduce 0%
14/06/15 23:59:04 INFO mapreduce.Job:  map 42% reduce 0%
14/06/15 23:59:07 INFO mapreduce.Job:  map 43% reduce 0%
14/06/15 23:59:08 INFO mapreduce.Job:  map 60% reduce 0%
14/06/15 23:59:16 INFO mapreduce.Job:  map 61% reduce 0%
14/06/15 23:59:23 INFO mapreduce.Job:  map 61% reduce 17%
14/06/15 23:59:25 INFO mapreduce.Job:  map 62% reduce 17%
14/06/15 23:59:40 INFO mapreduce.Job:  map 64% reduce 17%
14/06/15 23:59:49 INFO mapreduce.Job:  map 65% reduce 17%
14/06/15 23:59:58 INFO mapreduce.Job:  map 66% reduce 17%
14/06/16 00:00:07 INFO mapreduce.Job:  map 67% reduce 17%
14/06/16 00:00:10 INFO mapreduce.Job:  map 68% reduce 17%
14/06/16 00:00:20 INFO mapreduce.Job:  map 69% reduce 17%
14/06/16 00:00:29 INFO mapreduce.Job:  map 70% reduce 17%
14/06/16 00:00:32 INFO mapreduce.Job:  map 71% reduce 17%
14/06/16 00:00:41 INFO mapreduce.Job:  map 72% reduce 17%
14/06/16 00:00:51 INFO mapreduce.Job:  map 73% reduce 17%
14/06/16 00:01:00 INFO mapreduce.Job:  map 74% reduce 17%
14/06/16 00:01:03 INFO mapreduce.Job:  map 75% reduce 17%
14/06/16 00:01:12 INFO mapreduce.Job:  map 76% reduce 17%
14/06/16 00:01:21 INFO mapreduce.Job:  map 77% reduce 17%
14/06/16 00:01:30 INFO mapreduce.Job:  map 78% reduce 17%
14/06/16 00:01:33 INFO mapreduce.Job:  map 79% reduce 17%
14/06/16 00:01:42 INFO mapreduce.Job:  map 80% reduce 17%
14/06/16 00:01:51 INFO mapreduce.Job:  map 81% reduce 17%
14/06/16 00:02:00 INFO mapreduce.Job:  map 82% reduce 17%
14/06/16 00:02:03 INFO mapreduce.Job:  map 83% reduce 17%
14/06/16 00:02:17 INFO mapreduce.Job:  map 100% reduce 17%
14/06/16 00:02:18 INFO mapreduce.Job:  map 100% reduce 100%
14/06/16 00:02:19 INFO mapreduce.Job: Job job_1402882236152_0003 completed successfully
14/06/16 00:02:19 INFO mapreduce.Job: Counters: 44
Results:
hello 24994174
legend 25006579
rolin 25000767
world 24998480
Time:
real 4m12.915s
user 0m1.462s
sys 0m3.057s
Since my cluster has three nodes and the replication factor is also three, every block is local to every node: data locality already eliminates the network I/O, so I saw no improvement in runtime. My virtual machines are also severely resource-constrained, so the extra LZO compression and decompression actually made the job slower. In a real environment, however, it makes a big difference.