Hadoop Notes 05 - Hadoop Production Tuning Guide

HDFS - Core Parameters

NameNode Memory Configuration in Production

Each file block's metadata occupies roughly 150 bytes of NameNode memory.
In Hadoop 2.x, NameNode memory is set by editing hadoop-env.sh: HADOOP_NAMENODE_OPTS=-Xmx3072m.
In Hadoop 3.x, hadoop-env.sh notes that memory is allocated dynamically, which is not always appropriate, so override it in hadoop-env.sh when needed.
Sizing reference: https://docs.cloudera.com/documentation/enterprise/6/releasenotes/topics/rg_hardware_requirements.html#concept_fzz_dq4_gbb
NameNode: at least 1 GB, plus 1 GB for every additional 1,000,000 blocks.
DataNode: at least 4 GB; memory should grow with the number of blocks (replicas) on the node. With fewer than 4,000,000 replicas on a DataNode, 4 GB is enough; beyond that, add 1 GB for every additional 1,000,000 replicas.

export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"

Use the jps command to list the Java processes and their PIDs, then use jmap -heap [pid] to inspect a process's heap usage.
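
A back-of-the-envelope sketch of the sizing rules above; the block and replica counts are illustrative assumptions, not measurements from this cluster:

# Rough heap sizing following the rules above; adjust the counts to your cluster
blocks=3000000                                   # total blocks tracked by the NameNode
nn_heap_gb=$(( 1 + blocks / 1000000 ))           # 1G base + 1G per 1,000,000 blocks
replicas=5000000                                 # replicas stored on one DataNode
dn_heap_gb=$(( replicas <= 4000000 ? 4 : 4 + (replicas - 4000000) / 1000000 ))
echo "NameNode: -Xmx${nn_heap_gb}g  DataNode: -Xmx${dn_heap_gb}g"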

NameNode Heartbeat Concurrency Configuration

The NameNode must keep in contact with every DataNode while also serving client requests, so it maintains a pool of handler threads for DataNode heartbeats and client RPCs. It is set with the dfs.namenode.handler.count property in hdfs-site.xml and defaults to 10. A common rule of thumb is 20 × ln(ClusterSize), where ClusterSize is the number of DataNodes in the cluster.
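
For example, a 3-DataNode cluster works out to about 21 handlers; awk's log() is the natural logarithm:

awk 'BEGIN { n = 3; printf "dfs.namenode.handler.count ≈ %d\n", 20 * log(n) }'
# prints: dfs.namenode.handler.count ≈ 21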

Enabling the Trash

With the trash enabled, deleted files can be recovered as long as they have not expired, which acts as a safety net.
The trash is configured in core-site.xml.

<property>
	<!-- How long (in minutes) deleted files are kept; the default 0 disables the trash -->
	<name>fs.trash.interval</name>
	<value>1</value>
</property>
<property>
	<!-- Interval (in minutes) between trash checkpoints; 0 means it follows fs.trash.interval -->
	<name>fs.trash.checkpoint.interval</name>
	<value>1</value>
</property>

Note that fs.trash.checkpoint.interval must be ≤ fs.trash.interval.
Files deleted through the web UI at http://hadoop102:9870/ do not go through the trash, nor do files deleted programmatically unless the program calls moveToTrash(). Files deleted with hadoop fs -rm on the command line do go through the trash. To recover a file, move it out of the trash with hadoop fs -mv [trash path] [new path].
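
A minimal walkthrough, assuming the trash is enabled; the file name is illustrative, and the current user's trash lives under /user/<user>/.Trash/Current:

# Delete a file through the trash, then restore it
hadoop fs -rm /input/word.txt
# -rm prints the trash destination, e.g. /user/root/.Trash/Current/input/word.txt
hadoop fs -mv /user/root/.Trash/Current/input/word.txt /input/word.txt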

HDFS - Cluster Benchmarking

A newly built cluster should be benchmarked and must meet the business requirements before going into production. HDFS read/write performance is dominated by the network and the disks. To make the test easier to reason about, the inbound and outbound bandwidth of the hadoop102, hadoop103, and hadoop104 VMs is capped at 100 Mbps (12.5 MB/s).
To verify that the cap is in effect, start a SimpleHTTPServer in /opt/module on hadoop102 (python -m SimpleHTTPServer), open hadoop102:8000 in a browser, and download a file while watching the transfer speed.
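
For reference, the server commands (Python 2's built-in server listens on port 8000 by default; on Python 3 the equivalent is python3 -m http.server 8000):

cd /opt/module
python -m SimpleHTTPServer      # serves /opt/module over HTTP on port 8000
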
Start the cluster and run the tests below.

Testing HDFS Write Performance

# Write 10 files of 128 MB each to the HDFS cluster
[root@hadoop102 /]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 128MB

The first run fails with a virtual-memory error; set the virtual-memory check in yarn-site.xml to false, distribute the file, restart the cluster, and run the test again.

<!-- Whether to run a thread that checks the virtual memory used by each task and kills tasks that exceed their allocation; default is true -->
<property>
	<name>yarn.nodemanager.vmem-check-enabled</name>
 	<value>false</value>
</property>

With that change the job runs without errors; the execution log is shown below:

2022-02-02 21:55:26,199 INFO fs.TestDFSIO: TestDFSIO.1.8
2022-02-02 21:55:26,201 INFO fs.TestDFSIO: nrFiles = 10
2022-02-02 21:55:26,201 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
2022-02-02 21:55:26,201 INFO fs.TestDFSIO: bufferSize = 1000000
2022-02-02 21:55:26,201 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
2022-02-02 21:55:26,940 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
2022-02-02 21:55:27,151 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:27,981 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,045 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,095 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,139 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,176 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,213 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,251 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,287 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,325 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:28,356 INFO fs.TestDFSIO: created control files for: 10 files
2022-02-02 21:55:28,455 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.216.103:8032
2022-02-02 21:55:28,649 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.216.103:8032
2022-02-02 21:55:28,951 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1643810094463_0001
2022-02-02 21:55:29,001 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:29,183 INFO mapred.FileInputFormat: Total input files to process : 10
2022-02-02 21:55:29,204 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:29,244 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:29,272 INFO mapreduce.JobSubmitter: number of splits:10
2022-02-02 21:55:29,385 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:55:29,421 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1643810094463_0001
2022-02-02 21:55:29,421 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-02-02 21:55:29,609 INFO conf.Configuration: resource-types.xml not found
2022-02-02 21:55:29,609 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-02-02 21:55:30,058 INFO impl.YarnClientImpl: Submitted application application_1643810094463_0001
2022-02-02 21:55:30,098 INFO mapreduce.Job: The url to track the job: http://hadoop103:8088/proxy/application_1643810094463_0001/
2022-02-02 21:55:30,100 INFO mapreduce.Job: Running job: job_1643810094463_0001
2022-02-02 21:55:38,257 INFO mapreduce.Job: Job job_1643810094463_0001 running in uber mode : false
2022-02-02 21:55:38,259 INFO mapreduce.Job:  map 0% reduce 0%
2022-02-02 21:55:58,121 INFO mapreduce.Job:  map 13% reduce 0%
2022-02-02 21:56:05,837 INFO mapreduce.Job:  map 17% reduce 0%
2022-02-02 21:56:06,891 INFO mapreduce.Job:  map 23% reduce 0%
2022-02-02 21:56:08,074 INFO mapreduce.Job:  map 30% reduce 0%
2022-02-02 21:56:09,240 INFO mapreduce.Job:  map 63% reduce 0%
2022-02-02 21:56:10,424 INFO mapreduce.Job:  map 70% reduce 0%
2022-02-02 21:56:24,345 INFO mapreduce.Job:  map 70% reduce 3%
2022-02-02 21:57:16,770 INFO mapreduce.Job:  map 73% reduce 3%
2022-02-02 21:57:19,692 INFO mapreduce.Job:  map 73% reduce 7%
2022-02-02 21:57:27,659 INFO mapreduce.Job:  map 77% reduce 7%
2022-02-02 21:57:32,512 INFO mapreduce.Job:  map 83% reduce 10%
2022-02-02 21:57:34,387 INFO mapreduce.Job:  map 90% reduce 10%
2022-02-02 21:57:35,395 INFO mapreduce.Job:  map 97% reduce 10%
2022-02-02 21:57:38,417 INFO mapreduce.Job:  map 97% reduce 30%
2022-02-02 21:57:43,742 INFO mapreduce.Job:  map 100% reduce 30%
2022-02-02 21:57:44,747 INFO mapreduce.Job:  map 100% reduce 100%
2022-02-02 21:57:44,759 INFO mapreduce.Job: Job job_1643810094463_0001 completed successfully
2022-02-02 21:57:44,853 INFO mapreduce.Job: Counters: 53
        File System Counters
                FILE: Number of bytes read=863
                FILE: Number of bytes written=2395963
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2350
                HDFS: Number of bytes written=1342177358
                HDFS: Number of read operations=45
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=12
        Job Counters
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=1029072
                Total time spent by all reduces in occupied slots (ms)=95904
                Total time spent by all map tasks (ms)=1029072
                Total time spent by all reduce tasks (ms)=95904
                Total vcore-milliseconds taken by all map tasks=1029072
                Total vcore-milliseconds taken by all reduce tasks=95904
                Total megabyte-milliseconds taken by all map tasks=1053769728
                Total megabyte-milliseconds taken by all reduce tasks=98205696
        Map-Reduce Framework
                Map input records=10
                Map output records=50
                Map output bytes=757
                Map output materialized bytes=917
                Input split bytes=1230
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=917
                Reduce input records=50
                Reduce output records=5
                Spilled Records=100
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=7236
                CPU time spent (ms)=67140
                Physical memory (bytes) snapshot=3803803648
                Virtual memory (bytes) snapshot=28404977664
                Total committed heap usage (bytes)=3086483456
                Peak Map Physical memory (bytes)=377524224
                Peak Map Virtual memory (bytes)=2606665728
                Peak Reduce Physical memory (bytes)=207413248
                Peak Reduce Virtual memory (bytes)=2583265280
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1120
        File Output Format Counters
                Bytes Written=78
2022-02-02 21:57:44,890 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 21:57:44,909 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:             Date & time: Wed Feb 02 21:57:44 CST 2022
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:         Number of files: 10
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:  Total MBytes processed: 1280
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:       Throughput mb/sec: 1.49
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:  Average IO rate mb/sec: 1.91
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:   IO rate std deviation: 1.61
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:      Test exec time sec: 136.5
2022-02-02 21:57:44,909 INFO fs.TestDFSIO:

Number of files: the number of MapTasks started; a common choice is the cluster's CPU core count minus 1
Total MBytes processed: the total amount of data processed (here 10 files × 128 MB = 1280 MB)
Throughput mb/sec: per-MapTask throughput, i.e. the total data processed ÷ the sum of every MapTask's write time
Aggregate cluster throughput ≈ number of MapTasks × per-MapTask throughput
Average IO rate mb/sec: the average of the individual MapTask rates (each task's data ÷ its write time, summed and divided by the number of tasks)
IO rate std deviation: the standard deviation across MapTasks; the smaller it is, the more balanced the tasks
Because the job is launched on hadoop102, the replica written on hadoop102 counts as local I/O; only the two replicas on hadoop103 and hadoop104 cross the network, so 10 files × 2 network replicas = 20 transfers are measured. At 1.49 MB/s per task, the measured aggregate is 1.49 × 20 ≈ 29.8 MB/s, close to the combined bandwidth of the three servers (3 × 12.5 MB/s = 37.5 MB/s), so the network is nearly saturated.
If the measured speed is far below the network speed and does not meet business needs, consider switching to SSDs or adding disks.
If the test program is not run on one of the cluster nodes (hadoop102, hadoop103, hadoop104), all 3 replicas cross the network and should be counted.
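
The same arithmetic as a quick sketch, using the values from the log above:

awk 'BEGIN {
  files = 10; per_task = 1.49          # Throughput mb/sec from the TestDFSIO summary
  net_replicas = 2                     # replicas that actually cross the network
  printf "measured aggregate ≈ %.1f MB/s (link budget: 3 x 12.5 = 37.5 MB/s)\n", files * net_replicas * per_task
}'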

Testing HDFS Read Performance

Next, test the read performance.

[root@hadoop102 /]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 128MB
2022-02-02 22:13:53,645 INFO fs.TestDFSIO: TestDFSIO.1.8
2022-02-02 22:13:53,646 INFO fs.TestDFSIO: nrFiles = 10
2022-02-02 22:13:53,647 INFO fs.TestDFSIO: nrBytes (MB) = 128.0
2022-02-02 22:13:53,647 INFO fs.TestDFSIO: bufferSize = 1000000
2022-02-02 22:13:53,647 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
2022-02-02 22:13:54,445 INFO fs.TestDFSIO: creating control file: 134217728 bytes, 10 files
2022-02-02 22:13:54,636 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,806 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,847 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,885 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,925 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,964 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:54,998 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:55,029 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:55,063 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:55,102 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:55,129 INFO fs.TestDFSIO: created control files for: 10 files
2022-02-02 22:13:55,221 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.216.103:8032
2022-02-02 22:13:55,427 INFO client.RMProxy: Connecting to ResourceManager at hadoop103/192.168.216.103:8032
2022-02-02 22:13:55,745 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1643810094463_0002
2022-02-02 22:13:55,789 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:55,985 INFO mapred.FileInputFormat: Total input files to process : 10
2022-02-02 22:13:56,007 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:56,044 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:56,074 INFO mapreduce.JobSubmitter: number of splits:10
2022-02-02 22:13:56,199 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:13:56,231 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1643810094463_0002
2022-02-02 22:13:56,231 INFO mapreduce.JobSubmitter: Executing with tokens: []
2022-02-02 22:13:56,395 INFO conf.Configuration: resource-types.xml not found
2022-02-02 22:13:56,395 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2022-02-02 22:13:56,458 INFO impl.YarnClientImpl: Submitted application application_1643810094463_0002
2022-02-02 22:13:56,505 INFO mapreduce.Job: The url to track the job: http://hadoop103:8088/proxy/application_1643810094463_0002/
2022-02-02 22:13:56,507 INFO mapreduce.Job: Running job: job_1643810094463_0002
2022-02-02 22:14:04,642 INFO mapreduce.Job: Job job_1643810094463_0002 running in uber mode : false
2022-02-02 22:14:04,644 INFO mapreduce.Job:  map 0% reduce 0%
2022-02-02 22:14:19,770 INFO mapreduce.Job:  map 40% reduce 0%
2022-02-02 22:14:20,791 INFO mapreduce.Job:  map 80% reduce 0%
2022-02-02 22:14:21,804 INFO mapreduce.Job:  map 100% reduce 0%
2022-02-02 22:14:24,826 INFO mapreduce.Job:  map 100% reduce 100%
2022-02-02 22:14:24,840 INFO mapreduce.Job: Job job_1643810094463_0002 completed successfully
2022-02-02 22:14:24,955 INFO mapreduce.Job: Counters: 53
        File System Counters
                FILE: Number of bytes read=854
                FILE: Number of bytes written=2395923
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1342179630
                HDFS: Number of bytes written=80
                HDFS: Number of read operations=55
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=139210
                Total time spent by all reduces in occupied slots (ms)=2793
                Total time spent by all map tasks (ms)=139210
                Total time spent by all reduce tasks (ms)=2793
                Total vcore-milliseconds taken by all map tasks=139210
                Total vcore-milliseconds taken by all reduce tasks=2793
                Total megabyte-milliseconds taken by all map tasks=142551040
                Total megabyte-milliseconds taken by all reduce tasks=2860032
        Map-Reduce Framework
                Map input records=10
                Map output records=50
                Map output bytes=748
                Map output materialized bytes=908
                Input split bytes=1230
                Combine input records=0
                Combine output records=0
                Reduce input groups=5
                Reduce shuffle bytes=908
                Reduce input records=50
                Reduce output records=5
                Spilled Records=100
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=5684
                CPU time spent (ms)=23730
                Physical memory (bytes) snapshot=3099643904
                Virtual memory (bytes) snapshot=28330016768
                Total committed heap usage (bytes)=2653421568
                Peak Map Physical memory (bytes)=298332160
                Peak Map Virtual memory (bytes)=2577387520
                Peak Reduce Physical memory (bytes)=183496704
                Peak Reduce Virtual memory (bytes)=2583261184
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1120
        File Output Format Counters
                Bytes Written=80
2022-02-02 22:14:24,993 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2022-02-02 22:14:25,013 INFO fs.TestDFSIO: ----- TestDFSIO ----- : read
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:             Date & time: Wed Feb 02 22:14:25 CST 2022
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:         Number of files: 10
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:  Total MBytes processed: 1280
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:       Throughput mb/sec: 104.7
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:  Average IO rate mb/sec: 138.98
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:   IO rate std deviation: 81.36
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:      Test exec time sec: 29.83
2022-02-02 22:14:25,013 INFO fs.TestDFSIO:

Reads are much faster than writes and even exceed the configured network bandwidth. That is because each of the three servers reads the nearest replica, i.e. from its local disk, so no data travels over the network.
Finally, clean up the test data.

[root@hadoop102 /]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar TestDFSIO -clean

HDFS - Multiple Directories

NameNode Multi-Directory Configuration

A single machine can host several NameNode directories whose contents are identical, which adds local redundancy. Add the following to hdfs-site.xml.

<property>
	<name>dfs.namenode.name.dir</name>
	<value>file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>
</property>

The value of ${hadoop.tmp.dir} is configured in /opt/module/hadoop-3.1.3/etc/hadoop/core-site.xml. Whether to distribute this file depends on whether every machine in the cluster has the same layout.
Stop the cluster, delete the data and logs directories on all three nodes, then format and start the cluster again.

# Delete the data and logs directories
[root@hadoop102 /]# rm -rf /opt/module/hadoop-3.1.3/data/
[root@hadoop102 /]# rm -rf /opt/module/hadoop-3.1.3/logs/
[root@hadoop103 /]# rm -rf /opt/module/hadoop-3.1.3/data/
[root@hadoop103 /]# rm -rf /opt/module/hadoop-3.1.3/logs/
[root@hadoop104 /]# rm -rf /opt/module/hadoop-3.1.3/data/
[root@hadoop104 /]# rm -rf /opt/module/hadoop-3.1.3/logs/
# Format the cluster
[root@hadoop102 hadoop-3.1.3]# bin/hdfs namenode -format
# Start the cluster
[root@hadoop102 hadoop-3.1.3]# myhadoop.sh start
# Check the result
[root@hadoop102 dfs]# pwd
/opt/module/hadoop-3.1.3/data/dfs
[root@hadoop102 dfs]# ll
总用量 0
drwx------. 3 root root 40 22 22:14 data
drwxr-xr-x. 3 root root 40 22 22:14 name1
drwxr-xr-x. 3 root root 40 22 22:14 name2

DataNode Multi-Directory Configuration

A DataNode can also be configured with multiple data directories; unlike the NameNode case, each directory holds different data. Modify hdfs-site.xml as follows.

<property>
	<name>dfs.datanode.data.dir</name>
	<value>file://${hadoop.tmp.dir}/dfs/data1,file://${hadoop.tmp.dir}/dfs/data2</value>
</property>
# Check the result
[root@hadoop102 dfs]# pwd
/opt/module/hadoop-3.1.3/data/dfs
[root@hadoop102 dfs]# ll
总用量 0
drwx------. 3 root root 40 22 22:14 data1
drwx------. 3 root root 40 22 22:14 data2
drwxr-xr-x. 3 root root 40 22 22:14 name1
drwxr-xr-x. 3 root root 40 22 22:14 name2
# Upload a file to the cluster and observe that the two data directories hold different content
[root@hadoop102 hadoop-3.1.3]# hadoop fs -put wcinput/word.txt /

Balancing Data Across the Disks of One Node

Hadoop 3.x adds a disk balancer that evens out data across the disks of a single DataNode; a newly added disk starts out empty, and the balancer migrates data onto it.

# Generate a balancing plan (this only succeeds when the node has multiple disks)
hdfs diskbalancer -plan hadoop103
# Execute the plan
hdfs diskbalancer -execute hadoop103.plan.json
# Check the status of the current balancing task
hdfs diskbalancer -query hadoop103
# Cancel the plan
hdfs diskbalancer -cancel hadoop103.plan.json

HDFS - Scaling the Cluster Out and In

In production, whitelists and blacklists help keep unauthorized hosts out of the cluster. The whitelist names the IPs (hosts) that are allowed to store data; a host that is not on the list can only act as an ordinary client and cannot store data.

Adding a Whitelist

In /opt/module/hadoop-3.1.3/etc/hadoop on the NameNode host, create the whitelist and blacklist files, and put the IPs or hostnames of hadoop102 and hadoop103 into whitelist. Then point hdfs-site.xml at the two files by adding the dfs.hosts and dfs.hosts.exclude properties.

<!-- Whitelist -->
<property>
  <name>dfs.hosts</name>
  <value>/opt/module/hadoop-3.1.3/etc/hadoop/whitelist</value>
</property>
<!-- Blacklist -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/opt/module/hadoop-3.1.3/etc/hadoop/blacklist</value>
</property>

Distribute the configuration file together with the whitelist and blacklist. The first time they are added the cluster must be restarted; afterwards refreshing the NameNode is enough.

[root@hadoop102 hadoop]# xsync.sh hdfs-site.xml whitelist blacklist
[root@hadoop102 hadoop]# myhadoop.sh stop
[root@hadoop102 hadoop]# myhadoop.sh start

On http://hadoop102:9870/dfshealth.html#tab-datanode only hadoop102 and hadoop103 are now listed.
Try an upload from hadoop104: it succeeds, but if you open NOTICE.txt in http://hadoop102:9870/explorer.html#/, the Availability column shows only hadoop102 and hadoop103.

[root@hadoop104 hadoop-3.1.3]# hadoop fs -put NOTICE.txt /

Add hadoop104 to the whitelist and refresh the NameNode; the HDFS web UI then shows 3 nodes again.

[root@hadoop102 hadoop]# vim whitelist
[root@hadoop102 hadoop]# hdfs dfsadmin -refreshNodes
Refresh nodes successful

Commissioning a New Server

As the business grows, the cluster needs more servers. Clone a new VM from hadoop100 and name it hadoop105; choose the "create a full clone" option.
Configure a static IP and the hostname, then reboot.

# Change IPADDR to 192.168.216.105
[root@hadoop100 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# Change the hostname to hadoop105
[root@hadoop100 ~]# vim /etc/hostname
# Reboot; after the reboot the hostname is hadoop105
[root@hadoop100 ~]# reboot
# On hadoop102, copy everything under the module directory to hadoop105
[root@hadoop102 module]# scp -r ./* root@hadoop105:/opt/module/
# Copy hadoop102's environment variables to hadoop105
[root@hadoop102 module]# scp /etc/profile.d/my_env.sh root@hadoop105:/etc/profile.d/my_env.sh
# Source the profile on hadoop105 so the environment variables take effect
[root@hadoop105 module]# source /etc/profile
# Delete data and logs on hadoop105
[root@hadoop105 hadoop-3.1.3]# rm -rf data/ logs/
# Set up passwordless SSH from the other machines to hadoop105
[root@hadoop102 module]# ssh-copy-id hadoop105
[root@hadoop103 module]# ssh-copy-id hadoop105
[root@hadoop104 module]# ssh-copy-id hadoop105

Start the DataNode and NodeManager on hadoop105 individually.

[root@hadoop105 hadoop-3.1.3]# hdfs --daemon start datanode
[root@hadoop105 hadoop-3.1.3]# yarn --daemon start nodemanager
[root@hadoop105 hadoop-3.1.3]# jps
12375 DataNode
12765 Jps
12638 NodeManager

Add hadoop105 to xsync.sh so that future file distributions also reach hadoop105.

# Add hadoop105 to the whitelist
[root@hadoop102 hadoop]# cd /opt/module/hadoop-3.1.3/etc/hadoop
[root@hadoop102 hadoop]# vim whitelist
# Distribute the configuration
[root@hadoop102 hadoop]# xsync.sh hdfs-site.xml whitelist blacklist
# Refresh the nodes
[root@hadoop102 hadoop]# hdfs dfsadmin -refreshNodes
# Upload a file from hadoop105
[root@hadoop105 hadoop-3.1.3]# hadoop fs -put LICENSE.txt /

Balancing Data Between Servers

A newly added node starts out with little or no data. To spread data evenly across the cluster, Hadoop provides a balancer command.

# -threshold 10 means the disk usage of the nodes may differ by at most 10 percentage points
[root@hadoop102 hadoop-3.1.3]# sbin/start-balancer.sh -threshold 10
# If balancing takes too long, it can be stopped manually
[root@hadoop102 hadoop-3.1.3]# sbin/stop-balancer.sh

Decommissioning a Server with the Blacklist

Hosts on the blacklist are not allowed to store data; in practice the blacklist is how servers are decommissioned.

# Add hadoop105 to the blacklist
[root@hadoop102 hadoop]# vim blacklist
# Distribute the blacklist
[root@hadoop102 hadoop]# xsync.sh blacklist
# Refresh the NameNode
[root@hadoop102 hadoop]# hdfs dfsadmin -refreshNodes
# Before hadoop105 is decommissioned its data is copied to other nodes, which may pile data up on one node, so running the balancer afterwards is recommended
[root@hadoop102 hadoop-3.1.3]# sbin/start-balancer.sh -threshold 10
# After decommissioning, the services on hadoop105 are still running even though the node has left the cluster, so stop them manually
[root@hadoop105 hadoop-3.1.3]# hdfs --daemon stop datanode
[root@hadoop105 hadoop-3.1.3]# yarn --daemon stop nodemanager
[root@hadoop105 hadoop-3.1.3]# jps
30933 Jps

HDFS - Storage Optimization

Clone a new VM hadoop106 from hadoop105, change its IP address and hostname, reboot, and delete data and logs under the Hadoop directory.

[root@hadoop105 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
[root@hadoop105 ~]# vim /etc/hostname
[root@hadoop105 ~]# reboot
[root@hadoop106 ~]# cd /opt/module/hadoop-3.1.3/
[root@hadoop106 hadoop-3.1.3]# rm -rf data/ logs/
# Set up passwordless SSH from the other machines to hadoop106
[root@hadoop102 ~]# ssh-copy-id hadoop106
[root@hadoop103 ~]# ssh-copy-id hadoop106
[root@hadoop104 ~]# ssh-copy-id hadoop106
[root@hadoop105 ~]# ssh-copy-id hadoop106

Running ssh-copy-id on hadoop105 fails at first because hadoop105 has no key pair to copy, so generate one and copy it.

# Generate a key pair (press Enter three times)
[root@hadoop105 .ssh]# ssh-keygen -t rsa
# Send the key to hadoop106
[root@hadoop105 .ssh]# ssh-copy-id hadoop106

Delete data and logs on hadoop102, hadoop103, hadoop104, and hadoop105. Add hadoop105 and hadoop106 to the jpsall and xsync scripts so every server can be checked and synced at once. On hadoop102, clear the blacklist to release hadoop105, add hadoop106 to the whitelist, and add hadoop105 and hadoop106 to workers.
Revert the DataNode and NameNode multi-directory settings in hdfs-site.xml.

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/data</value>
  </property>

Distribute the Hadoop directory to hadoop102 through hadoop106. Because data and logs were deleted, the NameNode must be formatted before the first start.

# Format the NameNode
[root@hadoop102 hadoop-3.1.3]# hdfs namenode -format
# Start the cluster
[root@hadoop102 hadoop-3.1.3]# myhadoop.sh start
# Check that the cluster started
[root@hadoop102 hadoop-3.1.3]# jpsall.sh

Erasure Coding

How Erasure Coding Works

By default HDFS keeps 3 replicas of every file, which improves reliability but costs a lot of extra disk space. Hadoop 3.x introduces erasure coding, which derives parity data by computation instead of storing full replicas and can save roughly 50% of the storage space.

# Erasure coding commands
[root@hadoop102 hadoop-3.1.3]# hdfs ec
Usage: bin/hdfs ec [COMMAND]
          [-listPolicies]
          [-addPolicies -policyFile <file>]
          [-getPolicy -path <path>]
          [-removePolicy -policy <policy>]
          [-setPolicy -path <path> [-policy <policy>] [-replicate]]
          [-unsetPolicy -path <path>]
          [-listCodecs]
          [-enablePolicy -policy <policy>]
          [-disablePolicy -policy <policy>]
          [-help <command-name>]
# List the erasure coding policies; the policy whose State is ENABLED is the one currently in effect
[root@hadoop102 hadoop-3.1.3]# hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5], State=DISABLED
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2], State=DISABLED
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1], State=ENABLED
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=3], State=DISABLED
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4], State=DISABLED

The RS-6-3-1024k policy is enabled. It means: with Reed-Solomon coding, every 6 data units produce 3 parity units, 9 units in total; any 6 surviving units (data or parity, as long as 6 remain) are enough to reconstruct the original data. Each unit (cell) is 1024k = 1024 × 1024 = 1,048,576 bytes. The other policies follow the same pattern.
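
A quick comparison of the extra space each scheme needs, which is where the roughly 50% savings figure comes from:

awk 'BEGIN {
  data = 6; parity = 3
  printf "3-way replication: 2 extra copies -> 200%% overhead\n"
  printf "RS-%d-%d: %d parity units per %d data units -> %.0f%% overhead\n", data, parity, parity, data, parity / data * 100
}'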

Erasure Coding Hands-On

An erasure coding policy is set on a directory, and every file written under that directory uses it. With 5 DataNodes available here, use the RS-3-2-1024k policy.

# First enable the RS-3-2-1024k policy
[root@hadoop102 hadoop-3.1.3]# hdfs ec -enablePolicy -policy RS-3-2-1024k
Erasure coding policy RS-3-2-1024k is enabled
# Create an /input directory
[root@hadoop102 hadoop-3.1.3]# hdfs dfs -mkdir /input
# Apply the RS-3-2-1024k policy to /input
[root@hadoop102 hadoop-3.1.3]# hdfs ec -setPolicy -path /input -policy RS-3-2-1024k

Upload a file larger than 2 MB through the web UI and look at how it is stored: its Replication column shows 1. Delete the block files on any 2 machines (under /opt/module/hadoop-3.1.3/data/dfs/data/current/BP-xxx/current/finalized/subdir0/subdir0); the file can still be downloaded from the web UI, and after a while the deleted blocks are reconstructed. Delete the block files on 3 machines and the download no longer works.
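
A quick way to double-check the policy and produce a test file larger than 2 MB; the tarball path is the one used later in the fio tests and is assumed to exist on hadoop102:

hdfs ec -getPolicy -path /input                    # should print RS-3-2-1024k
hadoop fs -put /opt/software/hadoop-3.1.3.tar.gz /input
hdfs fsck /input -files -blocks -locations         # shows the erasure-coded block group layout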

Heterogeneous Storage (Hot/Cold Data Separation)

Heterogeneous storage puts different kinds of data on different kinds of disks so that each workload gets the best-suited medium.
Storage types:
RAM_DISK: an in-memory file system
SSD: solid-state drive
DISK: ordinary hard disk; in HDFS, a data directory whose storage type is not declared defaults to DISK
ARCHIVE: not a specific medium, usually dense storage with little compute power, used for capacity growth and archiving

Policy ID  Policy name    Replica placement       Description
15         Lazy_Persist   RAM_DISK:1, DISK:n-1    One replica in RAM_DISK, the rest on DISK
12         All_SSD        SSD:n                   All replicas on SSD
10         One_SSD        SSD:1, DISK:n-1         One replica on SSD, the rest on DISK
7          Hot (default)  DISK:n                  All replicas on DISK
5          Warm           DISK:1, ARCHIVE:n-1     One replica on DISK, the rest on ARCHIVE
2          Cold           ARCHIVE:n               All replicas on ARCHIVE

Heterogeneous Storage Shell Commands

# List the available storage policies
[root@hadoop102 hadoop-3.1.3]# hdfs storagepolicies -listPolicies
Block Storage Policies:
        BlockStoragePolicy{PROVIDED:1, storageTypes=[PROVIDED, DISK], creationFallbacks=[PROVIDED, DISK], replicationFallbacks=[PROVIDED, DISK]}
        BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
        BlockStoragePolicy{WARM:5, storageTypes=[DISK, ARCHIVE], creationFallbacks=[DISK, ARCHIVE], replicationFallbacks=[DISK, ARCHIVE]}
        BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
        BlockStoragePolicy{ONE_SSD:10, storageTypes=[SSD, DISK], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}
        BlockStoragePolicy{ALL_SSD:12, storageTypes=[SSD], creationFallbacks=[DISK], replicationFallbacks=[DISK]}
        BlockStoragePolicy{LAZY_PERSIST:15, storageTypes=[RAM_DISK, DISK], creationFallbacks=[DISK], replicationFallbacks=[DISK]}
# Set a storage policy on a path
[root@hadoop102 hadoop-3.1.3]# hdfs storagepolicies -setStoragePolicy -path xxx -policy xxx
# Get the storage policy of a path
[root@hadoop102 hadoop-3.1.3]# hdfs storagepolicies -getStoragePolicy -path xxx
# Unset the storage policy; the directory or file then inherits its parent's policy, or HOT if it is the root directory
[root@hadoop102 hadoop-3.1.3]# hdfs storagepolicies -unsetStoragePolicy -path xxx
# Show the block locations of a file
[root@hadoop102 hadoop-3.1.3]# bin/hdfs fsck xxx -files -blocks -locations
# List the cluster's DataNodes
[root@hadoop102 hadoop-3.1.3]# hadoop dfsadmin -report

Test Environment Setup

Node       Storage types
hadoop102  RAM_DISK, SSD
hadoop103  SSD, DISK
hadoop104  DISK, RAM_DISK
hadoop105  ARCHIVE
hadoop106  ARCHIVE
Modify hdfs-site.xml on hadoop102, hadoop103, hadoop104, hadoop105, and hadoop106 separately, since each node declares its own storage types.
Configuration added on hadoop102
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[SSD]file:///opt/module/hadoop3.1.3/hdfsdata/ssd,[RAM_DISK]file:///opt/module/hadoop3.1.3/hdfsdata/ram_disk</value>
</property>

Configuration added on hadoop103

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[SSD]file:///opt/module/hadoop3.1.3/hdfsdata/ssd,[DISK]file:///opt/module/hadoop3.1.3/hdfsdata/disk</value>
</property>

Configuration added on hadoop104

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[RAM_DISK]file:///opt/module/hdfsdata/ram_disk,[DISK]file:///opt/module/hadoop-3.1.3/hdfsdata/disk</value>
</property>

Configuration added on hadoop105

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[ARCHIVE]file:///opt/module/hadoop3.1.3/hdfsdata/archive</value>
</property>

Configuration added on hadoop106

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.storage.policy.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>[ARCHIVE]file:///opt/module/hadoop3.1.3/hdfsdata/archive</value>
</property>
# Format the NameNode
[root@hadoop102 hadoop]# hdfs namenode -format
# Start the cluster
[root@hadoop102 hadoop]# myhadoop.sh start
# Create a directory on HDFS
[root@hadoop102 hadoop]# hadoop fs -mkdir /hdfsdata
# Upload a file
[root@hadoop102 hadoop]# hadoop fs -put /opt/module/hadoop-3.1.3/NOTICE.txt /hdfsdata

HOT Storage Policy (DISK: n)

# Check the current storage policy
[root@hadoop102 hadoop]# hdfs storagepolicies -getStoragePolicy -path /hdfsdata
The storage policy of /hdfsdata is unspecified
# Show the block locations
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 08:20:27 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.103:9866,DS-991746c1-da89-437b-9445-c22eddfbf77d,DISK], DatanodeInfoWithStorage[192.168.216.104:9866,DS-09552d44-0b43-4cac-8f44-e6288bf9688f,DISK]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 08:20:27 CST 2022 in 5 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

With no policy set, the default HOT policy applies and all block replicas are stored on DISK.

WARM Storage Policy (DISK: 1, ARCHIVE: n-1)

# Set the storage policy of /hdfsdata to WARM
[root@hadoop102 hadoop]# hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy WARM
# Check the block locations; the blocks are still where they were
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
# Let HDFS move the blocks according to the storage policy
[root@hadoop102 hadoop]# hdfs mover /hdfsdata
# Check the block locations again
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 08:48:25 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.106:9866,DS-8545a2d1-fbfd-4ca2-976a-cc863093171e,ARCHIVE], DatanodeInfoWithStorage[192.168.216.104:9866,DS-09552d44-0b43-4cac-8f44-e6288bf9688f,DISK]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 08:48:25 CST 2022 in 1 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

One replica is now on DISK and the other on ARCHIVE, which matches the WARM policy.

COLD Storage Policy (ARCHIVE: n)

# Set the storage policy to COLD
[root@hadoop102 hadoop]# hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy COLD
# Move the blocks manually
[root@hadoop102 hadoop]# hdfs mover /hdfsdata
# Check the block locations
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 08:54:36 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.106:9866,DS-8545a2d1-fbfd-4ca2-976a-cc863093171e,ARCHIVE], DatanodeInfoWithStorage[192.168.216.105:9866,DS-d2981971-0228-487e-a09b-a7f7c85ff82c,ARCHIVE]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 08:54:36 CST 2022 in 1 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

All block replicas are now on ARCHIVE, which matches the COLD policy.
Note: if a directory is given the COLD policy but no ARCHIVE storage is configured, uploading files to it fails with an exception.

ONE_SSD Storage Policy (SSD: 1, DISK: n-1)

# Set the storage policy to One_SSD
[root@hadoop102 hadoop]# hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy One_SSD
# Move the blocks manually
[root@hadoop102 hadoop]# hdfs mover /hdfsdata
# Check the block locations
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 08:59:59 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.104:9866,DS-09552d44-0b43-4cac-8f44-e6288bf9688f,DISK], DatanodeInfoWithStorage[192.168.216.103:9866,DS-735e8807-482c-4abb-85d0-bce4d7677218,SSD]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 08:59:59 CST 2022 in 2 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

One replica is on SSD and the other on DISK, which matches the One_SSD policy.

ALL_SSD Storage Policy (SSD: n)

# Set the storage policy to All_SSD
[root@hadoop102 hadoop]# hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy All_SSD
# Move the blocks manually
[root@hadoop102 hadoop]# hdfs mover /hdfsdata
# Check the block locations
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 09:01:42 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.102:9866,DS-5e773de4-eea4-4938-afd5-032d184f1d71,SSD], DatanodeInfoWithStorage[192.168.216.103:9866,DS-735e8807-482c-4abb-85d0-bce4d7677218,SSD]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 09:01:42 CST 2022 in 2 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

All replicas are on SSD, which matches the All_SSD policy.

LAZY_PERSIST Storage Policy (RAM_DISK: 1, DISK: n-1)

# Set the storage policy to lazy_persist
[root@hadoop102 hadoop]# hdfs storagepolicies -setStoragePolicy -path /hdfsdata -policy lazy_persist
# Move the blocks manually
[root@hadoop102 hadoop]# hdfs mover /hdfsdata
# Check the block locations
[root@hadoop102 hadoop]# hdfs fsck /hdfsdata -files -blocks -locations
Connecting to namenode via http://hadoop102:9870/fsck?ugi=root&files=1&blocks=1&locations=1&path=%2Fhdfsdata
FSCK started by root (auth:SIMPLE) from /192.168.216.102 for path /hdfsdata at Wed Mar 16 09:03:08 CST 2022
/hdfsdata <dir>
/hdfsdata/NOTICE.txt 21867 bytes, replicated: replication=2, 1 block(s):  OK
0. BP-401178567-192.168.216.102-1647389537203:blk_1073741825_1001 len=21867 Live_repl=2  [DatanodeInfoWithStorage[192.168.216.104:9866,DS-09552d44-0b43-4cac-8f44-e6288bf9688f,DISK], DatanodeInfoWithStorage[192.168.216.103:9866,DS-991746c1-da89-437b-9445-c22eddfbf77d,DISK]]


Status: HEALTHY
 Number of data-nodes:  5
 Number of racks:               1
 Total dirs:                    1
 Total symlinks:                0

Replicated Blocks:
 Total size:    21867 B
 Total files:   1
 Total blocks (validated):      1 (avg. block size 21867 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     2.0
 Missing blocks:                0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)

Erasure Coded Block Groups:
 Total size:    0 B
 Total files:   0
 Total block groups (validated):        0
 Minimally erasure-coded block groups:  0
 Over-erasure-coded block groups:       0
 Under-erasure-coded block groups:      0
 Unsatisfactory placement block groups: 0
 Average block group size:      0.0
 Missing block groups:          0
 Corrupt block groups:          0
 Missing internal blocks:       0
FSCK ended at Wed Mar 16 09:03:08 CST 2022 in 1 milliseconds


The filesystem under path '/hdfsdata' is HEALTHY

All blocks are still stored on DISK, although in theory one replica should be in RAM_DISK and the rest on DISK. This is because the dfs.datanode.max.locked.memory and dfs.block.size parameters also have to be configured.
When the LAZY_PERSIST policy is set, there are two situations in which every block still ends up on DISK:

  1. The DataNode hosting the client has no RAM_DISK, so the block is written to that DataNode's DISK and the other replicas go to DISK on other nodes
  2. The DataNode hosting the client has RAM_DISK, but dfs.datanode.max.locked.memory is unset or smaller than dfs.block.size, so the block is again written to that DataNode's DISK and the other replicas go to DISK on other nodes

The VM's operating system also limits lockable memory: ulimit -a shows a max locked memory of 64 KB, and setting the HDFS parameter above that limit causes an error. In short, HDFS discourages keeping large amounts of data in memory, since it is lost on power failure.
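
To inspect the relevant limits on a node (dfs.block.size is the older alias; the current configuration key is dfs.blocksize):

ulimit -a | grep "max locked memory"                  # OS limit on lockable memory
hdfs getconf -confKey dfs.datanode.max.locked.memory  # 0 by default
hdfs getconf -confKey dfs.blocksize                   # 134217728 (128 MB) by default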

HDFS - Troubleshooting

Handling a NameNode Failure

# Simulate a NameNode crash by killing the NameNode process
[root@hadoop102 ~]# kill -9 6919
# Start the NameNode on its own
[root@hadoop102 ~]# hdfs --daemon start namenode
# Simulate loss of the NameNode's metadata
[root@hadoop102 name]# rm -rf /opt/module/hadoop-3.1.3/data/dfs/name/*
# Kill the NameNode process again
[root@hadoop102 ~]# kill -9 9969
# Try to start the NameNode on its own; this time it fails to start
[root@hadoop102 ~]# hdfs --daemon start namenode
# Check the log
[root@hadoop102 name]# cd /opt/module/hadoop-3.1.3/logs/
[root@hadoop102 logs]# tail -200f hadoop-root-namenode-hadoop102.log
# The log shows java.io.IOException: NameNode is not formatted. Copy the metadata over from the SecondaryNameNode on another machine
[root@hadoop102 name]# pwd
/opt/module/hadoop-3.1.3/data/dfs/name
[root@hadoop102 name]# scp -r root@hadoop104:/opt/module/hadoop-3.1.3/data/dfs/namesecondary/* ./
# Start the NameNode again
[root@hadoop102 name]# hdfs --daemon start namenode

Cluster Safe Mode & Disk Repair

Safe mode: the cluster only serves reads; writes, deletes, and modifications are rejected.
The NameNode is in safe mode:

  1. While it is loading the fsimage and edit log
  2. While it is receiving DataNode registrations and block reports

Conditions for leaving safe mode (the commands after this list read back the current values):

  1. dfs.namenode.safemode.min.datanodes: minimum number of live DataNodes, default 0
  2. dfs.namenode.safemode.threshold-pct: fraction of blocks that must satisfy their minimum replication, default 0.999f, i.e. 99.9% of blocks must be available
  3. dfs.namenode.safemode.extension: how long the above conditions must hold before safe mode is left, default 30000, i.e. 30 seconds
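
The effective values on a running cluster can be read back with hdfs getconf, for example:

hdfs getconf -confKey dfs.namenode.safemode.min.datanodes
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct
hdfs getconf -confKey dfs.namenode.safemode.extension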

Basic commands:

# Check the safe mode status
bin/hdfs dfsadmin -safemode get
# Enter safe mode
bin/hdfs dfsadmin -safemode enter
# Leave safe mode
bin/hdfs dfsadmin -safemode leave
# Wait until safe mode is off
bin/hdfs dfsadmin -safemode wait

Case 1: the cluster is in safe mode right after startup

Immediately after the cluster starts, a delete operation is rejected with a message saying the cluster is in safe mode.

Case 2: disk repair

On hadoop102, hadoop103, and hadoop104, go into /opt/module/hadoop-3.1.3/data/dfs/data/current/BP-1505060178-192.168.216.102-1624977421765/current/finalized/subdir0/subdir0 and delete the files of two blocks.

[root@hadoop102 subdir0]# pwd
/opt/module/hadoop-3.1.3/data/dfs/data/current/BP-1505060178-192.168.216.102-1624977421765/current/finalized/subdir0/subdir0
# Delete the block files
[root@hadoop102 subdir0]# rm -rf blk_1073742023 blk_1073742023_1200.meta
[root@hadoop102 subdir0]# rm -rf blk_1073742024 blk_1073742024_1201.meta
# Restart the cluster
[root@hadoop102 subdir0]# myhadoop.sh stop
[root@hadoop102 subdir0]# myhadoop.sh start

Open http://hadoop102:9870/dfshealth.html#tab-overview in a browser; a warning about the missing blocks appears at the top.

# Check whether safe mode is on
[root@hadoop102 subdir0]# hdfs dfsadmin -safemode get
Safe mode is ON
# Leave safe mode
[root@hadoop102 subdir0]# hdfs dfsadmin -safemode leave
Safe mode is OFF

The cluster will still re-enter safe mode on later restarts. There are two fixes: restore the lost block data, or delete the metadata of the missing files; after that http://hadoop102:9870/dfshealth.html#tab-overview looks normal again.

Case 3: waiting for safe mode to end

Create a script safemode.sh under /opt/module/hadoop-3.1.3.

#!/bin/bash
hdfs dfsadmin -safemode wait
hdfs dfs -put /opt/module/hadoop-3.1.3/README.txt /
# Make it executable
[root@hadoop102 hadoop-3.1.3]# chmod 777 safemode.sh
# Enter safe mode
[root@hadoop102 hadoop-3.1.3]# hdfs dfsadmin -safemode enter
# Run safemode.sh; it blocks on the wait command
[root@hadoop102 hadoop-3.1.3]# ./safemode.sh
# In a new terminal, leave safe mode; the first window reports that safe mode is off and README.txt is uploaded to HDFS
[root@hadoop102 hadoop-3.1.3]# hdfs dfsadmin -safemode leave

Slow Disk Monitoring

A slow disk is one that writes data abnormally slowly. As machines age, disk performance degrades until slow-disk problems appear.
The heartbeat interval between a DataNode and the NameNode is normally 3 seconds; heartbeats that regularly take longer than that indicate a problem. Disk speed can also be measured with fio; prepare a test file in advance.

# Install fio
[root@hadoop102 hadoop-3.1.3]# yum -y install fio
# Sequential read test (-rw=read): the disk's sequential read speed is 193 MiB/s
[root@hadoop102 hadoop-3.1.3]# fio -filename=/opt/software/hadoop-3.1.3.tar.gz -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=60 -group_reporting -name=test_r
...
Run status group 0 (all jobs):
   READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=11.3GiB (12.1GB), run=60001-60001msec
...
# Sequential write test (-rw=write): the disk's sequential write speed is 195 MiB/s
[root@hadoop102 hadoop-3.1.3]# fio -filename=/opt/software/hadoop-3.1.3.tar.gz -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=60 -group_reporting -name=test_r
...
Run status group 0 (all jobs):
  WRITE: bw=195MiB/s (204MB/s), 195MiB/s-195MiB/s (204MB/s-204MB/s), io=11.4GiB (12.2GB), run=60001-60001msec
...
# Random write test (-rw=randwrite): the disk's random write speed is 184 MiB/s
[root@hadoop102 hadoop-3.1.3]# fio -filename=/opt/software/hadoop-3.1.3.tar.gz -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=60 -group_reporting -name=test_r
...
Run status group 0 (all jobs):
  WRITE: bw=184MiB/s (192MB/s), 184MiB/s-184MiB/s (192MB/s-192MB/s), io=10.8GiB (11.5GB), run=60001-60001msec
...
# Mixed random read/write test (-rw=randrw): random read 91.8 MiB/s, random write 91.8 MiB/s
[root@hadoop102 hadoop-3.1.3]# fio -filename=/opt/software/hadoop-3.1.3.tar.gz -direct=1 -iodepth 1 -thread -rw=randrw -ioengine=psync -bs=16k -size=2G -numjobs=10 -runtime=60 -group_reporting -name=test_r
...
Run status group 0 (all jobs):
   READ: bw=91.8MiB/s (96.3MB/s), 91.8MiB/s-91.8MiB/s (96.3MB/s-96.3MB/s), io=5508MiB (5776MB), run=60002-60002msec
  WRITE: bw=91.8MiB/s (96.2MB/s), 91.8MiB/s-91.8MiB/s (96.2MB/s-96.2MB/s), io=5506MiB (5773MB), run=60002-60002msec
...

Archiving Small Files

The metadata of every block lives in NameNode memory, so 100 files of 1 KB cost the NameNode as much memory as 100 files of 128 MB; large numbers of small files therefore waste NameNode memory. One remedy is the HDFS archive (HAR) file: it packs many small files into HDFS blocks, cutting NameNode memory usage while still allowing transparent access to the individual files. Inside the archive the files remain independent, but to the NameNode the archive is a single unit, so the NameNode needs far less memory.

# Start YARN (archiving runs a MapReduce job)
[root@hadoop102 hadoop-3.1.3]# start-yarn.sh
# Archive the files under /wcinput into wcinput.har and store the archive under /wcoutput
[root@hadoop102 hadoop-3.1.3]# hadoop archive -archiveName wcinput.har -p /wcinput /wcoutput
# Inspect the archive
[root@hadoop102 hadoop-3.1.3]# hadoop fs -ls /wcoutput/wcinput.har
[root@hadoop102 hadoop-3.1.3]# hadoop fs -ls har:///wcoutput/wcinput.har
# Extract the archived files to the HDFS root directory
[root@hadoop102 hadoop-3.1.3]# hadoop fs -cp har:///wcoutput/wcinput.har/* /

HDFS - Cluster Migration

Copying Data Between Two Apache Clusters

Files can be copied between two remote hosts with the scp command.

# Push a.txt from this machine to /opt/ on hadoop103
scp -r a.txt root@hadoop103:/opt/a.txt
# Pull /opt/a.txt from hadoop103 to this machine
scp -r root@hadoop103:/opt/a.txt a.txt
# From hadoop102, copy /opt/a.txt on hadoop103 to /opt/a.txt on hadoop104
scp -r root@hadoop103:/opt/a.txt root@hadoop104:/opt/a.txt

Hadoop also provides the distcp command for copying data between clusters.

# Copy a.txt from the hadoop102 cluster to the hadoop103 cluster
[root@hadoop102 hadoop-3.1.3]# hadoop distcp hdfs://hadoop102:8020/opt/a.txt hdfs://hadoop103:8020/opt/a.txt

Copying Data Between Apache and CDH Clusters

MapReduce Production Experience

Why MapReduce Jobs Run Slowly

Machine performance: CPU, memory, disk, network.
I/O-related issues: data skew, Map tasks that run too long and keep the Reduce tasks waiting, too many small files.

Common MapReduce Tuning Parameters

Map phase (a -D override sketch follows this list):

  1. Use a custom partitioner to reduce data skew: write a class that extends Partitioner and override getPartition
  2. Reduce the number of spills: mapreduce.task.io.sort.mb is the shuffle ring buffer size, default 100m, and can be raised to 200m; mapreduce.map.sort.spill.percent is the ring buffer's spill threshold, default 80%, and can be raised to 90%
  3. Merge more spill files per merge pass: mapreduce.task.io.sort.factor defaults to 10 and can be raised to 20
  4. Use a Combiner early when it does not change the business result: job.setCombinerClass(xxxReducer.class);
  5. Compress map output with Snappy or LZO to reduce disk IO: conf.setBoolean("mapreduce.map.output.compress", true); conf.setClass("mapreduce.map.output.compress.codec", SnappyCodec.class, CompressionCodec.class);
  6. mapreduce.map.memory.mb: the MapTask memory cap, default 1024MB; raise it following the rule of roughly 1G of memory per 128m of input
  7. mapreduce.map.java.opts: the MapTask heap size; if it is too small the task fails with java.lang.OutOfMemoryError
  8. mapreduce.map.cpu.vcores: CPU cores per MapTask, default 1; increase it for compute-intensive jobs
  9. Retries: mapreduce.map.maxattempts is the maximum number of attempts per MapTask before it is considered failed, default 4; raise it as the hardware allows
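
As a sketch (not part of the original walkthrough), a few of these map-side knobs can be tried on a single job with -D overrides instead of editing mapred-site.xml; the WordCount example and the /input and /output paths match those used at the end of this note, and the values are illustrative:

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount \
  -D mapreduce.task.io.sort.mb=200 \
  -D mapreduce.map.sort.spill.percent=0.90 \
  -D mapreduce.task.io.sort.factor=20 \
  /input /output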

Reduce phase:

  1. mapreduce.reduce.shuffle.parallelcopies: the number of parallel fetchers each Reduce uses to pull map output, default 5; can be raised to 10
  2. mapreduce.reduce.shuffle.input.buffer.percent: the share of Reduce memory used for the shuffle buffer, default 0.7; can be raised to 0.8
  3. mapreduce.reduce.shuffle.merge.percent: the buffer fill ratio at which data starts spilling to disk, default 0.66; can be raised to 0.75
  4. mapreduce.reduce.memory.mb: the ReduceTask memory cap, default 1024MB; following the 1G per 128m rule, raise it to 4-6G as appropriate
  5. mapreduce.reduce.java.opts: the ReduceTask heap size; if it is too small the task fails with java.lang.OutOfMemoryError
  6. mapreduce.reduce.cpu.vcores: CPU cores per ReduceTask, default 1; can be raised to 2-4
  7. mapreduce.reduce.maxattempts: the maximum number of attempts per ReduceTask before it is considered failed, default 4
  8. mapreduce.job.reduce.slowstart.completedmaps: the fraction of MapTasks that must finish before resources are requested for the ReduceTasks, default 0.05
  9. mapreduce.task.timeout: if a task makes no progress for this long (reads no new data and produces no output), it is considered blocked, possibly forever; to keep user code from hanging a job indefinitely, the task is forcibly killed after this timeout (in milliseconds), default 600000 (10 minutes). Increase it if processing a single record legitimately takes a long time
  10. Skip the Reduce phase entirely whenever the job allows it

MapReduce Data Skew

Data skew: one partition receives far more data than the others, so that partition's task dominates the job's running time.
Ways to reduce data skew:

  1. Check for null keys: either filter them out, or use a custom partitioner to scatter them randomly and aggregate again in a second pass
  2. Do as much work as possible on the map side, e.g. with a Combiner or a Map Join
  3. Increase the number of reducers

Hadoop YARN Production Experience

See the YARN notes.

Common Tuning Parameters

Using the Capacity Scheduler

Using the Fair Scheduler

Hadoop Overall Tuning

Optimizing for Small Files

On HDFS, too many small files inflate NameNode memory usage, reduce memory efficiency, and slow down metadata lookups. For MapReduce, many small files mean many splits and therefore many MapTasks, each handling very little data, so a job can spend more time starting tasks than computing.
Remedies:

  1. During data collection, avoid uploading small files to HDFS; if that cannot be avoided, merge them into large files before uploading
  2. Pack batches of small files into a HAR archive to reduce NameNode memory usage
  3. Use CombineTextInputFormat so that many small files produce one or only a few input splits
  4. Enable uber mode for JVM reuse: by default every Task starts its own JVM; when the Tasks are small, the Tasks of one Job can share a single JVM instead of starting one JVM per Task.

Add the following to mapred-site.xml to enable uber mode, then distribute the configuration.

<!-- Enable uber mode; disabled by default -->
<property>
  <name>mapreduce.job.ubertask.enable</name>
  <value>true</value>
</property>
<!-- Maximum number of map tasks in an uber job; may only be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxmaps</name>
  <value>9</value>
</property>
<!-- Maximum number of reduce tasks in an uber job; may only be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxreduces</name>
  <value>1</value>
</property>
<!-- Maximum input size for an uber job; defaults to the value of dfs.blocksize and may only be lowered -->
<property>
  <name>mapreduce.job.ubertask.maxbytes</name>
  <value></value>
</property>

Testing MapReduce Compute Performance

Use the Sort program to benchmark MapReduce. Do not attempt this on a VM with less than 150 GB of disk, and expect it to run for a long time.

# Use RandomWriter to generate random data: each node runs 10 map tasks, each producing about 1 GB of binary random data
[root@hadoop102 hadoop-3.1.3]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar randomwriter random-data
# Run the Sort program
[root@hadoop102 hadoop-3.1.3]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar sort random-data sorted-data
# Verify that the data really is sorted
[root@hadoop102 hadoop-3.1.3]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.3-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data

An Enterprise Scenario Example

Requirements

Count word occurrences in 1 GB of data on a 3-server cluster, each server with a 4-core CPU, 4 threads, and 4 GB of memory.
1 GB of input needs 1024 / 128 = 8 MapTasks, plus 1 ReduceTask and 1 MRAppMaster, i.e. 10 tasks in total, or 10 / 3 ≈ 3 tasks per node on average.
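
The same sizing arithmetic as a one-liner; change the input or block size to re-plan:

awk 'BEGIN { maps = 1024 / 128; total = maps + 1 + 1; printf "%d map + 1 reduce + 1 AM = %d tasks, about %d per node\n", maps, total, total / 3 }'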

HDFS Parameter Tuning

Edit hadoop-env.sh

export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"
export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"

Edit hdfs-site.xml

<!-- NameNode handler thread pool; default is 10, set to 21 here (20 × ln(3) for a 3-node cluster) -->
<property>
  <name>dfs.namenode.handler.count</name>
  <value>21</value>
</property>

Edit core-site.xml

<!-- Keep deleted files in the trash for 60 minutes -->
<property>
  <name>fs.trash.interval</name>
  <value>60</value>
</property>

Distribute the configuration.

MapReduce Parameter Tuning

Edit mapred-site.xml and distribute the configuration.

<!-- Shuffle ring buffer size; default 100m -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
</property>
<!-- Ring buffer spill threshold; default 0.8 -->
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
</property>
<!-- Number of spill files merged at once; default 10 -->
<property>
  <name>mapreduce.task.io.sort.factor</name>
  <value>10</value>
</property>
<!-- MapTask memory, default 1g; the MapTask heap size (mapreduce.map.java.opts) defaults to match this value -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the
scheduler for each map task. If this is not specified or is
non-positive, it is inferred from mapreduce.map.java.opts and
mapreduce.job.heap.memory-mb.ratio. If java-opts are also not
specified, we set it to 1024.
  </description>
</property>
<!-- MapTask CPU cores; default 1 -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>1</value>
</property>
<!-- MapTask retry limit; default 4 -->
<property>
  <name>mapreduce.map.maxattempts</name>
  <value>4</value>
</property>
<!-- Number of parallel fetchers each Reduce uses to pull map output; default 5 -->
<property>
  <name>mapreduce.reduce.shuffle.parallelcopies</name>
  <value>5</value>
</property>
<!-- Share of Reduce memory used for the shuffle buffer; default 0.7 -->
<property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.70</value>
</property>
<!-- Buffer fill ratio at which data starts spilling to disk; default 0.66 -->
<property>
  <name>mapreduce.reduce.shuffle.merge.percent</name>
  <value>0.66</value>
</property>
<!-- ReduceTask memory, default 1g; the ReduceTask heap size (mapreduce.reduce.java.opts) defaults to match this value -->
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>-1</value>
  <description>The amount of memory to request from the
scheduler for each reduce task. If this is not specified or
is non-positive, it is inferred
 from mapreduce.reduce.java.opts and
mapreduce.job.heap.memory-mb.ratio.
 If java-opts are also not specified, we set it to 1024.
  </description>
</property>
<!-- ReduceTask CPU cores; default 1, set to 2 here -->
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>2</value>
</property>
<!-- ReduceTask retry limit; default 4 -->
<property>
  <name>mapreduce.reduce.maxattempts</name>
  <value>4</value>
</property>
<!-- Fraction of MapTasks that must finish before resources are requested for the ReduceTasks; default 0.05 -->
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.05</value>
</property>
<!-- A task that reads no data within the default 10 minutes is forcibly timed out -->
<property>
  <name>mapreduce.task.timeout</name>
  <value>600000</value>
</property>

YARN Parameter Tuning

Edit yarn-site.xml and distribute the configuration.

<!-- Which scheduler to use; the capacity scheduler is the default -->
<property>
  <description>The class to use as the resource scheduler.</description>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<!-- Number of threads the ResourceManager uses to handle scheduler requests; default 50. Increase it when more than 50 jobs are submitted concurrently, but never beyond 3 nodes * 4 threads = 12 threads (in practice no more than 8 once other applications are accounted for) -->
<property>
  <description>Number of threads to handle scheduler interface.</description>
  <name>yarn.resourcemanager.scheduler.client.thread-count</name>
  <value>8</value>
</property>
<!-- Whether YARN auto-detects the hardware; default false. Configure manually if the node runs many other applications; auto-detection is fine on a dedicated node -->
<property>
  <description>Enable auto-detection of node capabilities such as memory and CPU.
</description>
  <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
  <value>false</value>
</property>
<!-- Whether logical processors (hyperthreads) are counted as cores; default false, i.e. physical cores are used -->
<property>
  <description>Flag to determine if logical processors(such as
hyperthreads) should be counted as cores. Only applicable on Linux
when yarn.nodemanager.resource.cpu-vcores is set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true.
  </description>
  <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
  <value>false</value>
</property>
<!-- Multiplier for converting physical cores to vcores; default 1.0 -->
<property>
  <description>Multiplier to determine how to convert phyiscal cores to
vcores. This value is used if yarn.nodemanager.resource.cpu-vcores
is set to -1(which implies auto-calculate vcores) and
yarn.nodemanager.resource.detect-hardware-capabilities is set to true.
The number of vcores will be calculated as number of CPUs * multiplier.
  </description>
  <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
  <value>1.0</value>
</property>
<!-- Memory available to the NodeManager; default 8G, set to 4G here -->
<property>
  <description>Amount of physical memory, in MB, that can be allocated
for containers. If set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
automatically calculated(in case of Windows and Linux).
In other cases, the default is 8192MB.
  </description>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<!-- NodeManager CPU cores; defaults to 8 when not auto-detected from the hardware, set to 4 here -->
<property>
  <description>Number of vcores that can be allocated
for containers. This is used by the RM scheduler when allocating
resources for containers. This is not used to limit the number of
CPUs used by YARN containers. If it is set to -1 and
yarn.nodemanager.resource.detect-hardware-capabilities is true, it is
automatically determined from the hardware in case of Windows and Linux.
In other cases, number of vcores is 8 by default.
  </description>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value>
</property>
<!-- Minimum container memory; default 1G -->
<property>
  <description>The minimum allocation for every container request at the
RM in MBs. Memory requests lower than this will be set to the value of
this property. Additionally, a node manager that is configured to have
less memory than this value will be shut down by the resource manager.
  </description>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<!-- Maximum container memory; default 8G, set to 2G here -->
<property>
  <description>The maximum allocation for every container request at the
RM in MBs. Memory requests higher than this will throw an
InvalidResourceRequestException.
  </description>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<!-- Minimum container vcores; default 1 -->
<property>
  <description>The minimum allocation for every container request at the
RM in terms of virtual CPU cores. Requests lower than this will be set to
the value of this property. Additionally, a node manager that is configured
to have fewer virtual cores than this value will be shut down by the
resource manager.
  </description>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<!-- Maximum container vcores; default 4, set to 2 here -->
<property>
  <description>The maximum allocation for every container request at the
RM in terms of virtual CPU cores. Requests higher than this will throw an
InvalidResourceRequestException.
  </description>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>2</value>
</property>
<!-- Virtual memory check; on by default, turned off here -->
<property>
  <description>Whether virtual memory limits will be enforced for containers.</description>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- Ratio of virtual memory to physical memory; default 2.1 -->
<property>
  <description>Ratio between virtual memory to physical memory when
setting memory limits for containers. Container allocations are
expressed in terms of physical memory, and virtual memory usage is
allowed to exceed this allocation by this ratio.
  </description>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>

Running the Job

# Restart YARN
[root@hadoop102 hadoop-3.1.3]# sbin/stop-yarn.sh
[root@hadoop102 hadoop-3.1.3]# sbin/start-yarn.sh
# Run the WordCount program
[root@hadoop102 hadoop-3.1.3]# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output

Watch the job on the YARN web UI: http://hadoop103:8088/cluster/apps.
