Hadoop Production Tuning

1.1 NameNode Memory Production Configuration

1) Hadoop 2.x: the NameNode heap defaults to 2000m. If the server has 128G of RAM, the NameNode can be given roughly 100g. Set it via HADOOP_NAMENODE_OPTS in hadoop-env.sh; the example below caps the heap at 3072m:

HADOOP_NAMENODE_OPTS=-Xmx3072m

2) Hadoop 3.x: NameNode memory is allocated dynamically by default (no fixed -Xmx is set, so JVM ergonomics size the heap from the host's RAM).

3) Check the memory used by a DataNode process:

jmap -heap 2744
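Here 2744 is the DataNode's JVM process id, which differs per machine. A minimal sketch of the full lookup, assuming JDK 8 tools on the PATH and run on the DataNode host:

# list Hadoop JVM processes and note the DataNode pid
jps

# print heap configuration and current usage for that pid (JDK 8 jmap;
# on JDK 9+ the equivalent is "jhsdb jmap --heap --pid <pid>")
jmap -heap 2744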

1.2 NameNode Heartbeat Concurrency Configuration

1) In hdfs-site.xml:

<property>

    <name>dfs.namenode.handler.count</name>

    <!-- computed from the formula below; 21 for a 3-node cluster -->
    <value>21</value>

</property>

dfs.namenode.handler.count = 20 × ln(cluster size), where cluster size is the number of DataNodes.
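For a 3-node cluster the formula gives 20 × ln 3 ≈ 21.9, hence the 21 used above and again in section 2.1. A quick way to evaluate it, assuming Python 3 is installed:

python -c 'import math; print(int(20 * math.log(3)))'    # prints 21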

1.3 Enabling the Trash Feature

1) Modify core-site.xml to set the trash retention time to 1 hour (fs.trash.interval is in minutes):

<property>

    <name>fs.trash.interval</name>

    <value>60</value>

</property>

  

2) Files deleted directly through the web UI do not go through the trash; only files deleted from the command line with hadoop fs -rm pass through it.

3) Restoring data from the trash:

[$user@$host hadoop-3.1.3]$ hadoop fs -mv /user/$host/.Trash/Current/user/$host/input /user/$host/input
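For context, a sketch of how a file lands in the trash in the first place (the /user/demo path is illustrative):

# deleting via the CLI moves the file into the current user's trash
hadoop fs -rm /user/demo/hello.txt

# trash contents live under /user/<user>/.Trash/Current/<original path>
hadoop fs -ls /user/demo/.Trash/Current/user/demo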

1.4 NameNode and DataNode Multi-Directory Configuration

Add the following to hdfs-site.xml. Every NameNode directory holds an identical copy of the metadata (redundancy), while each DataNode directory holds different blocks (extra capacity):

<property>

     <name>dfs.namenode.name.dir</name>

     <value>file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>

</property>

<property>

     <name>dfs.datanode.data.dir</name>

     <value>file://${hadoop.tmp.dir}/dfs/data1,file://${hadoop.tmp.dir}/dfs/data2</value>

</property>
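A quick sanity check after restarting HDFS, assuming hadoop.tmp.dir points at ${HADOOP_HOME}/data (an assumption; substitute your own setting):

# both metadata copies and both data directories should now exist
ls ${HADOOP_HOME}/data/dfs/
# expected entries: data1  data2  name1  name2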

1.5 Cluster Data Balancing: Balancing Across Disks

1) In production, a new disk is often added when space runs low. Because the freshly mounted disk holds no data, you can run the disk balancer to even data out across disks (a Hadoop 3.x feature).

(1) Generate a balancing plan

hdfs diskbalancer -plan $host

(2) Execute the plan

hdfs diskbalancer -execute $host.plan.json

(3) Query the status of the current balancing task

hdfs diskbalancer -query $host

(4) Cancel the balancing task

hdfs diskbalancer -cancel $host.plan.json

1.6 Cluster Safe Mode & Disk Repair

1) Safe mode: the file system accepts only read requests; deletions, modifications and other write requests are rejected.

While in safe mode the cluster cannot perform important (write) operations. Once startup completes, the cluster leaves safe mode automatically.

(1) bin/hdfs dfsadmin -safemode get (check the safe mode status)

(2) bin/hdfs dfsadmin -safemode enter (enter safe mode)

(3) bin/hdfs dfsadmin -safemode leave (leave safe mode)

(4) bin/hdfs dfsadmin -safemode wait (wait until safe mode ends)
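-safemode wait is useful in scripts that must not write before the NameNode is ready. A minimal sketch (the uploaded file is illustrative):

#!/bin/bash
# block until the NameNode leaves safe mode, then upload
hdfs dfsadmin -safemode wait
hdfs dfs -put /tmp/hello.txt /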

2) Disk repair

Watch http://$host:9870/dfshealth.html#tab-overview; when blocks are damaged, the overview page reports missing or corrupt blocks.

If the data on disk cannot be recovered, delete the affected files' metadata, as sketched below.
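A hedged sketch using standard HDFS tooling (fsck is not mentioned above, but it is the usual instrument for this):

# list the files whose blocks are missing or corrupt
hdfs fsck / -list-corruptfileblocks

# if the blocks are unrecoverable, delete the affected files' metadata
hdfs fsck / -delete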

1.7 HDFS Cluster Migration

1) scp: file copy between two remote hosts

scp -r hello.txt root@$host:/user/hello.txt        // push

scp -r root@$host:/user/hello.txt hello.txt        // pull

scp -r root@$host:/user/hello.txt root@$host:/user/        // relays the copy through the local host; useful when ssh is not configured between the two remote hosts

2) distcp: recursive data copy between two Hadoop clusters

[user@$host hadoop-3.1.3]$  bin/hadoop distcp hdfs://$host:8020/user/hello.txt hdfs://$host:8020/user/hello.txt
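For real migrations, distcp's -update (copy only missing or changed files) and -p (preserve file attributes) flags are commonly added; a sketch with illustrative hostnames:

bin/hadoop distcp -update -p hdfs://srccluster:8020/user/dir hdfs://dstcluster:8020/user/dir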

1.8 MapReduce Data Skew

Data frequency skew: one partition receives far more records than the others.

Data size skew: some records are far larger than the average.

(1) First check whether the skew comes from an excess of null keys. In production you can usually just filter the nulls out; if they must be kept, use a custom partitioner that appends a random number to null keys to scatter them, then run a second aggregation.

(2) Whatever can be handled early in the Map phase should be, e.g. with a Combiner or a Map Join.

(3) Increase the number of reduce tasks, as sketched below.
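The reducer count can be raised per job without code changes. A sketch using the stock wordcount example (the jar path assumes a Hadoop 3.1.3 layout; /input and /output are illustrative):

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -D mapreduce.job.reduces=5 /input /output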

2 Tuning Case Study

(1) Requirement: count the occurrences of each word in 1G of data. The cluster has 3 servers, each with 4G of RAM and a 4-core, 4-thread CPU.

(2) Analysis:

1G / 128m = 8 MapTasks; plus 1 ReduceTask and 1 mrAppMaster, 10 tasks in total.

Spread over 3 nodes, that is roughly 3-4 tasks per node (e.g. 4 / 3 / 3).

2.1 HDFS Parameter Tuning

(1) Modify hadoop-env.sh:

export HDFS_NAMENODE_OPTS="-Dhadoop.security.logger=INFO,RFAS -Xmx1024m"

export HDFS_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS -Xmx1024m"

(2) Modify hdfs-site.xml:

<!-- The NameNode RPC handler thread pool; the default size is 10 -->

<property>

    <name>dfs.namenode.handler.count</name>

    <value>21</value>

</property>

(3) Modify core-site.xml:

<!-- Set the trash retention time to 60 minutes -->

<property>

    <name>fs.trash.interval</name>

    <value>60</value>

</property>

(4) Distribute the configuration (xsync is a custom cluster-sync script):

 xsync hadoop-env.sh hdfs-site.xml core-site.xml

2.2 MapReduce Parameter Tuning

(1) Modify mapred-site.xml:

<!-- Shuffle ring buffer size, default 100m -->

<property>

  <name>mapreduce.task.io.sort.mb</name>

  <value>100</value>

</property>

<!-- Ring buffer spill threshold, default 0.8 -->

<property>

  <name>mapreduce.map.sort.spill.percent</name>

  <value>0.80</value>

</property>

<!-- Merge factor (number of streams merged at once), default 10 -->

<property>

  <name>mapreduce.task.io.sort.factor</name>

  <value>10</value>

</property>

<!-- MapTask memory, default 1g; the MapTask heap size (mapreduce.map.java.opts) defaults to match this value -->

<property>

  <name>mapreduce.map.memory.mb</name>

  <value>-1</value>

  <description>The amount of memory to request from the scheduler for each    map task. If this is not specified or is non-positive, it is inferred from mapreduce.map.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.

  </description>

</property>

<!-- MapTask CPU vcores, default 1 -->

<property>

  <name>mapreduce.map.cpu.vcores</name>

  <value>1</value>

</property>

<!-- MapTask retries on failure, default 4 -->

<property>

  <name>mapreduce.map.maxattempts</name>

  <value>4</value>

</property>

<!-- Number of parallel copies each Reduce uses to fetch map output, default 5 -->

<property>

  <name>mapreduce.reduce.shuffle.parallelcopies</name>

  <value>5</value>

</property>

<!-- Fraction of Reduce memory available for the shuffle buffer, default 0.7 -->

<property>

  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>

  <value>0.70</value>

</property>

<!-- Buffer fill ratio at which data starts spilling to disk, default 0.66 -->

<property>

  <name>mapreduce.reduce.shuffle.merge.percent</name>

  <value>0.66</value>

</property>

<!-- ReduceTask memory, default 1g; the ReduceTask heap size (mapreduce.reduce.java.opts) defaults to match this value -->

<property>

  <name>mapreduce.reduce.memory.mb</name>

  <value>-1</value>

  <description>The amount of memory to request from the scheduler for each reduce task. If this is not specified or is non-positive, it is inferred from mapreduce.reduce.java.opts and mapreduce.job.heap.memory-mb.ratio. If java-opts are also not specified, we set it to 1024.
  </description>

</property>

<!-- ReduceTask CPU vcores, default 1; set to 2 here -->

<property>

  <name>mapreduce.reduce.cpu.vcores</name>

  <value>2</value>

</property>

<!-- ReduceTask retries on failure, default 4 -->

<property>

  <name>mapreduce.reduce.maxattempts</name>

  <value>4</value>

</property>

<!-- Fraction of MapTasks that must finish before resources are requested for ReduceTasks, default 0.05 -->

<property>

  <name>mapreduce.job.reduce.slowstart.completedmaps</name>

  <value>0.05</value>

</property>

<!-- Task timeout: a task that neither reads input nor writes output within this window is forcibly killed; default 10 minutes (600000 ms) -->

<property>

  <name>mapreduce.task.timeout</name>

  <value>600000</value>

</property>

(2) Distribute the configuration:

 xsync mapred-site.xml

2.3 YARN Parameter Tuning

(1) Modify the following parameters in yarn-site.xml:

<!-- Scheduler choice; the Capacity Scheduler is the default -->

<property>

<description>The class to use as the resource scheduler.</description>

<name>yarn.resourcemanager.scheduler.class</name>

<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>

</property>

<!-- Threads the ResourceManager uses to handle scheduler requests, default 50. Increase this only if more than 50 jobs are submitted concurrently, and keep it below 3 nodes × 4 threads = 12 (in practice no more than 8, leaving headroom for other processes) -->

<property>

<description>Number of threads to handle scheduler interface.</description>

<name>yarn.resourcemanager.scheduler.client.thread-count</name>

<value>8</value>

</property>

<!-- Whether YARN auto-detects hardware to configure itself, default false. Prefer manual configuration if the node runs many other applications; auto-detection is fine on dedicated nodes -->

<property>

<description>Enable auto-detection of node capabilities such as

memory and CPU.

</description>

<name>yarn.nodemanager.resource.detect-hardware-capabilities</name>

<value>false</value>

</property>

<!-- Whether logical processors (hyper-threads) count as cores; default false, i.e. physical cores are used -->

<property>

<description>Flag to determine if logical processors(such as

hyperthreads) should be counted as cores. Only applicable on Linux

when yarn.nodemanager.resource.cpu-vcores is set to -1 and

yarn.nodemanager.resource.detect-hardware-capabilities is true.

</description>

<name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>

<value>false</value>

</property>

<!-- Multiplier for converting physical cores to vcores, default 1.0 -->

<property>

<description>Multiplier to determine how to convert phyiscal cores to

vcores. This value is used if yarn.nodemanager.resource.cpu-vcores

is set to -1(which implies auto-calculate vcores) and

yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The number of vcores will be calculated as number of CPUs * multiplier.

</description>

<name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>

<value>1.0</value>

</property>

<!-- Memory the NodeManager may allocate to containers, default 8G; set to 4G here -->

<property>

<description>Amount of physical memory, in MB, that can be allocated

for containers. If set to -1 and

yarn.nodemanager.resource.detect-hardware-capabilities is true, it is

automatically calculated(in case of Windows and Linux).

In other cases, the default is 8192MB.

</description>

<name>yarn.nodemanager.resource.memory-mb</name>

<value>4096</value>

</property>

<!-- NodeManager CPU vcores; defaults to 8 when not auto-detected from hardware; set to 4 here -->

<property>

<description>Number of vcores that can be allocated

for containers. This is used by the RM scheduler when allocating

resources for containers. This is not used to limit the number of

CPUs used by YARN containers. If it is set to -1 and

yarn.nodemanager.resource.detect-hardware-capabilities is true, it is

automatically determined from the hardware in case of Windows and Linux.

In other cases, number of vcores is 8 by default.</description>

<name>yarn.nodemanager.resource.cpu-vcores</name>

<value>4</value>

</property>

<!-- Minimum container memory, default 1G -->

<property>

<description>The minimum allocation for every container request at the RM in MBs. Memory requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager.

</description>

<name>yarn.scheduler.minimum-allocation-mb</name>

<value>1024</value>

</property>

<!-- Maximum container memory, default 8G; set to 2G here -->

<property>

<description>The maximum allocation for every container request at the RM in MBs. Memory requests higher than this will throw an InvalidResourceRequestException.

</description>

<name>yarn.scheduler.maximum-allocation-mb</name>

<value>2048</value>

</property>

<!-- Minimum container vcores, default 1 -->

<property>

<description>The minimum allocation for every container request at the RM in terms of virtual CPU cores. Requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have fewer virtual cores than this value will be shut down by the resource manager.

</description>

<name>yarn.scheduler.minimum-allocation-vcores</name>

<value>1</value>

</property>

<!-- Maximum container vcores, default 4; set to 2 here -->

<property>

<description>The maximum allocation for every container request at the RM in terms of virtual CPU cores. Requests higher than this will throw an

InvalidResourceRequestException.</description>

<name>yarn.scheduler.maximum-allocation-vcores</name>

<value>2</value>

</property>

<!-- Virtual memory check, enabled by default; disabled here -->

<property>

<description>Whether virtual memory limits will be enforced for

containers.</description>

<name>yarn.nodemanager.vmem-check-enabled</name>

<value>false</value>

</property>

<!-- Ratio of virtual to physical memory, default 2.1 -->

<property>

<description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.

</description>

<name>yarn.nodemanager.vmem-pmem-ratio</name>

<value>2.1</value>

</property>

(2) Distribute the configuration:

xsync yarn-site.xml
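After syncing, restart YARN so the new NodeManager resources take effect; a minimal sketch, run from $HADOOP_HOME on the ResourceManager node:

sbin/stop-yarn.sh
sbin/start-yarn.sh

# list the NodeManagers; per-node memory/vcores can be confirmed on the 8088 web UI
yarn node -list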
