ccah-500 Question 40: maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?

40. You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?

A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.

B. You don't need to configure or balance these properties in YARN, as YARN dynamically balances resource management capabilities on your cluster.

C. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster's capacity set by yarn.scheduler.minimum-allocation.

D. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager.

Answer: D

 

Explanation

The properties named in option A do not exist.

yarn.nodemanager.resource.memory-mb
The amount of physical memory (in MB) that may be allocated to containers being run by the node manager.

yarn.nodemanager.resource.cpu-vcores
The number of virtual CPU cores that may be allocated to containers being run by the node manager.
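As a sketch of how option D is applied (the values below are illustrative assumptions, not from the source): a node whose MRv1 TaskTracker slots added up to roughly 24 GB of memory and 12 cores could be given an equivalent container budget in yarn-site.xml:

```xml
<!-- yarn-site.xml: per-NodeManager container capacity (illustrative values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value> <!-- ~24 GB of physical memory available to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value> <!-- virtual cores available to containers on this node -->
</property>
```

Each NodeManager reads these properties at startup, so the settings must be sized per node to match the capacity its old TaskTracker exposed through slots.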

 

From O'Reilly (Hadoop: The Definitive Guide):

CPU settings in YARN and MapReduce

In addition to memory, YARN treats CPU usage as a managed resource, and applications can request the number of cores they need. The number of cores that a node manager can allocate to containers is controlled by the yarn.nodemanager.resource.cpu-vcores property. It should be set to the total number of cores on the machine, minus a core for each daemon process running on the machine (datanode, node manager, and any other long-running processes). MapReduce jobs can control the number of cores allocated to map and reduce containers by setting mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores. Both default to 1, an appropriate setting for normal single-threaded MapReduce tasks, which can only saturate a single core.
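To make the per-task side concrete, here is a hedged mapred-site.xml sketch (the value of 2 is an illustrative assumption for a multi-threaded mapper, not from the source):

```xml
<!-- mapred-site.xml: per-task CPU requests; both properties default to 1 -->
<property>
  <name>mapreduce.map.cpu.vcores</name>
  <value>2</value> <!-- e.g. a mapper that runs multi-threaded code -->
</property>
<property>
  <name>mapreduce.reduce.cpu.vcores</name>
  <value>1</value> <!-- default: a normal single-threaded reduce task -->
</property>
```

These can also be set per job rather than cluster-wide, since a single-threaded task cannot use more than one core anyway.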

WARNING

While the number of cores is tracked during scheduling (so a container won't be allocated on a machine where there are no spare cores, for example), the node manager will not, by default, limit actual CPU usage of running containers.

This means that a container can abuse its allocation by using more CPU than it was given, possibly starving other containers running on the same host. YARN has support for enforcing CPU limits using Linux cgroups. The node manager's container executor class (yarn.nodemanager.container-executor.class) must be set to use the LinuxContainerExecutor class, which in turn must be configured to use cgroups (see the properties under yarn.nodemanager.linux-container-executor).
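A minimal sketch of that cgroups configuration in yarn-site.xml follows; the class names match the Hadoop 2.x defaults as far as I know, and the cgroup hierarchy path is an assumed example:

```xml
<!-- yarn-site.xml: enforce container CPU limits via Linux cgroups (sketch) -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
  <value>/yarn</value> <!-- assumed cgroup path under which container groups are created -->
</property>
```

Note that LinuxContainerExecutor also requires the setuid container-executor binary to be installed with the correct permissions on each node; without cgroups enabled, vcores remain a scheduling-time bookkeeping figure only.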
