40. You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?
A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.
B. You don't need to configure or balance these properties in YARN, as YARN dynamically balances resource management capabilities on your cluster.
C. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster's capacity set by the yarn-scheduler.minimum-allocation.
D. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager.
Answer: D
Explanation:
The properties named in option A do not exist.
yarn.nodemanager.resource.memory-mb
The amount of physical memory (in MB) that may be allocated to containers being run by the node manager.
yarn.nodemanager.resource.cpu-vcores
The number of CPU cores that may be allocated to containers being run by the node manager.
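For reference, a minimal yarn-site.xml sketch of these two settings (the 8192 MB and 8 vcore values are placeholders, not figures from the question) might look like:

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value> <!-- physical memory (MB) this NodeManager may hand out to containers -->
    </property>
    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>8</value> <!-- CPU cores this NodeManager may hand out to containers -->
    </property>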
O'Reilly:
CPU settings in YARN and MapReduce
In addition to memory, YARN treats CPU usage as a managed resource, and applications can request the number of cores they need. The number of cores that a node manager can allocate to containers is controlled by the yarn.nodemanager.resource.cpu-vcores property. It should be set to the total number of cores on the machine, minus a core for each daemon process running on the machine (datanode, node manager, and any other long-running processes). MapReduce jobs can control the number of cores allocated to map and reduce containers by setting mapreduce.map.cpu.vcores and mapreduce.reduce.cpu.vcores. Both default to 1, an appropriate setting for normal single-threaded MapReduce tasks, which can only saturate a single core.
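As a rough illustration of that sizing rule: on a 16-core worker running a datanode and a node manager, yarn.nodemanager.resource.cpu-vcores would be set to 14. A job can then request wider containers on the command line, assuming its driver goes through ToolRunner so that generic -D options are parsed (the jar and driver names below are hypothetical):

    hadoop jar my-app.jar MyDriver \
        -D mapreduce.map.cpu.vcores=2 \
        -D mapreduce.reduce.cpu.vcores=2 \
        /input /output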
WARNING
While the number of cores is tracked during scheduling (so a container won't be allocated on a machine where there are no spare cores, for example), the node manager will not, by default, limit actual CPU usage of running containers. This means that a container can abuse its allocation by using more CPU than it was given, possibly starving other containers running on the same host. YARN has support for enforcing CPU limits using Linux cgroups. The node manager's container executor class (yarn.nodemanager.container-executor.class) must be set to use the LinuxContainerExecutor class, which in turn must be configured to use cgroups (see the properties under yarn.nodemanager.linux-container-executor).
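A minimal yarn-site.xml sketch of that cgroups wiring (standard property names; the hierarchy path shown is the usual default and is a placeholder here) might look like:

    <property>
      <name>yarn.nodemanager.container-executor.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>
    <property>
      <!-- have the LinuxContainerExecutor enforce limits via cgroups -->
      <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
    </property>
    <property>
      <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
      <value>/hadoop-yarn</value>
    </property>

Note that this sketch only shows the cgroups-related properties; running LinuxContainerExecutor in practice also requires a correctly installed container-executor binary and related settings such as yarn.nodemanager.linux-container-executor.group.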