Hadoop: The Definitive Guide (4th Edition) Key Points Translation (7): Chapter 4. YARN (2)

3) Scheduling in YARN
a) In an ideal world, the requests that a YARN application makes would be granted immediately. In the real world, however, resources are limited, and on a busy cluster, an application will often need to wait to have some of its requests fulfilled. It is the job of the YARN scheduler to allocate resources to applications according to some defined policy. Scheduling in general is a difficult problem and there is no one “best” policy, which is why YARN provides a choice of schedulers and configurable policies. We look at these next.
b) Scheduler Options
c) Three schedulers are available in YARN: the FIFO, Capacity, and Fair Schedulers. The FIFO Scheduler places applications in a queue and runs them in the order of submission (first in, first out). Requests for the first application in the queue are allocated first; once its requests have been satisfied, the next application in the queue is served, and so on.
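Which scheduler the resource manager runs is a configuration choice. As a hedged illustration (this snippet is not part of the book text above), the sketch below sets the standard yarn.resourcemanager.scheduler.class property through Hadoop's Configuration API; on a real cluster this value normally lives in yarn-site.xml. The three class names are the schedulers that ship with YARN.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: selecting a YARN scheduler by naming the resource manager's
// scheduler class. Normally this property is set in yarn-site.xml rather
// than in code; the class names below are the three schedulers bundled
// with YARN.
public class SchedulerSelection {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // FIFO Scheduler (run applications strictly in submission order):
        // conf.set("yarn.resourcemanager.scheduler.class",
        //     "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler");

        // Capacity Scheduler (dedicated queues with reserved capacities):
        conf.set("yarn.resourcemanager.scheduler.class",
            "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler");

        // Fair Scheduler (dynamically balance resources between running jobs):
        // conf.set("yarn.resourcemanager.scheduler.class",
        //     "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler");

        System.out.println(conf.get("yarn.resourcemanager.scheduler.class"));
    }
}
```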
d) The FIFO Scheduler has the merit of being simple to understand and not needing any configuration, but it’s not suitable for shared clusters. Large applications will use all the resources in a cluster, so each application has to wait its turn. On a shared cluster it is better to use the Capacity Scheduler or the Fair Scheduler. Both of these allow long-running jobs to complete in a timely manner, while still allowing users who are running concurrent smaller ad hoc queries to get results back in a reasonable time.
e) [Figure 4-3. Cluster utilization over time when running a large job and a small job under the FIFO Scheduler (i), the Capacity Scheduler (ii), and the Fair Scheduler (iii).]
f) The difference between schedulers is illustrated in Figure 4-3, which shows that under the FIFO Scheduler (i) the small job is blocked until the large job completes.
g) With the Capacity Scheduler (ii in Figure 4-3), a separate dedicated queue allows the small job to start as soon as it is submitted, although this is at the cost of overall cluster utilization since the queue capacity is reserved for jobs in that queue. This means that the large job finishes later than when using the FIFO Scheduler.
h) With the Fair Scheduler (iii in Figure 4-3), there is no need to reserve a set amount of capacity, since it will dynamically balance resources between all running jobs. Just after the first (large) job starts, it is the only job running, so it gets all the resources in the cluster. When the second (small) job starts, it is allocated half of the cluster resources so that each job is using its fair share of resources.
i) Note that there is a lag between the time the second job starts and when it receives its fair share, since it has to wait for resources to free up as containers used by the first job complete. After the small job completes and no longer requires resources, the large job goes back to using the full cluster capacity again. The overall effect is both high cluster utilization and timely small job completion.
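The fair-share arithmetic in the two paragraphs above can be made concrete with a toy sketch. This is illustrative only: the 100-container cluster and the FairShareExample class are invented, and real fair shares also account for queue weights and memory/CPU dimensions.

```java
// Illustrative only: with equally weighted jobs, the instantaneous fair
// share is simply the cluster capacity divided by the number of running jobs.
public class FairShareExample {
    static int fairShare(int clusterContainers, int runningJobs) {
        return runningJobs == 0 ? 0 : clusterContainers / runningJobs;
    }

    public static void main(String[] args) {
        int cluster = 100;                         // hypothetical cluster of 100 containers
        System.out.println(fairShare(cluster, 1)); // large job alone: 100 (all resources)
        System.out.println(fairShare(cluster, 2)); // small job starts: 50 each (its fair share)
        System.out.println(fairShare(cluster, 1)); // small job finishes: large job back to 100
    }
}
```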
j) Capacity Scheduler Configuration
k) The Capacity Scheduler allows sharing of a Hadoop cluster along organizational lines, whereby each organization is allocated a certain capacity of the overall cluster. Each organization is set up with a dedicated queue that is configured to use a given fraction of the cluster capacity. Queues may be further divided in hierarchical fashion, allowing each organization to share its cluster allowance between different groups of users within the organization. Within a queue, applications are scheduled using FIFO scheduling.
Capacity调度器允许沿着组织线共享Hadoop集群,这可以通过给每个组织分配一定的集群容量来实现。每个组织可以通过已配置的专用队列去使用一部分已给定的集群资源来创建。队列可能更近一步的分开成为层级模式,其允许每个组织在组织内部的不同用户组织之间分享集群限额资源。在队列内部,应用程序将使用FIFO调度模式。
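As a sketch of what such a hierarchy could look like, the snippet below sets the kind of properties that normally go into capacity-scheduler.xml, using Hadoop's Configuration API purely for illustration. The queue names (prod, dev, eng, science) and the capacity percentages are invented examples, not values taken from the text above.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch of a two-level queue hierarchy for the Capacity Scheduler.
// The queue names and capacities are invented for illustration; these
// properties normally live in capacity-scheduler.xml.
public class CapacityQueueSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Two top-level queues under root, one per organization.
        conf.set("yarn.scheduler.capacity.root.queues", "prod,dev");
        conf.set("yarn.scheduler.capacity.root.prod.capacity", "40");
        conf.set("yarn.scheduler.capacity.root.dev.capacity", "60");

        // The dev organization further splits its allowance between two groups.
        conf.set("yarn.scheduler.capacity.root.dev.queues", "eng,science");
        conf.set("yarn.scheduler.capacity.root.dev.eng.capacity", "50");
        conf.set("yarn.scheduler.capacity.root.dev.science.capacity", "50");
    }
}
```

A child queue's capacity is a percentage of its parent's share, so eng and science here split dev's 60% of the cluster equally.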
l) As we saw in Figure 4-3, a single job does not use more resources than its queue’s capacity. However, if there is more than one job in the queue and there are idle resources available, then the Capacity Scheduler may allocate the spare resources to jobs in the queue, even if that causes the queue’s capacity to be exceeded. This behavior is known as queue elasticity.
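Elastic growth can be bounded: giving a queue a maximum capacity stops it from consuming the whole cluster even when spare resources are available. The snippet below is a hypothetical continuation of the sketch above (the dev queue and the 75% cap are invented values); in practice the property would be set in capacity-scheduler.xml.

```java
import org.apache.hadoop.conf.Configuration;

// Hypothetical continuation of the sketch above: elasticity lets the dev
// queue borrow idle capacity, and maximum-capacity caps how far it can grow.
public class QueueElasticityCap {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("yarn.scheduler.capacity.root.dev.capacity", "60");
        // Even with spare cluster resources, dev will not grow beyond 75%.
        conf.set("yarn.scheduler.capacity.root.dev.maximum-capacity", "75");
    }
}
```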
