
Item 44: Use Dispatch Groups to Take Advantage of Platform Scaling

Dispatch groups are a GCD feature that allows you to easily group tasks. You can then wait for that set of tasks to finish or be notified through a callback when it has finished. This feature is useful for several reasons, the first and most interesting of which is when you want to perform multiple tasks concurrently but need to know when they have all finished, for example, when compressing a set of files.

A dispatch group is created with the following function:

dispatch_group_t dispatch_group_create(void);

A group is a simple data structure with nothing distinguishing it, unlike a dispatch queue, which has an identifier. You can associate tasks with a dispatch group in two ways. The first is to use the following function:

void dispatch_group_async(dispatch_group_t group,
                          dispatch_queue_t queue,
                          dispatch_block_t block);

This is a variant of the normal dispatch_async function but takes an additional group parameter, which specifies the group with which to associate the block to execute. The second way to associate a task with a dispatch group is to use the following pair of functions:

void dispatch_group_enter(dispatch_group_t group);
void dispatch_group_leave(dispatch_group_t group);

The former causes the number of tasks the group thinks are currently running to increment; the latter does the opposite. Therefore, for each call to dispatch_group_enter, there must be an associated dispatch_group_leave. This is similar to reference counting (see Item 29), whereby retains and releases must be balanced to avoid leaks. In the case of dispatch groups, if an enter is not balanced with a leave, the group will never finish.
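The enter-and-leave pair is handy when the work you want to track is not itself submitted with dispatch_group_async, for example, when an object performs its work asynchronously and reports back through a completion block. The following is a minimal sketch of that pattern; the EOCDownloader class and its fetchDataWithCompletion: method are hypothetical and stand in for any callback-based API:

dispatch_group_t group = dispatch_group_create();
for (EOCDownloader *downloader in downloaders) {
    dispatch_group_enter(group);          // Balanced by the leave below
    [downloader fetchDataWithCompletion:^{
        dispatch_group_leave(group);      // One leave per enter
    }];
}
// The group finishes only after every completion block has run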

The following function can be used to wait on a dispatch group to finish:

long dispatch_group_wait(dispatch_group_t group,
                         dispatch_time_t timeout);

This takes the group to wait on and a timeout value. The timeout specifies how long this function should block while waiting for the group to finish. If the group finishes before the timeout, zero is returned; otherwise, a nonzero value is returned. The constant DISPATCH_TIME_FOREVER can be used as the timeout value to indicate that the function should wait forever and never time out.
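For example, the following sketch (assuming a group called dispatchGroup that already has tasks associated with it) waits for up to five seconds and then checks the return value to see whether the group actually finished:

dispatch_time_t timeout =
  dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC));
if (dispatch_group_wait(dispatchGroup, timeout) == 0) {
    // All tasks in the group finished within five seconds
} else {
    // Timed out; some tasks are still running
}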

The following function is an alternative to blocking the current thread to wait for a dispatch group to finish:

void dispatch_group_notify(dispatch_group_t group,
                           dispatch_queue_t queue,
                           dispatch_block_t block);

Slightly different from the wait function, this function allows you to specify a block that will be run on a certain queue when the group is finished. Doing so can be useful if the current thread should not be blocked, but you still need to know when all the tasks have finished. In both Mac OS X and iOS, for example, you should never block the main thread, as that’s where all UI drawing and event handling are done.

An example of using this GCD feature is to perform a task on an array of objects and then wait for all tasks to finish. The following code does this:

dispatch_queue_t queue =
  dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t dispatchGroup = dispatch_group_create();
for (id object in collection) {
    dispatch_group_async(dispatchGroup,
                         queue,
                         ^{ [object performTask]; });
}

dispatch_group_wait(dispatchGroup, DISPATCH_TIME_FOREVER);
// Continue processing after completing tasks

If the current thread should not be blocked, you can use the notify function instead of waiting:

dispatch_queue_t notifyQueue = dispatch_get_main_queue();
dispatch_group_notify(dispatchGroup,
                      notifyQueue,
                      ^{
                    // Continue processing after completing tasks
                       });

The queue on which the notify callback is scheduled depends entirely on circumstances. Here, I’ve shown the main queue, which is a fairly common use case, but it could just as well be any custom serial queue or one of the global concurrent queues.

In this example, the queue dispatched onto was the same one for all tasks. But this doesn’t have to be the case. You may want to put some tasks at a higher priority but still group them all into the same dispatch group and be notified when all have finished:

dispatch_queue_t lowPriorityQueue =
  dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t highPriorityQueue =
  dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_group_t dispatchGroup = dispatch_group_create();

for (id object in lowPriorityObjects) {
    dispatch_group_async(dispatchGroup,
                         lowPriorityQueue,
                         ^{ [object performTask]; });
}

for (id object in highPriorityObjects) {
    dispatch_group_async(dispatchGroup,
                         highPriorityQueue,
                         ^{ [object performTask]; });
}

dispatch_queue_t notifyQueue = dispatch_get_main_queue();
dispatch_group_notify(dispatchGroup,
                      notifyQueue,
                      ^{
                    // Continue processing after completing tasks
                       });

Instead of submitting tasks to concurrent queues as in the preceding examples, you can also use dispatch groups to track multiple tasks submitted to different serial queues. A group is not particularly useful, however, if all the tasks are queued on the same serial queue. Because the tasks will all execute serially anyway, you could simply queue another block after queuing the tasks, which is the equivalent of a dispatch group’s notify callback block:

dispatch_queue_t queue =
  dispatch_queue_create("com.effectiveobjectivec.queue", NULL);

for (id object in collection) {
    dispatch_async(queue,
                   ^{ [object performTask]; });
}

dispatch_async(queue,
               ^{
                    // Continue processing after completing tasks
                });

This code shows that you don’t always need to use something like dispatch groups. Sometimes, the desired effect can be achieved by using a single queue and standard asynchronous dispatch.

So where does platform scaling come in? If you look back at the example of dispatching onto a concurrent queue, it should become clear. GCD automatically creates new threads or reuses old ones as it sees fit to service blocks on a queue. In the case of a concurrent queue, this can mean multiple threads, so multiple blocks execute concurrently. The number of threads servicing a given concurrent queue is decided by GCD, based mostly on available system resources; if the CPU has multiple cores and the queue has a lot of work to do, it will likely be given multiple threads on which to execute. Dispatch groups therefore provide an easy way to perform a given set of tasks concurrently and be told when that group of tasks has finished. Thanks to the nature of GCD’s concurrent queues, the tasks are executed concurrently according to available system resources, leaving you free to write your business logic rather than any kind of complex scheduler for handling concurrent tasks.

The example of looping through a collection and performing a task on each item can also be achieved through the use of another GCD function, as follows:

void dispatch_apply(size_t iterations,
                    dispatch_queue_t queue,
                    void(^block)(size_t));

This function performs a given number of iterations of a block, each time passing an incrementing value from zero to the number of iterations minus one. It is used like this:

dispatch_queue_t queue =
  dispatch_queue_create("com.effectiveobjectivec.queue"NULL);
dispatch_apply(10, queue, ^(size_t i){
    // Perform task
});

In effect, this is equivalent to a simple for loop that iterates from 0 to 9, like this:

for (int i = 0; i < 10; i++) {
    // Perform task
}

The key thing to note with dispatch_apply is that the queue could be a concurrent queue. If so, the blocks will be executed in parallel according to system resources, just like the example of dispatch groups. If the collection in that example were an array, it could be rewritten using dispatch_apply like this:

dispatch_queue_t queue =
  dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_apply(array.count, queue, ^(size_t i){
    id object = array[i];
    [object performTask];
});

Once again, this example shows that dispatch groups are not always necessary. However, dispatch_apply blocks until all iterations have finished. For this reason, if you try to run blocks on the current queue (or a serial queue above the current queue in the hierarchy), a deadlock will result. If you want the tasks to be executed in the background, you need to use dispatch groups.
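To illustrate the deadlock hazard, here is a sketch in which dispatch_apply is called from a block that is already running on the serial queue it targets (the queue label is illustrative only). The call cannot return until its iterations have run on that queue, yet the queue cannot run them until the enclosing block returns:

dispatch_queue_t serialQueue =
  dispatch_queue_create("com.effectiveobjectivec.serial", NULL);
dispatch_async(serialQueue, ^{
    // Deadlock: dispatch_apply blocks here waiting for iterations
    // that are stuck behind this block on the same serial queue
    dispatch_apply(10, serialQueue, ^(size_t i){
        // Perform task
    });
});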

Things to Remember

✦ Dispatch groups are used to group a set of tasks. You can optionally be notified when the group finishes executing.

✦ Dispatch groups can be used to execute multiple tasks concurrently through a concurrent dispatch queue. In this case, GCD handles the scheduling of multiple tasks at the same time, based on system resources. Writing this yourself would require a lot of code.
