

Controlling Dispatch Queues

GCD also provides many useful APIs to control the tasks in dispatch queues. Let’s go through them one by one to see how powerful GCD is.

dispatch_set_target_queue

The dispatch_set_target_queue function sets a “target” queue. It is mainly used to change the priority of a newly created queue. Whether serial or concurrent, a dispatch queue created by the dispatch_queue_create function runs at the same priority as a global dispatch queue of default priority. To change the priority of a dispatch queue after it is created, use this function. The following source code shows how to make a serial dispatch queue execute at background priority.

dispatch_queue_t mySerialDispatchQueue =
    dispatch_queue_create("com.example.gcd.MySerialDispatchQueue", NULL);

dispatch_queue_t globalDispatchQueueBackground =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

dispatch_set_target_queue(mySerialDispatchQueue, globalDispatchQueueBackground);

In the source code, the dispatch queue whose priority should change is passed as the first argument of the dispatch_set_target_queue function, and a global dispatch queue is passed as the second argument, its target. The mechanism is explained later; the result is that the priority of the dispatch queue becomes the same as that of the target queue.

The behavior is undefined when you pass the main dispatch queue or a global dispatch queue, which are provided by the system, as the first argument, so you shouldn’t do that.

Using the dispatch_set_target_queue function, you can not only change the priority but also create a hierarchy of dispatch queues, as shown in Figure 7–8. When a serial dispatch queue is set as the target of multiple serial dispatch queues that would otherwise execute concurrently, only one of those queues executes on the target serial dispatch queue at a time.


Figure 7–8. Dispatch Queue execution hierarchy

By doing that, when you have tasks that must not execute concurrently but have to be added to different serial dispatch queues, you can prevent them from running at the same time. Admittedly, I can’t think of a concrete situation where that is needed, though.

dispatch_after

dispatch_after is for controlling when a task in a queue starts. Sometimes you may want to execute a task three seconds later, for instance. When you want to execute a task after some specified time has passed, you can use the dispatch_after function. For example, the following source code adds the specified Block to the main dispatch queue after three seconds.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull * NSEC_PER_SEC);

dispatch_after(time, dispatch_get_main_queue(), ^{

        NSLog(@"waited at least three seconds.");

});

In the source code, “ull” is a C literal suffix specifying the type unsigned long long.

Please note that the dispatch_after function doesn’t execute the task after the specified time; it adds the task to the dispatch queue after that time. This source code behaves the same as if you called the dispatch_async function to add the Block to the main dispatch queue three seconds later. The main dispatch queue is drained by the RunLoop on the main thread, so if, for example, the RunLoop runs at 1/60-second intervals, the Block will be executed somewhere between three seconds and three plus 1/60 seconds later. If many tasks are queued on the main dispatch queue, or the main thread itself is delayed, it could be even later. Therefore this function is problematic as an accurate timer, but when you just want to delay a task roughly, it is very useful.

The second argument specifies the dispatch queue to which the task is added, and the third argument is the Block to be executed. The first argument, of type dispatch_time_t, specifies the time. This value is created by the dispatch_time or dispatch_walltime function. The dispatch_time function creates a dispatch_time_t value for the time the given number of nanoseconds (the second argument) after the given time (the first argument). As in the example, DISPATCH_TIME_NOW is typically used as the first argument to mean the current time. The following source code obtains a dispatch_time_t value for one second from now.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);

Multiplying a number by NSEC_PER_SEC yields an interval in nanoseconds. With NSEC_PER_MSEC, you can express millisecond values. The next source code shows how to get a time 150 milliseconds from now.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 150ull * NSEC_PER_MSEC);
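The offset argument is plain integer arithmetic in nanoseconds. The following is a small Python sketch of that arithmetic, shown only to make the unit conversions concrete (the constants are redefined locally with the same values the system headers use; `delay_ns` is a hypothetical helper, not a GCD API):

```python
# Unit constants, with the same values the system headers define.
NSEC_PER_SEC = 1_000_000_000
NSEC_PER_MSEC = 1_000_000

def delay_ns(seconds=0, milliseconds=0):
    """Mirror dispatch_time's offset arithmetic: everything becomes nanoseconds."""
    return seconds * NSEC_PER_SEC + milliseconds * NSEC_PER_MSEC
```

So `3ull * NSEC_PER_SEC` is 3,000,000,000 nanoseconds, and `150ull * NSEC_PER_MSEC` is 150,000,000.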

The dispatch_walltime function creates a dispatch_time_t value from a POSIX struct timespec. While the dispatch_time function is mainly used to create a relative time, the dispatch_walltime function creates an absolute time. For example, you can use dispatch_walltime to obtain an absolute time, such as 11:11:11 on November 11, 2011, to pass to the dispatch_after function. You could build an alarm clock with that, although a low-precision one. A struct timespec value is easily created from an NSDate class object, as shown in Listing 7–3.

Listing 7–3. dispatch_time_t from NSDate

dispatch_time_t getDispatchTimeByDate(NSDate *date)
{
    NSTimeInterval interval;
    double second, subsecond;
    struct timespec time;
    dispatch_time_t milestone;

    interval = [date timeIntervalSince1970];
    subsecond = modf(interval, &second);
    time.tv_sec = second;
    time.tv_nsec = subsecond * NSEC_PER_SEC;
    milestone = dispatch_walltime(&time, 0);

    return milestone;
}

In the source code, a value of dispatch_time_t type is created from the NSDate class object and it is passed to the dispatch_after function.
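The split that Listing 7–3 performs with modf can be sketched outside of GCD as well. The following Python analogue (a hypothetical helper, shown only to make the tv_sec/tv_nsec arithmetic concrete) does the same thing to a raw Unix timestamp:

```python
import math

NSEC_PER_SEC = 1_000_000_000

def timespec_from_interval(interval):
    """Split a Unix timestamp into (tv_sec, tv_nsec), as Listing 7-3 does with modf()."""
    subsecond, second = math.modf(interval)
    return int(second), int(subsecond * NSEC_PER_SEC)
```

The integer part becomes tv_sec and the fractional part, scaled to nanoseconds, becomes tv_nsec.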

Dispatch Group

A dispatch group is used to group tasks together. You may want to start a finalizing task after all the tasks in one or more dispatch queues have finished. When all the tasks are in a single serial dispatch queue, you can just add the finalizing task to the end of that queue; with a concurrent dispatch queue or multiple queues, it gets complicated. In these cases, you can use a dispatch group. The following source code adds three Blocks to a global dispatch queue and, when all of them are finished, executes a finalizing Block on the main dispatch queue.

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{NSLog(@"blk0");});
dispatch_group_async(group, queue, ^{NSLog(@"blk1");});
dispatch_group_async(group, queue, ^{NSLog(@"blk2");});

dispatch_group_notify(group,
    dispatch_get_main_queue(), ^{NSLog(@"done");});
dispatch_release(group);

The result will be like:

blk1
blk2
blk0
done

The execution order of the tasks is not fixed, because they run on a global dispatch queue, which is a concurrent dispatch queue: the tasks are executed on multiple threads concurrently, and their timing varies. Still, “done” must be displayed last every time.

A dispatch group can monitor the completion of tasks no matter which type of dispatch queue they are in. When it detects that all the tasks have finished, the finalizing task is added to the dispatch queue. This is how a dispatch group is used.

First, a dispatch group of type dispatch_group_t is created by the dispatch_group_create function. Because the function name contains “create”, the dispatch group has to be released when you don’t need it anymore, using the dispatch_release function, just as for a dispatch queue.

The dispatch_group_async function adds a Block to the specified dispatch queue, just as the dispatch_async function does. The difference is that a dispatch group is passed as the first argument, and the Block is associated with that group. When a Block is associated with a dispatch group, the Block takes ownership of the group via the dispatch_retain function, just as a Block added to a dispatch queue retains the queue; when the Block finishes, it releases the group via the dispatch_release function. So once you don’t need the dispatch group anymore, simply call dispatch_release; you don’t need to care how the Blocks associated with it are executed.

As in the example, the dispatch_group_notify function specifies a Block to be added to a dispatch queue when all the tasks in the dispatch group are finished. The first argument is the dispatch group to be monitored; when all the tasks associated with it are finished, the Block (the third argument) is added to the dispatch queue (the second argument). Regardless of the type of dispatch queue passed to the dispatch_group_notify function, all the tasks associated with the dispatch group are guaranteed to be finished by the time the Block is added.
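The semantics are easy to reproduce with plain threads. Here is a rough Python analogue, not GCD itself: ordinary threads stand in for the concurrent queue’s workers, and join stands in for the group’s completion tracking.

```python
import threading

results = []
done = []

def task(name):
    results.append(name)   # list.append is thread-safe in CPython

# Three concurrent "Blocks", as with dispatch_group_async on a concurrent queue.
threads = [threading.Thread(target=task, args=(f"blk{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()               # the group's "all tasks finished" detection

done.append("done")        # the dispatch_group_notify Block runs only now
```

The three names can land in `results` in any order, but "done" is appended strictly after all of them, which is exactly the guarantee a dispatch group gives.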

In addition, you can simply wait for all the tasks in a dispatch group to finish, using the dispatch_group_wait function as shown in Listing 7–4.

Listing 7–4. dispatch_group_wait

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{NSLog(@"blk0");});
dispatch_group_async(group, queue, ^{NSLog(@"blk1");});
dispatch_group_async(group, queue, ^{NSLog(@"blk2");});

dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
dispatch_release(group);

The second argument of the dispatch_group_wait function is a timeout of type dispatch_time_t that specifies how long to wait. This example uses DISPATCH_TIME_FOREVER, so it waits until all the tasks associated with the dispatch group are finished, no matter how long that takes; you can’t cancel the wait partway through.

Using the dispatch_time function we saw with dispatch_after, the following source code waits at most one second.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);

long result = dispatch_group_wait(group, time);

if (result == 0) {

     /*
      * All the tasks that are associated with the dispatch group are finished
      */

} else {

     /*
      * some tasks that are associated with the dispatch group are still running.
      */
}

If the dispatch_group_wait function returns a nonzero value, some tasks associated with the dispatch group were still running when the specified time passed. If it returns zero, all the tasks have finished. When DISPATCH_TIME_FOREVER is used, dispatch_group_wait always returns zero, because it returns only after all the tasks are finished.

By the way, what does “wait” mean here? It means that the dispatch_group_wait function does not return immediately: the current thread, which called dispatch_group_wait, stops, and stays stopped either until the specified timeout elapses or until all the tasks associated with the dispatch group have finished.

When DISPATCH_TIME_NOW is used, you can check if the tasks that are associated with the dispatch group are finished.

long result = dispatch_group_wait(group, DISPATCH_TIME_NOW);

For example, you can check in each pass of the main thread’s RunLoop whether the tasks are finished, without blocking. Although that is possible, I recommend using the dispatch_group_notify function to add a finalizing task to the main dispatch queue instead; your source code becomes more elegant. There are still many more useful functions in GCD, which I explain in the following sections, starting with dispatch_barrier_async.

dispatch_barrier_async

The dispatch_barrier_async function lets a task wait for the other tasks already in a queue. As explained, when you access a database or a file, you can use a serial dispatch queue to avoid inconsistent data. An updating task must not run at the same time as other updating or reading tasks; reading tasks, however, can safely run concurrently with other reading tasks. So to access the data efficiently, reading tasks should go to a concurrent dispatch queue, while an updating task must run exclusively, with no reading or updating task running at the same time, and no reading task should start until the update is finished. You could implement this with a dispatch group and the dispatch_set_target_queue function, but it would be complex. GCD offers a better solution: the dispatch_barrier_async function. It is used with a concurrent dispatch queue created by the dispatch_queue_create function. The following source code creates such a queue with the dispatch_queue_create function and adds some reading tasks with dispatch_async.

dispatch_queue_t queue = dispatch_queue_create(
    "com.example.gcd.ForBarrier", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);
dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);
dispatch_async(queue, blk4_for_reading);
dispatch_async(queue, blk5_for_reading);
dispatch_async(queue, blk6_for_reading);
dispatch_async(queue, blk7_for_reading);

dispatch_release(queue);

Next, suppose you want to write data between blk3_for_reading and blk4_for_reading, and blk4_for_reading and the later tasks should read the updated data.

dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);
dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);

 /*
  * Writing data
  *
  * From now on, all the tasks should read the updated data.
  */

dispatch_async(queue, blk4_for_reading);
dispatch_async(queue, blk5_for_reading);
dispatch_async(queue, blk6_for_reading);
dispatch_async(queue, blk7_for_reading);

If we simply add the writing task with the dispatch_async function, as in the next source code, tasks added before the writing task might read the updated data unexpectedly, and the application might even crash because of invalid access. That is the nature of a concurrent dispatch queue. Moreover, if you add multiple writing tasks this way, the data can become inconsistent and even more problems will occur.

dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);
dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);
dispatch_async(queue, blk_for_writing);
dispatch_async(queue, blk4_for_reading);
dispatch_async(queue, blk5_for_reading);
dispatch_async(queue, blk6_for_reading);
dispatch_async(queue, blk7_for_reading);

This is where the dispatch_barrier_async function comes in. Using it, you can add a task to a concurrent dispatch queue that runs only after all the tasks already in the queue have finished. When the task added by dispatch_barrier_async finishes, the concurrent dispatch queue goes back to normal, executing tasks concurrently as usual, as shown in Figure 7–9.

dispatch_async(queue, blk0_for_reading);
dispatch_async(queue, blk1_for_reading);
dispatch_async(queue, blk2_for_reading);
dispatch_async(queue, blk3_for_reading);
dispatch_barrier_async(queue, blk_for_writing);
dispatch_async(queue, blk4_for_reading);
dispatch_async(queue, blk5_for_reading);
dispatch_async(queue, blk6_for_reading);
dispatch_async(queue, blk7_for_reading);

As you can see, it is very simple: just use the dispatch_barrier_async function instead of the dispatch_async function. That’s all.


Figure 7–9. Execution with the dispatch_barrier_async function
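The barrier’s reader/writer semantics can be sketched in Python, with a thread pool playing the concurrent queue and an explicit wait standing in for the barrier. This is an analogue only: dispatch_barrier_async achieves the same ordering without blocking the thread that submits the tasks.

```python
from concurrent.futures import ThreadPoolExecutor, wait

data = {"value": 0}
log = []

def read_task(i):
    log.append((i, data["value"]))

pool = ThreadPoolExecutor(max_workers=4)   # plays the concurrent dispatch queue

first = [pool.submit(read_task, i) for i in range(4)]
wait(first)          # "barrier": let every earlier reading task drain
data["value"] = 1    # the writing task runs with nothing else in flight
second = [pool.submit(read_task, i) for i in range(4, 8)]
wait(second)
pool.shutdown()
```

Every reading task submitted before the write observes the old value, and every one submitted after it observes the new value, which is the ordering dispatch_barrier_async guarantees.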

Please use a concurrent dispatch queue together with the dispatch_barrier_async function to implement efficient database or file access. Next, let’s look at the dispatch_sync function, which is similar to the dispatch_async function.

dispatch_sync

dispatch_sync is a function similar to dispatch_async, except that it waits for the added task to finish. The “async” in dispatch_async means asynchronous: it adds a Block to a dispatch queue and returns immediately, without waiting for anything, as shown in Figure 7–10.


Figure 7–10. Behavior of dispatch_async function

There is also a synchronous version, the dispatch_sync function. It adds a Block to the dispatch queue synchronously; that is, the dispatch_sync function waits for the added Block to finish, as shown in Figure 7–11.


Figure 7–11. Behavior of dispatch_sync function

As I explained for the dispatch_group_wait function in the “Dispatch Group” section, “wait” means that the current thread stops. For example, on the main dispatch queue you might want to use the result of a task executed on another thread via a global dispatch queue. In such a situation, you can use the dispatch_sync function.

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_sync(queue, ^{/* a task */});
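This “submit and block until finished” behavior maps directly onto a future. Here is a minimal Python sketch of the same idea (a thread-pool analogue, not GCD):

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=1)   # plays a queue's worker thread

# Like dispatch_sync: hand the task over, then block until it has finished
# and its result is available on the calling thread.
result = pool.submit(lambda: 6 * 7).result()
pool.shutdown()
```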

After the dispatch_sync function is called, it doesn’t return until the specified task has finished; it is like a simplified version of the dispatch_group_wait function. As you can see, the source code is very simple, and the dispatch_sync function is very easy to use, but it can produce a problem called deadlock. For example, when the following source code is executed on the main thread, it deadlocks.

dispatch_queue_t queue = dispatch_get_main_queue();
dispatch_sync(queue, ^{NSLog(@"Hello?");});

This source code adds a Block to the main dispatch queue, meaning the Block is to be executed on the main thread, and at the same time it waits for that Block to finish. Because the main thread is stopped inside dispatch_sync, the Block on the main dispatch queue can never be executed. Let’s see another example:

dispatch_queue_t queue = dispatch_get_main_queue();
dispatch_async(queue, ^{
    dispatch_sync(queue, ^{NSLog(@"Hello?");});
});

The Block running on the main dispatch queue waits for another Block to finish, one that is also supposed to run on the main dispatch queue. That causes a deadlock.

Of course the same thing happens with a serial dispatch queue.

dispatch_queue_t queue =
    dispatch_queue_create("com.example.gcd.MySerialDispatchQueue", NULL);
dispatch_async(queue, ^{
    dispatch_sync(queue, ^{NSLog(@"Hello?");});
});

As the name dispatch_barrier_async contains “async”, there is a sync version as well: dispatch_barrier_sync. Like dispatch_barrier_async, it adds the specified task after all the tasks in the queue have finished; like dispatch_sync, it then waits for that task to finish. Whenever you use a synchronous API such as dispatch_sync that waits for a task to finish, ask yourself why you really need it. You don’t want to deadlock the application, do you?

dispatch_apply

dispatch_apply is a function related to the dispatch_sync function and dispatch groups: it adds a Block to a dispatch queue a specified number of times and then waits until all of those tasks are finished.

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(10, queue, ^(size_t index) {
    NSLog(@"%zu", index);
});
NSLog(@"done");

The result is something like:

4
1
0
3
5
2
6
8
9
7
done

Because the tasks are executed on a global dispatch queue, the timing of each one varies, but “done” is always last, because the dispatch_apply function waits for all the tasks to finish.

The first argument is the number of iterations, the second is the target dispatch queue, and the third is the task to add to the queue. In this example, the Block takes an index argument to distinguish the invocations, because the same Block is added multiple times. For example, when you want to do something with every entry of an NSArray class object, you don’t need to write a for loop. In the following source code, assume an NSArray class object is assigned to a variable “array”.

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply([array count], queue, ^(size_t index) {
    NSLog(@"%zu: %@", index, [array objectAtIndex:index]);
});
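This index-per-invocation pattern is essentially a parallel map. A rough Python analogue of the array example (again, a thread pool plays the global queue):

```python
from concurrent.futures import ThreadPoolExecutor

array = ["a", "b", "c", "d"]

with ThreadPoolExecutor() as pool:
    # One task per index, like dispatch_apply([array count], queue, ^(size_t index){...});
    # leaving the `with` block waits for all of them, as dispatch_apply does.
    labels = list(pool.map(lambda index: f"{index}: {array[index]}",
                           range(len(array))))
```

The tasks may run in any order, but `pool.map` returns the results in index order.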

It is very easy to execute a Block for every entry of an array on a global dispatch queue. Because the dispatch_apply function, like dispatch_sync, waits for all its tasks to finish, I recommend calling dispatch_apply inside a dispatch_async Block so that the whole thing runs asynchronously, as shown in Listing 7–5.

Listing 7–5. dispatch_apply

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

 /*
  * Executing on a global dispatch queue asynchronously
  */

dispatch_async(queue, ^{

     /*
      * On the global dispatch queue, the dispatch_apply function
      * waits for all the tasks to be finished.
      */

    dispatch_apply([array count], queue, ^(size_t index) {

         /*
          * do something concurrently with all the objects in the NSArray object
          */

        NSLog(@"%zu: %@", index, [array objectAtIndex:index]);

    });

     /*
      * All the tasks by dispatch_apply function are finished.
      */

     /*
      * Execute on the main dispatch queue asynchronously
      */

    dispatch_async(dispatch_get_main_queue(), ^{

         /*
          * Executed on the main dispatch queue.
          * Something like updating the user interface, etc.
          */

        NSLog(@"done");

    });
});

Next, let’s see the dispatch_suspend and dispatch_resume functions that control the execution of the added tasks.

dispatch_suspend/dispatch_resume

These functions suspend or resume a dispatch queue. When you add many tasks to a dispatch queue, you sometimes don’t want them to start executing until all of them have been added, for example when the Blocks capture values that other tasks may affect. In such situations, suspend the dispatch queue first and resume it when you are ready for the tasks to execute. A dispatch queue is suspended by the dispatch_suspend function:

dispatch_suspend(queue);

It can be resumed by the dispatch_resume function:

 dispatch_resume(queue);

Suspending doesn’t affect tasks that are already running; it only prevents tasks that are in the dispatch queue but not yet started from starting. After the queue is resumed, those tasks are executed.
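Here is a rough Python sketch of the “add everything first, start later” idea, using an Event as the gate. This is only an analogue: like dispatch_suspend, the gate delays tasks that haven’t started, and would not interrupt a running one.

```python
import threading

gate = threading.Event()   # cleared == "suspended"
log = []

def gated_task(i):
    gate.wait()            # a not-yet-started task can't begin while suspended
    log.append(i)

threads = [threading.Thread(target=gated_task, args=(i,)) for i in range(5)]
for t in threads:
    t.start()                        # all five tasks are queued up...
suspended_snapshot = list(log)       # ...but none has run yet

gate.set()                           # "dispatch_resume"
for t in threads:
    t.join()
```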

Dispatch Semaphore

A dispatch semaphore is useful when you need concurrency control at a finer granularity than a serial dispatch queue or the dispatch_barrier_async function provides.

As explained, when data are updated concurrently, the data may become inconsistent or the application might crash. You can avoid that with a serial dispatch queue or the dispatch_barrier_async function, but sometimes the concurrency control has to be finer-grained. As an example, consider adding items to an NSMutableArray when their order doesn’t matter, as shown in Listing 7–6.

Listing 7–6. Adding data to NSMutableArray

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

NSMutableArray *array = [[NSMutableArray alloc] init];

for (int i = 0; i < 100000; ++i) {
    dispatch_async(queue, ^{

        [array addObject:[NSNumber numberWithInt:i]];

    });
}

In this source code, an NSMutableArray class object is updated from a global dispatch queue, which means it is updated by multiple threads at the same time. Because the NSMutableArray class is not thread-safe, updating the object from many threads corrupts it, and the application will probably crash with a memory-related error. This is a race condition, and a dispatch semaphore can solve it. (A dispatch semaphore is really meant for finer-grained control than this; we use the example only to show how the API works.)

A dispatch semaphore is a semaphore with a counter, known as a counting semaphore in multithreaded programming. “Semaphore” originally refers to traffic control by flag signals: flag up means go, flag down means stop. A dispatch semaphore simulates the flag with a counter. When the counter is zero, execution waits; when the counter is one or more, it decrements the counter and keeps going.

Let’s see how to use a dispatch semaphore. The following source code creates one with the dispatch_semaphore_create function.

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

The argument is the initial value of the counter; in this example, the counter is initialized to one. Because the name includes “create”, you have to release the semaphore with the dispatch_release function, just as for a dispatch queue or a dispatch group, and you can take ownership with the dispatch_retain function as well.

dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

The dispatch_semaphore_wait function waits until the counter of the dispatch semaphore is one or more. When the counter is already one or more, or becomes one or more while waiting, the function decrements the counter by one and returns. The second argument, of type dispatch_time_t, specifies how long to wait; in this example, it waits forever. The return value of dispatch_semaphore_wait has the same meaning as that of dispatch_group_wait, so you can branch on it as shown in Listing 7–7.

Listing 7–7. dispatch_semaphore_wait

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);

long result = dispatch_semaphore_wait(semaphore, time);

if (result == 0) {

     /*
      * The counter of the dispatch semaphore was one or more,
      * or it became one or more before the specified timeout.
      * The counter is automatically decreased by one.
      *
      * Here, you can execute a task that needs a concurrency control.
      */

} else {

     /*
      * Because the counter of the dispatch semaphore stayed zero,
      * the function waited until the specified timeout elapsed.
      */

}

When the dispatch_semaphore_wait function returns zero, a task that needs concurrency control can be executed safely. After the task finishes, you have to call the dispatch_semaphore_signal function to increment the counter of the dispatch semaphore by one. Listing 7–8 shows how a dispatch semaphore fixes the earlier source code (Listing 7–6).

Listing 7–8. Adding data to NSMutableArray using dispatch semaphore

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

 /*
  * Create a dispatch semaphore
  *
  * Set the initial value 1 for the counter of the dispatch semaphore
  * to assure that only one thread will access the object of
  * NSMutableArray class at the same time.
  */

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

NSMutableArray *array = [[NSMutableArray alloc] init];

for (int i = 0; i < 100000; ++i) {
    dispatch_async(queue, ^{

             /*
              * Wait for the dispatch semaphore.
              *
              * Wait forever until the counter of the dispatch
              * semaphore becomes one or more.
              */

            dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

             /*
              * Because the counter of the dispatch semaphore was one or more,
              * the counter was decremented by one and the program flow
              * returned from the dispatch_semaphore_wait function.
              *
              * The counter of the dispatch semaphore is always zero here.
              *
              * Because only one thread can access the object of the
              * NSMutableArray class at a time, the object can be updated safely.
              */

            [array addObject:[NSNumber numberWithInt:i]];

             /*
              * Because the task that needs concurrency control is done,
              * the dispatch_semaphore_signal function is called
              * to increment the counter of the dispatch semaphore.
              *
              * If any threads are waiting in dispatch_semaphore_wait for the
              * counter to be incremented, the first waiting thread resumes.
              */

            dispatch_semaphore_signal(semaphore);
    });
}

 /*
  * Because the dispatch semaphore isn’t needed anymore,
  * it has to be released:
  *
  * dispatch_release(semaphore);
  */
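The pattern in Listing 7–8 translates almost one-to-one to Python’s counting semaphore. This is an analogue only; note that in CPython the append alone would already be safe under the GIL, so the semaphore here purely mirrors the GCD structure:

```python
import threading

semaphore = threading.Semaphore(1)   # counter starts at 1, as with dispatch_semaphore_create(1)
array = []

def append_task(i):
    semaphore.acquire()   # dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    array.append(i)       # at most one thread executes this at a time
    semaphore.release()   # dispatch_semaphore_signal(semaphore)

threads = [threading.Thread(target=append_task, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```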

Next, let’s see the dispatch_once function.

dispatch_once

The dispatch_once function ensures that a specified task is executed only once during the application’s lifetime. The following is typical initialization code, which can be made more elegant with the dispatch_once function.

static int initialized = NO;

if (initialized == NO)
{

     /*
      * Initializing
      */

    initialized = YES;
}

With the dispatch_once function, it will be modified as follows.

static dispatch_once_t pred;

dispatch_once(&pred, ^{

     /*
      * Initializing
      */

});

The two versions don’t look very different, but with the dispatch_once function the initialization is safe even in a multithreaded environment. The former source code is safe in most cases too, but on a multicore CPU there is a small chance that the ‘initialized’ variable is read at the same moment its value is being overwritten, in which case the initialization could run more than once. With the dispatch_once function there is no such worry. The dispatch_once function is handy for creating a singleton object in the so-called singleton pattern.
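What dispatch_once provides is a safe form of the check-then-initialize pattern. The following is a Python sketch of a double-checked version, hand-rolled purely for illustration; in GCD you would simply call dispatch_once:

```python
import threading

_lock = threading.Lock()
_initialized = False
init_count = 0

def initialize_once():
    global _initialized, init_count
    if _initialized:              # fast path: no lock once initialized
        return
    with _lock:
        if not _initialized:      # re-check: another thread may have won the race
            init_count += 1       # the "Initializing" work runs exactly once
            _initialized = True

threads = [threading.Thread(target=initialize_once) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Ten threads race to initialize, yet the initializing work runs exactly once; the unsynchronized version in the text above is the one that can misfire on a multicore CPU.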