Grand Central Dispatch (GCD) Reference

Inherits From


Not Applicable

Conforms To


Not Applicable

Import Statement


Not Applicable

Availability


Not Applicable

Grand Central Dispatch (GCD) comprises language features, runtime libraries, and system enhancements that provide systemic, comprehensive improvements to the support for concurrent code execution on multicore hardware in iOS and OS X.

The BSD subsystem, Core Foundation, and Cocoa APIs have all been extended to use these enhancements to help both the system and your application run faster, more efficiently, and with improved responsiveness. Consider how difficult it is for a single application to use multiple cores effectively, let alone to do so on different computers with different numbers of computing cores or in an environment with multiple applications competing for those cores. GCD, operating at the system level, can better accommodate the needs of all running applications, matching them to the available system resources in a balanced fashion.

This document describes the GCD API, which supports the asynchronous execution of operations at the Unix level of the system. You can use this API to manage interactions with file descriptors, Mach ports, signals, or timers. In OS X v10.7 and later, you can also use GCD to handle general purpose asynchronous I/O operations on file descriptors.

GCD is not restricted to system-level applications, but before you use it for higher-level applications, you should consider whether similar functionality provided in Cocoa (via NSOperation and block objects) would be easier to use or more appropriate for your needs. See Concurrency Programming Guide for more information.

IMPORTANT

Be careful when mixing GCD with the fork system call. If a process makes GCD calls prior to calling fork, it is not safe to make additional GCD calls in the resulting child process until after a successful call to exec or related functions.

GCD Objects and Automatic Reference Counting

When you build your app using the Objective-C compiler, all dispatch objects are Objective-C objects. As such, when automatic reference counting (ARC) is enabled, dispatch objects are retained and released automatically just like any other Objective-C object. When ARC is not enabled, use the dispatch_retain and dispatch_release functions (or Objective-C semantics) to retain and release your dispatch objects. You cannot use the Core Foundation retain/release functions.

If you need to use retain/release semantics in an ARC-enabled app whose deployment target would otherwise give you Objective-C-based dispatch objects (for example, to maintain compatibility with existing code), you can disable Objective-C-based dispatch objects by adding -DOS_OBJECT_USE_OBJC=0 to your compiler flags.
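
As a minimal sketch of the difference, the following fragment creates a queue (the label is illustrative) and releases it manually only when dispatch objects are not Objective-C objects, that is, when ARC is off or OS_OBJECT_USE_OBJC is defined as 0:

    #import <dispatch/dispatch.h>

    void submitWork(void) {
        // "com.example.worker" is an illustrative label, not a required name.
        dispatch_queue_t queue = dispatch_queue_create("com.example.worker",
                                                       DISPATCH_QUEUE_SERIAL);
        dispatch_async(queue, ^{
            // Perform work here.
        });

    #if !OS_OBJECT_USE_OBJC
        // Needed only when dispatch objects are plain C objects
        // (ARC disabled, or -DOS_OBJECT_USE_OBJC=0).
        dispatch_release(queue);
    #endif
        // With ARC and Objective-C dispatch objects, the queue is released
        // automatically when the last reference to it goes away.
    }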

Functions

Creating and Managing Queues

Queuing Tasks for Dispatch

GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Blocks submitted to dispatch queues are executed on a pool of threads fully managed by the system. No guarantee is made as to the thread on which a task executes. GCD offers three kinds of queues:

  • Main: tasks execute serially on your application’s main thread.

  • Concurrent: tasks are dequeued in FIFO order, but run concurrently and can finish in any order.

  • Serial: tasks execute one at a time in FIFO order.

The main queue is automatically created by the system and associated with your application’s main thread. Your application uses one (and only one) of the following three approaches to invoke blocks submitted to the main queue:

  • Calling dispatch_main

  • Calling UIApplicationMain (iOS) or NSApplicationMain (OS X)

  • Using a CFRunLoop on the main thread
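
For a tool that is not built around UIApplicationMain or NSApplicationMain, a minimal sketch of the dispatch_main approach looks like this (the printed message is just a placeholder):

    #import <dispatch/dispatch.h>
    #import <stdio.h>

    int main(void) {
        // Blocks submitted to the main queue run once the main thread
        // starts draining that queue.
        dispatch_async(dispatch_get_main_queue(), ^{
            printf("running on the main queue\n");
        });

        // Parks the main thread and processes blocks submitted to the
        // main queue; dispatch_main never returns.
        dispatch_main();
    }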

Use concurrent queues to execute large numbers of tasks concurrently. GCD automatically creates four concurrent dispatch queues (three prior to iOS 5 or OS X v10.7) that are global to your application and are differentiated only by their priority level. Your application requests these queues using the dispatch_get_global_queue function. Because these concurrent queues are global to your application, you do not need to retain and release them; retain and release calls for them are ignored. In OS X v10.7 and later or iOS 4.3 and later, you can also create additional concurrent queues for use in your own code modules.
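
As a minimal sketch, the following function (its name and printed messages are illustrative) requests the default-priority global queue, runs work on it, and then hops back to the main queue for any follow-up that must happen on the main thread:

    #import <dispatch/dispatch.h>
    #import <stdio.h>

    void startBackgroundWork(void) {
        dispatch_queue_t global =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        dispatch_async(global, ^{
            // Long-running work happens off the main thread.
            printf("working on a global concurrent queue\n");

            dispatch_async(dispatch_get_main_queue(), ^{
                // Follow-up that must run on the main thread (UI updates, say).
                printf("back on the main queue\n");
            });
        });
    }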

Use serial queues to ensure that tasks execute in a predictable order. It’s good practice to identify a specific purpose for each serial queue, such as protecting a resource or synchronizing key processes. Your application must explicitly create and manage serial queues. It can create as many of them as necessary, but should avoid using them instead of concurrent queues just to execute many tasks simultaneously.
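
A common pattern is to use a private serial queue as a lightweight lock around shared state. The sketch below assumes a hypothetical counter; the queue label and function names are illustrative:

    #import <dispatch/dispatch.h>

    static dispatch_queue_t counterQueue;   // created once, in setUpCounter
    static long counter;                    // only ever touched on counterQueue

    void setUpCounter(void) {
        counterQueue = dispatch_queue_create("com.example.counter",
                                             DISPATCH_QUEUE_SERIAL);
    }

    void incrementCounter(void) {
        // Writes are serialized by the queue, so no explicit lock is needed.
        dispatch_async(counterQueue, ^{
            counter += 1;
        });
    }

    long currentCounter(void) {
        __block long value;
        // dispatch_sync waits for earlier writes to drain before reading.
        dispatch_sync(counterQueue, ^{
            value = counter;
        });
        return value;
    }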

IMPORTANT

GCD is a C-level API; it does not catch exceptions generated by higher-level languages. Your application must catch all exceptions before returning from a block submitted to a dispatch queue.
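
One way to honor this requirement in Objective-C is to wrap exception-prone work in @try/@catch inside the block itself; the raised exception below is only a stand-in for real work that might throw:

    #import <Foundation/Foundation.h>

    void submitSafely(dispatch_queue_t queue) {
        dispatch_async(queue, ^{
            @try {
                // Stand-in for work that may raise an NSException.
                [NSException raise:NSGenericException format:@"example failure"];
            }
            @catch (NSException *exception) {
                // Handle or log the exception; never let it escape the block.
                NSLog(@"caught %@: %@", exception.name, exception.reason);
            }
        });
    }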

Using Dispatch Groups

Grouping blocks allows for aggregate synchronization. Your application can submit multiple blocks and track when they all complete, even though they might run on different queues. This behavior can be helpful when progress can’t be made until all of the specified tasks are complete.
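
A minimal sketch of this pattern, assuming ARC (so no explicit dispatch_release) and using placeholder messages:

    #import <dispatch/dispatch.h>
    #import <stdio.h>

    void runGroupedWork(void) {
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t global =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // These blocks may run concurrently and on different threads.
        dispatch_group_async(group, global, ^{
            printf("task A finished\n");
        });
        dispatch_group_async(group, global, ^{
            printf("task B finished\n");
        });

        // Runs only after every block submitted to the group has completed.
        dispatch_group_notify(group, dispatch_get_main_queue(), ^{
            printf("all grouped tasks are done\n");
        });
    }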

Managing Dispatch Objects

GCD provides dispatch object interfaces to allow your application to manage aspects of processing such as memory management, pausing and resuming execution, defining object context, and logging task data. Dispatch objects must be manually retained and released and are not garbage collected.
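
The sketch below suspends a queue, attaches application context to it, and resumes it; the context payload and finalizer are illustrative, and the example assumes ARC for the queue itself:

    #import <dispatch/dispatch.h>
    #import <stdlib.h>

    void configureQueue(void) {
        dispatch_queue_t queue = dispatch_queue_create("com.example.deferred",
                                                       DISPATCH_QUEUE_SERIAL);

        // While suspended, submitted blocks accumulate but do not run.
        dispatch_suspend(queue);
        dispatch_async(queue, ^{
            // Deferred work goes here.
        });

        // Attach arbitrary context; the finalizer is invoked with that
        // context when the queue is finally destroyed.
        void *context = malloc(16);
        dispatch_set_context(queue, context);
        dispatch_set_finalizer_f(queue, free);

        // Balance the earlier suspend so the deferred block can run.
        dispatch_resume(queue);
    }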

Using Semaphores

A dispatch semaphore is an efficient implementation of a traditional counting semaphore. Dispatch semaphores call down to the kernel only when the calling thread needs to be blocked. If the calling thread does not need to block, no kernel call is made.
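
As a sketch, a semaphore created with a count of two admits at most two blocks at a time into a protected region; the resource being limited here is hypothetical:

    #import <dispatch/dispatch.h>

    void runLimitedWork(void) {
        dispatch_semaphore_t slots = dispatch_semaphore_create(2);
        dispatch_queue_t global =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        for (int i = 0; i < 10; i++) {
            dispatch_async(global, ^{
                // Blocks (and involves the kernel) only when no slot is free.
                dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
                // ... use the limited resource ...
                dispatch_semaphore_signal(slots);
            });
        }
    }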

Using Barriers

A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
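
A typical use is a readers-writer scheme on a private concurrent queue (barriers have no special effect on the global queues). The shared value and labels below are placeholders:

    #import <dispatch/dispatch.h>

    static dispatch_queue_t dataQueue;   // private concurrent queue
    static int sharedValue;              // placeholder shared state

    void setUpDataQueue(void) {
        dataQueue = dispatch_queue_create("com.example.data",
                                          DISPATCH_QUEUE_CONCURRENT);
    }

    int readValue(void) {
        __block int value;
        // Reads may run concurrently with other reads.
        dispatch_sync(dataQueue, ^{ value = sharedValue; });
        return value;
    }

    void writeValue(int newValue) {
        // The barrier block waits for in-flight reads, runs alone, and then
        // the queue resumes its normal concurrent behavior.
        dispatch_barrier_async(dataQueue, ^{ sharedValue = newValue; });
    }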

Managing Dispatch Sources

GCD provides a suite of dispatch sources: interfaces for monitoring low-level system objects (such as Unix descriptors, Mach ports, Unix signals, VFS nodes, and so forth) for activity and for submitting event handlers to dispatch queues when such activity occurs. When an event occurs, the dispatch source submits your task code asynchronously to the specified dispatch queue for processing.
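
As a minimal sketch, the following creates a timer dispatch source that fires roughly once per second; the interval, leeway, and printed message are illustrative:

    #import <dispatch/dispatch.h>
    #import <stdio.h>

    dispatch_source_t startRepeatingTimer(void) {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        dispatch_source_t timer =
            dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

        // Fire every second, allowing up to 100 ms of leeway.
        dispatch_source_set_timer(timer,
                                  dispatch_time(DISPATCH_TIME_NOW, 0),
                                  1ull * NSEC_PER_SEC,
                                  100ull * NSEC_PER_MSEC);
        dispatch_source_set_event_handler(timer, ^{
            printf("timer fired\n");
        });

        // Sources are created in a suspended state; resume to start delivery.
        dispatch_resume(timer);
        return timer;   // the caller keeps the source alive
    }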

Using the Dispatch I/O Convenience API

The dispatch I/O convenience API lets you perform asynchronous read and write operations on file descriptors. This API supports stream-based semantics for accessing the contents of the file descriptor.
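
A sketch of the convenience API, assuming an already-openable file at an illustrative path:

    #import <dispatch/dispatch.h>
    #import <fcntl.h>
    #import <stdint.h>
    #import <stdio.h>
    #import <unistd.h>

    void readWholeFile(void) {
        int fd = open("/tmp/example.txt", O_RDONLY);   // illustrative path
        if (fd < 0) return;

        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // Read until end-of-file; the handler runs once the operation ends.
        dispatch_read(fd, SIZE_MAX, queue, ^(dispatch_data_t data, int error) {
            if (error == 0) {
                printf("read %zu bytes\n", dispatch_data_get_size(data));
            }
            close(fd);
        });
    }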

Using the Dispatch I/O Channel API

The dispatch I/O channel API lets you manage file descriptor–based operations. This API supports both stream-based and random-access semantics for accessing the contents of the file descriptor.
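
A sketch of the channel API using random-access semantics; the path, offset, and length are illustrative:

    #import <dispatch/dispatch.h>
    #import <fcntl.h>
    #import <stdio.h>
    #import <unistd.h>

    void readRangeWithChannel(void) {
        int fd = open("/tmp/example.dat", O_RDONLY);   // illustrative path
        if (fd < 0) return;

        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // The cleanup handler owns closing the descriptor.
        dispatch_io_t channel =
            dispatch_io_create(DISPATCH_IO_RANDOM, fd, queue, ^(int error) {
                close(fd);
            });

        // Read 1024 bytes starting at offset 4096; the handler may be
        // called more than once with partial data.
        dispatch_io_read(channel, 4096, 1024, queue,
                         ^(bool done, dispatch_data_t data, int error) {
            if (error == 0 && data != NULL) {
                printf("got %zu bytes\n", dispatch_data_get_size(data));
            }
            if (done) {
                dispatch_io_close(channel, 0);
            }
        });
    }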

Managing Dispatch Data Objects

Dispatch data objects present an interface for managing a memory-based data buffer. Clients accessing the data buffer see it as a contiguous block of memory, but internally the buffer may be composed of multiple discontiguous blocks of memory.
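
The sketch below builds two dispatch data objects, concatenates them (the result may be backed by discontiguous regions), and walks each contiguous region; the strings are placeholders:

    #import <dispatch/dispatch.h>
    #import <stdio.h>
    #import <string.h>

    void demonstrateDispatchData(void) {
        const char *first  = "hello, ";
        const char *second = "world";

        // DISPATCH_DATA_DESTRUCTOR_DEFAULT copies the buffers, so no
        // destructor queue is needed.
        dispatch_data_t a = dispatch_data_create(first, strlen(first), NULL,
                                                 DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        dispatch_data_t b = dispatch_data_create(second, strlen(second), NULL,
                                                 DISPATCH_DATA_DESTRUCTOR_DEFAULT);

        // The concatenation may be made of two discontiguous regions.
        dispatch_data_t combined = dispatch_data_create_concat(a, b);

        // Visit each contiguous region of the logical buffer.
        dispatch_data_apply(combined,
            ^bool(dispatch_data_t region, size_t offset,
                  const void *buffer, size_t size) {
                printf("region at offset %zu, %zu bytes\n", offset, size);
                return true;   // keep iterating
            });
    }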

Managing Time

Managing Queue-Specific Context Data

Data Types

Constants
