OC Low-Level Exploration (21): The Internals of GCD Async, GCD Sync, Singletons, Semaphores, Dispatch Groups, and Barrier Functions

OC Low-Level Article Index

The previous article, OC Low-Level Exploration (19): Multithreading, covered how to use GCD in Objective-C. So what does GCD look like underneath?

1. Singleton Source: dispatch_once

1.1 Setup

  • Create a GCD singleton:
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        <#code to be executed once#>
    });

1.2 Source Analysis

  • Looking at dispatch_once, we find it calls dispatch_once_f; the parameter val is the onceToken and block is the function to run:
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
	dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}

👇

  • Looking at dispatch_once_f — val is the onceToken, ctxt is the context pointer, and func is the function wrapped from the block:

    • the onceToken is cast to a gate l, whose dgo_once value is atomically loaded into v
    • if v equals DLOCK_ONCE_DONE, the block has already run: return immediately
    • the quiescent-counter variant additionally checks whether v is a generation marker and, if so, marks the gate done
    • otherwise _dispatch_once_gate_tryenter is attempted; on success, the flow below continues
    • and _dispatch_once_callout is executed (losers wait in _dispatch_once_wait)
DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
	dispatch_once_gate_t l = (dispatch_once_gate_t)val;

#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
	if (likely(v == DLOCK_ONCE_DONE)) {
		return;
	}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	if (likely(DISPATCH_ONCE_IS_GEN(v))) {
		return _dispatch_once_mark_done_if_quiesced(l, v);
	}
#endif
#endif
	if (_dispatch_once_gate_tryenter(l)) {
		return _dispatch_once_callout(l, ctxt, func);
	}
	return _dispatch_once_wait(l);
}

👇

  • Looking at the _dispatch_once_gate_tryenter flow:

    • it atomically compares l->dgo_once against DLOCK_ONCE_UNLOCKED; if the gate is still unlocked, it stores the current thread's lock value (_dispatch_lock_value_for_self) and returns true, so only the first thread in gets to run the block
DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_once_gate_tryenter(dispatch_once_gate_t l)
{
	return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED,
			(uintptr_t)_dispatch_lock_value_for_self(), relaxed);
}

👇

  • Looking at the _dispatch_once_callout flow:

    • _dispatch_client_callout invokes the passed-in function func
    • once the call returns, _dispatch_once_gate_broadcast unlocks the gate
DISPATCH_NOINLINE
static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
		dispatch_function_t func)
{
//invoke the function
	_dispatch_client_callout(ctxt, func);
	//unlock (mark the gate done and wake waiters)
	_dispatch_once_gate_broadcast(l);
}

👇

  • Looking at _dispatch_once_gate_broadcast:

    • get the lock value for the current thread
    • DISPATCH_ONCE_USE_QUIESCENT_COUNTER is a compile-time macro selecting the platform-specific path
    • if the previous value equals our own lock value, no other thread is waiting: return without further work; otherwise wake the waiters
static inline void
_dispatch_once_gate_broadcast(dispatch_once_gate_t l)
{
	dispatch_lock value_self = _dispatch_lock_value_for_self();
	uintptr_t v;
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	v = _dispatch_once_mark_quiescing(l);
#else
	v = _dispatch_once_mark_done(l);
#endif
//if no other thread is waiting on the gate, return without further work
	if (likely((dispatch_lock)v == value_self)) return;
	_dispatch_gate_broadcast_slow(&l->dgo_gate, (dispatch_lock)v);
}

👇

  • DISPATCH_ONCE_USE_QUIESCENT_COUNTER is a compile-time macro that selects the platform-specific path; here we follow _dispatch_once_mark_done.
    👇
  • In _dispatch_once_mark_done, os_atomic_xchg atomically sets the dgo_once marker to DLOCK_ONCE_DONE, so the next dispatch_once call sees DLOCK_ONCE_DONE and returns immediately.
DISPATCH_ALWAYS_INLINE
static inline uintptr_t
_dispatch_once_mark_done(dispatch_once_gate_t dgo)
{
	return os_atomic_xchg(&dgo->dgo_once, DLOCK_ONCE_DONE, release);
}

2. dispatch_async

2.1 Setup

  • Dispatch a block asynchronously onto a custom concurrent queue:
    dispatch_queue_t conque = dispatch_queue_create("qiu", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(conque, ^{
        NSLog(@"12334");
    });

2.2 Source Analysis

2.2.1 dispatch_async

  • Looking at dispatch_async, it proceeds to _dispatch_continuation_async:

    • part1: _dispatch_continuation_init wraps the work block into a continuation
    • part2: _dispatch_continuation_async hands the continuation to the queue
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	// task wrapper: accept - save - functional form
	// save the block
	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

👇

part1: entering _dispatch_continuation_init
  • ctxt points at (a copy of) the work block to execute
  • work is wrapped into a dispatch_function_t named func
  • it then enters _dispatch_continuation_init_f
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
//ctxt points at a heap copy of the work block
	void *ctxt = _dispatch_Block_copy(work);

	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

//wrap work into a dispatch_function_t
	dispatch_function_t func = _dispatch_Block_invoke(work);
	//dispatch_async passes DC_FLAG_CONSUME in dc_flags
	if (dc_flags & DC_FLAG_CONSUME) {
	//_dispatch_call_block_and_release invokes and then releases the work block
		func = _dispatch_call_block_and_release;
	}
	
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

👇

  • Entering _dispatch_continuation_init_f, where ctxt is the pointer to the work block and f is the function to execute:

    • the passed-in parameters are stored on the continuation dc,
    • then _dispatch_continuation_voucher_set and _dispatch_continuation_priority_set run
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

👇

part2: entering _dispatch_continuation_async
  • which ends in a dx_push call
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}

👇

  • Searching globally for dx_push turns up only a macro definition, which forwards through the queue's vtable to dq_push.

👇

  • Searching for dq_push turns up many implementations, one per queue type; for a concurrent queue it is _dispatch_lane_concurrent_push.

👇

  • Looking at _dispatch_lane_concurrent_push:

  • if the queue is empty and the item is neither a waiter nor a barrier (and non-barrier width can be reserved), it runs _dispatch_continuation_redirect_push

  • otherwise it runs _dispatch_lane_push

DISPATCH_NOINLINE
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// <rdar://problem/24738102&24743140> reserving non barrier width
	// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
	// width equivalent), so we have to check that this thread hasn't
	// enqueued anything ahead of this call or we can break ordering
	if (dq->dq_items_tail == NULL &&
			!_dispatch_object_is_waiter(dou) &&
			!_dispatch_object_is_barrier(dou) &&
			_dispatch_queue_try_acquire_async(dq)) {
		return _dispatch_continuation_redirect_push(dq, dou, qos);
	}

	_dispatch_lane_push(dq, dou, qos);
}

👇

  • Looking at _dispatch_continuation_redirect_push:

    • it calls dx_push again, restarting the dx_push flow — but this time the dq argument is dl->do_targetq, the queue's target queue, which for a custom queue is ultimately a root (global) queue
DISPATCH_NOINLINE
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	if (likely(!_dispatch_object_is_redirection(dou))) {
		dou._dc = _dispatch_async_redirect_wrap(dl, dou);
	} else if (!dou._dc->dc_ctxt) {
		// find first queue in descending target queue order that has
		// an autorelease frequency set, and use that as the frequency for
		// this continuation.
		dou._dc->dc_ctxt = (void *)
		(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
	}

	dispatch_queue_t dq = dl->do_targetq;
	if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
	dx_push(dq, dou, qos);
}

2.2.2 dispatch_queue_create

  • dispatch_queue_create forwards to _dispatch_lane_create_with_target:
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

👇

  • Looking at _dispatch_lane_create_with_target, tq is a root queue,

    • so the push ultimately lands in _dispatch_root_queue_push

👇

  • Entering _dispatch_root_queue_push, which ultimately calls _dispatch_root_queue_push_inline:
DISPATCH_NOINLINE
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
#if DISPATCH_USE_KEVENT_WORKQUEUE
	dispatch_deferred_items_t ddi = _dispatch_deferred_items_get();
	if (unlikely(ddi && ddi->ddi_can_stash)) {
		dispatch_object_t old_dou = ddi->ddi_stashed_dou;
		dispatch_priority_t rq_overcommit;
		rq_overcommit = rq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;

		if (likely(!old_dou._do || rq_overcommit)) {
			dispatch_queue_global_t old_rq = ddi->ddi_stashed_rq;
			dispatch_qos_t old_qos = ddi->ddi_stashed_qos;
			ddi->ddi_stashed_rq = rq;
			ddi->ddi_stashed_dou = dou;
			ddi->ddi_stashed_qos = qos;
			_dispatch_debug("deferring item %p, rq %p, qos %d",
					dou._do, rq, qos);
			if (rq_overcommit) {
				ddi->ddi_can_stash = false;
			}
			if (likely(!old_dou._do)) {
				return;
			}
			// push the previously stashed item
			qos = old_qos;
			rq = old_rq;
			dou = old_dou;
		}
	}
#endif
#if HAVE_PTHREAD_WORKQUEUE_QOS
	if (_dispatch_root_queue_push_needs_override(rq, qos)) {
		return _dispatch_root_queue_push_override(rq, dou, qos);
	}
#else
	(void)qos;
#endif
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

👇

  • Entering _dispatch_root_queue_push_inline, which calls _dispatch_root_queue_poke:
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
		dispatch_object_t _head, dispatch_object_t _tail, int n)
{
	struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
	if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
		return _dispatch_root_queue_poke(dq, n, 0);
	}
}

👇

  • Entering _dispatch_root_queue_poke, which calls _dispatch_root_queue_poke_slow:
DISPATCH_NOINLINE
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
	if (!_dispatch_queue_class_probe(dq)) {
		return;
	}
#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL
	if (likely(dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE))
#endif
	{
		if (unlikely(!os_atomic_cmpxchg2o(dq, dgq_pending, 0, n, relaxed))) {
			_dispatch_root_queue_debug("worker thread request still pending "
					"for global queue: %p", dq);
			return;
		}
	}
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
	return _dispatch_root_queue_poke_slow(dq, n, floor);
}

👇

  • Entering _dispatch_root_queue_poke_slow, we find

    • _dispatch_root_queues_init
    • and the actual thread creation (a workqueue request or pthread_create)

👇

  • _dispatch_root_queues_init turns out to be exactly a dispatch_once_f singleton call:
static inline void
_dispatch_root_queues_init(void)
{
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}

2.3 Summary

  • Taking a concurrent queue as the example, the dispatch_async flow is:

dispatch_async --> _dispatch_continuation_async --> dx_push --> dq_push --> concurrent queue: _dispatch_lane_concurrent_push --> _dispatch_continuation_redirect_push --> dx_push (now on the global/root queue) --> _dispatch_root_queue_push --> _dispatch_root_queue_push_inline --> _dispatch_root_queue_poke --> _dispatch_root_queue_poke_slow --> pthread_create

3. How GCD Calls Your Function

3.1 Setup

  • Write a simple GCD dispatch and set a breakpoint inside the block.

  • Type bt in the console to inspect the call stack.

3.2 Source Walkthrough

  • The printed backtrace contains a call to _dispatch_worker_thread2. When is that installed? A global search for its callers shows it is registered in _dispatch_root_queues_init_once.

👇

  • Searching for callers of _dispatch_root_queues_init_once shows it is invoked via dispatch_once_f inside _dispatch_root_queues_init — i.e. once, before worker threads are spun up:
static inline void
_dispatch_root_queues_init(void)
{
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}

👇

  • In the _dispatch_worker_thread2 source, the real work happens in _dispatch_root_queue_drain:
static void
_dispatch_worker_thread2(pthread_priority_t pp)
{
	bool overcommit = pp & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG;
	dispatch_queue_global_t dq;

	pp &= _PTHREAD_PRIORITY_OVERCOMMIT_FLAG | ~_PTHREAD_PRIORITY_FLAGS_MASK;
	_dispatch_thread_setspecific(dispatch_priority_key, (void *)(uintptr_t)pp);
	dq = _dispatch_get_root_queue(_dispatch_qos_from_pp(pp), overcommit);

	_dispatch_introspection_thread_add();
	_dispatch_trace_runtime_event(worker_unpark, dq, 0);

	int pending = os_atomic_dec2o(dq, dgq_pending, relaxed);
	dispatch_assert(pending >= 0);
	_dispatch_root_queue_drain(dq, dq->dq_priority,
			DISPATCH_INVOKE_WORKER_DRAIN | DISPATCH_INVOKE_REDIRECTING_DRAIN);
	_dispatch_voucher_debug("root queue clear", NULL);
	_dispatch_reset_voucher(NULL, DISPATCH_THREAD_PARK);
	_dispatch_trace_runtime_event(worker_park, NULL, 0);
}

👇

  • In _dispatch_root_queue_drain, each dequeued item goes through _dispatch_continuation_pop_inline:
static void
_dispatch_root_queue_drain(dispatch_queue_global_t dq,
		dispatch_priority_t pri, dispatch_invoke_flags_t flags)
{
#if DISPATCH_DEBUG
	dispatch_queue_t cq;
	if (unlikely(cq = _dispatch_queue_get_current())) {
		DISPATCH_INTERNAL_CRASH(cq, "Premature thread recycling");
	}
#endif
	_dispatch_queue_set_current(dq);
	_dispatch_init_basepri(pri);
	_dispatch_adopt_wlh_anon();

	struct dispatch_object_s *item;
	bool reset = false;
	dispatch_invoke_context_s dic = { };
#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_push(&dic);
#endif // DISPATCH_COCOA_COMPAT
	_dispatch_queue_drain_init_narrowing_check_deadline(&dic, pri);
	_dispatch_perfmon_start();
	while (likely(item = _dispatch_root_queue_drain_one(dq))) {
		if (reset) _dispatch_wqthread_override_reset();
		_dispatch_continuation_pop_inline(item, &dic, flags, dq);
		reset = _dispatch_reset_basepri_override();
		if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
			break;
		}
	}

	// overcommit or not. worker thread
	if (pri & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
		_dispatch_perfmon_end(perfmon_thread_worker_oc);
	} else {
		_dispatch_perfmon_end(perfmon_thread_worker_non_oc);
	}

#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_pop(&dic);
#endif // DISPATCH_COCOA_COMPAT
	_dispatch_reset_wlh();
	_dispatch_clear_basepri();
	_dispatch_queue_set_current(NULL);
}

👇

  • Looking at _dispatch_continuation_pop_inline:

    • objects with a vtable go through dx_invoke; like dx_push, dx_invoke is a macro dispatched through the vtable, so check the macro definition or the branch below
    • plain continuations enter _dispatch_continuation_invoke_inline
static inline void
_dispatch_continuation_pop_inline(dispatch_object_t dou,
		dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
		dispatch_queue_class_t dqu)
{
	dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
			_dispatch_get_pthread_root_queue_observer_hooks();
	if (observer_hooks) observer_hooks->queue_will_execute(dqu._dq);
	flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
	if (_dispatch_object_has_vtable(dou)) {
		dx_invoke(dou._dq, dic, flags);
	} else {
		_dispatch_continuation_invoke_inline(dou, flags, dqu);
	}
	if (observer_hooks) observer_hooks->queue_did_execute(dqu._dq);
}

👇

  • In _dispatch_continuation_invoke_inline we find _dispatch_client_callout — this looks very much like the actual function call:
static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou,
		dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
{
	dispatch_continuation_t dc = dou._dc, dc1;
	dispatch_invoke_with_autoreleasepool(flags, {
		uintptr_t dc_flags = dc->dc_flags;
		// Add the item back to the cache before calling the function. This
		// allows the 'hot' continuation to be used for a quick callback.
		//
		// The ccache version is per-thread.
		// Therefore, the object has not been reused yet.
		// This generates better assembly.
		_dispatch_continuation_voucher_adopt(dc, dc_flags);
		if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
			_dispatch_trace_item_pop(dqu, dou);
		}
		if (dc_flags & DC_FLAG_CONSUME) {
			dc1 = _dispatch_continuation_free_cacheonly(dc);
		} else {
			dc1 = NULL;
		}
		if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
			_dispatch_continuation_with_group_invoke(dc);
		} else {
		//this looks exactly like a function call - let's check
			_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
			_dispatch_trace_item_complete(dc);
		}
		if (unlikely(dc1)) {
			_dispatch_continuation_free_to_cache_limit(dc1);
		}
	});
	_dispatch_perfmon_workitem_inc();
}

👇

  • Looking at _dispatch_client_callout: f(ctxt) — this is the function call itself!
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	return f(ctxt);
}

4. dispatch_sync

4.1 Setup

dispatch_sync(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"123");
});

4.2 Source Analysis

  • Looking at the dispatch_sync source, it enters _dispatch_sync_f:
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

👇

  • _dispatch_sync_f simply forwards to _dispatch_sync_f_inline:
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

👇

  • Looking at _dispatch_sync_f_inline:

    • part1: a serial queue (dq_width == 1) takes the barrier-sync path
    • part2: the deadlock case lives down the slow path
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	//part1: a serial queue (width 1) goes through barrier sync
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

part1: _dispatch_barrier_sync_f

  • _dispatch_barrier_sync_f forwards to _dispatch_barrier_sync_f_inline:
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

👇

  • Looking at _dispatch_barrier_sync_f_inline:

    • _dispatch_queue_try_acquire_barrier_sync tries to take the barrier lock; if the queue cannot be acquired (it is busy or suspended), fall into _dispatch_sync_f_slow
    • _dispatch_introspection_sync_begin does the preparatory bookkeeping
    • _dispatch_lane_barrier_sync_invoke_and_complete performs the actual call
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// The more correct thing to do would be to merge the qos of the thread
	// that just acquired the barrier lock into the queue state.
	//
	// However this is too expensive for the fast path, so skip doing it.
	// The chosen tradeoff is that if an enqueue on a lower priority thread
	// contends with this fast path, this thread may receive a useless override.
	//
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}

	if (unlikely(dl->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func,
				DC_FLAG_BARRIER | dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

👇

  • Looking at _dispatch_lane_barrier_sync_invoke_and_complete:

    • the submitted function is executed first
    • then the thread's lock state is flipped from locked back to unlocked

Until then the caller is blocked: before a synchronously submitted function runs, the calling thread is in a blocked state, and it is released only once the function has finished executing.

👇

  • _dispatch_queue_try_acquire_barrier_sync forwards to _dispatch_queue_try_acquire_barrier_sync_and_suspend:
static inline bool
_dispatch_queue_try_acquire_barrier_sync(dispatch_queue_class_t dq, uint32_t tid)
{
	return _dispatch_queue_try_acquire_barrier_sync_and_suspend(dq._dl, tid, 0);
}

👇

  • Looking at _dispatch_queue_try_acquire_barrier_sync_and_suspend:

    • it returns via os_atomic_rmw_loop2o, which checks whether the queue state is still the idle init value; if the queue is already locked or suspended, the loop gives up and returns false
static inline bool
_dispatch_queue_try_acquire_barrier_sync_and_suspend(dispatch_lane_t dq,
		uint32_t tid, uint64_t suspend_count)
{
	uint64_t init  = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
	uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
			_dispatch_lock_value_from_tid(tid) |
			(suspend_count * DISPATCH_QUEUE_SUSPEND_INTERVAL);
	uint64_t old_state, new_state;

	return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
		uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
		if (old_state != (init | role)) {
			os_atomic_rmw_loop_give_up(break);
		}
		new_state = value | role;
	});
}

👇

  • So if acquiring the barrier is given up, where does the deadlock come from? _dispatch_sync_f_slow leads into __DISPATCH_WAIT_FOR_QUEUE__.

👇

  • __DISPATCH_WAIT_FOR_QUEUE__ calls _dq_state_drain_locked_by,

    • passing the state of the queue being waited on and the id of the waiting thread (dsc->dsc_waiter)
👇

  • _dq_state_drain_locked_by forwards to _dispatch_lock_is_locked_by:
static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
	return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}

👇

  • Looking at _dispatch_lock_is_locked_by:

    • lock_value is the queue's drain-lock state (the owner's thread id sits in its low bits); tid is the waiting thread's id
    • if the queue's current owner and the waiting thread are the same, it returns true — the deadlock condition
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// equivalent to _dispatch_lock_owner(lock_value) == tid
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

  • When if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) in __DISPATCH_WAIT_FOR_QUEUE__ evaluates to true, the crash is raised:
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}
  • Summary
  1. A sync task holds the queue while it executes: the calling thread is blocked before the submitted function runs, and it is released only when the function finishes.
  2. If, while the queue is held this way, another sync task on the same (serial) queue is submitted from the owning thread, a deadlock occurs — the thread is waiting on itself.

5. Barrier Functions

Within a queue, a barrier does what the name suggests: it waits until every task submitted before it has finished, and no task submitted after it starts until the barrier block itself completes. Barrier functions must be used with a DISPATCH_QUEUE_CONCURRENT queue created by dispatch_queue_create.

Using barrier functions

  • dispatch_barrier_async: the caller does not wait for the barrier to execute and continues on immediately (the ordering is kept inside the queue)

    • create a concurrent queue; asynchronously print 123 and 456, with an async barrier between the two groups of prints
    • also log from the main thread
    • in the output, the main-thread log appears first, and 456 is printed only after 123

  • dispatch_barrier_sync: the caller waits until the barrier block has executed before the code after it runs

    • change dispatch_barrier_async in the example above to dispatch_barrier_sync
    • in the output, 123 prints before anything submitted after the barrier; because those later prints are still asynchronous and time-consuming, the main-thread log appears before them

  • Using a barrier function on the main queue crashes

    • get the main queue
    • asynchronously fetch an image and use a barrier to add it to an array (the barrier acting as a lock here)
    • the barrier tries to fence off the main queue, which always has other system work in flight that is now blocked, so the program crashes
  • Summary

Barrier functions only take effect on custom concurrent queues! (On a global queue a barrier degrades to an ordinary dispatch; on a serial queue every task is effectively a barrier already.)

Source analysis

  • Looking at dispatch_barrier_async, it runs _dispatch_continuation_init and _dispatch_continuation_async — exactly the same path as dispatch_async, except that dc_flags additionally carries DC_FLAG_BARRIER. So barrier functions share the ordinary GCD dispatch machinery underneath:
void
dispatch_barrier_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_BARRIER;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc_flags);
}

6. Semaphores: dispatch_semaphore_t

A semaphore controls task scheduling through a counter: you create it with a fixed value, and a task may proceed only while the counter permits — much like the empty spots in a parking lot: if a spot is free you may park; if the lot is full, latecomers wait. Three functions matter:

dispatch_semaphore_create
dispatch_semaphore_wait
dispatch_semaphore_signal
  • 1 dispatch_semaphore_create(long value); creates a dispatch_semaphore_t with the given initial value (its usage pattern is similar to GCD groups)
  • 2 dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout); waits on the semaphore: it decrements the value by 1, and if the result is negative (the value was 0 before the call) the function blocks the current thread until a signal raises the value, then returns
  • 3 dispatch_semaphore_signal(dispatch_semaphore_t dsema);
    sends a signal: it increments the semaphore's value by 1

wait and signal normally appear in pairs. When tasks run concurrently, each task calls dispatch_semaphore_wait before it starts, blocking if necessary; when a previous task finishes and calls dispatch_semaphore_signal (value + 1), a blocked wait wakes up, decrements the value again, and lets the next task run, which on completion signals again for the task after it. Task by task, the semaphore turns concurrent submission into controlled, synchronized execution.

6.1 Using a Semaphore

  • Create a dispatch_semaphore_t with dispatch_semaphore_create(2)
  • Bracket each task with dispatch_semaphore_wait and dispatch_semaphore_signal
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    dispatch_semaphore_t sem = dispatch_semaphore_create(2);
    
    //task 1
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        NSLog(@"running task 1");
        sleep(1);
        NSLog(@"task 1 done");
        dispatch_semaphore_signal(sem);
    });
    
    //task 2
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        NSLog(@"running task 2");
        sleep(1);
        NSLog(@"task 2 done");
        dispatch_semaphore_signal(sem);
    });
    
    //task 3
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        NSLog(@"running task 3");
        sleep(1);
        NSLog(@"task 3 done");
        dispatch_semaphore_signal(sem);
    });

6.2 Source Analysis

The source of dispatch_semaphore_signal

  • Looking at dispatch_semaphore_signal:

    • os_atomic_inc2o increments the value by one
    • if the new value is greater than 0, return 0: nothing else to do
    • otherwise at least one thread is parked in dispatch_semaphore_wait, so _dispatch_semaphore_signal_slow wakes it
long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
	long value = os_atomic_inc2o(dsema, dsema_value, release);
	if (likely(value > 0)) {
		return 0;
	}
	if (unlikely(value == LONG_MIN)) {
		DISPATCH_CLIENT_CRASH(value,
				"Unbalanced call to dispatch_semaphore_signal()");
	}
	return _dispatch_semaphore_signal_slow(dsema);
}

👇

  • os_atomic_inc2o turns out to be a macro that expands to os_atomic_add2o:
// p => dsema, f => dsema_value, m => release
#define os_atomic_inc2o(p, f, m) \
		os_atomic_add2o(p, f, 1, m)

👇

  • os_atomic_add2o expands to os_atomic_add:
// p => dsema, f => dsema_value, v => 1, m => release
#define os_atomic_add2o(p, f, v, m) \
		os_atomic_add(&(p)->f, (v), m)

👇

  • os_atomic_add expands to _os_atomic_c11_op:
// p => &(dsema)->dsema_value, v => 1, m => release
#define os_atomic_add(p, v, m) \
		_os_atomic_c11_op((p), (v), m, add, +)

👇

  • Looking at _os_atomic_c11_op:

    • _r = atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, memory_order_##m) expands here to
      _r = atomic_fetch_add_explicit(_os_atomic_c11_atomic(&(dsema)->dsema_value), 1, memory_order_release)
    • atomic_fetch_add_explicit is a C11 atomic add; the macro's result (_r op _v) is the updated value
#define _os_atomic_c11_op(p, v, m, o, op) \
		({ _os_atomic_basetypeof(p) _v = (v), _r = \
		atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
		memory_order_##m); (__typeof__(_r))(_r op _v); })
  • Summary

dispatch_semaphore_create initializes the semaphore's value
dispatch_semaphore_wait decrements the value — conceptually, taking the lock
dispatch_semaphore_signal increments the value — conceptually, releasing the lock

The source of dispatch_semaphore_wait

  • Looking at dispatch_semaphore_wait:

    • os_atomic_dec2o decrements the value by one
    • if the new value is greater than or equal to 0, return 0: the caller may proceed
    • otherwise the caller enters the long-wait (slow) path
long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	long value = os_atomic_dec2o(dsema, dsema_value, acquire);
	if (likely(value >= 0)) {
		return 0;
	}
	return _dispatch_semaphore_wait_slow(dsema, timeout);
}

👇

  • os_atomic_dec2o expands to os_atomic_sub2o:
// p => dsema, f => dsema_value, m => acquire
#define os_atomic_dec2o(p, f, m) \
		os_atomic_sub2o(p, f, 1, m)

👇

  • os_atomic_sub2o expands to os_atomic_sub:
// p => dsema, f => dsema_value, v => 1, m => acquire
#define os_atomic_sub2o(p, f, v, m) \
		os_atomic_sub(&(p)->f, (v), m)

👇

  • os_atomic_sub expands to _os_atomic_c11_op:
// p => &(dsema)->dsema_value, v => 1, m => acquire
#define os_atomic_sub(p, v, m) \
		_os_atomic_c11_op((p), (v), m, sub, -)

👇

  • Looking at _os_atomic_c11_op again:

    • _r = atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, memory_order_##m) expands here to
    • _r = atomic_fetch_sub_explicit(_os_atomic_c11_atomic(&(dsema)->dsema_value), 1, memory_order_acquire)
    • atomic_fetch_sub_explicit is a C11 atomic subtract

#define _os_atomic_c11_op(p, v, m, o, op) \
		({ _os_atomic_basetypeof(p) _v = (v), _r = \
		atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
		memory_order_##m); (__typeof__(_r))(_r op _v); })
  • Summary

dispatch_semaphore_wait is a decrement; if the result is negative,
the caller enters the long-wait state until a matching signal arrives.

7. Dispatch Groups

The dispatch_group functions share their underlying idea with semaphores: a value is maintained internally, entering and leaving the group adjusts it, and when it crosses back over the zero threshold the follow-up work (the notify handlers) runs.

  • The dispatch group API:
//create
dispatch_group_create();
//enter / leave
dispatch_group_enter(group);
dispatch_group_leave(group);

//async on the group
dispatch_group_async(group, queue, ^{})

//notify
dispatch_group_notify(group, dispatch_get_main_queue(), ^{});

7.1 Using Dispatch Groups

Case 1: dispatch_group_enter and dispatch_group_leave appear in matched pairs before dispatch_group_notify

  • put two asynchronous, time-consuming prints into the group
  • add dispatch_group_notify after the two async blocks
  • also log once from the main thread
 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"123");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"456");
        dispatch_group_leave(group);
    });
    
    //notify
    dispatch_group_notify(group, queue, ^{
        NSLog(@"async tasks finished");
    });
    
    NSLog(@"main thread work");
  • Output: the main-thread log prints first, and the notify block fires only after both async tasks finish.

Case 2: an extra, unmatched dispatch_group_enter

 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"123");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"456");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    
    //notify
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"async tasks finished");
    });
    
    NSLog(@"main thread work");
    
  • Output: a long wait — the unmatched enter never lets the count return to zero, so dispatch_group_notify never executes.

Case 3: an extra, unmatched dispatch_group_leave

 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"123");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"456");
        dispatch_group_leave(group);
    });
    
    dispatch_group_leave(group);
    
    //notify
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"async tasks finished");
    });
    
    NSLog(@"main thread work");
  • Output: the program crashes before dispatch_group_notify can run — leaving a group more times than it was entered is an error.

Case 4: dispatch_group_notify placed first

 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    //notify
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"async tasks finished");
    });
    
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"123");
        dispatch_group_leave(group);
    });
    
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"456");
        dispatch_group_leave(group);
    });
    
    
    
    
    NSLog(@"main thread work");
  • Output: as long as every dispatch_group_enter is eventually balanced by a dispatch_group_leave, dispatch_group_notify fires, even though it was registered first.

Case 5: using dispatch_group_async

  • add a dispatch_group_async block
 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"123");
        dispatch_group_leave(group);
    });
    
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"456");
        dispatch_group_leave(group);
    });
    
    
    dispatch_group_async(group, queue, ^{
        NSLog(@"789");
    });
    
    // notify
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"async work finished");
    });
    
    NSLog(@"main thread work");
  • Result: dispatch_group_async internally pairs dispatch_group_enter with dispatch_group_leave, so the notify block still fires only after all three blocks finish.

7.2 Source analysis

1 dispatch_group_create();

  • Inspect dispatch_group_create; it simply forwards to _dispatch_group_create_with_count:
dispatch_group_t
dispatch_group_create(void)
{
	return _dispatch_group_create_with_count(0);
}

👇

  • Inspect _dispatch_group_create_with_count:

    • It allocates a dispatch_group_t object,
    • assigns its do_targetq,
    • and returns the dispatch_group_t object.
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
	dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
			sizeof(struct dispatch_group_s));
	dg->do_next = DISPATCH_OBJECT_LISTLESS;
	dg->do_targetq = _dispatch_get_default_queue(false);
	if (n) {
		os_atomic_store2o(dg, dg_bits,
				(uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
		os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
	}
	return dg;
}

2 dispatch_group_leave(group)

  • Inspect dispatch_group_leave:

    • It adds one DISPATCH_GROUP_VALUE_INTERVAL to dg_state (logically a +1).
    • If this leave balances the last outstanding enter, it calls _dispatch_group_wake, which wakes the pending dispatch_group_notify blocks.
    • If the old value was 0 (a leave with no matching enter), it crashes with "Unbalanced call to dispatch_group_leave()".
void
dispatch_group_leave(dispatch_group_t dg)
{
	// The value is incremented on a 64bits wide atomic so that the carry for
	// the -1 -> 0 transition increments the generation atomically.
	uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
			DISPATCH_GROUP_VALUE_INTERVAL, release);
	uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

	if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
		old_state += DISPATCH_GROUP_VALUE_INTERVAL;
		do {
			new_state = old_state;
			if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
				new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
				new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
			} else {
				// If the group was entered again since the atomic_add above,
				// we can't clear the waiters bit anymore as we don't know for
				// which generation the waiters are for
				new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
			}
			if (old_state == new_state) break;
		} while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
				old_state, new_state, &old_state, relaxed)));
		return _dispatch_group_wake(dg, old_state, true);
	}

	if (unlikely(old_value == 0)) {
		DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
				"Unbalanced call to dispatch_group_leave()");
	}
}

3 dispatch_group_enter(group);

  • Inspect dispatch_group_enter:

    • It subtracts one DISPATCH_GROUP_VALUE_INTERVAL from dg_bits (logically a -1).
    • If the old value was 0 (the first enter), it retains the group with _dispatch_retain.
    • If the old value equals the DISPATCH_GROUP_VALUE_MAX limit, enter has been nested too many times and it crashes.
void
dispatch_group_enter(dispatch_group_t dg)
{
	// The value is decremented on a 32bits wide atomic so that the carry
	// for the 0 -> -1 transition is not propagated to the upper 32bits.
	// decrement by one interval (logically -1)
	uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
			DISPATCH_GROUP_VALUE_INTERVAL, acquire);
	uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
	// if the old value was 0 (first enter), retain the group
	if (unlikely(old_value == 0)) {
		_dispatch_retain(dg); // <rdar://problem/22318411>
	}
	if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
		DISPATCH_CLIENT_CRASH(old_bits,
				"Too many nested calls to dispatch_group_enter()");
	}
}

4 dispatch_group_notify(group, dispatch_get_main_queue(),^{} );

  • Inspect _dispatch_group_notify:

    • The group's dg state is read as the lower-level os_atomic state.
    • The notify continuation is pushed onto the group's notify list; if old_state is 0 (every enter already balanced by a leave), it calls _dispatch_group_wake right away.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dsn)
{
	uint64_t old_state, new_state;
	dispatch_continuation_t prev;

	dsn->dc_data = dq;
	_dispatch_retain(dq);

	prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
	if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
	os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
	if (os_mpsc_push_was_empty(prev)) {
		os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
			new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
			if ((uint32_t)old_state == 0) {
				os_atomic_rmw_loop_give_up({
					return _dispatch_group_wake(dg, new_state, false);
				});
			}
		});
	}
}

👇

  • Inspect _dispatch_group_wake:

    • In the do-while loop, each pending notify continuation dc is popped and submitted with _dispatch_continuation_async. That call should look familiar: it is the same path an ordinary dispatch_async takes.
    • Afterwards each notify's target queue and the group dg are released.
static void
_dispatch_group_wake(dispatch_group_t dg, uint64_t dg_state, bool needs_release)
{
	uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>

	if (dg_state & DISPATCH_GROUP_HAS_NOTIFS) {
		dispatch_continuation_t dc, next_dc, tail;

		// Snapshot before anything is notified/woken <rdar://problem/8554546>
		dc = os_mpsc_capture_snapshot(os_mpsc(dg, dg_notify), &tail);
		do {
			dispatch_queue_t dsn_queue = (dispatch_queue_t)dc->dc_data;
			next_dc = os_mpsc_pop_snapshot_head(dc, tail, do_next);
			_dispatch_continuation_async(dsn_queue, dc,
					_dispatch_qos_from_pp(dc->dc_priority), dc->dc_flags);
			_dispatch_release(dsn_queue);
		} while ((dc = next_dc));

		refs++;
	}

	if (dg_state & DISPATCH_GROUP_HAS_WAITERS) {
		_dispatch_wake_by_address(&dg->dg_gen);
	}

	if (refs) _dispatch_release_n(dg, refs);
}

5 dispatch_group_async(group, queue, ^{})

  • Inspect dispatch_group_async; it packages the block into a continuation and calls _dispatch_continuation_group_async:
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
	_dispatch_continuation_group_async(dg, dq, dc, qos);
}

👇

  • Inspect _dispatch_continuation_group_async: it first calls dispatch_group_enter (the -1), then performs the ordinary async dispatch of the block:
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dc, dispatch_qos_t qos)
{
	dispatch_group_enter(dg);
	dc->dc_data = dg;
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

👇

  • Question: we found the -1 (enter); where does the matching +1 (leave) happen? Set a breakpoint inside the block and type bt in the console: the backtrace shows _dispatch_client_callout is invoked just before the block runs.

👇

  • Follow _dispatch_client_callout into _dispatch_continuation_with_group_invoke, where the block is actually invoked:

    • If the continuation's type is DISPATCH_GROUP_TYPE (a group dispatch), it invokes the block and then calls dispatch_group_leave (the +1) on the group stored in dc_data.
