A brief look at struct percpu_ref

I searched around the web and found very little material on struct percpu_ref, so I decided to dig into it myself.

First, going by the introduction linked here (an LWN article), percpu_ref is similar to kref: both are typically used as reference counters. So why invent yet another reference counter? Is something wrong with atomic_t?
The original article puts it this way:
If references are added and removed frequently over an object’s lifetime, though, that atomic_t variable can become a performance bottleneck.
These functions operate on a per-CPU array of reference counters, so they will not cause cache-line bouncing across the system
So the main point is to eliminate the cache-line bouncing caused by many CPUs hammering a single shared reference counter.
Let's start by comparing the two structures.

struct percpu_ref {
	atomic_long_t		count;
	/*
	 * The low bit of the pointer indicates whether the ref is in percpu
	 * mode; if set, then get/put will manipulate the atomic_t.
	 */
	unsigned long		percpu_count_ptr;
	percpu_ref_func_t	*release;
	percpu_ref_func_t	*confirm_switch;
	bool			force_atomic:1;
	struct rcu_head		rcu;
};

struct kref {
	refcount_t refcount;
};
typedef struct refcount_struct {
	atomic_t refs;
} refcount_t;

Clearly, kref uses a single atomic counter (a refcount_t wrapping an atomic_t), while percpu_ref uses two fields: an atomic_long_t (count) and a percpu variable (percpu_count_ptr).
From the layout alone we can roughly guess the design: percpu_ref wants to track a per-CPU reference count through the percpu variable behind percpu_count_ptr, and the object can be released once the counts across all CPUs sum to zero. Note that an individual CPU's count may well be negative: if a task takes a reference on one CPU, migrates, and drops it on another, the first CPU's slot ends up at +1 and the second's at -1, yet the sum is still 0. Only the sum is meaningful.
But wouldn't that design be rather silly? If every put had to sum the percpu variable across all CPUs, we would be right back to the concurrency and cache-line bouncing problems we set out to avoid. Let's keep reading the introduction:
There is one potential problem, though: percpu_ref_put() must determine whether the reference count has dropped to zero and call the release() function if so. Summing an array of per-CPU counters would be expensive, to the point that it would defeat the whole purpose. This problem is avoided with a simple observation: as long as the initial reference is held, the count cannot be zero, so percpu_ref_put() does not bother to check.
To summarize: as long as the initial reference is still held, the total count cannot be zero, so percpu_ref_put() doesn't need to check for zero at all.
OK then, how do you actually use this thing?
First, initialization:

int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
		    unsigned int flags, gfp_t gfp);

During the object's lifetime, use

void percpu_ref_get(struct percpu_ref *ref);
void percpu_ref_put(struct percpu_ref *ref);

to take and drop references.
Finally, the article notes:
The implication is that the thread which calls percpu_ref_init() must indicate when it is dropping its reference; that is done with a call to:

 void percpu_ref_kill(struct percpu_ref *ref);

After this call, the reference count degrades to the usual model with a single shared atomic_t counter; that counter will be decremented and checked whenever a reference is released
In other words, the code that initializes the ref must decide when the object should go away; when that moment comes, it calls percpu_ref_kill() to announce that the ref is being torn down. After percpu_ref_kill(), the percpu_ref degrades to the kref model, a single shared atomic counter, and once that counter drops to zero the memory can be released.
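
To make the whole lifecycle concrete, here is a minimal usage sketch. my_object and all of its functions are hypothetical names for illustration, not code from any real driver; the percpu_ref calls themselves are the kernel API described above.

#include <linux/percpu-refcount.h>
#include <linux/slab.h>

struct my_object {
	struct percpu_ref ref;
	/* ... payload ... */
};

/* called once the refcount hits 0; may run from RCU callback context, must not sleep */
static void my_object_release(struct percpu_ref *ref)
{
	struct my_object *obj = container_of(ref, struct my_object, ref);

	percpu_ref_exit(&obj->ref);	/* free the percpu counter */
	kfree(obj);
}

static struct my_object *my_object_create(void)
{
	struct my_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return NULL;
	/* flags == 0: start in percpu mode, holding the initial reference */
	if (percpu_ref_init(&obj->ref, my_object_release, 0, GFP_KERNEL)) {
		kfree(obj);
		return NULL;
	}
	return obj;
}

/* fast path: get/put only touch the current CPU's counter */
static void my_object_do_work(struct my_object *obj)
{
	percpu_ref_get(&obj->ref);
	/* ... use obj ... */
	percpu_ref_put(&obj->ref);
}

/* shutdown: drop the initial ref; release() runs once all other refs are gone */
static void my_object_destroy(struct my_object *obj)
{
	percpu_ref_kill(&obj->ref);
}

Calling percpu_ref_exit() from the release callback is one common pattern for freeing the percpu counter; where it belongs in your own code depends on the object's lifecycle.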

Now let's look at the implementation.
Start with the init function, considering only the common usage. It does three things: allocates the percpu counter, sets the atomic_long_t count to start_count, and records the release function.

/**
 * percpu_ref_init - initialize a percpu refcount
 * @ref: percpu_ref to initialize
 * @release: function which will be called when refcount hits 0
 * @flags: PERCPU_REF_INIT_* flags
 * @gfp: allocation mask to use
 *
 * Initializes @ref.  If @flags is zero, @ref starts in percpu mode with a
 * refcount of 1; analogous to atomic_long_set(ref, 1).  See the
 * definitions of PERCPU_REF_INIT_* flags for flag behaviors.
 *
 * Note that @release must not sleep - it may potentially be called from RCU
 * callback context by percpu_ref_kill().
 */
int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
		    unsigned int flags, gfp_t gfp)
{
	size_t align = max_t(size_t, 1 << __PERCPU_REF_FLAG_BITS,
			     __alignof__(unsigned long));
	unsigned long start_count = 0;

	ref->percpu_count_ptr = (unsigned long)
		__alloc_percpu_gfp(sizeof(unsigned long), align, gfp); // allocate the percpu counter
	if (!ref->percpu_count_ptr)
		return -ENOMEM;

	ref->force_atomic = flags & PERCPU_REF_INIT_ATOMIC;

	if (flags & (PERCPU_REF_INIT_ATOMIC | PERCPU_REF_INIT_DEAD))
		ref->percpu_count_ptr |= __PERCPU_REF_ATOMIC;
	else
		start_count += PERCPU_COUNT_BIAS;

	if (flags & PERCPU_REF_INIT_DEAD)
		ref->percpu_count_ptr |= __PERCPU_REF_DEAD;
	else
		start_count++;

	atomic_long_set(&ref->count, start_count); // set the atomic counter's initial value

	ref->release = release;
	ref->confirm_switch = NULL;
	return 0;
}
EXPORT_SYMBOL_GPL(percpu_ref_init);
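
For reference, here are the flag bits stashed in the low bits of percpu_count_ptr, the PERCPU_COUNT_BIAS constant added to start_count above, and the helper that masks the flags back off; these come from the kernel sources of the same vintage (include/linux/percpu-refcount.h and lib/percpu-refcount.c):

enum {
	__PERCPU_REF_ATOMIC	= 1LU << 0,	/* operating in atomic mode */
	__PERCPU_REF_DEAD	= 1LU << 1,	/* (being) killed */
	__PERCPU_REF_ATOMIC_DEAD = __PERCPU_REF_ATOMIC | __PERCPU_REF_DEAD,

	__PERCPU_REF_FLAG_BITS	= 2,
};

#define PERCPU_COUNT_BIAS	(1LU << (BITS_PER_LONG - 1))

/* mask the flag bits off to recover the real percpu pointer */
static unsigned long __percpu *percpu_count_ptr(struct percpu_ref *ref)
{
	return (unsigned long __percpu *)
		(ref->percpu_count_ptr & ~__PERCPU_REF_ATOMIC_DEAD);
}

The bias keeps ref->count comfortably above zero while the ref is still in percpu mode; it is subtracted back out when the percpu counts are folded in, as we'll see in percpu_ref_switch_to_atomic_rcu() below.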

Next, the get and put functions. In the common case they only modify the current CPU's slot of the percpu counter; they never touch the atomic_long_t, and therefore never get anywhere near the release function. In other words, during normal operation percpu_ref only writes a percpu variable, which is exactly what avoids the cache-line bouncing. Also note that the critical sections are wrapped in rcu_read_lock_sched()/rcu_read_unlock_sched(). This protects __ref_is_percpu(), which reads the percpu_count_ptr member: the mode-switching path uses call_rcu_sched() (as we'll see below), so it is guaranteed not to sum the percpu counters until every get/put that might still be using them has finished.

/**
 * percpu_ref_get_many - increment a percpu refcount
 * @ref: percpu_ref to get
 * @nr: number of references to get
 *
 * Analogous to atomic_long_add().
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
{
	unsigned long __percpu *percpu_count;

	rcu_read_lock_sched();

	if (__ref_is_percpu(ref, &percpu_count))
		this_cpu_add(*percpu_count, nr);
	else
		atomic_long_add(nr, &ref->count);

	rcu_read_unlock_sched();
}

/**
 * percpu_ref_get - increment a percpu refcount
 * @ref: percpu_ref to get
 *
 * Analogous to atomic_long_inc().
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_get(struct percpu_ref *ref)
{
	percpu_ref_get_many(ref, 1);
}

/**
 * percpu_ref_put_many - decrement a percpu refcount
 * @ref: percpu_ref to put
 * @nr: number of references to put
 *
 * Decrement the refcount, and if 0, call the release function (which was passed
 * to percpu_ref_init())
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
{
	unsigned long __percpu *percpu_count;

	rcu_read_lock_sched();

	if (__ref_is_percpu(ref, &percpu_count))
		this_cpu_sub(*percpu_count, nr);
	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
		ref->release(ref);

	rcu_read_unlock_sched();
}

/**
 * percpu_ref_put - decrement a percpu refcount
 * @ref: percpu_ref to put
 *
 * Decrement the refcount, and if 0, call the release function (which was passed
 * to percpu_ref_init())
 *
 * This function is safe to call as long as @ref is between init and exit.
 */
static inline void percpu_ref_put(struct percpu_ref *ref)
{
	percpu_ref_put_many(ref, 1);
}
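
The branch between the two modes is taken by __ref_is_percpu(), which reads percpu_count_ptr once and tests its flag bits. It looks roughly like this in include/linux/percpu-refcount.h (the long memory-ordering comment there is abridged here):

static inline bool __ref_is_percpu(struct percpu_ref *ref,
					  unsigned long __percpu **percpu_countp)
{
	unsigned long percpu_ptr;

	/* a single racy load; the flag bits decide which mode we see */
	percpu_ptr = READ_ONCE(ref->percpu_count_ptr);

	/*
	 * Test ATOMIC and DEAD together: DEAD may become visible without
	 * ATOMIC if we race with percpu_ref_kill(), and DEAD implies
	 * ATOMIC anyway.
	 */
	if (unlikely(percpu_ptr & __PERCPU_REF_ATOMIC_DEAD))
		return false;

	*percpu_countp = (unsigned long __percpu *)percpu_ptr;
	return true;
}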

Now for the key function, percpu_ref_kill(). It sets the __PERCPU_REF_DEAD flag on the ref and then calls __percpu_ref_switch_mode(); seeing the DEAD flag, that function calls __percpu_ref_switch_to_atomic(), which demotes the percpu_ref to its atomic_t form.

/**
 * percpu_ref_kill - drop the initial ref
 * @ref: percpu_ref to kill
 *
 * Must be used to drop the initial ref on a percpu refcount; must be called
 * precisely once before shutdown.
 *
 * Switches @ref into atomic mode before gathering up the percpu counters
 * and dropping the initial ref.
 *
 * There are no implied RCU grace periods between kill and release.
 */
static inline void percpu_ref_kill(struct percpu_ref *ref)
{
	percpu_ref_kill_and_confirm(ref, NULL);
}
/**
 * percpu_ref_kill_and_confirm - drop the initial ref and schedule confirmation
 * @ref: percpu_ref to kill
 * @confirm_kill: optional confirmation callback
 *
 * Equivalent to percpu_ref_kill() but also schedules kill confirmation if
 * @confirm_kill is not NULL.  @confirm_kill, which may not block, will be
 * called after @ref is seen as dead from all CPUs at which point all
 * further invocations of percpu_ref_tryget_live() will fail.  See
 * percpu_ref_tryget_live() for details.
 *
 * This function normally doesn't block and can be called from any context
 * but it may block if @confirm_kill is specified and @ref is in the
 * process of switching to atomic mode by percpu_ref_switch_to_atomic().
 *
 * There are no implied RCU grace periods between kill and release.
 */
void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
				 percpu_ref_func_t *confirm_kill)
{
	unsigned long flags;

	spin_lock_irqsave(&percpu_ref_switch_lock, flags);

	WARN_ONCE(ref->percpu_count_ptr & __PERCPU_REF_DEAD,
		  "%s called more than once on %pf!", __func__, ref->release);

	ref->percpu_count_ptr |= __PERCPU_REF_DEAD;
	__percpu_ref_switch_mode(ref, confirm_kill);
	percpu_ref_put(ref);

	spin_unlock_irqrestore(&percpu_ref_switch_lock, flags);
}
EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);

static void __percpu_ref_switch_mode(struct percpu_ref *ref,
				     percpu_ref_func_t *confirm_switch)
{
	lockdep_assert_held(&percpu_ref_switch_lock);

	/*
	 * If the previous ATOMIC switching hasn't finished yet, wait for
	 * its completion.  If the caller ensures that ATOMIC switching
	 * isn't in progress, this function can be called from any context.
	 */
	wait_event_lock_irq(percpu_ref_switch_waitq, !ref->confirm_switch,
			    percpu_ref_switch_lock);

	if (ref->force_atomic || (ref->percpu_count_ptr & __PERCPU_REF_DEAD))
		__percpu_ref_switch_to_atomic(ref, confirm_switch);
	else
		__percpu_ref_switch_to_percpu(ref);
}

static void __percpu_ref_switch_to_atomic(struct percpu_ref *ref,
					  percpu_ref_func_t *confirm_switch)
{
	if (ref->percpu_count_ptr & __PERCPU_REF_ATOMIC) {
		if (confirm_switch)
			confirm_switch(ref);
		return;
	}

	/* switching from percpu to atomic */
	ref->percpu_count_ptr |= __PERCPU_REF_ATOMIC;

	/*
	 * Non-NULL ->confirm_switch is used to indicate that switching is
	 * in progress.  Use noop one if unspecified.
	 */
	ref->confirm_switch = confirm_switch ?: percpu_ref_noop_confirm_switch;

	percpu_ref_get(ref);	/* put after confirmation */
	call_rcu_sched(&ref->rcu, percpu_ref_switch_to_atomic_rcu); // this is the key step
}

The actual demotion is done in percpu_ref_switch_to_atomic_rcu(). It simply sums the reference counts of all CPUs and folds the sum into the ref's count member. Because it runs as a sched-RCU callback, every get/put that could still be touching the percpu counters is guaranteed to have finished by the time it runs. Once it has executed, look back at percpu_ref_put(): the atomic_t branch is now taken, and when the count drops to zero, release() is finally called.

static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
{
	struct percpu_ref *ref = container_of(rcu, struct percpu_ref, rcu);
	unsigned long __percpu *percpu_count = percpu_count_ptr(ref);
	unsigned long count = 0;
	int cpu;

	for_each_possible_cpu(cpu)
		count += *per_cpu_ptr(percpu_count, cpu);

	pr_debug("global %ld percpu %ld",
		 atomic_long_read(&ref->count), (long)count);

	/*
	 * It's crucial that we sum the percpu counters _before_ adding the sum
	 * to &ref->count; since gets could be happening on one cpu while puts
	 * happen on another, adding a single cpu's count could cause
	 * @ref->count to hit 0 before we've got a consistent value - but the
	 * sum of all the counts will be consistent and correct.
	 *
	 * Subtracting the bias value then has to happen _after_ adding count to
	 * &ref->count; we need the bias value to prevent &ref->count from
	 * reaching 0 before we add the percpu counts. But doing it at the same
	 * time is equivalent and saves us atomic operations:
	 */
	atomic_long_add((long)count - PERCPU_COUNT_BIAS, &ref->count);

	WARN_ONCE(atomic_long_read(&ref->count) <= 0,
		  "percpu ref (%pf) <= 0 (%ld) after switching to atomic",
		  ref->release, atomic_long_read(&ref->count));

	/* @ref is viewed as dead on all CPUs, send out switch confirmation */
	percpu_ref_call_confirm_rcu(rcu);
}
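
One loose end: the percpu_ref_get() in __percpu_ref_switch_to_atomic() that was annotated "put after confirmation". That extra reference pins the ref so that the initial reference dropped in percpu_ref_kill_and_confirm() cannot take the count to zero before the percpu counts have been folded in; it is released by percpu_ref_call_confirm_rcu() once the switch is confirmed. The two small helpers below are also from lib/percpu-refcount.c:

static void percpu_ref_noop_confirm_switch(struct percpu_ref *ref)
{
}

static void percpu_ref_call_confirm_rcu(struct rcu_head *rcu)
{
	struct percpu_ref *ref = container_of(rcu, struct percpu_ref, rcu);

	ref->confirm_switch(ref);
	ref->confirm_switch = NULL;
	wake_up_all(&percpu_ref_switch_waitq);

	/* drop ref from percpu_ref_switch_to_atomic() */
	percpu_ref_put(ref);
}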

Putting it all together: unlike kref, percpu_ref does some extra setup at init time and requires a percpu_ref_kill() call when the object is about to be released (with kref, get and put are all you need). What this buys you is that all the gets and puts in between only touch the current CPU's percpu counter, so they neither fight over the bus for a single atomic_t nor cause cache-line bouncing. When an object sees many get/put operations over its lifetime, percpu_ref pays off nicely.
