Linux kernel workqueues: cmwq

Observation and question

On a 5.15.64 kernel, I created a workqueue and queued a work item on it, and found that the work ran on a kworker thread that had been created at system boot; the kernel did not create a new thread on each CPU core for this workqueue.

After some digging: the kernel later reworked the workqueue implementation. The original MT wq created one worker thread per CPU core; as more and more kernel code used MT wq and the number of CPU cores kept growing, some systems could no longer fit within the default 32k PID space. The kernel therefore introduced cmwq. The relevant documentation is quoted below:

Concurrency Managed Workqueue (cmwq) — The Linux Kernel documentation

Why cmwq?

In the original wq implementation, a multi threaded (MT) wq had one worker thread per CPU and a single threaded (ST) wq had one worker thread system-wide. A single MT wq needed to keep around the same number of workers as the number of CPUs. The kernel grew a lot of MT wq users over the years and with the number of CPU cores continuously rising, some systems saturated the default 32k PID space just booting up.

Although MT wq wasted a lot of resource, the level of concurrency provided was unsatisfactory. The limitation was common to both ST and MT wq albeit less severe on MT. Each wq maintained its own separate worker pool. An MT wq could provide only one execution context per CPU while an ST wq one for the whole system. Work items had to compete for those very limited execution contexts leading to various problems including proneness to deadlocks around the single execution context.

The tension between the provided level of concurrency and resource usage also forced its users to make unnecessary tradeoffs like libata choosing to use ST wq for polling PIOs and accepting an unnecessary limitation that no two polling PIOs can progress at the same time. As MT wq don’t provide much better concurrency, users which require higher level of concurrency, like async or fscache, had to implement their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with focus on the following goals.

  • Maintain compatibility with the original workqueue API.
  • Use per-CPU unified worker pools shared by all wq to provide flexible level of concurrency on demand without wasting a lot of resource.
  • Automatically regulate worker pool and level of concurrency so that the API users don’t need to worry about such details.

The Design

In order to ease the asynchronous execution of functions a new abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function that is to be executed asynchronously. Whenever a driver or subsystem wants a function to be executed asynchronously it has to set up a work item pointing to that function and queue that work item on a workqueue.

Special purpose threads, called worker threads, execute the functions off of the queue, one after the other. If no work is queued, the worker threads become idle. These worker threads are managed in so called worker-pools.

The cmwq design differentiates between the user-facing workqueues that subsystems and drivers queue work items on and the backend mechanism which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other for high priority ones, for each possible CPU and some extra worker-pools to serve work items queued on unbound workqueues - the number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special workqueue API functions as they see fit. They can influence some aspects of the way the work items are executed by setting flags on the workqueue they are putting the work item on. These flags include things like CPU locality, concurrency limits, priority and more. To get a detailed overview refer to the API description of alloc_workqueue() below.

When a work item is queued to a workqueue, the target worker-pool is determined according to the queue parameters and workqueue attributes and appended on the shared worklist of the worker-pool. For example, unless specifically overridden, a work item of a bound workqueue will be queued on the worklist of either normal or highpri worker-pool that is associated to the CPU the issuer is running on.

For any worker pool implementation, managing the concurrency level (how many execution contexts are active) is an important issue. cmwq tries to keep the concurrency at a minimal but sufficient level. Minimal to save resources and sufficient in that the system is used at its full capacity.

Each worker-pool bound to an actual CPU implements concurrency management by hooking into the scheduler. The worker-pool is notified whenever an active worker wakes up or sleeps and keeps track of the number of the currently runnable workers. Generally, work items are not expected to hog a CPU and consume many cycles. That means maintaining just enough concurrency to prevent work processing from stalling should be optimal. As long as there are one or more runnable workers on the CPU, the worker-pool doesn’t start execution of a new work, but, when the last running worker goes to sleep, it immediately schedules a new worker so that the CPU doesn’t sit idle while there are pending work items. This allows using a minimal number of workers without losing execution bandwidth.

Keeping idle workers around doesn’t cost other than the memory space for kthreads, so cmwq holds onto idle ones for a while before killing them.

For unbound workqueues, the number of backing pools is dynamic. Unbound workqueue can be assigned custom attributes using apply_workqueue_attrs() and workqueue will automatically create backing worker pools matching the attributes. The responsibility of regulating concurrency level is on the users. There is also a flag to mark a bound wq to ignore the concurrency management. Please refer to the API section for details.

Forward progress guarantee relies on that workers can be created when more execution contexts are necessary, which in turn is guaranteed through the use of rescue workers. All work items which might be used on code paths that handle memory reclaim are required to be queued on wq’s that have a rescue-worker reserved for execution under memory pressure. Else it is possible that the worker-pool deadlocks waiting for execution contexts to free up.

From the sentence "Use per-CPU unified worker pools shared by all wq to provide flexible level of concurrency on demand without wasting a lot of resource." we can tell that all workqueues now share the per-CPU unified worker pools; the kernel no longer creates one worker thread per CPU core for every bound workqueue, which avoids wasting resources.
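As a quick sketch of the modern interface referenced in the documentation above (the queue name, flags and max_active value here are arbitrary examples, not taken from the experiments below), a driver would normally use alloc_workqueue() and choose between a bound (per-CPU) and an unbound queue via flags:

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int __init example_init(void)
{
        /* A bound (per-CPU) workqueue: its work items are executed by the
         * shared per-CPU worker pools described above. max_active = 4 allows
         * up to four work items from this workqueue to be active on each CPU
         * at a time. Passing WQ_UNBOUND instead would route the work items to
         * the dynamically created unbound worker pools. */
        example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 4);
        if (!example_wq)
                return -ENOMEM;
        return 0;
}

static void __exit example_exit(void)
{
        destroy_workqueue(example_wq);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");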

Experiments

Two works on the same workqueue and the same CPU

Create one workqueue and two work items. One work prints the thread id and CPU id it is running on and then sleeps for 120 seconds. The other is a delayed work that starts running 60 seconds after being queued; it prints the thread id and CPU id it is running on and then keeps computing until 60 seconds have elapsed. Both run on the same workqueue and on CPU 2.

The code is as follows:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/list.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *test_wq = NULL;

static struct work_struct work;

static struct delayed_work delay_work;

static void work_func(struct work_struct *work){
    /* raw_smp_processor_id() is safe to call here; an unpaired get_cpu()
     * would leave preemption disabled across msleep(). */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
    msleep(120*1000);    /* sleep for 120 seconds inside the work item */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u wakeup\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static void delay_work_func(struct work_struct *work){
    long long i = 0;
    long long j = 0;
    u64 current_jif;
    u64 start_jif;

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());

    start_jif = get_jiffies_64();

    /* Busy-loop on the CPU until about 60 seconds have elapsed. */
    while (1){
        for (j = 0; j < 1000000; j = j + 2) {
            i = i + j;
        }

        current_jif = get_jiffies_64();

        if (jiffies64_to_msecs(current_jif - start_jif) > 60*1000)
            break;
    }

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u run over\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static int __init test_init(void){

        test_wq = create_workqueue("test_wq");

        INIT_WORK(&work, work_func);

        INIT_DELAYED_WORK(&delay_work, delay_work_func);

        queue_work_on(2, test_wq, &work);

        queue_delayed_work_on(2, test_wq, &delay_work, msecs_to_jiffies(60*1000));  /* delay_work_func starts running 60 seconds later */

        printk(KERN_ERR "-----%s %d-----\n", __func__, __LINE__);

        return 0;
}

static void __exit test_exit(void){
        /* Make sure nothing is still queued or running before the module code is freed. */
        cancel_work_sync(&work);
        cancel_delayed_work_sync(&delay_work);
        destroy_workqueue(test_wq);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("xxx@outlook.com");

Before the module was loaded, a kworker with PID 1361 already existed on the system, as shown below:

root@test-System-Product-Name:/home/test# ps -elf|grep kworker

.......

1 I root        1350       2  0  80   0 -     0 worker Jun24 ?        00:00:00 [kworker/7:0-events]
1 I root        1361       2  0  80   0 -     0 worker Jun24 ?        00:00:00 [kworker/2:0-mm_percpu_wq]
......

The kernel log after running the module:

[ 4008.511615] -----test_init 59-----

[ 4008.511886] ####-----work_func 17-----kworker/2:0 pid=1361 cpu=2
[ 4129.899473] ####-----work_func 19-----kworker/2:0 pid=1361 cpu=2 wakeup
[ 4129.899550] ####-----delay_work_func 28-----kworker/2:0 pid=1361 cpu=2
[ 4189.903391] ####-----delay_work_func 43-----kworker/2:0 pid=1361 cpu=2 run over 

From the output, both works ran in the context of the kworker that already existed on the system (PID 1361), and both ran on CPU 2. Even though the first work slept for 120 seconds, the delayed work still had to wait for it to finish completely before starting. The cmwq goal "Maintain compatibility with the original workqueue API." explains this: in the original implementation a bound (MT) workqueue had one worker thread per CPU, i.e. one execution context per CPU per workqueue. These two works were queued on the same workqueue and the same CPU, so they share that single execution context, and the delayed work had to wait for the preceding work to finish even after its delay had expired.
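One more detail that fits this observation: create_workqueue() is nowadays only a legacy wrapper around alloc_workqueue() with max_active fixed at 1, so at most one work item of such a workqueue can be active on a given CPU at any time. In the 5.15 headers (include/linux/workqueue.h) the macro looks roughly like this:

#define create_workqueue(name)						\
	alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))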

Two works on different workqueues on the same CPU

Create two workqueues and two work items. One work prints the thread id and CPU id it is running on and then sleeps for 150 seconds. The other is a delayed work that starts running 60 seconds after being queued; it prints the thread id and CPU id it is running on and then keeps computing until 60 seconds have elapsed. They run on different workqueues, but both on CPU 2.

The code is as follows:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/list.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *test_wq = NULL;
static struct workqueue_struct *test_delay_wq = NULL;

static struct work_struct work;

static struct delayed_work delay_work;

static void work_func(struct work_struct *work){
    /* raw_smp_processor_id() is safe to call here; an unpaired get_cpu()
     * would leave preemption disabled across msleep(). */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
    msleep(150*1000);    /* sleep for 150 seconds inside the work item */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u wakeup\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static void delay_work_func(struct work_struct *work){
    long long i = 0;
    long long j = 0;
    u64 current_jif;
    u64 start_jif;

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());

    start_jif = get_jiffies_64();

    /* Busy-loop on the CPU until about 60 seconds have elapsed. */
    while (1){
        for (j = 0; j < 1000000; j = j + 2) {
            i = i + j;
        }

        current_jif = get_jiffies_64();

        if (jiffies64_to_msecs(current_jif - start_jif) > 60*1000)
            break;
    }

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u run over\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static int __init test_init(void){

        test_wq = create_workqueue("test_wq");

        test_delay_wq = create_workqueue("test_delay_wq");

        INIT_WORK(&work, work_func);

        INIT_DELAYED_WORK(&delay_work, delay_work_func);

        queue_work_on(2, test_wq, &work);

        queue_delayed_work_on(2, test_delay_wq, &delay_work, msecs_to_jiffies(60*1000));  /* delay_work_func starts running 60 seconds later */

        printk(KERN_ERR "-----%s %d-----\n", __func__, __LINE__);

        return 0;
}

static void __exit test_exit(void){
        /* Make sure nothing is still queued or running before the module code is freed. */
        cancel_work_sync(&work);
        cancel_delayed_work_sync(&delay_work);
        destroy_workqueue(test_wq);
        destroy_workqueue(test_delay_wq);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("xxx@outlook.com");

Before the module was loaded, kworkers with PIDs 29 and 574 already existed on the system, as shown below:

root@test-System-Product-Name:/home/test# ps -elf|grep kworker

.......

1 I root          29       2  0  80   0 -     0 worker 00:49 ?        00:00:00 [kworker/2:0-cgroup_destroy]

.......
1 I root         574       2  0  80   0 -     0 worker 00:50 ?        00:00:00 [kworker/2:2-events]
......

The kernel log after running the module:

[  255.010707] -----test_init 62-----
[  255.010759] ####-----work_func 18-----kworker/2:2 pid=574 cpu=2
[  316.523683] ####-----delay_work_func 29-----kworker/2:0 pid=29 cpu=2
.....
[  376.527580] ####-----delay_work_func 44-----kworker/2:0 pid=29 cpu=2 run over
[  414.827563] ####-----work_func 20-----kworker/2:2 pid=574 cpu=2 wakeup

From the output, both works ran in the context of kworkers that already existed on the system, and both ran on CPU 2. The non-delayed work ran on the kworker with PID 574 and then slept for 150 seconds. The delayed work started 60 seconds later on the kworker with PID 29, computed for 60 seconds and finished. Finally the non-delayed work woke from its sleep and ran to completion. Again, per the cmwq goal "Maintain compatibility with the original workqueue API.": in the original implementation a bound (MT) workqueue had its own worker thread on each CPU, i.e. one execution context per CPU per workqueue. These two works belong to different workqueues, so they get separate execution contexts (here, two different workers of the same CPU 2 worker pool, kworker/2:2 and kworker/2:0), and the delayed work did not have to wait for the other work to finish once its delay expired.

Two works on the same workqueue on different CPUs

The code is as follows:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/string.h>
#include <linux/list.h>
#include <linux/sysfs.h>
#include <linux/ctype.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

static struct workqueue_struct *test_wq = NULL;

static struct work_struct work;

static struct delayed_work delay_work;

static void work_func(struct work_struct *work){
    /* raw_smp_processor_id() is safe to call here; an unpaired get_cpu()
     * would leave preemption disabled across msleep(). */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
    msleep(150*1000);    /* sleep for 150 seconds inside the work item */
    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u wakeup\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static void delay_work_func(struct work_struct *work){
    long long i = 0;
    long long j = 0;
    u64 current_jif;
    u64 start_jif;

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());

    start_jif = get_jiffies_64();

    /* Busy-loop on the CPU until about 60 seconds have elapsed. */
    while (1){
        for (j = 0; j < 1000000; j = j + 2) {
            i = i + j;
        }

        current_jif = get_jiffies_64();

        if (jiffies64_to_msecs(current_jif - start_jif) > 60*1000)
            break;
    }

    printk(KERN_ERR "####-----%s %d-----%s pid=%d cpu=%u run over\n", __func__, __LINE__, current->comm, current->pid, raw_smp_processor_id());
}

static int __init test_init(void){

        test_wq = create_workqueue("test_wq");

        INIT_WORK(&work, work_func);

        INIT_DELAYED_WORK(&delay_work, delay_work_func);

        queue_work_on(2, test_wq, &work);

        queue_delayed_work_on(3, test_wq, &delay_work, msecs_to_jiffies(60*1000));  /* delay_work_func starts running 60 seconds later, on CPU 3 */

        printk(KERN_ERR "-----%s %d-----\n", __func__, __LINE__);

        return 0;
}

static void __exit test_exit(void){
        /* Make sure nothing is still queued or running before the module code is freed. */
        cancel_work_sync(&work);
        cancel_delayed_work_sync(&delay_work);
        destroy_workqueue(test_wq);
}

module_init(test_init);
module_exit(test_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("xxx@outlook.com");

Before the module was loaded, kworkers with PIDs 183 and 320 already existed on the system.

The kernel log after running the module:

[ 1461.632982] ####-----work_func 17-----kworker/2:1 pid=183 cpu=2
[ 1522.795685] ####-----delay_work_func 28-----kworker/3:2 pid=320 cpu=3
.......
[ 1582.799604] ####-----delay_work_func 43-----kworker/3:2 pid=320 cpu=3 run over
[ 1627.243682] ####-----work_func 19-----kworker/2:1 pid=183 cpu=2 wakeup

From the output, apart from the two works running on different CPUs, the result is similar to the previous experiment with two works on different workqueues on the same CPU: even on the same workqueue, two works queued to different CPUs are not strictly serialized in queueing order. (In another test I did not let the non-delayed work sleep but had it do the same computation as the delayed work, with the delayed work delayed only 10 seconds; 10 seconds in, both CPU 2 and CPU 3 were at 100% utilization, showing the two works running at the same time.) If strict ordering is needed, alloc_ordered_workqueue() creates an ordered workqueue, which is an unbound workqueue; create_singlethread_workqueue() is also a macro and simply calls alloc_ordered_workqueue().
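For reference, in the 5.15 headers (include/linux/workqueue.h) those two interfaces look roughly like this: an ordered workqueue is an unbound workqueue with max_active of 1, and create_singlethread_workqueue() is just a thin wrapper around it:

#define alloc_ordered_workqueue(fmt, flags, args...)			\
	alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED |		\
			__WQ_ORDERED_EXPLICIT | (flags), 1, ##args)

#define create_singlethread_workqueue(name)				\
	alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)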

Summary

cmwq remains compatible with the original workqueue API while saving resources. If multiple workqueues are created and the kworkers already present on the system are not enough, new kworkers are created dynamically to meet the demand. Conversely, if a worker pool has too many idle kworkers and one of them has been idle for more than 5 minutes (IDLE_WORKER_TIMEOUT), that kworker is destroyed (see idle_worker_timeout()).
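A simplified paraphrase of that idle-worker reaping logic, based on my reading of kernel/workqueue.c in 5.15 (not a verbatim copy): each worker pool arms an idle timer, and when it fires the pool keeps destroying the longest-idle workers as long as it has too many of them and their idle time has exceeded IDLE_WORKER_TIMEOUT (300 * HZ, i.e. 5 minutes):

/* Paraphrased sketch of idle_worker_timeout() in kernel/workqueue.c. */
static void idle_worker_timeout(struct timer_list *t)
{
	struct worker_pool *pool = from_timer(pool, t, idle_timer);

	raw_spin_lock_irq(&pool->lock);

	while (too_many_workers(pool)) {
		struct worker *worker;
		unsigned long expires;

		/* idle_list is kept in LIFO order; the last entry has been idle longest */
		worker = list_entry(pool->idle_list.prev, struct worker, entry);
		expires = worker->last_active + IDLE_WORKER_TIMEOUT;

		if (time_before(jiffies, expires)) {
			/* Not idle long enough yet; re-arm the timer and stop. */
			mod_timer(&pool->idle_timer, expires);
			break;
		}

		destroy_worker(worker);
	}

	raw_spin_unlock_irq(&pool->lock);
}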
