One day, a large number of CentOS 7 servers suddenly started misbehaving: systemd was pegged at 100% CPU and zombie processes were piling up. The top output looked like this: (screenshot: top output)
As the zombies accumulated, system resources were gradually exhausted and the machines eventually went down.
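The zombie buildup itself is easy to watch from a shell. A minimal sketch using standard procps ps options (nothing here is specific to the incident):

```shell
# Count processes whose state field starts with Z (zombie/defunct).
count_zombies() {
    ps -eo stat= | awk '$1 ~ /^Z/ {n++} END {print n+0}'
}
count_zombies
```

Run it in a loop (e.g. under `watch`) to see whether the count keeps climbing.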
On CentOS 7, systemd runs as PID 1 and is responsible for reaping orphaned processes. In this incident, systemd spinning at 100% CPU is the cause and the flood of zombies is the effect, so the first question is why systemd is burning all that CPU.
After installing the systemd debuginfo package and observing systemd with perf, the user-space functions with the highest CPU share were endswith, hidden_file_allow_backup, dirent_ensure_type, hidden_file and find_symlinks_fd; on the kernel side, dcache_readdir stood out, suggesting the kernel was busy reading directories. (screenshot: perf report output)
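For reference, the profiling step can be sketched as below. The commands need root, a debuginfo repository and a live systemd, so they are written into a helper script here rather than executed; profile-systemd.sh is just an illustrative name:

```shell
# Write out the perf workflow used above: sample PID 1 with call graphs,
# then show the hottest functions. Sketch only; run it on an affected box.
cat > profile-systemd.sh <<'EOF'
#!/bin/sh
debuginfo-install -y systemd        # symbol names for perf report (yum-utils)
perf record -p 1 -g -- sleep 10     # sample systemd (PID 1) for 10 seconds
perf report --stdio | head -n 30    # highest CPU share first
EOF
chmod +x profile-systemd.sh
wc -l < profile-systemd.sh
```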
Write a hidden_file.stp script that prints the user-space stack whenever hidden_file is called:
probe process("/usr/lib/systemd/systemd").function("hidden_file").call {
    print_usyms(ubacktrace())
}
Running stap hidden_file.stp produces:
0x7f927f3b4b35 : 0x7f927f3b4b35 [/usr/lib64/libc-2.17.so+0x21b35/0x3bc000]
0x7f9280d04b20 : hidden_file+0x0/0x50 [/usr/lib/systemd/systemd]
0x7f9280cdeaf8 : find_symlinks_fd+0xc8/0x510 [/usr/lib/systemd/systemd]
0x7f9280ce386e : unit_file_lookup_state+0x26e/0x410 [/usr/lib/systemd/systemd]
0x7f9280ce4be7 : unit_file_get_list+0x1b7/0x390 [/usr/lib/systemd/systemd]
0x7f9280cc77ce : method_list_unit_files+0xae/0x220 [/usr/lib/systemd/systemd]
0x7f9280d294c7 : object_find_and_run+0xac7/0x1670 [/usr/lib/systemd/systemd]
0x7f9280d2a189 : bus_process_object+0x119/0x310 [/usr/lib/systemd/systemd]
0x7f9280d32e53 : bus_process_internal+0xdb3/0x1210 [/usr/lib/systemd/systemd]
0x7f9280d332d3 : io_callback+0x13/0x50 [/usr/lib/systemd/systemd]
0x7f9280d387d0 : source_dispatch+0x1c0/0x320 [/usr/lib/systemd/systemd]
0x7f9280d3986a : sd_event_dispatch+0x6a/0x1b0 [/usr/lib/systemd/systemd]
0x7f9280c992e3 : manager_loop+0x403/0x500 [/usr/lib/systemd/systemd]
0x7f9280c8d72b : main+0x1e7b/0x3e00 [/usr/lib/systemd/systemd]
From the code of systemd's source_dispatch, this stack corresponds to dispatching a SOURCE_IO event:
static int source_dispatch(sd_event_source *s) {
        int r = 0;

        assert(s);
        assert(s->pending || s->type == SOURCE_EXIT);

        if (s->type != SOURCE_DEFER && s->type != SOURCE_EXIT) {
                r = source_set_pending(s, false);
                if (r < 0)
                        return r;
        }

        if (s->type != SOURCE_POST) {
                sd_event_source *z;
                Iterator i;

                /* If we execute a non-post source, let's mark all
                 * post sources as pending */

                SET_FOREACH(z, s->event->post_sources, i) {
                        if (z->enabled == SD_EVENT_OFF)
                                continue;

                        r = source_set_pending(z, true);
                        if (r < 0)
                                return r;
                }
        }

        if (s->enabled == SD_EVENT_ONESHOT) {
                r = sd_event_source_set_enabled(s, SD_EVENT_OFF);
                if (r < 0)
                        return r;
        }

        s->dispatching = true;

        switch (s->type) {

        case SOURCE_IO:
                r = s->io.callback(s, s->io.fd, s->io.revents, s->userdata);
                break;

        case SOURCE_TIME_REALTIME:
        case SOURCE_TIME_BOOTTIME:
        case SOURCE_TIME_MONOTONIC:
        case SOURCE_TIME_REALTIME_ALARM:
        case SOURCE_TIME_BOOTTIME_ALARM:
                r = s->time.callback(s, s->time.next, s->userdata);
                break;
        /* ... remaining cases elided ... */
So the suspicion was that a large number of SOURCE_IO events were being triggered. sd_event_add_io is the function that registers new IO event sources, so it was probed with systemtap to capture user stacks, and surprisingly it was never hit. That means nobody was flooding systemd with new SOURCE_IO events when the problem occurred; systemd must instead be stuck in some loop. Going back to the user stack and reading the code for loops, this one looked suspicious: roughly, it iterates over every directory in paths.unit_path:
int unit_file_get_list(
                UnitFileScope scope,
                const char *root_dir,
                Hashmap *h) {

        _cleanup_lookup_paths_free_ LookupPaths paths = {};
        char **i;
        int r;

        assert(scope >= 0);
        assert(scope < _UNIT_FILE_SCOPE_MAX);
        assert(h);

        r = verify_root_dir(scope, &root_dir);
        if (r < 0)
                return r;

        r = lookup_paths_init_from_scope(&paths, scope, root_dir);
        if (r < 0)
                return r;

        STRV_FOREACH(i, paths.unit_path) {
                _cleanup_closedir_ DIR *d = NULL;
                _cleanup_free_ char *units_dir;

                units_dir = path_join(root_dir, *i, NULL);
                if (!units_dir)
                        return -ENOMEM;

                d = opendir(units_dir);
                if (!d) {
                        if (errno == ENOENT)
                                continue;

                        return -errno;
                }

                for (;;) {
                        _cleanup_(unit_file_list_free_onep) UnitFileList *f = NULL;
                        struct dirent *de;

                        errno = 0;
                        de = readdir(d);
                        if (!de && errno != 0)
                                return -errno;

                        if (!de)
                                break;

                        if (hidden_file(de->d_name))
                                continue;
                        /* ... rest of the loop body elided ... */
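The earlier negative check (nobody is registering new IO sources) was done the same way as hidden_file.stp. A sketch, with the probe written to a file since stap itself needs root and matching debuginfo to run:

```shell
# Probe sd_event_add_io: if this never prints a stack while systemd is at
# 100% CPU, no new SOURCE_IO sources are being created.
cat > add_io.stp <<'EOF'
probe process("/usr/lib/systemd/systemd").function("sd_event_add_io").call {
    print_usyms(ubacktrace())
}
EOF
# stap add_io.stp   # run as root on the affected machine
grep -c 'sd_event_add_io' add_io.stp
```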
To confirm where the loop was, a systemtap script was used to check whether particular source lines were being executed; the place that kept looping was indeed that for (;;):
probe process("/usr/lib/systemd/systemd").statement("*@src/shared/install.c:2461") {
    printf("hit\n")
}
Printing the directory being scanned showed it was /run/systemd/system/session-***.scope.d/
A look inside /run/systemd/ showed it was enormous: on some machines the directory itself had grown to 32 MB and held hundreds of thousands of entries.
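Those two numbers (directory inode size, entry count) come from plain shell tools. A sketch on a scratch directory, since the real path /run/systemd/system is machine-specific:

```shell
# Count entries and read the directory's own inode size, as was done on the
# affected machines (a scratch dir stands in for /run/systemd/system).
d=$(mktemp -d)
for i in 1 2 3; do mkdir "$d/session-$i.scope.d"; done
ls -f "$d" | grep -c 'scope.d'   # entry count; -f skips sorting huge dirs
stat -c '%s' "$d"                # size in bytes of the directory itself
```

A directory inode only grows as entries are added, so a 32 MB directory implies a huge number of (current or past) entries.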
Googling for causes of a huge number of /run/systemd/system/session entries turned up a dbus bug with identical symptoms: https://bugs.freedesktop.org/show_bug.cgi?id=95263. From the maintainer's comment in that report:
I think what I'm going to do here is:
* For 1.11.x (master): apply the patches you used, and revert the
"uid 0" workaround.
* For 1.10.x: stick with the "uid 0" workaround, because that workaround
is enough to address this for logind, which is the most important impact.
We can consider additionally backporting the patches from Bug #95619
and this bug later, once they have had more testing in master; but
if we do, we will also keep the workaround.
So the bug had been fixed on both the 1.11.x and 1.10.x branches. The affected machines were running dbus-1.10.24-13.el7_6.x86_64, which should already contain the fix... puzzling. The next step was to check when dbus had been upgraded.
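On an RPM-based system, both the installed version and the upgrade time come straight from the package database. A sketch, guarded so it degrades on non-RPM hosts:

```shell
# Query the installed dbus version and the package install/upgrade times.
if command -v rpm >/dev/null 2>&1; then
    rpm -q dbus || true          # e.g. dbus-1.10.24-13.el7_6.x86_64
    rpm -q --last dbus || true   # packages with install time, newest first
else
    echo "rpm not available"
fi
```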
Sorting the files on a problem machine by timestamp showed that entries from March and April were still present in bulk, but they stopped abruptly in May. That matches the time of the dbus upgrade.
So these sessions had accumulated under the old dbus before the upgrade, and upgrading dbus did not clean them up. The leftovers were harmless on their own, until one day some systemd command triggered a SOURCE_IO event and set this problem off. Deleting the session*.scope entries under /run/systemd/system/ that predate the dbus upgrade resolved the problem.
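The cleanup can be sketched with find's -newermt test. The cutoff date below is illustrative, not the incident's actual upgrade date, and the demo runs on a scratch directory instead of /run/systemd/system:

```shell
# Remove session-*.scope entries older than a cutoff; keep newer ones.
tmp=$(mktemp -d)
mkdir "$tmp/session-1.scope" "$tmp/session-2.scope"
touch -d '2019-03-01' "$tmp/session-1.scope"   # stale, predates the upgrade
touch -d '2019-06-01' "$tmp/session-2.scope"   # newer, should survive
find "$tmp" -mindepth 1 -maxdepth 1 -name 'session-*.scope' \
     ! -newermt '2019-05-01' -exec rm -rf {} +
ls "$tmp"   # only session-2.scope remains
```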