Reading the Kernel (Linux 4.9.9): Notes on a Simple Analysis of the epoll Implementation

1 Prototypes of the three epoll system calls

#include <sys/epoll.h>
int epoll_create(int size);
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

First, the size argument of epoll_create is ignored, but a value less than or equal to zero returns an error.
Second, epoll_create returns a file descriptor, which can itself be monitored by another epoll instance.
Third, the events argument of epoll_wait must point to memory the caller has allocated; epoll does not allocate it for us (both points are illustrated in the sketch below).
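
A minimal sketch of the second and third points (error handling omitted; the nested-epoll arrangement is purely illustrative, not something the kernel walkthrough below requires):

#include <sys/epoll.h>

int main(void)
{
    /* An epoll fd is itself a pollable file, so it can be added to
     * another epoll instance (the second point above). */
    int inner = epoll_create(1);   /* size must be > 0, but is otherwise ignored */
    int outer = epoll_create(1);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = inner };
    epoll_ctl(outer, EPOLL_CTL_ADD, inner, &ev);

    /* The events buffer is our own memory (the third point above). */
    struct epoll_event events[8];
    int n = epoll_wait(outer, events, 8, 0);   /* timeout 0: poll and return at once */
    return n < 0;
}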

2 epoll data structures

The main epoll data structures are epoll_event (an epoll event), eventpoll (the monitoring system itself), epitem (a monitored object), and eppoll_entry (the form a monitored object takes on a device wait queue).
If we liken the epoll subsystem to a society, then eventpoll can be seen as a family (in both the legal and the traditional sense), an epitem is an individual member of the family (you, your parents, your siblings, your grandparents, and so on), an eppoll_entry represents a family member's identity in some other social organization (your father may be a teacher at a school, yet at home he is still your father), and an epoll_event represents the various happenings (for example, the event that you have come home).

1) epoll_event, which represents an epoll monitoring event

//The events of interest and the events that were triggered
struct epoll_event {
    __uint32_t events;      /* Epoll events */
    epoll_data_t data;      /* User data variable */
};
//Holds data for the file descriptor that triggered the event (its meaning depends on how you use it)
typedef union epoll_data {
    void *ptr;
    int fd;
    __uint32_t u32;
    __uint64_t u64;
} epoll_data_t;

events can be a combination of the following macros:
EPOLLIN: the file descriptor is readable (this includes a normal close of the peer socket);
EPOLLOUT: the file descriptor is writable;
EPOLLPRI: the file descriptor has urgent data to read (i.e. out-of-band data has arrived);
EPOLLERR: an error occurred on the file descriptor;
EPOLLHUP: the file descriptor was hung up;
EPOLLET: put epoll into edge-triggered (ET) mode, as opposed to the default level-triggered (LT) mode;
EPOLLONESHOT: monitor only one event; after that event has been reported, the socket must be re-armed via epoll_ctl if you want to keep monitoring it (see the re-arming sketch below).
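
A minimal re-arming sketch for EPOLLONESHOT (epfd and fd are assumed to already exist; using EPOLL_CTL_MOD to re-arm is common practice rather than anything shown in the kernel source below):

#include <sys/epoll.h>

/* After EPOLLONESHOT fires, the fd stays registered but is disabled;
 * EPOLL_CTL_MOD re-arms it with a fresh event mask. */
static int rearm(int epfd, int fd)
{
    struct epoll_event ev = {
        .events  = EPOLLIN | EPOLLONESHOT,   /* interest set for the next single event */
        .data.fd = fd,
    };
    return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}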

2) eventpoll, which represents the epoll monitor itself inside the kernel

/*
 * This structure is stored inside the "private_data" member of the file
 * structure and represents the main data structure for the eventpoll
 * interface.
 */

struct eventpoll {
    /* Protect the access to this structure */

    //Protects access to this structure
    spinlock_t lock;

    /*
     * This mutex is used to ensure that files are not removed
     * while epoll is using them. This is held during the event
     * collection loop, the file cleanup path, the epoll file exit
     * code and the ctl operations.
     */

    //Prevents files from being removed while they are in use
    struct mutex mtx;

    /* Wait queue used by sys_epoll_wait() */

    //Wait queue used by sys_epoll_wait() --> the queue that epoll_wait()/epoll_pwait() callers sleep on
    wait_queue_head_t wq;

    /* Wait queue used by file->poll() */

    //Wait queue used by file->poll()
    wait_queue_head_t poll_wait;

    /* List of ready file descriptors */

    //List of items whose events have fired: the ready list
    struct list_head rdllist;

    /* RB tree root used to store monitored fd structs */

    //Red-black tree (root) used to manage all monitored fds
    struct rb_root rbr;

    /*
     * This is a single linked list that chains all the "struct epitem" that
     * happened while transferring ready events to userspace w/out
     * holding ->lock.
     */

    //Chains up the fds whose events arrive while ready events are being transferred to user space
    struct epitem *ovflist;

    /* wakeup_source used when ep_scan_ready_list is running */
    struct wakeup_source *ws;

    /* The user that created the eventpoll descriptor */
    struct user_struct *user;

    struct file *file;

    /* used to optimize loop detection check */
    int visited;
    struct list_head visited_list_link;
};

3) epitem, which represents a monitored file descriptor inside the epoll subsystem

 //When an fd is added, an epitem is created for it; this is the basic structure the kernel uses to manage epoll
struct epitem {
    union {
        /* RB tree node links this structure to the eventpoll RB tree */

        //Red-black tree node used by the owning eventpoll
        struct rb_node rbn;

        /* Used to free the struct epitem */


        struct rcu_head rcu;
    };

    /* List header used to link this structure to the eventpoll ready list */

    //Node for the ready list
    struct list_head rdllink;

    /*
     * Works together "struct eventpoll"->ovflist in keeping the
     * single linked chain of items.
     */

    //Used for the ovflist chain in the owning eventpoll
    struct epitem *next;

    /* The file descriptor information this item refers to */

    //Information about the monitored file descriptor this item refers to
    struct epoll_filefd ffd;

    /* Number of active wait queue attached to poll operations */

    //Number of active wait queues attached by poll operations
    int nwait;

    /* List containing poll wait queues */

    //Doubly linked list holding the monitored file's poll wait queues; similar in purpose to the poll_table in select/poll
    struct list_head pwqlist;

    /* The "container" of this item */

    //The owning eventpoll (many epitems belong to one eventpoll)
    struct eventpoll *ep;

    /* List header used to link this item to the "struct file" items list */

    //Doubly linked list linking this item to the monitored struct file; the file keeps f_ep_links, a list of all epoll nodes watching it
    struct list_head fllink;

    /* wakeup_source used when EPOLLWAKEUP is set */
    struct wakeup_source __rcu *ws;

    /* The structure that describe the interested events and the source fd */

    //The registered events of interest, i.e. the epoll_event from user space
    struct epoll_event event;
};

4) eppoll_entry, which represents an epitem on a device's wait queue

//The callback exists to put the epitem's rdllink onto the ready list
struct eppoll_entry {
    /* List header used to link this structure to the "struct epitem" */
    struct list_head llink;

    /* The "base" pointer is set to the container "struct epitem" */

    //The owning epitem
    struct epitem *base;

    /*
     * Wait queue item that will be linked to the target file wait
     * queue head.
     */

    //Hooked as an element into the monitored fd's wait queue
    wait_queue_t wait;

    /* The wait queue head that linked the "wait" wait queue item */

    //The monitored fd's wait queue head; for a socket, whead is sock->sk_sleep
    wait_queue_head_t *whead;
};

When we access a device that is temporarily unavailable, the process is hung on the device's wait queue, yields the CPU, and sleeps.
eppoll_entry is what ties an epitem to the callback that runs when the epitem's event fires. First the eppoll_entry's whead is pointed at the fd's device wait queue (the counterpart of wait_address in select), then its base field is initialized to point at the epitem, and finally the eppoll_entry is hung on the fd's device wait queue via add_wait_queue. When hardware data arrives, the interrupt handler wakes the waiters on that queue, which invokes the wake-up function ep_poll_callback (ep_poll_callback: when an event is triggered on the fd, it links the epitem's rdllink node into the ready list, epfd->file->eventpoll->rdllist).
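
To make the mechanism concrete, here is a simplified, non-authoritative sketch of how an entry with a custom wake function rides a wait queue in the 4.9 API; my_wake and my_register are made-up names, and the real code (ep_ptable_queue_proc and ep_poll_callback) is shown in section 6:

/* Sketch only, mirroring what eppoll_entry does. */
static int my_wake(wait_queue_t *wait, unsigned mode, int sync, void *key)
{
    struct eppoll_entry *pwq = container_of(wait, struct eppoll_entry, wait);
    /* epoll's real callback would link pwq->base (the epitem) into
     * pwq->base->ep->rdllist here, then wake sleepers on ep->wq */
    return 1;
}

static void my_register(struct eppoll_entry *pwq, wait_queue_head_t *whead)
{
    init_waitqueue_func_entry(&pwq->wait, my_wake); /* a wake function instead of a task */
    pwq->whead = whead;                             /* remember the device wait queue */
    add_wait_queue(whead, &pwq->wait);  /* from now on, wake_up(whead) calls my_wake() */
}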

3 A simple epoll example

#include <unistd.h>
#include <iostream>
#include <sys/epoll.h>
using namespace std;
int main(void)
{
    int epfd, nfds;
    // ev registers the event; the events array receives the events to be handled
    struct epoll_event ev, events[5];

    // We only need to monitor one descriptor: standard output
    epfd = epoll_create(1);
    ev.data.fd = STDOUT_FILENO;

    // Monitor for writability, in ET (edge-triggered) mode
    ev.events = EPOLLOUT | EPOLLET;

    // Register the epoll event: monitor standard output
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDOUT_FILENO, &ev);
    for (;;)
    {
        // Fetch the events that occurred, with no timeout
        nfds = epoll_wait(epfd, events, 5, -1);
        for (int i = 0; i < nfds; i++)
        {
            if (events[i].data.fd == STDOUT_FILENO)
                cout << "hello world!" << endl;
        }
    }
    return 0;
}

4 epoll subsystem initialization

This mainly sets up two slab caches for memory allocation: eventpoll_epi, which holds epitem objects, and eventpoll_pwq, which holds eppoll_entry objects.

static int __init eventpoll_init(void)
{
    struct sysinfo si;

    si_meminfo(&si);
    /*
     * Allows top 4% of lomem to be allocated for epoll watches (per user).
     */

     // Limits the maximum number of descriptors that may be added to epoll (per user)
    max_user_watches = (((si.totalram - si.totalhigh) / 25) << PAGE_SHIFT) /
        EP_ITEM_COST;
    BUG_ON(max_user_watches < 0);

    /*
     * Initialize the structure used to perform epoll file descriptor
     * inclusion loops checks.
     */

    // Initialize the recursion-check queues

    //An epoll fd is itself a file and can be watched by poll/select/epoll; if epoll instances watch each other, a loop could form.
    ep_nested_calls_init(&poll_loop_ncalls);

    /* Initialize the structure used to perform safe poll wait head wake ups */


    ep_nested_calls_init(&poll_safewake_ncalls);

    /* Initialize the structure used to perform file's f_op->poll() calls */
    ep_nested_calls_init(&poll_readywalk_ncalls);

    /*
     * We can have many thousands of epitems, so prevent this from
     * using an extra cache line on 64-bit (and smaller) CPUs
     */
    BUILD_BUG_ON(sizeof(void *) <= 8 && sizeof(struct epitem) > 128);

    /* Allocates slab cache used to allocate "struct epitem" items */



    // The slab caches epoll uses are for allocating epitem and eppoll_entry respectively

    //Cache holding epitem objects
    epi_cache = kmem_cache_create("eventpoll_epi", sizeof(struct epitem),
            0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);


    /* Allocates slab cache used to allocate "struct eppoll_entry" */

    //Cache holding eppoll_entry objects
    pwq_cache = kmem_cache_create("eventpoll_pwq",
            sizeof(struct eppoll_entry), 0, SLAB_PANIC, NULL);

    return 0;
}

5 Kernel implementation of the epoll_create system call

Two system calls are involved, and both end up in epoll_create1. The call essentially does two things:
First, it builds the eventpoll object, which contains the red-black tree rbr used to store epitem objects (the structures that epoll_ctl later creates for monitored file descriptors), the wait queue wq, the ready list rdllist, and so on.
Second, it builds a file object, in keeping with the everything-is-a-file philosophy, and hangs the eventpoll object ep on the file object's private_data field.


/*
 * Open an eventpoll file descriptor.
 */
SYSCALL_DEFINE1(epoll_create1, int, flags)
{
    int error, fd;
    //Each epoll_create yields an eventpoll object, representing one small group in the epoll monitoring scheme

    //The eventpoll holds the red-black tree (root) used to manage all monitored fds

    //The eventpoll holds the ready list of items whose events have fired

    //The eventpoll holds a wait_queue_head_t wq: when a process blocks in epoll_wait (rdllist empty),
    //it is hung on this queue; when rdllist becomes non-empty, the processes on the queue are woken

    struct eventpoll *ep = NULL;

    struct file *file;

    /* Check the EPOLL_* constant for consistency.  */
    BUILD_BUG_ON(EPOLL_CLOEXEC != O_CLOEXEC);

    if (flags & ~EPOLL_CLOEXEC)
        return -EINVAL;
    /*
     * Create the internal data structure ("struct eventpoll").
     */

    //The main data structure behind each epoll fd (epfd)

    //Allocate and initialize ep; unlike epitem this structure is not allocated often, so no slab cache is used
    error = ep_alloc(&ep);
    if (error < 0)
        return error;
    /*
     * Creates all the items needed to setup an eventpoll file. That is,
     * a file structure and a free file descriptor.
     */

    //Call anon_inode_getfile to create a new file instance; an epoll instance can thus be viewed as an (anonymous) file, which is why epoll_create returns an fd.

    fd = get_unused_fd_flags(O_RDWR | (flags & O_CLOEXEC));
    if (fd < 0) {
        error = fd;
        goto out_free_ep;
    }

    //All fds managed by this epoll live in the big eventpoll structure (its red-black tree),

    //Give the file epoll's file-operations table, and hang the eventpoll object on the file's private_data (sys_epoll_ctl will fetch it from there)
    file = anon_inode_getfile("[eventpoll]", &eventpoll_fops, ep,
                 O_RDWR | (flags & O_CLOEXEC));

    if (IS_ERR(file)) {
        error = PTR_ERR(file);
        goto out_free_fd;
    }
    ep->file = file;

    //Install the file object into the current process's file descriptor table
    fd_install(fd, file);
    return fd;

out_free_fd:
    put_unused_fd(fd);
out_free_ep:
    ep_free(ep);
    return error;
}

6 Kernel implementation of the epoll_ctl system call

It mainly does two things:
First, it builds an epitem and hangs it on the epoll red-black tree.
Second, it builds an eppoll_entry and hangs it on the device wait queue; the eppoll_entry carries the callback function ep_poll_callback and the corresponding epitem.


/*
 * The following function implements the controller interface for
 * the eventpoll file that enables the insertion/removal/change of
 * file descriptors inside the interest set.
 */
SYSCALL_DEFINE4(epoll_ctl, int, epfd, int, op, int, fd,
        struct epoll_event __user *, event)
{
    int error;
    int full_check = 0;
    struct fd f, tf;
    //The main data structure behind each epoll fd (epfd)
    struct eventpoll *ep;

    //When an fd is added, an epitem is created for it; this is the basic structure the kernel uses to manage epoll

    //The object built here is inserted into the eventpoll's red-black tree for centralized management
    struct epitem *epi;

    //Holds the event description passed in from user space
    struct epoll_event epds;

    //Presumably used for the nested-epoll loop checks
    struct eventpoll *tep = NULL;

    error = -EFAULT;

    //Validate the arguments and copy the user-space *event into epds.
    if (ep_op_has_event(op) &&
        copy_from_user(&epds, event, sizeof(struct epoll_event)))
        goto error_return;

    error = -EBADF;

    //Get the epoll instance's file object
    f = fdget(epfd);
    if (!f.file)
        goto error_return;

    /* Get the "struct file *" for the target file */

    //Get the monitored target's file object
    tf = fdget(fd);
    if (!tf.file)
        goto error_fput;

    /* The target file descriptor must support poll */
    error = -EPERM;

    //The target's file operations must provide a poll method
    if (!tf.file->f_op->poll)
        goto error_tgt_fput;

    /* Check if EPOLLWAKEUP is allowed */
    if (ep_op_has_event(op))
        ep_take_care_of_epollwakeup(&epds);

    /*
     * We have to check that the file structure underneath the file descriptor
     * the user passed to us _is_ an eventpoll file. And also we do not permit
     * adding an epoll file descriptor inside itself.
     */
    error = -EINVAL;
    if (f.file == tf.file || !is_file_epoll(f.file))
        goto error_tgt_fput;

    /*
     * epoll adds to the wakeup queue at EPOLL_CTL_ADD time only,
     * so EPOLLEXCLUSIVE is not allowed for a EPOLL_CTL_MOD operation.
     * Also, we do not currently supported nested exclusive wakeups.
     */
    if (epds.events & EPOLLEXCLUSIVE) {
        if (op == EPOLL_CTL_MOD)
            goto error_tgt_fput;
        if (op == EPOLL_CTL_ADD && (is_file_epoll(tf.file) ||
                (epds.events & ~EPOLLEXCLUSIVE_OK_BITS)))
            goto error_tgt_fput;
    }

    /*
     * At this point it is safe to assume that the "private_data" contains
     * our own data structure.
     */

    //Stored there at create time (by anon_inode_getfile); fetched back here.
    ep = f.file->private_data;

    /*
     * When we insert an epoll file descriptor, inside another epoll file
     * descriptor, there is the change of creating closed loops, which are
     * better be handled here, than in more critical paths. While we are
     * checking for loops we also determine the list of files reachable
     * and hang them on the tfile_check_list, so we can check that we
     * haven't created too many possible wakeup paths.
     *
     * We do not need to take the global 'epumutex' on EPOLL_CTL_ADD when
     * the epoll file descriptor is attaching directly to a wakeup source,
     * unless the epoll file descriptor is nested. The purpose of taking the
     * 'epmutex' on add is to prevent complex toplogies such as loops and
     * deep wakeup paths from forming in parallel through multiple
     * EPOLL_CTL_ADD operations.
     */
    mutex_lock_nested(&ep->mtx, 0);
    if (op == EPOLL_CTL_ADD) {
        if (!list_empty(&f.file->f_ep_links) ||
                        is_file_epoll(tf.file)) {
            full_check = 1;
            mutex_unlock(&ep->mtx);
            mutex_lock(&epmutex);
            if (is_file_epoll(tf.file)) {
                error = -ELOOP;
                //An epoll fd is itself a file and can be watched by poll/select/epoll; if epoll instances watch each other, a loop could form.

                //In the epoll implementation, every function that might recurse is wrapped by the function ep_call_nested;
                //if a loop or excessive recursion depth shows up during the recursive calls, it breaks out and returns immediately.
                if (ep_loop_check(ep, tf.file) != 0) {
                    clear_tfile_check_list();
                    goto error_tgt_fput;
                }
            } else
                list_add(&tf.file->f_tfile_llink,
                            &tfile_check_list);
            mutex_lock_nested(&ep->mtx, 0);
            if (is_file_epoll(tf.file)) {
                tep = tf.file->private_data;
                mutex_lock_nested(&tep->mtx, 1);
            }
        }
    }

    /*
     * Try to lookup the file inside our RB tree, Since we grabbed "mtx"
     * above, we can be sure to be able to use the item looked up by
     * ep_find() till we release the mutex.
     */

    //Guard against duplicate additions (look up whether this fd already exists in ep's red-black tree)
    epi = ep_find(ep, tf.file, fd);

    error = -EINVAL;
    switch (op) {
    //Add one more fd to monitor
    case EPOLL_CTL_ADD:

        //Only add it if it was not found
        if (!epi) {
            //POLLERR and POLLHUP are always included by default
            epds.events |= POLLERR | POLLHUP;

             //Insert the epitem for this fd into ep's red-black tree.

             //This also ends up calling ep_ptable_queue_proc, which registers the callback ep_poll_callback

             //When the event fires, ep_poll_callback is invoked and inserts the epitem into ep's ready list rdllist
            error = ep_insert(ep, &epds, tf.file, fd, full_check);
        } else

          //Duplicate addition (the fd was found to already exist in ep's red-black tree)
            error = -EEXIST;
        if (full_check)
            clear_tfile_check_list();
        break;
    case EPOLL_CTL_DEL:
        if (epi)
            error = ep_remove(ep, epi);
        else
            error = -ENOENT;
        break;
    case EPOLL_CTL_MOD:
        if (epi) {
            if (!(epi->event.events & EPOLLEXCLUSIVE)) {
                epds.events |= POLLERR | POLLHUP;
                error = ep_modify(ep, epi, &epds);
            }
        } else
            error = -ENOENT;
        break;
    }
    if (tep != NULL)
        mutex_unlock(&tep->mtx);
    mutex_unlock(&ep->mtx);

error_tgt_fput:
    if (full_check)
        mutex_unlock(&epmutex);

    fdput(tf);
error_fput:
    fdput(f);
error_return:

    return error;
}

Let us look at ep_insert, together with ep_ptable_queue_proc (which it calls indirectly):

/*
 * Must be called with "mtx" held.
 */
static int ep_insert(struct eventpoll *ep, struct epoll_event *event,
             struct file *tfile, int fd, int full_check)
{
    int error, revents, pwake = 0;
    unsigned long flags;
    long user_watches;

     //When an fd is added, an epitem is created for it; this is the basic structure the kernel uses to manage epoll
    struct epitem *epi;

    //Initialize an ep_pqueue, which registers the wait-queue callback; the poll call then runs the registered callback to add a wait-queue node to the appropriate wait queue
    struct ep_pqueue epq;

    user_watches = atomic_long_read(&ep->user->epoll_watches);
    if (unlikely(user_watches >= max_user_watches))
        return -ENOSPC;

    //Allocate an instance from the slab cache
    //(the cache was initialized in eventpoll_init)
    if (!(epi = kmem_cache_alloc(epi_cache, GFP_KERNEL)))
        return -ENOMEM;

    /* Item initialization follow here ... */
    INIT_LIST_HEAD(&epi->rdllink);
    INIT_LIST_HEAD(&epi->fllink);
    INIT_LIST_HEAD(&epi->pwqlist);

    //The owning eventpoll (many epitems belong to one eventpoll)
    epi->ep = ep;
    ep_set_ffd(&epi->ffd, tfile, fd);

    //The registered events of interest, i.e. the epoll_event from user space
    epi->event = *event;

    //Number of active wait queues attached by poll operations
    epi->nwait = 0;

    //Used for the ovflist chain in the owning eventpoll
    epi->next = EP_UNACTIVE_PTR;
    if (epi->event.events & EPOLLWAKEUP) {
        error = ep_create_wakeup_source(epi);
        if (error)
            goto error_create_wakeup_source;
    } else {
        RCU_INIT_POINTER(epi->ws, NULL);
    }

    /* Initialize the poll table using the queue callback */
    epq.epi = epi;

    //Install the poll callback

    //(its implementation, ep_ptable_queue_proc, is shown below)

    init_poll_funcptr(&epq.pt, ep_ptable_queue_proc);

    /*
     * Attach the item to the poll hooks and get current event bits.
     * We can safely use the file* here because its usage count has
     * been increased by the caller of this function. Note that after
     * this operation completes, the poll callback can start hitting
     * the new item.
     */


    //  Call the poll function to fetch the current event bits; in reality this is how the registered
    //  ep_ptable_queue_proc gets invoked (from inside poll_wait)

    //  Taking TCP as an example, a state change triggers: sock_def_wakeup (set on the sock by sock_init_data)
    //  ---> wake_up_interruptible_all --> __wake_up ---> curr->func (which, for an fd added to epoll, is ep_poll_callback)

    // sock_poll --> tcp_poll --> sock_poll_wait --> poll_wait, which finally calls ep_ptable_queue_proc

    //That adds the wait-queue element to the socket's wait queue,

    //and also adds it to the epitem's own list of poll wait queues (files managed by the same epoll fd)

    //When the event fires, ep_poll_callback will be called back
    revents = ep_item_poll(epi, &epq.pt);

    /*
     * We have to check if something went wrong during the poll wait queue
     * install process. Namely an allocation for a wait queue failed due
     * high memory pressure.
     */
    error = -ENOMEM;
    if (epi->nwait < 0)
        goto error_unregister;

    /* Add the current item to the list of active epoll hook for this file */
    spin_lock(&tfile->f_lock);
    list_add_tail_rcu(&epi->fllink, &tfile->f_ep_links);
    spin_unlock(&tfile->f_lock);

    /*
     * Add the current item to the RB tree. All RB tree operations are
     * protected by "mtx", and ep_insert() is called with "mtx" held.
     */

    //Insert this epi into ep's red-black tree

    ep_rbtree_insert(ep, epi);

    /* now check if we've created too many backpaths */
    error = -EINVAL;
    if (full_check && reverse_path_check())
        goto error_remove_epi;

    /* We have to drop the new item inside our item list to keep track of it */
    spin_lock_irqsave(&ep->lock, flags);

    /* If the file is already "ready" we drop it inside the ready list */
    if ((revents & event->events) && !ep_is_linked(&epi->rdllink)) {
        list_add_tail(&epi->rdllink, &ep->rdllist);
        ep_pm_stay_awake(epi);

        /* Notify waiting tasks that events are available */
        if (waitqueue_active(&ep->wq))
            wake_up_locked(&ep->wq);
        if (waitqueue_active(&ep->poll_wait))
            pwake++;
    }

    spin_unlock_irqrestore(&ep->lock, flags);

    atomic_long_inc(&ep->user->epoll_watches);

    /* We have to call this outside the lock */
    if (pwake)
        ep_poll_safewake(&ep->poll_wait);

    return 0;

error_remove_epi:
    spin_lock(&tfile->f_lock);
    list_del_rcu(&epi->fllink);
    spin_unlock(&tfile->f_lock);

    rb_erase(&epi->rbn, &ep->rbr);

error_unregister:
    ep_unregister_pollwait(ep, epi);

    /*
     * We need to do this because an event could have been arrived on some
     * allocated wait queue. Note that we don't care about the ep->ovflist
     * list, since that is used/cleaned only inside a section bound by "mtx".
     * And ep_insert() is called with "mtx" held.
     */
    spin_lock_irqsave(&ep->lock, flags);
    if (ep_is_linked(&epi->rdllink))
        list_del_init(&epi->rdllink);
    spin_unlock_irqrestore(&ep->lock, flags);

    wakeup_source_unregister(ep_wakeup_source(epi));

error_create_wakeup_source:
    kmem_cache_free(epi_cache, epi);

    return error;
}

static inline unsigned int ep_item_poll(struct epitem *epi, poll_table *pt)
{
    pt->_key = epi->event.events;

    //epi->ffd (struct epoll_filefd) holds the monitored file descriptor's information

    //file->f_op is socket_file_ops, which forwards to sock->ops

    //For a TCP socket, inet_create sets this to inet_stream_ops

    //so epi->ffd.file->f_op->poll is tcp_poll

    //and poll_wait will end up calling ep_ptable_queue_proc
    return epi->ffd.file->f_op->poll(epi->ffd.file, pt) & epi->event.events;
}

/*
 * This is the callback that is used to add our wait queue to the
 * target file wakeup lists.
 */

//tcp_poll --> poll_wait
static void ep_ptable_queue_proc(struct file *file, wait_queue_head_t *whead,
                 poll_table *pt)
{

    //pt actually lives inside an ep_pqueue that also carries the epitem; ep_item_poll passed in that ep_pqueue's poll_table
    struct epitem *epi = ep_item_from_epqueue(pt);

    //The element placed on the device wait queue
    struct eppoll_entry *pwq;

    if (epi->nwait >= 0 && (pwq = kmem_cache_alloc(pwq_cache, GFP_KERNEL))) {

        //Initialize the wait-queue element; its callback is ep_poll_callback

        //When hardware data arrives, the interrupt handler wakes the processes waiting on this
        //queue and calls the wake-up function ep_poll_callback (ep_poll_callback: when an event
        //is triggered on the fd, it links the epitem's rdllink node into the ready list,
        //epfd->file->eventpoll->rdllist)
        init_waitqueue_func_entry(&pwq->wait, ep_poll_callback);
        pwq->whead = whead;
        pwq->base = epi;

        //Add the wait-queue element to the socket's wait queue
        //The EPOLLEXCLUSIVE flag arrived in kernel 4.5; it mitigates the "thundering herd" problem caused by adding one fd to several epoll instances
        if (epi->event.events & EPOLLEXCLUSIVE)
            add_wait_queue_exclusive(whead, &pwq->wait);
        else
            add_wait_queue(whead, &pwq->wait);

        //Also add the wait-queue element to the epitem's own list of poll wait queues (files managed by the same epoll fd)
        list_add_tail(&pwq->llink, &epi->pwqlist);
        epi->nwait++;
    } else {
        /* We have to signal that an error occurred */
        epi->nwait = -1;
    }
}

As we can see, ep_insert builds the epitem object (which is then added to the epoll red-black tree), and then, via the call chain ep_item_poll --> sock_poll --> tcp_poll --> sock_poll_wait --> poll_wait, finally invokes ep_ptable_queue_proc (shown above), which wraps the epitem in an eppoll_entry and hangs it on the device wait queue.

When hardware data arrives, the interrupt handler wakes the processes waiting on the device's wait queue and, in doing so, invokes the callback ep_poll_callback:

/*
 * This is the callback that is passed to the wait queue wakeup
 * mechanism. It is called by the stored file descriptors when they
 * have events to report.
 */
//This is the function called back when an event fires; it is registered in ep_ptable_queue_proc
static int ep_poll_callback(wait_queue_t *wait, unsigned mode, int sync, void *key)
{
    int pwake = 0;
    unsigned long flags;
    struct epitem *epi = ep_item_from_wait(wait);
    struct eventpoll *ep = epi->ep;
    int ewake = 0;

    if ((unsigned long)key & POLLFREE) {
        ep_pwq_from_wait(wait)->whead = NULL;
        /*
         * whead = NULL above can race with ep_remove_wait_queue()
         * which can do another remove_wait_queue() after us, so we
         * can't use __remove_wait_queue(). whead->lock is held by
         * the caller.
         */
        list_del_init(&wait->task_list);
    }

    spin_lock_irqsave(&ep->lock, flags);

    /*
     * If the event mask does not contain any poll(2) event, we consider the
     * descriptor to be disabled. This condition is likely the effect of the
     * EPOLLONESHOT bit that disables the descriptor when an event is received,
     * until the next EPOLL_CTL_MOD will be issued.
     */
    if (!(epi->event.events & ~EP_PRIVATE_BITS))
        goto out_unlock;

    /*
     * Check the events coming with the callback. At this stage, not
     * every device reports the events in the "key" parameter of the
     * callback. We need to be able to handle both cases here, hence the
     * test for "key" != NULL before the event match test.
     */
    if (key && !((unsigned long) key & epi->event.events))
        goto out_unlock;

    /*
     * If we are transferring events to userspace, we can hold no locks
     * (because we're accessing user memory, and because of linux f_op->poll()
     * semantics). All the events that happen during that period of time are
     * chained in ep->ovflist and requeued later on.
     */
    if (unlikely(ep->ovflist != EP_UNACTIVE_PTR)) {
        if (epi->next == EP_UNACTIVE_PTR) {
            epi->next = ep->ovflist;
            ep->ovflist = epi;
            if (epi->ws) {
                /*
                 * Activate ep->ws since epi->ws may get
                 * deactivated at any time.
                 */
                __pm_stay_awake(ep->ws);
            }

        }
        goto out_unlock;
    }

    /* If this file is already in the ready list we exit soon */

    //Add the epitem to the ready list
    if (!ep_is_linked(&epi->rdllink)) {
        list_add_tail(&epi->rdllink, &ep->rdllist);
        ep_pm_stay_awake_rcu(epi);
    }

    /*
     * Wake up ( if active ) both the eventpoll wait list and the ->poll()
     * wait list.
     */
     //Wake up the processes that blocked in epoll_wait and went to sleep
    if (waitqueue_active(&ep->wq)) {
        if ((epi->event.events & EPOLLEXCLUSIVE) &&
                    !((unsigned long)key & POLLFREE)) {
            switch ((unsigned long)key & EPOLLINOUT_BITS) {
            case POLLIN:
                if (epi->event.events & POLLIN)
                    ewake = 1;
                break;
            case POLLOUT:
                if (epi->event.events & POLLOUT)
                    ewake = 1;
                break;
            case 0:
                ewake = 1;
                break;
            }
        }
        wake_up_locked(&ep->wq);
    }
    if (waitqueue_active(&ep->poll_wait))
        pwake++;

out_unlock:
    spin_unlock_irqrestore(&ep->lock, flags);

    /* We have to call this outside the lock */
    if (pwake)
        ep_poll_safewake(&ep->poll_wait);

    if (epi->event.events & EPOLLEXCLUSIVE)
        return ewake;

    return 1;
}

ep_poll_callback adds the epitem to the ready list and wakes up the processes that were blocked in epoll_wait.

7 Kernel implementation of the epoll_wait system call

When a device becomes ready, the processes on its wait queue are woken; at the same time the eppoll_entry elements on the queue have their callbacks invoked, which for epoll means ep_poll_callback.

What epoll_wait itself does is check whether the ready list is empty: if it is empty, sleep; if not, hand back at most the requested number of ready event objects.


/*
 * Implement the event wait interface for the eventpoll file. It is the kernel
 * part of the user space epoll_wait(2).
 */
SYSCALL_DEFINE4(epoll_wait, int, epfd, struct epoll_event __user *, events,
        int, maxevents, int, timeout)
{
    int error;
    struct fd f;
    struct eventpoll *ep;

    /* The maximum number of event must be greater than zero */


    if (maxevents <= 0 || maxevents > EP_MAX_EVENTS)
        return -EINVAL;

    /* Verify that the area passed by the user is writeable */

    //Check that the events buffer passed in from user space is writable; see __range_not_ok().
    if (!access_ok(VERIFY_WRITE, events, maxevents * sizeof(struct epoll_event)))
        return -EFAULT;

    /* Get the "struct file *" for the eventpoll file */

    //Fetch the file instance of the eventpoll file behind epfd; the file structure was created in epoll_create.
    f = fdget(epfd);

    if (!f.file)
        return -EBADF;

    /*
     * We have to check that the file structure underneath the fd
     * the user passed to us _is_ an eventpoll file.
     */
    error = -EINVAL;

    //Decide whether epfd is an eventpoll file by checking that its file operations are eventpoll_fops; if not, return the EINVAL error.
    if (!is_file_epoll(f.file))
        goto error_fput;

    /*
     * At this point it is safe to assume that the "private_data" contains
     * our own data structure.
     */

    //Get the eventpoll object
    ep = f.file->private_data;

    /* Time to fish for events ... */

    //The function that does the actual work
    error = ep_poll(ep, events, maxevents, timeout);

error_fput:
    fdput(f);
    return error;
}

Now let us look at the function that does the real work behind epoll_wait:

//Called by epoll_wait
static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
           int maxevents, long timeout)
{
    int res = 0, eavail, timed_out = 0;
    unsigned long flags;
    u64 slack = 0;
    wait_queue_t wait;
    ktime_t expires, *to = NULL;

//  timeout is given in milliseconds; ep_set_mstimeout converts it into an absolute timespec64 expiry, which is then turned into the ktime used by the high-resolution timer below.

    if (timeout > 0) {
        struct timespec64 end_time = ep_set_mstimeout(timeout);

        slack = select_estimate_accuracy(&end_time);
        to = &expires;
        *to = timespec64_to_ktime(end_time);
    } else if (timeout == 0) {
        /*
         * Avoid the unnecessary trip to the wait queue loop, if the
         * caller specified a non blocking operation.
         */
        timed_out = 1;
        spin_lock_irqsave(&ep->lock, flags);
        goto check_events;
    }

fetch_events:
    spin_lock_irqsave(&ep->lock, flags);


    //Essentially: check whether the ready list ep->rdllist is empty
    if (!ep_events_available(ep)) {
        /*
         * We don't have any available event to return to the caller.
         * We need to sleep here, and we will be wake up by
         * ep_poll_callback() when events will become available.
         */

        // No events are available, so we need to sleep; when events arrive, the sleep is ended by ep_poll_callback()

        init_waitqueue_entry(&wait, current);//put the current process into this wait-queue entry

        // Add the current process to the eventpoll's wait queue, and wait until some file becomes ready, the timeout expires, or a signal arrives.
        __add_wait_queue_exclusive(&ep->wq, &wait);

        for (;;) {
            /*
             * We don't want to sleep if the ep_poll_callback() sends us
             * a wakeup in between. That's why we set the task state
             * to TASK_INTERRUPTIBLE before doing the checks.
             */



            //A wake-up from ep_poll_callback() must actually wake us, so the task state must be the wakeable TASK_INTERRUPTIBLE
            set_current_state(TASK_INTERRUPTIBLE);

            //If the ready list is non-empty (some file is already ready) or we timed out, leave the loop
            if (ep_events_available(ep) || timed_out)
                break;
            if (signal_pending(current)) {
                //If the current process received a signal, leave the loop and return the EINTR error
                res = -EINTR;
                break;
            }

            spin_unlock_irqrestore(&ep->lock, flags);

            //Yield the processor and wait until ep_poll_callback() wakes us or the timeout expires; the return value is the remaining time.

            //From here on the current process sleeps, until some file becomes ready or the timeout expires.

            //When a file becomes ready, eventpoll's callback ep_poll_callback() wakes the processes on the wait queue ep->wq.
            if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS))
                timed_out = 1;

            spin_lock_irqsave(&ep->lock, flags);
        }

        __remove_wait_queue(&ep->wq, &wait);
        __set_current_state(TASK_RUNNING);
    }
check_events:
    /* Is it worth to try to dig for events ? */
    eavail = ep_events_available(ep);

    spin_unlock_irqrestore(&ep->lock, flags);

    /*
     * Try to transfer events to user space. In case we get 0 events and
     * there's still timeout left over, we go trying again in search of
     * more luck.
     */

    //If we were not interrupted by a signal, events are available, yet we obtained none (another process may have taken them), and we have not timed out, jump back to the fetch_events label and wait again for files to become ready.
    if (!res && eavail &&
        !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
        goto fetch_events;

    return res;
}

Next, let us look at ep_send_events and ep_scan_ready_list:

static int ep_send_events(struct eventpoll *ep,
              struct epoll_event __user *events, int maxevents)
{
    struct ep_send_events_data esed;

    esed.maxevents = maxevents;
    esed.events = events;

    return ep_scan_ready_list(ep, ep_send_events_proc, &esed, 0, false);
}
/**
 * ep_scan_ready_list - Scans the ready list in a way that makes possible for
 *                      the scan code, to call f_op->poll(). Also allows for
 *                      O(NumReady) performance.
 *
 * @ep: Pointer to the epoll private data structure.
 * @sproc: Pointer to the scan callback.
 * @priv: Private opaque data passed to the @sproc callback.
 * @depth: The current depth of recursive f_op->poll calls.
 * @ep_locked: caller already holds ep->mtx
 *
 * Returns: The same integer error code returned by the @sproc callback.
 */
static int ep_scan_ready_list(struct eventpoll *ep,
                  int (*sproc)(struct eventpoll *,
                       struct list_head *, void *),
                  void *priv, int depth, bool ep_locked)
{
    int error, pwake = 0;
    unsigned long flags;
    struct epitem *epi, *nepi;
    LIST_HEAD(txlist);

    /*
     * We need to lock this because we could be hit by
     * eventpoll_release_file() and epoll_ctl().
     */

    if (!ep_locked)
        mutex_lock_nested(&ep->mtx, depth);

    /*
     * Steal the ready list, and re-init the original one to the
     * empty list. Also, set ep->ovflist to NULL so that events
     * happening while looping w/out locks, are not lost. We cannot
     * have the poll callback to queue directly on ep->rdllist,
     * because we want the "sproc" callback to be able to do it
     * in a lockless way.
     */
    spin_lock_irqsave(&ep->lock, flags);
    list_splice_init(&ep->rdllist, &txlist);
    ep->ovflist = NULL;
    spin_unlock_irqrestore(&ep->lock, flags);

    /*
     * Now call the callback function.
     */

    //The scan callback --> ep_send_events_proc
    error = (*sproc)(ep, &txlist, priv);

    spin_lock_irqsave(&ep->lock, flags);
    /*
     * During the time we spent inside the "sproc" callback, some
     * other events might have been queued by the poll callback.
     * We re-insert them inside the main ready-list here.
     */
    for (nepi = ep->ovflist; (epi = nepi) != NULL;
         nepi = epi->next, epi->next = EP_UNACTIVE_PTR) {
        /*
         * We need to check if the item is already in the list.
         * During the "sproc" callback execution time, items are
         * queued into ->ovflist but the "txlist" might already
         * contain them, and the list_splice() below takes care of them.
         */
        if (!ep_is_linked(&epi->rdllink)) {
            list_add_tail(&epi->rdllink, &ep->rdllist);
            ep_pm_stay_awake(epi);
        }
    }
    /*
     * We need to set back ep->ovflist to EP_UNACTIVE_PTR, so that after
     * releasing the lock, events will be queued in the normal way inside
     * ep->rdllist.
     */
    ep->ovflist = EP_UNACTIVE_PTR;

    /*
     * Quickly re-inject items left on "txlist".
     */
    list_splice(&txlist, &ep->rdllist);
    __pm_relax(ep->ws);

    if (!list_empty(&ep->rdllist)) {
        /*
         * Wake up (if active) both the eventpoll wait list and
         * the ->poll() wait list (delayed after we release the lock).
         */
        if (waitqueue_active(&ep->wq))
            wake_up_locked(&ep->wq);
        if (waitqueue_active(&ep->poll_wait))
            pwake++;
    }
    spin_unlock_irqrestore(&ep->lock, flags);

    if (!ep_locked)
        mutex_unlock(&ep->mtx);

    /* We have to call this outside the lock */
    if (pwake)
        ep_poll_safewake(&ep->poll_wait);

    return error;
}

We can see that ep_send_events_proc is called to process each epitem:

static int ep_send_events_proc(struct eventpoll *ep, struct list_head *head,
                   void *priv)
{
    struct ep_send_events_data *esed = priv;
    int eventcnt;
    unsigned int revents;
    struct epitem *epi;
    struct epoll_event __user *uevent;
    struct wakeup_source *ws;
    poll_table pt;


    //Note that, unlike in ep_insert, the callback here is set to NULL
    init_poll_funcptr(&pt, NULL);

    /*
     * We can loop without lock because we are passed a task private list.
     * Items cannot vanish during the loop because ep_scan_ready_list() is
     * holding "mtx" during this call.
     */
    for (eventcnt = 0, uevent = esed->events;
         !list_empty(head) && eventcnt < esed->maxevents;) {
        epi = list_first_entry(head, struct epitem, rdllink);

        /*
         * Activate ep->ws before deactivating epi->ws to prevent
         * triggering auto-suspend here (in case we reactive epi->ws
         * below).
         *
         * This could be rearranged to delay the deactivation of epi->ws
         * instead, but then epi->ws would temporarily be out of sync
         * with ep_is_linked().
         */
        ws = ep_wakeup_source(epi);
        if (ws) {
            if (ws->active)
                __pm_stay_awake(ep->ws);
            __pm_relax(ws);
        }

        list_del_init(&epi->rdllink);

        //Note that, unlike in ep_insert, the callback is NULL rather than ep_ptable_queue_proc;
        //the point here is just to use tcp_poll to obtain the event bits and AND them with the events the user cares about
        revents = ep_item_poll(epi, &pt);

        /*
         * If the event mask intersect the caller-requested one,
         * deliver the event to userspace. Again, ep_scan_ready_list()
         * is holding "mtx", so no operations coming from userspace
         * can change the item.
         */

        //In LT mode the item is put back on the ready list only if an event was actually detected; in ET mode it is never put back here, events or not
        if (revents) {
            if (__put_user(revents, &uevent->events) ||
                __put_user(epi->event.data, &uevent->data)) {

                //If an error occurs during the copy, the scan of the list is interrupted
                //and the epitem that failed is re-inserted into the ready list.
                //The remaining unprocessed epitems are not dropped either: ep_scan_ready_list()
                //re-inserts them into the ready list as well
                list_add(&epi->rdllink, head);
                ep_pm_stay_awake(epi);
                return eventcnt ? eventcnt : -EFAULT;
            }
            eventcnt++;
            uevent++;
            if (epi->event.events & EPOLLONESHOT)
                epi->event.events &= EP_PRIVATE_BITS;

            //If not in ET mode, the epitem is added back to the rdllist ready list
            else if (!(epi->event.events & EPOLLET)) {
                /*
                 * If this file has been added with Level
                 * Trigger mode, we need to insert back inside
                 * the ready list, so that the next call to
                 * epoll_wait() will check again the events
                 * availability. At this point, no one can insert
                 * into ep->rdllist besides us. The epoll_ctl()
                 * callers are locked out by
                 * ep_scan_ready_list() holding "mtx" and the
                 * poll callback will queue them in ep->ovflist.
                 */

                //This single step is the whole difference between EPOLLET and non-ET: with ET, the epitem does not
                //go back onto the ready list unless the fd changes state again and ep_poll_callback runs. With non-ET,
                //whether or not there are still valid events or data, the item is re-inserted into the ready list, so the
                //next epoll_wait returns immediately and notifies user space; if the monitored fds really have no events
                //and no data left, that epoll_wait returns 0, an idle spin.
                list_add_tail(&epi->rdllink, &ep->rdllist);
                ep_pm_stay_awake(epi);
            }
        }
    }

    return eventcnt;
}

static inline unsigned int ep_item_poll(struct epitem *epi, poll_table *pt)
{
    pt->_key = epi->event.events;

    //epi->ffd (struct epoll_filefd) holds the monitored file descriptor's information
    //file->f_op is socket_file_ops, which forwards to sock->ops
    //For a TCP socket, inet_create sets this to inet_stream_ops
    //so epi->ffd.file->f_op->poll is tcp_poll
    //and poll_wait would end up calling ep_ptable_queue_proc (here the callback is NULL, so nothing is queued)

    //tcp_poll fills in the current event bits; ANDing with epi->event.events yields the events that occurred and are of interest
    return epi->ffd.file->f_op->poll(epi->ffd.file, pt) & epi->event.events;
}

For a TCP stream socket, tcp_poll is called to obtain the event bits (i.e. what happened), which are ANDed with the user's events of interest, epi->event.events. A non-zero result means an event the user cares about occurred; the epitems are then moved to txlist while rdllist is emptied. Finally, for an epoll in LT mode, the epitem is also put back onto rdllist, so the next epoll_wait notifies the user again, until no event of interest remains (the event bits drop to 0). That is the difference between ET and LT mode.
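
To close, a small self-contained demo (my own, not from the kernel source) that makes the LT/ET difference observable with a pipe whose data is deliberately left unread:

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

#define USE_ET 0   /* set to 1 to switch the registration to edge-triggered */

int main(void)
{
    int pfd[2];
    pipe(pfd);
    write(pfd[1], "x", 1);   /* make the read end readable and keep it that way */

    int epfd = epoll_create(1);
    struct epoll_event ev = { .events = EPOLLIN | (USE_ET ? EPOLLET : 0),
                              .data.fd = pfd[0] };
    epoll_ctl(epfd, EPOLL_CTL_ADD, pfd[0], &ev);

    for (int round = 0; round < 3; round++) {
        struct epoll_event out;
        int n = epoll_wait(epfd, &out, 1, 100);   /* 100 ms timeout */
        printf("round %d: %d event(s)\n", round, n);
        /* The byte is never read, so the fd stays readable: in LT mode every
         * round reports 1 event; in ET mode only round 0 does, because no new
         * edge (no new data) arrives afterwards. */
    }
    return 0;
}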
