Reposted from: http://blog.csdn.net/ear5cm/article/details/45093807
In "Android HardwareComposer中的fence机制" we discussed fences on the hwc side: hwc ultimately passes each layer's acquireFenceFd into the fb driver, and the fb driver produces a new retireFenceFd and returns it to user space. In this article we examine the fences inside the fb driver and see exactly what the S3CFB_WIN_CONFIG ioctl does.
Kernel source download: https://android.googlesource.com/kernel/exynos.git
Files referenced in this article:
exynos/include/linux/sync.h
exynos/drivers/base/sync.c
exynos/include/linux/sw_sync.h
exynos/drivers/base/sw_sync.c
exynos/drivers/video/s3c-fb.c
Before discussing the fences in the fb driver, let's briefly introduce a few fence-related data structures:
/**
 * struct sync_timeline - sync object
 * @kref:               reference count on fence.
 * @ops:                ops that define the implementation of the sync_timeline
 * @name:               name of the sync_timeline. Useful for debugging
 * @destroyed:          set when sync_timeline is destroyed
 * @child_list_head:    list of children sync_pts for this sync_timeline
 * @child_list_lock:    lock protecting @child_list_head, destroyed, and
 *                        sync_pt.status
 * @active_list_head:   list of active (unsignaled/errored) sync_pts
 * @sync_timeline_list: membership in global sync_timeline_list
 */
struct sync_timeline {
    struct kref             kref;
    const struct sync_timeline_ops *ops;
    char                    name[32];

    /* protected by child_list_lock */
    bool                    destroyed;

    struct list_head        child_list_head;
    spinlock_t              child_list_lock;

    struct list_head        active_list_head;
    spinlock_t              active_list_lock;

    struct list_head        sync_timeline_list;
};
sync_timeline contains child_list_head, a doubly linked list of sync_pts chained together via list_head.
/**
 * struct sync_pt - sync point
 * @parent:         sync_timeline to which this sync_pt belongs
 * @child_list:     membership in sync_timeline.child_list_head
 * @active_list:    membership in sync_timeline.active_list_head
 * @signaled_list:  membership in temporary signaled_list on stack
 * @fence:          sync_fence to which the sync_pt belongs
 * @pt_list:        membership in sync_fence.pt_list_head
 * @status:         1: signaled, 0: active, <0: error
 * @timestamp:      time which sync_pt status transitioned from active to
 *                    signaled or error.
 */
struct sync_pt {
    struct sync_timeline    *parent;
    struct list_head        child_list;

    struct list_head        active_list;
    struct list_head        signaled_list;

    struct sync_fence       *fence;
    struct list_head        pt_list;

    /* protected by parent->active_list_lock */
    int                     status;

    ktime_t                 timestamp;
};
In sync_pt, the parent pointer points to the sync_timeline this sync_pt belongs to, and child_list is its link in sync_timeline.child_list_head. Likewise, the fence pointer points to the fence this sync_pt belongs to, and pt_list is its link in fence.pt_list_head.
/**
 * struct sync_fence - sync fence
 * @file:               file representing this fence
 * @kref:               reference count on fence.
 * @name:               name of sync_fence. Useful for debugging
 * @pt_list_head:       list of sync_pts in this fence. immutable once fence
 *                        is created
 * @waiter_list_head:   list of asynchronous waiters on this fence
 * @waiter_list_lock:   lock protecting @waiter_list_head and @status
 * @status:             1: signaled, 0: active, <0: error
 *
 * @wq:                 wait queue for fence signaling
 * @sync_fence_list:    membership in global fence list
 */
struct sync_fence {
    struct file             *file;
    struct kref             kref;
    char                    name[32];

    /* this list is immutable once the fence is created */
    struct list_head        pt_list_head;

    struct list_head        waiter_list_head;
    spinlock_t              waiter_list_lock; /* also protects status */
    int                     status;

    wait_queue_head_t       wq;

    struct list_head        sync_fence_list;
};
The file pointer is the file that represents this fence (in Linux, everything is a file). pt_list_head is a doubly linked list of the sync_pts in this fence, chained via list_head.
The relationship between sync_timeline, sync_pt and sync_fence can be summarized as follows: a sync_timeline manages all of the sync_pts on that timeline and decides when each sync_pt gets signaled. A sync_fence contains one or more sync_pts, and once every sync_pt in a sync_fence has been signaled, the sync_fence itself is signaled.
sync_timeline and sync_pt are a bit like virtual classes, though: to actually use them you "subclass" them and implement the interface defined by their sync_timeline_ops *ops. s3c-fb.c uses sw_sync_timeline and sw_sync_pt, defined as:
struct sw_sync_timeline {
    struct sync_timeline    obj;

    u32                     value;
};

struct sw_sync_pt {
    struct sync_pt          pt;

    u32                     value;
};
sw_sync_timeline and sw_sync_pt are about as simple as it gets: each just adds a u32 value on top of the underlying sync_timeline and sync_pt. In addition, sw_sync exposes a few new APIs for them:
struct sw_sync_timeline *sw_sync_timeline_create(const char *name);
void sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc);
struct sync_pt *sw_sync_pt_create(struct sw_sync_timeline *obj, u32 value);
All three of these APIs are used in s3c-fb.c; we'll dig into each of them when we reach the corresponding code.
Now let's move into s3c-fb.c and see how sw_sync_timeline, sw_sync_pt and sync_fence are actually used. First, s3c_fb defines a few members related to fence handling:
struct s3c_fb {
    ...
    struct fb_info          *fbinfo;
    struct list_head        update_regs_list;
    struct mutex            update_regs_list_lock;
    struct kthread_worker   update_regs_worker;
    struct task_struct      *update_regs_thread;
    struct kthread_work     update_regs_work;

    struct sw_sync_timeline *timeline;
    int                     timeline_max;
    ...
};
s3c-fb performs the actual display of buffers on a separate kthread. The members
struct list_head update_regs_list;
struct mutex update_regs_list_lock;
struct kthread_worker update_regs_worker;
struct task_struct *update_regs_thread;
struct kthread_work update_regs_work;
are all kthread-related structs, initialized at probe time:
static struct platform_driver s3c_fb_driver = {
    .probe      = s3c_fb_probe,
    .remove     = __devexit_p(s3c_fb_remove),
    .id_table   = s3c_fb_driver_ids,
    .driver     = {
        .name   = "s3c-fb",
        .owner  = THIS_MODULE,
        .pm     = &s3cfb_pm_ops,
    },
};

static int __devinit s3c_fb_probe(struct platform_device *pdev)
{
    ...
    INIT_LIST_HEAD(&sfb->update_regs_list);
    mutex_init(&sfb->update_regs_list_lock);
    init_kthread_worker(&sfb->update_regs_worker);

    sfb->update_regs_thread = kthread_run(kthread_worker_fn,
            &sfb->update_regs_worker, "s3c-fb");
    if (IS_ERR(sfb->update_regs_thread)) {
        int err = PTR_ERR(sfb->update_regs_thread);
        sfb->update_regs_thread = NULL;

        dev_err(dev, "failed to run update_regs thread\n");
        return err;
    }
    init_kthread_work(&sfb->update_regs_work, s3c_fb_update_regs_handler);

    sfb->timeline = sw_sync_timeline_create("s3c-fb");
    sfb->timeline_max = 1;
    ...
}
From this code we can see that the kthread's work is ultimately done by s3c_fb_update_regs_handler; we won't go into the kthread details here. What matters for us is timeline = sw_sync_timeline_create(), with timeline_max initialized to 1.
struct sw_sync_timeline *sw_sync_timeline_create(const char *name)
{
    struct sw_sync_timeline *obj = (struct sw_sync_timeline *)
        sync_timeline_create(&sw_sync_timeline_ops,
                             sizeof(struct sw_sync_timeline),
                             name);

    return obj;
}

struct sync_timeline_ops sw_sync_timeline_ops = {
    .driver_name = "sw_sync",
    .dup = sw_sync_pt_dup,
    .has_signaled = sw_sync_pt_has_signaled,
    .compare = sw_sync_pt_compare,
    .fill_driver_data = sw_sync_fill_driver_data,
    .timeline_value_str = sw_sync_timeline_value_str,
    .pt_value_str = sw_sync_pt_value_str,
};
sw_sync_timeline_create passes sw_sync_timeline_ops into the "base-class" constructor sync_timeline_create. We'll discuss what each function in sw_sync_timeline_ops does as we encounter it.
When hwc sends all of the layer information into the fb driver via the S3CFB_WIN_CONFIG ioctl (see "Android HardwareComposer中的fence机制" for the details of that path):
static int s3c_fb_ioctl(struct fb_info *info, unsigned int cmd,
            unsigned long arg)
{
    ...
    case S3CFB_WIN_CONFIG:
        if (copy_from_user(&p.win_data,
                   (struct s3c_fb_win_config_data __user *)arg,
                   sizeof(p.win_data))) {
            ret = -EFAULT;
            break;
        }

        ret = s3c_fb_set_win_config(sfb, &p.win_data);
        if (ret)
            break;

        if (copy_to_user((struct s3c_fb_win_config_data __user *)arg,
                 &p.win_data,
                 sizeof(p.user_ion_client))) {
            ret = -EFAULT;
            break;
        }
        break;
    ...
}
Both copy_from_user and copy_to_user operate on data of type s3c_fb_win_config_data; only the size written back to user space changes, because the only thing user space is interested in is the fenceFd.
static int s3c_fb_set_win_config(struct s3c_fb *sfb,
        struct s3c_fb_win_config_data *win_data)
{
    struct s3c_fb_win_config *win_config = win_data->config;
    int ret = 0;
    unsigned short i;
    struct s3c_reg_data *regs;
    /* this fence is the retireFence to be returned to user space */
    struct sync_fence *fence;
    struct sync_pt *pt;
    /* this fd will be bound to the fence above */
    int fd;
    unsigned int bw = 0;

    fd = get_unused_fd();
    if (fd < 0)
        return fd;

    mutex_lock(&sfb->output_lock);

    regs = kzalloc(sizeof(struct s3c_reg_data), GFP_KERNEL);
    ...
    ret = s3c_fb_set_win_buffer(sfb, win, config, regs);
    ...
    mutex_lock(&sfb->update_regs_list_lock);
    /* timeline_max starts out at 1 */
    sfb->timeline_max++;
    /* create a new sw_sync_pt on the sw_sync_timeline at timeline_max */
    pt = sw_sync_pt_create(sfb->timeline, sfb->timeline_max);
    /* build a fence around that pt */
    fence = sync_fence_create("display", pt);
    /* install the fence into a file referenced by fd; from now on,
     * operations on fd are really operations on the fence */
    sync_fence_install(fence, fd);
    /* store fd in win_data->fence, which will be copied back to
     * user space */
    win_data->fence = fd;

    /* the actual display of the buffer data is handed off to the kthread */
    list_add_tail(&regs->list, &sfb->update_regs_list);
    mutex_unlock(&sfb->update_regs_list_lock);
    queue_kthread_work(&sfb->update_regs_worker,
            &sfb->update_regs_work);

    mutex_unlock(&sfb->output_lock);
    return ret;
}
The key points are annotated in the code. We'll come back to s3c_fb_set_win_buffer and the kthread's work later; first let's look at how the fence that will be returned to user space is created.
struct sync_pt *sw_sync_pt_create(struct sw_sync_timeline *obj, u32 value)
{
    struct sw_sync_pt *pt;

    pt = (struct sw_sync_pt *)
        sync_pt_create(&obj->obj, sizeof(struct sw_sync_pt));

    pt->value = value;

    return (struct sync_pt *)pt;
}

struct sync_pt *sync_pt_create(struct sync_timeline *parent, int size)
{
    struct sync_pt *pt;

    if (size < sizeof(struct sync_pt))
        return NULL;

    pt = kzalloc(size, GFP_KERNEL);
    if (pt == NULL)
        return NULL;

    INIT_LIST_HEAD(&pt->active_list);
    kref_get(&parent->kref);
    sync_timeline_add_pt(parent, pt);

    return pt;
}
sw_sync_pt_create saves value, i.e. timeline_max, into pt->value, then calls the "base-class constructor" sync_pt_create, which allocates space for the pt and adds it to the timeline's child_list_head. At this point the newly created sw_sync_pt->value == timeline_max. Note that since the pt's memory comes from kzalloc, pt->status is 0; per the struct definition above (1: signaled, 0: active, <0: error), the pt is currently in the active state.
struct sync_fence *sync_fence_create(const char *name, struct sync_pt *pt)
{
    struct sync_fence *fence;

    if (pt->fence)
        return NULL;

    /* sync_fence_alloc() allocates a file for the fence via
     * anon_inode_getfile(); that file is used later by
     * sync_fence_install() */
    fence = sync_fence_alloc(name);
    if (fence == NULL)
        return NULL;

    /* save the fence pointer into pt->fence */
    pt->fence = fence;
    /* add the pt to the fence's pt_list_head list */
    list_add(&pt->pt_list, &fence->pt_list_head);
    /* add the pt to the timeline's active_list_head list */
    sync_pt_activate(pt);

    /* immediately check whether the pt is already signaled; if every
     * pt in the fence is signaled, the fence itself must be signaled */
    sync_fence_signal_pt(pt);

    return fence;
}
There are two key calls here, sync_pt_activate and sync_fence_signal_pt. Let's take them one at a time, starting with sync_pt_activate.
static void sync_pt_activate(struct sync_pt *pt)
{
    struct sync_timeline *obj = pt->parent;
    unsigned long flags;
    int err;

    spin_lock_irqsave(&obj->active_list_lock, flags);

    err = _sync_pt_has_signaled(pt);
    if (err != 0)
        goto out;

    list_add_tail(&pt->active_list, &obj->active_list_head);

out:
    spin_unlock_irqrestore(&obj->active_list_lock, flags);
}
Under the spinlock it calls _sync_pt_has_signaled; if err == 0, i.e. the pt's status is active, the pt is added to the timeline's active_list_head. Let's see how _sync_pt_has_signaled determines the pt's status.
static int _sync_pt_has_signaled(struct sync_pt *pt)
{
    int old_status = pt->status;

    /* this calls parent->ops->has_signaled, i.e. sw_sync_timeline's
     * has_signaled; recall that sw_sync_timeline_create() passed
     * sw_sync_timeline_ops, in which
     * .has_signaled = sw_sync_pt_has_signaled */
    if (!pt->status)
        pt->status = pt->parent->ops->has_signaled(pt);

    if (!pt->status && pt->parent->destroyed)
        pt->status = -ENOENT;

    if (pt->status != old_status)
        pt->timestamp = ktime_get();

    return pt->status;
}

static int sw_sync_pt_has_signaled(struct sync_pt *sync_pt)
{
    struct sw_sync_pt *pt = (struct sw_sync_pt *)sync_pt;
    struct sw_sync_timeline *obj =
        (struct sw_sync_timeline *)sync_pt->parent;

    return sw_sync_cmp(obj->value, pt->value) >= 0;
}

static int sw_sync_cmp(u32 a, u32 b)
{
    if (a == b)
        return 0;

    return ((s32)a - (s32)b) < 0 ? -1 : 1;
}
sw_sync_cmp compares the sw_sync_timeline's value against the sync_pt's value. As analyzed above, the sw_sync_timeline was created with value 0 and timeline_max was initialized to 1; timeline_max++ runs before sw_sync_pt_create, so by now timeline_max is 2, which means the sw_sync_pt's value is 2. Note that sw_sync_pt_has_signaled does not return sw_sync_cmp's return value directly, but the result of comparing it against 0: return sw_sync_cmp(obj->value, pt->value) >= 0;
If the two values are equal, sw_sync_cmp returns 0 and sw_sync_pt_has_signaled returns 1.
If the timeline's value is greater than the pt's value, sw_sync_cmp returns 1 and sw_sync_pt_has_signaled returns 1.
If the timeline's value is less than the pt's value, sw_sync_cmp returns -1 and sw_sync_pt_has_signaled returns 0.
Since in our case timeline->value == 0 and pt->value == 2, sw_sync_pt_has_signaled returns 0 here: the pt is in the active state, not signaled, and is therefore added to the timeline's active_list_head list.
Having covered the first key call, sync_pt_activate, let's move on to the second, sync_fence_signal_pt.
static void sync_fence_signal_pt(struct sync_pt *pt)
{
    LIST_HEAD(signaled_waiters);
    struct sync_fence *fence = pt->fence;
    struct list_head *pos;
    struct list_head *n;
    unsigned long flags;
    int status;

    /* this function is examined below */
    status = sync_fence_get_status(fence);

    spin_lock_irqsave(&fence->waiter_list_lock, flags);

    /*
     * this should protect against two threads racing on the signaled
     * false -> true transition
     */
    /* if status is non-zero, the fence is signaled or errored; under
     * the spinlock, move the sync_fence_waiters from the fence's
     * waiter_list_head over to the signaled_waiters list */
    if (status && !fence->status) {
        list_for_each_safe(pos, n, &fence->waiter_list_head)
            list_move(pos, &signaled_waiters);

        /* update the fence's status */
        fence->status = status;
    } else {
        status = 0;
    }

    spin_unlock_irqrestore(&fence->waiter_list_lock, flags);

    if (status) {
        /* walk signaled_waiters, removing each waiter from the list
         * and invoking its callback */
        list_for_each_safe(pos, n, &signaled_waiters) {
            struct sync_fence_waiter *waiter =
                container_of(pos, struct sync_fence_waiter,
                         waiter_list);

            list_del(pos);
            waiter->callback(fence, waiter);
        }
        /* wake up the wait_queue_head_t */
        wake_up(&fence->wq);
    }
}
The rest is annotated in the code; let's look at sync_fence_get_status.
static int sync_fence_get_status(struct sync_fence *fence)
{
    struct list_head *pos;
    /* default to signaled */
    int status = 1;

    list_for_each(pos, &fence->pt_list_head) {
        struct sync_pt *pt = container_of(pos, struct sync_pt, pt_list);
        int pt_status = pt->status;

        if (pt_status < 0) {
            /* if any pt's status is an error, the whole fence's
             * status is that error */
            status = pt_status;
            break;
        } else if (status == 1) {
            /* if any pt is still active, it overrides the default;
             * keep walking until an error or the end of the list */
            status = pt_status;
        }
    }

    return status;
}
Since our pt is currently active and it is the only pt in the fence, the fence is active as well. That completes sync_fence_create; next let's look at sync_fence_install.
void sync_fence_install(struct sync_fence *fence, int fd)
{
    fd_install(fd, fence->file);
}
This simply associates the fence's file with the fd via fd_install.
At this point we have created a sync_pt, added it to the timeline, and built a sync_fence from that pt; the newest pt on the timeline satisfies pt->value == timeline->value + 2. The fb driver then writes the fence's fd back to user space, which wraps up the retireFence handling. When the retireFence actually gets signaled will become clear shortly, when we analyze how the kthread processes the buffer data.
Earlier, while analyzing s3c_fb_set_win_config, we said we would come back to s3c_fb_set_win_buffer and the kthread's work. Let's take s3c_fb_set_win_buffer first.
static int s3c_fb_set_win_config(struct s3c_fb *sfb,
        struct s3c_fb_win_config_data *win_data)
{
    ...
    ret = s3c_fb_set_win_buffer(sfb, win, config, regs);
    ...
}

static int s3c_fb_set_win_buffer(struct s3c_fb *sfb, struct s3c_fb_win *win,
        struct s3c_fb_win_config *win_config, struct s3c_reg_data *regs)
{
    struct ion_handle *handle;
    struct fb_var_screeninfo prev_var = win->fbinfo->var;
    struct s3c_dma_buf_data dma_buf_data;

    if (win_config->fence_fd >= 0) {
        /* if the win_config->fence_fd passed down from hwc is >= 0,
         * look up the corresponding sync_fence via sync_fence_fdget */
        dma_buf_data.fence = sync_fence_fdget(win_config->fence_fd);
        if (!dma_buf_data.fence) {
            dev_err(sfb->dev, "failed to import fence fd\n");
            ret = -EINVAL;
            goto err_offset;
        }
    }
    /* dma_buf_data, along with the fence just obtained, is saved
     * into regs */
    regs->dma_buf_data[win_no] = dma_buf_data;

    return 0;
}

struct sync_fence *sync_fence_fdget(int fd)
{
    struct file *file = fget(fd);

    if (file == NULL)
        return NULL;

    if (file->f_op != &sync_fence_fops)
        goto err;

    return file->private_data;

err:
    fput(file);
    return NULL;
}
regs is stored on update_regs_list and eventually processed by the kthread in s3c_fb_update_regs_handler. Here regs->dma_buf_data[i].fence is exactly the acquireFence from hwc.
static void s3c_fb_update_regs_handler(struct kthread_work *work)
{
    struct s3c_fb *sfb =
        container_of(work, struct s3c_fb, update_regs_work);
    struct s3c_reg_data *data, *next;
    struct list_head saved_list;

    mutex_lock(&sfb->update_regs_list_lock);
    saved_list = sfb->update_regs_list;
    list_replace_init(&sfb->update_regs_list, &saved_list);
    mutex_unlock(&sfb->update_regs_list_lock);

    list_for_each_entry_safe(data, next, &saved_list, list) {
        /* each regs entry is removed from update_regs_list once
         * processed */
        s3c_fb_update_regs(sfb, data);
        list_del(&data->list);
        kfree(data);
    }
}
static void s3c_fb_update_regs(struct s3c_fb *sfb, struct s3c_reg_data *regs)
{
    for (i = 0; i < sfb->variant.nr_windows; i++) {
        old_dma_bufs[i] = sfb->windows[i]->dma_buf_data;

        /* this waits for the acquireFence to be signaled; worth a
         * closer look */
        if (regs->dma_buf_data[i].fence)
            s3c_fd_fence_wait(sfb, regs->dma_buf_data[i].fence);
    }

    /* the display programming itself; not expanded here */
    __s3c_fb_update_regs(sfb, regs);

    /* this is important and also worth a closer look */
    sw_sync_timeline_inc(sfb->timeline, 1);

    /* free the buffers allocated in the previous cycle; this ends up
     * calling sync_fence_put(dma->fence), also worth a look */
    for (i = 0; i < sfb->variant.nr_windows; i++)
        s3c_fb_free_dma_buf(sfb, &old_dma_bufs[i]);
}
The comments mark three things worth a closer look:
1. s3c_fd_fence_wait
2. sw_sync_timeline_inc
3. sync_fence_put
Let's start with 1, s3c_fd_fence_wait.
static void s3c_fd_fence_wait(struct s3c_fb *sfb, struct sync_fence *fence)
{
    int err = sync_fence_wait(fence, 1000);
    if (err >= 0)
        return;

    if (err == -ETIME)
        err = sync_fence_wait(fence, 10 * MSEC_PER_SEC);

    if (err < 0)
        dev_warn(sfb->dev, "error waiting on fence: %d\n", err);
}
sync_fence_wait is called twice, just with different timeouts. Notice that even if the wait never succeeds, the buffer processing still goes ahead; all we get is a warning.
int sync_fence_wait(struct sync_fence *fence, long timeout)
{
    int err = 0;

    if (timeout > 0) {
        timeout = msecs_to_jiffies(timeout);
        /* wait for sync_fence_check() to become true */
        err = wait_event_interruptible_timeout(fence->wq,
                               sync_fence_check(fence),
                               timeout);
    } else if (timeout < 0) {
        err = wait_event_interruptible(fence->wq,
                           sync_fence_check(fence));
    }

    return 0;
}
static bool sync_fence_check(struct sync_fence *fence)
{
    /*
     * Make sure that reads to fence->status are ordered with the
     * wait queue event triggering
     */
    /* the read of status is placed after a read memory barrier */
    smp_rmb();
    return fence->status != 0;
}
So s3c_fd_fence_wait is just waiting for fence->status to become non-zero: 1 means signaled, negative means error.
Next, 2: sw_sync_timeline_inc.
void sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc)
{
    obj->value += inc;

    sync_timeline_signal(&obj->obj);
}
The sw_sync_timeline's value grows by inc, which in our case is +1. The first time through, the timeline holds a single pt with value 2 while the timeline's value becomes 1; then sync_timeline_signal is called. After n cycles, the pt with the largest value has value n+1 and the timeline's value is n.
void sync_timeline_signal(struct sync_timeline *obj)
{
    unsigned long flags;
    LIST_HEAD(signaled_pts);
    struct list_head *pos, *n;

    spin_lock_irqsave(&obj->active_list_lock, flags);

    /* under the spinlock, remove every pt judged to be signaled from
     * active_list_head and add it to the signaled_pts list */
    list_for_each_safe(pos, n, &obj->active_list_head) {
        struct sync_pt *pt =
            container_of(pos, struct sync_pt, active_list);

        if (_sync_pt_has_signaled(pt)) {
            list_del_init(pos);
            list_add(&pt->signaled_list, &signaled_pts);
            kref_get(&pt->fence->kref);
        }
    }

    spin_unlock_irqrestore(&obj->active_list_lock, flags);

    list_for_each_safe(pos, n, &signaled_pts) {
        struct sync_pt *pt =
            container_of(pos, struct sync_pt, signaled_list);

        list_del_init(pos);
        /* every signaled pt gets one call to sync_fence_signal_pt()
         * to decide whether the fence it belongs to should be
         * signaled */
        sync_fence_signal_pt(pt);
        kref_put(&pt->fence->kref, sync_fence_free);
    }
}
_sync_pt_has_signaled was analyzed above: during cycle n the timeline's value reaches n, so the pt whose value is n, i.e. the one created during cycle n-1, gets signaled.
With sw_sync_timeline_inc done, that leaves 3: sync_fence_put.
void sync_fence_put(struct sync_fence *fence)
{
    fput(fence->file);
}
A single line, nothing more to it: it just drops the reference on the fence's file.
That completes our analysis of the fence mechanism in the fb driver. To summarize:
1. The fb driver builds one sw_sync_timeline. Every pt on the timeline has a value; comparing timeline->value against pt->value determines which pts should be signaled.
2. Each time a fence is needed, a sw_sync_pt is first created on the timeline with a monotonically increasing value, and a sync_fence is then built from that sw_sync_pt.
3. On sw_sync_timeline_inc(struct sw_sync_timeline *obj, u32 inc), timeline->value grows by inc; the timeline immediately determines which of its pts are now signaled, then checks whether the fences those pts belong to should be signaled.
4. When a fence is no longer needed, it is released with sync_fence_put.
In addition, sw_sync lets you drive fences by opening a device node and issuing ioctls. If our hwc had no ready-made ioctl to use and no way to change the driver code, it could open the /dev/sw_sync device and monitor and control fences through a series of ioctls.