Chinese translated version of Documentation/driver-model/devres.txt

If you have any comment or update to the content, please contact the
original document maintainer directly.  However, if you have a problem
communicating in English you can also ask the Chinese maintainer for
help.  Contact the Chinese maintainer if this translation is outdated
or if there is a problem with the translation.

Chinese maintainer: 朱锋志 <605509916@qq.com>
Chinese translator: 朱锋志 <605509916@qq.com>

The following is the main text.

---------------------------------------------------------------------

Devres - Managed Device Resource
================================


Tejun Heo <teheo@suse.de>


First draft	10 January 2007

  1. Intro			: Huh? Devres?
  2. Devres			: Devres in a nutshell
  3. Devres Group		: Group devres'es and release them together
  4. Details			: Lifetime rules, calling context, ...
  5. Overhead			: How much do we have to pay for this?
  6. List of managed interfaces	: Currently implemented managed interfaces

  1. Intro
  --------


devres came up while trying to convert libata to use iomap.  Each
iomapped address should be kept and unmapped on driver detach.  For
example, a plain SFF ATA controller (that is, good old PCI IDE) in
native mode makes use of 5 PCI BARs and all of them should be
maintained.


As with many other device drivers, libata low level drivers have
sufficient bugs in ->remove and ->probe failure path.  Well, yes,
that's probably because libata low level driver developers are a lazy
bunch, but aren't all low level driver developers?  After spending a
day fiddling with braindamaged hardware with no document or
braindamaged document, if it's finally working, well, it's working.


For one reason or another, low level drivers don't receive as much
attention or testing as core code, and bugs on driver detach or
initialization failure don't happen often enough to be noticeable.
Init failure path is worse because it's much less travelled while it
needs to handle multiple entry points.


So, many low level drivers end up leaking resources on driver detach
and having half broken failure path implementation in ->probe() which
would leak resources or even cause oops when failure occurs.  iomap
adds more to this mix.  So do msi and msix.


  2. Devres
  ---------


devres is basically a linked list of arbitrarily sized memory areas
associated with a struct device.  Each devres entry is associated with
a release function.  A devres can be released in several ways.  No
matter what, all devres entries are released on driver detach.  On
release, the associated release function is invoked and then the
devres entry is freed.


Managed interface is created for resources commonly used by device
drivers using devres.  For example, coherent DMA memory is acquired
using dma_alloc_coherent().  The managed version is called
dmam_alloc_coherent().  It is identical to dma_alloc_coherent() except
that the DMA memory allocated using it is managed and will be
automatically released on driver detach.  Implementation looks like
the following.


  struct dma_devres {
	size_t size;
	void *vaddr;
	dma_addr_t dma_handle;
  };

  static void dmam_coherent_release(struct device *dev, void *res)
  {
	struct dma_devres *this = res;

	dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
  }

  dmam_alloc_coherent(dev, size, dma_handle, gfp)
  {
	struct dma_devres *dr;
	void *vaddr;

	dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
	...

	/* alloc DMA memory as usual */
	vaddr = dma_alloc_coherent(...);
	...

	/* record size, vaddr, dma_handle in dr */
	dr->vaddr = vaddr;
	...

	devres_add(dev, dr);

	return vaddr;
  }


If a driver uses dmam_alloc_coherent(), the area is guaranteed to be
freed whether initialization fails half-way or the device gets
detached.  If most resources are acquired using managed interface, a
driver can have much simpler init and exit code.  Init path basically
looks like the following.


  my_init_one()
  {
	struct mydev *d;

	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
	if (!d)
		return -ENOMEM;

	d->ring = dmam_alloc_coherent(...);
	if (!d->ring)
		return -ENOMEM;

	if (check something)
		return -EINVAL;
	...

	return register_to_upper_layer(d);
  }


And exit path,


  my_remove_one()
  {
	unregister_from_upper_layer(d);
	shutdown_my_hardware();
  }


As shown above, low level drivers can be simplified a lot by using
devres.  Complexity is shifted from less maintained low level drivers
to better maintained higher layer.  Also, as init failure path is
shared with exit path, both can get more testing.


  3. Devres group
  ---------------


Devres entries can be grouped using devres group.  When a group is
released, all contained normal devres entries and properly nested
groups are released.  One usage is to rollback series of acquired
resources on failure.  For example,

  if (!devres_open_group(dev, NULL, GFP_KERNEL))
	return -ENOMEM;

  acquire A;
  if (failed)
	goto err;

  acquire B;
  if (failed)
	goto err;
  ...

  devres_remove_group(dev, NULL);
  return 0;

 err:
  devres_release_group(dev, NULL);
  return err_code;


As resource acquisition failure usually means probe failure, constructs
like above are usually useful in midlayer driver (e.g. libata core
layer) where interface function shouldn't have side effect on failure.
For LLDs, just returning error code suffices in most cases.

Each group is identified by void *id.  It can either be explicitly
specified by @id argument to devres_open_group() or automatically
created by passing NULL as @id as in the above example.  In both
cases, devres_open_group() returns the group's id.  The returned id
can be passed to other devres functions to select the target group.
If NULL is given to those functions, the latest open group is
selected.


For example, you can do something like the following.


  int my_midlayer_create_something()
  {
	if (!devres_open_group(dev, my_midlayer_create_something, GFP_KERNEL))
		return -ENOMEM;

	...

	devres_close_group(dev, my_midlayer_create_something);
	return 0;
  }

  void my_midlayer_destroy_something()
  {
	devres_release_group(dev, my_midlayer_create_something);
  }


  4. Details
  ----------


Lifetime of a devres entry begins on devres allocation and finishes
when it is released or destroyed (removed and freed) - no reference
counting.

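As a minimal sketch of these lifetime rules (struct my_res, its release
function and the buffer below are hypothetical, not from the original
document):

  struct my_res {
	void *buf;
  };

  static void my_res_release(struct device *dev, void *res)
  {
	kfree(((struct my_res *)res)->buf);
  }

  /* allocated: the entry exists but is not yet owned by any device */
  struct my_res *mr = devres_alloc(my_res_release, sizeof(*mr), GFP_KERNEL);

  mr->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);

  /* added: from here on the entry is released on driver detach */
  devres_add(dev, mr);

  /*
   * Explicit end of life: devres_release() invokes my_res_release()
   * and frees the entry, while devres_destroy() removes and frees the
   * entry without invoking the release function.
   */
  devres_release(dev, my_res_release, NULL, NULL);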

devres core guarantees atomicity to all basic devres operations and
has support for single-instance devres types (atomic
lookup-and-add-if-not-found).  Other than that, synchronizing
concurrent accesses to allocated devres data is caller's
responsibility.  This is usually non-issue because bus ops and
resource allocations already do the job.


For an example of single-instance devres type, read pcim_iomap_table()
in lib/devres.c.

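As an illustration of that pattern, here is a rough sketch in the
spirit of pcim_iomap_table(), using the real devres_get() helper but
with hypothetical names (my_table_release(), my_iomap_table(), the
table size):

  static void my_table_release(struct device *dev, void *res)
  {
	/* nothing to free: the table itself is the devres data area */
  }

  void __iomem **my_iomap_table(struct device *dev)
  {
	void *new_tbl;

	new_tbl = devres_alloc(my_table_release,
			       6 * sizeof(void __iomem *), GFP_KERNEL);
	if (!new_tbl)
		return NULL;

	/*
	 * devres_get() atomically looks up an entry with the same
	 * release function: if one exists, @new_tbl is freed and the
	 * existing entry is returned; otherwise @new_tbl is added.
	 */
	return devres_get(dev, new_tbl, NULL, NULL);
  }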

All devres interface functions can be called without context if the
right gfp mask is given.

  5. Overhead
  -----------


Each devres bookkeeping info is allocated together with requested data
area.  With debug option turned off, bookkeeping info occupies 16
bytes on 32bit machines and 24 bytes on 64bit (three pointers rounded
up to ull alignment).  If singly linked list is used, it can be
reduced to two pointers (8 bytes on 32bit, 16 bytes on 64bit).

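For reference, the bookkeeping header sketched from the description
above (illustrative, close to but not necessarily the exact kernel
definitions in drivers/base/devres.c):

  struct devres_node {
	struct list_head	entry;		/* two pointers: list linkage */
	dr_release_t		release;	/* one pointer: release callback */
  };

  struct devres {
	struct devres_node	node;
	/* requested data area follows, rounded up to ull alignment */
	u8			data[];
  };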

Each devres group occupies 8 pointers.  It can be reduced to 6 if
singly linked list is used.


Memory space overhead on ahci controller with two ports is between 300
and 400 bytes on 32bit machine after naive conversion (we can
certainly invest a bit more effort into libata core layer).


  6. List of managed interfaces
  -----------------------------


MEM
  devm_kzalloc()
  devm_kfree()


IIO
  devm_iio_device_alloc()
  devm_iio_device_free()
  devm_iio_trigger_alloc()
  devm_iio_trigger_free()


IO region
  devm_request_region()
  devm_request_mem_region()
  devm_release_region()
  devm_release_mem_region()


IRQ
  devm_request_irq()
  devm_free_irq()


DMA
  dmam_alloc_coherent()
  dmam_free_coherent()
  dmam_alloc_noncoherent()
  dmam_free_noncoherent()
  dmam_declare_coherent_memory()
  dmam_pool_create()
  dmam_pool_destroy()


PCI
  pcim_enable_device()	: after success, all PCI ops become managed
  pcim_pin_device()	: keep PCI device enabled after release


IOMAP
  devm_ioport_map()
  devm_ioport_unmap()
  devm_ioremap()
  devm_ioremap_nocache()
  devm_iounmap()
  devm_ioremap_resource() : checks resource, requests memory region, ioremaps
  devm_request_and_ioremap() : obsoleted by devm_ioremap_resource()
  pcim_iomap()
  pcim_iounmap()
  pcim_iomap_table()	: array of mapped addresses indexed by BAR
  pcim_iomap_regions()	: do request_region() and iomap() on multiple BARs


REGULATOR
  devm_regulator_get()
  devm_regulator_put()
  devm_regulator_bulk_get()


CLOCK
  devm_clk_get()
  devm_clk_put()


PINCTRL
  devm_pinctrl_get()
  devm_pinctrl_put()


PWM
  devm_pwm_get()
  devm_pwm_put()


PHY
  devm_usb_get_phy()
  devm_usb_put_phy()


SLAVE DMA ENGINE
  devm_acpi_dma_controller_register()
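
For a rough feel of how several of these managed interfaces combine,
here is a minimal probe sketch (struct mydev, its regs field and
my_irq_handler() are hypothetical, not from the original document):

  static int my_probe(struct platform_device *pdev)
  {
	struct device *dev = &pdev->dev;
	struct resource *res;
	struct mydev *d;
	int irq, ret;

	d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
	if (!d)
		return -ENOMEM;

	/* checks the resource, requests the region and ioremaps it */
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	d->regs = devm_ioremap_resource(dev, res);
	if (IS_ERR(d->regs))
		return PTR_ERR(d->regs);

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	ret = devm_request_irq(dev, irq, my_irq_handler, 0, "mydev", d);
	if (ret)
		return ret;

	/* no cleanup needed on any failure path: devres releases it all */
	return 0;
  }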