Thread-safety issues with Python's greenlet module ( by quqi99 )



Author: Zhang Hua  Published: 2013-06-17
Copyright notice: this article may be freely reproduced, provided the original source, the author information and this copyright notice are clearly indicated via hyperlink.

( http://blog.csdn.net/quqi99 )


         Recently I ran into a very interesting problem. While the FVT team was stress-testing OpenStack, the quantum networking layer would occasionally throw exceptions, and the logs made no sense at all. After a long investigation the root cause turned out to be the following; see this patch in the OpenStack community: https://review.openstack.org/#/c/23198/10/nova/network/quantumv2/__init__.py. The patch caches new_client = client.Client(**params) (a subclass of httplib2.Http, i.e. an object holding a socket) and shares it among multiple greenthreads. Once you have read the article below you will understand what is going on; it is quoted from: http://blog.eventlet.net/2010/03/18/safety/

         My take: a plain nova boot on its own would probably be fine, since each boot is handled in its own process. But a periodic_task such as _heal_instance_info_cache in $nova/nova/compute/manager.py runs as a separate greenlet, so it can end up in exactly the situation described below together with the greenlet that allocates networks for an instance during a normal nova boot.

        A greenlet is essentially a user-space, non-blocking (NIO-style) thread. "User-space" means it is switched by the Python VM rather than by the operating system kernel. The greenthread scheduler is basically an endless loop: whichever task has its I/O ready is handled first, and when a task blocks on I/O the scheduler does not wait for it but moves on to other tasks whose I/O is ready. As in the third example below, which uses a pool of greenthreads, only one greenthread in a pool can actually be running at any given moment.
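        To make this concrete, here is a minimal sketch of my own (not from the quoted article) showing the cooperative switching: each greenthread runs until it hits a yielding call such as eventlet.sleep, at which point the hub switches to another runnable greenthread, so only one of them executes at any instant.

 import eventlet

 def worker(name):
     # runs until it reaches a yielding call, then the hub switches away
     for i in range(3):
         print(name, i)
         eventlet.sleep(0)   # explicitly yield to the hub

 pool = eventlet.GreenPool()
 pool.spawn(worker, 'a')
 pool.spawn(worker, 'b')
 pool.waitall()              # output from a and b interleaves, but never runs in parallel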

         The root of the problem is that Python does not synchronize access to objects such as sockets. Seen from another angle, this is exactly why greenthreads are so efficient: a socket was never meant to be shared in the first place. Ordinary threads can avoid races by synchronizing on the shared object itself, but the design philosophy of greenthreads, which are even lighter than threads, is that each one owns its own data structures instead of sharing them. This is somewhat similar to Java's ThreadLocal, where each Thread can keep its own local data (updated 2013-11-13: regarding such per-thread data structures, the patch https://review.openstack.org/#/c/56075/ tries to do exactly that). I had a discussion with Chris about this at https://review.openstack.org/#/c/33555/.
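         As an illustration of that "own your own data" idea, here is a rough sketch of mine, assuming eventlet.corolocal.local behaves like a greenthread-scoped ThreadLocal: each greenthread lazily creates and keeps its own httplib2.Http instance instead of sharing one.

 import eventlet
 from eventlet import corolocal
 httplib2 = eventlet.import_patched('httplib2')

 _store = corolocal.local()   # attributes set here are only visible to the current greenthread

 def get_http():
     # the first access in a given greenthread creates that greenthread's own instance
     if not hasattr(_store, 'http'):
         _store.http = httplib2.Http()
     return _store.http

 def get_url():
     resp, content = get_http().request("http://eventlet.net")
     return content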


       Update 2013-11-25: the neutron-related code was later modeled on the Java approach and cached the socket in each greenthread's local storage, but the error above showed up again. This patch (https://review.openstack.org/#/c/57509/2/nova/openstack/common/local.py) changed the greenthread-local store to an ordinary thread-local one, which fixed the problem. My understanding (which may be wrong) is that the underlying httplib2 library may start handling a second request before the previous one has finished; the socket is then shared within the same greenthread, and because it is a green socket it has no language-level synchronization such as Java's synchronized, so things can go wrong. Switching to ordinary threads lets the code rely on the normal language-level socket synchronization, which solved the problem.


To sum up:
httplib2.Http should not be shared between greenthreads. Either give each greenthread its own httplib2.Http instance, or use the eventlet.pools.Pool mechanism to build a pool of httplib2.Http instances that can be shared between greenthreads to a limited extent; the pool guarantees that an httplib2.Http instance is handed to another greenthread only after it has finished serving the current one.

There is a similar issue with rados.Rados being shared between different greenthreads. The patch https://review.openstack.org/#/c/175555/ changed the code to wrap rados.Rados instances with tpool.Proxy so that they could be shared between greenthreads, but rados.Rados comes from python-rbd and itself spawns threads to connect to rados, so that change introduced a regression; see https://review.openstack.org/#/c/197710/. With that, we are back to having the different greenthreads share a single rados.Rados instance, which relies on native Python thread synchronization internally, and that synchronization blocks all of those greenthreads at the same time.
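For reference, here is a rough sketch of what wrapping a blocking object in tpool.Proxy looks like; BlockingClient is a hypothetical stand-in for something like rados.Rados whose methods block in native code, and the proxy executes those calls in a native OS thread pool so they do not freeze the eventlet hub.

 import time
 import eventlet
 from eventlet import tpool

 class BlockingClient(object):           # hypothetical placeholder for e.g. rados.Rados
     def slow_call(self):
         time.sleep(1)                   # simulates a blocking call into native code
         return "done"

 client = tpool.Proxy(BlockingClient())  # method calls now run in a native thread pool

 def use_client():
     return client.slow_call()           # the hub keeps serving other greenthreads meanwhile

 pile = eventlet.GreenPile()
 pile.spawn(use_client)
 pile.spawn(use_client)
 print(list(pile))                       # ['done', 'done']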


One of the simple user errors that keeps on cropping up is accidentally having multiple greenthreads reading from the same socket at the same time.  It’s a simple thing to accidentally do; just create a shared resource that contains a socket and spawn at least two greenthreads to use it:

 import eventlet
 httplib2 = eventlet.import_patched('httplib2')
 shared_resource = httplib2.Http()
 def get_url():
     resp, content = shared_resource.request("http://eventlet.net")
     return content
 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

Running this with Eventlet 0.9.7 results in an httplib.IncompleteRead exception being raised. It’s because both calls to get_url are divvying up the data from the socket between them, and neither is getting the full picture.  The IncompleteRead error is pretty hard to debug — you’ll have no idea why it’s doing that, and you’ll be frustrated.

What’s new in the tip of Eventlet’s trunk is that Eventlet itself will warn you with a clear error message when you try to do this. If you run the above code with development Eventlet (see sidebar for instructions on how to get it) you now get this error instead:

RuntimeError: Second simultaneous read on fileno 3 detected.  Unless
 you really know what you're doing, make sure that only one greenthread
 can read any particular socket.  Consider using a pools.Pool. If you do know
 what you're doing and want to disable this error, call 
 eventlet.debug.hub_multiple_reader_prevention(False)

Cool, huh? A little clearer about what exactly is going wrong here. And if you really want to do multiple readers or multiple writers on the same socket simultaneously, there’s a way to disable the protection.

Of course, the fix for this particular toy example is to have a single instance of Http() for every greenthread:

 import eventlet
 httplib2 = eventlet.import_patched('httplib2')
 def get_url():
     resp, content = httplib2.Http().request("http://eventlet.net")
     return content
 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

But you probably created that shared_resource because you wanted to reuse Http() instances between requests. So you need some other way of sharing connections. This is what pools.Pool objects are for! Use them like this:

 from __future__ import with_statement
 import eventlet
 from eventlet import pools
 httplib2 = eventlet.import_patched('httplib2')
 
 httppool = pools.Pool()
 httppool.create = httplib2.Http
 
 def get_url():
     with httppool.item() as http:
         resp, content = http.request("http://eventlet.net")
         return content
 
 p = eventlet.GreenPile()
 p.spawn(get_url)
 p.spawn(get_url)
 results = list(p)
 assert results[0] == results[1]

The Pool class will guarantee that the Http instances are reused if possible, and that only one greenthread can access each at a time. If you're looking for somewhat more advanced usage of this design pattern, take a look at the source code to Heroshi, a concurrent web crawler written on top of Eventlet.

