Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

https://indico.cern.ch/event/304944/session/3/contribution/402/attachments/578582/796733/CHEP2015_Swift_Ceph_v8.pdf

Performance Comparison Studies With Real-Life Scenarios in an Experimental Data Taking Context Leveraging OpenStack Swift & Ceph

 

http://lists.openstack.org/pipermail/openstack/2014-January/004641.html

 

 https://swiftstack.com/blog/2013/04/18/openstack-summit-benchmarking-swift/

 

http://www.g363.com/s/openstack%20swift%20performance%20analysis_p2.html

 

 

>> Hi,
>>
>> This question is specific to OpenStack Swift. I am trying to understand
>> just how much is the proxy server a bottleneck when multiple clients are
>> concurrently trying to write to a swift cluster. Has anyone done
>> experiments to measure this? It'll be great to see some results.
>>
>> I see that the proxy-server already has a "workers" config option.
>> However, it looks like that is the # of threads in one proxy-server process.
>> Does having multiple proxy-servers themselves, running on different nodes
>> (and having some load-balancer in front of them) help in satisfying more
>> concurrent writes? Or will these multiple proxy-servers also get
>> bottlenecked on the account/container/obj server?
>>
>> Also, looking at the code in swift/proxy-server/controllers/obj.py, it
>> seems that each request that the proxy-server sends to the backend servers
>> (account/container/obj) is synchronous. It does not send the request and go
>> back to accept more requests. Is this one of the reasons why write requests
>> can be slow?
>>
>> Thanks in advance.

##############################################################################################

 

It's not synchronous; each request/eventlet co-routine will yield/trampoline
back to the reactor/hub on every socket operation that raises EWOULDBLOCK.
In cases where there's a tight, long-running read/write loop you'll normally
find a call to eventlet.sleep (or, in at least one case, a queue) to avoid
starvation.
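
To make that cooperative model concrete, here is a minimal, self-contained
sketch (not Swift's actual proxy code; the function and names are made up for
illustration, and it assumes the eventlet package is installed) of green
threads yielding to each other:

    import eventlet

    eventlet.monkey_patch()  # make blocking socket/time calls cooperative


    def busy_copy(label, chunks):
        # Stand-in for a long-running read/write loop inside one request.
        for i in range(chunks):
            # ... read a chunk from one socket, write it to another ...
            if i % 10 == 0:
                eventlet.sleep(0)  # explicitly yield so other green threads run
        print(label, "done")


    pool = eventlet.GreenPool(size=100)
    for n in range(5):
        pool.spawn_n(busy_copy, "request-%d" % n, 50)
    pool.waitall()

The explicit eventlet.sleep(0) is the same idea described above: a tight loop
voluntarily trampolines back to the hub so other in-flight requests keep
making progress.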

Tuning workers and concurrency has a lot to do with the hardware and somewhat
with the workload.  The processing rate of an individual proxy server is
mostly CPU-bound and depends on whether you're doing SSL termination in front
of your proxy.  Request-rate throughput is easily scaled by adding more proxy
servers (assuming your client doesn't become the bottleneck; look to
https://github.com/swiftstack/ssbench for a decently scalable Swift benchmark
suite).  Aggregate throughput is harder to scale wide because of load
balancing: round-robin DNS seems to be a good choice, or ssbench has an
option to benchmark against a set of storage URLs (a list of proxy servers).
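
As a rough illustration of the "set of storage URLs" idea, the sketch below
spreads concurrent PUTs across several proxy endpoints using plain Python and
the requests library; the URLs, token, container name, and object size are
placeholders for your own deployment, and for real measurements ssbench
itself is the better tool:

    from concurrent.futures import ThreadPoolExecutor

    import requests

    # Assumed values; replace with your cluster's storage URLs and auth token.
    STORAGE_URLS = [
        "http://proxy1:8080/v1/AUTH_test",
        "http://proxy2:8080/v1/AUTH_test",
    ]
    TOKEN = "AUTH_tk_replace_me"
    CONTAINER = "bench"        # assumed to exist already
    PAYLOAD = b"x" * 65536     # one 64 KiB object per request


    def put_object(i):
        # Naive client-side round robin across the proxy endpoints.
        base = STORAGE_URLS[i % len(STORAGE_URLS)]
        url = "%s/%s/obj-%06d" % (base, CONTAINER, i)
        resp = requests.put(url, data=PAYLOAD,
                            headers={"X-Auth-Token": TOKEN}, timeout=30)
        return resp.status_code


    with ThreadPoolExecutor(max_workers=32) as pool:
        statuses = list(pool.map(put_object, range(1000)))

    print("2xx responses:", sum(1 for s in statuses if 200 <= s < 300))

Timing the run with more or fewer entries in STORAGE_URLS gives a crude feel
for how much the proxies, rather than the client or the load balancing, limit
concurrent writes.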

Have you read:
http://docs.openstack.org/developer/swift/deployment_guide.html#general-service-tuning


> Hi Shrinand,
>
> The concurrency bottleneck of a Swift cluster can come from several places.
> Here's a list:
>
>    - Worker settings on each service: workers count, max_clients,
>    threads_per_disk.
>    - Whether the proxy nodes are CPU-bound
>    - Whether the storage nodes are CPU-bound
>    - Total disk I/O capacity (including the memory available for XFS caching)
>    - The power of your client machines
>    - Network issues
>
>
> You need to analyze the monitoring data to find the real bottleneck (a
> small sampling sketch follows after this reply).
> The achievable concurrency depends heavily on the deployment: concurrent
> connections can range from about 150 (VMs) to 6K+ (a physical server pool).
> Of course, you can set up multiple proxy servers to handle higher
> concurrency, as long as your storage nodes can stand it.
>
> The path of a request, as far as I know:
>
> Client --> Proxy-server --> object-server --> container-server (optionally
> async) --> object-server --> Proxy-server --> Client --> close connection.
>
>
> Hope it helps.
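
Picking up the advice above to analyze monitoring data, here is a small
sampling sketch (assuming the psutil package is available) that prints CPU,
disk-write, and network-receive rates; run it on each proxy and storage node
during a benchmark to see which resource saturates first:

    import time

    import psutil

    INTERVAL = 5  # seconds between samples

    psutil.cpu_percent(interval=None)   # prime the CPU counter
    prev_disk = psutil.disk_io_counters()
    prev_net = psutil.net_io_counters()

    while True:
        time.sleep(INTERVAL)
        cpu = psutil.cpu_percent(interval=None)
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        print("cpu=%5.1f%%  disk_write=%8.1f KiB/s  net_recv=%8.1f KiB/s" % (
            cpu,
            (disk.write_bytes - prev_disk.write_bytes) / 1024.0 / INTERVAL,
            (net.bytes_recv - prev_net.bytes_recv) / 1024.0 / INTERVAL))
        prev_disk, prev_net = disk, net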

 
