Notice:
Reposting from this blog is welcome, but please keep the original author information!
Blog: http://blog.csdn.net/liujiong63
Sina Weibo: @Jeremy____Liu
The content is my own study, research and summary; any resemblance is purely an honor!
References:
Overview of osprofiler
Background
OpenStack consists of multiple projects, and each project is in turn composed of multiple services. Some API requests are processed by several different services (for example, booting a virtual machine). When such a request works too slowly, it is extremely complicated to understand what exactly went wrong and to locate the bottleneck. The osprofiler project was created to solve this.

osprofiler is a tiny but powerful library that is being adopted by all OpenStack projects and their Python clients. It generates one trace per request, no matter how many services the request passes through, and builds a tree of calls, which makes it easy for developers and testers to analyze request processing and tune performance.
Using osprofiler
API
There are five ways to add a new trace point to a piece of code.
First, import osprofiler at the top of the module:
from osprofiler import profiler
1.
def some_func():
    profiler.start("point_name", {"any_key": "with_any_value"})
    # your code
    profiler.stop({"any_info_about_point": "in_this_dict"})
2.
@profiler.trace("point_name",
                info={"any_info_about_point": "in_this_dict"},
                hide_args=False)
def some_func2(*args, **kwargs):
    # If you need to hide args in profile info, put hide_args=True
    pass
3.
def some_func3():
    with profiler.Trace("point_name",
                        info={"any_key": "with_any_value"}):
        pass  # some code here
4.
@profiler.trace_cls("point_name", info={}, hide_args=False,
                    trace_private=False)
class TracedClass(object):

    def traced_method(self):
        pass

    def _traced_only_if_trace_private_true(self):
        pass
5.
@six.add_metaclass(profiler.TracedMeta)
class RpcManagerClass(object):
    __trace_args__ = {'name': 'rpc',
                      'info': None,
                      'hide_args': False,
                      'trace_private': False}

    def my_method(self, some_args):
        pass

    def my_method2(self, some_arg1, some_arg2, kw=None, kw2=None):
        pass
profiler.Trace() calls profiler.start() and profiler.stop() under the hood. Both profiler.start() and profiler.stop() send a message to the collector, so each trace point sends two messages to the collector.
The messages osprofiler sends to the collector have the following format:
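The start/stop pairing can be sketched with a minimal stand-in. This is not osprofiler's actual implementation; the "collector" here is just a list, and the trace function is a hypothetical simplification of profiler.Trace():

```python
import uuid
from contextlib import contextmanager

# Toy collector: in osprofiler this would be a messaging/Redis/MongoDB backend.
collected = []

@contextmanager
def trace(name, info=None):
    """Emit one '-start' and one '-stop' message per trace point."""
    trace_id = str(uuid.uuid4())
    collected.append({"name": name + "-start", "trace_id": trace_id,
                      "info": info or {}})
    try:
        yield
    finally:
        collected.append({"name": name + "-stop", "trace_id": trace_id,
                          "info": info or {}})

with trace("point_name", {"any_key": "with_any_value"}):
    pass  # traced code would run here

# Each trace point produced exactly two messages: point_name-start, point_name-stop.
```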
{
    "name": <point_name>-(start|stop),
    "base_id": <uuid>,
    "parent_id": <uuid>,
    "trace_id": <uuid>,
    "info": <dict>
}
The fields are defined as follows:
- base_id - (uuid) that is equal for all trace points that belong to one trace, this is done to simplify the process of retrieving all trace points related to one trace from collector
- parent_id - (uuid) of parent trace point
- trace_id - (uuid) of current trace point
- info - the dictionary that contains user information passed when calling profiler start() & stop() methods
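Given those fields, the collector can rebuild the tree of calls by grouping messages on base_id and linking each trace_id to its parent_id. A minimal illustrative sketch (the message layout follows the format above; the tree builder itself is hypothetical, not osprofiler code):

```python
def build_tree(messages, base_id):
    """Map each parent_id to the trace points started underneath it."""
    children = {}
    for m in messages:
        if m["base_id"] != base_id:
            continue  # only trace points that belong to this trace
        if m["name"].endswith("-start"):  # one node per start/stop pair
            children.setdefault(m["parent_id"], []).append(m["trace_id"])
    return children

# Sample trace: an API call that made one nested RPC call.
BASE = "b1"
messages = [
    {"name": "api-start", "base_id": BASE, "parent_id": BASE, "trace_id": "t1", "info": {}},
    {"name": "rpc-start", "base_id": BASE, "parent_id": "t1", "trace_id": "t2", "info": {}},
    {"name": "rpc-stop",  "base_id": BASE, "parent_id": "t1", "trace_id": "t2", "info": {}},
    {"name": "api-stop",  "base_id": BASE, "parent_id": BASE, "trace_id": "t1", "info": {}},
]

tree = build_tree(messages, BASE)
# The RPC point hangs off the API point: {'b1': ['t1'], 't1': ['t2']}
```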
For more details on integrating osprofiler with OpenStack projects, see https://docs.openstack.org/developer/osprofiler/integration.html
Configuration
If a project has osprofiler integrated, enable it in the project's configuration file:
[profiler]
enabled = True
trace_sqlalchemy = True
hmac_keys = SECRET_KEY
connection_string = messaging://
On configuring cinder to use osprofiler, see http://niusmallnan.com/_build/html/_templates/openstack/osprofiler.html
Command-line usage of osprofiler
1. Print the information of a single trace to the console:
osprofiler trace show <trace_id> --json/--html
2. Redirect the output to a file (add the --out option):
osprofiler trace show <trace_id> --json/--html --out /path/to/file
Which API requests should be traced?
All HTTP requests
This helps to understand which HTTP requests were made, the duration of the calls (service latency), and which projects were involved in a request.
All RPC calls
This helps to understand the duration of the parts of a request that are handled by different services of one project. This information is essential to understand which service is the bottleneck.
All DB API calls
In some cases a slow DB query can be the bottleneck, so it is quite useful to track how much time a request spends in the DB layer.
All driver calls
In the case of nova, cinder and others we have vendor drivers.
All SQL requests
Turned off by default, because it produces a lot of traffic.