Python memory analysis tools — a line-by-line memory profiler for Python?

I'm looking to generate, from a large Python codebase, a summary of heap usage or memory allocations over the course of a function's run.

I'm familiar with heapy, and it's served me well for taking "snapshots" of the heap at particular points in my code, but I've found it difficult to generate a "memory-over-time" summary with it. I've also played with line_profiler, but that works with run time, not memory.

My fallback right now is Valgrind with massif, but that lacks a lot of the contextual Python information that both Heapy and line_profiler give. Is there some sort of combination of the latter two that give a sense of memory usage or heap growth over the execution span of a Python program?

Solution

I would use sys.settrace at program startup to register a custom tracer function. The tracer will be invoked for each line of code executed, so you can use it to store information gathered by heapy or meliae in a file for later processing.

Here is a very simple example which logs the output of hpy().heap() at most once per second to a plain text file:

import sys
import time
import atexit

from guppy import hpy

_heapy = hpy()  # reuse one heapy instance instead of creating one per sample
_last_log_time = time.time()
_logfile = open('logfile.txt', 'w')

def heapy_profile(frame, event, arg):
    global _last_log_time
    currtime = time.time()
    if currtime - _last_log_time < 1:
        return heapy_profile  # keep tracing, but skip logging this event
    _last_log_time = currtime
    filename = frame.f_code.co_filename
    lineno = frame.f_lineno  # the line currently executing, not the function's first line
    idset = _heapy.heap()
    _logfile.write('%s %s:%s\n%s\n\n' % (currtime, filename, lineno, idset))
    _logfile.flush()
    return heapy_profile  # returning the tracer keeps per-line events coming

atexit.register(_logfile.close)
sys.settrace(heapy_profile)
