【Scapy】Parsing HTTP packets captured by Wireshark with Python's Scapy package (formatted output)

  1. Install scapy (here inside an Anaconda environment) together with the scapy-http extension:
pip install scapy-http
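A quick sanity check that the install worked (a minimal sketch; the prints are just illustrative):
import scapy.all as scapy
import scapy_http.http as http  # provided by the scapy-http package
print(scapy.conf.version)       # installed scapy version string
print(http.HTTPRequest().name)  # 'HTTP Request'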
  2. The Python source below implements the following:
    1) read a local pcap file (a binary stream captured by Wireshark);
    2) let scapy parse the binary stream into structured packet objects;
    3) print the header information of each HTTP packet.
#!/usr/bin/env python
try:
    import scapy.all as scapy
except ImportError:
    import scapy

try:
    # This import works from the project directory
    import scapy_http.http as http
except ImportError:
    # If you installed this package via pip, you just need to execute this
    from scapy.layers import http

def processStr(data):
    # Field values arrive as raw bytes; decode them and split on CRLF
    # so each header line can be printed on its own line. (The original
    # regex over repr(bytes) breaks when the repr uses double quotes.)
    if isinstance(data, bytes):
        data = data.decode('utf-8', errors='replace')
    return data.split('\r\n')
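# Example (hypothetical input):
#   processStr(b'Host: example.com\r\nAccept: */*')
#   -> ['Host: example.com', 'Accept: */*']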

# packets = scapy.sniff(iface='eth0', count=100)  # live-capture alternative
# Replace the path below with the path to your own pcap file
packets = scapy.rdpcap('exampledata.pcap')
for p in packets:
    # Keep only packets that carry an HTTP layer (request or response)
    if p.haslayer(http.HTTPRequest) or p.haslayer(http.HTTPResponse):
        if 'TCP' in p:
            print('=' * 78)
            # Link- and network-layer info (Ethernet / IP)
            Ether_name = p.name
            Ether_dst =  p.dst
            Ether_src = p.src
            IP_name = p.payload.name
            # IP_proto = p.payload.proto
            IP_src = p.payload.src
            IP_dst = p.payload.dst

            print(Ether_name)
            print('dst : ' + Ether_dst)
            print('src : ' + Ether_src)

            print(IP_name)
            # print('protocol : ' + str(IP_proto))
            print('src : ' + IP_src)
            print('dst : ' + IP_dst)

            # HTTP-layer info
            # HTTP request packet
            if p.haslayer(http.HTTPRequest):
                print("*********request******")
                http_name = 'HTTP Request'
                # Raw request headers (the unparsed 'Headers' field)
                http_header = p[http.HTTPRequest].fields
                # print(http_header)
                headers = http_header['Headers']
                items = processStr(headers)
                for i in items:
                    print(i)
                # Request method, path, and HTTP version
                methods = http_header['Method']
                items = processStr(methods)
                print('Method:' + items[0])
                paths = http_header['Path']
                items = processStr(paths)
                print('Path:' + items[0])
                versions = http_header['Http-Version']
                items = processStr(versions)
                print('Http-Version:' + items[0])
            # HTTP response packet
            elif p.haslayer(http.HTTPResponse):
                print("*********response******")
                http_name = 'HTTP Response'
                # Raw response headers
                http_header = p[http.HTTPResponse].fields
                # print(http_header)
                headers = http_header['Headers']
                items = processStr(headers)
                for i in items:
                    print(i)
                # Responses carry no Method/Path fields; in scapy-http the
                # 'Status-Line' field holds version, status code, and phrase
                status = http_header['Status-Line']
                items = processStr(status)
                print('Status-Line:' + items[0])
                # Response body, if any
                if 'Raw' in p:
                    load = p['Raw'].load
                    items = processStr(load)
                    for i in items:
                        print(i)

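For reference, scapy 2.4.3 and later ship an HTTP layer of their own, so the scapy-http package is no longer strictly needed. Below is a minimal sketch of the same filtering loop against that built-in layer (assuming a recent scapy and the same exampledata.pcap); note that its field names use underscores (Http_Version, Status_Code) rather than dashes:
from scapy.all import rdpcap
from scapy.layers import http as shttp

packets = rdpcap('exampledata.pcap')
for p in packets:
    if p.haslayer(shttp.HTTPRequest):
        req = p[shttp.HTTPRequest]
        # Field values are bytes (or None); print() shows them as-is
        print(req.Method, req.Path, req.Http_Version)
    elif p.haslayer(shttp.HTTPResponse):
        resp = p[shttp.HTTPResponse]
        print(resp.Status_Code, resp.Reason_Phrase)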
  3. The output is as follows:
    (screenshot of the formatted run output)
  4. References:
    1)https://blog.csdn.net/jjonger/article/details/81275120
    2)http://fivezh.github.io/2016/05/31/Python-http-packet-parsing/
    3)https://github.com/invernizzi/scapy-http