Network Intrusion Detection with Suricata (Part 16) -- An Automatic Maintenance Tool for Suricata/Snort-Style Rules

Introduction

For a while I have wanted to write a tool that maintains a set of suricata/snort-style rules. Once started, the script should automatically crawl rules from a preconfigured list of sites, process them, store them in a database, and then produce statistics such as how many rules were added today and which rules have been deprecated.
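
As a concrete illustration of the storage step, here is a minimal sketch of how crawled rules could be persisted and keyed by SID; the use of SQLite, the table schema, and the save_rules helper are assumptions for illustration, not details taken from the tool itself:

  import re
  import sqlite3
  from typing import List

  # sid:NNNN uniquely identifies a suricata/snort rule, so it is a natural
  # primary key for diffing old and new rule sets across runs.
  SID_RE = re.compile(r"\bsid\s*:\s*(\d+)")

  def save_rules(db_path: str, source: str, rules: List[str]) -> None:
      conn = sqlite3.connect(db_path)
      conn.execute(
          "CREATE TABLE IF NOT EXISTS rules ("
          "sid INTEGER PRIMARY KEY, source TEXT, raw TEXT, "
          "last_seen DATE DEFAULT CURRENT_DATE)"
      )
      for raw in rules:
          m = SID_RE.search(raw)
          if m:  # skip comments and lines without a sid option
              conn.execute(
                  "INSERT OR REPLACE INTO rules (sid, source, raw) VALUES (?, ?, ?)",
                  (int(m.group(1)), source, raw),
              )
      conn.commit()
      conn.close()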

The free sources of suricata/snort-style rules that I have found so far are:

  • https://rules.emergingthreats.net/open/suricata-5.0/emerging-all.rules
  • https://sslbl.abuse.ch/blacklist/sslipblacklist.rules
  • https://sslbl.abuse.ch/blacklist/sslipblacklist_aggressive.rules
  • https://raw.githubusercontent.com/ptresearch/AttackDetection/master/pt.rules.tar.gz

Architecture

To extend the tool with a spider for a new rule URL:
1. Install the runtime environment:
  Python 3.8: https://www.python.org/downloads/
  Scrapy 2.4.1: https://scrapy.org/ (installable via pip install scrapy==2.4.1)
  
2. Create a subclass for the new site. It must inherit from scrapy.Spider and override allowed_domains, start_urls, and the parse function (a fuller parse sketch follows the snippet):
  import scrapy

  class GetEtRulesSpider(scrapy.Spider):
      # Spider for the Emerging Threats open ruleset; use it as a template
      # for other sites by changing name, allowed_domains and start_urls.
      name = 'get_et_rules'
      allowed_domains = ['rules.emergingthreats.net']
      start_urls = ['https://rules.emergingthreats.net/open/suricata-5.0/emerging-all.rules']

      def parse(self, response):
          # your parse code ...
          pass
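
To make the placeholder concrete, parse could be filled in along these lines; treating the response as plain text with one rule per line is my own sketch, not the original implementation:

      def parse(self, response):
          # Sketch: a .rules file is plain text, one rule per line.
          # Skip blank lines and '#' comments, and yield each rule as an
          # item for downstream storage and diffing.
          for line in response.body.decode('utf-8', errors='ignore').splitlines():
              line = line.strip()
              if line and not line.startswith('#'):
                  yield {'source': self.name, 'rule': line}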

3. Register the new spider in RulesSpider by adding another crawl call:
  from scrapy.crawler import CrawlerProcess

  class RulesSpider:
      def __init__(self):
          # One CrawlerProcess runs all registered spiders in a single run.
          self.process = CrawlerProcess({
              'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
          })
          self.process.crawl(GetEtRulesSpider)
          self.process.crawl(GetSSLIpBlackListSpider)
          self.process.crawl(GetSSLIpBlackListAggressiveSpider)
          # register your spider class here ...

      def start(self):
          # Blocks until every registered spider has finished.
          self.process.start()
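
A minimal main.py driving the class above might look like this; the module name rules_spider is a hypothetical placeholder, not the project's actual layout:

  from rules_spider import RulesSpider  # hypothetical module path

  if __name__ == '__main__':
      RulesSpider().start()  # blocks until every registered spider finishes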
          
4. Run main.py; the output looks like this:
    2021-10-25 16:08:47 [scrapy.core.engine] INFO: Closing spider (finished)
    2021-10-25 16:08:47 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 281,
    'downloader/request_count': 1,
    'downloader/request_method_count/GET': 1,
    'downloader/response_bytes': 2830542,
    'downloader/response_count': 1,
    'downloader/response_status_count/200': 1,
    'elapsed_time_seconds': 4.061754,
    'finish_reason': 'finished',
    'finish_time': datetime.datetime(2021, 10, 25, 8, 8, 47, 572493),
    'log_count/DEBUG': 3,
    'log_count/INFO': 34,
    'response_received_count': 1,
    'scheduler/dequeued': 1,
    'scheduler/dequeued/memory': 1,
    'scheduler/enqueued': 1,
    'scheduler/enqueued/memory': 1,
    'start_time': datetime.datetime(2021, 10, 25, 8, 8, 43, 510739)}
    2021-10-25 16:08:47 [scrapy.core.engine] INFO: Spider closed (finished)
    update nidps rule successfully, totally time costs 5s
    old rules has 29765 items, new rules has 29778 items
    incre 8011 items, del 7998 items
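
The added/deleted counts in that summary follow from a set difference over rule SIDs; a minimal sketch, assuming the SQLite table from the Introduction (the snapshot logic is illustrative, not the tool's actual code):

  import sqlite3

  def diff_stats(db_path: str, old_sids: set):
      # Compare the SIDs stored after the current run against a snapshot
      # of the SIDs recorded before it (both plain Python sets).
      conn = sqlite3.connect(db_path)
      new_sids = {row[0] for row in conn.execute("SELECT sid FROM rules")}
      conn.close()
      added = new_sids - old_sids      # e.g. "incre 8011 items"
      deleted = old_sids - new_sids    # e.g. "del 7998 items"
      print(f"old rules has {len(old_sids)} items, new rules has {len(new_sids)} items")
      print(f"incre {len(added)} items, del {len(deleted)} items")
      return added, deleted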
