[IoT Identification] Scanning the network with nmap && zgrab2

Design requirements
Design and build a vulnerability-scanning platform that meets the following requirements:

( 1 ) Use web crawlers to collect detailed vulnerability and device data, and build a vulnerability library and a device fingerprint library.

( 2 ) Scan the network with nmap, with zgrab2 as a supplementary scanner.

( 3 ) Perform device fingerprinting and vulnerability matching.

( 4 ) Verify vulnerabilities with Nessus.

( 5 ) Build a visualization platform.

( 6 ) Write up the design report.


nmap is a network port scanner used to probe which ports are open on networked machines, determine which services run on which ports, and infer which operating system a host is running (also known as fingerprinting). It is a staple tool for network administrators and for assessing the security of networked systems.
In this project, the nmap.py script scans the IP addresses listed in IP.txt and collects each live host's IP address, port numbers, vendor, operating system, model, and other details.

For IP addresses where nmap returns no results, zgrab2 runs a supplementary scan. zgrab2 probes each IP address over at least four protocols and returns service banners. Device identification then runs on those banners: first some natural-language processing (stripping HTML tags, useless punctuation, stop words, and so on), then fingerprint matching against the device fingerprint library to obtain the IoT device's brand, model, and other attributes.
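As a sketch of that matching step, here is a minimal keyword-based fingerprint lookup. The FINGERPRINTS entries are hypothetical stand-ins for the fingerprint library that requirement (1) builds with the crawler:

```python
import re

# Hypothetical miniature fingerprint library; the real one is built by the crawler
FINGERPRINTS = [
    {"pattern": r"RouterOS", "brand": "MikroTik", "model": "RouterOS"},
    {"pattern": r"Hikvision|DVRDVS", "brand": "Hikvision", "model": "DVR/IPC"},
    {"pattern": r"dropbear", "brand": "generic-embedded", "model": "Dropbear SSH"},
]

def identify(banner):
    """Match a cleaned banner against the fingerprint library."""
    for fp in FINGERPRINTS:
        if re.search(fp["pattern"], banner, re.IGNORECASE):
            return {"brand": fp["brand"], "model": fp["model"]}
    return None  # unknown device
```

A real fingerprint library would carry many more patterns and richer metadata, but the lookup itself stays a linear regex match like this.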


First install nmap itself from https://nmap.org/download.html, then add its install directory to your PATH.

Then run pip install python-nmap, after which it can be used in Python with a plain import nmap.

nmap options

-F  fast scan (fewer ports)

-T<0-5>  timing template (paranoid|sneaky|polite|normal|aggressive|insane)

-Pn  skip host discovery and treat every host as online

-sS  stealth scan (TCP SYN scan): a half-open scan that runs quickly and can distinguish open, closed, and filtered ports

-O  operating system detection

I threw together a quick nmap port-scanning script and dropped it on a server:

import nmap
import json

def scans(ip):
    # -F fast scan, -T4 aggressive timing, -Pn skip host discovery, -sS SYN scan, -O OS detection
    output = nm.scan(hosts=ip, arguments='-F -T4 -Pn -sS -O')
    uphosts = output['nmap']['scanstats']['uphosts']
    for i in output['scan'].values():
        if i['status']['state'] != 'up':
            continue
        # portused/osmatch are only present when -O succeeds, so use .get with defaults
        data = {"uphosts": uphosts, "ip": i['addresses']['ipv4'], "vendor": i.get('vendor', {}),
                "reason": i['status']['reason'], "port": i.get('portused', []), "os": i.get('osmatch', [])}
        yield json.dumps({"result": data})

if __name__ == "__main__":
    nm = nmap.PortScanner()
    # Open the output files once instead of reopening them per result
    with open('./IP.txt', 'r', encoding='utf-8') as f, \
         open('./good_ip.txt', 'a', encoding='utf-8') as fg, \
         open('./bad_ip.txt', 'a', encoding='utf-8') as fb:
        for line in f:
            for item in scans(line.strip()):
                res = json.loads(item)
                print(res)
                # IPs with no results get a follow-up scan with zgrab2
                if res['result']['port'] == []:
                    fb.write(res["result"]["ip"] + '\n')
                else:
                    fg.write(json.dumps(res) + '\n')

The script reads the IPs from IP.txt and scans them; IPs that yield information go into good_ip.txt, the rest into bad_ip.txt. But calling nmap is painfully slow: single-threaded, the 1.1 million entries in IP.txt would never finish, so I wrote the multithreaded version below.

import nmap
import json
import threading
import time

THREADING_NUM = 100
THREADING = list()
lock = threading.Lock()

# Scan with nmap and return the collected results
def scans(ip):
    result = []
    try:
        output = nm.scan(hosts=ip, arguments='-F -T4 -Pn -sS -O')
        uphosts = output['nmap']['scanstats']['uphosts']
        for i in output['scan'].values():
            if i['status']['state'] != 'up':
                continue
            # portused/osmatch are only present when -O succeeds, so use .get with defaults
            data = {"uphosts": uphosts, "ip": i['addresses']['ipv4'], "vendor": i.get('vendor', {}),
                    "reason": i['status']['reason'], "port": i.get('portused', []), "os": i.get('osmatch', [])}
            result.append(json.dumps({"result": data}))
    except Exception as e:
        print(e)
    return result

# Fetch scan results and write them to file
def writeinfo(line):
    global THREADING_NUM
    THREADING_NUM -= 1
    print("{} threads still available".format(THREADING_NUM))
    result = scans(line)
    if result:
        # Hold the lock for the whole batch; acquiring it once per item and
        # releasing only once afterwards would deadlock on the second item
        with lock:
            with open('./good_ip.txt', 'a+') as fg, open('./bad_ip.txt', 'a+') as fb:
                for resu in result:
                    res = json.loads(resu)
                    print(res)
                    # IPs with no results get a follow-up scan with zgrab2
                    if res['result']['port'] == []:
                        fb.write(res["result"]["ip"] + '\n')
                    else:
                        fg.write(json.dumps(res) + '\n')
    THREADING_NUM += 1
    print("A thread has finished")

if __name__ == "__main__":
    nm = nmap.PortScanner()
    with open('./IP.txt', 'r', encoding='utf-8') as f:      
        lines = f.readlines()
        for line in lines:
            time.sleep(.15)
            while True:
                if THREADING_NUM > 0:
                    # args must be a tuple containing the stripped line
                    t = threading.Thread(target=writeinfo, args=(line.strip(),))
                    THREADING.append(t)
                    # t.daemon = True
                    t.start()
                    break
                else:
                    # No capacity: wait for the oldest thread to finish first
                    THREADING.pop(0).join()
                    break

The idea: define a thread-pool cap, fire off up to 100 threads in one go, then watch for threads to finish; as soon as one finishes, add a new task to the pool. Perhaps because the cap is so high, the pool curiously drains by a few dozen threads at a time before new ones slowly trickle in, which keeps my CPU running at full tilt.
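As an aside, the same capped-concurrency pattern is available from the standard library's concurrent.futures, which avoids the manual counter and join bookkeeping. A minimal sketch, where run_all is a hypothetical wrapper and worker stands in for a function like writeinfo:

```python
from concurrent.futures import ThreadPoolExecutor

def run_all(lines, worker, max_workers=100):
    """Run worker over every line with at most max_workers concurrent threads."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map queues all tasks and reuses a fixed set of threads;
        # results come back in input order
        return list(pool.map(worker, (line.strip() for line in lines)))
```

The executor handles the "start a new task as soon as a thread frees up" logic internally, so no sleep loop or availability counter is needed.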

Even so, I stopped it after only about 200k IPs because I ran out of time. Of those 200k, about 30k were not identified by nmap; those went into a separate txt file to be fed into zgrab2 for a supplementary scan.

zgrab2

Single module: ./zgrab2 ssh

Multiple modules:


For the IP addresses that came back with no information, zgrab2 runs a supplementary scan via ./zgrab2 multiple -c multiple.ini -f bad_ip.txt -o output.txt. The success rate is quite low, and on top of that the scan is easily flagged as an attack by Alibaba Cloud.
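The multiple.ini file isn't shown in the original post; a plausible sketch covering the required four-plus protocols might look like the following, where the section names are zgrab2 module names and the "name" keys are assumptions chosen to match the module labels that appear in the scan output further down (ftp21, http80, http8080, ssh22, telnet23):

```ini
# Sketch of a possible multiple.ini for zgrab2's multiple-module mode
[ftp]
name="ftp21"
port=21
[http]
name="http80"
port=80
[ssh]
name="ssh22"
port=22
[telnet]
name="telnet23"
port=23
```

Each section tells zgrab2 to run that protocol module against the given port, tagging results with the section's name.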


As you can see, the supplementary scan results are poor: out of 33k IPs, only a few hundred succeeded.

Data cleanup

The data zgrab2 produces looks like this:

{"ip":"115.53.76.103","data":{"ftp21":{"status":"success","protocol":"ftp","result":{"banner":"220 Welcome to virtual FTP service.\r\n"},"timestamp":"2021-01-16T18:55:48+08:00"},"http80":{"status":"unknown-error","protocol":"http","result":{},"timestamp":"2021-01-16T18:55:18+08:00","error":"net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"},"http8080":{"status":"unknown-error","protocol":"http","result":{},"timestamp":"2021-01-16T18:55:28+08:00","error":"net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"},"ssh22":{"status":"connection-timeout","protocol":"ssh","result":{},"timestamp":"2021-01-16T18:55:38+08:00","error":"dial tcp 115.53.76.103:22: i/o timeout"},"telnet23":{"status":"connection-timeout","protocol":"telnet","timestamp":"2021-01-16T18:55:18+08:00","error":"dial tcp 115.53.76.103:23: connect: connection refused"}}}

{"ip":"218.59.123.10","data":{"ftp21":{"status":"connection-timeout","protocol":"ftp","timestamp":"2021-01-16T18:55:48+08:00","error":"dial tcp 218.59.123.10:21: connect: connection refused"},"http80":{"status":"unknown-error","protocol":"http","result":{},"timestamp":"2021-01-16T18:55:18+08:00","error":"net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"},"http8080":{"status":"connection-timeout","protocol":"http","result":{},"timestamp":"2021-01-16T18:55:28+08:00","error":"dial tcp \u003cnil\u003e-\u003e218.59.123.10:8080: i/o timeout"},"ssh22":{"status":"connection-timeout","protocol":"ssh","result":{},"timestamp":"2021-01-16T18:55:38+08:00","error":"dial tcp 218.59.123.10:22: i/o timeout"},"telnet23":{"status":"connection-timeout","protocol":"telnet","timestamp":"2021-01-16T18:55:18+08:00","error":"dial tcp 218.59.123.10:23: connect: connection refused"}}}
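Before the NLP step, the useful records have to be pulled out of this JSON-lines output. A minimal sketch (extract_banners is a hypothetical helper, not part of the scripts above) that keeps only successful probes carrying a banner:

```python
import json

def extract_banners(lines):
    """Pull (ip, module, banner) tuples out of zgrab2's JSON-lines output."""
    banners = []
    for line in lines:
        record = json.loads(line)
        # Each protocol module gets its own entry under "data"
        for module, scan in record.get("data", {}).items():
            if scan.get("status") == "success":
                banner = scan.get("result", {}).get("banner", "")
                if banner:
                    banners.append((record["ip"], module, banner.strip()))
    return banners
```

Failed probes (connection-timeout, unknown-error) are silently skipped, so what remains is exactly the material the banner NLP below operates on.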

Using NLTK for natural-language processing, split each record into tokens:

import re
import nltk

# Requires one-time downloads: nltk.download('punkt'); nltk.download('stopwords')
stop_words = set(nltk.corpus.stopwords.words('english'))
# Keep 'will' and 'do' since they can be meaningful in banners
stop_words.discard('will')
stop_words.discard('do')

count = 0
with open("./output.txt", 'r', encoding='utf-8') as f, open("./output.json", 'a', encoding='utf-8') as ff:
    liness = f.readlines()
    for lines in liness:
        # Strip escaped whitespace sequences such as \r and \n
        line = re.sub(r'\\[nrtvf]', '', lines)
        # Strip HTML tags
        line = re.sub(r'<[^<]+?>', '', line)
        # Strip punctuation
        line = line.replace('@', '').replace('\\', '')
        line = re.sub(r'[\s+\!\\\/=|_@$&#%^*(+\')]+', '', line)
        line = line.replace('"', '').replace(',', '').replace('[', '').replace(']', '')
        # Tokenize
        words = nltk.word_tokenize(line)
        # Drop stop words
        filtered_sentence = [w for w in words if w not in stop_words]

        count += 1
        print("{:.4f}".format(count / len(liness)))

        ff.write(str(filtered_sentence))
        ff.write("\n")
print('done')


So from 30k records we salvage only about 300 to add to the 160k valid ones. Honestly, it doesn't seem worth it; the junk data can just be dropped. Next step: straight on to vulnerability matching.
