Directory scanning is a very important step in the information-gathering phase: a site often has other, vulnerable pages, and missing them can cost you a perfect chance to get a shell.
This got me curious about how it works. My understanding of the principle is: concatenate the input URL with each entry from a wordlist, send a request, receive the response status code, and then decide what to do based on that status code...
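The concatenation step described above can be sketched without any network access. Note that naive string concatenation (`url + dir`) breaks when a wordlist line lacks a leading slash, so `urllib.parse.urljoin` is a safer choice. This is only a sketch; the base URL and paths below are made up for illustration:

```python
from urllib.parse import urljoin

def build_targets(base_url, words):
    """Join a base URL with each wordlist entry to get candidate URLs."""
    # urljoin needs the base to end with '/' so paths append instead of replace
    if not base_url.endswith('/'):
        base_url += '/'
    # strip whitespace and any leading slash so every entry joins the same way
    return [urljoin(base_url, w.strip().lstrip('/')) for w in words if w.strip()]

targets = build_targets('http://192.168.21.42', ['admin/', '/login.php', 'robots.txt'])
# → ['http://192.168.21.42/admin/', 'http://192.168.21.42/login.php',
#    'http://192.168.21.42/robots.txt']
```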
Here is my code:
import requests

def httpContext(file_name):
    # Check each URL saved in Good.txt for a keyword in its page source
    with open(file_name, 'r', encoding='utf-8') as good_file:
        for line in good_file:
            url = line.strip()  # strip the trailing newline
            if not url:
                continue
            try:
                response = requests.get(url, timeout=5)  # send the request
            except requests.exceptions.RequestException as e:
                print(f"An error occurred: {e}")
                continue
            contents = response.text.lower()  # page source, lowercased
            if 'key' in contents:
                with open('httpContext.txt', 'a', encoding='utf-8') as output_file:
                    output_file.write(url + '\n')

url = 'http://192.168.21.42'  # target URL
with open('DIR.txt', 'r', encoding='utf-8') as f, open('Good.txt', 'w', encoding='utf-8') as good_file:
    for line in f:
        dir = line.strip()  # strip the trailing newline
        full_url = url + dir  # concatenate base URL and wordlist entry
        try:
            response = requests.get(full_url, timeout=5)
            status_code = response.status_code
            print(f"[{status_code}]{full_url}")
            if status_code == 200:
                good_file.write(full_url + '\n')
        except requests.exceptions.RequestException as e:
            print(f"An error occurred: {e}")

httpContext('Good.txt')
In the code, DIR.txt is the wordlist file, Good.txt stores the URLs that were reached successfully, and httpContext.txt stores the URLs whose page source contains the keyword.
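For reference, DIR.txt is just one path per line. A minimal made-up example (the actual wordlist entries will differ; note that each entry needs a leading slash for the `url + dir` concatenation above to produce a valid URL):

```
/admin/
/login.php
/robots.txt
/backup.zip
```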
Test run:
The DIR.txt file:
Run the .py file.
Result: