Web Scraping
奋斗的小创
Python 3.8: implementing various encryption schemes
Preview (truncated):
    # base64
    import base64
    data = 'hello, world'
    encrypt = base64.b64encode(data.encode("utf8"))
    decrypt = base64.b64decode(encrypt)
    # MD5
    import hashlib
    def encrypt_md5(data):
        md5 = hashlib.md5()
        md5.update(data.encode())
        encry…
Original · 2021-02-07 11:24:28 · 941 views · 1 comment
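The preview cuts off inside encrypt_md5; a complete, runnable version of both helpers might look like this (the hexdigest return is my completion of the truncated function):

```python
import base64
import hashlib

def b64_roundtrip(text):
    # base64-encode, then decode back to show the transform is lossless.
    encoded = base64.b64encode(text.encode("utf8"))
    return base64.b64decode(encoded).decode("utf8")

def encrypt_md5(data):
    # MD5 digest of the UTF-8 bytes, as a 32-character hex string.
    md5 = hashlib.md5()
    md5.update(data.encode())
    return md5.hexdigest()

print(b64_roundtrip('hello, world'))  # hello, world
print(encrypt_md5('abc'))             # 900150983cd24fb0d6963f7d28e17f72
```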
AES encryption on a provincial Housing and Urban-Rural Construction site
Target site: aHR0cCUzQS8vemp0LmdhbnN1Lmdvdi5jbi8= (page: aHR0cCUzQS8vemp0LmdhbnN1Lmdvdi5jbi8vaW5kZXgvaG9tZS5odG0=)
Preview (truncated):
    var CryptoJS = CryptoJS || function(u, p) {
        var d = {}
          , l = d.lib = {}
          , s = function() {}
          , t = l.Base = {
            extend:…
Original · 2020-08-17 10:11:23 · 10200 views · 0 comments
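CryptoJS pads AES input with PKCS7 before encrypting. When replaying such a scheme in Python, the AES primitive itself comes from a third-party library (e.g. pycryptodome), but the padding step can be sketched with the stdlib alone:

```python
def pkcs7_pad(data, block_size=16):
    # Append n bytes of value n so the length becomes a block multiple.
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def pkcs7_unpad(data):
    # The last byte says how many padding bytes to strip.
    return data[:-data[-1]]

padded = pkcs7_pad(b'hello, world')
print(len(padded))           # 16
print(pkcs7_unpad(padded))   # b'hello, world'
```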
Generating the signature parameter for a news site
Target sites: aHR0cHMlM0EvL20uaXRvdWNodHYuY24vYXJ0aWNsZS8yOTkxOGFlYjljMGY1MDMzYTY0OGQwNjdjODYzZTA2Mg== and aHR0cHMlM0EvL2JvbHVvLW0uaXRvdWNodHYuY24vJTIzL25ld3MvNzgx (both sites use the same encryption).
Note: the crypto-js front-end crypto library must be placed in the same folder.
Preview (truncated):
    var CryptoJS = require("crypto-js");
    var C = CryptoJS;
    var C_li…
Original · 2020-08-17 10:05:57 · 393 views · 0 comments
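The site's actual signature algorithm is not reproduced here; as a generic illustration, request signatures of this kind are often an HMAC over the request path plus a timestamp. A stdlib sketch (the scheme, field names, and secret are hypothetical, not this site's real algorithm):

```python
import hashlib
import hmac

def make_signature(path, timestamp, secret):
    # Hypothetical scheme: HMAC-SHA256 over path + timestamp, hex-encoded.
    msg = (path + timestamp).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

sig = make_signature('/article/123', '1597629957', b'demo-secret')
print(len(sig))  # 64
```

Because the digest is deterministic, the server can recompute it from the same fields and reject any request whose signature does not match.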
Base64 encryption on a news site
Target site: aHR0cHMlM0EvL3d3dy4zNjBrdWFpLmNvbS85ZGVmOTkyYzYxNjNkYTJjMw==
Original · 2020-08-10 10:12:05 · 3467 views · 0 comments
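The target URLs throughout these posts are base64-encoded, percent-escaped strings. A sketch of that decoding convention, demonstrated on a made-up example.com URL rather than any post's actual target:

```python
import base64
from urllib.parse import unquote

def decode_target(encoded):
    # The blog stores targets as base64(percent-encoded URL).
    return unquote(base64.b64decode(encoded).decode('utf8'))

# Build a sample the same way, using a placeholder URL.
sample = base64.b64encode('https%3A//example.com/page'.encode()).decode()
print(decode_target(sample))  # https://example.com/page
```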
sojson encryption on the Anhui bidding platform
Preview (truncated):
    import re
    import requests
    import execjs

    session = requests.session()

    def get_sojson():
        url = 'http://www.ahtba.org.cn/'
        content = session.get(url).text
        if 'sojson.v5' in content:
            js = re.findall('''<script type="text/javascript"&…
Original · 2020-07-27 10:35:21 · 268 views · 0 comments
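The get_sojson branch hinges on detecting the 'sojson.v5' marker and pulling the inline obfuscated script body out of the HTML with a regex. That step can be exercised on a synthetic page (the HTML below is made up; the live page embeds a much larger obfuscated blob, which is then run with execjs to compute the anti-bot cookie):

```python
import re

html = (
    '<html><head>'
    '<script type="text/javascript">var _0x1a2b = 1; /* sojson.v5 */</script>'
    '</head></html>'
)

if 'sojson.v5' in html:
    # Non-greedy capture grabs just the inline script body; re.S lets
    # the dot cross newlines in real multi-line pages.
    js = re.findall('<script type="text/javascript">(.*?)</script>', html, re.S)[0]
    print('sojson.v5' in js)  # True
```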
JS encryption
JS encryption on the China Railway Major Bridge Engineering Group site.
Preview (truncated):
    def yunsuoautojump():
        location = "/tabid/3091/Default.aspx?security_verify_data=" + stringToHex()
        return location

    def stringToHex():
        l = ''
        width = 1366
        height…
Original · 2020-07-27 10:28:51 · 212 views · 0 comments
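The preview truncates inside stringToHex. For the common "yunsuo" guard this function hex-encodes the screen dimensions ("width,height") character by character; a Python sketch under that assumption (the real JS on this site may include further fields):

```python
def string_to_hex(width=1366, height=768):
    # Hex-encode each character of "width,height", e.g. '1' -> '31'.
    s = '%d,%d' % (width, height)
    return ''.join(format(ord(c), 'x') for c in s)

def yunsuo_auto_jump():
    # Rebuild the redirect URL carrying the verify data.
    return '/tabid/3091/Default.aspx?security_verify_data=' + string_to_hex()

print(string_to_hex())  # 313336362c373638
```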
antipas parameter encryption on a used-car site
Preview (truncated):
    session = requests.session()
    url = 'https://www.guazi.com/bj/dazhong/'
    content_html = session.get(url).content.decode('utf-8')
    js = re.findall('eval\((.*)\)\;var\s*?value', content_html)[0]
    js_2 = re.findall('var\s*?value.*', content_html)[0].repl…
Original · 2020-08-17 15:33:00 · 87 views · 0 comments
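The two findall calls split the page into the eval-wrapped packer and the statement that assigns the antipas value. They can be exercised against a tiny synthetic stand-in for the real page (the real obfuscated blob is far larger):

```python
import re

# Synthetic stand-in for the obfuscated page.
content_html = 'a;eval(function(p){return p}("packed"));var value = anti("q","r");b'

# Greedy group captures everything between eval( and the last ) before ;var value.
js = re.findall(r'eval\((.*)\)\;var\s*?value', content_html)[0]
# Second pattern grabs the assignment statement itself.
js_2 = re.findall(r'var\s*?value.*', content_html)[0]
print(js)    # function(p){return p}("packed")
print(js_2)  # var value = anti("q","r");b
```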
JS encryption: sojson.v5
Preview (truncated):
    import requests
    import re
    import execjs

    url = 'http://jyt.ah.gov.cn'
    session = requests.session()
    text = session.get(url).text
    # grab the JS code
    js = re.findall('<script type="text/javascript">(.*?)</…
Original · 2020-07-27 10:39:53 · 2982 views · 1 comment
Multi-threaded crawler
Preview (truncated):
    # A plain multi-threaded crawler
    import threading
    import requests
    import time

    link_list = []
    with open('alexa.txt', 'r') as file:
        file_list = file.readlines()
        for each in file_list:
            link = each.replace('\n', '')
            link_list.append(link)

    class MyThread(thre…
Original · 2020-07-27 10:39:42 · 144 views · 0 comments
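The preview stops at the MyThread class definition. A self-contained completion of the same pattern, with a stub fetch function in place of live requests.get calls so it runs offline (the queue-based division of work is my completion, not necessarily the post's exact code):

```python
import queue
import threading

def fetch(link):
    # Stub standing in for requests.get(link).text in the real crawler.
    return 'fetched ' + link

class MyThread(threading.Thread):
    def __init__(self, q, results, lock):
        super().__init__()
        self.q, self.results, self.lock = q, results, lock

    def run(self):
        # Pull links until the queue is drained; append under a lock.
        while True:
            try:
                link = self.q.get_nowait()
            except queue.Empty:
                break
            with self.lock:
                self.results.append(fetch(link))

link_list = ['http://a.example', 'http://b.example', 'http://c.example']
q = queue.Queue()
for link in link_list:
    q.put(link)

results, lock = [], threading.Lock()
threads = [MyThread(q, results, lock) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 3
```

Sharing one Queue gives dynamic load balancing: fast threads simply pull more links, instead of each thread being pinned to a fixed slice of the list.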