Youdao Translate: https://fanyi.youdao.com/
Packet capture shows the POST payload sent by the page. Among its fields, salt, sign, lts, and bv make up the anti-crawler mechanism: their values change with every request.
Open developer tools → Network, locate the JS file, pretty-print it, and search for these names; all four parameters can be found there.
In short: bv is effectively fixed — it is the MD5 of navigator.appVersion, which is just the browser's User-Agent; ts is a millisecond timestamp (sent as lts); salt is that timestamp with a random digit appended; and sign is another MD5, where key is the word being queried:
bv = n.md5(navigator.appVersion)
ts = "" + (new Date).getTime()
salt = ts + parseInt(10 * Math.random(), 10)
sign = n.md5("fanyideskweb" + key + salt + "Y2FYu%TNSbMCxc3t2u^XT")
Now reproduce these four parameters in Python. First ts (sent as lts) and salt:
def get_salt():
    # (new Date).getTime() returns milliseconds, so scale time.time() accordingly
    lts_get = int(time.time() * 1000)
    # the JS appends one random digit (0-9) to the timestamp string
    salt_get = str(lts_get) + str(random.randint(0, 9))
    return lts_get, salt_get
Then bv and sign:
def get_md5(v):
    # MD5 cannot be reversed, but identical input always gives identical output,
    # so known plaintexts can be recovered by lookup/collision attacks
    # navigator.appVersion is just the User-Agent
    bv = hashlib.md5(ua_header["User-Agent"].encode("utf-8")).hexdigest()
    # md5() takes bytes; build sign with a fresh object — reusing the bv object
    # would mix the User-Agent bytes into the sign digest
    # the salt must be the same module-level value that goes into the payload
    sign_get = "fanyideskweb" + v + str(salt) + "Y2FYu%TNSbMCxc3t2u^XT"
    sign = hashlib.md5(sign_get.encode("utf-8")).hexdigest()
    return bv, sign
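As a quick sanity check, the shapes of these parameters can be verified offline. This is a standalone sketch with a hard-coded User-Agent and the sample word "hello", not the script itself:

```python
import hashlib
import random
import time

USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # any UA string works for the demo
SECRET = "Y2FYu%TNSbMCxc3t2u^XT"

lts = int(time.time() * 1000)                # milliseconds, like (new Date).getTime()
salt = str(lts) + str(random.randint(0, 9))  # timestamp string + one random digit

bv = hashlib.md5(USER_AGENT.encode("utf-8")).hexdigest()
sign = hashlib.md5(("fanyideskweb" + "hello" + salt + SECRET).encode("utf-8")).hexdigest()

print(salt, bv, sign)  # salt: all digits; bv and sign: 32 hex chars each
```

Note that each digest gets its own md5 object — hashlib objects accumulate every update() call, so reusing one object would chain the inputs together.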
The complete code:
from urllib import request
from urllib import parse
import time
import random
import hashlib
def get_salt():
    # (new Date).getTime() returns milliseconds, so scale time.time() accordingly
    lts_get = int(time.time() * 1000)
    # the JS appends one random digit (0-9) to the timestamp string
    salt_get = str(lts_get) + str(random.randint(0, 9))
    return lts_get, salt_get
def get_md5(v):
    # MD5 cannot be reversed, but identical input always gives identical output,
    # so known plaintexts can be recovered by lookup/collision attacks
    # navigator.appVersion is just the User-Agent
    bv = hashlib.md5(ua_header["User-Agent"].encode("utf-8")).hexdigest()
    # md5() takes bytes; build sign with a fresh object — reusing the bv object
    # would mix the User-Agent bytes into the sign digest
    # the salt must be the same module-level value that goes into the payload
    sign_get = "fanyideskweb" + v + str(salt) + "Y2FYu%TNSbMCxc3t2u^XT"
    sign = hashlib.md5(sign_get.encode("utf-8")).hexdigest()
    return bv, sign
url = "https://fanyi.youdao.com/translate?smartresult_o=dict&smartresult=rule"
ua_header = {
    "Host": "fanyi.youdao.com",
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "X-Requested-With": "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
    "sec-ch-ua-mobile": "?0"
}
print("Enter the word to translate:")
content = input()
lts, salt = get_salt()
bv, sign = get_md5(content)
# POST data as captured from Youdao Translate
post_json = {
    "i": content,
    "from": "AUTO",
    "to": "AUTO",
    "smartresult": "dict",
    "client": "fanyideskweb",
    "salt": salt,
    "sign": sign,
    "lts": lts,
    "bv": bv,
    "doctype": "json",
    "version": "2.1",
    "keyfrom": "fanyi.web",
    "action": "FY_BY_REALTlME"
}
post_data = parse.urlencode(post_json)
ua_request = request.Request(url=url, data=post_data.encode("utf-8"), headers=ua_header)
html = request.urlopen(ua_request).read().decode("utf-8")
print(html)
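The response body is JSON. Assuming the structure seen in captures — an errorCode plus nested translateResult lists — the translation can be pulled out like this (shown here against a canned sample string rather than a live response):

```python
import json

# canned sample shaped like a captured response (structure assumed, not guaranteed)
sample = '{"errorCode":0,"translateResult":[[{"src":"hello","tgt":"你好"}]]}'
result = json.loads(sample)
if result.get("errorCode") == 0:
    translation = result["translateResult"][0][0]["tgt"]
    print(translation)
```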
Final notes
In url = "https://fanyi.youdao.com/translate?smartresult_o=dict&smartresult=rule", remove the _o and the four values can stay fixed instead of being simulated on every request — effectively, the anti-crawler mechanism is gone.
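A minimal sketch of that simplified request. The network call is left commented out so the snippet runs offline, and whether the server still accepts this stripped-down field set may change over time:

```python
from urllib import parse, request

url = "https://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule"  # no _o
post_json = {
    "i": "hello",  # the word to translate
    "from": "AUTO",
    "to": "AUTO",
    "doctype": "json",
}
post_data = parse.urlencode(post_json).encode("utf-8")
# req = request.Request(url=url, data=post_data)
# print(request.urlopen(req).read().decode("utf-8"))
print(post_data.decode("ascii"))
```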