To keep myself from forgetting, I'm writing up the rough process here.
It's still incomplete; I'll revise it later.
I had run into a font anti-scraping site before, but it was a simple one: the font file sat right in the page source. Dianping is different. Open the page and you can see that the review count, average price, address, and taste/environment/service scores are all obfuscated; their class is shopNum.
The district names are obfuscated too; their class is tagName.
The detailed addresses are also obfuscated; their class is address.
Next, find the CSS file that declares the fonts. Click an obfuscated value and you'll see a font-family declaration in the Styles panel on the right, wrapped inside a .shopNum rule. Does .shopNum look familiar? A leading dot is a class selector, so it matches exactly the class="shopNum" we saw on the obfuscated elements. In the top-right corner there's a link to the CSS file; open it and you'll find 4 woff files, two of which are identical, so 3 distinct fonts in total.
Comparing them shows that every obfuscated element's class name corresponds one-to-one to a rule in that CSS file.
The next step is to extract the woff URLs; a regex is enough.
Right-click the CSS file's title, choose Copy link address, and you get:
self.zitiUrl = http://s3plus.meituan.net/v1/mss_0a06a471f9514fc79c981b5466f56b91/svgtextcss/12fd772aee773d8a96ad5a354d8b595a.css. Then request this URL, complete the woff URLs found inside, request each woff URL, and save the responses locally; you end up with 3 font files.
def get_ziti(self):  # download the font files to disk, given their URLs
    res = requests.get(self.zitiUrl)
    font = re.findall(r'font-family: "(.*?)";src.*?(//s3plus\.meituan\.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/\w+\.woff)', res.text, re.S)
    font_list = ['https:' + x[1] for x in font]
    font_name = [x[0] for x in font]
    for i in font_list:
        result = requests.get(i)
        file_name = i.split('/')[-1]
        with open(file_name, 'wb') as f:
            f.write(result.content)
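As noted above, the CSS lists four woff entries but two of them share one URL, so it can help to de-duplicate before downloading. A minimal sketch (the font-family names and URLs below are made-up placeholders, not the real values from the CSS):

```python
# Hypothetical (font-family, url) pairs as the regex above would return them;
# note the first and last entries point at the same woff file.
font = [('PingFangSC-Regular-shopNum', '//example.net/font/ebb40305.woff'),
        ('PingFangSC-Regular-tagName', '//example.net/font/9b3f551f.woff'),
        ('PingFangSC-Regular-address', '//example.net/font/1d742900.woff'),
        ('PingFangSC-Regular-reviewTag', '//example.net/font/ebb40305.woff')]

# dict.fromkeys keeps insertion order while dropping the repeated URL
font_list = list(dict.fromkeys('https:' + x[1] for x in font))
print(len(font_list))  # -> 3
```

This way the duplicate font is only downloaded once.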
The result looks like this:
Now process these font files with Python; one module is needed (note the capitalization: the class is TTFont):
from fontTools.ttLib import TTFont
def parse_ziti(self, class_name, datas):  # datas is the pre-processed list passed in
    # Different fields are obfuscated with different font files,
    # so pick the file by class name
    if class_name == 'shopNum':    # review count, average price, taste/environment/service scores
        woff_name = 'ebb40305.woff'
    elif class_name == 'tagName':  # shop category, business district
        woff_name = '9b3f551f.woff'
    else:                          # detailed shop address
        woff_name = '1d742900.woff'
    font_data = TTFont(woff_name)
    font_data.saveXML(woff_name.replace('.woff', '.xml'))  # save an XML dump for analysis (don't overwrite the woff itself)
The resulting XML looks like this:
Then I opened the font in High-Logic FontCreator (easy to find and download with a quick search).
Opened in FontCreator, the font looks like this; the only difference between the files is the code displayed above each glyph.
Now look at what the page source shows where the digits should be.
(Look carefully when matching things up; make absolutely sure you're not mixing up the font files.)
The entity in the page source renders as 0 on the page, and its last four hex digits, f530, match the code $F530 shown above the 0 in the shopNum font file. Since the glyph names in the font file and in the XML also correspond one-to-one, what remains is to work out the mapping between the font file and the page source.
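The check described above is pure string matching: the hex code inside the page entity, the $-prefixed label FontCreator displays, and the uni-prefixed glyph name in the XML all encode the same code point. A small sketch using the f530 example from above:

```python
# The same code point in three notations (values taken from the example above):
entity = '&#xf530;'   # as it appears in the page source
fc_label = '$F530'    # as FontCreator displays it above the glyph

hex_code = entity[3:-1]                    # strip '&#x' and ';' -> 'f530'
assert hex_code.upper() == fc_label.lstrip('$')   # same code point
glyph_name = 'uni' + hex_code              # glyph name as it appears in the XML
print(glyph_name)  # -> unif530
```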
Take the names from the XML and the characters from the font file, pair them up into a dict; then rewrite the page source into the same format as the XML names and look each piece up in the dict.
Since I didn't know how to extract the characters from the font file myself, I used a ready-made list from someone else:
    words = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下臬凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'
    # the first two glyphs aren't real characters, so skip the first two names in the XML to stay aligned with words
    gly_list = font_data.getGlyphOrder()[2:]  # all names from the XML: ['unie8a0', 'unie910', 'unif6a4', 'unif3d3', 'unie2f4', 'unie7a6', 'uniea32', 'unif0f9', 'unie2ac', ...]
    new_dict = {}  # map glyph name -> character
    for index, value in enumerate(words):
        new_dict[gly_list[index]] = value
    print(new_dict)
    rel = ''  # empty string used to join the decoded pieces
    for j in datas:
        if j.startswith('u'):
            rel += new_dict[j]
        else:
            rel += j
    return rel
The result looks like this:
That completes the mapping from glyph names to characters. All that's left is to grab the data we need from the page, rewrite it into the same uni-prefixed format as the keys of new_dict, and look up the decoded values.
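That normalization can be sketched in isolation like this (the entity values and the tiny three-entry mapping are made up for illustration; the real new_dict is built from the font file as shown above):

```python
import re

# Hypothetical fragment of page source: '1' is plain text, the rest are
# obfuscated glyph entities.
html_ = '<b>1&#xe2ac;&#xf0f9;&#xe2ac;&#xe8a0;</b>'

# Step 1: keep each entity's hex code, replace the '&#x...;' wrapper with '*'.
html_ = re.sub(r"&#x(\w+?);", r"*\1", html_)   # '<b>1*e2ac*f0f9*e2ac*e8a0</b>'

# Step 2: after xpath extraction the text arrives as separate pieces;
# rewrite '*'-prefixed pieces into uni-style glyph names.
pieces = ['1', '*e2ac', '*f0f9', '*e2ac', '*e8a0']
keys = ['uni' + p.strip('*') if p.startswith('*') else p for p in pieces]
# keys == ['1', 'unie2ac', 'unif0f9', 'unie2ac', 'unie8a0']

# Step 3: look each key up in the glyph-name -> character dict
# (a made-up stand-in; plain pieces pass through unchanged).
new_dict = {'unie2ac': '8', 'unif0f9': '7', 'unie8a0': '0'}
decoded = ''.join(new_dict.get(k, k) for k in keys)
print(decoded)  # -> 18780
```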
def get_page_info(self):  # grab the data we need from the page
    # key_word = input('请输入需要搜索的关键字:')
    # response = requests.get(self.start_url.format(quote(key_word)), headers=self.headers)
    # print(response.status_code)
    # with open('dazhong.html', 'w', encoding='utf-8') as f:  # save the html locally for reuse; hammering the site gets your IP banned quickly
    #     f.write(response.text)
    with open('dazhong.html', 'r', encoding='utf-8') as f:
        html_ = f.read()
    html_ = re.sub(r"&#x(\w+?);", r"*\1", html_)  # keep each entity's hex code (the capture group), replace the '&#x...;' wrapper with '*'
    html = etree.HTML(html_)
    all_info = []
    li_list = html.xpath("//div[@class='content']/div/ul/li")  # one li per shop
    for li in li_list:
        item = {}
        item['店铺名'] = li.xpath('./div[2]/div/a/h4/text()')[0]
        item['推荐菜'] = li.xpath('./div[2]/div[4]/a//text()')
        if item['推荐菜']:  # xpath returns a list, which may be empty
            item['推荐菜'] = ','.join(item['推荐菜'])
        else:
            item['推荐菜'] = ''
        # class names of the obfuscated tags
        class_name = li.xpath("./div[2]/div[2]/a[1]/b/svgmtsi/@class")[0]
        tag_name = li.xpath('./div[2]/div[3]/a[2]/span/svgmtsi/@class')[0]
        addr_name = li.xpath('./div[2]/div[3]/span/svgmtsi/@class')[0]
        # decode the obfuscated data
        comment_num = li.xpath("./div[2]/div[2]/a[1]/b//text()")
        # review count pieces, e.g. comment_num = ['1', '*e2ac', '*f0f9', '*e2ac', '*e8a0']
        # strip the '*' and prepend 'uni'; plain pieces pass through unchanged,
        # giving comment_num_list = ['1', 'unie2ac', 'unif0f9', 'unie2ac', 'unie8a0']
        comment_num_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in comment_num]
        item['评价数'] = self.parse_ziti(class_name, comment_num_list)  # decode, e.g. {'评价数': '1870'}
        avg_price = li.xpath("./div[2]/div[2]/a[2]/b//text()")  # average price
        avg_price_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in avg_price]
        item['人均'] = self.parse_ziti(class_name, avg_price_list)
        shop_area = li.xpath('./div[2]/div[3]/a[2]/span//text()')  # business district
        shop_area_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_area]
        item['商圈'] = self.parse_ziti(tag_name, shop_area_list)
        shop_type = li.xpath('./div[2]/div[3]/a[1]/span//text()')  # shop category
        shop_type_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_type]
        item['分类'] = self.parse_ziti(tag_name, shop_type_list)
        shop_address = li.xpath('./div[2]/div[3]/span//text()')  # detailed address
        shop_address_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_address]
        item['地址'] = self.parse_ziti(addr_name, shop_address_list)
        zh_comment = li.xpath("./div[2]/span/span//text()")  # overall rating
        zh_comment_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in zh_comment]
        item['综合评分'] = self.parse_ziti(class_name, zh_comment_list)
        all_info.append(item)  # collect every shop's data in one list
    self.save_to_excel(all_info)  # hand everything to the save function
def save_to_excel(self, items):  # finally, save to Excel
    app = xw.App(visible=True, add_book=False)  # start an Excel instance
    wb = app.books.add()  # add a workbook
    sht = wb.sheets['sheet1']  # pick the sheet by name
    sht.range('a1').value = list(items[0].keys())  # write the header row starting at A1
    for i in range(len(items)):
        # write one row per item; a list assigned to a range fills the row
        value_list = list(items[i].values())
        sht.range('A{}'.format(i + 2)).value = value_list
    wb.save('dzdp.xlsx')
Final result:
Full code:
# encoding=utf-8
import os
import requests
import re
import random
import csv
import xlwt
import xlwings as xw
from urllib.parse import quote
from fontTools.ttLib import TTFont
from lxml import etree
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.9.168 Version/11.50",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; Tablet PC 2.0; .NET4.0E)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; InfoPath.3)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.41 Safari/535.1 QQBrowser/6.9.11079.201",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E) QQBrowser/6.9.11079.201",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.33 Safari/534.3 SE 2.X MetaSr 1.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36"
]
class DaZhong(object):
    def __init__(self):
        self.zitiUrl = "https://s3plus.meituan.net/v1/mss_0a06a471f9514fc79c981b5466f56b91/svgtextcss/12fd772aee773d8a96ad5a354d8b595a.css"
        # self.url = "http://www.dianping.com/chengdu/ch10"
        self.start_url = 'https://www.dianping.com/search/keyword/8/10_{}'
        self.headers = {
            'User-Agent': random.choice(USER_AGENTS),
            'Referer': 'http://www.dianping.com/',
            'Cookie': 'fspop=test; cy=8; cye=chengdu; _lx_utm=utm_source%3DBaidu%26utm_medium%3Dorganic; _lxsdk_cuid=1767479ff31c8-0efbdbacfb43d6-c791e37-1fa400-1767479ff31c8; _lxsdk=1767479ff31c8-0efbdbacfb43d6-c791e37-1fa400-1767479ff31c8; _hc.v=3ec04228-e642-fce0-7ced-b7b52019b143.1608271921; Hm_lvt_602b80cf8079ae6591966cc70a3940e7=1608271921; s_ViewType=10; _lxsdk_s=1767479ff32-418-c89-5b0%7C%7C32; Hm_lpvt_602b80cf8079ae6591966cc70a3940e7=1608271932'
        }

    def get_ziti(self):  # download the font files to disk, given their URLs
        res = requests.get(self.zitiUrl)
        font = re.findall(r'font-family: "(.*?)";src.*?(//s3plus\.meituan\.net/v1/mss_73a511b8f91f43d0bdae92584ea6330b/font/\w+\.woff)', res.text, re.S)
        font_list = ['https:' + x[1] for x in font]
        font_name = [x[0] for x in font]
        for i in font_list:
            result = requests.get(i)
            file_name = i.split('/')[-1]
            with open(file_name, 'wb') as f:
                f.write(result.content)
    def parse_ziti(self, class_name, datas):
        # different fields are obfuscated with different font files
        if class_name == 'shopNum':    # review count, average price, taste/environment/service scores
            woff_name = 'ebb40305.woff'
        elif class_name == 'tagName':  # shop category, business district
            woff_name = '9b3f551f.woff'
        else:                          # detailed shop address
            woff_name = '1d742900.woff'
        font_data = TTFont(woff_name)
        # font_data.saveXML(woff_name.replace('.woff', '.xml'))  # save an XML dump for analysis
        words = '1234567890店中美家馆小车大市公酒行国品发电金心业商司超生装园场食有新限天面工服海华水房饰城乐汽香部利子老艺花专东肉菜学福饭人百餐茶务通味所山区门药银农龙停尚安广鑫一容动南具源兴鲜记时机烤文康信果阳理锅宝达地儿衣特产西批坊州牛佳化五米修爱北养卖建材三会鸡室红站德王光名丽油院堂烧江社合星货型村自科快便日民营和活童明器烟育宾精屋经居庄石顺林尔县手厅销用好客火雅盛体旅之鞋辣作粉包楼校鱼平彩上吧保永万物教吃设医正造丰健点汤网庆技斯洗料配汇木缘加麻联卫川泰色世方寓风幼羊烫来高厂兰阿贝皮全女拉成云维贸道术运都口博河瑞宏京际路祥青镇厨培力惠连马鸿钢训影甲助窗布富牌头四多妆吉苑沙恒隆春干饼氏里二管诚制售嘉长轩杂副清计黄讯太鸭号街交与叉附近层旁对巷栋环省桥湖段乡厦府铺内侧元购前幢滨处向座下臬凤港开关景泉塘放昌线湾政步宁解白田町溪十八古双胜本单同九迎第台玉锦底后七斜期武岭松角纪朝峰六振珠局岗洲横边济井办汉代临弄团外塔杨铁浦字年岛陵原梅进荣友虹央桂沿事津凯莲丁秀柳集紫旗张谷的是不了很还个也这我就在以可到错没去过感次要比觉看得说常真们但最喜哈么别位能较境非为欢然他挺着价那意种想出员两推做排实分间甜度起满给热完格荐喝等其再几只现朋候样直而买于般豆量选奶打每评少算又因情找些份置适什蛋师气你姐棒试总定啊足级整带虾如态且尝主话强当更板知己无酸让入啦式笑赞片酱差像提队走嫩才刚午接重串回晚微周值费性桌拍跟块调糕'
        gly_list = font_data.getGlyphOrder()[2:]  # skip the first two glyphs, which aren't real characters
        # print(gly_list)  # ['unie8a0', 'unie910', 'unif6a4', 'unif3d3', 'unie2f4', 'unie7a6', 'uniea32', 'unif0f9', 'unie2ac']
        new_dict = {}
        for index, value in enumerate(words):
            new_dict[gly_list[index]] = value
        print(new_dict)
        rel = ''
        for j in datas:
            if j.startswith('u'):
                rel += new_dict[j]
            else:
                rel += j
        return rel
    def get_page_info(self):  # grab the data we need from the page
        # key_word = input('请输入需要搜索的关键字:')
        # response = requests.get(self.start_url.format(quote(key_word)), headers=self.headers)
        # print(response.status_code)
        # with open('dazhong.html', 'w', encoding='utf-8') as f:  # save the html locally for reuse; hammering the site gets your IP banned very quickly
        #     f.write(response.text)
        with open('dazhong.html', 'r', encoding='utf-8') as f:
            html_ = f.read()
        html_ = re.sub(r"&#x(\w+?);", r"*\1", html_)  # keep each entity's hex code, replace the '&#x...;' wrapper with '*'
        html = etree.HTML(html_)
        all_info = []
        li_list = html.xpath("//div[@class='content']/div/ul/li")  # one li per shop
        for li in li_list:
            item = {}
            item['店铺名'] = li.xpath('./div[2]/div/a/h4/text()')[0]
            item['推荐菜'] = li.xpath('./div[2]/div[4]/a//text()')
            if item['推荐菜']:  # xpath returns a list, which may be empty
                item['推荐菜'] = ','.join(item['推荐菜'])
            else:
                item['推荐菜'] = ''
            # class names of the obfuscated tags
            class_name = li.xpath("./div[2]/div[2]/a[1]/b/svgmtsi/@class")[0]
            tag_name = li.xpath('./div[2]/div[3]/a[2]/span/svgmtsi/@class')[0]
            addr_name = li.xpath('./div[2]/div[3]/span/svgmtsi/@class')[0]
            comment_num = li.xpath("./div[2]/div[2]/a[1]/b//text()")  # review count pieces, e.g. ['1', '*e2ac', '*f0f9', '*e2ac', '*e8a0']
            # strip the '*' and prepend 'uni'; plain pieces pass through: ['1', 'unie2ac', 'unif0f9', 'unie2ac', 'unie8a0']
            comment_num_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in comment_num]
            item['评价数'] = self.parse_ziti(class_name, comment_num_list)
            avg_price = li.xpath("./div[2]/div[2]/a[2]/b//text()")  # average price
            avg_price_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in avg_price]
            item['人均'] = self.parse_ziti(class_name, avg_price_list)
            shop_area = li.xpath('./div[2]/div[3]/a[2]/span//text()')  # business district
            shop_area_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_area]
            item['商圈'] = self.parse_ziti(tag_name, shop_area_list)
            shop_type = li.xpath('./div[2]/div[3]/a[1]/span//text()')  # shop category
            shop_type_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_type]
            item['分类'] = self.parse_ziti(tag_name, shop_type_list)
            shop_address = li.xpath('./div[2]/div[3]/span//text()')  # detailed address
            shop_address_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in shop_address]
            item['地址'] = self.parse_ziti(addr_name, shop_address_list)
            # comment_kouwei = li.xpath("./div[2]/span/span[1]/b//text()")  # taste score
            # if comment_kouwei:
            #     comment_kouwei_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in comment_kouwei]
            #     item['口味'] = self.parse_ziti(class_name, comment_kouwei_list)
            # else:
            #     item['口味'] = ''
            #
            # comment_huanjing = li.xpath("./div[2]/span/span[2]/b//text()")  # environment score
            # if comment_huanjing:
            #     comment_huanjing_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in comment_huanjing]
            #     item['环境'] = self.parse_ziti(class_name, comment_huanjing_list)
            # else:
            #     item['环境'] = ''
            #
            # comment_service = li.xpath("./div[2]/span/span[3]/b//text()")  # service score
            # if comment_service:
            #     comment_service_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in comment_service]
            #     item['服务'] = self.parse_ziti(class_name, comment_service_list)
            # else:
            #     item['服务'] = ''
            zh_comment = li.xpath("./div[2]/span/span//text()")  # overall rating
            zh_comment_list = ['uni' + i.strip('*') if i.startswith('*') else i for i in zh_comment]
            item['综合评分'] = self.parse_ziti(class_name, zh_comment_list)
            # self.save_to_csv(item)
            all_info.append(item)
        self.save_to_excel(all_info)
    # def save_to_csv(self, item):
    #     headline = ['店铺名\t', '推荐菜\t', '分类\t', '评价数\t', '人均\t', '口味\t', '环境\t', '服务\t', '商圈\t', '地址\t']
    #     if not os.path.exists('dazhong.csv'):
    #         with open('dazhong.csv', 'a', newline='', encoding='utf-8') as f:
    #             f.writelines(headline)
    #             f.writelines('\n')
    #     else:
    #         with open('dazhong.csv', 'a', newline='', encoding='utf-8') as f:
    #             f.writelines(item.values())
    #             f.writelines('\n')
    def save_to_excel(self, items):  # finally, save to Excel
        app = xw.App(visible=True, add_book=False)
        wb = app.books.add()
        sht = wb.sheets['sheet1']
        sht.range('a1').value = list(items[0].keys())  # header row at A1
        for i in range(len(items)):  # one row per item; don't hardcode the count
            value_list = list(items[i].values())
            sht.range('A{}'.format(i + 2)).value = value_list
        wb.save('dzdp.xlsx')
if __name__ == '__main__':
    dz = DaZhong()
    # dz.get_ziti()
    # dz.parse_ziti()
    dz.get_page_info()