This post introduces two web-based ways to scrape WeChat Official Account articles.
方法一 (Method 1): scraping via the Official Account backend API (full coverage)
Prerequisite: you need a registered WeChat Official Account.
1.1 Finding and analyzing the API
In the account backend, go to 内容互动 (Content & Interaction) → 图文消息 (Article message) → 超链接 (Hyperlink) → search for the target account.
On that page, open Chrome DevTools and capture the network traffic; the API request data looks like the following.
Comparing requests across pages shows that the `begin` parameter controls paging: adding 5 moves to the next page. The two screenshots below show the parameters for page 1 and page 2 respectively.
![](https://img-blog.csdnimg.cn/821236a6277e4fc3aabf266de2d57c4d.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA55qW5rid,size_20,color_FFFFFF,t_70,g_se,x_16)
![](https://img-blog.csdnimg.cn/fc63c9bebf7d4c10ae54cf97f60bd6c3.png?x-oss-process=image/watermark,type_ZHJvaWRzYW5zZmFsbGJhY2s,shadow_50,text_Q1NETiBA55qW5rid,size_20,color_FFFFFF,t_70,g_se,x_16)
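The pagination rule observed above can be written as a small helper (a sketch; the page size of 5 matches the captured requests, where page 1 has `begin=0` and page 2 has `begin=5`):

```python
def page_params(page_index, count=5):
    """Build the begin/count query parameters for a 0-based page index.

    Each page holds `count` articles, so `begin` advances by `count`
    per page, matching the captured requests.
    """
    return {'begin': str(page_index * count), 'count': str(count)}
```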
1.2 Rate limits
The API is rate-limited. On the first run you can fetch roughly 80–100 pages before being banned for about 2 hours; after each ban lifts, the number of pages you can fetch drops below the previous run. Three bans in one day trigger a full-day ban. As a rough estimate, you can fetch about 200–250 pages per day.
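A simple way to detect the ban is to check the `ret` field of `base_resp` in the JSON response before touching the article list (a sketch; the code `200013` appears in the full script below, and the 2-hour wait is only the observation above, not a documented value):

```python
FREQ_CONTROL_RET = 200013       # ret code returned when frequency-limited
BAN_WAIT_SECONDS = 2 * 60 * 60  # observed ban length: roughly 2 hours

def should_pause(resp_json):
    """Return True if the response indicates frequency control."""
    return resp_json.get('base_resp', {}).get('ret') == FREQ_CONTROL_RET
```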
1.3 Full code
```python
import requests
import random
import time

user_agent_list = [
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; GT-P1000 Build/FROYO) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
]
headers = {
    'user-agent': random.choice(user_agent_list),
    'cookie': 'your cookie here',
}
params = {
    'action': 'list_ex',
    'begin': '0',
    'count': '5',
    'fakeid': 'MzA5MjMzOTY4Mw==',
    'type': '9',
    'query': '',
    'token': '1420937137',
    'lang': 'zh_CN',
    'f': 'json',
    'ajax': '1',
}
url = 'https://mp.weixin.qq.com/cgi-bin/appmsg?'

i = 0
articles = []
while True:
    params['begin'] = str(i * 5)  # begin advances by 5 per page
    try:
        resp = requests.get(url, headers=headers, params=params).json()
        # Check for frequency control before touching the article list
        if resp['base_resp']['ret'] == 200013:
            print('frequency control, stop at page {}'.format(i))
            break
        article_list = resp['app_msg_list']
        if len(article_list) == 0:
            print('all articles parsed')
            break
        for article in article_list:
            create_time = time.strftime('%Y-%m-%d', time.localtime(article['create_time']))
            articles.append([create_time, article['title'], article['link']])
        print('page {} done'.format(i))
    except Exception as e:
        print(e)
        break
    time.sleep(random.randint(2, 4))
    i += 1
```
Summary: this method retrieves every article, but it is not suited to scraping many accounts in a short time; the cost is high.
方法二 (Method 2): Sogou WeChat search (partial coverage)
Site: Sogou WeChat (https://weixin.sogou.com/)
```python
# Keyword-search scraping
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.wait import WebDriverWait
import time
import re
import random
import pandas as pd

opt = webdriver.ChromeOptions()
opt.add_experimental_option('excludeSwitches', ['enable-automation'])
driver = webdriver.Chrome(options=opt)
driver.get('https://weixin.sogou.com/')
wait = WebDriverWait(driver, 10)
word_input = wait.until(ec.presence_of_element_located((By.NAME, 'query')))
word_input.send_keys('金工于明明预测分值')
driver.find_element(By.XPATH, "//input[@class='swz']").click()
time.sleep(2)

data = []

def get_scores():
    """Parse date and score out of each result snippet on the current page."""
    rst = driver.find_elements(By.XPATH, '//div[@class="txt-box"]/p')
    for title in rst:
        print(title.text)
        try:
            date = re.search(r'\d+', title.text).group(0)
            scores = re.findall(r'预测分值:(.*?)分', title.text)[0]
            data.append([date, scores])
        except Exception as e:
            print(e)

# The first 10 result pages are available without logging in
for i in range(10):
    get_scores()
    if i == 9:
        # Page 10 reached: stop clicking "next"
        break
    driver.find_element(By.ID, 'sogou_next').click()
    time.sleep(random.randint(3, 5))

# Pages beyond 10 require login: open the login panel and scan the QR code
driver.find_element(By.NAME, 'top_login').click()
# Wait until the "next" button reappears, i.e. login has completed
while True:
    try:
        next_page = driver.find_element(By.ID, 'sogou_next')
        break
    except Exception:
        time.sleep(2)
next_page.click()

# Logged in: keep scraping until there is no "next" button left
while True:
    get_scores()
    try:
        driver.find_element(By.ID, 'sogou_next').click()
        time.sleep(random.randint(3, 5))
    except Exception:
        break

score_data = pd.DataFrame(data, columns=['日期', '预测分值'])
score_data.to_csv('./Desktop/score_data.csv', index=False, encoding='gbk')
```
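The regex extraction inside `get_scores` can be checked offline against a sample snippet line (the sample text below is made up for illustration; note the full-width colon `:`, which is what appears in the real snippets):

```python
import re

def parse_score_line(text):
    """Extract (date, score) from a result snippet, mirroring get_scores."""
    date = re.search(r'\d+', text).group(0)
    score = re.findall(r'预测分值:(.*?)分', text)[0]
    return date, score
```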
Summary: no rate limit, but coverage is partial; this method suits scraping a known column, i.e. articles containing a specific keyword.