Scraping Tieba wallpapers with BeautifulSoup and saving them locally
First, inspect the page: every time you flip to the next page, the pn parameter in the URL changes, so we can crawl every page of the thread just by changing the value of pn.
Next, notice that the div circled in red in the screenshot contains all of the page's content, including the images, which sit under it.
Putting this together, we can first grab that div and then iterate over the pages to pull out all of the images. The code is as follows:
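To make the pagination idea concrete, here is a minimal sketch showing how Python's str.format fills in pn to produce the URL for any page (the URL template is the one used throughout this post):

```python
# URL template: pn selects the page within the thread
url = 'https://tieba.baidu.com/p/4847606272?pn={page}'

# Build the URLs for the first three pages
for page in range(1, 4):
    print(url.format(page=page))
```

The same `url.format(page=page)` call appears inside the crawl loop below.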
from bs4 import BeautifulSoup
import requests

url = 'https://tieba.baidu.com/p/4847606272?pn={page}'
heads = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36",
    "Connection": "keep-alive",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.8"
}
page = 0
# Crawl each page in a loop
while True:
    page += 1
    # Stop after page 37
    if page == 38:
        break
    reqone = requests.get(url.format(page=page), headers=heads)
    # Create the BeautifulSoup object
    Html = BeautifulSoup(reqone.text, features="lxml")
    print("Page: ", url.format(page=page))
    # Locate the post list with a CSS selector
    ck1 = Html.select('#j_p_postlist')
    # Iterate and pull out all of the image tags
    for ck2 in ck1:
        ck3 = str(ck2.select('.BDE_Image'))
        print(ck3)
Running this, we successfully get the image tags from every page. But to save the files locally, we still need to extract the image URL from each tag's src attribute.
The final code is as follows:
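As an aside, a regex (used below) is one way to pull out the src values; BeautifulSoup can also read the attribute directly from each matched tag, which avoids writing a pattern at all. A minimal sketch on a stand-alone HTML fragment (the fragment is made up here to mimic the post-list structure):

```python
from bs4 import BeautifulSoup

# A made-up fragment mimicking the thread's post-list markup
html = ('<div id="j_p_postlist">'
        '<img class="BDE_Image" src="https://imgsa.baidu.com/forum/a/b/c.jpg">'
        '<img class="BDE_Image" src="https://imgsa.baidu.com/forum/d/e/f.jpg">'
        '</div>')
soup = BeautifulSoup(html, 'html.parser')
# Each matched Tag exposes its attributes like a dict
srcs = [img['src'] for img in soup.select('#j_p_postlist .BDE_Image')]
print(srcs)
```

Either approach yields the same list of links; the post sticks with the regex version below.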
from bs4 import BeautifulSoup
import requests
import re

url = 'https://tieba.baidu.com/p/4847606272?pn={page}'
heads = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36",
    "Connection": "keep-alive",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.8"
}
page = 0
n = 0  # running counter used to name the saved files
# Crawl each page in a loop
while True:
    page += 1
    # Stop after page 37
    if page == 38:
        break
    reqone = requests.get(url.format(page=page), headers=heads)
    # Create the BeautifulSoup object
    Html = BeautifulSoup(reqone.text, features="lxml")
    print("Page: ", url.format(page=page))
    # Locate the post list with a CSS selector
    ck1 = Html.select('#j_p_postlist')
    # Iterate over all of the image tags
    for ck2 in ck1:
        ck3 = str(ck2.select('.BDE_Image'))
        # Pull the image links out of the src attributes with a regex
        ck4 = re.findall(r'src="(https://imgsa\.baidu\.com/forum/\w\S\w+/\w+\S\w+/\w+\.jpg)', ck3)
        for ck5 in ck4:
            res = requests.get(ck5, headers=heads)
            n += 1
            # Save each image under d:/pic/ with an incrementing filename
            with open('d:/pic/' + str(n) + '.jpg', 'wb') as f:
                f.write(res.content)
The run results are as follows:
Success!