Notes on scraping the images in an article

The full code is here: get1.py

Goal

Given the page below, grab every image in the article body:
https://zhuanlan.zhihu.com/p/109467098
and save them all into one folder. (A test case picked at random; it just happened to have plenty of images.)

Approach

First fetch the page with the requests library, then parse it with bs4 (the BeautifulSoup parsing library, nowadays shipped as the bs4 package), inspect the content to extract each image link, and download the links one by one (ideally with multiple threads, for speed).

Setup

First, install the requests and BeautifulSoup libraries (on PyPI the canonical package name for bs4 is beautifulsoup4):

python -m pip install requests
python -m pip install beautifulsoup4

Then try importing them; here only the function and class we actually need are imported:

from requests import get
from bs4 import BeautifulSoup as bsp

Also import the threading module, to speed up the downloads later:

from threading import Thread

Getting started

In the main function, fetch the page first. Be sure to send your own browser's cookie, otherwise you will get a 400 or 403, which is no good.
First build a request header:

	headers = {
		# Paste your own browser's cookie here (copy it from the
		# devtools Network tab); the author's original session cookie
		# is stale and has been replaced with a placeholder.
		"cookie":"<your zhihu.com cookie>",
		"host":"zhuanlan.zhihu.com",
		"user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36"
	}

Then try fetching and parsing the page to see the result:

	res = get(url="https://zhuanlan.zhihu.com/p/109467098",
				headers=headers)
	html = bsp(res.text, "html.parser")
	print(html)

The result looks fine, even though the console output is a mess…
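
Before digging into the HTML, it's worth confirming the request actually succeeded; this check isn't in the original script, just a suggested sanity test:

	print(res.status_code)    # expect 200; 400/403 means the cookie is missing or stale
	res.raise_for_status()    # raise immediately on a 4xx/5xx response
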
Looking at the page in the browser's devtools, all of the body images turn out to sit inside a single div whose class attribute is RichText ztext Post-RichText css-hnrfcf. So let's try to isolate that tag before going further, using find_all('div', class_="RichText ztext Post-RichText css-hnrfcf"), and print the result.
The raw dump is too messy to read, so print the len() of the result instead: it is exactly 1, so only this one div exists on the page, presumably the one holding the article body.
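
In code, that check looks roughly like this (divs is just a local name for the find_all result):

	divs = html.find_all("div",
				class_="RichText ztext Post-RichText css-hnrfcf")
	print(len(divs))    # prints 1: the article-body div is unique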

Digging further: all the images live in img tags, so is it enough to just find every img? Take index [0] of the div search result to get that unique div, then find_all its img tags and keep the result.
The length turns out to be 44! A rough manual count says that can't be right: the images we actually want number only twenty-odd. (44 is exactly twice 22, so each picture presumably appears twice in the markup, e.g. a lazy-load placeholder next to the real tag.)
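
That first attempt, as a sketch:

	main_div = divs[0]                       # the unique article div
	print(len(main_div.find_all("img")))     # prints 44: too many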

So fall back to the next best thing: each figure element contains exactly one img that holds an actual picture, so adjust the logic accordingly. The final version:

	html = bsp(res.text, "html.parser")
	main_div = html.find_all("div",
				class_="RichText ztext Post-RichText css-hnrfcf")[0]
	imgs = [i.find_all("img")[0] for i in main_div.find_all("figure")]
	print(len(imgs))

OK, exactly 22 images, the same as the manual count.

Now just iterate over imgs to pull each link from the src attribute, then download them one by one. The implementation is simple, so here is the code directly:

	links = [i['src'] for i in imgs]    # the real image addresses
	for index, l in enumerate(links):
		Thread(target=lambda idx, lk:open("res/%d.jpg" % idx, "wb+") \
					.write(get(lk).content),
				args=(index, l)).start()

This is written in an unusually dense style, so it may be hard to follow. In short: for each link, spawn a thread whose job is "open a file and write into it the content fetched from the web", start that thread, and move straight on to the next iteration. That hides the time spent waiting for each response and speeds things up considerably.

Remember to create the res/ directory beforehand.
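
If the dense lambda version is hard to read, here is an equivalent, more conventional sketch (not the original script; it also creates res/ automatically and joins the threads so the program waits for every download to finish):

	import os

	def download(idx, link):
		# fetch one image and write it to res/<idx>.jpg
		with open("res/%d.jpg" % idx, "wb") as f:
			f.write(get(link).content)

	os.makedirs("res", exist_ok=True)    # replaces the manual mkdir step
	threads = [Thread(target=download, args=(i, l))
				for i, l in enumerate(links)]
	for t in threads:
		t.start()
	for t in threads:
		t.join()    # block until every image has been written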

Full code and summary

The complete code:

from requests import get
from bs4 import BeautifulSoup as bsp
from threading import Thread

def main():
	headers = {
		# Paste your own browser's cookie here; without it the
		# request is rejected with a 400 or 403.
		"cookie":"<your zhihu.com cookie>",
		"host":"zhuanlan.zhihu.com",
		"user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.82 Safari/537.36"
	}
	res = get(url="https://zhuanlan.zhihu.com/p/109467098",
				headers=headers)
	html = bsp(res.text, "html.parser")
	# the article body lives in this single div
	main_div = html.find_all("div",
				class_="RichText ztext Post-RichText css-hnrfcf")[0]
	# exactly one img per figure: these are the pictures we want
	imgs = [i.find_all("img")[0] for i in main_div.find_all("figure")]
	# print(len(imgs))
	links = [i['src'] for i in imgs]
	# print(links)
	# one thread per image: download and write res/<index>.jpg
	for index, l in enumerate(links):
		Thread(target=lambda idx, lk:open("res/%d.jpg" % idx, "wb+") \
					.write(get(lk).content),
				args=(index, l)).start()

if __name__ == '__main__':
	main()

How to scrape a page full of images:
First, fetch the whole page with the crawler and parse it.
Then, depending on the page, use BeautifulSoup to pick out the src attribute of the img tags you need, i.e. the real storage address of each image.
Finally, weigh the speed requirement: if it has to be fast, use multiple threads while iterating over the links and downloading each one. A reusable sketch of this recipe follows below.
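
As a generic template (download_images, save_one and the div_class parameter are illustrative names, not from the original post; it assumes the same get/bsp imports as above):

	from concurrent.futures import ThreadPoolExecutor

	def save_one(idx, link, out_dir):
		# download one image and write it to <out_dir>/<idx>.jpg
		with open("%s/%d.jpg" % (out_dir, idx), "wb") as f:
			f.write(get(link).content)

	def download_images(url, headers, div_class, out_dir="res"):
		# step 1: fetch the page and parse it
		soup = bsp(get(url, headers=headers).text, "html.parser")
		# step 2: collect the real image addresses
		body = soup.find("div", class_=div_class)
		links = [img["src"] for img in body.find_all("img")]
		# step 3: download in parallel
		with ThreadPoolExecutor() as pool:
			for i, link in enumerate(links):
				pool.submit(save_one, i, link, out_dir)
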
Result:
Scrape successful!
