'''
Batch-download images from the Douban homepage.

Fetches the Douban homepage while spoofing a browser User-Agent,
extracts image URLs with a regex, and saves the images to a target directory.
'''
# Import the required libraries
import os
import re
import ssl
import urllib.request

# Skip certificate verification (only needed if local HTTPS requests fail)
ssl._create_default_https_context = ssl._create_unverified_context
# Directory where images will be saved; replace with your own path
targetPath = "/Users/wangleilei/Documents/03__douban_Images"

# Map an image URL to a local file path, creating the target directory if needed
def saveFile(path):
    # Create the target directory if it does not exist yet
    if not os.path.isdir(targetPath):
        os.mkdir(targetPath)
    # Use everything after the last '/' as the file name
    pos = path.rindex('/')
    return os.path.join(targetPath, path[pos + 1:])
# Alternative: save the raw HTML instead of the images
# def saveFile(data):
#     # Replace the path with your own
#     path = "/Users/wangleilei/Documents/05_douban.html"
#     with open(path, 'wb') as f:
#         f.write(data)
# Target URL, fetched with a browser User-Agent header to avoid being blocked
url = "https://www.douban.com/"
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:57.0) Gecko/20100101 Firefox/57.0'}
request = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(request)
data = response.read()
# saveFile(data)  # uncomment (with the alternative saveFile above) to save the raw HTML
# Extract image URLs; the second capture group holds the file extension
html = data.decode('utf-8')
for link, ext in set(re.findall(r'(https:[^\s]*?(png|gif|jpg))', html)):
    print(link)
    try:
        urllib.request.urlretrieve(link, saveFile(link))
        print('success')
    except Exception:
        print('failed')
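Note that because the pattern contains two capture groups, `re.findall` returns a list of `(full URL, extension)` tuples rather than plain strings, which is why the loop unpacks two variables. A minimal illustration on a made-up HTML fragment (the sample string is hypothetical, not real Douban markup):

```python
import re

# Hypothetical HTML fragment for illustration only
sample = 'src="https://img.example.com/a.jpg" src="https://img.example.com/b.png"'

# Outer group captures the whole URL, inner group the extension
matches = re.findall(r'(https:[^\s]*?(png|gif|jpg))', sample)
print(matches)
# [('https://img.example.com/a.jpg', 'jpg'), ('https://img.example.com/b.png', 'png')]
```

Wrapping the result in `set()` in the script above deduplicates URLs that appear more than once on the page.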