As a beginner, I dug through a lot of resources and, combining them with my own partial understanding, finally got this code working. The one shortcoming is that I never found a way to download images across multiple pages, which I deeply regret!!!
Without further ado, here is the result. If anything is lacking, please point it out!!!
import requests
from bs4 import BeautifulSoup

url = 'https://photo.m.yiche.com/'  # target page (car photos)
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36 SLBrowser/7.0.0.6241 SLBChan/11"
}  # browser-like headers so the site treats us as a normal user

# Single-page image download
r = requests.get(url=url, headers=headers, timeout=5)
print(r)  # <Response [200]> means the request succeeded
r.encoding = "utf-8"
soup = BeautifulSoup(r.content, 'lxml')

all_img = soup.select("div.pic-car-list ul li a img")  # locate the image tags (car)
i = 1
for img in all_img:
    img_url = img['src']  # image URL
    print(img_url)
    title_url = img['alt']
    print(title_url)
    img_data = requests.get(url=img_url, headers=headers).content  # raw image bytes
    root = r'E:/python/photo/'  # local save directory
    path = root + title_url.split()[0] + ' ' + img_url.split('/')[-2] + '.jpg'  # build a filename
    try:
        with open(path, 'wb') as f:  # the with-block closes the file automatically
            f.write(img_data)
        print("Saving image", i, "!")
        i += 1
    except OSError as e:  # catch file errors specifically instead of a bare except
        print("error:", e)
That's the full code!!!
This piece mainly commemorates my first finished work. If any expert knows how to add page-turning to the crawl, please share in the comments!!!
Thank you!!!
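One fragile spot worth noting: `open()` will fail if the save directory `E:/python/photo/` does not exist yet. A small guard with the standard library avoids that:

```python
import os

root = r'E:/python/photo/'  # the save directory used in the script above
os.makedirs(root, exist_ok=True)  # create it (and any parents) if missing; no error if it already exists
```

Putting this line right before the download loop means the script works on a fresh machine without manually creating the folder first.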
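On the multi-page question: below is a minimal sketch of one common approach. The `?page=N` query pattern is purely an assumption for illustration; check the site's actual "next page" links (or the XHR requests in the browser's dev tools) for the real pattern before using it.

```python
import requests
from bs4 import BeautifulSoup


def page_url(base, n):
    # Hypothetical pagination pattern: page 1 is the base URL, later pages
    # append ?page=N -- verify against the site's real next-page links.
    return base if n == 1 else f"{base}?page={n}"


def crawl_pages(base, max_pages, headers):
    """Collect image URLs across pages, stopping once a page has no matches."""
    found = []
    for n in range(1, max_pages + 1):
        r = requests.get(page_url(base, n), headers=headers, timeout=5)
        soup = BeautifulSoup(r.content, 'lxml')
        imgs = soup.select("div.pic-car-list ul li a img")
        if not imgs:  # no image tags -> probably past the last page
            break
        found.extend(img['src'] for img in imgs)
    return found
```

Each URL returned by `crawl_pages` can then be fed into the same download-and-save loop as in the single-page version.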