Python Crawler Tutorial: Batch-Download Any Category of Images from 素材公社 (tooopen.com)

The overall idea: enter the image category you want to crawl and the range of pages you want, and the script batch-downloads the matching images.

Below is the step-by-step code together with the reasoning behind each step:


1. Create a folder to save the images

Here the folder is created in the same directory as the .py file; you can change the location to suit yourself.

path = os.getcwd()  # path of the directory the script runs in
path_name = path + '/' + '素材公社'
if not os.path.exists(path_name):
    os.mkdir(path_name)
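If you prefer, the existence check can be folded into a single call; a minimal equivalent sketch using os.makedirs with exist_ok (my variation, not the author's code):

import os

# Same effect as the snippet above: create the folder only if it is missing
path_name = os.path.join(os.getcwd(), '素材公社')
os.makedirs(path_name, exist_ok=True)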

2. Get the category names and URLs and build a dict

The categories offered for download exclude the top-level groups such as 风景图片 (landscape) and 人物图片 (people).

Looking at the tail of each link, only sub-category URLs contain an underscore, so an if clause in the dict comprehension filters the top-level groups out.


def get_meun():
    url = 'https://www.tooopen.com/img'
    res = requests.get(url, headers=headers)
    html = etree.HTML(res.text)
    urls = html.xpath('/html/body/div[3]/div/div/ul/li/a/@href')
    names = html.xpath('/html/body/div[3]/div//div/ul/li/a/text()')
    # keep only sub-categories: their URLs contain an underscore
    dic = {k: v for k, v in zip(names, urls) if '_' in v}
    return dic
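To illustrate the underscore filter (with made-up category names and IDs, since the live menu may differ), a top-level group maps to a URL without an underscore and is dropped:

# hypothetical menu entries, for demonstration only
names = ['风景图片', '山水风景', '城市建筑']
urls = ['https://www.tooopen.com/img/87.aspx',      # top-level group: no '_', dropped
        'https://www.tooopen.com/img/87_874.aspx',  # sub-category: kept
        'https://www.tooopen.com/img/87_875.aspx']  # sub-category: kept
dic = {k: v for k, v in zip(names, urls) if '_' in v}
print(dic)  # {'山水风景': '...87_874.aspx', '城市建筑': '...87_875.aspx'}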

3. Download the images

Inspecting the page source, we find that each thumbnail's a class="pic" link points to the image's own detail page rather than to the image file itself.

So we follow that link to the image's real page and look there for the download URL of the HD full-size version.

On the detail page, the src attribute of the main img element is exactly the HD image link we need; downloading it finishes the job.
The code is as follows:

# Download the images
def download_img(url, start, end):
    count = 0  # how many images have been downloaded so far
    url1 = url.replace('.aspx', '_1_{}.aspx')  # the real URL pattern of each listing page
    img_urls1 = []  # detail-page URLs for every image
    img_names = []  # image names, taken from the alt text
    for i in range(start, end + 1):
        url2 = url1.format(i)
        res = requests.get(url2, headers=headers).text
        img_urls1 += re.findall(r'a class="pic" href="(.*?)"', res)
        img_names += etree.HTML(res).xpath('/html/body/div[5]/ul/li/div/a/img/@alt')
    # visit each detail page and pull out the HD image URL
    img_urls2 = []
    for j in img_urls1:
        res2 = requests.get(j, headers=headers).text
        img_urls2 += etree.HTML(res2).xpath('/html/body/table/tr/td/img/@src')
    # the original listing was cut off at this point; the saving loop below is a
    # reconstruction of the missing remainder: fetch each HD URL and write it to disk
    for name, link in zip(img_names, img_urls2):
        data = requests.get(link, headers=headers).content
        with open(path_name + '/' + name + '.jpg', 'wb') as f:
            f.write(data)
        count += 1
        print('Downloaded image {}: {}'.format(count, name))
        time.sleep(random.random())  # brief random pause between downloads
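One practical caveat: the alt text used for filenames can contain characters that filesystems reject (/, ?, and so on), which would make open() fail. A small helper, my addition rather than part of the original script, can sanitize each name before saving:

import re

def safe_filename(name, default='untitled'):
    # drop characters that Windows or Unix filesystems refuse in filenames
    cleaned = re.sub(r'[\\/:*?"<>|]', '', name).strip()
    return cleaned or default

# usage inside the saving loop:
# with open(path_name + '/' + safe_filename(name) + '.jpg', 'wb') as f: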

4. Enter the category and page range, then start the downloader

def main():
    pic_dic = get_meun()
    choice = input('Enter the image category to download: ')
    url3 = pic_dic.get(choice)
    if url3 is None:  # guard against a category that is not in the menu
        print('Unknown category:', choice)
        return
    print('=' * 15 + ' image downloader started ' + '=' * 15)
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))
    print('~ downloading for you ~')
    download_img(url3, start_page, end_page)
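To actually start the downloader, call main() behind the usual entry-point guard (this also appears in the complete listing in section 6):

if __name__ == '__main__':
    main()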

5. To reduce the risk of being blocked, rotate several User-Agent strings to disguise the request headers

user_agent = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]
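Note that every requests.get(...) call above passes headers, yet the original snippets never define it. A minimal definition, assuming one User-Agent picked at random per run, would be:

# choose a random User-Agent; move this line inside the request loops
# if you want a fresh disguise on every single request
headers = {'User-Agent': random.choice(user_agent)}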

6. The complete code

import requests
from lxml import etree
import re
import os
import time
import random

# create the output folder next to the current .py file
path = os.getcwd()
path_name = path + '/' + '素材公社'
if not os.path.exists(path_name):
    os.mkdir(path_name)

# User-Agent pool (first two entries shown; paste the full list from section 5)
user_agent = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
]
headers = {'User-Agent': random.choice(user_agent)}

# fetch the category menu and build a {name: url} dict
def get_meun():
    url = 'https://www.tooopen.com/img'
    res = requests.get(url, headers=headers)
    html = etree.HTML(res.text)
    urls = html.xpath('/html/body/div[3]/div/div/ul/li/a/@href')
    names = html.xpath('/html/body/div[3]/div//div/ul/li/a/text()')
    return {k: v for k, v in zip(names, urls) if '_' in v}

# download the images (see section 3)
def download_img(url, start, end):
    count = 0
    url1 = url.replace('.aspx', '_1_{}.aspx')  # real URL pattern of each listing page
    img_urls1 = []
    img_names = []
    for i in range(start, end + 1):
        url2 = url1.format(i)
        res = requests.get(url2, headers=headers).text
        img_urls1 += re.findall(r'a class="pic" href="(.*?)"', res)
        img_names += etree.HTML(res).xpath('/html/body/div[5]/ul/li/div/a/img/@alt')
    img_urls2 = []
    for j in img_urls1:
        res2 = requests.get(j, headers=headers).text
        img_urls2 += etree.HTML(res2).xpath('/html/body/table/tr/td/img/@src')
    # reconstructed saving loop (the original listing was cut off here)
    for name, link in zip(img_names, img_urls2):
        data = requests.get(link, headers=headers).content
        with open(path_name + '/' + name + '.jpg', 'wb') as f:
            f.write(data)
        count += 1
        print('Downloaded image {}: {}'.format(count, name))
        time.sleep(random.random())

def main():
    pic_dic = get_meun()
    choice = input('Enter the image category to download: ')
    url3 = pic_dic.get(choice)
    if url3 is None:
        print('Unknown category:', choice)
        return
    print('=' * 15 + ' image downloader started ' + '=' * 15)
    start_page = int(input('Enter the start page: '))
    end_page = int(input('Enter the end page: '))
    print('~ downloading for you ~')
    download_img(url3, start_page, end_page)

if __name__ == '__main__':
    main()
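As an optional robustness tweak (not in the original), a requests.Session with automatic retries survives transient network hiccups and reuses connections across the many requests this script makes:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=0.5,
              status_forcelist=[429, 500, 502, 503, 504])
session.mount('https://', HTTPAdapter(max_retries=retry))
# then call session.get(...) wherever the script calls requests.get(...)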

7. Results of running the code

The script prompts for a category and page range in the console, then saves each image into the 素材公社 folder.


Summary:

The basic recipe for simple crawling: fetch the page → parse the page → step through the pages → match the data you need → download it to local files.
