Python Exercise 013

This post shows how to scrape the hot images from Qiushibaike's image ranking page with Python, using three approaches: regular expressions, BeautifulSoup, and XPath. Each script first creates a qiutu folder, then parses the HTML page to extract the image links, downloads the images into that folder, and prints the name of each image as it is saved.

Problem: use three methods, regular expressions, BeautifulSoup, and XPath, to scrape the hot images from Qiushibaike.
Regular expressions:

import requests
import re
import os

# Create the output folder for the downloaded images if it does not exist.
if not os.path.exists("qiutu"):
    os.mkdir("qiutu")

url = "https://www.qiushibaike.com/imgrank/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/87.0.4280.141 Safari/537.36 Edg/87.0.664.75"
}

page_text = requests.get(url=url, headers=headers).text

# re.S lets "." match newlines, so the pattern can span a whole <div> block.
s = '<div class="thumb">.*?<img src="(.*?)" alt.*?</div>'
img_src = re.findall(s, page_text, re.S)

for src in img_src:
    # The page uses protocol-relative links ("//pic..."), so add the scheme.
    src = "https:" + src
    img = requests.get(url=src, headers=headers).content
    img_name = src.split('/')[-1]
    img_path = "qiutu/" + img_name
    with open(img_path, 'wb') as f:
        f.write(img)
        print(img_name, "downloaded")
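
To sanity-check the pattern without requesting the live page, here is a minimal sketch run against a made-up HTML fragment (the fragment and the image URL in it are invented for illustration; the real page markup may differ):

import re

# Made-up fragment mimicking one "thumb" block; for illustration only.
sample = '''
<div class="thumb">
<a href="/article/124023463" target="_blank">
<img src="//pic.qiushibaike.com/system/pictures/demo.jpg" alt="demo">
</a>
</div>
'''

s = '<div class="thumb">.*?<img src="(.*?)" alt.*?</div>'
print(re.findall(s, sample, re.S))
# ['//pic.qiushibaike.com/system/pictures/demo.jpg']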

BeautifulSoup:

import requests
from bs4 import BeautifulSoup
import os

# Create the output folder for the downloaded images if it does not exist.
if not os.path.exists("qiutu"):
    os.mkdir("qiutu")

url = "https://www.qiushibaike.com/imgrank/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/87.0.4280.141 Safari/537.36 Edg/87.0.664.75"
}

page_text = requests.get(url=url, headers=headers).text
soup = BeautifulSoup(page_text, 'lxml')

# CSS selector: <img> directly under <a> directly under class "thumb".
img_data = soup.select('.thumb > a > img')
srcs = [each_data['src'] for each_data in img_data]

for src in srcs:
    # The page uses protocol-relative links, so add the scheme.
    src = "https:" + src
    img = requests.get(url=src, headers=headers).content
    img_name = src.split('/')[-1]
    img_path = "qiutu/" + img_name
    with open(img_path, 'wb') as f:
        f.write(img)
        print(img_name, "downloaded")
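
The same made-up fragment can be used to check the CSS selector offline (again, the markup is an assumption for illustration):

from bs4 import BeautifulSoup

sample = '''
<div class="thumb">
<a href="/article/124023463" target="_blank">
<img src="//pic.qiushibaike.com/system/pictures/demo.jpg" alt="demo">
</a>
</div>
'''

soup = BeautifulSoup(sample, 'lxml')
for img in soup.select('.thumb > a > img'):
    print(img['src'])
# //pic.qiushibaike.com/system/pictures/demo.jpg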

XPath:

import requests
from lxml import etree
import os

# Create the output folder for the downloaded images if it does not exist.
if not os.path.exists("qiutu"):
    os.mkdir("qiutu")

url = "https://www.qiushibaike.com/imgrank/"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/87.0.4280.141 Safari/537.36 Edg/87.0.664.75"
}

page_text = requests.get(url=url, headers=headers).text
tree = etree.HTML(page_text)

# Ending the expression with "@src" returns the attribute values directly.
srcs = tree.xpath('/html/body//div[@class="thumb"]/a/img/@src')

for src in srcs:
    # The page uses protocol-relative links, so add the scheme.
    src = "https:" + src
    img = requests.get(url=src, headers=headers).content
    img_name = src.split('/')[-1]
    img_path = "qiutu/" + img_name
    with open(img_path, 'wb') as f:
        f.write(img)
        print(img_name, "downloaded")
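
And the XPath expression can be checked the same way (the fragment below is invented for illustration; lxml returns the @src attribute values as a list of strings):

from lxml import etree

sample = '''
<html><body>
<div class="thumb">
<a href="/article/124023463" target="_blank">
<img src="//pic.qiushibaike.com/system/pictures/demo.jpg" alt="demo">
</a>
</div>
</body></html>
'''

tree = etree.HTML(sample)
print(tree.xpath('/html/body//div[@class="thumb"]/a/img/@src'))
# ['//pic.qiushibaike.com/system/pictures/demo.jpg']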