Python: Bypassing Anti-Scraping Mechanisms to Download Comic Images

Preface

This post walks you through scraping comic images with Python, solving the anti-scraping and dynamic-loading problems you will run into along the way.

Topics covered:

  • tqdm
  • requests
  • BeautifulSoup
  • Multithreading
  • Dynamically loaded JavaScript

Development environment:

  • Python 3.6
  • PyCharm

Target URL

https://www.dmzj.com/info/yaoshenji.html


Code

Import the tools

import requests
import os
import re
from bs4 import BeautifulSoup
from tqdm import tqdm
import time

Get the chapter links and chapter names

r = requests.get(url=target_url)  # target_url is defined in the last section
bs = BeautifulSoup(r.text, 'lxml')
# The chapter list lives in a <ul class="list_con_li"> element
list_con_li = bs.find('ul', class_="list_con_li")
cartoon_list = list_con_li.find_all('a')
chapter_names = []
chapter_urls = []
for cartoon in cartoon_list:
    href = cartoon.get('href')
    name = cartoon.text
    # The site lists chapters newest-first; insert at index 0 to reverse the order
    chapter_names.insert(0, name)
    chapter_urls.insert(0, href)
print(chapter_urls)
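To see why the two `insert(0, …)` calls are there: the site lists chapters newest-first, and inserting at index 0 reverses that order so downloads run oldest-first. A minimal sketch, with hypothetical chapter data standing in for the scraped `<a>` tags:

```python
# Hypothetical scraped (name, href) pairs, newest chapter first
scraped = [('Ch.3', '/3.html'), ('Ch.2', '/2.html'), ('Ch.1', '/1.html')]

chapter_names = []
chapter_urls = []
for name, href in scraped:
    # Each insert at index 0 pushes earlier items back, reversing the list
    chapter_names.insert(0, name)
    chapter_urls.insert(0, href)

print(chapter_names)  # ['Ch.1', 'Ch.2', 'Ch.3']
print(chapter_urls)   # ['/1.html', '/2.html', '/3.html']
```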

Download the comic

for i, url in enumerate(tqdm(chapter_urls)):
    print(i, url)
    # The image server checks the Referer header; send the chapter page URL
    download_header = {
        'Referer': url,
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
    }
    name = chapter_names[i]
    # Strip dots so the name is a clean directory name
    while '.' in name:
        name = name.replace('.', '')
    chapter_save_dir = os.path.join(save_dir, name)
    if name not in os.listdir(save_dir):
        os.mkdir(chapter_save_dir)
    r = requests.get(url=url)
    html = BeautifulSoup(r.text, 'lxml')
    # The picture IDs are embedded in the page's first <script> tag
    script_info = html.script
    pics = re.findall(r'\d{13,14}', str(script_info))
    # Pad 13-digit IDs to 14 digits so numeric sorting orders pages correctly
    for j, pic in enumerate(pics):
        if len(pic) == 13:
            pics[j] = pic + '0'
    pics = sorted(pics, key=lambda x: int(x))
    # The two path segments of the image URL sit between '|' delimiters
    chapterpic_hou = re.findall(r'\|(\d{5})\|', str(script_info))[0]
    chapterpic_qian = re.findall(r'\|(\d{4})\|', str(script_info))[0]
    for idx, pic in enumerate(pics):
        if pic[-1] == '0':
            # Drop the padding '0' before building the URL
            url = 'https://images.dmzj.com/img/chapterpic/' + chapterpic_qian + '/' + chapterpic_hou + '/' + pic[:-1] + '.jpg'
        else:
            url = 'https://images.dmzj.com/img/chapterpic/' + chapterpic_qian + '/' + chapterpic_hou + '/' + pic + '.jpg'
        pic_name = '%03d.jpg' % (idx + 1)
        pic_save_path = os.path.join(chapter_save_dir, pic_name)
        print(url)
        response = requests.get(url, headers=download_header)
        if response.status_code == 200:
            with open(pic_save_path, "wb") as file:
                file.write(response.content)
        else:
            print('Bad link:', url)
    time.sleep(2)  # be polite between chapters
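The regex-based extraction is easier to follow against concrete data. Here is the same logic run on a hypothetical string that mirrors the shape of the page's inline `<script>` (all digits and segments below are invented for illustration); note that the patterns need `\d` and escaped `|` characters to match anything:

```python
import re

# Hypothetical stand-in for str(html.script): 13-14 digit picture IDs
# plus the |xxxxx| and |xxxx| path segments used in the image URL
script_info = "eval(...14237149411111|52683|yaoshenji|1234|1423714942222...)"

pics = re.findall(r'\d{13,14}', script_info)
# Pad 13-digit IDs to 14 digits so numeric sorting orders pages correctly
pics = [p + '0' if len(p) == 13 else p for p in pics]
pics.sort(key=int)

chapterpic_hou = re.findall(r'\|(\d{5})\|', script_info)[0]   # 5-digit segment
chapterpic_qian = re.findall(r'\|(\d{4})\|', script_info)[0]  # 4-digit segment
print(pics)                             # ['14237149411111', '14237149422220']
print(chapterpic_qian, chapterpic_hou)  # 1234 52683
```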

Create the save directory

save_dir = '妖神记'  # the comic's title, used as the top-level folder name
if save_dir not in os.listdir('./'):
    os.mkdir(save_dir)
target_url = "https://www.dmzj.com/info/yaoshenji.html"
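The topic list at the top mentions multithreading, but the walkthrough above downloads images one at a time. A minimal sketch of how the per-image fetches could be parallelized with `concurrent.futures` from the standard library; `fetch_image` here is a hypothetical stub standing in for the real `requests.get` call with the Referer header:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_image(url):
    # Hypothetical stub: the real version would call
    # requests.get(url, headers=download_header) and write response.content
    return url, 'ok'

urls = [
    'https://images.dmzj.com/img/chapterpic/1234/52683/14237149411111.jpg',
    'https://images.dmzj.com/img/chapterpic/1234/52683/1423714942222.jpg',
]

# pool.map preserves input order, so pages keep their correct numbering
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch_image, urls))

for url, status in results:
    print(status, url)
```

Keep `time.sleep` between chapters even when threading the images within one: hammering the server invites a ban.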
