My first attempt at scraping a novel site

I have been learning Python on and off for a while, so I picked a novel site and tried to scrape its novels. Unfortunately, while I was still editing the script, the site went down... tears... There is also one error I have not fixed yet, so I am recording everything here first.

Also, my apologies to the webmaster of the site I scraped!!

The code first:

import requests
import re
import time
import os
import logging


# Create a logger instance
logger = logging.getLogger('fiction')
# Customize the log output format
formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
# Create a handler that writes to the log file
file_handler = logging.FileHandler(filename='fiction.log')
file_handler.setFormatter(formatter)
# Set the default log level
logger.setLevel(logging.INFO)
# Attach the file handler to the logger
logger.addHandler(file_handler)


# Fetch a URL and return the decoded HTML; return '' on failure
def get_url(html, timeout=5):
    try:
        response = requests.get(html, timeout=timeout)
    except requests.exceptions.ConnectionError as e:
        logger.error('{}'.format(e))
        return ''
    except requests.exceptions.MissingSchema as e:
        # The source page has <a href>第二十八章</a> -- the href attribute is empty
        logger.error('{}'.format(e))
        return ''
    # Untested -- the target site went down before this could be verified.
    # Intended to handle "HTTPConnectionPool: Max retries exceeded"; note that
    # requests wraps that error in ConnectionError, which is already caught
    # above (see error 3 at the end of this post).
    except requests.exceptions.RetryError as e:
        logger.error('{}'.format(e))
        return ''
    # Any other request error
    except requests.exceptions.RequestException as e:
        logger.error('{}'.format(e))
        return ''
    response = response.content.decode('utf-8')
    return response


# Match the novel categories on the front page
def get_fiction_type(response):
    pattern = re.compile(
        r'<li class=[\s\S]*?><a href="([\s\S]*?)" [\s\S]*?>([\s\S]*?)</a></li>')
    fiction_types = re.findall(pattern, response)
    for type_link, fiction_type in fiction_types[1:]:
        yield type_link, fiction_type


# Get the total page count for one category of novels
def get_page_count(response):
    # If the category has no novels, the page count is 0
    try:
        pattern = re.compile(r'<div id="page"><span>([\s\S]*?)条</span>')
        fiction_count = int(re.findall(pattern, response)[0])
    except (IndexError, ValueError):
        fiction_count = 0
    # 24 novels per page: add one extra page when the count is not an exact multiple of 24
    if fiction_count % 24 == 0:
        page_count = fiction_count // 24
    else:
        page_count = fiction_count // 24 + 1
    return page_count


# Match the links to individual novels on a category listing page
def get_fiction_links(response):
    pattern = re.compile(
        r'<h5 class="name"><a href="([\s\S]*?)" title="[\s\S]*?">[\s\S]*?</a></h5>')
    fiction_link = re.findall(pattern, response)
    return fiction_link


# Match the title of a single novel
def get_book_title(response):
    pattern = re.compile(r"<h1>([\s\S]*?)</h1>")
    filename = re.findall(pattern, response)[0]
    return filename


# Match the chapter links of a novel
def match_list_urls(response):
    zheng = '<dd><a href="(.*?)">(.*?)</a></dd>'
    pattern = re.compile(zheng)
    list_url = re.findall(pattern, response)
    return list_url


# Download the full text of one novel
def get_book(list_url, filename, fiction_type=''):
    # Visit each chapter link in turn
    # Check whether the target directory and file already exist
    if fiction_type:
        if os.path.exists('fiction/' + fiction_type):
            if not os.path.exists('fiction/{}/{}.txt'.format(fiction_type, filename)):
                filepath = 'fiction/{}/{}.txt'.format(fiction_type, filename)
            else:
                logger.info("{} already exists".format(filename))
                return 0
        else:
            os.makedirs('fiction/{}/'.format(fiction_type))   # create the category directory and any missing parents
            filepath = 'fiction/{}/{}.txt'.format(fiction_type, filename)
    else:
        filepath = 'fiction/{}.txt'.format(filename)

    with open(filepath, 'a', encoding='utf-8') as f:
        for i in list_url:
            text_url = i[0]
            text_title = i[1]
            response = get_url(text_url)
            # Match the chapter body paragraphs
            zheng = r'<p>([\s\S]*?)</p>'
            pattern = re.compile(zheng)
            text = re.findall(pattern, response)

            f.write("\n" + text_title + '\n')
            for word in text[0:-3]:   # skip the last three <p> blocks (presumably not chapter text)
                f.write("\n" + word + "\n")
                time.sleep(0.0001)
    logger.info('{:8s}\tdownload finished!!'.format(filename))
    # print("%s download finished" % filename)
    return 1


def main():
    html = 'http://www.9sct.com/'

    # Fetch the site's front page
    fiction_type_response = get_url(html)
    items = get_fiction_type(fiction_type_response)
    for type_link, fiction_type in items:
        web_response = get_url(type_link)
        page_count = get_page_count(web_response)

        # If the page count is 0, log a warning
        if page_count == 0:
            logger.warning("{}: no novels found".format(fiction_type))
        else:
            for i in range(page_count):
                fiction_type_html = type_link + 'p{}'.format(i) + '.html'
                # Collect the links to novels on this page
                type_html = get_url(fiction_type_html)   # one page of the category listing
                fiction_links = get_fiction_links(type_html)
                # Download the text of each novel
                for fiction_link in fiction_links:
                    response = get_url(fiction_link)
                    filename = get_book_title(response)
                    list_url = match_list_urls(response)
                    get_book(list_url, filename, fiction_type)
    return 'OK'


if __name__ == '__main__':
    main()
    logger.removeHandler(file_handler)

Errors encountered while running
1. UnboundLocalError: local variable 'response' referenced before assignment
Cause: after the try block raised an exception, the except branch forgot to assign (or return) response, so the final return used a name that was never bound.
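
A minimal sketch of the mistake (illustrative only, not the original code): every except branch swallows the error without binding or returning response, so the last line references an unassigned name.

import requests

def fetch(url):
    try:
        response = requests.get(url, timeout=5)
    except requests.exceptions.RequestException:
        pass                     # forgot to assign or return response here
    return response.content     # UnboundLocalError whenever the except branch ran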

2. requests.exceptions.MissingSchema: Invalid URL '': No schema supplied. Perhaps you meant http://?
Cause: looking at the source page, the chapter link <a href>第二十八章</a> has no URL in its href attribute, so an empty string was passed to requests.get.
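
One possible guard (a sketch, assuming the (url, title) tuples returned by match_list_urls above) is to skip chapters whose href is empty before calling get_url:

for text_url, text_title in list_url:
    # an empty href would raise MissingSchema, so skip it and log the chapter
    if not text_url.strip():
        logger.warning('skipping "{}": empty href'.format(text_title))
        continue
    response = get_url(text_url)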

3. HTTPConnectionPool(host='www.9sct.com', port=80): Max retries exceeded with url: /chapter/931/697216.html (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000000000392ABE0>: Failed to establish a new connection: [WinError 10060] The connection attempt failed because the connected party did not properly respond after a period of time, or the connected host failed to respond.'))
Cause: before each transfer the client opens a TCP connection to the server. To reduce overhead, requests defaults to keep-alive, i.e. connect once and transfer many times; after many requests the connections were not being released back into the pool, so no new connection could be established.

1. Increase the number of connection retries
      requests.adapters.DEFAULT_RETRIES = 5

2. Close the extra connections by disabling keep-alive
     s = requests.session()
     s.keep_alive = False

The fix for the third error has not been verified yet...
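
For reference, a rough sketch of how both workarounds could be combined into one helper (untested since the site is down; safe_get is a hypothetical name, not part of the script above, and it reuses the script's logger):

import requests

# Workaround 1: raise the default retry count used by new requests adapters
requests.adapters.DEFAULT_RETRIES = 5

def safe_get(url, timeout=5):
    # Workaround 2: use a session with keep-alive disabled, as suggested above,
    # so connections are not held open and the pool is not exhausted
    session = requests.session()
    session.keep_alive = False
    try:
        response = session.get(url, timeout=timeout)
    except requests.exceptions.RequestException as e:
        logger.error('{}'.format(e))
        return ''
    return response.content.decode('utf-8')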
There are still plenty of things to polish and cases I did not anticipate; next time I will try a different site...
