Scraping Paginated Content with Python

Code:

# -*- coding: utf-8 -*-
# @Time : 2020/4/14 20:40
# @Author : Oneqq
# @File : 03-gethtml.py
# @Software: PyCharm

from urllib.request import Request, urlopen
from urllib.parse import urlencode
from fake_useragent import UserAgent

def get_html(url):
    # fake_useragent supplies a random, current Chrome User-Agent string
    headers = {
        "User-Agent": UserAgent().chrome
    }
    request = Request(url, headers=headers)
    response = urlopen(request)
    # read() consumes the response stream; calling it a second time would
    # return empty bytes, so read once and return the stored result
    html_bytes = response.read()
    return html_bytes

def save_html(filename, html_bytes):
    with open(filename, "wb") as f:
        f.write(html_bytes)

def main():
    content = input("Enter the search keyword: ")
    num = input("Enter the number of pages to download: ")
    base_url = "https://tieba.baidu.com/f?ie=utf-8&fr=search&{}"
    for page in range(int(num)):
        # Tieba lists 50 posts per page, so the "pn" offset steps by 50
        args = {
            "kw": content,
            "pn": page * 50
        }
        filename = "page_" + str(page + 1) + ".html"
        query_string = urlencode(args)
        print("Downloading " + filename)
        html_bytes = get_html(base_url.format(query_string))
        save_html(filename, html_bytes)

if __name__ == '__main__':
    main()
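The pagination here rides on Tieba's `pn` query parameter, which is an offset rather than a page number: each list page holds 50 posts, so page *n* starts at offset `(n - 1) * 50`. A minimal sketch of how `urlencode` assembles each page's URL, using `python` as a sample keyword:

```python
from urllib.parse import urlencode

# Tieba shows 50 posts per list page, so the "pn" offset
# advances in steps of 50: page 1 -> pn=0, page 2 -> pn=50, ...
base_url = "https://tieba.baidu.com/f?ie=utf-8&fr=search&{}"

for page in range(3):
    query = urlencode({"kw": "python", "pn": page * 50})
    print(base_url.format(query))
# prints:
# https://tieba.baidu.com/f?ie=utf-8&fr=search&kw=python&pn=0
# https://tieba.baidu.com/f?ie=utf-8&fr=search&kw=python&pn=50
# https://tieba.baidu.com/f?ie=utf-8&fr=search&kw=python&pn=100
```

`urlencode` also percent-escapes non-ASCII keywords, so a Chinese search term becomes a valid query string without any extra handling.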

Result:

(screenshot of the downloaded HTML pages omitted)
