Scraping Weibo info with an asynchronous Python crawler

Without further ado, straight to the code!

import aiohttp
import asyncio
from bs4 import BeautifulSoup
from urllib import parse
import time

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36 Edg/84.0.522.52"}

async def get_html(url):
    # Fetch one page asynchronously and hand the HTML to the parser.
    print("Crawling", url)
    async with aiohttp.ClientSession(headers=headers) as session:
        async with session.get(url) as response:
            if response.status == 200:
                print("Page fetched successfully")
                parse_html(await response.text())

def parse_html(html_content):
    # Extract each hot-search entry (title + link) from the results table.
    soup = BeautifulSoup(html_content, 'lxml')
    trs = soup.select('table tbody tr')
    # Open the output file once instead of reopening it for every row;
    # the with block closes it automatically, so no explicit close() is needed.
    with open("C:/Users/86135/Desktop/weibo_data.txt", 'at', encoding='utf-8') as f:
        for tr in trs:
            title = tr.select_one('td a').text
            url = tr.select_one('td a')['href']
            url = parse.urljoin("https://s.weibo.com/", url)  # resolve relative links
            message = title + url + '\n'
            f.write(message)

if __name__ == '__main__':
    start = time.time()
    urls = ["https://s.weibo.com/top/summary/summary?cate=realtimehot",
            "https://s.weibo.com/top/summary/summary?cate=socialevent"]
    tasks = [get_html(url) for url in urls]
    loop = asyncio.get_event_loop()
    # gather accepts coroutines directly; asyncio.wait requires Task objects
    # on Python 3.8+ and rejects bare coroutines entirely on 3.11+.
    loop.run_until_complete(asyncio.gather(*tasks))
    print(time.time() - start)
    loop.close()
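
A side note on the design: the version above opens a new ClientSession for every URL, which throws away aiohttp's connection pooling. A minimal alternative sketch (assuming Python 3.7+, and reusing the headers and parse_html defined above) shares one session across all requests and drives everything with asyncio.run instead of managing the event loop by hand:

async def fetch(session, url):
    # Reuse the shared session's connection pool for each request.
    async with session.get(url) as response:
        if response.status == 200:
            return await response.text()
        return None

async def main(urls):
    async with aiohttp.ClientSession(headers=headers) as session:
        # gather runs all fetches concurrently and preserves input order.
        pages = await asyncio.gather(*(fetch(session, url) for url in urls))
    for page in pages:
        if page is not None:
            parse_html(page)

# Entry point: asyncio.run(main(urls))

With only two URLs the speed difference is negligible, but with many pages a single shared session avoids repeating the TCP/TLS handshake for every request.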

The output looks like this:

[screenshots of the run results]
