Notes on Scraping Data from the Shanghai Vegetable Network (shveg.com)

Use Python to scrape the site's vegetable listings and their prices and store them in a CSV file.

1. Modules to import

import requests
from lxml import etree
import csv
from concurrent.futures import ThreadPoolExecutor
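
Note that requests and lxml are third-party packages and have to be installed beforehand (for example with pip), while csv and concurrent.futures ship with the Python standard library.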

2. Fetch the page HTML with requests, extract the vegetable data with XPath, and write it to data.csv

# newline='' keeps the csv module from inserting blank lines between rows on Windows
f = open('data.csv', mode="w", encoding='utf-8', newline='')
csvwriter = csv.writer(f)


def download_one_page(url):
    # Send a browser-like User-Agent so the site does not reject the request
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.0.0 Safari/537.36 Edg/127.0.0.0"
    }
    resp = requests.get(url, headers=headers)
    # The site serves GB2312-encoded pages, so decode accordingly
    resp.encoding = 'gb2312'
    html = etree.HTML(resp.text)
    # The <td> cells that contain the price tables on the page
    tds = html.xpath("/html/body/table[4]/tr/td[2]/table/tr[1]/td/table/tr[3]/td/table[2]/tr/td")
    for td in tds:
        # Skip each inner table's header row, then walk the data rows
        tbody = td.xpath('./table/tr')[1:]
        for tr in tbody:
            # Text of the <font> tags inside the row's links, stripped of whitespace
            name = tr.xpath('./td/a/font/text()')
            name = (item.strip() for item in name)
            csvwriter.writerow(name)
    print(url, "done")
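
Before scraping all of the pages, it is worth confirming that the long XPath above still matches the site's current layout. The standalone check below is a minimal sketch: the URL follows the paging pattern used in step 3, page=1 is assumed to be a valid listing page, and the XPath is copied verbatim from download_one_page.

# Standalone sanity check for the XPath and encoding; run it on one page first.
import requests
from lxml import etree

test_url = "http://www.shveg.com/cn/price/Index.asp?ClassID=&page=1"
resp = requests.get(test_url, headers={"User-Agent": "Mozilla/5.0"})
resp.encoding = 'gb2312'
html = etree.HTML(resp.text)
tds = html.xpath("/html/body/table[4]/tr/td[2]/table/tr[1]/td/table/tr[3]/td/table[2]/tr/td")
row_count = sum(len(td.xpath('./table/tr')[1:]) for td in tds)
print(f"matched {len(tds)} table cells, {row_count} data rows")

If both counts come back as 0, the page structure has changed and the XPath needs to be updated before running the full scrape.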

3. Fetch every page's data with a thread pool

if __name__ == '__main__':
    # Sequential version, kept for reference:
    # for i in range(1, 14):
    #     download_one_page(f'http://www.shveg.com/cn/price/Index.asp?ClassID=&page={i}')
    # Fetch the pages concurrently with a pool of 6 worker threads
    with ThreadPoolExecutor(6) as t:
        for i in range(14):
            t.submit(download_one_page, f"http://www.shveg.com/cn/price/Index.asp?ClassID=&page={i}")
    f.close()  # flush and close data.csv once every page has been processed
    print("All pages downloaded")
