Scraping Taobao Product Information

Once you have learned Python web scraping, you will probably want to try it on a site with a substantial amount of data. Taobao is a good target: the data volume is large, the variety of listings is wide, and the difficulty is moderate, which makes it well suited to beginners. The whole process is walked through below.

Step 1: Build the request URL

# Build the request parameters: Taobao's search endpoint pages through
# results 44 items at a time via the 's' (offset) query parameter
goods = "鱼尾裙"    # search keyword ("fishtail skirt")
page = 10           # number of result pages to fetch
infoList = []       # accumulates one record per product
url = 'https://s.taobao.com/search'
for i in range(page):
    s_num = str(44*i + 1)   # offset of the first item on page i
    num = 44*i              # running id offset passed to the parser
    data = {'q': goods, 's': s_num}
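
The loop above only builds the parameter dicts; to see the actual URLs they produce, you can let requests encode one. This is just a standalone sanity check, not part of the scraper itself:

# Preview the encoded URL for one page
import requests

req = requests.Request('GET', 'https://s.taobao.com/search',
                       params={'q': '鱼尾裙', 's': '1'}).prepare()
print(req.url)
# https://s.taobao.com/search?q=%E9%B1%BC%E5%B0%BE%E8%A3%99&s=1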

Step 2: Fetch the page

def getHTMLText(url, data):
    # fetch the search results page; return its text, or '' on failure
    try:
        rsq = requests.get(url, params=data, timeout=30)
        rsq.raise_for_status()                # raise on 4xx/5xx responses
        rsq.encoding = rsq.apparent_encoding  # guess the real encoding
        return rsq.text
    except requests.RequestException:
        return ""                             # empty page -> nothing to parse

Step 3: Extract the fields with regular expressions

def parsePage(ilt, html, goods_id):
    try:
        # each field lives in the page's embedded JSON as "key":"value"
        plt = re.findall(r'"view_price":"[\d.]*"', html)
        slt = re.findall(r'"view_sales":".*?"', html)
        tlt = re.findall(r'"raw_title":".*?"', html)
        ult = re.findall(r'"pic_url":".*?"', html)
        dlt = re.findall(r'"detail_url":".*?"', html)
        for i in range(len(plt)):
            goods_id += 1
            # split off the key, then eval the quoted JSON string literal
            # to strip the quotes and decode any \uXXXX escapes;
            # split(':', 1) keeps values containing ':' intact
            price = eval(plt[i].split(':', 1)[1])
            sales = eval(slt[i].split(':', 1)[1])
            title = eval(tlt[i].split(':', 1)[1])
            pic_url = "https:" + eval(ult[i].split(':', 1)[1])
            detail_url = "https:" + eval(dlt[i].split(':', 1)[1])
            ilt.append([goods_id, price, sales, title, pic_url, detail_url])
        return ilt
    except Exception:
        print("No matching products found!")
        return ilt

Step 4: Save the data to a CSV file

def saveGoodsList(ilt):
    # newline='' stops the csv module inserting blank rows on Windows;
    # utf-8-sig lets Excel open the Chinese titles correctly
    with open('goods.csv', 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Price", "Sales", "Title", "Image URL", "Detail URL"])
        for info in ilt:
            writer.writerow(info)
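
A quick way to check the output is to read the file back with the same csv module (the encoding must match what saveGoodsList used when writing):

# Read the file back to verify it was written correctly
import csv

with open('goods.csv', newline='', encoding='utf-8-sig') as f:
    for row in csv.reader(f):
        print(row)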

The complete code:

import csv
import re

import requests

#fetch the page
def getHTMLText(url, data):
    # fetch the search results page; return its text, or '' on failure
    try:
        rsq = requests.get(url, params=data, timeout=30)
        rsq.raise_for_status()                # raise on 4xx/5xx responses
        rsq.encoding = rsq.apparent_encoding  # guess the real encoding
        return rsq.text
    except requests.RequestException:
        return ""                             # empty page -> nothing to parse

#extract the fields with regular expressions
def parsePage(ilt, html, goods_id):
    try:
        plt = re.findall(r'"view_price":"[\d.]*"', html)
        slt = re.findall(r'"view_sales":".*?"', html)
        tlt = re.findall(r'"raw_title":".*?"', html)
        ult = re.findall(r'"pic_url":".*?"', html)
        dlt = re.findall(r'"detail_url":".*?"', html)
        for i in range(len(plt)):
            goods_id += 1
            # eval strips the quotes from the matched JSON string literal;
            # split(':', 1) keeps values containing ':' intact
            price = eval(plt[i].split(':', 1)[1])
            sales = eval(slt[i].split(':', 1)[1])
            title = eval(tlt[i].split(':', 1)[1])
            pic_url = "https:" + eval(ult[i].split(':', 1)[1])
            detail_url = "https:" + eval(dlt[i].split(':', 1)[1])
            ilt.append([goods_id, price, sales, title, pic_url, detail_url])
        return ilt
    except Exception:
        print("No matching products found!")
        return ilt

#save the data to a csv file
def saveGoodsList(ilt):
    with open('goods.csv', 'w', newline='', encoding='utf-8-sig') as f:
        writer = csv.writer(f)
        writer.writerow(["ID", "Price", "Sales", "Title", "Image URL", "Detail URL"])
        for info in ilt:
            writer.writerow(info)

#run the program
if __name__ == '__main__':
    #build the request parameters
    goods = "鱼尾裙"    # search keyword ("fishtail skirt")
    page = 10           # number of result pages to fetch
    infoList = []
    url = 'https://s.taobao.com/search'
    for i in range(page):
        s_num = str(44*i + 1)   # offset of the first item on page i
        num = 44*i
        data = {'q': goods, 's': s_num}
        try:
            html = getHTMLText(url, data)
            ilt = parsePage(infoList, html, num)
            # rewrite the whole accumulated list each page, so the file
            # stays complete even if a later page fails
            saveGoodsList(ilt)
        except Exception:
            continue
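
One refinement worth adding to the main loop above (not in the original code): pausing between page requests makes the scraper less likely to be throttled or blocked. A sketch of the same loop with a delay:

import time

for i in range(page):
    data = {'q': goods, 's': str(44*i + 1)}
    html = getHTMLText(url, data)
    parsePage(infoList, html, 44*i)
    saveGoodsList(infoList)
    time.sleep(2)   # wait a couple of seconds between pages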

The result looks like this:

[screenshot of the scraped results in goods.csv]

If you found this post useful, please give it a like before you go ✌(>‿◠)!!

Reposted from: https://my.oschina.net/ZhenyuanLiu/blog/1844882
