Scraping all second-hand listings on 58.com with common libraries

Environment: Win7 (too broke to afford a Mac) + Python 3 + PyCharm + MongoDB.

The database is accessed locally through Navicat.

One thing I noticed: this site does not ban your IP even when you crawl it with several processes in parallel, so there is no need to go hunting for proxy IPs; it is one of the few sites like that. Also note the url of the start page: I use the http version, not HTTPS.

The main program, main.py, ties the two spiders together. The commented-out part is the resume-from-breakpoint method: urls that have already been crawled are skipped. You can try it yourself; if any step is unclear, feel free to ask me.

Link: https://pan.baidu.com/s/1FQ8kZSqfEsC9vPra_J8-ag
Extraction code: v0nd

import sys
sys.path.append("..")  # make the sibling spider modules importable
from multiprocessing import Pool
from channel_extract import channel_list
from page_spider import get_links
from page_spider import get_info
from page_spider import tongcheng_url
from page_spider import tongcheng_info

def get_all_links_from(channel):
    # Walk the first 100 list pages of one channel and collect item urls.
    for num in range(1,101):
        get_links(channel,num)

if __name__ == '__main__':
    pool = Pool(processes=4)
    pool.map(get_all_links_from,channel_list.split())

# Resume from a breakpoint: only fetch detail pages whose urls have not been parsed yet.
# db_urls = [item['url'] for item in tongcheng_url.find()]
# db_infos = [item['url'] for item in tongcheng_info.find()]
# x = set(db_urls)
# y = set(db_infos)
# rest_urls = x - y
#
# if __name__ == '__main__':
#     pool = Pool(processes=4)
#     pool.map(get_info,rest_urls)
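
If you want the resume logic to be more robust, one option (my own addition, not part of the original scripts) is to deduplicate at the database level, so re-running a list page never writes the same url twice:

import pymongo

client = pymongo.MongoClient('localhost', 27017)
mydb = client['mydb']
tongcheng_url = mydb['tongcheng_url']

# One-time setup: a unique index rejects duplicate urls at the database level.
tongcheng_url.create_index([('url', pymongo.ASCENDING)], unique=True)

def save_url(url):
    # Upsert instead of insert_one, so re-crawling a page never creates duplicates.
    tongcheng_url.update_one({'url': url}, {'$setOnInsert': {'url': url}}, upsert=True)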

channel_list below holds the main entry links for the whole crawl: it is the list of every second-hand category page, and it is fairly easy to obtain. Note that the links extracted from the tags are not complete urls, so they have to be completed with the host.

import requests
from lxml import etree
from fake_useragent import UserAgent

start_url = 'http://bj.58.com/sale.shtml'  # index page of all second-hand categories
url_host = 'http://bj.58.com'              # the hrefs on the page are relative, so prepend the host

def get_channel_urls(url):
    headers = {"User-Agent": UserAgent().random}
    html = requests.get(url,headers=headers)
    selector = etree.HTML(html.text)
    infos = selector.xpath('//div[@class="lbsear"]/div/ul/li')

    for info in infos:
        class_urls = info.xpath('ul/li/b/a/@href')
        for class_url in class_urls:
            # Print the completed channel url; paste the output into channel_list below.
            print(url_host + class_url)

get_channel_urls(start_url)

channel_list = '''
    http://bj.58.com/shouji/
    http://bj.58.com/tongxunyw/
    http://bj.58.com/danche/
    http://bj.58.com/diandongche/
    http://bj.58.com/fzixingche/
    http://bj.58.com/sanlunche/
    http://bj.58.com/peijianzhuangbei/
    http://bj.58.com/diannao/
    http://bj.58.com/bijiben/
    http://bj.58.com/pbdn/
    http://bj.58.com/diannaopeijian/
    http://bj.58.com/zhoubianshebei/
    http://bj.58.com/shuma/
    http://bj.58.com/shumaxiangji/
    http://bj.58.com/mpsanmpsi/
    http://bj.58.com/youxiji/
    http://bj.58.com/ershoukongtiao/
    http://bj.58.com/dianshiji/
    http://bj.58.com/xiyiji/
    http://bj.58.com/bingxiang/
    http://bj.58.com/jiadian/
    http://bj.58.com/binggui/
    http://bj.58.com/chuang/
    http://bj.58.com/ershoujiaju/
    http://bj.58.com/yingyou/
    http://bj.58.com/yingeryongpin/
    http://bj.58.com/muyingweiyang/
    http://bj.58.com/muyingtongchuang/
    http://bj.58.com/yunfuyongpin/
    http://bj.58.com/fushi/
    http://bj.58.com/nanzhuang/
    http://bj.58.com/fsxiemao/
    http://bj.58.com/xiangbao/
    http://bj.58.com/meirong/
    http://bj.58.com/yishu/
    http://bj.58.com/shufahuihua/
    http://bj.58.com/zhubaoshipin/
    http://bj.58.com/yuqi/
    http://bj.58.com/tushu/
    http://bj.58.com/tushubook/
    http://bj.58.com/wenti/
    http://bj.58.com/yundongfushi/
    http://bj.58.com/jianshenqixie/
    http://bj.58.com/huju/
    http://bj.58.com/qiulei/
    http://bj.58.com/yueqi/
    http://bj.58.com/kaquan/
    http://bj.58.com/bangongshebei/
    http://bj.58.com/diannaohaocai/
    http://bj.58.com/bangongjiaju/
    http://bj.58.com/ershoushebei/
    http://bj.58.com/chengren/
    http://bj.58.com/nvyongpin/
    http://bj.58.com/qinglvqingqu/
    http://bj.58.com/qingquneiyi/
    http://bj.58.com/chengren/
    http://bj.58.com/xiaoyuan/
    http://bj.58.com/ershouqiugou/
    http://bj.58.com/tiaozao/

'''

Using the channel links collected above, we crawl the item links on each listing page and then parse every detail link to extract the data. The main program and the channel-list crawler should be fine; the problems are all in this last part. First, about halfway through a verification appears. It is a fairly simple slide captcha: you can solve it by hand (one or two times seem to be enough for the whole site), or add Selenium to simulate the slide (see the sketch after the code below). Second, the site structure may have been updated, so the parsing paths may no longer match.

import requests
from lxml import etree
import time
import pymongo
from fake_useragent import UserAgent

client = pymongo.MongoClient('localhost', 27017)
mydb = client['mydb']
tongcheng_url = mydb['tongcheng_url']    # item urls collected from the list pages
tongcheng_info = mydb['tongcheng_info']  # parsed detail-page data


def get_links(channel,pages):
    # Fetch one list page of a channel and store every item url it contains.
    headers = {"User-Agent": UserAgent().random}
    list_view = '{}pn{}/'.format(channel,str(pages))
    try:
        html = requests.get(list_view,headers=headers)
        time.sleep(2)  # be polite between requests
        selector = etree.HTML(html.text)
        if selector.xpath('//tr'):
            infos = selector.xpath('//tr')
            for info in infos:
                if info.xpath('td[2]/a/@href'):
                    url = info.xpath('td[2]/a/@href')[0]
                    tongcheng_url.insert_one({'url':url})
                else:
                    pass
        else:
            pass  # no listing rows on this page, probably past the last page
    except requests.exceptions.ConnectionError:
        pass

def get_info(url):
    # Fetch one detail page and store title, date, price, area and view count.
    headers = {"User-Agent": UserAgent().random}
    wb_data = requests.get(url,headers=headers)
    html = etree.HTML(wb_data.text)
    try:
        titles = html.xpath('string(//h1/text())')
        date = html.xpath('//div[@class="detail-title__info__text"][1]/text()')
        if html.xpath('//span[@class="infocard__container__item__main__text--price"]/text()'):
            prices = html.xpath('//span[@class="infocard__container__item__main__text--price"]/text()')
        else:
            prices = "无"
        if html.xpath('//div[@class="infocard__container__item__main"]/a/text()'):
            area = html.xpath('//div[@class="infocard__container__item__main"]/a/text()')
        else:
            area = "无"
        view = html.xpath('//div[@class="detail-title__info__text"][2]/text()')[0]
        info = {
            'title':titles,
            'date': date,
            'price':prices,
            'area':area,
            'view':view,
            'url':url
        }
        tongcheng_info.insert_one(info)

    except IndexError:
        pass  # page layout changed or the listing was taken down
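
As for the slide captcha mentioned above, the simplest approach is the manual one: detect when a request has been redirected to the verification page, solve the captcha once in a browser, and continue. Below is a minimal sketch of that idea (my own addition; the "verifycode" marker in the redirect url is an assumption, check the real verification page and adjust it):

import requests
from fake_useragent import UserAgent

def fetch_with_captcha_check(url):
    headers = {"User-Agent": UserAgent().random}
    resp = requests.get(url, headers=headers)
    # Assumption: the verification page can be recognised from the redirect url.
    # "verifycode" is only a placeholder marker; inspect the actual page and adjust.
    if 'verifycode' in resp.url.lower():
        input('Slide captcha detected -- solve it once in a browser, then press Enter...')
        resp = requests.get(url, headers=headers)
    return resp

If you prefer full automation, the same check can hand the url to Selenium and drag the slider with ActionChains, but the element locators depend on the current verification page, so I have not included them here.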

Those are the known problems. If more people end up reading this later, feel free to message me about them and I will keep improving the post. Thanks for reading.

 
