Using the Web Scraper extension to quickly scrape paginated data and second-level page content (58.com, JD, Baidu)

Web Scraper is a Chrome browser extension that makes it easy to scrape web page data, without having to handle the usual crawler headaches such as login, CAPTCHAs, or asynchronously loaded content.

First, here is the sitemap used to scrape listing data from 58.com:

{"_id":"hefeitongcheng","startUrl":["https://hf.58.com/shushanqu/baihuochaoshi/s32/?PGTID=0d306b32-0034-8449-027b-ed96503664b1&ClickID=1"],"selectors":[{"id":"click","type":"SelectorElementClick","parentSelectors":["_root"],"selector":".list-main-style li","multiple":true,"delay":"5000","clickElementSelector":"strong span","clickType":"clickMore","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueText"},{"id":"link","type":"SelectorLink","parentSelectors":["click"],"selector":".title a","multiple":false,"delay":0},{"id":"name","type":"SelectorText","parentSelectors":["link"],"selector":"h1","multiple":false,"regex":"","delay":0},{"id":"jiage","type":"SelectorText","parentSelectors":["link"],"selector":".house_basic_title_money span","multiple":false,"regex":"","delay":0},{"id":"add","type":"SelectorText","parentSelectors":["link"],"selector":"p.p_2","multiple":false,"regex":"","delay":0}]}

Web Scraper workflow and key points:

After installing the Web Scraper extension, scraping takes three steps:
1. Create new sitemap (create a scraping project)
2. Select the content to scrape on the page with point-and-click operations
3. Start the scrape and download the data as CSV

The most critical part is step 2, which comes down to two points:

  1. First select the data block as an Element selector; since each data block repeats on the page, check Multiple.
  2. Inside the data block, select the individual data fields you need (these become the columns of the exported spreadsheet).
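The two points above can be sketched as a minimal sitemap: one Element selector with Multiple enabled for the repeating block, and Text selectors nested under it for the fields. The URL and CSS selectors here are hypothetical placeholders, not from any of the examples in this post:

```json
{
  "_id": "minimal-example",
  "startUrl": ["https://example.com/list"],
  "selectors": [
    {
      "id": "item",
      "type": "SelectorElement",
      "parentSelectors": ["_root"],
      "selector": ".list .item",
      "multiple": true,
      "delay": 0
    },
    {
      "id": "title",
      "type": "SelectorText",
      "parentSelectors": ["item"],
      "selector": ".title",
      "multiple": false,
      "regex": "",
      "delay": 0
    },
    {
      "id": "price",
      "type": "SelectorText",
      "parentSelectors": ["item"],
      "selector": ".price",
      "multiple": false,
      "regex": "",
      "delay": 0
    }
  ]
}
```

Each row of the exported CSV corresponds to one `item` element, with `title` and `price` as its columns.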

The key to scraping large amounts of data is controlling pagination.
Pagination falls into three cases:

1. URL parameter pagination (the most regular form): ?page=2, or a range such as ?page=[1-27388]

2. Scroll-to-load, or a "Load more" button that appends data to the page: use Element scroll down

3. Clicking numbered page links (including a "Next page" link): use Link or Element click
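For case 1, Web Scraper lets you write the page range directly into the start URL using its [from-to] range notation, so no click selector is needed at all. A sketch under that assumption (the URL and selector are hypothetical):

```json
{
  "_id": "url-paging-example",
  "startUrl": ["https://example.com/search?page=[1-100]"],
  "selectors": [
    {
      "id": "item",
      "type": "SelectorElement",
      "parentSelectors": ["_root"],
      "selector": ".result",
      "multiple": true,
      "delay": 0
    },
    {
      "id": "name",
      "type": "SelectorText",
      "parentSelectors": ["item"],
      "selector": ".name",
      "multiple": false,
      "regex": "",
      "delay": 0
    }
  ]
}
```

Web Scraper expands the range into one start URL per page (?page=1 through ?page=100) and scrapes each in turn, which is why this is the most regular and reliable of the three cases.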

Example A: scraping Huawei P30 price information from JD

{"_id":"huaweip30","startUrl":["https://search.jd.com/Search?keyword=%E5%8D%8E%E4%B8%BAp30%20512&enc=utf-8&wq=%E5%8D%8E%E4%B8%BAp30%20512&pvid=ed449bf16e44461fac90ff6fae2e66cd"],"selectors":[{"id":"element","type":"SelectorElementClick","parentSelectors":["_root"],"selector":"div.gl-i-wrap","multiple":true,"delay":"1500","clickElementSelector":".p-num a:nth-of-type(3)","clickType":"clickOnce","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueText"},{"id":"name","type":"SelectorText","parentSelectors":["element"],"selector":"a em","multiple":false,"regex":"","delay":0},{"id":"jiage","type":"SelectorText","parentSelectors":["element"],"selector":"div.p-price","multiple":false,"regex":"","delay":0}]}

Example B: scraping keyword search results from Baidu

{"_id":"wailaizhu","startUrl":["https://www.baidu.com/s?wd=wailaizhu%20h0101&pn=0&oq=wailaizhu%20h0101&tn=baiduhome_pg&ie=utf-8&rsv_idx=2&rsv_pq=f62d151f0001156d&rsv_t=5b15EoMWRlm3%2BeroyWXBKI%2FDZ3H0BlGKJ6lNa6mmYBo4nNDUeJNeeN8BvgiE9S9Orivd"],"selectors":[{"id":"element","type":"SelectorElementClick","parentSelectors":["_root"],"selector":"div#content_left","multiple":true,"delay":"1500","clickElementSelector":"a span.pc","clickType":"clickOnce","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueText"},{"id":"name","type":"SelectorText","parentSelectors":["element"],"selector":"a","multiple":false,"regex":"","delay":0},{"id":"body","type":"SelectorText","parentSelectors":["element"],"selector":"_parent_","multiple":false,"regex":"","delay":0}]}
