PS: as per our usual pre-class ritual, a quick aside. Last lesson we built the simplest possible crawler, but in a real network environment not every page can be scraped that way: pages that fetch their data asynchronously via AJAX won't yield anything to the earlier approach. So today we'll look at how to scrape pages whose data is loaded asynchronously. (While hunting for example pages I noticed that some of Jianshu's own pages load this way too; after much hesitation I decided to spare Jianshu~~)
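The core idea is worth stating up front: with an AJAX page, the HTML you download contains no article data at all. The browser fires a second request to a JSON API, and that is the URL your crawler should hit instead. A minimal sketch of parsing such a response, using a hard-coded sample string whose field names merely mimic the kind of JSON these endpoints return (they are illustrative, not any site's real schema):

```python
import json

# Stand-in for the body of an AJAX/JSON response; in a real crawler this
# string would come from requests.get(api_url).text instead.
sample = '{"now": 1464766521, "ok": true, "result": [{"id": 1, "title": "a"}, {"id": 2, "title": "b"}]}'

data = json.loads(sample)      # parse the JSON body into a Python dict
articles = data["result"]      # the payload usually lives under one key

print(len(articles))           # -> 2
print(articles[0]["title"])    # -> a
```

To find the real API URL, open the browser's developer tools, switch to the Network tab, filter by XHR, and reload the page; the JSON request will show up there.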
Code preview
#coding:utf-8
from bs4 import BeautifulSoup
import requests
import json
import pymongo

url = 'http://www.guokr.com/scientific/'

def dealData(url):
    # connect to the local MongoDB instance and pick a database/collection
    client = pymongo.MongoClient('localhost', 27017)
    guoke = client['guoke']
    guokeData = guoke['guokeData']
    web_data = requests.get(url)
    # the endpoint returns JSON, not HTML, so parse it directly
    datas = json.loads(web_data.text)
    print(datas.keys())
    for data in datas['result']:
        guokeData.insert_one(data)  # store each article record

def start():
    urls = ['http://www.guokr.com/apis/minisite/article.json?retrieve_type=by_subject&limit=20&offset={}&_&#