1. Preparation: see the previous post for the setup details.
2. Required libraries: requests, BeautifulSoup, xlwt, lxml
  1. requests: sends the HTTP requests that fetch each page
  2. BeautifulSoup: parses the fetched HTML so the information we need can be extracted
  3. xlwt: writes the results to an Excel spreadsheet
  4. lxml: a fast XML/HTML parser that BeautifulSoup can use as its backend
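To see how BeautifulSoup fits into this workflow, here is a minimal sketch that parses a made-up HTML snippet (the tag names and classes are invented for illustration; the stdlib `html.parser` backend is used so it runs even without lxml installed, but you can pass `'lxml'` instead once that package is available):

```python
from bs4 import BeautifulSoup

# A made-up snippet standing in for a fetched results page.
html = '''
<table>
  <tr><td class="name">Company A</td><td class="code">91320000001</td></tr>
  <tr><td class="name">Company B</td><td class="code">91320000002</td></tr>
</table>
'''

# 'html.parser' is the stdlib backend; swap in 'lxml' for speed.
soup = BeautifulSoup(html, 'html.parser')
rows = [(tr.find('td', class_='name').text, tr.find('td', class_='code').text)
        for tr in soup.find_all('tr')]
print(rows)  # [('Company A', '91320000001'), ('Company B', '91320000002')]
```

The same `find`/`find_all` calls are what we will use later to pick company fields out of the real pages, and each tuple in `rows` is one row to hand to xlwt.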
3. Approach: the qichacha site has anti-crawling measures, so a bare request gets blocked. We therefore simulate a browser request to get around them: open the qichacha site in a browser, copy the cookie and the rest of the request headers, send our requests with those attached, and then use BeautifulSoup to walk the page nodes and capture the information we need.
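Concretely, "simulating a browser" here just means attaching the copied headers and cookie to every request. A minimal sketch, where the cookie string is a placeholder you would replace with the one copied from your own browser's developer tools:

```python
import requests

headers = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'),
    # Placeholder cookie — paste the real value from your browser.
    'Cookie': 'acw_tc=placeholder; QCCSESSID=placeholder',
}

# A Session sends these headers with every request it makes,
# so each page fetch looks like it comes from the same browser.
session = requests.Session()
session.headers.update(headers)
print(session.headers['User-Agent'])
```

Using a `Session` (rather than bare `requests.get` calls) also keeps any cookies the server sets along the way, which helps with sites that track state across requests.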
4. Source code (the original listing breaks off after `headers`; the few lines after that point are a minimal sketch of the fetch-and-parse step described in section 3, and the node extraction depends on the page structure):

```python
# encoding: utf-8
import requests
from bs4 import BeautifulSoup
import lxml
import xlwt
import re


def craw():
    file = xlwt.Workbook()
    table = file.add_sheet('sheet1', cell_overwrite_ok=True)
    print('Crawling, please wait...')
    for n in range(1, 500):
        print('Page ' + str(n) + '......')
        url = 'https://www.qichacha.com/g_JS_' + str(n) + '.html'
        user_agent = ('Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36')
        headers = {'User-Agent': user_agent}
        # Fetch the page with browser-like headers; add the cookie copied
        # from your browser here as well to get past the anti-crawl checks.
        response = requests.get(url, headers=headers)
        soup = BeautifulSoup(response.text, 'lxml')
        # ... pick the fields you need out of soup and write them to table ...
    file.save('qichacha.xls')
```