I recently worked on a web crawler project, scraping some public company information from a website. Since I had very little crawling experience, I started with a static crawler.
First I scraped each company's URL using requests and XPath. The code is as follows:
```python
import requests
import csv
from lxml import etree

# Listing pages 1-20 of the directory
data = ['https://www.11467.com/xian/dir/a-p' + str(i) + '.htm' for i in range(1, 21)]

def get_html(url):
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3732.400 QQBrowser/10.5.3819.400"
        }
        r = requests.get(url, headers=headers)
        r.encoding = r.apparent_encoding
        r.raise_for_status()
        return r.text
    except Exception as err:
        print(err)

url_list = []

def parser(html):
    try:
        html = etree.HTML(html)
        for row in html.xpath('//*[@id="il"]/div[3]/div/ul/li'):
            url = row.xpath('div[2]/h4/a/@href')[0]
            url_list.append(url)
    except Exception as err:
        print(err)

if __name__ == '__main__':
    for url in data:
        parser(get_html(url))
```
The result was as follows:
I was scraping the company URLs from all twenty pages, but I only got the first company's URL on each page. I then tried an absolute XPath to grab the second company's URL and got an "index out of range" error. After a while of consulting Baidu, I found the answer:

The page uses a JavaScript anti-crawling mechanism: requests only receives the initial HTML, and the remaining list items are rendered by JavaScript afterwards, so a dynamic crawler with Selenium is needed.
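One quick way to see this for yourself (a sketch of my own, not part of the project code): fetch the raw HTML the same way the static crawler does and count how many list items it actually contains.

```python
import requests
from lxml import etree

# Fetch one listing page exactly the way the static crawler does
headers = {"User-Agent": "Mozilla/5.0"}
r = requests.get('https://www.11467.com/xian/dir/a-p1.htm', headers=headers)
r.encoding = r.apparent_encoding
tree = etree.HTML(r.text)
# The browser shows a long list of companies, but the raw HTML only
# carries the first <li>; the rest are injected by JavaScript.
print(len(tree.xpath('//*[@id="il"]/div[3]/div/ul/li')))
```

If this count is far smaller than what the browser shows, requests plus lxml can never reach the missing entries, no matter which XPath you try.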
- Selenium and chromedriver
Selenium works like a robot: it can simulate human behavior in a browser and automate browser actions such as clicking, filling in data, and deleting cookies.
chromedriver is the driver program for the Chrome browser; Selenium needs it to control the browser. Naturally, each browser has its own driver.
- Installing Selenium and chromedriver
Open a terminal from Jupyter and run `pip install selenium` to complete the installation.
Then, matching your installed Chrome version, download the corresponding chromedriver.exe from the site below:
http://npm.taobao.org/mirrors/chromedriver/
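Once both are installed, a minimal smoke test (my own sketch; it assumes chromedriver.exe was saved to D:\, the same path used in the code below) confirms that Selenium can actually drive the browser:

```python
from selenium import webdriver

# Point Selenium at the downloaded driver (Selenium 3 style, as used throughout this post)
driver = webdriver.Chrome('D://chromedriver.exe')
driver.delete_all_cookies()          # one of the browser actions Selenium automates
driver.get('https://www.11467.com')  # open any page
print(driver.title)                  # if a title prints, the driver works
driver.quit()
```

If Chrome opens and a page title is printed, the versions match; a version mismatch between Chrome and chromedriver is the most common failure at this step.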
Scraping company URLs with a dynamic Selenium crawler
With the environment ready, I set out to scrape the company URLs with a dynamic crawler. The code is as follows:
```python
from selenium import webdriver
import csv
import time
from selenium.webdriver.chrome.options import Options
import requests

data = []
try:
    for i in range(1, 21):
        url = 'https://www.11467.com/xian/dir/a-p' + str(i) + '.htm'
        driver = webdriver.Chrome('D://chromedriver.exe')
        driver.get(url)
        # XPaths for up to 100 companies on each listing page
        page_list = ['//*[@id="il"]/div[3]/div/ul/li[' + str(j) + ']/div[2]/h4/a' for j in range(1, 101)]
        for page in page_list:
            for a_ in driver.find_elements_by_xpath(page):
                data.append(a_.get_attribute("href"))
        time.sleep(5)
        driver.close()
except Exception as err:
    print(err)
```
The dynamic crawler got past the anti-crawling mechanism and successfully scraped 2,000 records:
Fetching each company's data from the collected URLs
I needed this data, and since every detail page had a fixed layout I first tried a static crawler again; the same problem as before came up, so I went back to the safe option, the dynamic crawler.
While scraping, I found that each company's data came in a different format, so the extracted content was a mess; I switched to a different site (qcc.com).
The same dynamic approach again:
```python
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver import ActionChains

url_list = []

def login(driver):
    driver.delete_all_cookies()
    url = "https://www.qcc.com/weblogin?back=%2F"  # qcc.com login page
    driver.get(url)
    time.sleep(3)
    # Switch to password login
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[1]/div[2]/a').click()
    time.sleep(1)
    # Enter the username and password
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[1]/input').send_keys(username)
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[2]/input').send_keys(password)
    button = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[3]/div/div/div[1]/span')
    # Drag the slider captcha all the way to the right
    ActionChains(driver).click_and_hold(button).perform()
    ActionChains(driver).move_by_offset(xoffset=308, yoffset=0).perform()
    ActionChains(driver).release().perform()
    time.sleep(2)
    # Click the login button
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[4]/button/strong').click()
    time.sleep(0.5)
    # Search for the business type
    driver.find_element_by_xpath('//*[@id="searchkey"]').send_keys(seek)
    driver.find_element_by_xpath('//*[@id="indexSearchForm"]/div[1]/span').click()
    time.sleep(1)
    # XPaths for the 20 result rows on each page
    company_url = ['/html/body/div[1]/div[2]/div[2]/div[4]/div/div[2]/div/table/tr[' + str(i) + ']/td[3]/div/a[1]'
                   for i in range(1, 21)]
    for i in range(1, 7):
        # Click through the pagination links
        driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[2]/div[4]/nav/ul/li[{}]/a'.format(i)).click()
        time.sleep(2)
        for com in company_url:
            for a_ in driver.find_elements_by_xpath(com):
                url_list.append(a_.get_attribute("href"))  # collect each company's page URL
    print(url_list)

def main():
    while True:
        option = webdriver.ChromeOptions()
        option.add_experimental_option('excludeSwitches', ['enable-automation'])  # hide the webdriver automation flag
        option.add_argument("--disable-blink-features=AutomationControlled")
        option.add_argument("--no-sandbox")
        option.add_argument("--disable-dev-usage")
        option.add_experimental_option("prefs", {"profile.managed_default_content_settings.images": 2})  # skip loading images
        driver = webdriver.Chrome(executable_path=r"D:\chromedriver.exe", options=option)
        driver.set_page_load_timeout(15)
        login(driver)
        driver.close()

if __name__ == '__main__':
    username = ''  # your username
    password = ''  # your password
    seek = '互联网'  # business type to search for
    headers = {  # request headers
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36'}
    main()
```
The code is pretty long, so of course I ran into problems!
- After scraping too many pages, the site popped up a dialog, so only 100 records could be scraped
My workaround combined automation with manual setup: issue several searches (here, one per province) and scrape only the first hundred or so records from each.
```python
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver import ActionChains

company_url = []

def login(driver):
    driver.delete_all_cookies()
    url = "https://www.qcc.com/weblogin?back=%2F"  # qcc.com login page
    driver.get(url)
    time.sleep(3)
    # Switch to password login
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[1]/div[2]/a').click()
    time.sleep(1)
    # Enter the username and password
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[1]/input').send_keys(username)
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[2]/input').send_keys(password)
    button = driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[3]/div/div/div[1]/span')
    # Drag the slider captcha
    ActionChains(driver).click_and_hold(button).perform()
    ActionChains(driver).move_by_offset(xoffset=308, yoffset=0).perform()
    ActionChains(driver).release().perform()
    time.sleep(2)
    # Click the login button
    driver.find_element_by_xpath('/html/body/div[1]/div[3]/div/div[2]/div[3]/form/div[4]/button/strong').click()
    time.sleep(0.5)
    # Search results for '影视' (film & TV), one URL per province filter
    url_a = [
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22GD%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22HEN%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22JS%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22SH%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22ZJ%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22SC%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22SD%22%7D%5D%7D',
        'https://www.qcc.com/web/search?key=%E5%BD%B1%E8%A7%86&p={}&filter=%7B%22rchain%22%3A%5B%7B%22pr%22%3A%22HB%22%7D%5D%7D']
    num = 1
    for r in url_a:
        for j in range(1, 6):          # first 5 result pages per province
            driver.get(r.format(j))
            for i in range(1, 20):     # result rows on the page
                d = driver.find_element_by_xpath('/html/body/div[1]/div[2]/div[2]/div[4]/div/div[2]/div/table/tr[{}]/td[3]/div/a[1]'.format(i))
                print('Record {} ----->>>'.format(num), d.get_attribute("href"))  # the company's page URL
                num += 1
                company_url.append(d.get_attribute("href"))
            time.sleep(5)

def main():
    while True:
        option = webdriver.ChromeOptions()
        option.add_experimental_option('excludeSwitches', ['enable-automation'])  # hide the webdriver automation flag
        option.add_argument("--disable-blink-features=AutomationControlled")
        option.add_argument("--no-sandbox")
        option.add_argument("--disable-dev-usage")
        option.add_experimental_option("prefs", {"profile.managed_default_content_settings.images": 2})  # skip loading images
        driver = webdriver.Chrome(executable_path=r"D:\chromedriver.exe", options=option)
        driver.set_page_load_timeout(15)
        login(driver)

if __name__ == '__main__':
    username = ''  # your username
    password = ''  # your password
    headers = {  # request headers
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36'}
    main()
```
The results looked like this:
- With the URLs in hand, time to scrape each company's details!
```python
import time
import pandas as pd
from selenium import webdriver
from selenium.webdriver import ActionChains

data = []
driver = webdriver.Chrome('D://chromedriver.exe')
driver.delete_all_cookies()
driver.get('https://www.qcc.com/')

def grab(xpath):
    """Return the element's text, or None if the field is missing on this page."""
    try:
        return driver.find_element_by_xpath(xpath).text
    except Exception:
        return None

num = 1
for r in company_url:
    driver.get(r)
    shehui_xinyong   = grab('//*[@id="cominfo"]/div[2]/table/tr[1]/td[2]')   # unified social credit code
    gongsi_name      = grab('//*[@id="cominfo"]/div[2]/table/tr[1]/td[4]')   # company name
    qiyeleix         = grab('//*[@id="cominfo"]/div[2]/table/tr[5]/td[2]')   # enterprise type
    suoshudiqu       = grab('//*[@id="cominfo"]/div[2]/table/tr[6]/td[4]')   # region
    fadingdaibiaoren = grab('//*[@id="cominfo"]/div[2]/table/tr[2]/td[2]/div/div/span[2]/span/a')  # legal representative
    zhuceziben       = grab('//*[@id="cominfo"]/div[2]/table/tr[3]/td[2]')   # registered capital
    chengliriqi      = grab('//*[@id="cominfo"]/div[2]/table/tr[2]/td[6]')   # date of establishment
    yingyeqixian     = grab('//*[@id="cominfo"]/div[2]/table/tr[5]/td[4]')   # business term
    jingyingfanwei   = grab('//*[@id="cominfo"]/div[2]/table/tr[10]/td[2]')  # business scope
    data.append([shehui_xinyong, gongsi_name, fadingdaibiaoren, zhuceziben, qiyeleix,
                 yingyeqixian, suoshudiqu, jingyingfanwei, chengliriqi])
    print('Record {} ------->>>'.format(num), gongsi_name)
    num += 1
    time.sleep(20)
```
This part is really, really important!!!!
The site has an anti-crawling mechanism, so mimic human clicking as closely as you can: stay on each page for a while, preferably a random amount of time. Because I got greedy for speed while scraping, the anti-crawler banned my IP address and the code above could no longer scrape. I regretted it badly!!!
But could that stop me? No way!!
I switched to a static crawler that mimics a human browsing the pages, pausing for a random interval between requests:
```python
import requests
import csv
from lxml import etree
import time
import random

def get_html(url):
    try:
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.25 Safari/537.36 Core/1.70.3732.400 QQBrowser/10.5.3819.400"
        }
        r = requests.get(url, timeout=35, headers=headers)
        r.encoding = r.apparent_encoding
        r.raise_for_status()
        return r.text
    except Exception as err:
        print(err)

data = []

def parser(html, num):
    try:
        html = etree.HTML(html)

        def first_text(xpath):
            """Return the first text node matched by xpath, or None if absent."""
            try:
                return html.xpath(xpath)[0]
            except Exception:
                return None

        shehui_xinyong   = first_text('//*[@id="cominfo"]/div[2]/table/tr[1]/td[2]/text()')   # unified social credit code
        gongsi_name      = first_text('//*[@id="cominfo"]/div[2]/table/tr[1]/td[4]/text()')   # company name
        qiyeleix         = first_text('//*[@id="cominfo"]/div[2]/table/tr[5]/td[2]/text()')   # enterprise type
        suoshudiqu       = first_text('//*[@id="cominfo"]/div[2]/table/tr[6]/td[4]/text()')   # region
        fadingdaibiaoren = first_text('//*[@id="cominfo"]/div[2]/table/tr[2]/td[2]/div/div/span[2]/span/a/text()')  # legal representative
        zhuceziben       = first_text('//*[@id="cominfo"]/div[2]/table/tr[3]/td[2]/text()')   # registered capital
        chengliriqi      = first_text('//*[@id="cominfo"]/div[2]/table/tr[2]/td[6]/text()')   # date of establishment
        yingyeqixian     = first_text('//*[@id="cominfo"]/div[2]/table/tr[5]/td[4]/text()')   # business term
        jingyingfanwei   = first_text('//*[@id="cominfo"]/div[2]/table/tr[10]/td[2]/text()')  # business scope
        data.append([shehui_xinyong, gongsi_name, fadingdaibiaoren, zhuceziben, qiyeleix,
                     yingyeqixian, suoshudiqu, jingyingfanwei, chengliriqi])
        print('Record {} ------->>>'.format(num), gongsi_name)
    except Exception as err:
        print(err)

if __name__ == '__main__':
    No = 1
    for i in company_url:
        parser(get_html(i), No)
        No += 1
        time.sleep(random.randint(15, 25))  # random 15-25 s pause to look human
```
In the end the effort paid off: all the company data I needed was scraped successfully!!
I also saved the data locally.
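A save step along these lines would do it (a sketch; the column labels and file name are my own, matching the order of the fields appended to `data` above):

```python
import pandas as pd

columns = ['credit_code', 'company_name', 'legal_representative',
           'registered_capital', 'enterprise_type', 'business_term',
           'region', 'business_scope', 'established']
# utf-8-sig keeps the Chinese text readable when the CSV is opened in Excel
pd.DataFrame(data, columns=columns).to_csv('company_data.csv',
                                           index=False, encoding='utf-8-sig')
```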
To wrap up, here are the problems I ran into while crawling:
- Anti-crawling mechanisms blocked me from getting the data I wanted
- Inconsistent data formats across pages made it impossible to extract the needed fields cleanly
- Pop-ups triggered by scraping too many pages broke the program
- Delays set too short got my IP address banned