Preface
Scrapy is an excellent Python crawling framework. While recently using Scrapy to crawl Douban music data, I got a proper run-in with its anti-scraping mechanisms. Douban does offer APIs for retrieving this information, but I wanted to crawl the site for practice anyway.
Main Content
Common anti-scraping mechanisms include the following:
1. Request header checks: cookies, User-Agent, Referer, even Accept-Language. This is the most basic mechanism.
2. Access frequency checks: if one IP hits the server too often in a short time with the same cookies, it gets flagged as a bot. You may then be asked to log in before accessing the site, or to solve a captcha, or your IP may simply be banned.
3. Captcha verification, which a crawler cannot easily bypass.
4. Dynamically generated page elements that only appear after JavaScript has run. Many sites that use Ajax, for example, have dynamically generated elements; scraping the raw page source will miss them.
5. Form protections, such as server-generated random tokens added to the fields being checked, or honeypot form elements. A honeypot is simply a form element designed to trick bots, such as the following three:
<a href='...' style='display:none;'> #invisible
<input type='hidden' ...> #hidden field
<input style='position:absolute; right:50000px;overflow-x:hidden;' ...> #shifted 50000px to the right with the scrollbar hidden, so it is off any screen and a human never sees it
If you interact with any of these elements, it shows you are reading the raw page source directly, which marks you as a bot, or at least not a normal human visitor.
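Honeypot spotting can be sketched outside a browser too. The following is a minimal illustration (not from the original post) using Python's standard `html.parser`; the `right:50000px` check is a naive heuristic matching the examples above, not a general rule:

```python
# A minimal sketch of spotting honeypot form elements in raw HTML with
# Python's standard html.parser. The style checks are naive heuristics
# tailored to the three example elements above.
from html.parser import HTMLParser

class HoneypotDetector(HTMLParser):
    """Collects tags that are hidden or pushed far off-screen."""
    def __init__(self):
        super().__init__()
        self.honeypots = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        style = attrs.get('style', '').replace(' ', '')
        if ('display:none' in style
                or 'right:50000px' in style
                or attrs.get('type') == 'hidden'):
            self.honeypots.append(tag)

detector = HoneypotDetector()
detector.feed("""
<a href='/trap' style='display:none;'>trap</a>
<input type='hidden' name='token'>
<input style='position:absolute; right:50000px; overflow-x:hidden;' name='email'>
<input type='text' name='comment'>
""")
print(detector.honeypots)  # the three trap elements; the real text field is not flagged
```

A bot that fills in or follows only the elements this detector does not flag is far less likely to trip the trap.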
Counter-strategies against these mechanisms:
1. Set request headers manually. Open the browser's developer tools (F12), look at the Network tab, and you can see the Request Headers. If you don't mind the effort, copy the entire captured header set into your program's requests.
2. Add `time.sleep` calls, or set `download_delay` and the like, so the crawler pauses between bursts; you can even randomize the crawl timing so your program looks more human, or at least less robotic. Some sites don't require cookies, in which case you can omit them. If the site bans IPs, set up an IP proxy pool.
3. For captchas: image recognition, text recognition, OCR. Several companies offer free products. If those aren't good enough, train your own; Google maintains the famous open-source OCR library Tesseract, and there are plenty of tutorials online.
4. For dynamic pages, the simplest and most direct approach is Selenium. It was originally an automated-testing framework, but it drives a real browser, so it can scrape pages after JavaScript has loaded. Paired with the headless browser PhantomJS, it could crawl dynamic pages quickly. (Recent versions of Selenium no longer support PhantomJS; that small project's clever idea has finally folded. Use Headless Chrome or Headless Firefox instead, or stick with an older Selenium release.)
5. Selenium again: it can also simulate mouse clicks, drags, and other actions, and because it observes the page from a real browser's point of view, it can identify honeypots. Here is a simple bit of logic for checking links:
from selenium import webdriver

driver = webdriver.PhantomJS(executable_path='')  # path to the PhantomJS binary
driver.get('http://')  # target URL
links = driver.find_elements_by_tag_name('a')
for link in links:
    if not link.is_displayed():  # invisible links are likely honeypots
        print('this link is not visible on screen')
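Strategy 2 above, pausing a random amount between requests, can be sketched in plain Python. The timing bounds here are illustrative, not from the original post:

```python
# A minimal sketch of a human-looking randomized pause between requests.
# The bounds are illustrative; pick values appropriate for the target site.
import random
import time

def polite_sleep(min_s=1.0, max_s=3.0):
    """Sleep a random duration so request timing doesn't look mechanical."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# Tiny bounds here just so the demo returns quickly.
print(polite_sleep(0.01, 0.03))
```

Calling `polite_sleep()` between requests spreads the hits out irregularly, which looks much less like a machine than a fixed interval does.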
Hands-on with Douban
Create a Scrapy project named douban for crawling Douban music reviews. On the command line:
scrapy startproject douban
The douban project's settings.py (these are Scrapy settings, so they belong in settings.py rather than __init__.py):
# -*- coding: utf-8 -*-
BOT_NAME = 'douban'
SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'
DOWNLOAD_DELAY = 2  # throttles the crawl speed; with RANDOMIZE_DOWNLOAD_DELAY (on by default) Scrapy automatically waits between 0.5*DOWNLOAD_DELAY and 1.5*DOWNLOAD_DELAY, which is very convenient
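The randomization Scrapy applies here (controlled by the RANDOMIZE_DOWNLOAD_DELAY setting, enabled by default) can be simulated in plain Python to see the effective wait range:

```python
# Simulates Scrapy's randomized delay: with RANDOMIZE_DOWNLOAD_DELAY
# enabled (the default), each wait is drawn uniformly from
# 0.5 * DOWNLOAD_DELAY to 1.5 * DOWNLOAD_DELAY.
import random

DOWNLOAD_DELAY = 2

def effective_delay(base=DOWNLOAD_DELAY):
    return random.uniform(0.5 * base, 1.5 * base)

print(effective_delay())  # somewhere between 1.0 and 3.0 seconds
```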
Rewrite items.py:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy
class MusicReviewItem(scrapy.Item):
    review_title = scrapy.Field()
    review_content = scrapy.Field()
    review_author = scrapy.Field()
    review_music = scrapy.Field()
    review_time = scrapy.Field()
    review_url = scrapy.Field()
Under spiders, create music_review.py:
# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from douban.items import MusicReviewItem


class ReviewSpider(CrawlSpider):
    name = 'review'
    allowed_domains = ['music.douban.com']
    # This post is mainly about anti-scraping, so pick a simple target: the track "Fade"
    start_urls = ['https://music.douban.com/subject/26480723/']
    # The Rule regex restricts which pages get crawled
    rules = (Rule(LinkExtractor(allow=r"/review/\d+/$"), callback="parse_review", follow=True),)

    def parse_review(self, response):
        try:
            item = MusicReviewItem()
            item['review_title'] = "".join(response.xpath('//*[@property="v:summary"]/text()').extract())
            content = "".join(
                response.xpath('//*[@id="link-report"]/div[@property="v:description"]/text()').extract())
            item['review_content'] = content.strip().replace("\n", " ")
            if len(item['review_content']) < 1:  # some reviews' HTML is split across multiple <p> tags
                item['review_content'] = "".join(
                    response.xpath('//*[@id="link-report"]/div[@property="v:description"]/p/text()').extract())
            item['review_author'] = "".join(response.xpath('//*[@property="v:reviewer"]/text()').extract())
            item['review_music'] = "".join(response.xpath('//*[@class="main-hd"]/a[2]/text()').extract())
            item['review_time'] = "".join(response.xpath('//*[@class="main-hd"]/p/text()').extract())
            item['review_url'] = response.url
            yield item
        except Exception as e:
            self.logger.error(e)
Create a new ippool.py for adding proxies:
# -*- coding: utf-8 -*-
import random

# Import IPPOOL from settings.py
from .settings import IPPOOL
from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware


class ippool(HttpProxyMiddleware):
    def __init__(self, ip=''):
        self.ip = ip

    def process_request(self, request, spider):
        ip = random.choice(IPPOOL)
        print("Current proxy IP: " + ip)
        request.meta["proxy"] = "https://" + ip
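The middleware's core step, picking a random proxy and building the value that goes into `request.meta["proxy"]`, can be exercised in isolation. The pool entries below are placeholders, not live proxies:

```python
# Stand-alone sketch of the ippool middleware's core step: pick a random
# entry from the pool and build the request.meta["proxy"] value.
# The pool entries are placeholders, not live proxies.
import random

IPPOOL = [
    "110.72.33.3:8123",
    "203.0.113.7:8080",  # hypothetical extra entry
]

def pick_proxy(pool):
    ip = random.choice(pool)
    return "https://" + ip

print(pick_proxy(IPPOOL))
```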
Correspondingly, write the following into the settings.py configuration file:
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware':123,
'douban.ippool.ippool' : 125,
}
IPPOOL = [
"110.72.33.3:8123"
]
In addition, add a User-Agent. Simply take all of the captured headers except cookies and write them into settings.py:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language':'zh-CN,zh;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'Host': 'music.douban.com',
'Referer': 'https://www.baidu.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
}
Since proxy IPs are unstable, retry and timeout settings are also necessary. Scrapy's built-in middleware scrapy.downloadermiddlewares.retry.RetryMiddleware can be used directly. Add all of this to settings.py:
RETRY_ENABLED = True
RETRY_TIMES = 10
# RETRY_HTTP_CODES defaults to [500, 502, 503, 504, 408]
DOWNLOAD_TIMEOUT = 10
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.retry.RetryMiddleware':100, # this line is newly added
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware':123,
'douban.ippool.ippool' : 125,
}
Finally, write a run.py to run the whole spider and save the output as JSON:
from scrapy import cmdline
cmdline.execute("scrapy crawl review -o review.json".split())
When the proxy connection fails, or times out, the run looks like this:
When a stable, usable proxy IP is available, it looks like this instead; the anti-scraping mechanisms are bypassed and the content is scraped successfully:
There is one small problem, though: the crawl does not end normally, finishing with an SSL warning:
For this issue, see the relevant answer on StackOverflow.
Create context.py under the douban project, with the following content:
from OpenSSL import SSL
from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory


class CustomContextFactory(ScrapyClientContextFactory):
    """
    Custom context factory that allows SSL negotiation.
    """
    def __init__(self, method=SSL.SSLv23_METHOD):
        # Use SSLv23_METHOD so we can use protocol negotiation
        self.method = method
Then add this to settings:
DOWNLOADER_CLIENTCONTEXTFACTORY = 'douban.context.CustomContextFactory'
Run it again and the warning is gone.
Finally, the complete settings.py:
# -*- coding: utf-8 -*-
# Scrapy settings for douban project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://doc.scrapy.org/en/latest/topics/settings.html
# https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'douban'
SPIDER_MODULES = ['douban.spiders']
NEWSPIDER_MODULE = 'douban.spiders'
RETRY_ENABLED = True
RETRY_TIMES = 10
# RETRY_HTTP_CODES defaults to [500, 502, 503, 504, 408]
DOWNLOAD_TIMEOUT = 10
DOWNLOAD_DELAY = 2
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language':'zh-CN,zh;q=0.9',
'Cache-Control': 'max-age=0',
'Connection': 'keep-alive',
'Host': 'music.douban.com',
'Referer': 'https://www.baidu.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'
}
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.retry.RetryMiddleware':100,
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware':123,
'douban.ippool.ippool' : 125,
}
DOWNLOADER_CLIENTCONTEXTFACTORY = 'douban.context.CustomContextFactory'
# Proxy IP pool
IPPOOL = [
"110.72.33.3:8123"
]
Now look at the extracted JSON file. It appears to be saved with unicode escapes; running it through an online decoder to check:
Success!
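Those \uXXXX sequences come from JSON's default ASCII-safe encoding. A quick sketch shows the effect; and in Scrapy 1.2+ you can skip the online decoder entirely by setting FEED_EXPORT_ENCODING = 'utf-8' in settings.py:

```python
# Why the exported JSON shows unicode escapes: json output is ASCII-safe
# by default. ensure_ascii=False keeps the Chinese text readable.
# (In Scrapy 1.2+ the analogous fix is FEED_EXPORT_ENCODING = 'utf-8'.)
import json

item = {"review_music": "豆瓣音乐"}
escaped = json.dumps(item)                       # ASCII-safe \uXXXX escapes
readable = json.dumps(item, ensure_ascii=False)  # keeps the original characters
print(escaped)
print(readable)
```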
Summary
The Scrapy framework provides many very practical features out of the box. Keep practicing; you only truly master it by hitting errors and working through them. When you don't know what anti-scraping measures a site uses, be careful, because getting your IP banned is a real pain. The source code has been uploaded to my GitHub; feedback welcome.