Distributed crawling with scrapy_redis
First install Redis, then make sure you can connect to it.
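A quick way to check the connection is with the redis-py client (a minimal sketch, assuming Redis is running locally on the default port 6379):

import redis

# Ping the local Redis server; adjust host/port for a remote instance
r = redis.Redis(host="127.0.0.1", port=6379)
print(r.ping())  # True means Redis is reachable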
scrapy_redis is more efficient and faster than plain Scrapy because multiple spider processes, even on different machines, can share one request queue and one duplicate filter stored in Redis. The code differs in three places:
1. The spider inherits from a different parent class (you can check the scrapy_redis source yourself)
2. A redis_key attribute is added
3. The following four lines of settings are added to settings.py
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_PERSIST = True
REDIS_URL = "redis://127.0.0.1:6379"
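Together these four settings move the duplicate filter and the scheduler's request queue into Redis, keep them there after the spider closes (SCHEDULER_PERSIST = True, which lets a crawl resume or be shared across machines), and point everything at the server in REDIS_URL. Once a crawl is running you can watch this state in Redis; here is a small sketch with redis-py, assuming the default scrapy_redis key names of the form <spider>:requests and <spider>:dupefilter:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
# List the keys scrapy_redis maintains for the "dangdang" spider,
# e.g. dangdang:requests (shared queue) and dangdang:dupefilter (fingerprints)
for key in r.keys("dangdang:*"):
    print(key, r.type(key))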
Taking crawling Dangdang books as an example, the code is as follows:
from copy import deepcopy
import urllib.parse

import scrapy
from scrapy_redis.spiders import RedisSpider


class DangdangSpider(RedisSpider):
    name = 'dangdang'
    allowed_domains = ['dangdang.com']
    # The spider blocks until a start URL is lpush-ed to this key in Redis
    redis_key = "dangdang"
    # start_urls = ['http://book.dangdang.com/']  # replaced by redis_key

    def parse(self, response):
        # Group by top-level category
        div_list = response.xpath("//div[@class='con flq_body']/div")
        for div in div_list:
            item = {}
            item["b_cate"] = div.xpath("./dl/dt//text()").extract()
            item["b_cate"] = [i.strip() for i in item["b_cate"] if len(i.strip()) > 0]
            # Middle-level categories
            dl_list = div.xpath("./div//dl[@class='inner_dl']")
            for dl in dl_list:
                item["m_cate"] = dl.xpath("./dt//text()").extract()
                item["m_cate"] = [i.strip() for i in item["m_cate"] if len(i.strip()) > 0][0]
                # Small (leaf) categories
                a_list = dl.xpath("./dd/a")
                for a in a_list:
                    item["s_href"] = a.xpath("./@href").extract_first()
                    item["s_cate"] = a.xpath("./@title").extract_first()
                    if item["s_href"] is not None:
                        yield scrapy.Request(item["s_href"], callback=self.parse_book_list,
                                             dont_filter=True, meta={"item": deepcopy(item)})

    def parse_book_list(self, response):
        item = response.meta["item"]
        li_list = response.xpath("//ul[@class='bigimg']/li")
        for li in li_list:
            item["book_img"] = li.xpath("./a[@class='pic']/img/@src").extract_first()
            # Lazy-loaded images leave a placeholder in @src; the real URL is in @data-original
            if item["book_img"] == "images/model/guan/url_none.png":
                item["book_img"] = li.xpath("./a[@class='pic']/img/@data-original").extract_first()
            item["book_name"] = li.xpath("./p[@class='name']/a/@title").extract_first()
            item["book_info"] = li.xpath("./p[@class='detail']/text()").extract_first()
            item["book_price"] = li.xpath(".//span[@class='search_now_price']/text()").extract_first()
            item["book_author"] = li.xpath("./p[@class='search_book_author']/span[1]/a/text()").extract()
            item["book_publish_data"] = li.xpath("./p[@class='search_book_author']/span[2]/text()").extract_first()
            item["book_press"] = li.xpath("./p[@class='search_book_author']/span[3]/a/text()").extract_first()
            print(item)
        # Next page
        next_url = response.xpath("//li[@class='next']/a/@href").extract_first()
        if next_url is not None:
            next_url = urllib.parse.urljoin(response.url, next_url)
            yield scrapy.Request(next_url, callback=self.parse_book_list,
                                 dont_filter=True, meta={"item": item})
The program is run the same way as with plain Scrapy: scrapy crawl dangdang
After it starts, the spider does not crawl anything yet: a RedisSpider blocks and waits for a start URL to appear in Redis under its redis_key. You need to push one in on the Redis side, and then the crawl proceeds.
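For this spider the key is dangdang, and a natural start URL is the commented-out start_urls value; from redis-cli:

lpush dangdang http://book.dangdang.com/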
If the spider still sits idle after the lpush (user-agent, robots.txt, and IP-blocking problems are not covered here), a likely cause is this: your Redis database is the one installed on Windows, and you ran lpush through that local Windows redis-cli, so the redis_key was inserted into your local database and is not visible to the Redis instance your crawl server reads from.
Instead, connect to the crawl server's Redis explicitly with redis-cli -h <ip> -p 6379, and run the lpush there.
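For example (substitute your crawl server's address for <server-ip>):

redis-cli -h <server-ip> -p 6379
lpush dangdang http://book.dangdang.com/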