1. Create the project
scrapy startproject <project name>
scrapy startproject guangdong_chizheng
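For orientation, scrapy startproject generates Scrapy's standard layout shown below; the later steps edit items.py, pipelines.py and settings.py:
guangdong_chizheng/
    scrapy.cfg                # deploy configuration
    guangdong_chizheng/       # the project's Python module
        __init__.py
        items.py              # Item definitions (step 3)
        middlewares.py        # downloader / spider middlewares
        pipelines.py          # item pipelines (step 6)
        settings.py           # project settings (step 7)
        spiders/              # spider modules live here (step 2)
            __init__.py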
2. Create the spider .py file
cd guangdong_chizheng
scrapy genspider example example.com
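The spider in this article is named jmtv and crawls jmtv.cn, so the command actually used was presumably:
scrapy genspider jmtv jmtv.cn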
Take a look at the generated code:
# -*- coding: utf-8 -*-
import scrapy


class JmtvSpider(scrapy.Spider):
    name = 'jmtv'
    allowed_domains = ['jmtv.cn']
    start_urls = ['http://jmtv.cn/']

    def parse(self, response):
        pass
A quick walkthrough of the code structure:
name = 'jmtv'
The unique name of the spider within the project, used to distinguish it from other spiders.
allowed_domains = ['jmtv.cn']
The list of domains the spider is allowed to crawl: if an initial or follow-up request URL is not under one of these domains, the request is filtered out.
start_urls = ['http://jmtv.cn/']
The list of URLs the spider starts crawling from; the initial requests are generated from it.
def parse(self, response):
By default, once the requests built from the URLs in start_urls have been downloaded, each response is passed to this method as its argument. The method is responsible for parsing the response, extracting data, and generating further requests to process.
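As a minimal sketch of that contract (the selector and URL below are illustrative, not from the target site), parse can yield both data and new requests:
def parse(self, response):
    # extract data from the response...
    yield {'title': response.xpath('string(//title)').get()}
    # ...or schedule further pages back into this same method
    # yield scrapy.Request('http://jmtv.cn/some-list-page', callback=self.parse)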
3. Create the Item and define the fields to scrape
# An Item is a container for the scraped data; it defines the fields we want to extract
import scrapy


class GuangdongChizhengItem(scrapy.Item):
    site_id = scrapy.Field()
    site_name = scrapy.Field()
    title = scrapy.Field()
    url = scrapy.Field()
    name = scrapy.Field()
    content = scrapy.Field()
    release_time = scrapy.Field()
    image_url = scrapy.Field()
    video_url = scrapy.Field()
    # Plain class attributes (not scrapy.Field()), used as project-level flags:
    download_image = False  # whether images should be downloaded
    is_out_link = 1         # external-link flag
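A quick sketch of how the Item behaves (values are illustrative): fields are read and written dict-style, and assigning a field that was never declared raises a KeyError:
item = GuangdongChizhengItem()
item['title'] = 'some title'   # OK: title is a declared Field
# item['author'] = 'x'         # KeyError: 'author' was never declared
print(dict(item))              # {'title': 'some title'}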
4. Parse the response data
# -*- coding: utf-8 -*-
import scrapy
import re
from lxml import etree

from guangdong_chizheng.items import GuangdongChizhengItem

# Required: site ID
SITE_ID = 1001
# Required: site name
SITE_NAME = '江门市广播电视台'


class JmtvSpider(scrapy.Spider):
    name = 'jmtv'
    allowed_domains = ['jmtv.cn']
    start_urls = ['http://www.jmtv.cn/search/comment_list.php?callback=jQuery21406848936938500227_1575095471100&&channelid=14420&year=2019&op=list&datetime=11&page=1&random=0.4310174986829107&_=1575095471105']

    def parse(self, response):
        html = response.text
        # The response is JSONP: strip the jQuery callback wrapper to get the payload
        html_list = re.findall(r'jQuery.*?(\{.*\})\)', html, re.S)
        # The payload is a dict literal; json.loads would be a safer alternative to eval
        html_list = eval(html_list[0])
        html = etree.HTML(html_list['html'].replace('\\', ''))
        html_list = html.xpath('//li')
        for info in html_list:
            # print(etree.tostring(info, encoding='unicode'))
            item = GuangdongChizhengItem()  # fresh item per entry
            item['title'] = info.xpath('string(./div[@class="main_con_right"]/div[@id="title1"]/a/text())')
            item['image_url'] = info.xpath('string(./div[@class="main_con_left"]/img/@src)')
            item['url'] = info.xpath('string(./div[@class="main_con_right"]/div[@id="title1"]/a/@href)')
            # Note the leading dot: './/' keeps the lookup relative to the current <li>
            release_time = info.xpath('string(.//span[@class="shijian"]/text())')
            # string() returns plain text with no tags, so match to end of string rather than to '<'
            item['release_time'] = ''.join(re.findall('时间:(.*)', release_time, re.S))
            item['site_id'] = SITE_ID
            item['site_name'] = SITE_NAME
            item['content'] = ''
            item['video_url'] = ''
            yield item
5. Generate follow-up requests to continue crawling
request = scrapy.Request(next_page_url, callback=self.parse)  # next_page_url: link to the next page
yield request
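In context, the follow-up request is usually yielded at the end of parse after extracting the next-page link; here is a sketch assuming the page exposes such a link (the XPath is hypothetical):
def parse(self, response):
    # ... yield items as in step 4 ...
    next_page_url = response.xpath('//a[@class="next"]/@href').get()
    if next_page_url:
        yield scrapy.Request(response.urljoin(next_page_url), callback=self.parse)
For the JSONP interface in step 4 there is no such anchor, so in practice you would instead increment the page parameter in the request URL.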
6. Configure a pipeline to store the data in MongoDB
import pymongo


class MongoPipeline(object):
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read the MongoDB connection settings from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DB')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def process_item(self, item, spider):
        # Use the item class name as the collection name
        name = item.__class__.__name__
        self.db[name].insert_one(dict(item))
        return item

    def close_spider(self, spider):
        self.client.close()
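After a run you can quickly verify the stored documents with pymongo; a sketch using the connection settings from step 7 below:
import pymongo

client = pymongo.MongoClient('localhost')
db = client['guangdong_chizheng']
# items are stored under the item class name (see process_item above)
for doc in db['GuangdongChizhengItem'].find().limit(3):
    print(doc)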
7. Update the settings file:
ITEM_PIPELINES = {
    # lower numbers run earlier in the pipeline chain
    'guangdong_chizheng.pipelines.GuangdongChizhengPipeline': 300,
    'guangdong_chizheng.pipelines.MongoPipeline': 400,
}
# MongoDB connection info (read by MongoPipeline.from_crawler)
MONGO_URI = 'localhost'
MONGO_DB = 'guangdong_chizheng'
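Depending on the target site, a few other settings are often adjusted as well (illustrative values, not required by this project):
ROBOTSTXT_OBEY = False      # skip robots.txt filtering if it blocks the interface URL
DOWNLOAD_DELAY = 1          # wait 1 second between requests to be polite
USER_AGENT = 'Mozilla/5.0'  # send a browser-like User-Agent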
8. Run the code:
Create a new .py file and put the following code in it:
from scrapy.cmdline import execute
execute(["scrapy", "crawl", "jmtv"])
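This script is equivalent to running scrapy crawl jmtv from the project directory. If you prefer not to shell out through cmdline, Scrapy also exposes CrawlerProcess for running a spider from a script:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('jmtv')
process.start()  # blocks until the crawl finishes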
That wraps up a complete spider, built step by step. The walkthrough above is already quite detailed, so the full source is not attached here; send me a private message if you need it.
Every Saturday I post an article on Scrapy usage, with scrapy-redis and a source-code walkthrough to follow. Follow along step by step for plenty of solid content.