Jobbole
jobbole.py
jobbole.py under spiders/ is where the actual crawling happens. The spider class has two main methods, parse and parse_detail. Scrapy crawls asynchronously: you build a Request, yield it, and Scrapy's downloader fetches the URL and hands the downloaded content to the declared callback function, which is parse by default; you can also point the Request at a handler of your own.
For crawling all of Jobbole's articles, parse works through the listing pages, extracting each article's URL plus the URL of the next listing page, while parse_detail extracts the wanted fields from an individual article page. The basic logic:
def parse(self, response):   # responses for start_urls land here by default (see the sketch below)
    parse the listing page for the article URLs and the next-page URL
    for url in article URLs:
        yield a Request with callback parse_detail
    if there is a next page:
        yield a Request with callback parse
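What happens on startup is in fact Scrapy's documented default rather than a mystery: the scrapy.Spider base class implements start_requests(), which yields a Request for every entry in start_urls, and a Request built without an explicit callback is delivered to parse. Simplified, the base class does roughly this:

# simplified sketch of what the scrapy.Spider base class does on startup
def start_requests(self):
    for url in self.start_urls:
        # no callback given, so the response is delivered to self.parse
        yield scrapy.Request(url, dont_filter=True)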
The code below shows two ways of handling the item. The commented-out way processes the extracted data right inside parse_detail and assigns it to the item field by field. The active way uses CSS selectors with an ItemLoader to load the values into the item class and lets the item's own processors do the work; that part only becomes visible in the items code later on.
import datetime
import re
from urllib import parse

import scrapy
from scrapy import Request

# project-local imports; module paths assumed from a standard Scrapy layout
from ..items import ArticleItem, ArticleItemloader
from ..utils.common import get_md5


class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    allowed_domains = ['blog.jobbole.com']
    start_urls = ['http://blog.jobbole.com/all-posts/']
    # allowed_domains = ["python.jobbole.com"]
    # start_urls = ['http://python.jobbole.com/all-posts/']

    def parse(self, response):
        post_nodes = response.css("#archive .floated-thumb .post-thumb a")
        for post_node in post_nodes:
            post_url = post_node.css("::attr(href)").extract_first("")
            # carry the cover-image URL along in meta: the response delivered
            # to parse_detail has the same meta dict, so the value can be read back there
            image_url = post_node.css("img::attr(src)").extract_first("")
            yield Request(url=parse.urljoin(response.url, post_url),
                          meta={'front_image_url': image_url},
                          callback=self.parse_detail)
            # each Request is yielded to the downloader; the downloaded
            # response goes to the callback
        next_url = response.css('.next.page-numbers::attr(href)').extract_first()
        if next_url:
            # no callback, so the next listing page comes back to parse
            yield Request(url=parse.urljoin(response.url, next_url))
    def parse_detail(self, response):
        front_image_url = response.meta.get('front_image_url', '')

        # --- approach 1: extract and assign field by field (kept for reference) ---
        # job_article = ArticleItem()
        # title = response.css(".entry-header h1::text").extract()[0]
        # create_date = response.css(".entry-meta-hide-on-mobile::text").extract()[0].strip().replace("·", " ").strip()
        # praise_number = response.css(".vote-post-up::text").extract()[0]
        # match_re = re.match(".*?(\d+).*", praise_number)  # non-greedy, so multi-digit counts survive
        # if match_re:
        #     praise_number = match_re.group(1)
        # else:
        #     praise_number = 0
        #
        # fav_number = response.css(".bookmark-btn::text").extract()[0]
        # match_re = re.match(".*?(\d+).*", fav_number)
        # if match_re:
        #     fav_number = match_re.group(1)
        # else:
        #     fav_number = 0
        # com_number = response.css("a[href='#article-comment'] span::text").extract()[0]
        # match_re = re.match(".*?(\d+).*", com_number)
        # if match_re:
        #     com_number = match_re.group(1)
        # else:
        #     com_number = 0
        # content = response.css("div.entry").extract()[0]
        # tag_list = response.css("p.entry-meta-hide-on-mobile a::text").extract()
        # tags = ','.join(tag_list)
        # job_article['title'] = title
        # job_article['url'] = response.url
        # try:
        #     create_date = datetime.datetime.strptime(create_date, "%Y/%m/%d").date()
        # except Exception:
        #     create_date = datetime.datetime.now().date()
        # job_article['create_date'] = create_date
        # job_article['praise_number'] = praise_number
        # job_article['fav_number'] = fav_number
        # job_article['comment_number'] = com_number
        # job_article['content'] = content
        # job_article['tags'] = tags
        # job_article['front_image_url'] = [front_image_url]
        # job_article['url_object_id'] = get_md5(response.url)

        # --- approach 2: let an ItemLoader collect values and do the processing ---
        item_loader = ArticleItemloader(item=ArticleItem(), response=response)
        item_loader.add_css('title', '.entry-header h1::text')
        item_loader.add_value('url', response.url)
        item_loader.add_value('url_object_id', get_md5(response.url))
        item_loader.add_css('create_date', '.entry-meta-hide-on-mobile::text')
        item_loader.add_css('praise_number', '.vote-post-up h10::text')
        item_loader.add_css('fav_number', '.bookmark-btn::text')
        item_loader.add_css('comment_number', "a[href='#article-comment'] span::text")
        item_loader.add_css('content', 'div.entry')
        item_loader.add_css('tags', 'p.entry-meta-hide-on-mobile a::text')
        item_loader.add_value('front_image_url', [front_image_url])
        article_item = item_loader.load_item()
        yield article_item  # yielded on to the item pipeline
items.py & pipelines.py
items.py is much like Django: the item classes you define play the role of models, except every field has the single type scrapy.Field(). Once an item is defined, parse_detail creates an instance, fills it with the scraped values, and yields it to the pipeline. pipelines.py is where yielded items are intercepted and processed; writing to the database, exporting JSON, and so on all live there. After writing a pipeline class you must register it in settings.py with a priority, so that the pipelines execute in a defined order.
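Registration goes through Scrapy's ITEM_PIPELINES setting, where lower numbers run earlier. A sketch (the dotted module path is an assumption about this project's layout, not taken from the original):

# settings.py -- the module path is assumed for illustration
ITEM_PIPELINES = {
    'ArticleSpider.pipelines.MysqlTwistedPipeline': 300,
}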
When loading values into the item class this way, the spider has to bring in an ItemLoader to do the loading; here we subclass ItemLoader and override one of its defaults.
How the field data gets processed:
Input processing: input_processor=MapCompose(fn1, fn2, fn3) runs the functions in order over the raw values extracted in parse_detail; you can pass lambdas, your own helper functions, or omit the processor entirely.
Output processing: output_processor works the same way, but runs afterwards, on the values the input processors have already produced. In our ItemLoader subclass we override the default output processor, so every field that does not declare its own output_processor gets that default automatically.
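To make the ordering concrete, a standalone sketch (not project code):

from scrapy.loader.processors import MapCompose

# the functions run left to right over every extracted value
clean = MapCompose(lambda v: v.strip(), lambda v: v.replace("·", "").rstrip())
print(clean(["  2017/03/18 ·  "]))   # -> ['2017/03/18']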
The item class also carries a get_sql method. After parse_detail yields the item, the pipeline needs an SQL statement plus a parameter tuple to perform the insert. Packaging both on the item lets a single pipeline method insert into different tables: each item class simply supplies its own SQL, as sketched below.
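As an illustration (AuthorItem and the author table are hypothetical, not part of the project), a second item only has to expose its own get_sql() to reuse the same pipeline:

import scrapy

class AuthorItem(scrapy.Item):
    # hypothetical item: the pipeline only cares that get_sql() exists
    name = scrapy.Field()
    url = scrapy.Field()

    def get_sql(self):
        insert_sql = "insert into author(name, url) VALUES (%s, %s)"
        params = (self['name'], self['url'])
        return insert_sql, params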
Last comes the pipeline method that inserts into the database asynchronously; I won't go into detail on it yet, just have a look for now.
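One gap worth closing first: the get_num helper referenced by the field processors below is not part of this snippet. A minimal sketch, reusing the regex logic from the commented-out block in parse_detail:

import re

def get_num(value):
    # pull the count out of strings like "2 收藏"; fall back to 0
    match_re = re.match(".*?(\d+).*", value)
    return int(match_re.group(1)) if match_re else 0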
import datetime

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, TakeFirst


def convert_date(create_date):
    # normalise strings like "2017/03/18 ·"; fall back to today on parse failure
    create_date = create_date.strip().replace("·", " ").rstrip()
    try:
        create_date = datetime.datetime.strptime(create_date, "%Y/%m/%d").date()
    except Exception:
        create_date = datetime.datetime.now().date()
    return create_date
class ArticleItemloader(ItemLoader):
    # overridden default: every field without its own output_processor
    # takes the first extracted value instead of returning a list
    default_output_processor = TakeFirst()
class ArticleItem(scrapy.Item):
    title = scrapy.Field()
    create_date = scrapy.Field(
        input_processor=MapCompose(convert_date)
    )
    url = scrapy.Field()
    url_object_id = scrapy.Field()
    front_image_url = scrapy.Field(
        # identity processor overrides the TakeFirst default so the field stays a list
        output_processor=MapCompose(lambda x: x)
    )
    front_image_path = scrapy.Field()
    praise_number = scrapy.Field(
        input_processor=MapCompose(get_num)
    )
    comment_number = scrapy.Field(
        input_processor=MapCompose(get_num)
    )
    fav_number = scrapy.Field(
        input_processor=MapCompose(get_num)
    )
    tags = scrapy.Field(
        output_processor=','.join
    )
    content = scrapy.Field()
    def get_sql(self):
        insert_sql = """
            insert into article(title, url, url_object_id, front_image_url,
                front_image_path, comment_nums, praise_nums, create_date,
                fav_nums, tags, content)
            VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
        """
        params = (self['title'], self['url'], self['url_object_id'],
                  self['front_image_url'][0], self['front_image_path'],
                  self['comment_number'], self['praise_number'],
                  self['create_date'], self['fav_number'],
                  self['tags'], self['content'])
        return insert_sql, params
import MySQLdb
import MySQLdb.cursors
from twisted.enterprise import adbapi


class MysqlTwistedPipeline(object):
    def __init__(self, dbpool):
        self.dbpool = dbpool

    @classmethod
    def from_settings(cls, settings):
        dbparms = dict(
            host=settings["MYSQL_HOST"],
            db=settings["MYSQL_DBNAME"],
            user=settings["MYSQL_USER"],
            passwd=settings["MYSQL_PASSWORD"],
            charset='utf8mb4',
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=True,
        )
        dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
        return cls(dbpool)

    def process_item(self, item, spider):
        # use twisted to run the MySQL insert asynchronously
        query = self.dbpool.runInteraction(self.do_insert, item)
        query.addErrback(self.handle_error, item)
        return item  # hand the item on so later pipelines still see it

    def handle_error(self, failure, item):
        print(failure)

    def do_insert(self, cursor, item):
        # the actual insert: the item supplies its own SQL and params
        insert_sql, params = item.get_sql()
        cursor.execute(insert_sql, params)
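from_settings reads the connection parameters out of settings.py; a sketch with placeholder values for the keys it expects:

# settings.py -- placeholder values; from_settings above reads these keys
MYSQL_HOST = "127.0.0.1"
MYSQL_DBNAME = "article_spider"
MYSQL_USER = "root"
MYSQL_PASSWORD = "password"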