Scraping Mtime's Top 100 Japanese animated films (by Mtime rating) with the Scrapy framework and saving them to a MySQL database
- First, create a Scrapy project named shiguang: `scrapy startproject shiguang` (replace `shiguang` with your own project name).
- Inside the generated `shiguang` directory, find the `spiders` folder and create a Python file named `movie` there.
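The two setup steps above can be run from a terminal; `genspider` is Scrapy's built-in shortcut for creating the spider file (the domain argument matches the site we are about to crawl):

```shell
# scaffold the project, then generate a spider named "movie"
scrapy startproject shiguang
cd shiguang
scrapy genspider movie movie.mtime.com
```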
Analyzing the site's pages
Since we want to crawl multiple pages, let's look at how Mtime links to the next page:
Page 1: http://movie.mtime.com/list/1709.html
Page 2: http://movie.mtime.com/list/1709-2.html
Page 3: http://movie.mtime.com/list/1709-3.html
From here on, the page number in the URL simply increments by 1 each time.
So the spider gets these class attributes:

```python
start_urls = ['http://movie.mtime.com/list/1709.html']
page = 1
page_url = "http://movie.mtime.com/list/1709-%d.html"
```

and, at the end of `parse()`, it schedules the next page:

```python
if self.page <= 3:
    self.page += 1
    new_page_url = self.page_url % self.page
    yield scrapy.Request(url=new_page_url, callback=self.parse)
```
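Taken out of Scrapy, this pagination logic is just a counter driving the URL template. A minimal standalone sketch (plain Python, no Scrapy needed) shows which URLs actually get scheduled:

```python
# Standalone sketch of the pagination counter used in the spider:
# each parse() call increments the counter and schedules the next page
# while the check `page <= 3` still passes.
page = 1
page_url = "http://movie.mtime.com/list/1709-%d.html"
scheduled = []
while page <= 3:
    page += 1
    scheduled.append(page_url % page)

print(scheduled)
```

Note that because the check happens *before* the increment, pages 2, 3 and 4 all get scheduled in addition to the start URL.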
The XPath expression for each field can be worked out the same way from the page source. Without further ado, here is the code.
Write the following in `movie.py`:
```python
import scrapy
from shiguang.items import ShiguangItem


class MovieSpider(scrapy.Spider):
    name = 'movie'
    allowed_domains = ['movie.mtime.com']
    start_urls = ['http://movie.mtime.com/list/1709.html']
    page = 1
    page_url = "http://movie.mtime.com/list/1709-%d.html"

    def parse(self, response):
        # each film sits in one <dd> under the top_nlist container
        list_selector = response.xpath("//div[@class='top_nlist']/dl/dd")
        for one_selector in list_selector:
            name = one_selector.xpath("./div/h3/a/text()").get()        # film title
            director = one_selector.xpath("./div/p[1]/a/text()").get()  # director
            performer = one_selector.xpath("./div/p[2]/a/text()").get() # lead actors
            content = one_selector.xpath("./div/p[3]/text()").get()     # synopsis
            item = ShiguangItem()
            item["name"] = name
            item["director"] = director
            item["performer"] = performer
            item["content"] = content
            yield item
        # schedule the next page until the limit is reached
        if self.page <= 3:
            self.page += 1
            new_page_url = self.page_url % self.page
            yield scrapy.Request(url=new_page_url, callback=self.parse)
```
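The XPath paths in `parse()` assume each `<dd>` holds a `<div>` with the title in `h3/a` and the director, actors and synopsis in three `<p>` tags. As a rough sanity check of that structure, here is a simplified, hypothetical fragment parsed with the standard library instead of Scrapy's selectors (ElementTree supports the same child paths and 1-based `p[1]` indexing):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for one entry of the real listing page (assumed structure).
html = """
<div class="top_nlist">
  <dl>
    <dd>
      <div>
        <h3><a>Spirited Away</a></h3>
        <p><a>Hayao Miyazaki</a></p>
        <p><a>Rumi Hiiragi</a></p>
        <p>A young girl wanders into a world of spirits...</p>
      </div>
    </dd>
  </dl>
</div>
"""

root = ET.fromstring(html)
for dd in root.findall("./dl/dd"):
    name = dd.find("./div/h3/a").text        # equivalent of ./div/h3/a/text()
    director = dd.find("./div/p[1]/a").text  # equivalent of ./div/p[1]/a/text()
    performer = dd.find("./div/p[2]/a").text
    content = dd.find("./div/p[3]").text
    print(name, director)
```

If the site's markup changes, these relative paths are the first thing to re-check (Scrapy's `scrapy shell <url>` is handy for that).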
Write the following in `items.py`:
```python
import scrapy


class ShiguangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()
    director = scrapy.Field()
    performer = scrapy.Field()
    content = scrapy.Field()
```
Connect to the database in the pipeline file, `pipelines.py`:
```python
import MySQLdb


class ShiguangPipeline:
    def process_item(self, item, spider):
        return item


class MySQLPipeline(object):
    def open_spider(self, spider):
        # read connection settings from settings.py, with fallback defaults
        db_name = spider.settings.get("MYSQL_DB_NAME", "mtime")
        host = spider.settings.get("MYSQL_HOST", "localhost")
        user = spider.settings.get("MYSQL_USER", "root")
        pwd = spider.settings.get("MYSQL_PASSWORD", "123456")
        self.db_conn = MySQLdb.connect(db=db_name,
                                       host=host,
                                       user=user,
                                       password=pwd,
                                       charset="utf8")
        self.db_cursor = self.db_conn.cursor()

    def process_item(self, item, spider):
        values = (item["name"],
                  item["director"],
                  item["performer"],
                  item["content"])
        # parameterized insert; MySQLdb fills in the %s placeholders safely
        sql = "insert into dianyin(name,director,performer,content) values(%s,%s,%s,%s)"
        self.db_cursor.execute(sql, values)
        return item

    def close_spider(self, spider):
        # all inserts are committed once, when the spider closes
        self.db_conn.commit()
        self.db_cursor.close()
        self.db_conn.close()
```
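Note that the `dianyin` table must already exist in the `mtime` database before the spider runs. To see the parameterized-insert logic in isolation, here is a sketch against an in-memory SQLite database from the standard library, standing in for MySQL (SQLite uses `?` placeholders where MySQLdb uses `%s`; table and column names are taken from the pipeline above, with TEXT columns as a simple stand-in schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# same columns the MySQL table would need
cur.execute("CREATE TABLE dianyin (name TEXT, director TEXT, performer TEXT, content TEXT)")

# a fabricated item of the shape the spider yields
item = {"name": "Spirited Away", "director": "Hayao Miyazaki",
        "performer": "Rumi Hiiragi", "content": "A girl in a spirit world..."}
values = (item["name"], item["director"], item["performer"], item["content"])
cur.execute("INSERT INTO dianyin (name, director, performer, content) VALUES (?, ?, ?, ?)",
            values)
conn.commit()

row = cur.execute("SELECT name, director FROM dianyin").fetchone()
print(row)
```

Passing the values as a tuple rather than formatting them into the SQL string is what protects the insert from quoting problems and SQL injection.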
Finally, register the pipeline and the database credentials in `settings.py`.
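At a minimum, the pipeline has to be enabled in `ITEM_PIPELINES`, and the `MYSQL_*` keys read in `open_spider()` can live alongside it. A sketch (the values are placeholders; adjust them to your own database, and note that disabling robots.txt enforcement is a common step in tutorials like this but is my assumption, not something the original settings are known to contain):

```python
# settings.py (sketch; MYSQL_* values are placeholders)
ROBOTSTXT_OBEY = False  # assumption: often required for this kind of crawl

ITEM_PIPELINES = {
    "shiguang.pipelines.MySQLPipeline": 300,
}

MYSQL_DB_NAME = "mtime"
MYSQL_HOST = "localhost"
MYSQL_USER = "root"
MYSQL_PASSWORD = "123456"
```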
That completes the crawl of Mtime.
This is my first blog post, so it may not be written very well. If you have any questions, feel free to message me privately.
QQ: 502037970