Scraping Douban Movie Top250 with Scrapy and Storing It in MySQL

I recently used Scrapy to crawl the Douban Movie Top250 data and cover images and save them to MySQL; here is how it works.

Create the project (the module name has to match the doubansql.pipelines path referenced later in settings.py):

    scrapy startproject doubansql
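For orientation, startproject generates a skeleton roughly like this (middlewares.py omitted for brevity; the spider file is added by hand in the last step):

    doubansql/
        scrapy.cfg
        doubansql/
            __init__.py
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                doubanmovie.py   # created below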

items.py

    import scrapy


    class DoubansqlItem(scrapy.Item):
        # define the fields for your item here like:
        moviename = scrapy.Field()     # movie title
        dbimgurl = scrapy.Field()      # cover image (local path after download)
        classname = scrapy.Field()     # genres
        grade = scrapy.Field()         # rating score
        count = scrapy.Field()         # number of ratings
        introduction = scrapy.Field()  # one-line introduction
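Scrapy items behave like dictionaries, except that only declared fields may be set; the spider below fills them exactly this way:

    item = DoubansqlItem()
    item['moviename'] = '肖申克的救赎'  # declared field: fine
    item['director'] = 'xxx'            # undeclared field: raises KeyError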

The PyMySQL code that inserts each item into the database:

pipelines.py

    import pymysql

    from doubansql import settings


    class DoubansqlPipeline(object):
        def __init__(self):
            self.connect = pymysql.connect(
                host=settings.MYSQL_HOST,
                port=3306,
                db=settings.MYSQL_DBNAME,
                user=settings.MYSQL_USER,
                passwd=settings.MYSQL_PASSWD,
                charset='utf8',
                use_unicode=True)
            self.cursor = self.connect.cursor()

        def process_item(self, item, spider):
            # Execute the SQL statement; the item fields map one-to-one
            # to the columns of the doub_doubdata table.
            self.cursor.execute(
                """insert into doub_doubdata(moviename, dbimgurl, classname, grade, count, introduction)
                   values (%s, %s, %s, %s, %s, %s)""",
                (item['moviename'],
                 item['dbimgurl'],
                 item['classname'],
                 item['grade'],
                 item['count'],
                 item['introduction']))
            # Commit the insert.
            self.connect.commit()
            return item
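One thing the original leaves out is closing the connection. A small optional addition (my suggestion, not from the original post) uses Scrapy's close_spider hook inside the same class:

    # Optional addition (not in the original): release MySQL resources
    # when the spider finishes.
    def close_spider(self, spider):
        self.cursor.close()
        self.connect.close()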

Enable the pipeline and set the database connection details in settings.py:

    ITEM_PIPELINES = {
        'doubansql.pipelines.DoubansqlPipeline': 300,
    }

    MYSQL_HOST = 'localhost'     # database host
    MYSQL_DBNAME = 'doubandata'  # database name
    MYSQL_USER = 'root'          # database user
    MYSQL_PASSWD = '1234567'     # database password
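The pipeline assumes the doubandata database and the doub_doubdata table already exist. Here is a minimal one-off setup script as a sketch; the column types are my assumptions, not from the original post, so adjust them to your needs:

    import pymysql

    # One-off setup script; column types are assumptions.
    conn = pymysql.connect(host='localhost', port=3306, user='root',
                           passwd='1234567', db='doubandata', charset='utf8')
    with conn.cursor() as cursor:
        cursor.execute("""
            CREATE TABLE IF NOT EXISTS doub_doubdata (
                id INT AUTO_INCREMENT PRIMARY KEY,
                moviename VARCHAR(100),
                dbimgurl VARCHAR(255),
                classname VARCHAR(100),
                grade VARCHAR(10),
                `count` VARCHAR(20),
                introduction VARCHAR(500)
            ) DEFAULT CHARSET = utf8
        """)
    conn.commit()
    conn.close()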

Create doubanmovie.py under the spiders directory and start crawling:

    import os
    import urllib.request

    import scrapy

    from doubansql.items import DoubansqlItem


    class DoubanmovieSpider(scrapy.Spider):
        name = 'doubanmovie'
        allowed_domains = ['douban.com']
        start_urls = ['https://movie.douban.com/top250']

        def parse(self, response):
            Movies = response.xpath('//*[@id="content"]/div/div[1]/ol/li')
            for eachMovie in Movies:
                # Create a fresh item per movie so each yielded item is independent.
                item = DoubansqlItem()
                moviename = eachMovie.xpath('div/div[2]/div[1]/a/span[1]/text()').extract_first()
                dbimgurl = eachMovie.xpath('div/div[1]/a/img/@src').extract_first()
                classname = eachMovie.xpath('div/div[2]/div[2]/p[2]/span/text()').extract_first()
                grade = eachMovie.xpath('div/div[2]/div[2]/div/span[2]/text()').extract_first()
                count = eachMovie.xpath('div/div[2]/div[2]/div/span[4]/text()').extract_first()
                # extract_first() already returns a string (or None), so use it
                # directly instead of indexing into it.
                introduction = eachMovie.xpath('div/div[2]/div[2]/p[1]/text()').extract_first()
                introduction = introduction.strip() if introduction else ''

                # Download the cover image into ./cover and store the local path.
                filename = moviename + '.jpg'
                dirpath = './cover'
                if not os.path.exists(dirpath):
                    os.makedirs(dirpath)
                filepath = os.path.join(dirpath, filename)
                urllib.request.urlretrieve(dbimgurl, filepath)
                cover = 'cover/' + filename

                item['moviename'] = moviename
                item['dbimgurl'] = cover
                item['classname'] = classname
                item['grade'] = grade
                item['count'] = count
                item['introduction'] = introduction
                yield item

            # Pagination: the <link> inside <span class="next"> holds the
            # relative URL of the next page.
            next_link = response.xpath("//span[@class='next']/link/@href").extract()
            if next_link:
                next_link = next_link[0]
                yield scrapy.Request('https://movie.douban.com/top250' + next_link,
                                     callback=self.parse)
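Run it from the project root:

    scrapy crawl doubanmovie

One caveat from my side (not covered in the original post): Douban tends to return 403 for the default Scrapy User-Agent, so you will likely also need a browser-like USER_AGENT in settings.py, and possibly ROBOTSTXT_OBEY = False:

    # Assumed extra settings, not from the original post.
    USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'
    ROBOTSTXT_OBEY = False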

The full source is available at the link below.

Code download
