Scrapy Basics: A Simple Static Crawler (Scraping Every Movie on 电影天堂) (1)

1. The name attribute: identifies the Spider; the name must be unique within the project.

2. The start_urls attribute: the initial URLs the Spider starts from, i.e. the first pages to be crawled.

3. The parse() function: its response parameter is the response received after a URL is requested; this method parses data out of the response and generates further URLs to crawl. A minimal spider showing these three pieces together is sketched below.
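
A minimal sketch (illustrative only; the spider name and start URL here are placeholders, not the project's real ones):

```python
import scrapy


class MinimalSpider(scrapy.Spider):
    name = "minimal"  # must be unique within the project
    start_urls = ["http://www.ygdy8.com/"]  # first page(s) the spider fetches

    def parse(self, response):
        # response wraps the page fetched from a URL; parse() extracts data
        # and/or yields further Requests to keep crawling
        self.logger.info("fetched %s", response.url)
```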

  • items.py: analogous to a JavaBean class in Java EE; defines the class that encapsulates the scraped data, and must inherit from scrapy.Item.

  • pipelines.py: processes the encapsulated data: validating it, storing it, or simply dropping it. The class must implement a process_item() method taking item and spider parameters, where item is the encapsulated data object and spider is the spider that scraped it. The pipeline must be activated in settings.py.

  • middlewares.py: middleware; hook code that sits between request and response, used to modify requests and responses.

  • settings.py: the project configuration file. The full generated project layout is sketched after this list.
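
These files follow the standard layout generated by `scrapy startproject DyttSpider` (a sketch; the file names match the code shown in the rest of this post):

```
DyttSpider/
    scrapy.cfg                # deploy configuration
    DyttSpider/
        __init__.py
        items.py              # item definitions
        middlewares.py        # middlewares
        pipelines.py          # item pipelines
        settings.py           # project settings
        spiders/
            __init__.py
            dytts_spider.py   # the spider written below
```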

2. Writing the code
  • dytts_spider.py

```python
import sys
sys.path.append("../")  # make items.py importable from the spiders directory

import re
from urllib.parse import urljoin  # Python 3 home of the old urlparse.urljoin

import scrapy
from bs4 import BeautifulSoup

from items import DyttspiderItem


class DyttSpider(scrapy.Spider):
    name = "dyttspiders"  # spider name
    allowed_domains = ["ygdy8.com"]  # domains the spider may crawl
    start_urls = ["http://www.ygdy8.com/html/gndy/dyzz/20130820/42954.html"]  # initial URL

    def parse(self, response):
        start_url = "http://www.ygdy8.com/"  # base for building absolute URLs
        print("parsing: %s" % response.url)
        rep = re.compile("<.*?>")  # regex for stripping leftover tags from the movie text
        mat = r".*/\d+/\d+.html"  # regex matching the detail pages worth following
        urls = response.xpath("//a/@href").extract()  # Scrapy selector: every link target
        data = response.body  # raw response body
        soup = BeautifulSoup(data, "lxml")  # build the BeautifulSoup document
        content = soup.find("div", id="Zoom")  # element holding the movie details
        item = None
        if content:
            sources = content.find_all("a")  # download-link elements
            source = []  # download addresses
            if sources:
                for link in sources:
                    source.append(link.text)
                print(source[0])
            name = soup.find("title")  # movie title
            if name:
                name = name.text
                print("%s is parsed" % name)
            message = rep.sub(" ", content.text)  # movie details, tags stripped
            item = DyttspiderItem(name=name, message=message, source=source)  # build the item
        else:
            print("error!!!")
        if item:
            yield item  # hand the item to the pipeline

        for url in urls:
            if re.match(mat, url) is not None:
                full_url = urljoin(start_url, url)  # absolute URL of the next page to crawl
                yield scrapy.Request(url=full_url, callback=self.parse)
```
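
As a quick check of the `mat` pattern above: detail pages whose paths end in two numeric segments match and are followed, while index pages are skipped (a standalone sketch, runnable outside the crawler):

```python
import re

mat = r".*/\d+/\d+.html"

print(re.match(mat, "/html/gndy/dyzz/20130820/42954.html") is not None)  # True: followed
print(re.match(mat, "/html/gndy/index.html") is not None)                # False: skipped
```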

  • items.py

```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DyttspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    name = scrapy.Field()     # movie title
    source = scrapy.Field()   # movie download addresses
    message = scrapy.Field()  # movie details
```
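
scrapy.Item instances behave like dictionaries, which is exactly how the pipeline below consumes them (a small sketch with made-up values):

```python
from items import DyttspiderItem

item = DyttspiderItem(name="Example Movie", source=["ftp://example-link"], message="plot details")
print(item["name"])  # fields are read with dict-style access
print(dict(item))    # the whole item converts to a plain dict for storage
```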

  • pipelines.py

```python
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo


class DyttspiderPipeline(object):

    def __init__(self):
        self.client = pymongo.MongoClient()  # MongoDB client

    def process_item(self, item, spider):
        if item:
            print("saving a movie: %s" % item["name"])
            dic_item = dict(item)  # convert the item into a storable dict
            # check whether this movie is already in the database
            result = self.client.moves.ygdy.find_one({"name": item["name"]})
            if result is None:
                self.client.moves.ygdy.insert_one(dic_item)
            else:
                print("movie is already in the collection")
        return item
```

The pipeline must be registered in settings.py; see the settings.py section below.
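
A common refinement (a sketch of an alternative, not the code above) is to tie the MongoDB connection to the spider's lifecycle with Scrapy's open_spider/close_spider hooks, so the client is closed cleanly when the crawl ends:

```python
import pymongo


class MongoLifecyclePipeline(object):
    """Hypothetical variant of DyttspiderPipeline with explicit lifecycle hooks."""

    def open_spider(self, spider):
        self.client = pymongo.MongoClient()  # opened once, when the crawl starts

    def close_spider(self, spider):
        self.client.close()  # released when the crawl finishes

    def process_item(self, item, spider):
        # the same duplicate check and insert_one() logic as above would go here
        return item
```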

  • middlewares.py

```python
# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class MyUserAgentMiddleware(UserAgentMiddleware):
    """Set a random User-Agent on each request."""

    def __init__(self, user_agent):
        self.user_agent = user_agent

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            user_agent=crawler.settings.get('MY_USER_AGENT')
        )

    def process_request(self, request, spider):
        agent = random.choice(self.user_agent)
        request.headers['User-Agent'] = agent
```

This middleware attaches a randomly chosen User-Agent header to every outgoing request; it is registered under DOWNLOADER_MIDDLEWARES in settings.py below.

  • settings.py

```python
# -*- coding: utf-8 -*-

# Scrapy settings for DyttSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# http://doc.scrapy.org/en/latest/topics/settings.html
# http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'DyttSpider'

SPIDER_MODULES = ['DyttSpider.spiders']
NEWSPIDER_MODULE = 'DyttSpider.spiders'

COOKIES_ENABLED = False  # do not send cookies

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'DyttSpider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

DOWNLOADER_MIDDLEWARES = {
    # disable the built-in UserAgentMiddleware so the custom one takes over
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'DyttSpider.middlewares.MyUserAgentMiddleware': 400,
}

ITEM_PIPELINES = {"DyttSpider.pipelines.DyttspiderPipeline": 300}

MY_USER_AGENT = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
]
```
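
With everything in place, start the crawl from the project root with `scrapy crawl dyttspiders`. Afterwards the stored movies can be spot-checked directly in MongoDB (a sketch assuming a local mongod and the moves.ygdy collection used by the pipeline):

```python
import pymongo

client = pymongo.MongoClient()
for movie in client.moves.ygdy.find().limit(5):
    print(movie["name"])  # titles of the first few stored movies
```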
