------------------ Scrapy commands and examples --------------------
========== Common commands ======
scrapy startproject abc ---- create a project named abc (startproject takes no domain; the domain argument belongs to genspider)
scrapy shell lab.scrapyd.cn ---- debug: inspect the page's HTML interactively
scrapy shell http://www.scrapyd.cn
response.xpath("表达式")
[explained at http://www.scrapyd.cn/doc/186.html]
response.xpath("//p[@class='desc']//text()").extract()
response.xpath("//h4[@class='media-heading']//text()").extract()
response.css('div.quote')[0]
response.xpath("//span[@class='text']//text()").extract_first()
response.css('.text::text').extract_first()
scrapy crawl abc
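A minimal sketch of a spider that "scrapy crawl abc" could run (the spider name and selectors are assumptions, mirroring the shell experiments above):
# abc.py -- minimal sketch, lives in the project's spiders/ directory
import scrapy
class AbcSpider(scrapy.Spider):
    name = 'abc'  # the name used by "scrapy crawl abc"
    start_urls = ['http://lab.scrapyd.cn']
    def parse(self, response):
        # yield each quote's text, like the response.css experiments above
        for quote in response.css('div.quote'):
            yield {'text': quote.css('.text::text').extract_first()}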
========= Global commands ===============
startproject ---- create a new project
genspider ---- generate a new spider from a template (see the example after this list)
settings ---- get the value of a Scrapy setting
runspider ---- run a self-contained spider file without needing a project
scrapy runspider scrapy_cn.py
shell ---- interactive scraping console
scrapy shell http://www.scrapyd.cn
fetch ---- download a page with the Scrapy downloader and write its body to stdout
scrapy fetch http://www.scrapyd.cn >d:/projects/ABC/3.html
view ---- open the downloaded content directly in the browser (the page as Scrapy sees it)
scrapy view http://www.scrapyd.cn
version ---- print the Scrapy version
scrapy version
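genspider and settings have no examples above; typical invocations (the spider name here is an assumption) look like:
scrapy genspider abc lab.scrapyd.cn   # generates spiders/abc.py with allowed_domains=['lab.scrapyd.cn']
scrapy settings --get BOT_NAME        # prints the current project's BOT_NAME setting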
================ Project commands ====================
crawl ---- run a spider by name
scrapy crawl scrapyd_cn
check ---- run contract checks on the project's spiders
list ---- list the spiders in the project
scrapy list
edit ---- open a spider in the configured editor
parse ---- fetch a URL and parse it with the spider that handles it (see the example after this list)
bench ---- run a quick local benchmark test
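A usage sketch for parse against the lab site (the spider name is an assumption):
scrapy parse http://lab.scrapyd.cn --spider=listSpider -c parse   # fetch the URL and run it through the given callback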
======== Export a page
scrapy fetch http://lab.scrapyd.cn >d:/projects/ABC/lab.html
scrapy shell http://lab.scrapyd.cn
================= Several ways to run ===============
[1] scrapy runspider single.py
[2] scrapy crawl single -a tag=人生
If you use form [2] to run a standalone .py file outside a project, it fails with:
Scrapy 1.5.1 - no active project
Unknown command: crawl
Use "scrapy" to see available commands
(crawl takes a spider's name and only works inside a project; runspider takes a file path.)
This form works instead:
[3] scrapy runspider single.py -a tag=人生
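Another way, not listed above, is running the spider from a plain Python script through Scrapy's CrawlerProcess API (a minimal sketch; it assumes the example spider below is saved as single.py):
# run_single.py -- run a spider programmatically instead of via the scrapy CLI
from scrapy.crawler import CrawlerProcess
from single import itemSpider

process = CrawlerProcess({'USER_AGENT': 'Mozilla/5.0'})
process.crawl(itemSpider, tag='人生')  # -a tag=人生 becomes a keyword argument here
process.start()  # blocks until the crawl finishes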
================= urllib encoding explained ===============
https://www.cnblogs.com/caicaihong/p/5687522.html
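A minimal sketch of the urllib.parse calls used in the example below:
import urllib.parse
tag = '人生'
encoded = urllib.parse.quote(tag, safe='')    # '%E4%BA%BA%E7%94%9F'
decoded = urllib.parse.unquote(encoded)       # back to '人生'
query = urllib.parse.urlencode({'tag': tag})  # 'tag=%E4%BA%BA%E7%94%9F'
print(encoded, decoded, query)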
==========================example==========fileName='single'=================
import scrapy
import urllib.parse

class itemSpider(scrapy.Spider):
    name = 'listSpider'
    start_urls = ['http://lab.scrapyd.cn']

    def start_requests(self):
        url = 'http://lab.scrapyd.cn/'
        tag = getattr(self, 'tag', None)  # the tag passed on the command line via -a tag=...
        if tag is not None:  # if a tag was passed in, rebuild the url with it
            # encode method [1]
            #tag = urllib.parse.quote(tag.encode('utf8'))
            # encode method [2]
            tag = urllib.parse.quote(tag, safe='')
            print("=====s%======", tag)  # =====s%====== %E4%BA%BA%E7%94%9F
            # urllib.parse.unquote(s) reverses the percent-encoding
            unquotetag = urllib.parse.unquote(tag)
            print("!!!!!s%!!!!!", unquotetag)
            # urllib.parse.urlencode(values) would encode a whole dict of parameters
            url = url + 'tag/' + tag  # build the url; if tag=爱情, url = "http://lab.scrapyd.cn/tag/爱情"
        yield scrapy.Request(url, self.parse)  # request the page; the response goes to parse()

    def parse(self, response):
        mingyan = response.css('div.quote')  # every quote block on the page
        for v in mingyan:  # for each quote, pull out the text, the author and the tags
            text = v.css('.text::text').extract_first()      # quote text
            author = v.css('.author::text').extract_first()  # author
            tags = v.css('.tags .tag::text').extract()       # tag list
            tags = ','.join(tags)  # list -> comma-separated string
            # append each author's quotes to that author's own txt file
            fileName = '%s-语录.txt' % author  # file name, e.g. 木心-语录.txt
            with open(fileName, "a+") as f:  # "a+" appends; the with block closes the file itself
                f.write(text)
                f.write('\n')
                f.write('标签:' + tags)
                f.write('\n-------\n')
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:  # follow pagination until there is no next page
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)
==========================example==========fileName='single'=================
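A usage sketch, matching run style [3] above (assuming the code is saved as single.py):
scrapy runspider single.py -a tag=人生   # quotes tagged 人生 get appended to per-author txt files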