Method 1:
Modify the USER_AGENT setting in settings.py:
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Hello World'
Method 2:
Modify DEFAULT_REQUEST_HEADERS in settings.py:
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Hello World',
}
Method 3:
Set the headers directly in the spider code:
import scrapy

class HeadervalidationSpider(scrapy.Spider):
    name = 'headervalidation'
    allowed_domains = ['helloacm.com']

    def start_requests(self):
        header = {'User-Agent': 'Hello World'}
        yield scrapy.Request(url='http://helloacm.com/api/user-agent/', headers=header)

    def parse(self, response):
        print('*' * 20)
        print(response.body)
        print('*' * 20)
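The parse() callback only prints the echoed response body between two separator bars. As a stdlib-only sketch (no network and no Scrapy required), the same output logic can be exercised against a fake response object; the body value below is an assumption about what the echo API returns, not a recorded response:

```python
from types import SimpleNamespace

def format_body(response):
    # mirrors what the parse() callback above prints
    bar = '*' * 20
    return '\n'.join([bar, str(response.body), bar])

# fake response standing in for the echoed User-Agent (assumed payload)
fake = SimpleNamespace(body=b'"Hello World"')
print(format_body(fake))
```

When the spider is run for real (scrapy crawl headervalidation), the body should contain whatever User-Agent the server saw, which is how you confirm the header change took effect.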
Method 4:
Customize the header in a downloader middleware.
Add a directory under the project directory, e.g. customerMiddleware, and create a custom middleware file inside it; the file name is arbitrary, e.g. customMiddleware.py.
The file overrides the request's User-Agent:
# -*- coding: utf-8 -*-
# Note: the old scrapy.contrib path is deprecated and removed in recent
# Scrapy releases; use scrapy.downloadermiddlewares instead.
from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware

class CustomerUserAgent(UserAgentMiddleware):
    def process_request(self, request, spider):
        ua = 'HELLO World???'
        request.headers.setdefault('User-Agent', ua)
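Note that the middleware uses setdefault rather than plain assignment: it only writes the User-Agent when no value is present yet, so a header set explicitly on the Request (as in method 3) wins over the middleware's default. A stdlib-only sketch of that semantics, using plain dicts to stand in for Scrapy's Headers object:

```python
# No prior User-Agent: setdefault fills in the middleware's value.
headers = {}
headers.setdefault('User-Agent', 'HELLO World???')
print(headers['User-Agent'])  # HELLO World???

# User-Agent already set on the request: setdefault leaves it untouched.
headers2 = {'User-Agent': 'Hello World'}
headers2.setdefault('User-Agent', 'HELLO World???')
print(headers2['User-Agent'])  # Hello World
```

If you want the middleware to always override the spider's headers, assign directly (request.headers['User-Agent'] = ua) instead of using setdefault.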
Add the following to settings.py so the middleware takes effect:
DOWNLOADER_MIDDLEWARES = {
    'headerchange.customerMiddleware.customMiddleware.CustomerUserAgent': 10,
    # 'headerchange.middlewares.MyCustomDownloaderMiddleware': 543,
}
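The number 10 is the middleware's order: lower numbers run earlier in the request-processing chain. Since CustomerUserAgent subclasses the built-in UserAgentMiddleware, you may also want to disable the stock one so the two don't both touch the header; a sketch of that settings fragment (the dotted path assumes the project layout described above):

```python
DOWNLOADER_MIDDLEWARES = {
    'headerchange.customerMiddleware.customMiddleware.CustomerUserAgent': 10,
    # Setting a middleware to None disables it.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
```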