When there are many start URLs and they are read from a file, the start_urls = [] form can no longer be used; instead, override the start_requests method to load the starting URLs.
def start_requests(self):
    # htmls.txt holds one URL per line; strip each line so the
    # trailing newline from the file does not end up in the URL.
    with open(r'D:\Java\program\myscrapy\hot\hot\htmls.txt', 'r') as f:
        self.urls = [line.strip() for line in f if line.strip()]
    for url in self.urls:
        time.sleep(2)  # crude throttle: blocks the whole engine for 2 s
        yield scrapy.Request(url=url, callback=self.parse)
To avoid triggering anti-scraping defenses by sending requests too quickly, each request is delayed by two seconds. Note that time.sleep blocks Scrapy's asynchronous engine entirely, so this is a blunt instrument; Scrapy's own DOWNLOAD_DELAY setting spaces out requests without blocking.
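The file-reading generator pattern above can be sketched and exercised without Scrapy installed. Here FakeRequest is a hypothetical stand-in for scrapy.Request, and the file path is a temporary file created for the demonstration; in the real spider you would yield scrapy.Request from inside the spider class:

```python
import os
import tempfile

# Hypothetical stand-in for scrapy.Request so the sketch runs
# without Scrapy installed.
class FakeRequest:
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

def start_requests_from_file(path, request_cls=FakeRequest, callback=None):
    """Read one URL per line, skipping blanks and stripping newlines."""
    with open(path, 'r') as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        yield request_cls(url=url, callback=callback)

# Usage: write a small htmls.txt and drive the generator.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, 'htmls.txt')
    with open(path, 'w') as f:
        f.write('https://example.com/a\n\nhttps://example.com/b\n')
    reqs = list(start_requests_from_file(path))
```

Because start_requests is a generator, Scrapy pulls requests from it lazily, so even a very large URL file never needs to produce all Request objects at once.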
The concurrency parameters also need to be set in settings.py:
CONCURRENT_REQUESTS = 2
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 2
CONCURRENT_REQUESTS_PER_IP = 2
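As an alternative to editing the project-wide settings.py, the same throttling can be applied per spider via the custom_settings class attribute. This is a sketch with example values; uncommenting DOWNLOAD_DELAY here would replace the blocking time.sleep call entirely:

```python
# Per-spider settings sketch (values are examples, not requirements).
custom_settings = {
    'CONCURRENT_REQUESTS': 2,
    'CONCURRENT_REQUESTS_PER_DOMAIN': 2,
    'CONCURRENT_REQUESTS_PER_IP': 2,
    # 'DOWNLOAD_DELAY': 2,  # non-blocking delay between requests
}
```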