1. Site analysis
Open HRBUST Academic Affairs Online (哈理工教务在线), go to the announcements page, and set the number of items displayed per page to a large value, e.g. 10000.
The corresponding URL is: http://jwzx.hrbust.edu.cn/homepage/infoArticleList.do;jsessionid=0A7BC5FE8C48FB877683ABB970E4F6D6.TH?sortColumn=publicationDate&columnId=354&sortDirection=-1&pagingPage=1&pagingNumberPer=10000 (the jsessionid path segment is session-specific and can usually be dropped).
The source markup of the article titles to be scraped looks like the following.
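A sketch of that structure, inferred from the XPath derived below (the title text and href value here are placeholders):

<ul class="articleList">
    <li>
        <div>
            <a href="(relative article link)">Announcement title</a>
        </div>
    </li>
    ...
</ul>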
From this, the XPath for the announcement article titles is:
"//ul[@class='articleList']/li/div/a/text()"
2. Open cmd and run scrapy startproject jwzx
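This generates Scrapy's standard project skeleton (the exact files vary slightly with the Scrapy version):

jwzx/
    scrapy.cfg            # deploy/configuration file
    jwzx/                 # the project's Python package
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/          # spider modules go here (see step 3)
            __init__.py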
3. Open jwzx/jwzx/spiders and create a new file named jwzxSpider.py with the following code:
import scrapy


class jwzxSpider(scrapy.Spider):
    name = 'jwzx'
    start_urls = ['http://jwzx.hrbust.edu.cn/homepage/infoArticleList.do;jsessionid=0A7BC5FE8C48FB877683ABB970E4F6D6.TH?sortColumn=publicationDate&columnId=354&sortDirection=-1&pagingPage=1&pagingNumberPer=10000']

    def parse(self, response):
        # Grab every announcement title and its (relative) link.
        titles = response.xpath("//ul[@class='articleList']/li/div/a/text()").extract()
        urls = response.xpath("//ul[@class='articleList']/li/div/a/@href").extract()
        # Open the output file once rather than once per loop iteration.
        with open("C:/Users/kfc/Desktop/jwzx.txt", 'a', encoding='utf-8') as f:
            for title, url in zip(titles, urls):
                print(title)
                print(url)
                # The hrefs are relative, so prepend the site prefix.
                f.write(title.strip() + '\n' + 'http://jwzx.hrbust.edu.cn/homepage/' + url + '\n')
4. Go back to cmd and, from the project's root directory, run scrapy crawl jwzx (where jwzx is the name attribute defined in jwzxSpider.py above).
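As a design note, writing the file by hand inside parse() works, but the more idiomatic Scrapy approach is to yield items and let the built-in feed exporter handle output. A minimal sketch of that variant (the field names title and url are arbitrary choices):

    def parse(self, response):
        # Yield one dict per announcement; Scrapy serializes them for us.
        for li in response.xpath("//ul[@class='articleList']/li"):
            title = li.xpath("./div/a/text()").extract_first(default='').strip()
            href = li.xpath("./div/a/@href").extract_first(default='')
            yield {'title': title, 'url': response.urljoin(href)}

Running scrapy crawl jwzx -o jwzx.csv would then write the results to a CSV file without any manual file handling.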
5. The output is saved to the local file (C:/Users/kfc/Desktop/jwzx.txt in the code above).