The main idea of this little crawler is to find a "big V" (a heavily followed user), crawl that user's followers to obtain user information, then crawl those followers' followers, and so on level by level, forming a tree-like structure.
Pick a big V:
https://www.zhihu.com/people/xuxiaofeng1993/activities
Analyzing the page reveals the API endpoint that serves the list of people he follows:
https://www.zhihu.com/api/v4/members/xuxiaofeng1993/followees?include=data%5B*%5D.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F(type%3Dbest_answerer)%5D.topics&offset=40&limit=20
And the endpoint that serves his followers:
http://www.zhihu.com/api/v4/members/xuxiaofeng1993/followers?include=data%5B%2A%5D.answer_count%2Carticles_count%2Cgender%2Cfollower_count%2Cis_followed%2Cis_following%2Cbadge%5B%3F%28type%3Dbest_answerer%29%5D.topics&limit=20&offset=20
Analysis shows that all follower data is loaded via AJAX. Knowing this general structure, we can start building the crawler.
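Fetching either endpoint returns JSON. Its rough shape, as observed from the public v4 API (the field names here are an assumption based on inspection, not official documentation), is a `data` array of users plus a `paging` object used to walk through pages:

```python
import json

# Sample shape of a followees/followers API response. The values are
# made up; only the structure matters for the crawler.
sample = json.loads('''
{
  "data": [
    {"name": "someone", "gender": 1, "follower_count": 42,
     "answer_count": 10, "articles_count": 3}
  ],
  "paging": {"is_end": false,
             "next": "https://www.zhihu.com/api/v4/members/xxx/followers?offset=20&limit=20"}
}
''')

for user in sample["data"]:          # each element is one follower/followee
    print(user["name"], user["follower_count"])

if not sample["paging"]["is_end"]:   # keep requesting pages until is_end is true
    next_url = sample["paging"]["next"]
```

The `paging.next` URL is what lets the spider page through all followers instead of stopping at the first 20.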
First, modify the headers and User-Agent in settings.py. Through trial and error I found that the following header must be included, otherwise requests get blocked and nothing can be crawled:
'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20'
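A minimal settings.py sketch putting that header in place. The authorization token is the one quoted above; the User-Agent string is just an example, and disabling robots.txt is an assumption about what the spider needs to reach the API paths:

```python
# settings.py (sketch)

DEFAULT_REQUEST_HEADERS = {
    # Example browser User-Agent; any realistic one should do.
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0 Safari/537.36'),
    # Without this header Zhihu's API rejects the request.
    'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20',
}

# Assumption: robots.txt would otherwise disallow the /api/ paths.
ROBOTSTXT_OBEY = False
```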
Since every user's information and page are different, we need to build dynamic URLs in order to fetch each user's data:
user_url = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'
follows_url = 'https://www.zhihu.com/api/v4/members/{user}/followees?include={include}&offset={offset}&limit={limit}'
followers_url = 'https://www.zhihu.com/api/v4/members/{user}/followers?include={include}&offset={offset}&limit={limit}'
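A quick check that the templates expand into the kind of endpoint URLs found earlier. The `include` string below is a shortened placeholder for illustration, not the spider's full query:

```python
# The followees template from above.
follows_url = ('https://www.zhihu.com/api/v4/members/{user}'
               '/followees?include={include}&offset={offset}&limit={limit}')

# Fill in a concrete user and paging window.
url = follows_url.format(user='xuxiaofeng1993',
                         include='data[*].answer_count,follower_count',
                         offset=0, limit=20)
print(url)
```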
In Scrapy, different URLs are routed to different callback methods, which decide whether to parse a user's profile, their followees, or their followers:
yield Request(self.user_url.format(user=self.start_user, include=self.user_query), self.parse_user)
yield Request(self.follows_url.format(user=self.start_user, include=self.follows_query, limit=20, offset=0), self.parse_follows)
yield Request(self.followers_url.format(user=self.start_user, include=self.followers_query, limit=20, offset=0), self.parse_followers)
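The callbacks themselves do the tree expansion: parsing a page of followers yields a new request per follower, plus a request for the next page while `paging.is_end` is false. A minimal sketch of that routing logic, using plain generator functions and `('kind', url)` tuples as stand-ins for `scrapy.Request` (the bodies are reconstructed from the post, not the author's exact code):

```python
import json

# Templates from the post; the include value is elided in this sketch.
USER_URL = 'https://www.zhihu.com/api/v4/members/{user}?include={include}'

def parse_followers(body):
    """Given one followers API response body, yield a stand-in request
    for every follower's profile, plus one for the next page if any."""
    results = json.loads(body)
    for user in results.get('data', []):
        # Each follower becomes a new user page to parse (tree expansion).
        yield ('user', USER_URL.format(user=user['url_token'], include='...'))
    paging = results.get('paging', {})
    if not paging.get('is_end', True):
        # Keep paging through the current user's follower list.
        yield ('followers', paging['next'])

# Simulate one API page with two followers and a next page.
page = json.dumps({
    'data': [{'url_token': 'alice'}, {'url_token': 'bob'}],
    'paging': {'is_end': False, 'next': 'https://example.invalid/next'}
})
requests = list(parse_followers(page))
```

In the real spider, each `('user', url)` tuple would instead be `yield Request(url, self.parse_user)`, and `parse_user` would in turn yield the user item plus new followees/followers requests, which is what makes the crawl recurse down the tree.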
I have uploaded the full code as a downloadable resource; grab it here if you need it: https://download.csdn.net/download/qq_42820395/10689568 .