Sending appropriate request headers keeps a crawler from being denied access. The headers need to be adjusted for each target site; take Zhihu's headers as an example:
headers = {
    "authority": "www.zhihu.com",
    "method": "POST",
    "scheme": "https",
    "accept-language": "zh-CN,zh;q=0.9",
    "cookie": '_zap=4b8fd0b0-5ece-4710-8a39-4690be3cc915; d_c0="ACDn4-HhLA-PTloTkzkSI1g9NSQ0UNbecnY=|1553490041"; _xsrf=iqWraCpzOAEVVHNZGwDfyaUPzBb7lkuI; z_c0="2|1:0|10:1553513989|4:z_c0|92:Mi4xTHpaZUJBQUFBQUFBSU9majRlRXNEeVlBQUFCZ0FsVk5CUXlHWFFCVjdwTFIwbjFVeXdZWmREdDVybTVvVWtVa0NR|e97ba19d5423a0bb2269441eb310b80853aaed3e4cfdcd555c5b4517e681824d"; __gads=ID=ef86bad0aef0dc13:T=1553514097:S=ALNI_MaIcscAZVawHrwdA_5OzAq3gGMLfg; __utmv=51854390.100-1|2=registration_date=20170314=1^3=entry_date=20170314=1; _ga=GA1.2.1820027566.1554478077; tst=r; q_c1=c09535e464704c7e8aa93032d032f507|1556906107000|1553490042000; __utma=51854390.1820027566.1554478077.1555816095.1556906108.7; __utmz=51854390.1556906108.7.7.utmcsr=baidu|utmccn=(organic)|utmcmd=organic; tgw_l7_route=060f637cd101836814f6c53316f73463',  # carries the session, so requests look logged in
    "origin": "https://www.zhihu.com",
    "referer": "https://www.zhihu.com/topic",  # the page the request claims to come from
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36",
    "x-requested-with": "XMLHttpRequest",  # marks the request as an Ajax (asynchronous) call
    "x-xsrftoken": "iqWraCpzOAEVVHNZGwDfyaUPzBb7lkuI",  # anti-CSRF token
    "_xsrf": "697157726143707a4f41455656484e5a47774466796155507a4262376c6b7549"  # page check code verifying that the request came from a normal login page
}
"user-agent"一定要写,它决定了爬虫能否被识别为正常访问
cookie是模拟以某个账号登录
x-xsrftoken、_xsrf涉及到反爬虫知识
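As a minimal sketch of how such a header dict is attached to a request, here is an example using the standard-library `urllib.request` (the URL and the reduced header set are illustrative; a real crawler would also send the cookie and token fields shown above):

```python
import urllib.request

# A reduced header set; user-agent is the critical field for not being rejected.
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                   "Chrome/73.0.3683.103 Safari/537.36"),
    "Referer": "https://www.zhihu.com/topic",
    "Accept-Language": "zh-CN,zh;q=0.9",
}

# Build the request object; no network traffic happens until it is opened
# with urllib.request.urlopen(req).
req = urllib.request.Request("https://www.zhihu.com/topic", headers=headers)

# urllib stores header names in capitalized form, e.g. "User-agent".
print(req.get_header("User-agent"))
```

Third-party clients such as `requests` take the same dict via `requests.get(url, headers=headers)`, which is the more common pattern in crawler code.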