🍎 Author: 尘世镜花恋
Disclaimer: this article is for personal study only; do not use it for illegal purposes. Please credit the original link when reposting.
Some websites count visits from your own machine toward the page-view total, so you can use your own computer to raise the view count of your own pages.
This post only covers visiting from your local IP.
First, import the requests and numpy libraries; time is used to pause the program so the IP address does not get banned for sending too many requests. The url list holds the links of the articles whose view counts I want to raise, and the headers list holds several User-Agent strings used to make the requests look like they come from different browsers (using more than one feels safer to me).
import requests,numpy
import time
url = ['https://blog.csdn.net/syh_c_python/article/details/119416120?spm=1001.2014.3001.5502',
'https://blog.csdn.net/syh_c_python/article/details/118756672?spm=1001.2014.3001.5502',
'https://blog.csdn.net/syh_c_python/article/details/119412529?spm=1001.2014.3001.5502',
'https://blog.csdn.net/syh_c_python/article/details/118725171?spm=1001.2014.3001.5502',
'https://download.csdn.net/download/syh_c_python/20719965?spm=1001.2014.3001.5503',
'https://download.csdn.net/download/syh_c_python/20719950?spm=1001.2014.3001.5503',
'https://download.csdn.net/download/syh_c_python/20304752?spm=1001.2014.3001.5503',
'https://download.csdn.net/download/syh_c_python/20280254?spm=1001.2014.3001.5503',
'https://download.csdn.net/download/syh_c_python/20280002?spm=1001.2014.3001.5503',
'https://blog.csdn.net/syh_c_python/article/details/118759208',
'https://blog.csdn.net/syh_c_python/article/details/118756282?spm=1001.2014.3001.5502'
]
headers = [{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.26 Safari/537.36 Core/1.63.5959.400 SLBrowser/10.0.3544.400'},
{'User-Agent':"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"},
{'User-Agent':"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)"},
{'User-Agent':"Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)"},
{'User-Agent':"Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)"},
{'User-Agent':"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)"},
{'User-Agent':"Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)"},
{'User-Agent':"Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)"},
{'User-Agent':"Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)"},
{'User-Agent':"Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6"},
{'User-Agent':"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1"}
]
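As a side note, the standard library's random module can pick one of these headers without any hand-written index range; this is a minimal sketch using a hypothetical shortened pool of UA strings, not the full list above:

```python
import random

# hypothetical shortened pool, for illustration only
headers = [
    {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) ...'},
    {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 6.0; ...)'},
    {'User-Agent': 'Mozilla/5.0 (X11; U; Linux; en-US) ...'},
]

# random.choice always covers the whole list, so there is no
# off-by-one risk when the list grows or shrinks
picked = random.choice(headers)
print(picked['User-Agent'])
```

random.choice(headers) draws uniformly from however many entries the list currently has, which avoids keeping a hard-coded upper bound in sync with the list length.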
The main program. The visit count is capped at 10000, i.e. the program stops after 10000 successful requests. Inside the for loop, to be cautious, I draw a random index into the headers list so each request can go out with a different User-Agent.
import sys

count = 0
countUrl = len(url)

def main():
    # cap on the number of successful visits
    global count
    for i in range(1, 10000):
        if count < 10000:
            try:  # normal operation
                for i in range(countUrl):
                    # random index covering every entry in headers
                    s = numpy.random.randint(0, len(headers))
                    time.sleep(1)
                    response = requests.get(url[i], headers=headers[s])
                    if response.status_code == 200:
                        count = count + 1
                        print('Success ' + str(count), 'times')
                    time.sleep(30)
            except Exception:  # on an error, pause 60 seconds and retry
                print('Failed and Retry')
                time.sleep(60)
        else:
            sys.exit()

main()
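If using every User-Agent equally often matters more than unpredictability, round-robin rotation is an alternative to the random index used above; this is a sketch with placeholder UA strings, not the real list:

```python
import itertools

# placeholder pool standing in for the real headers list
headers_pool = [
    {'User-Agent': 'UA-1'},
    {'User-Agent': 'UA-2'},
    {'User-Agent': 'UA-3'},
]

# itertools.cycle yields the headers in a fixed repeating order,
# so every entry is used exactly as often as the others
rotation = itertools.cycle(headers_pool)
picked = [next(rotation)['User-Agent'] for _ in range(6)]
# over 6 picks, each of the 3 UAs appears exactly twice
print(picked)
```

In the main loop you would call next(rotation) in place of headers[s]; the trade-off is that the rotation pattern is predictable, whereas random selection is not.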
The output of a run looks like this (screenshot not included here):
If you found this useful, remember to like and follow me~
A follow-up after coming back
Two years have passed since then; I lost about a year of that to my studies.
Having recently returned, thanks to a short Spring Festival holiday, I've written a few more articles. If you're interested in my work, feel free to check them out~