- Add a User-Agent request header:
If you don't set a User-Agent, the site can tell the request is not coming from a real browser and will apply its anti-crawler measures. With the header set, the site reads it to identify which browser you are "using"; every browser sends a different User-Agent string, so you can keep a pool of them to rotate through, for example:
user_agent = ['Mozilla/5.0 (Windows NT 10.0; WOW64)',
    'Mozilla/5.0 (Windows NT 6.3; WOW64)',
    'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; rv:11.0) like Gecko',
    'Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12',
    'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
    'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
    'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
    'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.12) Gecko/20080219 Firefox/2.0.0.12 Navigator/9.0.0.6',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)',
    'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Maxthon/4.0.6.2000 Chrome/26.0.1410.43 Safari/537.1',
    'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E; QQBrowser/7.3.9825.400)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.92 Safari/537.1 LBBROWSER',
    'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; BIDUBrowser 2.x)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/3.0 Safari/536.11']
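Each request can then draw a random entry from this pool, so consecutive requests appear to come from different browsers. A minimal sketch (the fetch helper is illustrative, assuming the user_agent list above is in scope):

import random
import urllib.request

def fetch(url):
    # Attach a randomly chosen browser User-Agent to every request
    headers = {'User-Agent': random.choice(user_agent)}
    request = urllib.request.Request(url, headers=headers)
    return urllib.request.urlopen(request).read()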
- Adjust your request frequency:
Most of the time, what we run into is a rate limit: if you request pages too fast, the site decides you are not a human. You therefore need to pick the frequency threshold carefully, or your crawler will trip the limit anyway. For sites like this, the most direct workaround is to throttle how often you send requests. The delay must also vary rather than being one fixed interval, since a perfectly regular rhythm is itself a bot signature; see the sketch below.
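A minimal sketch of randomized throttling (the 1-3 second range is an arbitrary choice; tune it to the site's actual threshold):

import random
import time

def polite_fetch(url):
    # Sleep for a random, non-fixed interval before each request
    time.sleep(random.uniform(1, 3))
    return fetch(url)  # the fetch helper from the User-Agent sketch above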
- Use proxy IPs:
If the crawler's throughput matters, you can no longer get past the frequency check by spacing out requests. Proxy IPs solve this: fetching 100 pages through 100 proxy IPs gives the site the impression of 100 visitors who each loaded a single page, so it naturally won't restrict you. Proxies tend to be unstable, though, so you have to keep checking whether each IP still works. The script below scrapes IPs and ports from a free proxy list site:
import csv
import urllib.request
from bs4 import BeautifulSoup

def IPspider(numpage):
    url = 'http://www.xicidaili.com/nn/'
    # Use a real browser User-Agent (any entry from the pool above works);
    # requests that identify themselves as scripts tend to get rejected
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64)'}
    with open('ips.csv', 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        for num in range(1, numpage + 1):
            # Each listing page holds 100 proxies
            ipurl = url + str(num)
            print('Now downloading the ' + str(num * 100) + ' ips')
            request = urllib.request.Request(ipurl, headers=headers)
            content = urllib.request.urlopen(request).read()
            bs = BeautifulSoup(content, 'html.parser')
            # Each table row holds one proxy: column 1 is the IP, column 2 the port
            for item in bs.find_all('tr'):
                try:
                    tds = item.find_all('td')
                    writer.writerow([tds[1].text, tds[2].text])
                except IndexError:
                    # Header rows have no <td> cells; skip them
                    pass

# Grab all IPs and ports from the first ten listing pages
IPspider(10)
Check whether each IP is usable by trying to reach Baidu within 2 seconds:
import csv
import socket
import urllib.request

def IPpool():
    # Any proxy that cannot reach Baidu within 2 seconds is dropped
    socket.setdefaulttimeout(2)
    pool = []
    with open('ips.csv') as f:
        for row in csv.reader(f):
            proxy = row[0] + ':' + row[1]
            proxy_handler = urllib.request.ProxyHandler({'http': proxy})
            opener = urllib.request.build_opener(proxy_handler)
            try:
                opener.open('http://www.baidu.com')
                pool.append([row[0], row[1]])
            except Exception:
                continue
    return pool
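Once the pool is built, each request can go out through a different proxy. A minimal usage sketch (random rotation is one possible choice, not part of the original):

import random
import urllib.request

pool = IPpool()
ip, port = random.choice(pool)
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({'http': ip + ':' + port}))
html = opener.open('http://www.baidu.com').read()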
- Distributed crawling:
A distributed crawler is deployed across multiple servers, and every server's crawler pulls its next URL from one shared source. Averaged out, each individual server then hits the site at a much lower rate. And since the servers are under our own control, the resulting crawler is more stable and more efficient. This is also the final goal of this course; the sketch below shows the shared-queue idea.
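A minimal sketch of that shared URL source, assuming a Redis server every crawler machine can reach (the queue name 'url_queue' and the fetch helper are illustrative assumptions, not part of the course code):

import redis

r = redis.Redis(host='localhost', port=6379)

def worker():
    # Every server runs this same loop against the same Redis queue, so the
    # combined crawl rate is spread across machines
    while True:
        url = r.lpop('url_queue')
        if url is None:
            break
        fetch(url.decode())  # hypothetical fetch helper from the earlier sketch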