While crawling, this error appeared:
Traceback (most recent call last):
  File "D:\pythonDemo01\pythonDemo01\Demo19.py", line 11, in <module>
    response=urllib2.urlopen(imag_url)
  File "D:\Python27\lib\urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "D:\Python27\lib\urllib2.py", line 435, in open
    response = meth(req, response)
  File "D:\Python27\lib\urllib2.py", line 548, in http_response
    'http', request, response, code, msg, hdrs)
  File "D:\Python27\lib\urllib2.py", line 473, in error
    return self._call_chain(*args)
  File "D:\Python27\lib\urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "D:\Python27\lib\urllib2.py", line 556, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
The code:
import urllib2

imag_url='http://img4.imgtn.bdimg.com/it/u=495525449,2738183538&fm=214&gp=0.jpg'
response=urllib2.urlopen(imag_url)
with open('nv.jpg','wb') as f:
    f.write(response.read())
A Baidu search says the site has banned the crawler: at this point we have not yet disguised the request as coming from a browser.
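The blocking mechanism itself is easy to reproduce locally. The sketch below (written in Python 3 since urllib2 is Python 2 only; the handler and helper names are all hypothetical) runs a tiny HTTP server that returns 403 whenever it sees the default `Python-urllib` User-Agent, which is essentially what the image host is doing:

```python
import threading
import urllib.request
import urllib.error
from http.server import BaseHTTPRequestHandler, HTTPServer

class UAFilterHandler(BaseHTTPRequestHandler):
    """Reject clients that identify themselves as the default urllib crawler."""
    def do_GET(self):
        ua = self.headers.get('User-Agent', '')
        if ua.startswith('Python-urllib'):
            # Same behaviour the image host shows: block the bare crawler UA.
            self.send_error(403, 'Forbidden')
        else:
            self.send_response(200)
            self.send_header('Content-Type', 'image/jpeg')
            self.end_headers()
            self.wfile.write(b'fake-jpeg-bytes')

    def log_message(self, *args):
        pass  # keep the demo quiet

def fetch(url, headers=None):
    """Return the HTTP status for url, sending optional extra headers."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == '__main__':
    server = HTTPServer(('127.0.0.1', 0), UAFilterHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = 'http://127.0.0.1:%d/it.jpg' % server.server_address[1]
    try:
        fetch(url)  # default User-Agent is "Python-urllib/3.x"
    except urllib.error.HTTPError as e:
        print(e.code)  # 403, just like the traceback above
    print(fetch(url, {'User-Agent': 'Mozilla/5.0'}))  # 200
    server.shutdown()
```

With a browser-style User-Agent the very same request succeeds, which is why adding headers fixes the crawl.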
So it seems we need to add the request headers:
import urllib2

headers={
    'Accept-Language': 'zh-CN',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko'
}
imag_url='http://img4.imgtn.bdimg.com/it/u=495525449,2738183538&fm=214&gp=0.jpg'
# Build a Request object carrying the headers, then open that instead of the bare URL
req=urllib2.Request(url=imag_url,headers=headers)
response=urllib2.urlopen(req)
with open('nv.jpg','wb') as f:
    f.write(response.read())
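Since Python 2 has reached end of life, the same fix can be written against Python 3's `urllib.request` (a sketch; the `save_image` helper name is my own, and the image URL from above may no longer resolve):

```python
import urllib.request

headers = {
    'Accept-Language': 'zh-CN',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko',
}
imag_url = 'http://img4.imgtn.bdimg.com/it/u=495525449,2738183538&fm=214&gp=0.jpg'

def save_image(url, path, headers):
    """Fetch url with the given headers attached and write the bytes to path."""
    req = urllib.request.Request(url=url, headers=headers)
    with urllib.request.urlopen(req) as response:
        data = response.read()
    with open(path, 'wb') as f:
        f.write(data)

# save_image(imag_url, 'nv.jpg', headers)  # network call; the old URL may be dead
```

`urllib.request.Request` plays the same role as `urllib2.Request`: it carries the headers so `urlopen` sends them with the GET.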