In Python I often use the urllib2.urlopen function to fetch a page's source, but sometimes the call returns HTTP Error 403: Forbidden instead, which means the target site refuses requests that look like crawlers. For example:
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
url = "http://www.google.com/"
data = urllib2.urlopen(url).read()
print data
```

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
url = "http://www.google.com/translate_a/t?client=t&sl=zh-CN&tl=en&q=%E7%94%B7%E5%AD%A9"
data = urllib2.urlopen(url).read()
print data
```
Solution: disguise the request as a browser visit.
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2
url = "http://www.google.com/translate_a/t?client=t&sl=zh-CN&tl=en&q=%E7%94%B7%E5%AD%A9"
# Browser User-Agent header
headers = {'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request(url=url, headers=headers)
data = urllib2.urlopen(req).read()
print data
```
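For readers on Python 3, urllib2 was merged into urllib.request, and the same trick still applies: attach a User-Agent header to a Request object before opening it. A minimal sketch (the URL and User-Agent string here are example values, not anything prescribed by the original post):

```python
import urllib.request

url = "http://www.google.com/"
# Example User-Agent string; any common browser string works the same way.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
req = urllib.request.Request(url, headers=headers)

# urllib normalizes header names, so the key is looked up as "User-agent".
print(req.get_header("User-agent"))

# data = urllib.request.urlopen(req).read()  # the actual network call, as in the original
```

Inspecting the header on the Request object confirms it will be sent with the HTTP request, without needing network access.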
Note: if the Chinese text in the fetched source appears garbled, decode it first:

```python
print data.decode("UTF-8")
```
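The reason the decode is needed is that the response body arrives as raw bytes; in Python 3 this is explicit, since read() returns a bytes object. A small illustration using an in-memory byte string as a stand-in for the response body:

```python
# Stand-in for bytes read from an HTTP response (UTF-8 encoded Chinese text).
data = "男孩".encode("utf-8")

# Decoding converts the raw bytes into a printable string.
print(data.decode("utf-8"))  # → 男孩
```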
This article is an original post by Eliot; please credit the source when reposting: http://blog.csdn.net/xyw_blog/article/details/18142487