Some sites have protection in place — CSDN, ***, and many others do this — so you have to disguise the request as a normal browser visit, much like a spider/crawler does. That means adding a Header to the code and then trying to read the HTML again.
Note: the code below was written and debugged under Python 3.3!
Originally I wanted to implement it like this:
import urllib.request

url = "http://www.oschina.net/"
data = urllib.request.urlopen(url).read()
print(data)
Later I changed it to this:
'''
Created on 2013-1-27
@author: isaced
'''
import urllib.request

url = "http://www.oschina.net/"
headers = ('User-Agent', 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11')
opener = urllib.request.build_opener()
opener.addheaders = [headers]
data = opener.open(url).read()
print(data)
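As a side note, building a custom opener is not the only way to attach a User-Agent. A minimal sketch of an alternative, using the standard `urllib.request.Request` class (which accepts a `headers` dict directly), might look like this:

```python
import urllib.request

url = "http://www.oschina.net/"

# Request accepts a headers dict, so no custom opener is needed:
req = urllib.request.Request(url, headers={
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.11 '
                  '(KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11'
})

# The request object now carries the spoofed User-Agent
# (urllib normalizes the key's capitalization to 'User-agent'):
print(req.get_header('User-agent'))

# Fetching then works the same as before (requires network access):
# data = urllib.request.urlopen(req).read()
# print(data)
```

Either approach sends the same header; the `build_opener` version is more convenient when you want the header applied to every request the opener makes.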