Note:
These experiments were run on Ubuntu with Python 2.x.
Code-1: Scraping static title data (no login required)
Fetches the static page data of the Taobao homepage.
url: http://www.taobao.com
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @author Amiber
# @date 2012-12-01
# @brief grab static web data containing Chinese text

from BeautifulSoup import BeautifulSoup
import urllib2

# Taobao serves its homepage as GBK; transcode to UTF-8 before parsing
url = r"http://www.taobao.com"
resContent = urllib2.urlopen(url).read()
resContent = resContent.decode('gbk').encode('utf8')
soup = BeautifulSoup(resContent)
print soup.title.string

# Baidu News lives at news.baidu.com and uses GB18030 (a superset of GBK)
url = r"http://news.baidu.com"
resContent = urllib2.urlopen(url).read().decode('gb18030').encode('utf8')
soup = BeautifulSoup(resContent)
print soup.title.string
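The decode/encode round-trip is the key step above. As an offline sketch of the same idea in Python 3 syntax (using a hard-coded GBK-encoded page instead of fetching Taobao, and the stdlib html.parser in place of BeautifulSoup), extracting the title could look like:

```python
# -*- coding: utf-8 -*-
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside the first <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == 'title' and self.title is None:
            self.in_title = True

    def handle_data(self, data):
        if self.in_title:
            self.title = data
            self.in_title = False

# Hard-coded bytes standing in for what urlopen(url).read() would return
raw = u'<html><head><title>淘宝网</title></head></html>'.encode('gbk')
text = raw.decode('gbk')  # bytes -> str, the Python 3 analogue of decode/encode

parser = TitleParser()
parser.feed(text)
print(parser.title)  # -> 淘宝网
```

In Python 3 `urlopen(...).read()` returns `bytes`, so the single `decode` call replaces the Python 2 decode-then-encode dance.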
Code-2: Scraping table data from a static page (no login required)
Fetches static table data from a page on the statistics bureau's website.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @author Amiber
# @date 2012-12-01
# @brief grab the table data in a static web page

from BeautifulSoup import BeautifulSoup
import urllib2

def earse(strline, ch):
    # str.replace already removes every occurrence, so no loop is needed
    return strline.replace(ch, '')

url = r"http://www.bjstats.gov.cn/sjfb/bssj/jdsj/2012/201211/t20121130_239295.htm"
resContent = urllib2.urlopen(url).read()
resContent = resContent.decode('gb18030').encode('utf8')
soup = BeautifulSoup(resContent)
print soup('title')[0].string

# the data of interest sits in the last <table> on the page
tab = soup.findAll('table')
trs = tab[-1].findAll('tr')
for trIter in trs:
    tds = trIter.findAll('td')
    for tdIter in tds:
        for span in tdIter('span'):
            if span.string:
                # drop embedded spaces, then trim the remainder
                print earse(span.string, ' ').strip(),
    print
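The row/cell traversal above depends on the live statistics page. As an offline sketch (Python 3, stdlib html.parser standing in for BeautifulSoup, with a made-up two-row table), the same table walk could look like:

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collects <td> cell text into rows, one list per <tr>."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self.in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self.rows.append([])
        elif tag == 'td':
            self.in_td = True

    def handle_endtag(self, tag):
        if tag == 'td':
            self.in_td = False

    def handle_data(self, data):
        if self.in_td and data.strip():
            self.rows[-1].append(data.strip())

# A made-up table standing in for the statistics page
html = """
<table>
  <tr><td>GDP</td><td>1234.5</td></tr>
  <tr><td>CPI</td><td>102.1</td></tr>
</table>
"""
parser = TableParser()
parser.feed(html)
for row in parser.rows:
    print(' '.join(row))
```

The nested findAll('tr') / findAll('td') calls in Code-2 map directly onto the tr/td start-tag events here.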
Code-3: Scraping document data from a static page (no login required)
Fetches a zip archive from a bbs site.
#!/usr/bin/env python
#
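The Code-3 listing is cut off here. As a sketch of the idea (Python 3; the helper name is mine, and an in-memory archive stands in for the downloaded bbs attachment), bytes fetched from a .zip link can be unpacked with the stdlib zipfile module:

```python
import io
import zipfile

def read_zip_members(zip_bytes):
    """Return {member_name: content} for a zip archive given as raw bytes,
    e.g. the body returned by urlopen(url).read() for a .zip link."""
    archive = zipfile.ZipFile(io.BytesIO(zip_bytes))
    return {name: archive.read(name) for name in archive.namelist()}

# Build a small in-memory zip standing in for the downloaded attachment
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('readme.txt', 'hello from the archive')

members = read_zip_members(buf.getvalue())
print(sorted(members))                       # -> ['readme.txt']
print(members['readme.txt'].decode('utf8'))  # -> hello from the archive
```

Wrapping the response bytes in io.BytesIO avoids writing a temporary file to disk before extraction.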