Data-scraping workflow
- Fetch the HTML of the page to be scraped
- Build a soup object from the HTML with the BeautifulSoup library
- With the soup object, locate the tags that contain the data you need, filtering on a suitable attribute and a regular expression over its value

The code is as follows:
```python
import re
import requests
from bs4 import BeautifulSoup

def getHTML(url, keyword):
    url = url + 'search?word=' + keyword
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()  # note the parentheses: without them the status is never checked
        r.encoding = "GB2312"
        return r.text
    except requests.RequestException:
        return ""

def getSoup(page):
    soup = BeautifulSoup(page, 'html.parser')
    return soup

def fillResult(soup):
    results = []
    resultsTemp = []
    links = soup.find_all('a', {'href': re.compile('question/')})
    for a in links:
        link = a.attrs['href']
        resultsTemp.append(link)
    # The scraped links are duplicated here, so keep only every other one
    for i in range(len(resultsTemp)):
        if i % 2 != 0:
            results.append(resultsTemp[i])
    return results

def printResult(results):
    for i in range(len(results)):
        print(i, results[i])

url = 'https://zhidao.baidu.com/'
page = getHTML(url, '微波炉维修')
soup = getSoup(page)
results = fillResult(soup)
printResult(results)
```
The output is as follows:
0 http://zhidao.baidu.com/question/710707862468327685.html
1 http://zhidao.baidu.com/question/133539693.html
2 http://zhidao.baidu.com/question/1923825140336059027.html
3 http://zhidao.baidu.com/question/438008215.html
4 http://zhidao.baidu.com/question/1048608452322990139.html
5 http://zhidao.baidu.com/question/717335322150201685.html
6 http://zhidao.baidu.com/question/1689754141374415228.html
7 http://zhidao.baidu.com/question/106059517.html
8 http://zhidao.baidu.com/question/1371992425466276179.html
9 http://zhidao.baidu.com/question/688831948614539564.html
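The `i % 2` trick in `fillResult` only works because each link happens to appear exactly twice in a row. A more robust sketch (`dedupe_preserve_order` is a hypothetical helper, not part of the original code) removes duplicates while keeping first-seen order, with no assumption about how the duplicates are arranged:

```python
def dedupe_preserve_order(links):
    # dict.fromkeys keeps insertion order (Python 3.7+), so this drops
    # duplicates while preserving the order links were first seen in
    return list(dict.fromkeys(links))

links = [
    'http://zhidao.baidu.com/question/133539693.html',
    'http://zhidao.baidu.com/question/133539693.html',
    'http://zhidao.baidu.com/question/438008215.html',
    'http://zhidao.baidu.com/question/438008215.html',
]
print(dedupe_preserve_order(links))
```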
Problems encountered
Parsing the HTML file went wrong. Originally the code was:

```python
def getSoup(page):
    soup = BeautifulSoup(page, features='lxml')
    return soup
```

The symptom was an incomplete tag search: once the parser hit a long stretch of CSS it stopped finding the matching tags that followed. After switching the parser argument, scraping worked normally. The code is as follows:

```python
def getSoup(page):
    soup = BeautifulSoup(page, 'html.parser')
    return soup
```
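As a rough illustration of why the parser choice mattered, the sketch below feeds a made-up page (not the real Baidu Zhidao markup) containing an inline `<style>` block between two result links to the built-in `'html.parser'`, which keeps walking the document past the CSS:

```python
import re
from bs4 import BeautifulSoup

# Made-up page: an inline <style> block sits between two result links,
# mimicking the structure that tripped up the earlier parse
page = """
<html><body>
<a href="/question/1.html">q1</a>
<style>body { margin: 0; } /* ...a long run of CSS... */</style>
<a href="/question/2.html">q2</a>
</body></html>
"""
soup = BeautifulSoup(page, 'html.parser')
links = [a['href'] for a in soup.find_all('a', {'href': re.compile('question/')})]
print(links)  # both links are found, including the one after the CSS
```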