This section implements the crawler for Dahepiao (大河票务网).
Dahepiao serves static pages, so the information is obtained by parsing the HTML source directly.
First, a search is performed by appending the keyword to 'https://www.dahepiao.com/search_new?fenlei=2&keyword='. Viewing the page source shows that the desired information sits in this part:
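Concatenating a raw Chinese keyword into the query string often works, but percent-encoding it is safer. A minimal sketch of building the search URL (the `build_search_url` helper is an illustration, not part of the crawler):

```python
from urllib.parse import quote

def build_search_url(keyword: str) -> str:
    # Percent-encode the keyword so Chinese characters travel safely in the URL
    return 'https://www.dahepiao.com/search_new?fenlei=2&keyword=' + quote(keyword)

print(build_search_url('少林寺'))
```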
BeautifulSoup is used to extract the result tags, in particular the href attribute, which links to each attraction's detail page:
Ncity = city.replace('市', '').replace('县', '').replace('省', '')
# Prepend the city name to improve search precision
url = 'https://www.dahepiao.com/search_new?fenlei=2&keyword=' + Ncity + keyword
try:
    html = self.getHtml(url)
    soup = BeautifulSoup(html, "html.parser")
    items = soup.find_all('a', {'class': 'info-title'})
    for item in items:
        # Attraction name
        title = item['title']
        # Link to the detail page
        href = item['href']
        # Further filtering: skip results that barely match the keyword
        result = fuzz.token_sort_ratio(title, keyword)
        if result <= 20:
            continue
        detailhtml = self.getHtml(href)
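The filtering step uses `fuzz.token_sort_ratio` from the fuzzywuzzy library, which scores string similarity from 0 to 100; results scoring 20 or below are skipped. The same idea can be sketched with the standard library's `difflib` (a stand-in here, not the crawler's actual dependency):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> int:
    # 0-100 similarity score, analogous in scale to fuzz.token_sort_ratio
    return int(SequenceMatcher(None, a, b).ratio() * 100)

# A result whose title contains the keyword scores well above the cutoff of 20
print(similarity('少林寺成人票', '少林寺'))
# An unrelated title scores at or below the cutoff and would be skipped
print(similarity('博物馆门票', '少林寺'))
```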
On the detail page, the part shown below contains the ticket information to be scraped:
Again, BeautifulSoup is used to locate the tags with the matching class name:
# trs: the ticket rows (<tr> tags) located in the detail page
tickets = {}
for tr in trs:
    # Full ticket description, one paragraph per line
    dis = tr.find('div', {'class': 'link_text'}).find_all('p')
    discription = ''
    for di in dis:
        discription = discription + di.text + '\n'
    # Each field lookup is wrapped in try/except for fault tolerance
    try:
        type = tr.find('td', {'class': 'ptdname'}).text.replace('\n', '').replace(' ', '')
    except Exception:
        type = ''
    try:
        ttitle = tr.find('a', {'class': 'ptlink'}).text.replace('订票须知', '').replace('\n', '').replace(' ', '')
    except Exception:
        ttitle = ''
    try:
        booktime = tr.find('dd', {'class': 'ticket_bookingTime'}).string.replace('\n', '').replace(' ', '')
    except Exception:
        booktime = ''
    try:
        price = tr.find('dd', {'class': 'ticket_price'}).text.replace('¥', '').replace('\n', '').replace(' ', '')
    except Exception:
        price = ''
    # Group tickets by ticket type
    tickets.setdefault(type, [])
    tickets[type].append({'name': ttitle, 'type': type, 'price': price, 'url': href, 'buy': '', 'from': '大河票务网', 'isReturnable': '',
                          'bookTime': booktime, 'outTime': '', 'useTime': '', 'discription': discription})
self.spotsInfo[title] = tickets
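The `setdefault` call groups tickets under their type, so `tickets` maps each ticket type to a list of ticket dicts. A minimal sketch of the resulting shape, with hypothetical values:

```python
tickets = {}
for name, ttype, price in [('成人票', '门票', '80'), ('儿童票', '门票', '40')]:
    # Create the list for this type on first sight, then append to it
    tickets.setdefault(ttype, [])
    tickets[ttype].append({'name': name, 'type': ttype, 'price': price})

print(tickets)
```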