Found a very thick Python web-scraping book: *Web Scraping with Python* (《Python网络数据采集》).
It should be enough to keep me learning for quite a while.
Today I'm starting with a library called BeautifulSoup, currently the most widely used library for scraping. Open cmd and run: pip install beautifulsoup4 and the install just works (after it finishes, don't forget to open the Python prompt and try importing it).
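A quick way to try it from the Python prompt, assuming the install succeeded, is to import the package and print its version:

```python
# Sanity check that beautifulsoup4 installed correctly
import bs4

print(bs4.__version__)  # prints whatever release pip installed
```

If the import raises ModuleNotFoundError, the install went into a different Python environment than the one being run.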
Exercise 1: A first look at BeautifulSoup
BeautifulSoup can do a lot; this example uses it to parse HTML elements.
# from urllib.request import urlopen
# from bs4 import BeautifulSoup
# html = urlopen('http://www.pythonscraping.com/pages/page1.html')
# bsObj = BeautifulSoup(html.read(), 'html.parser')  # name a parser explicitly to avoid a warning
# print(bsObj)
# I can also print just the h1 tag by itself
# from urllib.request import urlopen
# from bs4 import BeautifulSoup
# html = urlopen('http://www.pythonscraping.com/pages/page1.html')
# bsObj = BeautifulSoup(html.read(), 'html.parser')
# print(bsObj)
# print(bsObj.h1)
Exercise 2: What happens when BeautifulSoup looks for a tag that doesn't exist?
# from urllib.request import urlopen
# from bs4 import BeautifulSoup
# html = urlopen('http://www.pythonscraping.com/pages/page1.html')
# bsObj = BeautifulSoup(html.read(), 'html.parser')
# print(bsObj)
# print(bsObj.h1)
# # Accessing a tag that doesn't exist is no problem: it just returns None
# print(bsObj.ljlkjlij)
# # But accessing a child of that nonexistent tag (which of course doesn't exist either) raises an error
# print(bsObj.ljlkjlij.sfasf)
# # AttributeError: 'NoneType' object has no attribute 'sfasf'
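The same behavior can be reproduced without any network request by parsing a small in-memory HTML string (the string below is made up purely for illustration):

```python
from bs4 import BeautifulSoup

# A tiny in-memory page, so no network is needed
soup = BeautifulSoup('<html><body><h1>Hello</h1></body></html>', 'html.parser')

print(soup.h1)         # <h1>Hello</h1>
print(soup.nosuchtag)  # None: a missing tag quietly returns None
# soup.nosuchtag.child would raise AttributeError, because None has no attributes
```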
Exercise 3: Keep the program from exiting on AttributeError
# from urllib.request import urlopen
# from bs4 import BeautifulSoup
# html = urlopen('http://www.pythonscraping.com/pages/page1.html')
# bsObj = BeautifulSoup(html.read(), 'html.parser')
# print(bsObj)
# print(bsObj.h1)
# ## One workable fix for the error in exercise 2 is to catch it with try
# try:
#     test = bsObj.lihkhkjh.hghghghg
# except AttributeError:
#     print('tag does not exist')
# else:
#     if test is None:
#         print('tag was not found')
#     else:
#         print(test)
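Besides try/except, BeautifulSoup's `find()` method is another way to probe for a tag: it returns None when the tag is absent rather than raising, so a single explicit check avoids the chained-attribute trap (the HTML string here is just for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<html><body><h1>Hello</h1></body></html>', 'html.parser')

tag = soup.find('h2')  # find() returns None when the tag is absent
if tag is None:
    print('tag was not found')
else:
    print(tag.get_text())
```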
Exercise 4: Rewrite the previous example as a function
# from urllib.request import urlopen
# from urllib.error import HTTPError  # HTTPError lives in urllib.error, not urllib.request
# from bs4 import BeautifulSoup
# def gethtml(url):
#     try:
#         html = urlopen(url)
#     except HTTPError:
#         return None
#     try:
#         r = BeautifulSoup(html.read(), 'html.parser')
#         content = r.h1
#     except AttributeError:
#         return None
#     else:
#         return content
# test = gethtml('http://www.pythonscraping.com/pages/page1.html')
# if test is None:
#     print('not exist')
# else:
#     print(test)
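One extra detail worth noting about this pattern: `urlopen` can also fail with `urllib.error.URLError` when the server is unreachable at all (not just when it answers with an HTTP error), and since `HTTPError` is a subclass of `URLError`, catching the latter covers both cases. A minimal sketch, with a hypothetical `get_title` function name:

```python
from urllib.request import urlopen
from urllib.error import URLError  # HTTPError is a subclass of URLError
from bs4 import BeautifulSoup

def get_title(url):
    """Return the page's h1 tag, or None on any fetch problem."""
    try:
        html = urlopen(url)
    except URLError:
        return None  # covers both HTTP error codes and unreachable servers
    soup = BeautifulSoup(html.read(), 'html.parser')
    return soup.h1  # this is simply None if the page has no h1
```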
That wraps up chapter 1. Not a lot of material, but the book goes into many error-handling details. Checking in; more tomorrow!