BeautifulSoup is an HTML/XML parser whose main job is parsing and extracting data from HTML/XML documents. Put simply, BeautifulSoup is a tool for pulling data out of an HTML string.
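Before the full crawlers below, here is a minimal sketch of how BeautifulSoup itself is used; the HTML string and tag names are made up purely for illustration:

from bs4 import BeautifulSoup

html = '<div class="item"><a href="/a">First</a><span>42</span></div>'
soup = BeautifulSoup(html, 'html.parser')  # built-in parser; 'lxml' also works if installed

div = soup.find('div', class_="item")      # first matching tag
print(div.a.string)                        # -> First
print(div.find('span').string)             # -> 42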
The general crawler workflow (a minimal skeleton follows the list):
- 1. Import the modules
- 2. Define the URL and the request headers
- 3. Send the HTTP request with requests and get back the HTML string
- 4. Instantiate a BeautifulSoup object (the intermediary)
- 5. Extract the data
- 6. Store the data
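The skeleton below maps each of the six steps onto code. The URL, selector, and output filename are placeholders, not a real target site:

import requests                                      # 1. import the modules
from bs4 import BeautifulSoup

url = 'https://example.com'                          # 2. define URL and headers (placeholder)
headers = {'User-Agent': 'Mozilla/5.0'}

response = requests.get(url=url, headers=headers)    # 3. send the request
content = response.text                              #    and get the HTML string

soup = BeautifulSoup(content, 'html.parser')         # 4. instantiate BeautifulSoup

data = [tag.string for tag in soup.find_all('a')]    # 5. extract data (placeholder selector)

with open('data.txt', 'w', encoding='utf8') as f:    # 6. store the data
    f.write('\n'.join(str(d) for d in data))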
Example 1: Weibo hot search crawler
import requests
from bs4 import BeautifulSoup

url = 'https://s.weibo.com/top/summary?display=0&retcode=6102'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36'
}

# Fetch the page and decode the bytes into an HTML string
response = requests.get(url=url, headers=headers)
content = response.content.decode('utf8')

# Instantiate BeautifulSoup with the lxml parser
soup = BeautifulSoup(content, 'lxml')

sinas = []
# Each hot-search entry lives in a <td class="td-02">; skip the first (pinned) row
tds = soup.find_all('td', class_="td-02")[1:]
for td in tds:
    event = td.find_all('a')[-1].string    # topic title
    hot = td.find_all('span')[0].string    # heat value
    sina = {
        "event": event,
        "hot": hot,
    }
    sinas.append(sina)
print(sinas)
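Example 1 stops at printing, so step 6 (storing the data) is left open. As one hedged sketch, the list of dicts could be written to a CSV file; the filename here is arbitrary:

import csv

with open('weibo_hot.csv', 'w', newline='', encoding='utf8') as f:
    writer = csv.DictWriter(f, fieldnames=["event", "hot"])
    writer.writeheader()      # column header row
    writer.writerows(sinas)   # one row per hot-search entry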
Example 2: Autohome news crawler
- Data to scrape: the news title, publication time, and news summary.
import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36'
}

# Build the URLs of the first five news list pages
urls = []
for i in range(1, 6):
    url = "https://www.autohome.com.cn/news/{}/#liststart".format(i)
    urls.append(url)

news = []
for url in urls:
    try:
        response = requests.get(url=url, headers=headers)
        content = response.text
        soup = BeautifulSoup(content, 'html.parser')
        divs = soup.find_all('div', class_="article-wrapper")
        for div in divs:
            # Titles, timestamps, and summaries appear as parallel lists inside each wrapper
            titles = div.find_all('h3')
            times = div.find_all('span', class_="fn-left")
            profiles = div.find_all('p')
            for title, time_tag, profile in zip(titles, times, profiles):
                car_news = {
                    "title": title.string,
                    "times": time_tag.string,
                    "profiles": profile.string,
                }
                news.append(car_news)
    except Exception:
        # Skip pages that fail to download or parse instead of aborting the whole run
        continue
print(news)
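As in Example 1, the result is only printed. One possible way to persist it (again a sketch, with an arbitrary filename) is to dump the list to JSON:

import json

with open('autohome_news.json', 'w', encoding='utf8') as f:
    # ensure_ascii=False keeps the Chinese titles readable in the output file
    json.dump(news, f, ensure_ascii=False, indent=2)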