My first attempt at scraping a web page with Python — writing it down here!
Installing lxml is easy to look up (e.g. pip install lxml), so I won't cover it here.
We'll use Douban movie reviews as the example, at https://movie.douban.com/subject/2043546/
Press F12 to open the developer console and inspect the HTML. You can see that all the user reviews live inside the div with class="review-list " (note the trailing space), and each of its child divs is one review. OK, now that the structure is clear, let's get to work.
from lxml import etree
import requests

# Fetch the review page and parse the response into an element tree
url = "https://movie.douban.com/subject/2043546/"
response = requests.get(url)
text = response.text
html = etree.HTML(text)
That gives us the page's HTML. Note that XPath is used to select nodes here; for details on the syntax, see the article Python_XPath.
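As a quick illustration of how etree.HTML and XPath fit together, here is a minimal sketch on a made-up HTML fragment (not Douban's real markup):

```python
from lxml import etree

# A tiny, invented HTML fragment mirroring the review-list structure
fragment = """
<div class="review-list ">
  <div><a class="name">alice</a></div>
  <div><a class="name">bob</a></div>
</div>
"""

doc = etree.HTML(fragment)
# @class="review-list " matches the attribute exactly,
# including the trailing space
container = doc.xpath('//div[@class="review-list "]')[0]
names = container.xpath('.//a[@class="name"]/text()')
print(names)  # ['alice', 'bob']
```

Note that xpath() always returns a list, which is why the code below indexes with [0].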
On with the code: first select the div with class="review-list ", then all the reviews inside it.
# Select the div with class="review-list "
div = html.xpath('//div[@class="review-list "]')[0]
# Select all of its child divs, i.e. all of the reviews
lists = div.xpath('./div')
Once we have all the reviews, iterate over them, pull out each review's details, and print them:
for item in lists:
    name = item.xpath('.//a[@class="name"]/text()')[0]
    time = item.xpath('.//span[@class="main-meta"]/text()')[0]
    comment = item.xpath('.//div[@class="short-content"]/text()')[0]
    print('User: ' + name + '\n' + 'Comment: ' + comment + '\n' + 'Posted: ' + time + '\n\n')
As the HTML structure shows, each reviewer's name sits in an a tag with class="name"; the review text and timestamp work the same way, in class="short-content" and class="main-meta" respectively.
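One caveat: the [0] indexing raises an IndexError if a node is missing, and extracted text often carries surrounding whitespace. Here is a slightly more defensive version of the extraction, again demonstrated on an invented fragment since Douban's markup can change:

```python
from lxml import etree

# Invented single-review fragment for demonstration
review_html = """
<div>
  <a class="name"> Carol </a>
  <span class="main-meta">2019-01-01</span>
  <div class="short-content">
    Great movie.
  </div>
</div>
"""

item = etree.HTML(review_html)

def first_text(node, path):
    """Return the first stripped text match for an XPath, or '' if absent."""
    matches = node.xpath(path)
    return matches[0].strip() if matches else ''

name = first_text(item, './/a[@class="name"]/text()')
time = first_text(item, './/span[@class="main-meta"]/text()')
comment = first_text(item, './/div[@class="short-content"]/text()')
print(name, time, comment)
```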
Running the script prints the results.
The complete code:
from lxml import etree
import requests

# Fetch the review page and parse the response into an element tree
url = "https://movie.douban.com/subject/2043546/"
response = requests.get(url)
text = response.text
html = etree.HTML(text)
# Select the div with class="review-list "
div = html.xpath('//div[@class="review-list "]')[0]
# Select all of its child divs, i.e. all of the reviews
lists = div.xpath('./div')
for item in lists:
    name = item.xpath('.//a[@class="name"]/text()')[0]
    time = item.xpath('.//span[@class="main-meta"]/text()')[0]
    comment = item.xpath('.//div[@class="short-content"]/text()')[0]
    print('User: ' + name + '\n' + 'Comment: ' + comment + '\n' + 'Posted: ' + time + '\n\n')
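A practical note: Douban, like many sites, may reject requests that carry the default python-requests User-Agent, returning an error page instead of the reviews. If the scrape comes back empty, sending browser-like headers usually helps. The sketch below defines a small fetch helper; the User-Agent string is just an example value, not anything specific to Douban:

```python
import requests

# A browser-like User-Agent; any mainstream browser string should do
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
}

def fetch(url):
    """Fetch a page with browser-like headers; raise on HTTP errors."""
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()  # surface 4xx/5xx instead of parsing an error page
    return response.text

# Usage: text = fetch("https://movie.douban.com/subject/2043546/")
```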
And that's it: a simple HTML scrape, done!
I'm a complete beginner and still learning, so this post is light on substance; it's just here as a record!