Web Scraping: BeautifulSoup & XPath

Task2

2.1 Learning BeautifulSoup

Learn BeautifulSoup and use it to extract content.

Beautiful Soup is a Python library for extracting data from HTML and XML files. Working on top of your parser of choice, it provides idiomatic ways to navigate, search, and modify the parse tree, and can save you hours or even days of work.
Reference documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc.zh/
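Before the full scraper, a minimal sketch of the basic workflow (parse markup, then search the tree) using a small hypothetical HTML snippet; the class names mirror the ones the DXY page uses below:

```python
from bs4 import BeautifulSoup

# Hypothetical snippet for illustration only
html = """
<div class="auth">alice</div>
<td class="postbody"> Hello, world </td>
"""

# html.parser is the stdlib parser; 'lxml' also works if installed
soup = BeautifulSoup(html, "html.parser")

# find() returns the first matching tag; class_ avoids the Python keyword 'class'
print(soup.find("div", class_="auth").get_text(strip=True))      # alice
print(soup.find("td", class_="postbody").get_text(strip=True))   # Hello, world
```

`get_text(strip=True)` trims the surrounding whitespace, which is why it appears throughout the scrapers below.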

Use BeautifulSoup to extract the replies from a DXY (丁香园) forum thread.

import urllib.request
from bs4 import BeautifulSoup as bs


def main():
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0"
    }
    url = 'http://www.dxy.cn/bbs/thread/626626'
    request = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(request).read().decode("utf-8")
    html = bs(response, 'lxml')
    get_items(html)


def get_items(html):
    datas = []  # holds (username, comment) pairs
    for data in html.find_all("tbody"):
        try:
            userid = data.find("div", class_="auth").get_text(strip=True)
            print(userid)
            content = data.find("td", class_="postbody").get_text(strip=True)
            print(content)
            datas.append((userid, content))
        except AttributeError:  # <tbody> without an auth/postbody cell
            continue
    print(datas)


if __name__ == '__main__':
    main()
from bs4 import BeautifulSoup
from common import getHtmlText  # author's helper module for fetching page text


def get_data(html):
    soup = BeautifulSoup(html, "lxml")
    results = []
    for data in soup.find_all("tbody"):
        try:
            user_name = data.find("div", class_="auth").get_text(strip=True)
            comment = data.find("td", class_="postbody").get_text(strip=True)
            result = "User: {}, Comment: {}\n".format(user_name, comment)
            print(result)
            results.append(result)
        except AttributeError:  # <tbody> without a reply inside
            continue
    return results


if __name__ == '__main__':
    url = 'http://www.dxy.cn/bbs/thread/626626#626626'
    html = getHtmlText.getHtmlText(url)
    with open("dxy.txt", "w", encoding="utf-8") as f:
        for line in get_data(html):
            f.write(line)

DXY thread link: http://www.dxy.cn/bbs/thread/626626#626626
Reference: https://blog.csdn.net/wwq114/article/details/88085875

2.2 Learning XPath

Learn XPath and use lxml + XPath to extract content.

XPath is a language for finding information in an XML document; it can be used to traverse a document's elements and attributes.
XPath is a major element of the W3C XSLT standard, and both XQuery and XPointer are built on XPath expressions.
An understanding of XPath is therefore fundamental to many advanced XML applications.

XPath uses path expressions to navigate XML documents
XPath contains a library of standard functions
XPath is a major element in XSLT
XPath is a W3C standard
<?xml version="1.0" encoding="ISO-8859-1"?>

<bookstore>

<book category="COOKING">
  <title lang="en">Everyday Italian</title>
  <author>Giada De Laurentiis</author>
  <year>2005</year>
  <price>30.00</price>
</book>

<book category="CHILDREN">
  <title lang="en">Harry Potter</title>
  <author>J K. Rowling</author>
  <year>2005</year>
  <price>29.99</price>
</book>

<book category="WEB">
  <title lang="en">XQuery Kick Start</title>
  <author>James McGovern</author>
  <author>Per Bothner</author>
  <author>Kurt Cagle</author>
  <author>James Linn</author>
  <author>Vaidyanathan Nagarajan</author>
  <year>2003</year>
  <price>49.99</price>
</book>

<book category="WEB">
  <title lang="en">Learning XML</title>
  <author>Erik T. Ray</author>
  <year>2003</year>
  <price>39.95</price>
</book>

</bookstore>

Reference: http://www.runoob.com/xpath/xpath-tutorial.html
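The path expressions above can be tried directly on the bookstore document with lxml; a small self-contained sketch (the XML is inlined and abridged from the example above):

```python
from lxml import etree

# Abridged copy of the bookstore document for a self-contained example
xml = """<bookstore>
  <book category="COOKING">
    <title lang="en">Everyday Italian</title>
    <price>30.00</price>
  </book>
  <book category="WEB">
    <title lang="en">Learning XML</title>
    <price>39.95</price>
  </book>
</bookstore>"""

root = etree.fromstring(xml)

# Absolute path: the text of every <title>
titles = root.xpath("/bookstore/book/title/text()")
print(titles)  # ['Everyday Italian', 'Learning XML']

# Predicate: titles of books whose price is greater than 35.00
expensive = root.xpath("/bookstore/book[price>35.00]/title/text()")
print(expensive)  # ['Learning XML']

# Attribute selection with @
langs = root.xpath("//title/@lang")
print(langs)  # ['en', 'en']
```

`xpath()` always returns a list (of elements, strings, or attribute values, depending on the expression), which is why the scraper below iterates over its results.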

Use XPath to extract the replies from the DXY forum thread.
DXY thread link: http://www.dxy.cn/bbs/thread/626626#626626
Reference: https://blog.csdn.net/naonao77/article/details/88129994


import csv

import requests
from lxml import etree

url = "http://www.dxy.cn/bbs/thread/626626#626626"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"}
# headers must be passed as a keyword argument: the second positional
# argument of requests.get() is params, not headers
response = requests.get(url, headers=headers)
html = etree.HTML(response.text)  # build an XPath-capable document tree

users = html.xpath('//div[@class="auth"]')               # usernames
contents = html.xpath('//tbody//td[@class="postbody"]')  # reply bodies
times = html.xpath('//div[@class="post-info"]/span[1]')  # post timestamps

# newline='' prevents blank rows on Windows; 'w' rewrites the file each
# run instead of appending a duplicate header row
with open("dingxiang1.csv", 'w', encoding='utf-8', newline='') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["user", "comment", "time"])
    for user, content, post_time in zip(users, contents, times):
        # string(.) collapses an element's text content, including child nodes
        row = (user.xpath('string(.)').strip(),
               content.xpath('string(.)').strip(),
               post_time.xpath('string(.)').strip())
        print(*row)
        writer.writerow(row)