2020-12-24

XPath

Basic concepts
- XPath (XML Path Language) is a query language for XML: it locates nodes in an XML document's tree structure, navigating by elements and attributes.
- XML is a text-based markup format; XPath makes it easy to pin down elements and their attributes within an XML document. lxml is a third-party Python module that can parse HTML text into an XML-style Element tree and run XPath expressions against it.
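As a minimal sketch of the idea, the following parses a small hand-written XML string with lxml and uses XPath to pull out element text and attribute values (the XML snippet and its tag names are made up purely for illustration):

```python
from lxml import etree

# a tiny illustrative XML document
xml = b"""
<library>
    <book id="b1"><title>Dune</title></book>
    <book id="b2"><title>Solaris</title></book>
</library>
"""

root = etree.fromstring(xml)

# //book/title/text() selects the text of every <title> under a <book>
titles = root.xpath('//book/title/text()')
# //book/@id selects the id attribute of every <book>
ids = root.xpath('//book/@id')

print(titles)  # ['Dune', 'Solaris']
print(ids)     # ['b1', 'b2']
```

Note that an XPath query always returns a list, even when only one node matches.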

Using the module
In Python, we install the lxml library (pip install lxml) to use XPath.
lxml is an HTML/XML parser whose main job is parsing and extracting HTML/XML data. etree.HTML() converts a string of markup into an Element object.
Official lxml documentation: http://lxml.de/index.html

Getting-started exercise

from lxml import etree
import csv

wb_data = """
        <div>
            <ul>
                 <li class="item-0"><a href="link1.html">first item</a></li>
                 <li class="item-1"><a href="link2.html">second item</a></li>
                 <li class="item-inactive"><a href="link3.html">third item</a></li>
                 <li class="item-1"><a href="link4.html">fourth item</a></li>
                 <li class="item-0"><a href="link5.html">fifth item</a></li>
             </ul>
         </div>
        """
html_element = etree.HTML(wb_data)

# the href attribute of every <li>/<a>
links = html_element.xpath('//li/a/@href')
# the text of every <li>/<a>
content = html_element.xpath('//li/a/text()')

# Goal: pair the results up as {'href': 'link1.html', 'title': 'first item'} ...
# and write the records to a CSV file.
# zip() pairs the two lists by position; list.index() would return the
# wrong row whenever two links shared the same href.
lst = []
for link, title in zip(links, content):
    lst.append({'href': link, 'title': title})

titles = ('href', 'title')
with open('d.csv', 'w', encoding='utf-8', newline='') as file_obj:
    writer = csv.DictWriter(file_obj, titles)
    writer.writeheader()
    writer.writerows(lst)
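Building on the snippet above, a few other XPath patterns come up constantly: filtering by attribute value, substring matching with contains(), and selecting by position. A short sketch against the same sample markup:

```python
from lxml import etree

wb_data = """
<div><ul>
    <li class="item-0"><a href="link1.html">first item</a></li>
    <li class="item-1"><a href="link2.html">second item</a></li>
    <li class="item-inactive"><a href="link3.html">third item</a></li>
    <li class="item-1"><a href="link4.html">fourth item</a></li>
    <li class="item-0"><a href="link5.html">fifth item</a></li>
</ul></div>
"""
html = etree.HTML(wb_data)

# filter by exact attribute value
first = html.xpath('//li[@class="item-0"]/a/text()')
# substring match on an attribute
inactive = html.xpath('//li[contains(@class, "inactive")]/a/@href')
# position-based selection (XPath positions are 1-based)
last = html.xpath('//li[last()]/a/text()')

print(first)     # ['first item', 'fifth item']
print(inactive)  # ['link3.html']
print(last)      # ['fifth item']
```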

A first real scrape


# https://xxxstart=0&filter=    page 1
# https://xxxstart=25&filter=   page 2
# https://xxxstart=50&filter=   page 3
# https://xxxstart=75&filter=   page 4
# https://xxxstart=100&filter=  page 5
# offset = (page - 1) * 25

import requests
from lxml import etree
import csv
# url = 'https://xxxstart=0&filter='
# headers = {
#     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...........'
# }
# res = requests.get(url,headers=headers)
# print(res.text)
lianxi_url = 'https://xxxstart={}&filter='

# fetch a page's HTML source
def getSource(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..........'
    }
    response = requests.get(url, headers=headers)
    response.encoding = 'utf-8'
    return response.text

# extract every movie entry from one page of source
def getEveryItem(source):

    html_element = etree.HTML(source)

    movieItemList = html_element.xpath('//div[@class="info"]')

    movieList = []

    for eachMovie in movieItemList:

        movieDict = {}
        title = eachMovie.xpath('./div[@class="hd"]/a/span[@class="title"]/text()')
        otherTitle = eachMovie.xpath('./div[@class="hd"]/a/span[@class="other"]/text()')  # alternative title
        link = eachMovie.xpath('./div[@class="hd"]/a/@href')[0]  # detail-page URL
        star = eachMovie.xpath('./div[@class="bd"]/div[@class="star"]/span[@class="rating_num"]/text()')[0]  # rating
        quote = eachMovie.xpath('./div[@class="bd"]/p[@class="quote"]/span/text()')  # tagline
        # some entries have no tagline, so guard against an empty list
        quote = quote[0] if quote else ''

        movieDict['title'] = ''.join(title + otherTitle)
        movieDict['url'] = link
        movieDict['star'] = star
        movieDict['quote'] = quote

        movieList.append(movieDict)
        print(movieDict)

    return movieList

# write the records to a CSV file
def writeData(movieList):

    with open('douban.csv','w',encoding='utf-8',newline='') as f:
        writer = csv.DictWriter(f,fieldnames=['title','star','quote','url'])
        writer.writeheader()
        for each in movieList:
            writer.writerow(each)



if __name__ == '__main__':

    movieList = []

    for i in range(10):
        pageLink = lianxi_url.format(i * 25)

        source = getSource(pageLink)

        movieList += getEveryItem(source)

    writeData(movieList)
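The offset arithmetic from the comments above ((page - 1) * 25) is easy to sanity-check in isolation. This tiny sketch just generates the ten page URLs without making any network requests (the xxx placeholder URL is kept as-is from the source):

```python
# page n (1-based) starts at offset (n - 1) * 25
lianxi_url = 'https://xxxstart={}&filter='

urls = [lianxi_url.format((page - 1) * 25) for page in range(1, 11)]

print(urls[0])  # https://xxxstart=0&filter=    (page 1)
print(urls[4])  # https://xxxstart=100&filter=  (page 5)
```

This matches the loop in the main block, where `range(10)` yields i = 0..9 and the offset is i * 25.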








