Guoxuedashi (国学大师) word-list crawler

  1. Word list to look up (download): https://jhc001.lanzouw.com/iWAtlwcuixa
    password: bxp6
  2. Crawler code:
# coding=utf-8
import requests
from lxml import etree
import os


def spider(name):

    try:
        response=requests.get('http://www.guoxuedashi.net/zidian/so.php?sokeyci='+name+'&submit=&kz=12&cilen=0')
        tree=etree.HTML(response.text)
        lis=tree.xpath('//div[@class="info_txt2 clearfix"]/a[1]/@href')
        # print(lis)
        if lis:
            r_lis='http://www.guoxuedashi.net'+lis[0]
            detail_page1(name,r_lis)

        else:
            response = requests.get('http://www.guoxuedashi.net/renwu/?sokeylishi='+name)
            tree=etree.HTML(response.text)
            lis=tree.xpath('//dl[@class="clearfix"]/dd[1]/a/@href')
            if lis:
                r_lis='http://www.guoxuedashi.net'+lis[0]
                detail_page2(name,r_lis)
            else:
                print('no results from either search')

    except Exception as e:
        # a bare except would silently swallow parsing bugs; report what failed
        print('error while fetching %s: %s' % (name, e))



# Parse a dictionary (word-entry) result page
def detail_page1(name,r_lis):
    # r_lis='http://www.guoxuedashi.net/hydcd/7876o.html'
    response = requests.get(r_lis)
    # print(response.text)
    tree=etree.HTML(response.text)
    lis=tree.xpath('//div[@class="info_txt2 clearfix"]/p[2]/span/span/text()')
    if lis:
        detail=lis[0].split('。')[0]
        print(name+'\r\n'+detail)
        save_data(name,detail)
    else:
        # the union operator belongs inside one XPath string; 'str'|'str' is a Python TypeError
        lis = tree.xpath('//div[@class="info_txt2 clearfix"]/text() | //div[@class="info_txt2 clearfix"]/font/span/text()')
        detail=lis[1]+'\n'+lis[2]+'\n'+lis[3]
        print(name+'\r\n'+detail)
        save_data(name,detail)

# Parse a history (person) result page
def detail_page2(name,r_lis):
    # r_lis='http://www.guoxuedashi.net/renwu/10838abax/'
    response=requests.get(r_lis)
    # print(response.text)
    tree=etree.HTML(response.text)
    lis=tree.xpath('//div[@class="info_content zj clearfix"]/span/p/text()')
    detail=lis[2].split('。')[0]
    print(name+detail)
    save_data(name, detail)

# Read the input words
def read_word():
    with open('./words.txt','r',encoding='utf-8') as fp:
        words=fp.readlines()
        # print(words)
        for word in words:
            name=word.replace('\n','')
            # print(name)
            spider(name)


# Save the results
def save_data(name,detail):
    with open('./results/results.txt','a',encoding='utf-8') as fp:
        result=name+':'+detail+'\n'
        fp.write(result)



if __name__ == '__main__':
    os.makedirs('./results',exist_ok=True)  # consistent 4-space indent; mixing tabs raises TabError
    read_word()
    # spider('张飞')
    # detail_page2()
  1. The code is purely single-threaded and painfully slow, and it has plenty of other shortcomings; suggestions for improvement are very welcome.
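On the single-threading point above: a minimal sketch of a concurrent variant using the standard-library `concurrent.futures` module. The `crawl_all` helper and its parameters are hypothetical names, not part of the original code; in the real crawler the worker would be `spider`. Collecting results first and writing once at the end also sidesteps having several threads append to `results.txt` at the same time.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical helper: run a per-word worker function across a thread pool.
def crawl_all(words, worker, max_workers=8):
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map() preserves input order and re-raises worker exceptions
        return list(pool.map(worker, words))

if __name__ == '__main__':
    # stand-in worker for illustration; the real crawler would pass spider
    results = crawl_all(['alpha', 'beta'], str.upper)
    print(results)  # ['ALPHA', 'BETA']
```

Since the site may throttle aggressive clients, `max_workers` is best kept small and combined with a per-request timeout.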