Scraping NameNode 50070 JMX metrics

In production, a DataNode that has been decommissioned still has to be shut down and taken out of the cluster, and before powering it off you need to confirm that decommissioning has actually finished. Checking the NameNode's 50070 web UI by hand is inefficient, so I wrote a simple script to pull the node information quickly. The /jmx endpoint on port 50070 exposes much more than this: you can collect whichever metrics you need, expose them as a Prometheus exporter (a minimal sketch follows the script below), or write them into a time-series database. This post is only for learning and discussion.

# -*- coding: utf-8 -*-
__author__ = 'machine'
# date: 20220720
import json

import requests

# NameNode web UI (port 50070) address for each cluster
url_dict = {
    'cluster1': 'http://192.168.100.1:50070',
    'cluster2': 'http://192.168.14.1:50070',
}

for name, base_url in url_dict.items():
    print()
    print("-----------------------------------------------------------------------------")
    print("Cluster:", name)
    # Query the NameNodeInfo bean, which carries LiveNodes/DeadNodes and the capacity figures
    url = base_url + '/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'
    print(url)
    req = requests.get(url, timeout=10)
    bean = req.json()['beans'][0]
    # LiveNodes and DeadNodes are JSON strings embedded in the bean, so parse them again
    livenode = json.loads(bean['LiveNodes'])
    deadnode = json.loads(bean['DeadNodes'])
    print("DataNode admin states:")
    list_inservicenode = []
    list_decommissioned = []
    for lip in livenode.values():
        ip = lip['xferaddr'].split(':')[0]
        # adminState is "In Service", "Decommission In Progress" or "Decommissioned";
        # only fully decommissioned nodes are safe to shut down
        if lip['adminState'] == 'Decommissioned':
            list_decommissioned.append(ip)
        else:
            list_inservicenode.append(ip)
    print()
    print("Decommissioned nodes:")
    for i in list_decommissioned:
        print(i)
    print("In-service nodes:")
    for i in list_inservicenode:
        print(i)
    print("Dead nodes:")
    for i in deadnode:
        print(i)
    print()
    print("----------------------------- HDFS capacity -----------------------------")
    print("HDFS total space:", bean['Total'] // 1024 ** 4, "TB")
    print("HDFS used space:", bean['Used'] // 1024 ** 4, "TB")
    print("HDFS free space:", bean['Free'] // 1024 ** 4, "TB")
    print("HDFS usage:", bean['PercentUsed'], "%")
    print("-----------------------------------------------------------------------------")

Some commonly used Hadoop JMX beans and attributes

Example of querying a single bean:

curl 'http://192.168.10.2:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020'

NameNode: 50070

Hadoop:service=NameNode,name=JvmMetrics
    MemHeapMaxM
    MemMaxM

Hadoop:service=NameNode,name=FSNamesystem
    CapacityTotal
    CapacityTotalGB
    CapacityUsed
    CapacityUsedGB
    CapacityRemaining
    CapacityRemainingGB
    CapacityUsedNonDFS
    TotalLoad
    FilesTotal

Hadoop:service=NameNode,name=FSNamesystemState
    NumLiveDataNodes
    TopUserOpCounts (timestamp)

Hadoop:service=NameNode,name=NameNodeInfo
    LiveNodes

java.lang:type=Runtime
    StartTime

Hadoop:service=NameNode,name=NameNodeActivity
    CreateFileOps
    FilesCreated
    FilesAppended
    FilesRenamed
    GetListingOps
    DeleteFileOps
    FilesDeleted

DataNode: 50075

Hadoop:service=DataNode,name=DataNodeActivity-slave-50010
    BytesWritten
    BytesRead
    BlocksWritten
    BlocksRead
    ReadsFromLocalClient
    ReadsFromRemoteClient
    WritesFromLocalClient
    WritesFromRemoteClient
    BlocksGetLocalPathInfo
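
Any of the beans above can be fetched the same way by passing its name in the qry parameter, just like the script does for NameNodeInfo. A small helper sketch (the function name get_jmx_attr is made up for illustration):

import requests

def get_jmx_attr(base_url, bean, attr, timeout=10):
    """Fetch a single attribute of one JMX bean via the qry parameter."""
    resp = requests.get(base_url + '/jmx', params={'qry': bean}, timeout=timeout)
    resp.raise_for_status()
    return resp.json()['beans'][0][attr]

# e.g. the number of live DataNodes reported by the NameNode
print(get_jmx_attr('http://192.168.100.1:50070',
                   'Hadoop:service=NameNode,name=FSNamesystemState',
                   'NumLiveDataNodes'))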
