ceph_exporter-2.0.0-1 Exporter Metrics Reference

This article walks through the PG and OSD metrics involved in monitoring a Ceph Jewel (10.2.11) cluster with Prometheus, covering PG states such as active, scrubbing, and degraded, as well as key OSD indicators such as available storage, utilization, and performance latency, providing comprehensive data on the health of a Ceph cluster.

As my use of this monitoring stack has deepened, this has grown into a series of articles covering the monitoring setup, analysis of the monitored metrics, and alert rules.

Part 1 of the series: setting up Prometheus monitoring for ceph-jewel (version 10.2.11)

The metric analysis in this article is based on the exporter shipped as ceph_exporter-2.0.0-1.x86_64.rpm.

1. PG Metrics

Metrics related to placement groups (PGs).

1.1 ceph_active_pgs: number of PGs in the active state

Metric: the number of PGs in the cluster that are in the active state
# HELP ceph_active_pgs No. of active PGs in the cluster
# TYPE ceph_active_pgs gauge
ceph_active_pgs{cluster="ceph"} 512

1.2 ceph_deep_scrubbing_pgs

Metric: the number of PGs in the cluster undergoing a deep scrub
# HELP ceph_deep_scrubbing_pgs No. of deep scrubbing PGs in the cluster
# TYPE ceph_deep_scrubbing_pgs gauge
ceph_deep_scrubbing_pgs{cluster="ceph"} 0

1.3 ceph_degraded_pgs

Metric: the number of PGs in a degraded state
# HELP ceph_degraded_pgs No. of PGs in a degraded state
# TYPE ceph_degraded_pgs gauge
ceph_degraded_pgs{cluster="ceph"} 0

1.4 ceph_peering_pgs: number of PGs in the peering state

Metric: the number of PGs in the cluster that are peering
# HELP ceph_peering_pgs No. of peering PGs in the cluster
# TYPE ceph_peering_pgs gauge
ceph_peering_pgs{cluster="ceph"} 0

1.5 ceph_scrubbing_pgs

Metric: the number of PGs in the cluster undergoing a scrub
# HELP ceph_scrubbing_pgs No. of scrubbing PGs in the cluster
# TYPE ceph_scrubbing_pgs gauge
ceph_scrubbing_pgs{cluster="ceph"} 0

1.6 ceph_stuck_degraded_pgs

Metric: the number of PGs stuck in a degraded state
# HELP ceph_stuck_degraded_pgs No. of PGs stuck in a degraded state
# TYPE ceph_stuck_degraded_pgs gauge
ceph_stuck_degraded_pgs{cluster="ceph"} 0

1.7 ceph_stuck_stale_pgs

Metric: the number of PGs in the cluster stuck in the stale state
# HELP ceph_stuck_stale_pgs No. of stuck stale PGs in the cluster
# TYPE ceph_stuck_stale_pgs gauge
ceph_stuck_stale_pgs{cluster="ceph"} 0

1.8 ceph_stuck_unclean_pgs

Metric: the number of PGs stuck in an unclean state
# HELP ceph_stuck_unclean_pgs No. of PGs stuck in an unclean state
# TYPE ceph_stuck_unclean_pgs gauge
ceph_stuck_unclean_pgs{cluster="ceph"} 0

1.9 ceph_stuck_undersized_pgs

Metric: the number of PGs stuck in an undersized state
# HELP ceph_stuck_undersized_pgs No. of stuck undersized PGs in the cluster
# TYPE ceph_stuck_undersized_pgs gauge
ceph_stuck_undersized_pgs{cluster="ceph"} 0
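
The four stuck_* series above (1.6 through 1.9) combine naturally into a single alert condition, since they share the same label set. A minimal PromQL sketch; the > 0 threshold and any for: duration in an alerting rule are up to your environment:

ceph_stuck_degraded_pgs + ceph_stuck_stale_pgs + ceph_stuck_unclean_pgs + ceph_stuck_undersized_pgs > 0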

1.10 ceph_total_pgs: total number of PGs in the cluster

Metric: the total number of PGs currently in the cluster
# HELP ceph_total_pgs Total no. of PGs in the cluster
# TYPE ceph_total_pgs gauge
ceph_total_pgs{cluster="ceph"} 512
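
With the total available, a quick dashboard ratio for PG health can be built from 1.1 and 1.10. A sketch, assuming every PG should normally be active:

ceph_active_pgs{cluster="ceph"} / ceph_total_pgs{cluster="ceph"}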

1.11 ceph_unclean_pgs: number of PGs in the unclean state

Metric: the number of PGs in the cluster in an unclean state
# HELP ceph_unclean_pgs No. of PGs in an unclean state
# TYPE ceph_unclean_pgs gauge
ceph_unclean_pgs{cluster="ceph"} 0

1.12 ceph_undersized_pgs: number of PGs in the undersized state

Metric: the number of PGs in the cluster in an undersized state
# HELP ceph_undersized_pgs No. of undersized PGs in the cluster
# TYPE ceph_undersized_pgs gauge
ceph_undersized_pgs{cluster="ceph"} 0

1.13 ceph_pgs_remapped: number of PGs in the remapped state

Metric: the number of remapped PGs, which incur cluster-wide data movement
# HELP ceph_pgs_remapped No. of PGs that are remapped and incurring cluster-wide movement
# TYPE ceph_pgs_remapped gauge
ceph_pgs_remapped{cluster="ceph"} 0

2. OSD Metrics

2.1 ceph_osd_avail_bytes

Metric: available storage per OSD, in bytes
# HELP ceph_osd_avail_bytes OSD Available Storage in Bytes
# TYPE ceph_osd_avail_bytes gauge
ceph_osd_avail_bytes{cluster="ceph",osd="osd.0"} 1.0881601016e+13
ceph_osd_avail_bytes{cluster="ceph",osd="osd.1"} 1.074914776e+13
ceph_osd_avail_bytes{cluster="ceph",osd="osd.10"} 1.0974155728e+13
ceph_osd_avail_bytes{cluster="ceph",osd="osd.11"} 1.1021825236e+13

2.2 ceph_osd_average_utilization

Metric: average OSD utilization across the cluster (a percentage)
# HELP ceph_osd_average_utilization OSD Average Utilization
# TYPE ceph_osd_average_utilization gauge
ceph_osd_average_utilization{cluster="ceph"} 6.287799

2.3 ceph_osd_bytes

Metric: total storage per OSD, in bytes
# HELP ceph_osd_bytes OSD Total Bytes
# TYPE ceph_osd_bytes gauge
ceph_osd_bytes{cluster="ceph",osd="osd.0"} 1.167485438e+13
ceph_osd_bytes{cluster="ceph",osd="osd.1"} 1.171155454e+13
ceph_osd_bytes{cluster="ceph",osd="osd.10"} 1.167485438e+13
ceph_osd_bytes{cluster="ceph",osd="osd.11"} 1.167485438e+13
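
Combining 2.1 and 2.3 gives a per-OSD free-space percentage, which is often more actionable than raw bytes. A sketch; the 20% alert threshold is illustrative:

ceph_osd_avail_bytes / ceph_osd_bytes * 100
ceph_osd_avail_bytes / ceph_osd_bytes * 100 < 20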

2.4 ceph_osd_crush_weight: CRUSH weight of each OSD

Metric: the CRUSH weight assigned to each OSD
# HELP ceph_osd_crush_weight OSD Crush Weight
# TYPE ceph_osd_crush_weight gauge
ceph_osd_crush_weight{cluster="ceph",osd="osd.0"} 10.873093
ceph_osd_crush_weight{cluster="ceph",osd="osd.1"} 10.907196
ceph_osd_crush_weight{cluster="ceph",osd="osd.10"} 10.873093
ceph_osd_crush_weight{cluster="ceph",osd="osd.11"} 10.873093

2.5 ceph_osd_depth

Metric: the depth of the OSD in the CRUSH hierarchy (2 here, i.e. root → host → osd)
# HELP ceph_osd_depth OSD Depth
# TYPE ceph_osd_depth gauge
ceph_osd_depth{cluster="ceph",osd="osd.0"} 2
ceph_osd_depth{cluster="ceph",osd="osd.1"} 2
ceph_osd_depth{cluster="ceph",osd="osd.10"} 2
ceph_osd_depth{cluster="ceph",osd="osd.11"} 2

2.6 ceph_osd_down: per-OSD down/up state

Metric: the down/up state of each OSD. Each series carries a status label giving the OSD's current state; in this sample every OSD is up (status="up" with value 1).
# HELP ceph_osd_down No. of OSDs down in the cluster
# TYPE ceph_osd_down gauge
ceph_osd_down{cluster="ceph",osd="osd.0",status="up"} 1
ceph_osd_down{cluster="ceph",osd="osd.1",status="up"} 1
ceph_osd_down{cluster="ceph",osd="osd.10",status="up"} 1
ceph_osd_down{cluster="ceph",osd="osd.11",status="up"} 1

2.7 ceph_osd_in: per-OSD in/out state

Metric: the in/out state of each OSD; 1 means in, 0 means out
# HELP ceph_osd_in OSD In Status
# TYPE ceph_osd_in gauge
ceph_osd_in{cluster="ceph",osd="osd.0"} 1
ceph_osd_in{cluster="ceph",osd="osd.1"} 1
ceph_osd_in{cluster="ceph",osd="osd.10"} 1
ceph_osd_in{cluster="ceph",osd="osd.11"} 1

2.8 ceph_osd_perf_apply_latency_seconds

Metric: per-OSD apply latency, in seconds
# HELP ceph_osd_perf_apply_latency_seconds OSD Perf Apply Latency
# TYPE ceph_osd_perf_apply_latency_seconds gauge
ceph_osd_perf_apply_latency_seconds{cluster="ceph",osd="osd.0"} 0.002
ceph_osd_perf_apply_latency_seconds{cluster="ceph",osd="osd.1"} 0
ceph_osd_perf_apply_latency_seconds{cluster="ceph",osd="osd.10"} 0
ceph_osd_perf_apply_latency_seconds{cluster="ceph",osd="osd.11"} 0.003

2.9 ceph_osd_perf_commit_latency_seconds

Metric: per-OSD commit latency, in seconds
# HELP ceph_osd_perf_commit_latency_seconds OSD Perf Commit Latency
# TYPE ceph_osd_perf_commit_latency_seconds gauge
ceph_osd_perf_commit_latency_seconds{cluster="ceph",osd="osd.0"} 0.001
ceph_osd_perf_commit_latency_seconds{cluster="ceph",osd="osd.1"} 0
ceph_osd_perf_commit_latency_seconds{cluster="ceph",osd="osd.10"} 0
ceph_osd_perf_commit_latency_seconds{cluster="ceph",osd="osd.11"} 0.002
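
Either latency series (2.8 or 2.9) can back a simple alert on slow disks. A sketch; the 0.5 s threshold is illustrative and should be tuned to your hardware:

ceph_osd_perf_apply_latency_seconds > 0.5
ceph_osd_perf_commit_latency_seconds > 0.5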

2.10 ceph_osd_pgs: number of PGs on each OSD

Metric: the number of PGs placed on each OSD
# HELP ceph_osd_pgs OSD Placement Group Count
# TYPE ceph_osd_pgs gauge
ceph_osd_pgs{cluster="ceph",osd="osd.0"} 105
ceph_osd_pgs{cluster="ceph",osd="osd.1"} 111
ceph_osd_pgs{cluster="ceph",osd="osd.10"} 74
ceph_osd_pgs{cluster="ceph",osd="osd.11"} 68

2.11 ceph_osd_reweight: reweight value of each OSD

Metric: the reweight value of each OSD
# HELP ceph_osd_reweight OSD Reweight
# TYPE ceph_osd_reweight gauge
ceph_osd_reweight{cluster="ceph",osd="osd.0"} 1
ceph_osd_reweight{cluster="ceph",osd="osd.1"} 1
ceph_osd_reweight{cluster="ceph",osd="osd.10"} 1
ceph_osd_reweight{cluster="ceph",osd="osd.11"} 1

2.12 ceph_osd_scrub_state

Metric: the scrub state of each OSD involved in a scrub (0 here, meaning none of these OSDs are currently scrubbing)
# HELP ceph_osd_scrub_state State of OSDs involved in a scrub
# TYPE ceph_osd_scrub_state gauge
ceph_osd_scrub_state{cluster="ceph",osd="osd.11"} 0
ceph_osd_scrub_state{cluster="ceph",osd="osd.2"} 0
ceph_osd_scrub_state{cluster="ceph",osd="osd.5"} 0

2.13 ceph_osd_total_avail_bytes

Metric: total available storage across all OSDs, in bytes
# HELP ceph_osd_total_avail_bytes OSD Total Available Storage Bytes 
# TYPE ceph_osd_total_avail_bytes gauge
ceph_osd_total_avail_bytes{cluster="ceph"} 1.97115523488e+14

2.14 ceph_osd_total_bytes

Metric: total storage across all OSDs, in bytes
# HELP ceph_osd_total_bytes OSD Total Storage Bytes
# TYPE ceph_osd_total_bytes gauge
ceph_osd_total_bytes{cluster="ceph"} 2.103413654e+14

2.15 ceph_osd_total_used_bytes

Metric: total used storage across all OSDs, in bytes
# HELP ceph_osd_total_used_bytes OSD Total Used Storage Bytes
# TYPE ceph_osd_total_used_bytes gauge
ceph_osd_total_used_bytes{cluster="ceph"} 1.3225841912e+13

2.16 ceph_osd_up: whether each OSD is up

Metric: the up state of each OSD; 1 means up, 0 means down
# HELP ceph_osd_up OSD Up Status
# TYPE ceph_osd_up gauge
ceph_osd_up{cluster="ceph",osd="osd.0"} 1
ceph_osd_up{cluster="ceph",osd="osd.1"} 1
ceph_osd_up{cluster="ceph",osd="osd.10"} 1
ceph_osd_up{cluster="ceph",osd="osd.11"} 1

2.17 ceph_osd_used_bytes

Metric: used storage per OSD, in bytes
# HELP ceph_osd_used_bytes OSD Used Storage in Bytes
# TYPE ceph_osd_used_bytes gauge
ceph_osd_used_bytes{cluster="ceph",osd="osd.0"} 7.93253364e+11
ceph_osd_used_bytes{cluster="ceph",osd="osd.1"} 9.6240678e+11
ceph_osd_used_bytes{cluster="ceph",osd="osd.10"} 7.00698652e+11
ceph_osd_used_bytes{cluster="ceph",osd="osd.11"} 6.53029144e+11

2.18 ceph_osd_utilization

Metric: per-OSD utilization (a percentage)
# HELP ceph_osd_utilization OSD Utilization
# TYPE ceph_osd_utilization gauge
ceph_osd_utilization{cluster="ceph",osd="osd.0"} 6.794546
ceph_osd_utilization{cluster="ceph",osd="osd.1"} 8.217584
ceph_osd_utilization{cluster="ceph",osd="osd.10"} 6.001776
ceph_osd_utilization{cluster="ceph",osd="osd.11"} 5.593467
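
To spot the fullest OSDs at a glance on a dashboard, a topk query works well. A sketch:

topk(5, ceph_osd_utilization{cluster="ceph"})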

2.19 ceph_osd_variance

Metric: each OSD's utilization relative to the cluster average, e.g. for osd.0: 6.794546 / 6.287799 ≈ 1.080592
# HELP ceph_osd_variance OSD Variance
# TYPE ceph_osd_variance gauge
ceph_osd_variance{cluster="ceph",osd="osd.0"} 1.080592
ceph_osd_variance{cluster="ceph",osd="osd.1"} 1.306909
ceph_osd_variance{cluster="ceph",osd="osd.10"} 0.954512
ceph_osd_variance{cluster="ceph",osd="osd.11"} 0.889575

2.20 ceph_osds: total number of OSDs

Metric: the total number of OSDs in the cluster
# HELP ceph_osds Count of total OSDs in the cluster
# TYPE ceph_osds gauge
ceph_osds{cluster="ceph"} 18

2.21 ceph_osds_down: number of OSDs in the down state

Metric: the number of OSDs that are down
# HELP ceph_osds_down Count of OSDs that are in DOWN state
# TYPE ceph_osds_down gauge
ceph_osds_down{cluster="ceph"} 0

2.22 ceph_osds_in: number of OSDs in the in state

Metric: the number of OSDs that are in and available to serve requests
# HELP ceph_osds_in Count of OSDs that are in IN state and available to serve requests
# TYPE ceph_osds_in gauge
ceph_osds_in{cluster="ceph"} 18

2.23 ceph_osds_up: number of OSDs in the up state

Metric: the number of OSDs that are up
# HELP ceph_osds_up Count of OSDs that are in UP state
# TYPE ceph_osds_up gauge
ceph_osds_up{cluster="ceph"} 18
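
The counts in 2.20 through 2.23 combine into straightforward alert conditions. A sketch; the first fires on any down OSD, the second catches OSDs that are marked out:

ceph_osds_down{cluster="ceph"} > 0
ceph_osds{cluster="ceph"} - ceph_osds_in{cluster="ceph"} > 0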

3. Pool Metrics

3.1 ceph_pool_available_bytes

Metric: free space available to each Ceph pool, in bytes
# HELP ceph_pool_available_bytes Free space for this ceph pool
# TYPE ceph_pool_available_bytes gauge
ceph_pool_available_bytes{cluster="ceph",pool="glance"} 5.1816392370514e+13
ceph_pool_available_bytes{cluster="ceph",pool="nova"} 5.1816392370514e+13

3.2 ceph_pool_dirty_objects_total

Metric: total number of dirty objects in a cache-tier pool
# HELP ceph_pool_dirty_objects_total Total no. of dirty objects in a cache-tier pool
# TYPE ceph_pool_dirty_objects_total gauge
ceph_pool_dirty_objects_total{cluster="ceph",pool="glance"} 63206
ceph_pool_dirty_objects_total{cluster="ceph",pool="nova"} 484191

3.3 ceph_pool_objects_total

Metric: total number of objects allocated within the pool
# HELP ceph_pool_objects_total Total no. of objects allocated within the pool
# TYPE ceph_pool_objects_total gauge
ceph_pool_objects_total{cluster="ceph",pool="glance"} 63206
ceph_pool_objects_total{cluster="ceph",pool="nova"} 484191

3.4 ceph_pool_raw_used_bytes

Metric: raw capacity currently used by the pool, factoring in the replication size
# HELP ceph_pool_raw_used_bytes Raw capacity of the pool that is currently under use, this factors in the size
# TYPE ceph_pool_raw_used_bytes gauge
ceph_pool_raw_used_bytes{cluster="ceph",pool="glance"} 1.58812930048e+12
ceph_pool_raw_used_bytes{cluster="ceph",pool="nova"} 1.2028625289216e+13

3.5 ceph_pool_read_total

Metric: total read I/O calls for the pool
# HELP ceph_pool_read_total Total read i/o calls for the pool
# TYPE ceph_pool_read_total gauge
ceph_pool_read_total{cluster="ceph",pool="glance"} 1.0406736e+07
ceph_pool_read_total{cluster="ceph",pool="nova"} 2.067554553e+09

3.6 ceph_pool_used_bytes

Metric: capacity of the pool currently in use, in bytes
# HELP ceph_pool_used_bytes Capacity of the pool that is currently under use
# TYPE ceph_pool_used_bytes gauge
ceph_pool_used_bytes{cluster="ceph",pool="glance"} 5.29376430939e+11
ceph_pool_used_bytes{cluster="ceph",pool="nova"} 4.00954184e+12
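
A per-pool usage percentage can be derived from 3.1 and 3.6. A sketch; note this is the logical view and ignores replication (see 3.4 for raw usage):

ceph_pool_used_bytes / (ceph_pool_used_bytes + ceph_pool_available_bytes) * 100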

3.7 ceph_pool_write_bytes_total

Metric: cumulative bytes written to the pool
# HELP ceph_pool_write_bytes_total Total write throughput for the pool
# TYPE ceph_pool_write_bytes_total gauge
ceph_pool_write_bytes_total{cluster="ceph",pool="glance"} 7.42647232512e+11
ceph_pool_write_bytes_total{cluster="ceph",pool="nova"} 2.9168662132736e+13

3.8 ceph_pool_write_total

Metric: total write I/O calls for the pool
# HELP ceph_pool_write_total Total write i/o calls for the pool
# TYPE ceph_pool_write_total gauge
ceph_pool_write_total{cluster="ceph",pool="glance"} 535323
ceph_pool_write_total{cluster="ceph",pool="nova"} 2.986700876e+09
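
Although the TYPE line declares these as gauge, the read/write totals in 3.5, 3.7, and 3.8 are cumulative, so rate() still yields a useful per-second approximation. A sketch over a 5-minute window:

rate(ceph_pool_write_total{cluster="ceph"}[5m])
rate(ceph_pool_write_bytes_total{cluster="ceph"}[5m])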

4. Cluster Metrics

Metrics describing cluster-wide state and I/O.

4.1 ceph_cache_evict_io_bytes

Metric: rate of bytes being evicted from the cache pool, per second
# HELP ceph_cache_evict_io_bytes Rate of bytes being evicted from the cache pool per second
# TYPE ceph_cache_evict_io_bytes gauge
ceph_cache_evict_io_bytes{cluster="ceph"} 0

4.2 ceph_cache_flush_io_bytes

Metric: rate of bytes being flushed from the cache pool, per second
# HELP ceph_cache_flush_io_bytes Rate of bytes being flushed from the cache pool per second
# TYPE ceph_cache_flush_io_bytes gauge
ceph_cache_flush_io_bytes{cluster="ceph"} 0

4.3 ceph_cache_promote_io_ops

Metric: total cache promote operations, measured per second
# HELP ceph_cache_promote_io_ops Total cache promote operations measured per second
# TYPE ceph_cache_promote_io_ops gauge
ceph_cache_promote_io_ops{cluster="ceph"} 0

4.4 ceph_client_io_ops

Metric: total client operations on the cluster, per second
# HELP ceph_client_io_ops Total client ops on the cluster measured per second
# TYPE ceph_client_io_ops gauge
ceph_client_io_ops{cluster="ceph"} 385

4.5 ceph_client_io_read_bytes

Metric: rate of bytes being read by all clients, per second
# HELP ceph_client_io_read_bytes Rate of bytes being read by all clients per second
# TYPE ceph_client_io_read_bytes gauge
ceph_client_io_read_bytes{cluster="ceph"} 0

4.6 ceph_client_io_write_bytes

Metric: rate of bytes being written by all clients, per second
# HELP ceph_client_io_write_bytes Rate of bytes being written by all clients per second
# TYPE ceph_client_io_write_bytes gauge
ceph_client_io_write_bytes{cluster="ceph"} 1.325e+06
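
Read and write rates combine into a single client-throughput panel. A sketch:

ceph_client_io_read_bytes{cluster="ceph"} + ceph_client_io_write_bytes{cluster="ceph"}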

4.7 ceph_cluster_available_bytes

Metric: available space within the cluster, in bytes
# HELP ceph_cluster_available_bytes Available space within the cluster
# TYPE ceph_cluster_available_bytes gauge
ceph_cluster_available_bytes{cluster="ceph"} 2.01846296051712e+14

4.8 ceph_cluster_capacity_bytes

Metric: total capacity of the cluster, in bytes
# HELP ceph_cluster_capacity_bytes Total capacity of the cluster
# TYPE ceph_cluster_capacity_bytes gauge
ceph_cluster_capacity_bytes{cluster="ceph"} 2.153895581696e+14

4.9 ceph_cluster_used_bytes

Metric: capacity of the cluster currently in use, in bytes
# HELP ceph_cluster_used_bytes Capacity of the cluster currently in use
# TYPE ceph_cluster_used_bytes gauge
ceph_cluster_used_bytes{cluster="ceph"} 1.3543262117888e+13
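
A cluster-wide usage percentage follows from 4.8 and 4.9. A sketch; the 80% alert threshold is illustrative:

ceph_cluster_used_bytes / ceph_cluster_capacity_bytes * 100
ceph_cluster_used_bytes / ceph_cluster_capacity_bytes * 100 > 80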

4.10 ceph_health_status

Metric: health status of the cluster; 0 = OK, 1 = warning, 2 = error
# HELP ceph_health_status Health status of Cluster, can vary only between 3 states (err:2, warn:1, ok:0)
# TYPE ceph_health_status gauge
ceph_health_status{cluster="ceph"} 0
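
This is the most common alert source for a Ceph cluster. A sketch; one might warn on any non-OK state and page only on HEALTH_ERR:

ceph_health_status{cluster="ceph"} > 0
ceph_health_status{cluster="ceph"} == 2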

4.11 ceph_degraded_objects

Metric: number of degraded objects across all PGs, including replicas
# HELP ceph_degraded_objects No. of degraded objects across all PGs, includes replicas
# TYPE ceph_degraded_objects gauge
ceph_degraded_objects{cluster="ceph"} 0

4.12 ceph_misplaced_objects

Metric: number of misplaced objects across all PGs, including replicas
# HELP ceph_misplaced_objects No. of misplaced objects across all PGs, includes replicas
# TYPE ceph_misplaced_objects gauge
ceph_misplaced_objects{cluster="ceph"} 0

4.13 ceph_recovery_io_bytes

Metric: rate of bytes being recovered in the cluster, per second
# HELP ceph_recovery_io_bytes Rate of bytes being recovered in cluster per second
# TYPE ceph_recovery_io_bytes gauge
ceph_recovery_io_bytes{cluster="ceph"} 0

4.14 ceph_recovery_io_keys

Metric: rate of keys being recovered in the cluster, per second
# HELP ceph_recovery_io_keys Rate of keys being recovered in cluster per second
# TYPE ceph_recovery_io_keys gauge
ceph_recovery_io_keys{cluster="ceph"} 0

4.15 ceph_recovery_io_objects

Metric: rate of objects being recovered in the cluster, per second
# HELP ceph_recovery_io_objects Rate of objects being recovered in cluster per second
# TYPE ceph_recovery_io_objects gauge
ceph_recovery_io_objects{cluster="ceph"} 0

4.16 ceph_slow_requests: number of slow requests

Metric: the number of slow requests in the cluster
# HELP ceph_slow_requests No. of slow requests
# TYPE ceph_slow_requests gauge
ceph_slow_requests{cluster="ceph"} 0
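
Slow requests usually warrant attention as soon as they appear. A sketch:

ceph_slow_requests{cluster="ceph"} > 0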

5. Go Runtime Metrics

# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.000231224
go_gc_duration_seconds{quantile="0.25"} 0.000293771
go_gc_duration_seconds{quantile="0.5"} 0.000357795
go_gc_duration_seconds{quantile="0.75"} 0.000405695
go_gc_duration_seconds{quantile="1"} 0.000967987
go_gc_duration_seconds_sum 0.005362383
go_gc_duration_seconds_count 14
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 9
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 2.431744e+06
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.7990368e+07
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.44724e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 272275
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 1.4663055250719219e-06
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 499712
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 2.431744e+06
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.400256e+06
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 3.334144e+06
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 12173
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 0
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 5.7344e+06
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6423857836554787e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 132
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 284448
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 115200
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 131072
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 43776
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 65536
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 4.402192e+06
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 2.679728e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 1.605632e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 1.605632e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.216332e+07
# HELP go_threads Number of OS threads created
# TYPE go_threads gauge
go_threads 38

6. Process Metrics

# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 0.44
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1024
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 10
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.63168e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.64238461776e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 2.3820288e+09