The container disk metrics collected by cAdvisor carry a device label, and the disk-related metrics collected by node-exporter carry a device label as well.
(screenshot: metrics collected by node-exporter)
(screenshot: metrics collected by cAdvisor)
This device label actually comes from the /proc/diskstats file.
The file exposes I/O statistics for whole disks and their partitions.
sda is the statistics for the whole disk, sda1 for the first partition, and sda2 for the second partition.
dm stands for device mapper.
https://superuser.com/questions/131519/what-is-this-dm-0-device
dm-0 is the device at /dev/dm-0, used by LVM (Logical Volume Manager); the friendly names under /dev/mapper are symlinks to it.
LVM attaches each logical volume (LV) to a /dev/dm-x device file. That device file is not a real disk, so it has no partition table, and a dm device cannot be partitioned.
[root@m1 vm]# cat /proc/diskstats
8 0 sda 7519 6 1167778 8114 7844142 23257 89758508 5106444 0 2699920 5104275
8 1 sda1 1105 0 48979 418 16 0 4177 11 0 247 428
8 2 sda2 6384 6 1116703 7689 7844126 23257 89754331 5106433 0 2699741 5103838
8 16 sdb 4306 1 1417069 25749 137666 7269 3035195 191191 0 52809 216484
253 0 dm-0 6162 0 1109783 7625 7867383 0 89754331 5154505 0 2726838 5167957
253 1 dm-1 92 0 4192 18 0 0 0 0 0 12 18
253 2 dm-2 4205 0 1416253 25709 144935 0 3035195 199207 0 52747 225892
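As a quick illustration (not part of the original output), the first eleven statistics columns of a /proc/diskstats line can be pulled out with a few lines of Python. The field names here follow the iostats.txt documentation linked later in these notes:

```python
# Minimal sketch: parse /proc/diskstats lines into named fields.
# Field names follow Documentation/iostats.txt (first 11 stat columns).
FIELDS = [
    "reads_completed", "reads_merged", "sectors_read", "ms_reading",
    "writes_completed", "writes_merged", "sectors_written", "ms_writing",
    "ios_in_progress", "ms_doing_io", "weighted_ms_doing_io",
]

def parse_diskstats(text):
    stats = {}
    for line in text.strip().splitlines():
        parts = line.split()
        major, minor, device = int(parts[0]), int(parts[1]), parts[2]
        values = dict(zip(FIELDS, map(int, parts[3:14])))
        values["major"], values["minor"] = major, minor
        stats[device] = values
    return stats

# Two lines from the sample output above:
sample = """\
8 0 sda 7519 6 1167778 8114 7844142 23257 89758508 5106444 0 2699920 5104275
253 0 dm-0 6162 0 1109783 7625 7867383 0 89754331 5154505 0 2726838 5167957
"""
stats = parse_diskstats(sample)
print(stats["sda"]["reads_completed"])   # 7519
print(stats["dm-0"]["sectors_written"])  # 89754331
```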
# Show the LVM mapping of dm devices: the major and minor device numbers are
# visible, but not the underlying physical disks (sda/sdb).
dmsetup ls
dmsetup info
# Show the dm symlink targets:
ls -l /dev/mapper
[root@m1 block]# ls -l /dev/mapper
total 0
lrwxrwxrwx 1 root root 7 Apr 15 11:12 centos-root -> ../dm-0
lrwxrwxrwx 1 root root 7 Apr 15 11:12 centos-swap -> ../dm-1
lrwxrwxrwx 1 root root 7 Apr 15 11:12 cloudtogo-docker -> ../dm-2
crw------- 1 root root 10, 236 Apr 15 11:12 control
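The same mapping from LVM names back to dm-N devices can be recovered programmatically by reading those symlinks. The helper below is a hypothetical sketch, demonstrated against a temporary directory so it does not need a live LVM host:

```python
import os
import tempfile

def dm_name_map(mapper_dir="/dev/mapper"):
    """Map dm-N device names to their LVM names via /dev/mapper symlinks."""
    mapping = {}
    for name in os.listdir(mapper_dir):
        path = os.path.join(mapper_dir, name)
        if os.path.islink(path):
            # Links look like: centos-root -> ../dm-0
            target = os.path.basename(os.readlink(path))
            mapping[target] = name
    return mapping

# Demonstration against a fake /dev/mapper layout:
with tempfile.TemporaryDirectory() as d:
    os.symlink("../dm-0", os.path.join(d, "centos-root"))
    os.symlink("../dm-1", os.path.join(d, "centos-swap"))
    print(dm_name_map(d))
```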
Official documentation:
https://www.mjmwired.net/kernel/Documentation/iostats.txt
Field 1 -- # of reads completed
    This is the total number of reads completed successfully.

Field 2 -- # of reads merged, field 6 -- # of writes merged
    Reads and writes which are adjacent to each other may be merged for
    efficiency. Thus two 4K reads may become one 8K read before it is
    ultimately handed to the disk, and so it will be counted (and queued)
    as only one I/O. This field lets you know how often this was done.

Field 3 -- # of sectors read
    This is the total number of sectors read successfully.

Field 4 -- # of milliseconds spent reading
    This is the total number of milliseconds spent by all reads (as
    measured from __make_request() to end_that_request_last()).

Field 5 -- # of writes completed
    This is the total number of writes completed successfully.

Field 6 -- # of writes merged
    See the description of field 2.

Field 7 -- # of sectors written
    This is the total number of sectors written successfully.

Field 8 -- # of milliseconds spent writing
    This is the total number of milliseconds spent by all writes (as
    measured from __make_request() to end_that_request_last()).

Field 9 -- # of I/Os currently in progress
    The only field that should go to zero. Incremented as requests are
    given to appropriate struct request_queue and decremented as they finish.

Field 10 -- # of milliseconds spent doing I/Os
    This field increases so long as field 9 is nonzero.

Field 11 -- weighted # of milliseconds spent doing I/Os
    This field is incremented at each I/O start, I/O completion, I/O
    merge, or read of these stats by the number of I/Os in progress
    (field 9) times the number of milliseconds spent doing I/O since the
    last update of this field. This can provide an easy measure of both
    I/O completion time and the backlog that may be accumulating.
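These counters are cumulative since boot, so they only become meaningful as rates: tools like iostat (and Prometheus rate() queries over node-exporter metrics) take two snapshots and divide the deltas by the interval. A rough sketch of that arithmetic, using invented snapshot values rather than real measurements:

```python
# Sketch: derive iostat-style metrics from two diskstats snapshots.
# Snapshot values below are invented for illustration.

def disk_rates(prev, curr, interval_s):
    d = {k: curr[k] - prev[k] for k in prev}
    reads, writes = d["reads_completed"], d["writes_completed"]
    return {
        # Field 10 counts ms the device was busy -> utilization.
        "util_pct": 100.0 * d["ms_doing_io"] / (interval_s * 1000.0),
        # Fields 4/8 are total ms spent -> average latency per I/O.
        "avg_read_ms": d["ms_reading"] / reads if reads else 0.0,
        "avg_write_ms": d["ms_writing"] / writes if writes else 0.0,
        "iops": (reads + writes) / interval_s,
    }

prev = {"reads_completed": 1000, "writes_completed": 4000,
        "ms_reading": 2000, "ms_writing": 9000, "ms_doing_io": 50000}
curr = {"reads_completed": 1100, "writes_completed": 4400,
        "ms_reading": 2500, "ms_writing": 10200, "ms_doing_io": 55000}
print(disk_rates(prev, curr, interval_s=10))
# -> 50% busy, 5.0 ms per read, 3.0 ms per write, 50 IOPS
```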