[boco@hadoop01 ~]$ hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'boco:DW_F_XDR_HTTP_FGCSVSH_USR_H'
19/09/17 18:00:57 INFO mapreduce.Job: Job job_1566288376023_2726039 completed successfully
19/09/17 18:00:57 INFO mapreduce.Job: Counters: 31
File System Counters //1======read/write statistics for the job's interaction with the file systems
FILE: Number of bytes read=0 //bytes the reduce side reads back from the local file system (map output is stored on local disk)
FILE: Number of bytes written=34800002 //total bytes the map tasks wrote to local disk (the reduce-side merge also writes local files)
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=54302 //bytes the map tasks read from HDFS; here only the input-split metadata (matches Input split bytes below), since the table data itself is fetched from HBase region servers over RPC
HDFS: Number of bytes written=0 //bytes of final output written to HDFS (RowCounter produces no output files)
HDFS: Number of read operations=180
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters //2======per-job task statistics, i.e. map tasks and reduce tasks
Launched map tasks=180 //number of map tasks launched (by default, one per HBase region)
Other local map tasks=180
Total time spent by all maps in occupied slots (ms)=32673077
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=32673077
Total vcore-milliseconds taken by all map tasks=32673077
Total megabyte-milliseconds taken by all map tasks=267657846784
Map-Reduce Framework //3======MapReduce framework counters
Map input records=1404276286 //total input records consumed by all map tasks (here, rows scanned from the HBase table)
Map output records=0 //records emitted by the maps (RowCounter only increments a counter, so nothing is emitted)
Input split bytes=54302
Spilled Records=0 //records spilled from memory to disk; spills can occur on both the map and reduce side
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=47774
CPU time spent (ms)=9560290
Physical memory (bytes) snapshot=126825742336
Virtual memory (bytes) snapshot=1437095444480
Total committed heap usage (bytes)=344227053568
org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
ROWS=1404276286 //total number of rows in the table
File Input Format Counters //4======input-format counters
Bytes Read=0 //bytes read by the input format (0 here: TableInputFormat reads through the HBase client, not from file splits)
File Output Format Counters //5======output-format counters
Bytes Written=0 //total bytes written by the job's output format
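The counters above arrive as plain `Name=value` lines in the job log, so pulling out a specific value (e.g. `ROWS`) for scripting is straightforward. Below is a minimal sketch that extracts counter name/value pairs with a regex; the `parse_counters` helper and the sample log snippet are illustrative, not part of any Hadoop API:

```python
import re

def parse_counters(log_text):
    """Parse 'Name=value' counter lines from a MapReduce job log into a dict."""
    counters = {}
    # A counter line is anything=digits, possibly indented; take the first '='.
    for match in re.finditer(r'^\s*(.+?)=(\d+)\s*$', log_text, re.MULTILINE):
        counters[match.group(1).strip()] = int(match.group(2))
    return counters

# Sample lines copied from the RowCounter output above.
log = """\
        HDFS: Number of bytes read=54302
        Map input records=1404276286
                ROWS=1404276286
"""
counters = parse_counters(log)
print(counters["ROWS"])  # the table's row count as reported by RowCounter
```

In practice you would feed this the captured stderr/stdout of the `hbase org.apache.hadoop.hbase.mapreduce.RowCounter` run; within a Java driver the same values are available directly via `job.getCounters()` instead of log scraping.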