MapReduce output explained (counting rows in an HBase table)

[boco@hadoop01 ~]$ hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'boco:DW_F_XDR_HTTP_FGCSVSH_USR_H'

19/09/17 18:00:57 INFO mapreduce.Job: Job job_1566288376023_2726039 completed successfully
19/09/17 18:00:57 INFO mapreduce.Job: Counters: 31
    File System Counters                              //1====== read/write statistics for the job's interaction with each file system
        FILE: Number of bytes read=0                // bytes the reduce side read from the local file system (map output is stored on local disk)
        FILE: Number of bytes written=34800002        // total bytes the map tasks wrote to local disk (the reduce-side merge also writes local files)
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=54302            // bytes the map tasks read from HDFS
        HDFS: Number of bytes written=0            // bytes of final results written to HDFS
        HDFS: Number of read operations=180
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters                     //2====== per-job task statistics, i.e. map tasks and reduce tasks
        Launched map tasks=180                    // number of map tasks launched
        Other local map tasks=180
        Total time spent by all maps in occupied slots (ms)=32673077
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=32673077
        Total vcore-milliseconds taken by all map tasks=32673077
        Total megabyte-milliseconds taken by all map tasks=267657846784
    Map-Reduce Framework              //3====== MapReduce framework counters
        Map input records=1404276286            // total input records read by the map tasks (here, rows scanned from the HBase table)
        Map output records=0                          // records emitted by the map tasks
        Input split bytes=54302
        Spilled Records=0                    // spills happen on both the map and reduce side; this is the total number of records spilled from memory to disk
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=47774
        CPU time spent (ms)=9560290
        Physical memory (bytes) snapshot=126825742336
        Virtual memory (bytes) snapshot=1437095444480
        Total committed heap usage (bytes)=344227053568
    org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
        ROWS=1404276286                    // total row count
    File Input Format Counters         //4====== input-format counters
        Bytes Read=0                                // total bytes of all map input values
    File Output Format Counters     //5====== output-format counters
        Bytes Written=0                              // total bytes written by the job's output format
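The number you usually want is the RowCounter-specific ROWS counter, but the generic counters can also be combined into useful derived metrics (average map runtime, memory per container). A minimal Python sketch, assuming the log format shown above; the `parse_counter` helper is hypothetical, not part of Hadoop or HBase:

```python
import re

def parse_counter(log_text, name):
    """Pull a numeric counter value (e.g. ROWS=1404276286) out of a job log."""
    m = re.search(r"^\s*%s=(\d+)" % re.escape(name), log_text, re.MULTILINE)
    return int(m.group(1)) if m else None

# A few lines copied from the job output above.
log = """\
        Launched map tasks=180
        Total time spent by all map tasks (ms)=32673077
        Total megabyte-milliseconds taken by all map tasks=267657846784
        ROWS=1404276286
"""

rows   = parse_counter(log, "ROWS")
maps   = parse_counter(log, "Launched map tasks")
map_ms = parse_counter(log, "Total time spent by all map tasks (ms)")
mb_ms  = parse_counter(log, "Total megabyte-milliseconds taken by all map tasks")

print(rows)            # total row count: 1404276286
print(map_ms // maps)  # average map task runtime in ms: 181517 (~3 min)
print(mb_ms // map_ms) # memory per map container in MB: 8192
```

The last line illustrates why megabyte-milliseconds is recorded: dividing it by the total map milliseconds recovers the container memory allocation (here exactly 8192 MB, i.e. 8 GB per map task).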

 
