Reading logs with Spark and computing the average time per service

Original post · 2015-07-06 17:08:55

Read the log file, in which each service timing line is marked with "#*#", and compute the average time each service takes.


import java.io.{File, PrintWriter}
import org.apache.spark.{SparkContext, SparkConf}

object SimpleApp {

  def main(args: Array[String]) {
    System.setProperty("hadoop.home.dir", "D://spark-1.3.1-bin-hadoop-2.3.0-cdh5.0.2")

    val logFile = "d://Debug.2015-06-12_1556.log" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    // Keep only the lines that carry a service timing, marked by "#*#".
    val result = logData.filter(line => line.contains("#*#"))

    println("******** statistics start **********")

    // Transform into a key-value RDD: (serviceName, elapsedSeconds).
    // split takes a regex, so the "*" must be escaped; the time field is in
    // milliseconds and the integer division truncates it to whole seconds.
    val jobNameAndTime = result.map { line =>
      val fields = line.split("#\\*#").last.trim.split(" ")
      (fields.head, fields.last.toInt / 1000)
    }

    // Number of occurrences of each service.
    val jobNameTimes = jobNameAndTime.mapValues(_ => 1).reduceByKey(_ + _)

    // Average time per service: sum first, then divide by the count.
    // reduceByKey((x, y) => (x + y) / 2) would be wrong here, because pairwise
    // averaging only yields the true mean for exactly two values.
    val jobAvgTime = jobNameAndTime.reduceByKey(_ + _)
      .join(jobNameTimes)
      .mapValues { case (total, n) => total / n }

    // Join counts with averages, sorted by average time.
    val jobTimesAndAvgTime = jobNameTimes.join(jobAvgTime).sortBy(x => x._2._2)

    println("********************************************************************")

    // Collect to the driver before printing; println inside map only appears
    // to work because the master is local.
    jobTimesAndAvgTime.collect.foreach(x =>
      println(s"jobName: ${x._1} | times: ${x._2._1} | avgTime: ${x._2._2}s"))

    val writer = new PrintWriter(new File("d://test.txt"))
    writer.write(jobTimesAndAvgTime
      .map(x => s"jobName: ${x._1} | times: ${x._2._1} | avgTime: ${x._2._2}s\n")
      .collect.mkString)
    writer.close()

    println(s"Counted ${result.count} matching records in total")

    println("********************************************************************")

    println("******** statistics end **********")

    sc.stop()
  }

}
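As a side note, the count-plus-sum approach above takes a reduceByKey and a join; the same per-key average can be computed in a single pass with aggregateByKey, which folds a (sum, count) pair per key. A minimal sketch, assuming the same jobNameAndTime RDD of (serviceName, seconds) pairs built above:

    // One-pass per-key average: accumulate (sum, count), then divide.
    val jobAvgTimeOnePass = jobNameAndTime
      .aggregateByKey((0, 0))(
        (acc, v) => (acc._1 + v, acc._2 + 1),   // fold one value into (sum, count)
        (a, b)   => (a._1 + b._1, a._2 + b._2)  // merge partial (sum, count) pairs
      )
      .mapValues { case (sum, count) => sum.toDouble / count }

Using sum.toDouble also avoids the integer division above, so the averages come back as fractional seconds.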

------------------------------

Each service line starts with "#*#", followed by the service name and the elapsed time in milliseconds (a standalone check of this parsing follows the snippet).
Log snippet:
2015-06-11 00:05:32.23423742063 [Worker-88] DEBUG c.z.b.v.a.u.c.d.ConnectionFactoryPrefs$$anon$1 - Spark useDatabase =use ran
2015-06-11 00:05:32.82023742649 [worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 109
2015-06-11 00:05:35.18423745013 [Worker-88] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 110
2015-06-11 00:05:35.18423745013 [worker-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 102
2015-06-11 00:05:35.18523745014 [worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 778
2015-06-11 00:05:35.18523745014 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 96
2015-06-11 00:05:35.18523745014 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 42
2015-06-11 00:05:35.18523745014 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - writing data length: 83
2015-06-11 00:05:35.18623745015 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: reading data length: 40
2015-06-11 00:05:35.18623745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloWorldService 26993
2015-06-11 00:05:35.18623745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.d.ConnectionFactoryPrefs$$anon$1 - database config: DatabaseInfo(jdbc:hive2://192.168.2.110:11000,mr,mr,org.apache.hive.jdbc.HiveDriver,ran)
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@c0770c
2015-06-11 00:05:35.18723745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloWorldService 36993 
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.t.t.TSaslClientTransport - Sending mechanism name PLAIN and initial response of length 6
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: Writing message with status START and payload length 5
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: Writing message with status COMPLETE and payload length 6
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: Start message handled
2015-06-11 00:05:35.18723745016 [18-worker-1] DEBUG o.a.thrift.transport.TSaslTransport - CLIENT: Main negotiation loop complete
2015-06-11 00:05:35.18723745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloSUMService 336993 
2015-06-11 00:05:35.18723745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloSUMService 236993 
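
For a quick sanity check of the extraction logic without starting Spark, the same split can be run on plain Scala collections. A minimal sketch, using two of the HelloWorldService lines from the snippet above as input:

object ParseCheck extends App {
  val sample = Seq(
    "2015-06-11 00:05:35.18623745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloWorldService 26993",
    "2015-06-11 00:05:35.18723745015 [18-worker-1] DEBUG c.z.b.v.a.u.c.j.Quarter1thCleanJob - #*#HelloWorldService 36993 "
  )
  // Same filter and split as the Spark job; trim handles the trailing space
  // seen on some log lines.
  val parsed = sample.filter(_.contains("#*#")).map { line =>
    val fields = line.split("#\\*#").last.trim.split(" ")
    (fields.head, fields.last.toInt / 1000)
  }
  println(parsed) // List((HelloWorldService,26), (HelloWorldService,36))
}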



