Statistics on user behavior logs: an archive of the Java MapReduce and Scala Spark code...

    My original intent was just to archive a Spark word-count program, but word count alone shows rather little, and writing the Spark version ran into all kinds of pitfalls anyway, so I simply took a user behavior log statistics job I had previously written in Java MapReduce and roughly re-implemented its logic in Scala on Spark (not completely identical; some implementation details differ), as proof that I can put together a basic Spark program. The code is only meant to illustrate the map, reduce, and file read/write flow; because the referenced packages are missing, the code on its own cannot be run directly. Both programs were written as Maven projects, so pay attention to the dependencies in the pom; in particular, the Spark version there should match the Spark version configured on the cluster.

    Whether writing MapReduce or Spark, my impression is that the input file is effectively already split into initial <K,V> pairs, where K may be something like a void placeholder (with Hadoop's default TextInputFormat it is actually the line's byte offset) and each value corresponds to one line of the input file (the rule can be customized).
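
    As a concrete illustration of that initial <K,V> view, here is a minimal Scala/Spark sketch (the object name and the args(0) input path are my own placeholders, not part of the archived code) that reads the same text input both with the offset key and without it:

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object InitialKVSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("InitialKVSketch"))

    // Same input format the MapReduce job uses: the key is the line's byte offset
    // (the "void-like" key mentioned above), the value is the line itself.
    val withKeys = sc.newAPIHadoopFile[LongWritable, Text, TextInputFormat](args(0))
      .map { case (offset, line) => (offset.get(), line.toString) } // copy out of the Writables
    withKeys.take(3).foreach { case (offset, line) => println(s"$offset -> $line") }

    // sc.textFile is the same thing with the offset key already dropped,
    // leaving one String per input line.
    val linesOnly = sc.textFile(args(0))
    linesOnly.take(3).foreach(println)
  }
}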

    The overall logic of the Java MapReduce job is fairly simple; for Spark, note the difference between flatMap and map. flatMap applies the mapping to each line and then flattens the result. Take word count as an example, with this input file:

123 456 123

123 456

    Calling map directly on this file gives

Array[Array[(K,V)]] = {

Array[(K,V)] = { (123,1), (456,1), (123, 1) }

Array[(K,V)] = { (123,1), (456,1) }

}

    Applying flatMap to it instead gives

Array[(K,V)] = { (123,1), (456,1), (123, 1), (123,1), (456,1) }

    which is convenient for the subsequent shuffle and reduce steps.
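
    A minimal word-count sketch of that difference, using the two-line input above as in-memory data (the object name and variable names are mine):

import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("WordCountSketch"))
    val lines = sc.parallelize(Seq("123 456 123", "123 456"))

    // map keeps one element per line, so each element is itself an array of pairs:
    // Array((123,1), (456,1), (123,1)) and Array((123,1), (456,1))
    val nested = lines.map(line => line.split(" ").map(word => (word, 1)))

    // flatMap flattens those per-line arrays into a single collection of (word, 1)
    // pairs, which is exactly what reduceByKey expects.
    val flat = lines.flatMap(line => line.split(" ").map(word => (word, 1)))
    val counts = flat.reduceByKey(_ + _)
    counts.collect().foreach(println) // (123,3), (456,2)
  }
}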

    As for Spark's reduce step, besides the reduceByKey used below, you can also customize how the values for a key are combined: you pass in a function that takes two values and merges them into one. I do not yet know whether it can be written as a foreach-style accumulation over all of a key's values, the way the Java MapReduce reducer sums them.
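
    A sketch of both styles, with toy (String, Long) pairs standing in for the real key/value strings (the pairs data is mine): reduceByKey merges two values at a time, while groupByKey().mapValues comes closer to the MapReduce reducer that iterates over all of a key's values, at the cost of shuffling every value.

import org.apache.spark.{SparkConf, SparkContext}

object ReduceStylesSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ReduceStylesSketch"))
    val pairs = sc.parallelize(Seq(("net\twifi", 1L), ("net\twifi", 1L), ("net\t3g", 1L)))

    // Style 1: reduceByKey with a (V, V) => V function that merges two values into one.
    val summed = pairs.reduceByKey((a, b) => a + b)

    // Style 2: group all values of a key and iterate over them, MapReduce-reducer style.
    val iterated = pairs.groupByKey().mapValues(values => values.sum)

    summed.collect().foreach(println)
    iterated.collect().foreach(println)
  }
}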

    Note that in Spark's map you should not casually return null; it can make the job fail. Return an empty object of the expected type instead.
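
    A small sketch of the safer pattern (the toy lines and the contains("act=") check are my own stand-ins for real parsing): either return an empty pair of the expected type, or better, use flatMap with Option so the bad record is dropped altogether.

import org.apache.spark.{SparkConf, SparkContext}

object NoNullSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("NoNullSketch"))
    val lines = sc.parallelize(Seq("act=pv net=wifi", "garbage-line"))

    // Option A: map, returning an empty pair of the expected type instead of null.
    val withPlaceholder = lines.map { line =>
      if (line.contains("act=")) (line, "1") else ("", "")
    }

    // Option B: flatMap with Option, so the invalid line simply disappears
    // instead of producing a placeholder record.
    val dropped = lines.flatMap { line =>
      if (line.contains("act=")) Some((line, "1")) else None
    }

    println(withPlaceholder.count()) // 2: the placeholder pair is still counted
    println(dropped.count())         // 1: the bad line was filtered out
  }
}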

    Overall, the Spark code is shorter than the MapReduce version and easier to extend (for example, it is very convenient to chain another map/reduce stage right after one finishes). Of course this only counts as a first Spark program; there is still a lot left to learn.
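
    A minimal sketch of that chaining, with toy data in the same key/value shape as the job below (the merge helper and the choice to re-aggregate by net only are mine):

import org.apache.spark.{SparkConf, SparkContext}

object ChainedStagesSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ChainedStagesSketch"))

    // key = "net\t<net>\tgbCode\t<gbCode>", value = "<expo>\t<pv>\t<tm>"
    val pairs = sc.parallelize(Seq(
      ("net\twifi\tgbCode\t110000", "1\t0\t0"),
      ("net\twifi\tgbCode\t120000", "0\t1\t0"),
      ("net\t3g\tgbCode\t110000", "1\t1\t0")))

    // Add two tab-separated count strings field by field.
    def merge(a: String, b: String): String =
      a.split("\t").zip(b.split("\t")).map { case (x, y) => (x.toLong + y.toLong).toString }.mkString("\t")

    // First stage: totals per (net, gbCode).
    val perNetGbCode = pairs.reduceByKey(merge)

    // Second stage, chained straight on: re-key by net only and aggregate again.
    val perNet = perNetGbCode
      .map { case (key, value) => (key.split("\t")(1), value) }
      .reduceByKey(merge)

    perNet.collect().foreach(println)
  }
}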


Java mapreduce:

package com.news.rec.monitor;

import com.newsRec.model.UserActionLog;
import com.sohu.newsRec.parser.UserLogParser;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import java.io.IOException;

/**
 * Created by jacobzhou on 2016/9/1.
 */
public class UserActiveCount extends Configured implements Tool {
    private static String DELIMA = "\t";
    public static class MRMapper extends Mapper<Object, Text, Text, Text> {
        private UserLogParser userActorParser = new UserLogParser();
        // Pull the numeric readTime field out of the raw log line.
        private long getReadTime(String line){
            String readTime = "";
            int index = line.indexOf("readTime");
            if (index < 0)
                return 0;
            index += 9; // skip "readTime" plus its separator character
            while (index < line.length() && line.charAt(index) >= '0' && line.charAt(index) <= '9'){
                readTime += line.charAt(index);
                index++;
            }
            if (!readTime.equals(""))
                return Long.parseLong(readTime);
            else
                return 0;
        }
        // Called once per input line; the key is the line's byte offset and the value is the line text.
        protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            UserActionLog userLog = userActorParser.parseKV(line);
            String act = userLog.getAct();
            long gbCode = userLog.getgbCode();
            long pvNum = 0;
            long expoNum = 0;
            long tmNum = 0;
            long readTime = getReadTime(line);
            if (readTime<4 || readTime>3000)
                readTime = 0;
            if (act.equals("expo")) expoNum = 1;
            else if (act.equals("pv")) pvNum = 1;
            else if (act.equals("tm")){
                tmNum = 1;
                if (readTime == 0)
                    return;
            }
            String net = userLog.getNet();
            if (net==null || net.trim().equals("")){
                net = "blank";
            }
            String wKey = "net" + DELIMA + net + DELIMA + "gbCode" + DELIMA + gbCode;
            String wValue = expoNum + DELIMA + pvNum + DELIMA + tmNum + DELIMA + readTime;
            context.write(new Text(wKey), new Text(wValue));
        }
        protected void cleanup(Context context) throws IOException, InterruptedException {}
    }
    public static class MRReducer extends Reducer<Text,Text,Text,Text> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            String sKey[] = key.toString().split(DELIMA);
            long expoNum, pvNum, tmNum, readTime;
            String result;
            expoNum=pvNum=tmNum=readTime=0;
            for (Text val : values) {
                String data[] = val.toString().split(DELIMA);
                expoNum += Long.parseLong(data[0]);
                pvNum += Long.parseLong(data[1]);
                tmNum += Long.parseLong(data[2]);
                readTime += Long.parseLong(data[3]);
            }
            result = expoNum + DELIMA + pvNum + DELIMA + tmNum + DELIMA + readTime;
            context.write(key, new Text(result));
        }
    }
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        conf.set("mapreduce.job.queuename", "datacenter");
        conf.set("mapred.max.map.failures.percent", "5");
        int reduceTasksMax = 10;
        Job job = Job.getInstance(conf);
        job.setJobName("userActiveStatistic job");
        job.setNumReduceTasks(reduceTasksMax);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(MRMapper.class);
        job.setReducerClass(MRReducer.class);
        job.setJarByClass(UserActiveCount.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job,new Path(args[0]));
        TextOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }
    public static void main(String[] args) throws Exception {
        try{
            System.out.println("start run job!");
            int ret = ToolRunner.run(new UserActiveCount(), args);
            System.exit(ret);
        }catch (Exception e){
            e.printStackTrace();
        }
    }
}

Scala Spark, map: turn each line into a single (key, value) pair

package zzy

import com.newsRec.parser.UserLogParser
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by jacobzhou on 2016/10/11.
  */
object newsMonitor {
  private val DELIMA: String ="\t"
  private val userActorParser = new UserLogParser
  var num = 0 // debug print counter; on a cluster each executor works with its own copy of this var
  def mapData(line : String): (String,String) ={
    if (num < 100) {
      println(line)
      num = num + 1
    }
    val userLog = UserLogParser.parseKV(line)
    val act: String = userLog.getAct
    val gbCode: Long = userLog.getgbCode
    var pvNum: Long = 0
    var expoNum: Long = 0
    var tmNum: Long = 0
    if (act == "expo") expoNum = 1
    else if (act == "pv") pvNum = 1
    else if (act == "tm") tmNum = 1
    var net: String = userLog.getNet
    if (net == null || net.trim == "") net = "blank"
    val wKey: String = "net" + DELIMA + net + DELIMA + "gbCode" + DELIMA + gbCode
    val wValue: String = expoNum + DELIMA + pvNum + DELIMA + tmNum
    (wKey , wValue)
  }
  def reduceData(a: String, b : String): String = {
    var expoNum: Long = 0L
    var pvNum: Long = 0L
    var tmNum: Long = 0L
    val dataA: Array[String] = a.split(DELIMA)
    val dataB: Array[String] = b.split(DELIMA)
    expoNum = dataA(0).toLong + dataB(0).toLong
    pvNum = dataA(1).toLong + dataB(1).toLong
    tmNum = dataA(2).toLong + dataB(2).toLong
    return expoNum + DELIMA + pvNum + DELIMA + tmNum
  }
  def main(args: Array[String]): Unit ={
    println("Running")
    val conf = new SparkConf()
    conf.setAppName("SparkTest")
    val input = args(0)
    val output = args(1)
    val sc = new SparkContext(conf)
    val inData = sc.textFile(input)
    val tmp = inData.map(line => mapData(line)).reduceByKey((x, y) => reduceData(x, y)) //.collect().foreach(println)
    tmp.saveAsTextFile(output)
  }
}

Scala Spark, flatMap: more flexible, e.g. when one line has to be split into multiple (key, value) pairs you first build a List[(key, value)] and then expand it with flatMap (a sketch of that case follows the code below)

package zzy

import com.newsRec.parser.UserLogParser
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by jacobzhou on 2016/9/18.
  */
object newsMonitor {
  private val DELIMA: String ="\t"
  private val userActorParser = new UserLogParser
  var num = 0
  def mapData(line : String): Map[String,String] ={
    if (num < 100) {
      println(line)
      num = num + 1
    }
    val userLog = UserLogParser.parseKV(line)
    val act: String = userLog.getAct
    val gbCode: Long = userLog.getgbCode
    var pvNum: Long = 0
    var expoNum: Long = 0
    var tmNum: Long = 0
    if (act == "expo") expoNum = 1
    else if (act == "pv") pvNum = 1
    else if (act == "tm") tmNum = 1
    var net: String = userLog.getNet
    if (net == null || net.trim == "") net = "blank"
    val wKey: String = "net" + DELIMA + net + DELIMA + "gbCode" + DELIMA + gbCode
    val wValue: String = expoNum + DELIMA + pvNum + DELIMA + tmNum
    Map(wKey -> wValue)
  }
  def reduceData(a: String, b : String): String = {
    var expoNum: Long = 0L
    var pvNum: Long = 0L
    var tmNum: Long = 0L
    val dataA: Array[String] = a.split(DELIMA)
    val dataB: Array[String] = b.split(DELIMA)
    expoNum = dataA(0).toLong + dataB(0).toLong
    pvNum = dataA(1).toLong + dataB(1).toLong
    tmNum = dataA(2).toLong + dataB(2).toLong
    return expoNum + DELIMA + pvNum + DELIMA + tmNum
  }
  def main(args: Array[String]): Unit ={
    println("Running")
    val conf = new SparkConf()
    conf.setAppName("SparkTest")
    val input = args(0)
    val output = args(1)
    val sc = new SparkContext(conf)
    val inData = sc.textFile(input)
    val tmp = inData.flatMap(line => mapData(line)).reduceByKey((x, y) => reduceData(x, y)) //.collect().foreach(println)
    tmp.saveAsTextFile(output)
  }
}
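
    And a sketch of the case mentioned above where one line fans out into several (key, value) pairs: build the List first, then let flatMap expand it. The toy input format and the idea of emitting both a per-(net, gbCode) key and a per-net key for every line are my own illustration, not part of the archived job.

import org.apache.spark.{SparkConf, SparkContext}

object FanOutSketch {
  private val DELIMA: String = "\t"

  // One input line produces two (key, value) pairs: one keyed by (net, gbCode)
  // and one keyed by net alone, so both levels of aggregation fall out of a single pass.
  def mapData(line: String): List[(String, String)] = {
    val fields = line.split(" ") // toy format: "<net> <gbCode> <pv>"
    val net = fields(0)
    val gbCode = fields(1)
    val pv = fields(2)
    List(
      ("net" + DELIMA + net + DELIMA + "gbCode" + DELIMA + gbCode, pv),
      ("net" + DELIMA + net, pv))
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FanOutSketch"))
    val inData = sc.parallelize(Seq("wifi 110000 1", "wifi 120000 1", "3g 110000 1"))
    val counts = inData.flatMap(mapData).reduceByKey((a, b) => (a.toLong + b.toLong).toString)
    counts.collect().foreach(println)
  }
}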


Example script for launching the Spark job

output=zeyangzhou/count
input=zeyangzhou/data
hadoop fs -rmr $output 
jar=/opt/develop/zeyangzhou/zzy-1.0-SNAPSHOT-jar-with-dependencies.jar
SPARK=/usr/lib/spark/bin/spark-submit
${SPARK} --queue datacenter \
         --class zzy.newsMonitor \
         --executor-memory 15g \
         --master yarn-cluster \
         --driver-memory 20g  \
         --num-executors 30 \
         --executor-cores 15 \
         $jar $input $output