Big Data Notes -- Hadoop (Part 4)


Contents

I. MapReduce

1. Overview

2. Characteristics

i. Advantages

ii. Disadvantages

II. Getting Started Example

1. Understanding the case

2. Uploading a local file to HDFS on the cluster

3. Writing Java code for a MapReduce character count

4. Configuring Hadoop on Windows

5. Running the program

III. Exercises and the IDEA Plugin

1. Exercise: word count

2. Exercise: IP deduplication

3. IDEA plugin

IV. Components

1. Writable: serialization

2. Partitioner: partitioning

V. WritableComparable: sorting

1. WritableComparable

2. Case ①

3. Case ②


I. MapReduce

1. Overview

MapReduce is the distributed computing framework provided by Hadoop.

MapReduce was implemented by Doug Cutting based on Google's MapReduce paper.

MapReduce splits the whole computation into two phases: a Map phase and a Reduce phase. In the Map phase, the user is responsible for cleaning and mapping the data; in the Reduce phase, the user is responsible for the final aggregation (reduction) of the data.

2. Characteristics

i. Advantages

**Easy to program:** MapReduce provides a relatively simple programming model, which makes it comparatively easy to learn. Users only need to implement a few interfaces or extend a few classes and override the required logic to get distributed computation.

**Good scalability:** If the current cluster's capacity is not enough, MapReduce can easily scale up by adding more nodes to the cluster.

**High fault tolerance:** When a node fails, MapReduce automatically migrates the computation tasks running on that node elsewhere, without any manual intervention.

It is well suited to computations over large data sets, especially at the PB scale and above, which is why MapReduce is mainly used for offline (batch) computation.

ii. Disadvantages

Not suitable for real-time processing: MapReduce expects the data it processes to be static, whereas real-time processing deals with a continuously changing data set.

Not good at streaming computation: MapReduce has relatively low execution efficiency, and it is even less efficient when applied to streaming workloads.

Not good at DAG (directed acyclic graph) computation: if the output of one MapReduce job is to be used as the input of the next, the jobs must be chained manually with a workflow scheduler; MapReduce itself has no such scheduling capability (a sketch follows below).
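To make the last point concrete, here is a minimal, hypothetical sketch of chaining two jobs by hand: the driver waits for the first job to finish and then points the second job at the first job's output directory. The job names, paths, and the omitted Mapper/Reducer settings are placeholders for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobsDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // First job: read the raw input and write an intermediate result
        Job first = Job.getInstance(conf, "step-1");
        first.setJarByClass(ChainedJobsDriver.class);
        // ... set the Mapper/Reducer and output types for step 1 here ...
        FileInputFormat.addInputPath(first, new Path("hdfs://hadoop01:9000/txt/input"));
        FileOutputFormat.setOutputPath(first, new Path("hdfs://hadoop01:9000/result/step1"));

        // The second job only starts if the first one succeeded;
        // its input is the output directory of the first job
        if (first.waitForCompletion(true)) {
            Job second = Job.getInstance(conf, "step-2");
            second.setJarByClass(ChainedJobsDriver.class);
            // ... set the Mapper/Reducer and output types for step 2 here ...
            FileInputFormat.addInputPath(second, new Path("hdfs://hadoop01:9000/result/step1"));
            FileOutputFormat.setOutputPath(second, new Path("hdfs://hadoop01:9000/result/step2"));
            second.waitForCompletion(true);
        }
    }
}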

II. Getting Started Example

1. Understanding the case

Case: count the number of occurrences of each character in a file.

2. Uploading a local file to HDFS on the cluster

First, create a directory under /home on hadoop01 to hold the data:

cd /home

mkdir data

Then upload the files to be analyzed from the local machine into this directory; here I put them in a folder named txt.

You can use the rz command for the upload; if it is not installed, install it with: yum -y install lrzsz

Then upload the folder to the root directory of HDFS:

hadoop fs -put /home/data/txt /

3. Writing Java code for a MapReduce character count

As with the HDFS API examples, add the required dependencies to pom.xml (covered in an earlier post), then create log4j2.xml under resources.

Then write the code.

Create three classes:

Mapper code:

package org.example.charcount;


import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

// Handles the Map phase
// In MapReduce, the data being processed must be serializable
// MapReduce provides its own serialization mechanism
// KEYIN    - type of the input key. By default it is the byte offset of the line
// VALUEIN  - type of the input value. By default it is the line of text that was read
// KEYOUT   - type of the output key. In this case the output key is a character
// VALUEOUT - type of the output value. In this case the output value is a count
public class CharCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private final LongWritable once = new LongWritable(1);

    // Override the map method and put the processing logic here
    // key:     the byte offset of the line
    // value:   the line of text that was read
    // context: the job context used to emit output
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split the line into individual characters
        char[] cs = value.toString().toCharArray();
        // If the line is "hello", the array contains {'h','e','l','l','o'}
        // We could emit pre-aggregated pairs such as h:1 e:1 l:2 o:1;
        // here we simply emit h:1 e:1 l:1 l:1 o:1 and let the Reducer sum the counts
        for (char c : cs) {
            context.write(new Text(c + ""), once);
        }
    }
}

Reducer code:

package org.example.charcount;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

// KEYIN, VALUEIN - types of the input key and value.
// The Reducer's input comes from the Mapper,
// so the Mapper's output types are the Reducer's input types
// KEYOUT, VALUEOUT - types of the output. In this case we output the total count for each character
public class CharCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    // Override the reduce method and put the aggregation logic here
    // key:     in this case, the character
    // values:  in this case, an iterator over the counts for that character
    // context: the job context used to emit output

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        // key    = 'a'
        // values = {1, 1, 1, 1, ...}
        // Accumulate the total count
        long sum = 0;
        for (LongWritable value : values) {
            sum += value.get();
        }
        context.write(key, new LongWritable(sum));
    }
}

Driver code:

package org.example.charcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class CharCountDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // Build the configuration
        Configuration conf = new Configuration();
        // Build the job
        Job job = Job.getInstance(conf);

        // Set the entry (driver) class
        job.setJarByClass(CharCountDriver.class);
        // Set the Mapper class
        job.setMapperClass(CharCountMapper.class);
        // Set the Reducer class
        job.setReducerClass(CharCountReducer.class);
        // Set the Mapper's output types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        // Set the Reducer's output types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // Set the input path
        FileInputFormat.addInputPath(job, new Path("hdfs://hadoop01:9000/txt/characters.txt"));
        // Set the output path - it must not already exist
        FileOutputFormat.setOutputPath(job, new Path("hdfs://hadoop01:9000/result/char_count"));
        // Submit the job and wait for completion
        job.waitForCompletion(true);
    }
}

Running the program at this point will produce an error: Hadoop's compatibility with Windows is limited, so some extra configuration is needed to run Hadoop programs on Windows.

4. Configuring Hadoop on Windows

The required packages will be shared via my Baidu Netdisk later; for now you can download them from the internet or message me for them.

When extracting bin.7z you will be prompted about files with the same names; choose "replace all".

After the configuration, double-click winutils.exe. If a black console window flashes and disappears, everything is fine.

If double-clicking winutils.exe produces an error, copy the msvcr120.dll file into C:\Windows\System32, then double-click winutils.exe again and check whether the error persists.

Then configure the environment variables:

Create HADOOP_HOME (typically pointing to the Hadoop unpack directory)

Edit Path (typically adding %HADOOP_HOME%\bin)

Create HADOOP_USER_NAME (the user name to access HDFS as)

5. Running the program

①. If running the program produces a null/bin/winutils.exe error, the fix is:

First check that the environment variables are configured correctly.

If the environment variables are correct but the program still fails, add this line to the Driver class: System.setProperty("hadoop.home.dir", "<Hadoop unpack path>"); (see the sketch after these notes)

②. If running the program produces a NativeIO$Windows error, Hadoop's compatibility with Windows is not good enough and a runtime check fails. The fix is:

First check that the environment variables are configured correctly.

If the environment variables are correct, copy hadoop.dll from the bin directory of the Hadoop unpack directory into C:\Windows\System32, then run the program again and check whether the result is correct.

③. If neither of the above solves the problem, create the corresponding package in the current project and copy NativeIO.java from the jar directory into that package.

④. Before re-running the program, remember to delete the result directory in HDFS.
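For item ① above, a minimal sketch of where the property goes, based on the top of the CharCountDriver shown earlier; the path here is only a hypothetical example, substitute your own Hadoop unpack directory:

public class CharCountDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // Set this before the Configuration/Job is created so that Hadoop can find
        // winutils.exe under <hadoop.home.dir>\bin on Windows
        System.setProperty("hadoop.home.dir", "D:\\software\\hadoop-3.1.3"); // hypothetical path
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);
        // ... the rest of the driver stays the same ...
    }
}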

To list the HDFS contents:

hadoop fs -ls -R /

To delete recursively:

hadoop fs -rm -r /result

After a successful run, there is a char_count directory under result; download it to the local machine to see the output inside:

Note: this is the HDFS web UI; port 9870 is for Hadoop 3.x, while Hadoop 2.x uses 50070.

Open hadoop01:9870 in a browser.

Download part-r-00000 to the local machine to view the result:

III. Exercises and the IDEA Plugin

1. Exercise: word count

①. Let's do another exercise: word count. Create a file (words.txt, matching the input path used in the driver below) in the txt folder and put the following data into it:

hello tom hello bob david joy hello
hello rose joy hello rose
jerry hello tom hello joy
hello rose joy tom hello david

②. Write the code in IDEA. In the same MapReduce Maven project, create a new package named wordcount.

Inside it, create three classes:

WordCountMapper, WordCountReducer, WordCountDriver

package org.example.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WordCountMapper extends Mapper<LongWritable, Text,Text, IntWritable> {
    private final IntWritable once = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split the line into words
        String[] arr = value.toString().split(" ");
        // Emit each word with a count of 1
        for (String s : arr) {
            context.write(new Text(s), once);
        }

    }
}


package org.example.wordcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WordCountReducer extends Reducer<Text, IntWritable,Text,IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum=0;
        for(IntWritable value :values){
            sum += value.get();
        }
        context.write(key,new IntWritable(sum));
    }
}


package org.example.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        // If the Mapper's and Reducer's output types are the same, they only need to be set once
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job,new Path("hdfs://hadoop01:9000/txt/words.txt"));
        FileOutputFormat.setOutputPath(job,new Path("hdfs://hadoop01:9000/result/word_count"));
        job.waitForCompletion(true);
    }
}

③. The result is as follows:
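With the sample data above, the downloaded part-r-00000 file should contain (keys come out sorted):

bob	1
david	2
hello	9
jerry	1
joy	4
rose	3
tom	3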

2. Exercise: IP deduplication

Straight to the code. The idea: the Mapper emits each line (an IP address) as the key with a NullWritable value; since MapReduce groups identical keys, the Reducer receives each distinct IP exactly once and just writes the key back out.

package org.example.ip;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class IPMapper extends Mapper<LongWritable, Text,Text, NullWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
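        // Emit the whole line (one IP address) as the key; duplicate IPs collapse into one group during shuffle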
        context.write(value,NullWritable.get());
    }
}


package org.example.ip;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class IPReducer extends Reducer<Text, NullWritable,Text,NullWritable> {

    @Override
    protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
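        // Each distinct IP arrives here exactly once, so just write the key back out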
        context.write(key,NullWritable.get());

    }
}


package org.example.ip;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class IPDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(IPDriver.class);
        job.setMapperClass(IPMapper.class);
        job.setReducerClass(IPReducer.class);


        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job,new Path("hdfs://hadoop01:9000/txt/ip.txt"));
        FileOutputFormat.setOutputPath(job,new Path("hdfs://hadoop01:9000/result/ip"));
        job.waitForCompletion(true);
    }
}

3. IDEA plugin

With an HDFS plugin for IDEA, viewing the results is much more convenient:

After installing it and restarting IDEA:

Then connect it to our HDFS:

Now we can upload and download files, and open the results of the earlier exercises directly:

IV. Components

1. Writable: serialization

①. In MapReduce, the data being processed must be serializable. MapReduce provides its own serialization mechanism, which is implemented on top of AVRO.

To make this easier to use, on top of AVRO MapReduce offers a simpler form of serialization: the class of the object being serialized just needs to implement the Writable interface and override its write and readFields methods.

②. MapReduce provides serialization wrapper types for the common Java types:

Java type    MapReduce serialization type
Byte         ByteWritable
Short        ShortWritable
Int          IntWritable
Long         LongWritable
Float        FloatWritable
Double       DoubleWritable
Boolean      BooleanWritable
String       Text
Array        ArrayWritable
Map          MapWritable

③. Notes

In MapReduce, the class of an object being serialized must provide a no-argument constructor.

In MapReduce, the fields of an object being serialized must not be null.

④. Code snippet

package org.example.serialflow;

import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class Flow implements Writable {
    private int upFlow;
    private int downFlow;
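    // Note: a Writable class must keep a no-argument constructor (the implicit default constructor here is enough)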
    public int getDownFlow() {
        return downFlow;
    }
    public void setDownFlow(int downFlow) {
        this.downFlow = downFlow;
    }
    public int getUpFlow() {
        return upFlow;
    }
    public void setUpFlow(int upFlow) {
        this.upFlow = upFlow;
    }
    // Serialize the required fields, in order
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(getUpFlow());
        out.writeInt(getDownFlow());
    }

    @Override
    public void readFields(DataInput in) throws IOException {
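        // Deserialize the fields in exactly the same order they were written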
        setUpFlow(in.readInt());
        setDownFlow(in.readInt());
    }
}

2. Partitioner: partitioning

①. In MapReduce, partitioning splits the data according to a specified condition; essentially it classifies the data.

②. If no partitioner is specified, MapReduce uses HashPartitioner by default (see the sketch after these notes).

③. In practice, if you need your own classification condition, you define a custom partitioner.

④. Case: compute each person's total traffic, split by region (file: flow.txt).

In MapReduce, partitions are numbered starting from 0 and increasing by 1.

If not specified, there is only 1 ReduceTask by default, and each ReduceTask produces one result file. Therefore, if you set a Partitioner, you also need to set a matching number of ReduceTasks - the number of partitions determines the number of ReduceTasks.
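As mentioned in ②, when no partitioner is set, the default HashPartitioner is used. It is roughly equivalent to the following sketch: keys are spread over the ReduceTasks by their hashCode, and the bit mask keeps the partition index non-negative:

// Simplified sketch of the default org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
public class HashPartitioner<K, V> extends org.apache.hadoop.mapreduce.Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the result is always in [0, numReduceTasks)
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}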

Here is the code.

Flow class:

package org.example.partflow;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class Flow implements Writable {
    private String city="";
    private int upFlow;
    private int downFlow;


    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }

    public int getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(int upFlow) {
        this.upFlow = upFlow;
    }

    public int getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(int downFlow) {
        this.downFlow = downFlow;
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        dataOutput.writeUTF(getCity());
        dataOutput.writeInt(getUpFlow());
        dataOutput.writeInt(getDownFlow());
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        // Read the fields back in the same order they were written: city, upFlow, downFlow
        setCity(dataInput.readUTF());
        setUpFlow(dataInput.readInt());
        setDownFlow(dataInput.readInt());
    }
}

PartFlowMapper class:

package org.example.partflow;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class PartFlowMapper extends Mapper<LongWritable,Text,Text,Flow> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Input line format: 15936842654 shanghai peter 236 7566
        String[] arr = value.toString().split(" ");
        // Wrap the parsed fields in a Flow object
        Flow f = new Flow();
        f.setCity(arr[1]);
        f.setUpFlow(Integer.parseInt(arr[3]));
        f.setDownFlow(Integer.parseInt(arr[4]));
        context.write(new Text(arr[2]), f);
    }
}

PartFlowReducer class:

package org.example.partflow;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class PartFlowReducer extends Reducer<Text,Flow,Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<Flow> values, Context context) throws IOException, InterruptedException {
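        // Sum the upstream + downstream traffic of one person within this partition (region)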
        int sum=0;
        for (Flow value:values){
            sum+=value.getUpFlow()+value.getDownFlow();
        }
        context.write(key,new IntWritable(sum));
    }
}

PartFlowPartitioner class:

package org.example.partflow;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class PartFlowPartitioner extends Partitioner<Text,Flow> {
    @Override
    public int getPartition(Text text, Flow flow, int i) {
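       // Route each record by its city; the returned partition numbers must lie in [0, numReduceTasks)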
       String city = flow.getCity();
       if(city.equals("beijing")) return 0;
       else if(city.equals("shanghai")) return 1;
       else return 2;
    }
}

PartFlowDriver:

package org.example.partflow;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


import java.io.IOException;

public class PartFlowDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(PartFlowDriver.class);
        job.setMapperClass(PartFlowMapper.class);
        job.setReducerClass(PartFlowReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Flow.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Set the partitioner class
        job.setPartitionerClass(PartFlowPartitioner.class);
        // Set the number of ReduceTasks to 3, one per partition
        job.setNumReduceTasks(3);

        FileInputFormat.addInputPath(job,new Path("hdfs://hadoop01:9000/txt/flow.txt"));
        FileOutputFormat.setOutputPath(job,new Path("hdfs://hadoop01:9000/result/part_flow"));
        job.waitForCompletion(true);
    }
}

The result:

V. WritableComparable: sorting

1. WritableComparable

In MapReduce, the elements placed in the key position are sorted automatically, so the class of the key element must implement Comparable.

Since MapReduce also requires transferred data to be serializable, the class of the key element should implement WritableComparable, which combines Writable and Comparable.

In MapReduce, sorting on multiple fields is called a secondary sort.

2. Case ①

Sort the data in the result files by downstream traffic (directory: serial_flow).

Again, create the following classes:

package org.example.sortflow;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class Flow implements WritableComparable<Flow> {

    private String name = "";
    private int upFlow;
    private int downFlow;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(int upFlow) {
        this.upFlow = upFlow;
    }

    public int getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(int downFlow) {
        this.downFlow = downFlow;
    }

    // Sort in ascending order by downstream traffic
    // (Integer.compare avoids the overflow that plain subtraction could cause)
    @Override
    public int compareTo(Flow o) {
        return Integer.compare(this.getDownFlow(), o.getDownFlow());
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(getName());
        out.writeInt(getUpFlow());
        out.writeInt(getDownFlow());
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        setName(in.readUTF());
        setUpFlow(in.readInt());
        setDownFlow(in.readInt());
    }

}


package org.example.sortflow;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class SortFlowDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SortFlowDriver.class);
        job.setMapperClass(SortFlowMapper.class);
        job.setReducerClass(SortFlowReducer.class);

        job.setMapOutputKeyClass(Flow.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        // The input is a directory, so every file under it will be processed
        FileInputFormat.addInputPath(job, new Path("hdfs://hadoop01:9000/result/serial_flow/"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://hadoop01:9000/result/sort_flow"));

        job.waitForCompletion(true);
    }

}


package org.example.sortflow;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class SortFlowMapper extends Mapper<LongWritable, Text, Flow, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Input line format (tab-separated): adair	13766	19363
        String[] arr = value.toString().split("\t");
        Flow f = new Flow();
        f.setName(arr[0]);
        f.setUpFlow(Integer.parseInt(arr[1]));
        f.setDownFlow(Integer.parseInt(arr[2]));
        context.write(f, NullWritable.get());
    }
}


package org.example.sortflow;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class SortFlowReducer extends Reducer<Flow, NullWritable, Text, Text> {
    @Override
    protected void reduce(Flow key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        context.write(new Text(key.getName()), new Text(key.getUpFlow() + "\t" + key.getDownFlow()));
    }
}

Result:

3. Case ②

Sort by month in ascending order; within the same month, sort by profit in descending order (file: profit.txt).

The contents of profit.txt:

2 rose 345
1 rose 235
1 tom 234
2 jim 572
3 rose 123
1 jim 321
2 tom 573
3 jim 876
3 tom 648

The code is as follows:

package org.example.sortprofit;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class Profit implements WritableComparable<Profit> {

    private int month;
    private String name = "";
    private int profit;

    public int getMonth() {
        return month;
    }

    public void setMonth(int month) {
        this.month = month;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getProfit() {
        return profit;
    }

    public void setProfit(int profit) {
        this.profit = profit;
    }

    // Sort by month in ascending order; within the same month, sort by profit in descending order
    @Override
    public int compareTo(Profit o) {
        int r = Integer.compare(getMonth(), o.getMonth());
        if (r == 0)
            // Swapped operands give descending order; Integer.compare avoids subtraction overflow
            return Integer.compare(o.getProfit(), this.getProfit());
        return r;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(getMonth());
        out.writeUTF(getName());
        out.writeInt(getProfit());
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        setMonth(in.readInt());
        setName(in.readUTF());
        setProfit(in.readInt());
    }
}


package org.example.sortprofit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class SortProfitDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SortProfitDriver.class);
        job.setMapperClass(SortProfitMapper.class);
        job.setReducerClass(SortProfitReducer.class);

        job.setMapOutputKeyClass(Profit.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path("hdfs://hadoop01:9000/txt/profit.txt"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://hadoop01:9000/result/sort_profit"));

        job.waitForCompletion(true);
    }

}


package org.example.sortprofit;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class SortProfitMapper extends Mapper<LongWritable, Text, Profit, NullWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 2 rose 345
        String[] arr = value.toString().split(" ");
        Profit p = new Profit();
        p.setMonth(Integer.parseInt(arr[0]));
        p.setName(arr[1]);
        p.setProfit(Integer.parseInt(arr[2]));
        context.write(p, NullWritable.get());
    }
}


package org.example.sortprofit;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class SortProfitReducer extends Reducer<Profit, NullWritable, Text, Text> {
    @Override
    protected void reduce(Profit key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
        context.write(new Text(key.getName()), new Text(key.getMonth() + "\t" + key.getProfit()));
    }
}

The result is as follows:
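With the profit.txt data above and a single result file, the output should be ordered like this (month ascending, profit descending within a month):

jim	1	321
rose	1	235
tom	1	234
tom	2	573
jim	2	572
rose	2	345
jim	3	876
tom	3	648
rose	3	123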
