HBase~MapReduce~Importing data from HDFS into an HBase table~Custom MapReduce

Using the HBase Java API, you can build MapReduce jobs that operate on HBase, for example importing data from the local file system into an HBase table with MapReduce, or reading raw data out of HBase and analyzing it with MapReduce.

1. Modify the Hadoop configuration

Hadoop version: 2.9.2

HBase version: 2.0.3

Configure Hadoop to load the HBase jars at startup.

Edit Hadoop's configuration file hadoop-env.sh:

Add the environment variable below, then restart Hadoop; afterwards the HBase jars should show up in the output of hadoop classpath.

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/usr/local/hbase-2.0.3/lib/*

2. Developing the Java project

Add the Maven dependencies:

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>2.0.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-mapreduce</artifactId>
            <version>2.0.3</version>
        </dependency>
    </dependencies>

The custom MapReduce job:

package hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

import java.io.IOException;

/**
 * @describe: A custom MapReduce job that imports data from HDFS into HBase.
 */
public class ReadHdfsToHbase {
    // column family
    public static final String CF = "info1";

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
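        // HBase connection settings: ZooKeeper quorum, HBase root directory on HDFS,
        // and the target table (second command-line argument) used by TableOutputFormat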
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop1:2181");
        conf.set("hbase.rootdir", "hdfs://hadoop1:9000/HBase");
        conf.set(TableOutputFormat.OUTPUT_TABLE, args[1]);
        Job job = Job.getInstance(conf, ReadHdfsToHbase.class.getSimpleName());
        TableMapReduceUtil.addDependencyJars(job);
        job.setJarByClass(ReadHdfsToHbase.class);

        job.setMapperClass(ReadHdfsToHbaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        job.setReducerClass(ReadHdfsToHbaseReducer.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        job.setOutputFormatClass(TableOutputFormat.class);
        job.waitForCompletion(true);
    }

    public static class ReadHdfsToHbaseMapper extends Mapper<LongWritable, Text, Text, Text> {
        private final Text outKey = new Text();
        private final Text outValue = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
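            // Input lines are tab-separated: rowkey \t name \t age \t gender \t birthday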
            String[] splits = value.toString().split("\t");
            outKey.set(splits[0]);
            outValue.set(splits[1]+"\t"+splits[2]+"\t"+splits[3]+"\t"+splits[4]);
            context.write(outKey, outValue);
        }
    }

    public static class ReadHdfsToHbaseReducer extends TableReducer<Text, Text, NullWritable> {

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            // Text#getBytes() returns the padded backing array, so build the row key
            // from the String value instead.
            Put put = new Put(key.toString().getBytes());
            for (Text text : values) {
                String[] fields = text.toString().split("\t");
                // Skip columns whose value is the literal string "NULL"
                if (!"NULL".equals(fields[0])) {
                    put.addColumn(CF.getBytes(), "name".getBytes(), fields[0].getBytes());
                }
                if (!"NULL".equals(fields[1])) {
                    put.addColumn(CF.getBytes(), "age".getBytes(), fields[1].getBytes());
                }
                if (!"NULL".equals(fields[2])) {
                    put.addColumn(CF.getBytes(), "gender".getBytes(), fields[2].getBytes());
                }
                if (!"NULL".equals(fields[3])) {
                    put.addColumn(CF.getBytes(), "birthday".getBytes(), fields[3].getBytes());
                }
            }
            context.write(NullWritable.get(), put);
        }
    }
}
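
For reference, the driver wiring can also be done with TableMapReduceUtil.initTableReducerJob, which sets the reducer class, the TableOutputFormat and the output table (and ships the HBase dependency jars) in one call. A minimal sketch reusing the Mapper and Reducer above; the class name ReadHdfsToHbaseDriver is just for illustration:

package hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class ReadHdfsToHbaseDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop1:2181");

        Job job = Job.getInstance(conf, ReadHdfsToHbaseDriver.class.getSimpleName());
        job.setJarByClass(ReadHdfsToHbaseDriver.class);

        job.setMapperClass(ReadHdfsToHbase.ReadHdfsToHbaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));

        // Sets the reducer class, TableOutputFormat and the output table (args[1]),
        // and adds the HBase dependency jars to the job.
        TableMapReduceUtil.initTableReducerJob(args[1], ReadHdfsToHbase.ReadHdfsToHbaseReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}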

3. Testing

1. Package the custom MR job as a jar and upload the jar to the Hadoop server.
2. Create the table in HBase: create 'stu','info1'
3. Run the MR job:
   hadoop jar ReadHdfsToHbase.jar hadoop.ReadHdfsToHbase  /stu.txt  stu
4. View the data in HBase (a Java equivalent of this scan is sketched after the data file below):
   scan 'stu'

Data file: stu.txt
1    zhangsan    10    male    NULL
2    lisi    NULL    NULL    NULL
3    wangwu    NULL    NULL    NULL
4    zhaoliu    NULL    NULL    1993
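
Instead of the shell scan, the import can also be checked from Java with the HBase client API. A minimal sketch, assuming the table 'stu' and column family 'info1' created above; the class name ScanStu is just for illustration. It prints every cell of every row:

package hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanStu {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "hadoop1:2181");

        // Scan the whole 'stu' table and print rowkey, qualifier and value of each cell
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("stu"));
             ResultScanner scanner = table.getScanner(new Scan())) {
            for (Result result : scanner) {
                for (Cell cell : result.rawCells()) {
                    System.out.println(Bytes.toString(CellUtil.cloneRow(cell)) + " "
                            + Bytes.toString(CellUtil.cloneQualifier(cell)) + "="
                            + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
}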