A Working Example of the Hadoop V2 RPC Framework

This example demonstrates how to implement Add and Sub operations on two integers on top of the Hadoop V2 RPC framework. The service interface is CaculateService, which extends VersionedProtocol; its code is shown below:

  • CaculateService
package cn.hadoop.service;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.ProtocolInfo;
import org.apache.hadoop.ipc.VersionedProtocol;

import cn.hadoop.conf.ConfigureAPI;

/**
 * @Date May 7, 2015
 *
 * @Author dengjie
 *
 * @Note Data calculate service interface
 */
@ProtocolInfo(protocolName = "", protocolVersion = ConfigureAPI.VersionID.RPC_VERSION)
public interface CaculateService extends VersionedProtocol {

    // add two integers and return the sum
    public IntWritable add(IntWritable arg1, IntWritable arg2);

    // subtract the second integer from the first and return the difference
    public IntWritable sub(IntWritable arg1, IntWritable arg2);

}

  Note that this project is built against Hadoop 2.6.0; the CaculateService interface must carry the @ProtocolInfo annotation shown above to declare its protocol name and version.
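
  The ConfigureAPI class referenced by the annotation and by the implementation below is not listed in the post. The following is a minimal sketch of what it might look like, assuming it does nothing more than hold the shared RPC version constant; the nested VersionID interface and the value 1L are assumptions inferred from how the rest of the code uses the class, not the author's original source:

  • ConfigureAPI (sketch)
package cn.hadoop.conf;

/**
 * Shared constants for the RPC example (sketch only; the original
 * ConfigureAPI is not shown in the post).
 */
public class ConfigureAPI {

    /** Protocol version shared by the server and the client. */
    public interface VersionID {
        // Any fixed long works, as long as both sides agree on it.
        long RPC_VERSION = 1L;
    }

}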

  The CaculateServiceImpl class implements the CaculateService interface. Its code is shown below:

  • CaculateServiceImpl
package cn.hadoop.service.impl;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.ProtocolSignature;

import cn.hadoop.conf.ConfigureAPI;
import cn.hadoop.service.CaculateService;

/**
 * @Date May 7, 2015
 *
 * @Author dengjie
 *
 * @Note Implements CaculateService class
 */
public class CaculateServiceImpl implements CaculateService {

    /**
     * Return the signature (version and supported methods) of this protocol.
     */
    public ProtocolSignature getProtocolSignature(String protocol, long clientVersion, int clientMethodsHash)
            throws IOException {
        // Build the signature from the declared version; a null method-hash array
        // means all methods of the protocol are supported.
        return new ProtocolSignature(ConfigureAPI.VersionID.RPC_VERSION, null);
    }

    /**
     * Return the version of this protocol implementation.
     */
    public long getProtocolVersion(String protocol, long clientVersion) throws IOException {
        return ConfigureAPI.VersionID.RPC_VERSION;
    }

    /**
     * Add nums
     */
    public IntWritable add(IntWritable arg1, IntWritable arg2) {
        return new IntWritable(arg1.get() + arg2.get());
    }

    /**
     * Sub nums
     */
    public IntWritable sub(IntWritable arg1, IntWritable arg2) {
        return new IntWritable(arg1.get() - arg2.get());
    }

}
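
  Aside: instead of constructing a ProtocolSignature by hand, an implementation can delegate to Hadoop's ProtocolSignature.getProtocolSignature helper, which derives the method hashes from the server instance. A sketch of that variant, intended as a drop-in replacement for the method of the same name in CaculateServiceImpl above (same class and imports):

    public ProtocolSignature getProtocolSignature(String protocol, long clientVersion, int clientMethodsHash)
            throws IOException {
        // Let Hadoop compute the signature (version plus method hashes) for this instance.
        return ProtocolSignature.getProtocolSignature(this, protocol, clientVersion, clientMethodsHash);
    }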

  The CaculateServer class starts the RPC server and exposes the service to clients. Its code is shown below:

  • CaculateServer
package cn.hadoop.rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;
import org.slf4j.LoggerFactory;
import org.slf4j.Logger;

import cn.hadoop.service.CaculateService;
import cn.hadoop.service.impl.CaculateServiceImpl;

/**
 * @Date May 7, 2015
 *
 * @Author dengjie
 *
 * @Note Server Main
 */
public class CaculateServer {

    private static final Logger LOGGER = LoggerFactory.getLogger(CaculateServer.class);

    public static final int IPC_PORT = 9090;

    public static void main(String[] args) {
        try {
            Server server = new RPC.Builder(new Configuration()).setProtocol(CaculateService.class)
                    .setBindAddress("127.0.0.1").setPort(IPC_PORT).setInstance(new CaculateServiceImpl()).build();
            server.start();
            LOGGER.info("CaculateServer has started");
            System.in.read();
        } catch (Exception ex) {
            ex.printStackTrace();
            LOGGER.error("CaculateServer server error,message is " + ex.getMessage());
        }
    }

}

  Note that in Hadoop V2 the RPC Server can no longer be obtained through RPC.getServer(); that method has been removed and replaced by the RPC.Builder, which is now used to construct the Server object.
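
  For readers coming from Hadoop V1, the change looks roughly like the sketch below. The V1 call is reproduced from memory and its exact overload may differ between 1.x releases; the V2 form is the one used in CaculateServer above:

// Hadoop V1 style (removed in V2):
// Server server = RPC.getServer(new CaculateServiceImpl(), "127.0.0.1", IPC_PORT, conf);

// Hadoop V2 style:
Server server = new RPC.Builder(conf)
        .setProtocol(CaculateService.class)
        .setInstance(new CaculateServiceImpl())
        .setBindAddress("127.0.0.1")
        .setPort(IPC_PORT)
        .build();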

  The RPCClient class calls the server from the client side. Its code is shown below:

  • RPCClient
package cn.hadoop.rpc;

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.RPC;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import cn.hadoop.service.CaculateService;

/**
 * @Date May 7, 2015
 *
 * @Author dengjie
 *
 * @Note RPC Client Main
 */
public class RPCClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(RPCClient.class);

    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("127.0.0.1", CaculateServer.IPC_PORT);
        CaculateService service = null;
        try {
            // Obtain a client-side proxy; the version passed must match the server's protocol version.
            service = RPC.getProxy(CaculateService.class,
                    RPC.getProtocolVersion(CaculateService.class), addr, new Configuration());
            int add = service.add(new IntWritable(2), new IntWritable(3)).get();
            int sub = service.sub(new IntWritable(5), new IntWritable(2)).get();
            LOGGER.info("2+3=" + add);
            LOGGER.info("5-2=" + sub);
        } catch (Exception ex) {
            LOGGER.error("Client error, message is " + ex.getMessage(), ex);
        } finally {
            if (service != null) {
                // Release the proxy and its underlying connection.
                RPC.stopProxy(service);
            }
        }
    }

}

  The original post includes screenshots of the server-side and client-side console output at this point. On a successful run, the server logs "CaculateServer has started", and the client logs the two results computed over RPC: 2+3=5 and 5-2=3.

7. Summary

  The Hadoop V2 RPC framework wraps socket communication and defines its own base protocol interface, VersionedProtocol. Objects sent across the network must be serialized; Hadoop V2 uses its own serialization mechanism (see 《Hadoop2源码分析-序列化篇》 for details) because standard Java serialization produces comparatively large objects. The framework also supplies its own server-side and client-side objects: the server is obtained with new RPC.Builder(...)...build(), and the client proxy with RPC.getProxy(...). Both take a Configuration object, which carries the settings from the Hadoop configuration files.
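
  To make the serialization point concrete: the IntWritable values exchanged in this example implement Hadoop's Writable contract (write to a DataOutput, read back from a DataInput). The following standalone sketch, independent of the RPC code above and using a throwaway class name of my own, shows the round trip and that an IntWritable occupies only four bytes on the wire:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;

import org.apache.hadoop.io.IntWritable;

public class WritableRoundTrip {

    public static void main(String[] args) throws Exception {
        // Serialize an IntWritable to a byte array via its Writable.write() method.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        new IntWritable(42).write(out);
        out.flush();

        // Deserialize it back via Writable.readFields().
        IntWritable copy = new IntWritable();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.get() + " (" + bytes.size() + " bytes)"); // prints: 42 (4 bytes)
    }
}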

8. Closing Remarks

  That is all I want to share in this post. If you run into any problems while studying, feel free to join the discussion group or send me an email; I will do my best to answer. Let's keep learning together!
