This example demonstrates how to implement Add and Sub operations on two integers using the Hadoop V2 RPC framework. The service interface is CaculateService, which extends VersionedProtocol; the code is shown below:
- CaculateService
```java
package cn.hadoop.service;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.ProtocolInfo;
import org.apache.hadoop.ipc.VersionedProtocol;

import cn.hadoop.conf.ConfigureAPI;

/**
 * @Date May 7, 2015
 * @Author dengjie
 * @Note Data calculate service interface
 */
@ProtocolInfo(protocolName = "", protocolVersion = ConfigureAPI.VersionID.RPC_VERSION)
public interface CaculateService extends VersionedProtocol {
    // defined add function
    public IntWritable add(IntWritable arg1, IntWritable arg2);

    // defined sub function
    public IntWritable sub(IntWritable arg1, IntWritable arg2);
}
```
Note that this project uses Hadoop 2.6.0; in this version the CaculateService interface must carry the @ProtocolInfo annotation to declare its protocol version.
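The ConfigureAPI class referenced by the annotation is not listed in this post. A minimal sketch of what it might look like follows; the nested-class layout and the version value 1L are assumptions inferred from how the constant is used:

```java
// Hypothetical sketch of cn.hadoop.conf.ConfigureAPI, which is referenced
// but not listed in this post; the layout and version value are assumptions.
public class ConfigureAPI {
    public static class VersionID {
        // Shared protocol version; must be a compile-time constant so it
        // can be used inside the @ProtocolInfo annotation.
        public static final long RPC_VERSION = 1L;
    }
}
```

Both the @ProtocolInfo annotation and the server-side implementation read the same constant, which keeps the client and server protocol versions in agreement.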
The CaculateServiceImpl class implements the CaculateService interface. The code is shown below:
- CaculateServiceImpl
```java
package cn.hadoop.service.impl;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.ProtocolSignature;

import cn.hadoop.conf.ConfigureAPI;
import cn.hadoop.service.CaculateService;

/**
 * @Date May 7, 2015
 * @Author dengjie
 * @Note Implements CaculateService class
 */
public class CaculateServiceImpl implements CaculateService {

    public ProtocolSignature getProtocolSignature(String protocol, long clientVersion, int clientMethodsHash)
            throws IOException {
        // Return the signature for this protocol version. Note: calling
        // this.getProtocolSignature(...) here would recurse infinitely.
        return new ProtocolSignature(ConfigureAPI.VersionID.RPC_VERSION, null);
    }

    /**
     * Check the corresponding version
     */
    public long getProtocolVersion(String protocol, long clientVersion) throws IOException {
        return ConfigureAPI.VersionID.RPC_VERSION;
    }

    /**
     * Add nums
     */
    public IntWritable add(IntWritable arg1, IntWritable arg2) {
        return new IntWritable(arg1.get() + arg2.get());
    }

    /**
     * Sub nums
     */
    public IntWritable sub(IntWritable arg1, IntWritable arg2) {
        return new IntWritable(arg1.get() - arg2.get());
    }
}
```
The CaculateServer class exposes the service to clients. The code is shown below:
- CaculateServer
```java
package cn.hadoop.rpc;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.RPC.Server;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import cn.hadoop.service.CaculateService;
import cn.hadoop.service.impl.CaculateServiceImpl;

/**
 * @Date May 7, 2015
 * @Author dengjie
 * @Note Server Main
 */
public class CaculateServer {

    private static final Logger LOGGER = LoggerFactory.getLogger(CaculateServer.class);

    public static final int IPC_PORT = 9090;

    public static void main(String[] args) {
        try {
            Server server = new RPC.Builder(new Configuration())
                    .setProtocol(CaculateService.class)
                    .setBindAddress("127.0.0.1")
                    .setPort(IPC_PORT)
                    .setInstance(new CaculateServiceImpl())
                    .build();
            server.start();
            LOGGER.info("CaculateServer has started");
            System.in.read(); // keep the server process alive until input arrives
        } catch (Exception ex) {
            LOGGER.error("CaculateServer error, message is " + ex.getMessage(), ex);
        }
    }
}
```
Note that in Hadoop V2 the RPC Server object can no longer be obtained via RPC.getServer(); that method has been removed and replaced by the RPC.Builder pattern for constructing a new Server object.
The RPCClient class is used to access the server. The implementation is shown below:
- RPCClient
```java
package cn.hadoop.rpc;

import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.ipc.RPC;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import cn.hadoop.service.CaculateService;

/**
 * @Date May 7, 2015
 * @Author dengjie
 * @Note RPC Client Main
 */
public class RPCClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(RPCClient.class);

    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("127.0.0.1", CaculateServer.IPC_PORT);
        CaculateService service = null;
        try {
            service = RPC.getProxy(CaculateService.class,
                    RPC.getProtocolVersion(CaculateService.class), addr, new Configuration());
            int add = service.add(new IntWritable(2), new IntWritable(3)).get();
            int sub = service.sub(new IntWritable(5), new IntWritable(2)).get();
            LOGGER.info("2+3=" + add);
            LOGGER.info("5-2=" + sub);
        } catch (Exception ex) {
            LOGGER.error("Client error, info is " + ex.getMessage(), ex);
        } finally {
            if (service != null) {
                RPC.stopProxy(service); // release the proxy's connection resources
            }
        }
    }
}
```
A preview screenshot of the Hadoop V2 RPC server is shown below:

A preview screenshot of the Hadoop V2 RPC client is shown below:
7. Summary
The Hadoop V2 RPC framework wraps raw socket communication and defines its own base interface, VersionedProtocol. Objects must be transmitted over the network in serialized form, and traditional Java serialization produces comparatively large objects; for details on Hadoop V2 serialization, see 《Hadoop2源码分析-序列化篇》. The framework provides its own server-side and client-side objects: the server object is obtained via new RPC.Builder(...).build(), and the client proxy via RPC.getProxy(). Both require a Configuration object, which loads the relevant Hadoop configuration files.
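To make the size advantage concrete, here is a JDK-only sketch of Writable-style serialization. SimpleIntWritable below is a stand-in for Hadoop's IntWritable (not the real class): it writes its value as a raw 4-byte int, which is why it is far more compact than objects produced by java.io.Serializable, whose output also carries class metadata.

```java
import java.io.*;

// Stand-in for Hadoop's IntWritable: serializes as a raw 4-byte int
// via the same write/readFields contract as Hadoop's Writable interface.
class SimpleIntWritable {
    private int value;
    SimpleIntWritable(int value) { this.value = value; }
    int get() { return value; }

    void write(DataOutput out) throws IOException { out.writeInt(value); }
    void readFields(DataInput in) throws IOException { value = in.readInt(); }
}

public class WritableDemo {
    public static void main(String[] args) throws IOException {
        // Serialize: a single int costs exactly 4 bytes on the wire.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        new SimpleIntWritable(42).write(new DataOutputStream(buf));
        System.out.println("serialized size = " + buf.size() + " bytes"); // 4 bytes

        // Deserialize into a fresh object and recover the value.
        SimpleIntWritable copy = new SimpleIntWritable(0);
        copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("value = " + copy.get()); // 42
    }
}
```

This is the same pattern the RPC example relies on when IntWritable arguments and return values cross the wire between RPCClient and CaculateServer.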
8. Closing Remarks
That is all I have to share in this post. If you run into any problems while studying this material, feel free to join the discussion group or send me an email, and I will do my best to answer. Keep learning, and good luck to us all!