Section "2.2 IPC Server protocol" of this article,
http://www.wuzesheng.com/?p=2358#more-2358,
explains the IPC mechanism in detail.
Here is an example of how Protobuf is actually used:
Interaction with the datanode goes mainly through ClientDatanodeProtocol.
In org.apache.hadoop.hdfs.protocolPB there are three related classes:
ClientDatanodeProtocolPB (the interface)
ClientDatanodeProtocolServerSideTranslatorPB implements ClientDatanodeProtocolPB: it translates requests coming from the client out of PB form, executes them, and returns the result in PB form.
ClientDatanodeProtocolTranslatorPB implements org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol: together with the Datanode it implements ClientDatanodeProtocol; it is the proxy that answers the client's requests to the datanode.
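The division of labor among these three classes can be sketched without any real protobuf or RPC machinery. The sketch below uses made-up names (WireMsg, DemoProtocol, and so on); only the translator pattern itself mirrors what HDFS does:

```java
// Illustrative sketch only: hypothetical names, no real protobuf dependency.

// Stand-in for a generated protobuf request/response message.
class WireMsg {
    final String payload;
    WireMsg(String payload) { this.payload = payload; }
}

// PB-facing interface (analogous to ClientDatanodeProtocolPB).
interface DemoProtocolPB {
    WireMsg getPathInfo(WireMsg request);
}

// Plain Java interface (analogous to ClientDatanodeProtocol).
interface DemoProtocol {
    String getPathInfo(String blockId);
}

// Server side: unpack the PB request, call the real implementation,
// pack the result back into a PB response
// (analogous to ClientDatanodeProtocolServerSideTranslatorPB).
class ServerSideTranslator implements DemoProtocolPB {
    private final DemoProtocol impl;
    ServerSideTranslator(DemoProtocol impl) { this.impl = impl; }
    public WireMsg getPathInfo(WireMsg request) {
        return new WireMsg(impl.getPathInfo(request.payload));
    }
}

// Client side: pack the Java call into a PB request and unpack the
// PB response (analogous to ClientDatanodeProtocolTranslatorPB).
class ClientTranslator implements DemoProtocol {
    private final DemoProtocolPB rpcProxy; // in HDFS this is a real RPC proxy
    ClientTranslator(DemoProtocolPB rpcProxy) { this.rpcProxy = rpcProxy; }
    public String getPathInfo(String blockId) {
        return rpcProxy.getPathInfo(new WireMsg(blockId)).payload;
    }
}

public class TranslatorDemo {
    public static void main(String[] args) {
        // The "datanode" side: the real implementation.
        DemoProtocol datanode = blockId -> "/data/current/" + blockId;
        // Wire the client translator straight to the server translator;
        // a real deployment puts an RPC engine and a socket in between.
        DemoProtocol proxy =
            new ClientTranslator(new ServerSideTranslator(datanode));
        System.out.println(proxy.getPathInfo("blk_1001"));
    }
}
```

The caller only ever sees DemoProtocol; whether the call crosses a network is hidden entirely inside the two translators, which is exactly the point of the PB translator pair.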
Client-side call:
org.apache.hadoop.hdfs.BlockReaderLocalLegacy.BlockReaderLocalLegacy
private static BlockLocalPathInfo getBlockPathInfo(UserGroupInformation ugi,
    ExtendedBlock blk, DatanodeInfo node, Configuration conf, int timeout,
    Token<BlockTokenIdentifier> token, boolean connectToDnViaHostname)
    throws IOException {
  LocalDatanodeInfo localDatanodeInfo = getLocalDatanodeInfo(node.getIpcPort());
  BlockLocalPathInfo pathinfo = null;
  ClientDatanodeProtocol proxy = localDatanodeInfo.getDatanodeProxy(ugi, node,
      conf, timeout, connectToDnViaHostname);
  try {
    // make RPC to local datanode to find local pathnames of blocks
    pathinfo = proxy.getBlockLocalPathInfo(blk, token);
    if (pathinfo != null) {
      if (LOG.isDebugEnabled()) {
        LOG.debug("Cached location of block " + blk + " as " + pathinfo);
      }
      localDatanodeInfo.setBlockLocalPathInfo(blk, pathinfo);
    }
  } catch (IOException e) {
    localDatanodeInfo.resetDatanodeProxy(); // Reset proxy on error
    throw e;
  }
  return pathinfo;
}
private synchronized ClientDatanodeProtocol getDatanodeProxy(
    UserGroupInformation ugi, final DatanodeInfo node,
    final Configuration conf, final int socketTimeout,
    final boolean connectToDnViaHostname) throws IOException {
  if (proxy == null) {
    try {
      proxy = ugi.doAs(new PrivilegedExceptionAction<ClientDatanodeProtocol>() {
        @Override
        public ClientDatanodeProtocol run() throws Exception {
          return DFSUtil.createClientDatanodeProtocolProxy(node, conf,
              socketTimeout, connectToDnViaHostname);
        }
      });
    } catch (InterruptedException e) {
      LOG.warn("encountered exception ", e);
    }
  }
  return proxy;
}
org.apache.hadoop.hdfs.DFSUtil.java, line 958
static ClientDatanodeProtocol createClientDatanodeProtocolProxy(
    DatanodeID datanodeid, Configuration conf, int socketTimeout,
    boolean connectToDnViaHostname) throws IOException {
  return new ClientDatanodeProtocolTranslatorPB(
      datanodeid, conf, socketTimeout, connectToDnViaHostname);
}
Now let's see how ClientDatanodeProtocolTranslatorPB#getBlockLocalPathInfo wraps getBlockLocalPathInfo into a PB request:
@Override
public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block,
    Token<BlockTokenIdentifier> token) throws IOException {
  GetBlockLocalPathInfoRequestProto req =
      GetBlockLocalPathInfoRequestProto.newBuilder() // translate the request into PB
          .setBlock(PBHelper.convert(block))
          .setToken(PBHelper.convert(token)).build();
  GetBlockLocalPathInfoResponseProto resp;
  try {
    resp = rpcProxy.getBlockLocalPathInfo(NULL_CONTROLLER, req);
  } catch (ServiceException e) {
    throw ProtobufHelper.getRemoteException(e);
  }
  return new BlockLocalPathInfo(PBHelper.convert(resp.getBlock()),
      resp.getLocalPath(), resp.getLocalMetaPath());
}
So the client hands the getBlockLocalPathInfo call on the datanode to the ClientDatanodeProtocol proxy; the proxy wraps the call into an RPC call, serializes it into bytes, and sends it over a socket.
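What "wraps the call into an RPC call and serializes it into bytes" means can be illustrated with a toy frame format. This is a hypothetical sketch, not Hadoop's actual wire protocol (a real frame also carries headers, a call id, and the protobuf-encoded request), but the core idea of a method name plus a length-prefixed payload in one byte frame is the same:

```java
import java.io.*;

public class RpcFrameDemo {
    // Client side: pack method name + serialized request into one frame.
    static byte[] encode(String method, byte[] requestBody) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF(method);              // which protocol method to invoke
        out.writeInt(requestBody.length);  // length-prefixed request payload
        out.write(requestBody);
        return buf.toByteArray();
    }

    // Server side: recover the method name and the request payload.
    static String[] decode(byte[] frame) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(frame));
        String method = in.readUTF();
        byte[] body = new byte[in.readInt()];
        in.readFully(body);
        return new String[] { method, new String(body, "UTF-8") };
    }

    public static void main(String[] args) throws IOException {
        byte[] frame = encode("getBlockLocalPathInfo",
            "blk_1001".getBytes("UTF-8"));
        String[] call = decode(frame);
        System.out.println(call[0] + " <- " + call[1]);
    }
}
```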
Server-side call:
org.apache.hadoop.hdfs.server.datanode.DataNode
line 434
private void initIpcServer(Configuration conf) throws IOException {
  .....
  // Add all the RPC protocols that the Datanode implements
  RPC.setProtocolEngine(conf, ClientDatanodeProtocolPB.class,
      ProtobufRpcEngine.class);
  ClientDatanodeProtocolServerSideTranslatorPB clientDatanodeProtocolXlator =
      new ClientDatanodeProtocolServerSideTranslatorPB(this);
  BlockingService service = ClientDatanodeProtocolService
      .newReflectiveBlockingService(clientDatanodeProtocolXlator);
  // The datanode registers ClientDatanodeProtocolService right at startup;
  // ipcServer acts as a server listening for every datanode method request
  // coming from a ClientDatanodeProtocol proxy.
  ipcServer = new RPC.Builder(conf)
      .setProtocol(ClientDatanodeProtocolPB.class)
      .setInstance(service)
      .setBindAddress(ipcAddr.getHostName())
      .setPort(ipcAddr.getPort())
      .setNumHandlers(conf.getInt(DFS_DATANODE_HANDLER_COUNT_KEY,
          DFS_DATANODE_HANDLER_COUNT_DEFAULT))
      .setVerbose(false)
      .setSecretManager(blockPoolTokenSecretManager)
      .build();
  .....
}
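Conceptually, the "reflective blocking service" returned by newReflectiveBlockingService is a dispatcher that maps the method name carried on the wire to the matching method on the server-side translator. A minimal sketch of that idea, with hypothetical names (Hadoop's generated code dispatches against protobuf service descriptors, not raw reflection):

```java
import java.lang.reflect.Method;

public class DispatchDemo {
    // Stand-in for the PB service interface; names are made up.
    public interface Translator {
        String getBlockLocalPathInfo(String block);
    }

    // Stand-in for ClientDatanodeProtocolServerSideTranslatorPB.
    public static class DataNodeXlator implements Translator {
        public String getBlockLocalPathInfo(String block) {
            return "/data/current/" + block;
        }
    }

    // Look up the method named on the wire and invoke it on the translator.
    public static String dispatch(Class<?> iface, Object impl,
                                  String methodName, String arg) throws Exception {
        Method m = iface.getMethod(methodName, String.class);
        return (String) m.invoke(impl, arg);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch(Translator.class, new DataNodeXlator(),
            "getBlockLocalPathInfo", "blk_7"));
    }
}
```

The RPC.Builder call above then ties such a dispatcher to a listening port and a pool of handler threads, which is all the ipcServer is.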
org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB line107
@Override
public GetBlockLocalPathInfoResponseProto getBlockLocalPathInfo(
    RpcController unused, GetBlockLocalPathInfoRequestProto request)
    throws ServiceException {
  BlockLocalPathInfo resp;
  try {
    resp = impl.getBlockLocalPathInfo(PBHelper.convert(request.getBlock()),
        PBHelper.convert(request.getToken()));
  } catch (IOException e) {
    throw new ServiceException(e);
  }
  return GetBlockLocalPathInfoResponseProto.newBuilder()
      .setBlock(PBHelper.convert(resp.getBlock()))
      .setLocalPath(resp.getBlockPath())
      .setLocalMetaPath(resp.getMetaPath())
      .build();
}
org.apache.hadoop.hdfs.server.datanode.DataNode#getBlockLocalPathInfo // where the call is actually executed
public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block,
    Token<BlockTokenIdentifier> token) throws IOException {
  ....
}