Hadoop RPC

Section 2.2 (the IPC Server protocol part) of this article http://www.wuzesheng.com/?p=2358#more-2358 describes the IPC layer in detail:

Here is an example that illustrates the main way Protobuf is used:

Interaction with a datanode mainly goes through ClientDatanodeProtocol.

Three classes in org.apache.hadoop.hdfs.protocolPB are involved (a toy sketch of how they fit together follows the list):
ClientDatanodeProtocolPB (the interface)
ClientDatanodeProtocolServerSideTranslatorPB implements ClientDatanodeProtocolPB: it receives the PB-encoded request from the client, translates it into a call on the datanode implementation, and returns the result re-encoded as PB.
ClientDatanodeProtocolTranslatorPB implements org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol: together with the Datanode it implements ClientDatanodeProtocol; it is the client-side proxy that serves the client's requests to the datanode.
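
To make the division of labour concrete, here is a minimal toy sketch of the same translator pattern. Plain Java objects stand in for the generated protobuf messages, and every name is made up for illustration; none of this is Hadoop source:

// Toy model of the PB/translator split. WireRequest/WireResponse play the
// role of the generated protobuf messages, GreetingProtocolPB plays the role
// of ClientDatanodeProtocolPB, and the two translators mirror
// ClientDatanodeProtocolTranslatorPB / ClientDatanodeProtocolServerSideTranslatorPB.
interface GreetingProtocol {                 // the Java-level protocol the client codes against
  String greet(String name);
}

class WireRequest  { final String name;  WireRequest(String n)  { name = n; } }
class WireResponse { final String reply; WireResponse(String r) { reply = r; } }

interface GreetingProtocolPB {               // the "PB" interface carried over the wire
  WireResponse greet(WireRequest req);
}

// Client side: Java call -> wire message (what the TranslatorPB class does).
class GreetingTranslatorPB implements GreetingProtocol {
  private final GreetingProtocolPB rpcProxy; // in Hadoop this is an RPC dynamic proxy
  GreetingTranslatorPB(GreetingProtocolPB proxy) { this.rpcProxy = proxy; }
  public String greet(String name) {
    return rpcProxy.greet(new WireRequest(name)).reply;
  }
}

// Server side: wire message -> call on the real implementation
// (what the ServerSideTranslatorPB class does).
class GreetingServerSideTranslatorPB implements GreetingProtocolPB {
  private final GreetingProtocol impl;       // in Hadoop, the DataNode itself
  GreetingServerSideTranslatorPB(GreetingProtocol impl) { this.impl = impl; }
  public WireResponse greet(WireRequest req) {
    return new WireResponse(impl.greet(req.name));
  }
}

Note that in Hadoop the two translators never call each other directly: the RPC engine and a socket sit between them, which is exactly what the rest of this walk-through traces.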

Client-side call:
org.apache.hadoop.hdfs.BlockReaderLocalLegacy.BlockReaderLocalLegacy
private static BlockLocalPathInfo getBlockPathInfo(UserGroupInformation ugi,
      ExtendedBlock blk, DatanodeInfo node, Configuration conf, int timeout,
      Token<BlockTokenIdentifier> token, boolean connectToDnViaHostname)
      throws IOException {
    LocalDatanodeInfo localDatanodeInfo = getLocalDatanodeInfo(node.getIpcPort());
    BlockLocalPathInfo pathinfo = null;
    ClientDatanodeProtocol proxy = localDatanodeInfo.getDatanodeProxy(ugi, node,
        conf, timeout, connectToDnViaHostname);
    try {
      // make RPC to local datanode to find local pathnames of blocks
      pathinfo = proxy.getBlockLocalPathInfo(blk, token);
      if (pathinfo != null) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Cached location of block " + blk + " as " + pathinfo);
        }
        localDatanodeInfo.setBlockLocalPathInfo(blk, pathinfo);
      }
    } catch (IOException e) {
      localDatanodeInfo.resetDatanodeProxy(); // Reset proxy on error
      throw e;
    }
    return pathinfo;
  }
  

  private synchronized ClientDatanodeProtocol getDatanodeProxy(
        UserGroupInformation ugi, final DatanodeInfo node,
        final Configuration conf, final int socketTimeout,
        final boolean connectToDnViaHostname) throws IOException {
      if (proxy == null) {
        try {
          proxy = ugi.doAs(new PrivilegedExceptionAction<ClientDatanodeProtocol>() {
            @Override
            public ClientDatanodeProtocol run() throws Exception {
              return DFSUtil.createClientDatanodeProtocolProxy(node, conf,
                  socketTimeout, connectToDnViaHostname);
            }
          });
        } catch (InterruptedException e) {
          LOG.warn("encountered exception ", e);
        }
      }
      return proxy;
    }



org.apache.hadoop.hdfs.DFSUtil.java, line 958

  static ClientDatanodeProtocol createClientDatanodeProtocolProxy(
      DatanodeID datanodeid, Configuration conf, int socketTimeout,
      boolean connectToDnViaHostname) throws IOException {
    return new ClientDatanodeProtocolTranslatorPB(
        datanodeid, conf, socketTimeout, connectToDnViaHostname);
  }
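
The translator's constructor is where the real RPC stub is created. The sketch below is a paraphrase under the Hadoop 2.x API (not the verbatim source, and the helper name createPbProxy is made up): it binds ClientDatanodeProtocolPB to ProtobufRpcEngine and asks RPC.getProxy for a dynamic proxy, which the translator keeps as its rpcProxy field.

import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolPB;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.UserGroupInformation;

class PbProxySketch {
  // dnIpcAddr is the datanode's "host:ipcPort" string.
  static ClientDatanodeProtocolPB createPbProxy(String dnIpcAddr,
      Configuration conf, int socketTimeout) throws IOException {
    InetSocketAddress addr = NetUtils.createSocketAddr(dnIpcAddr);
    // Marshal all ClientDatanodeProtocolPB calls with ProtobufRpcEngine
    // (the datanode makes the same setProtocolEngine call on the server side).
    RPC.setProtocolEngine(conf, ClientDatanodeProtocolPB.class,
        ProtobufRpcEngine.class);
    // The returned object is a dynamic proxy: every method invoked on it is
    // encoded as a protobuf request and sent to the datanode's IPC port.
    return RPC.getProxy(ClientDatanodeProtocolPB.class,
        RPC.getProtocolVersion(ClientDatanodeProtocolPB.class),
        addr, UserGroupInformation.getCurrentUser(), conf,
        NetUtils.getDefaultSocketFactory(conf), socketTimeout);
  }
}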



Let's look at how ClientDatanodeProtocolTranslatorPB#getBlockLocalPathInfo wraps a getBlockLocalPathInfo call into a PB request:

@Override
  public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block,
      Token<BlockTokenIdentifier> token) throws IOException {
    GetBlockLocalPathInfoRequestProto req =
        GetBlockLocalPathInfoRequestProto.newBuilder()   // translate the request into PB form
        .setBlock(PBHelper.convert(block))
        .setToken(PBHelper.convert(token)).build();
    GetBlockLocalPathInfoResponseProto resp;
    try {
      resp = rpcProxy.getBlockLocalPathInfo(NULL_CONTROLLER, req);
    } catch (ServiceException e) {
      throw ProtobufHelper.getRemoteException(e);
    }
    return new BlockLocalPathInfo(PBHelper.convert(resp.getBlock()),
        resp.getLocalPath(), resp.getLocalMetaPath());
  }

The client hands the getBlockLocalPathInfo call on the datanode to the ClientDatanodeProtocol proxy; the proxy wraps the call into an RPC call, serializes it into bytes, and sends it to the datanode over a socket.
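
Under the hood, rpcProxy is not a hand-written class: ProtobufRpcEngine hands back a java.lang.reflect.Proxy whose invocation handler turns each method call into a protobuf-encoded request on the wire. The toy program below (not Hadoop code, just the mechanism) shows the idea:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Toy illustration: how a dynamic proxy turns an interface call into a
// "wire" request, which is what ProtobufRpcEngine's client-side invoker
// does with protobuf messages over a socket.
interface EchoProtocol {
  String echo(String msg);
}

public class ToyRpcClient {
  public static void main(String[] args) {
    EchoProtocol proxy = (EchoProtocol) Proxy.newProxyInstance(
        EchoProtocol.class.getClassLoader(),
        new Class<?>[] { EchoProtocol.class },
        new InvocationHandler() {
          @Override
          public Object invoke(Object p, Method method, Object[] arg) {
            // 1. Build a request: method name + serialized arguments
            //    (Hadoop builds a request header proto plus the request proto).
            String request = method.getName() + "(" + arg[0] + ")";
            // 2. "Send" it and return the reply; a real engine would write
            //    the bytes to a socket and block for the server's response.
            return "server saw: " + request;
          }
        });
    // Looks like a local call, but every invocation goes through invoke().
    System.out.println(proxy.echo("hello"));
  }
}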



Server-side call:
org.apache.hadoop.hdfs.server.datanode.DataNode, line 434
private void initIpcServer(Configuration conf) throws IOException {
   .....
     // Add all the RPC protocols that the Datanode implements    
    RPC.setProtocolEngine(conf, ClientDatanodeProtocolPB.class,
        ProtobufRpcEngine.class);
    ClientDatanodeProtocolServerSideTranslatorPB clientDatanodeProtocolXlator =
        new ClientDatanodeProtocolServerSideTranslatorPB(this);
    // The ClientDatanodeProtocolService is registered as soon as the datanode
    // starts; ipcServer acts as a server listening for every datanode method
    // request issued through a ClientDatanodeProtocol proxy.
    BlockingService service = ClientDatanodeProtocolService
        .newReflectiveBlockingService(clientDatanodeProtocolXlator);
    ipcServer = new RPC.Builder(conf)
        .setProtocol(ClientDatanodeProtocolPB.class)
        .setInstance(service)
        .setBindAddress(ipcAddr.getHostName())
        .setPort(ipcAddr.getPort())
        .setNumHandlers(
            conf.getInt(DFS_DATANODE_HANDLER_COUNT_KEY,
                DFS_DATANODE_HANDLER_COUNT_DEFAULT)).setVerbose(false)
        .setSecretManager(blockPoolTokenSecretManager).build();
   .....
  }
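
When a request for ClientDatanodeProtocolPB reaches this ipcServer, the ProtobufRpcEngine server side looks up the registered BlockingService, finds the method descriptor by the method name carried in the request header, and dispatches it reflectively. A simplified sketch of that dispatch step (an assumption about the engine's behaviour, not its verbatim source) looks like this, and it lands in the server-side translator shown next:

import com.google.protobuf.BlockingService;
import com.google.protobuf.Descriptors.MethodDescriptor;
import com.google.protobuf.Message;
import com.google.protobuf.ServiceException;

// Simplified server-side dispatch sketch.
class ProtoDispatchSketch {
  static Message dispatch(BlockingService service, String methodName,
      Message request) throws ServiceException {
    // Resolve the rpc method declared in the .proto service definition ...
    MethodDescriptor method =
        service.getDescriptorForType().findMethodByName(methodName);
    // ... and invoke it; protobuf's reflective BlockingService then calls
    // clientDatanodeProtocolXlator.getBlockLocalPathInfo(...). Passing null
    // as the RpcController mirrors the NULL_CONTROLLER used by the client.
    return service.callBlockingMethod(method, null, request);
  }
}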


org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB, line 107

@Override
  public GetBlockLocalPathInfoResponseProto getBlockLocalPathInfo(
      RpcController unused, GetBlockLocalPathInfoRequestProto request)
      throws ServiceException {
    BlockLocalPathInfo resp;
    try {
      resp = impl.getBlockLocalPathInfo(PBHelper.convert(request.getBlock()),
          PBHelper.convert(request.getToken()));
    } catch (IOException e) {
      throw new ServiceException(e);
    }
    return GetBlockLocalPathInfoResponseProto.newBuilder()
        .setBlock(PBHelper.convert(resp.getBlock()))
        .setLocalPath(resp.getBlockPath()).setLocalMetaPath(resp.getMetaPath())
        .build();
  }


org.apache.hadoop.hdfs.server.datanode.DataNode#getBlockLocalPathInfo   // where the call is actually executed

  public BlockLocalPathInfo getBlockLocalPathInfo(ExtendedBlock block,
      Token<BlockTokenIdentifier> token) throws IOException {
    ....
  }
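
Putting the whole chain together: the caller only ever sees the ClientDatanodeProtocol interface; the PB translation, the socket round trip and the server-side dispatch are all hidden behind it. A hypothetical end-to-end use would read like the fragment below (variable names reuse those from the snippets above; note createClientDatanodeProtocolProxy is package-private in DFSUtil, so in practice only HDFS-internal code such as BlockReaderLocalLegacy calls it):

    // client -> ClientDatanodeProtocolTranslatorPB -> rpcProxy -> socket
    //        -> ipcServer -> ClientDatanodeProtocolServerSideTranslatorPB
    //        -> DataNode.getBlockLocalPathInfo
    ClientDatanodeProtocol proxy = DFSUtil.createClientDatanodeProtocolProxy(
        node, conf, socketTimeout, connectToDnViaHostname);
    BlockLocalPathInfo pathinfo = proxy.getBlockLocalPathInfo(blk, token);
    System.out.println("block file: " + pathinfo.getBlockPath()
        + ", meta file: " + pathinfo.getMetaPath());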

        