Hadoop Source Code Analysis Notes (10): The DataNode -- Implementation of the Streaming Interface

Implementation of the Streaming Interface

Through the datanode storage (DataStorage) and the filesystem dataset (FSDataset), the datanode abstracts the physical storage of data blocks into services on objects. The streaming interface, another fundamental piece of datanode functionality, is built on top of this service.

To meet HDFS's design goal of providing high-throughput data access, the datanode implements HDFS file reads and writes through a TCP stream-based data access interface.

The datanode's streaming interface is implemented as a typical TCP server. For basic socket functionality, the JDK provides java.net.Socket and java.net.ServerSocket; a ServerSocket accepts client connection requests on a specific port. A Java program typically constructs a ServerSocket object, binds it to a free port, and then listens for inbound connections on that port through the accept() method.

When a client connects to the server, accept() returns a Socket object; the server uses that Socket to interact with the client until one side closes the connection. As a TCP server, the datanode's streaming interface implements these steps in the standard way.
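As a reminder of that canonical pattern, here is a minimal, generic sketch; the class name EchoServer, the port, and the payload are illustrative and unrelated to DataNode (the datanode's own accept() loop appears later in DataXceiverServer).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal sketch of the canonical TCP server pattern: bind, accept, one thread per connection.
public class EchoServer {
  public static void main(String[] args) throws IOException {
    ServerSocket ss = new ServerSocket();
    ss.bind(new InetSocketAddress(50010));     // bind to a free port
    while (true) {
      final Socket s = ss.accept();            // blocks until a client connects
      new Thread(new Runnable() {              // serve this client on its own thread
        public void run() {
          try {
            s.getOutputStream().write("hello\n".getBytes());
          } catch (IOException ignored) {
          } finally {
            try { s.close(); } catch (IOException ignored) {}
          }
        }
      }).start();
    }
  }
}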

In the DataNode.startDataNode() method, the datanode creates a ServerSocket object and binds it to the listening address and port given by the configuration item ${dfs.datanode.address}. It then calls ServerSocket.setReceiveBufferSize() to set the socket receive buffer to 128 KB (the system default is typically around 8 KB). This is an important parameter: the datanode has to provide a high-throughput data service and therefore needs a fairly large receive buffer. This buffer size applies to every Socket subsequently returned by accept().

startDataNode() also creates a thread group for the datanode's streaming-interface service threads, creates the DataXceiverServer, and runs the group's threads as daemon threads. Two Java threading concepts are involved here: thread groups and daemon threads.

A ThreadGroup represents a set of threads; through a thread group, Java allows a whole group of threads to be operated on at once. For example, ThreadGroup.interrupt() interrupts every thread in the group, and ThreadGroup.setDaemon(true) marks the group as a daemon group, so that it is destroyed automatically once its last thread exits.

Java threads fall into two categories: user threads and daemon threads. A daemon thread provides general-purpose support in the background, the garbage-collection thread being a typical example. The only difference from a user thread is that the Java virtual machine may exit once every remaining thread is a daemon thread; as long as at least one user thread is still alive, the JVM will not exit. Running the streaming-interface service threads as daemons in a daemon thread group simplifies how the datanode manages them. The startDataNode() code follows the short sketch below:
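A minimal, self-contained illustration of thread groups and daemon threads; the group and thread names are made up for this note and are not taken from DataNode.

// Demonstrates ThreadGroup operations and daemon-thread semantics.
public class DaemonGroupDemo {
  public static void main(String[] args) throws InterruptedException {
    ThreadGroup workers = new ThreadGroup("workers");
    workers.setDaemon(true);              // destroy the group automatically once it is empty

    Thread t = new Thread(workers, new Runnable() {
      public void run() {
        try {
          while (true) {
            Thread.sleep(50);             // ... background service loop ...
          }
        } catch (InterruptedException e) {
          // interrupted via the group: exit
        }
      }
    });
    t.setDaemon(true);                    // the JVM may exit even while this thread is running
    t.start();

    Thread.sleep(100);
    workers.interrupt();                  // operate on every thread in the group at once
  }
}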

       

/**
   * This method starts the data node with the specified conf.
   * 
   * @param conf - the configuration
   *  if conf's CONFIG_PROPERTY_SIMULATED property is set
   *  then a simulated storage based data node is created.
   * 
   * @param dataDirs - only for a non-simulated storage data node
   * @throws IOException
   * @throws MalformedObjectNameException 
   * @throws MBeanRegistrationException 
   * @throws InstanceAlreadyExistsException 
   */
  void startDataNode(Configuration conf, 
                     AbstractList<File> dataDirs, SecureResources resources
                     ) throws IOException {

    if(UserGroupInformation.isSecurityEnabled() && resources == null)
      throw new RuntimeException("Cannot start secure cluster without " +
      		"privileged resources.");
    
    this.secureResources = resources;
    // use configured nameserver & interface to get local hostname
    if (conf.get("slave.host.name") != null) {
      machineName = conf.get("slave.host.name");   
    }
    if (machineName == null) {
      machineName = DNS.getDefaultHost(
                                     conf.get("dfs.datanode.dns.interface","default"),
                                     conf.get("dfs.datanode.dns.nameserver","default"));
    }
    InetSocketAddress nameNodeAddr = NameNode.getServiceAddress(conf, true);
    
    this.socketTimeout =  conf.getInt("dfs.socket.timeout",
                                      HdfsConstants.READ_TIMEOUT);
    this.socketWriteTimeout = conf.getInt("dfs.datanode.socket.write.timeout",
                                          HdfsConstants.WRITE_TIMEOUT);
    /* Based on results on different platforms, we might need set the default 
     * to false on some of them. */
    this.transferToAllowed = conf.getBoolean("dfs.datanode.transferTo.allowed", 
                                             true);
    this.writePacketSize = conf.getInt("dfs.write.packet.size", 64*1024);

    InetSocketAddress socAddr = DataNode.getStreamingAddr(conf);
    int tmpPort = socAddr.getPort();
    storage = new DataStorage();
    // construct registration
    this.dnRegistration = new DatanodeRegistration(machineName + ":" + tmpPort);

    // connect to name node
    this.namenode = (DatanodeProtocol) 
      RPC.waitForProxy(DatanodeProtocol.class,
                       DatanodeProtocol.versionID,
                       nameNodeAddr, 
                       conf);
    // get version and id info from the name-node
    NamespaceInfo nsInfo = handshake();
    StartupOption startOpt = getStartupOption(conf);
    assert startOpt != null : "Startup option must be set.";
    
    boolean simulatedFSDataset = 
        conf.getBoolean("dfs.datanode.simulateddatastorage", false);
    if (simulatedFSDataset) {
        setNewStorageID(dnRegistration);
        dnRegistration.storageInfo.layoutVersion = FSConstants.LAYOUT_VERSION;
        dnRegistration.storageInfo.namespaceID = nsInfo.namespaceID;
        // it would have been better to pass storage as a parameter to
        // constructor below - need to augment ReflectionUtils used below.
        conf.set("StorageId", dnRegistration.getStorageID());
        try {
          //Equivalent of following (can't do because Simulated is in test dir)
          //  this.data = new SimulatedFSDataset(conf);
          this.data = (FSDatasetInterface) ReflectionUtils.newInstance(
              Class.forName("org.apache.hadoop.hdfs.server.datanode.SimulatedFSDataset"), conf);
        } catch (ClassNotFoundException e) {
          throw new IOException(StringUtils.stringifyException(e));
        }
    } else { // real storage
      // read storage info, lock data dirs and transition fs state if necessary
      storage.recoverTransitionRead(nsInfo, dataDirs, startOpt);
      // adjust
      this.dnRegistration.setStorageInfo(storage);
      // initialize data node internal structure
      this.data = new FSDataset(storage, conf);
    }
      
    // register datanode MXBean
    this.registerMXBean(conf); // register the MXBean for DataNode
    
    // Allow configuration to delay block reports to find bugs
    artificialBlockReceivedDelay = conf.getInt(
        "dfs.datanode.artificialBlockReceivedDelay", 0);

    // find free port or use privileged port provide
    ServerSocket ss;
    if(secureResources == null) {
      ss = (socketWriteTimeout > 0) ? 
        ServerSocketChannel.open().socket() : new ServerSocket();
      Server.bind(ss, socAddr, 0);
    } else {
      ss = resources.getStreamingSocket();
    }
    ss.setReceiveBufferSize(DEFAULT_DATA_SOCKET_SIZE); 
    // adjust machine name with the actual port
    tmpPort = ss.getLocalPort();
    selfAddr = new InetSocketAddress(ss.getInetAddress().getHostAddress(),
                                     tmpPort);
    this.dnRegistration.setName(machineName + ":" + tmpPort);
    LOG.info("Opened info server at " + tmpPort);
      
    this.threadGroup = new ThreadGroup("dataXceiverServer");
    this.dataXceiverServer = new Daemon(threadGroup, 
        new DataXceiverServer(ss, conf, this));
    this.threadGroup.setDaemon(true); // auto destroy when empty

    this.blockReportInterval =
      conf.getLong("dfs.blockreport.intervalMsec", BLOCKREPORT_INTERVAL);
    this.initialBlockReportDelay = conf.getLong("dfs.blockreport.initialDelay",
                                            BLOCKREPORT_INITIAL_DELAY)* 1000L; 
    if (this.initialBlockReportDelay >= blockReportInterval) {
      this.initialBlockReportDelay = 0;
      LOG.info("dfs.blockreport.initialDelay is greater than " +
        "dfs.blockreport.intervalMsec." + " Setting initial delay to 0 msec:");
    }
    this.heartBeatInterval = conf.getLong("dfs.heartbeat.interval", HEARTBEAT_INTERVAL) * 1000L;
    DataNode.nameNodeAddr = nameNodeAddr;

    //initialize periodic block scanner
    String reason = null;
    if (conf.getInt("dfs.datanode.scan.period.hours", 0) < 0) {
      reason = "verification is turned off by configuration";
    } else if ( !(data instanceof FSDataset) ) {
      reason = "verifcation is supported only with FSDataset";
    } 
    if ( reason == null ) {
      blockScanner = new DataBlockScanner(this, (FSDataset)data, conf);
    } else {
      LOG.info("Periodic Block Verification is disabled because " +
               reason + ".");
    }

    //create a servlet to serve full-file content
    InetSocketAddress infoSocAddr = DataNode.getInfoAddr(conf);
    String infoHost = infoSocAddr.getHostName();
    int tmpInfoPort = infoSocAddr.getPort();
    this.infoServer = (secureResources == null) 
       ? new HttpServer("datanode", infoHost, tmpInfoPort, tmpInfoPort == 0, 
           conf, SecurityUtil.getAdminAcls(conf, DFSConfigKeys.DFS_ADMIN))
       : new HttpServer("datanode", infoHost, tmpInfoPort, tmpInfoPort == 0,
           conf, SecurityUtil.getAdminAcls(conf, DFSConfigKeys.DFS_ADMIN),
           secureResources.getListener());
    if (conf.getBoolean("dfs.https.enable", false)) {
      boolean needClientAuth = conf.getBoolean("dfs.https.need.client.auth", false);
      InetSocketAddress secInfoSocAddr = NetUtils.createSocketAddr(conf.get(
          "dfs.datanode.https.address", infoHost + ":" + 0));
      Configuration sslConf = new Configuration(false);
      sslConf.addResource(conf.get("dfs.https.server.keystore.resource",
          "ssl-server.xml"));
      this.infoServer.addSslListener(secInfoSocAddr, sslConf, needClientAuth);
    }
    this.infoServer.addInternalServlet(null, "/streamFile/*", StreamFile.class);
    this.infoServer.addInternalServlet(null, "/getFileChecksum/*",
        FileChecksumServlets.GetServlet.class);

    this.infoServer.setAttribute("datanode", this);
    this.infoServer.setAttribute("datanode.blockScanner", blockScanner);
    this.infoServer.setAttribute(JspHelper.CURRENT_CONF, conf);
    this.infoServer.addServlet(null, "/blockScannerReport", 
                               DataBlockScanner.Servlet.class);

    if (WebHdfsFileSystem.isEnabled(conf, LOG)) {
      infoServer.addJerseyResourcePackage(DatanodeWebHdfsMethods.class
          .getPackage().getName() + ";" + Param.class.getPackage().getName(),
          WebHdfsFileSystem.PATH_PREFIX + "/*");
    }
    this.infoServer.start();
    // adjust info port
    this.dnRegistration.setInfoPort(this.infoServer.getPort());
    myMetrics = DataNodeInstrumentation.create(conf,
                                               dnRegistration.getStorageID());
    
    // set service-level authorization security policy
    if (conf.getBoolean(
          ServiceAuthorizationManager.SERVICE_AUTHORIZATION_CONFIG, false)) {
      ServiceAuthorizationManager.refresh(conf, new HDFSPolicyProvider());
    }

    // BlockTokenSecretManager is created here, but it shouldn't be
    // used until it is initialized in register().
    this.blockTokenSecretManager = new BlockTokenSecretManager(false,
        0, 0);
    //init ipc server
    InetSocketAddress ipcAddr = NetUtils.createSocketAddr(
        conf.get("dfs.datanode.ipc.address"));
    ipcServer = RPC.getServer(this, ipcAddr.getHostName(), ipcAddr.getPort(), 
        conf.getInt("dfs.datanode.handler.count", 3), false, conf,
        blockTokenSecretManager);
    dnRegistration.setIpcPort(ipcServer.getListenerAddress().getPort());

    LOG.info("dnRegistration = " + dnRegistration);
  }

The DataXceiverServer created in DataNode.startDataNode() implements the accept() loop. The key points of its implementation are as follows.

The member variable childSockets holds every Socket currently open for data transfer; these sockets are used by DataXceiver objects. The member variable maxXceiverCount is the maximum number of clients the datanode's streaming interface can serve concurrently; it is set by the configuration item ${dfs.datanode.max.xcievers} and defaults to 256. On a busy cluster this value should be raised appropriately.

The accept() call in DataXceiverServer.run() blocks waiting for client connections. When a new request arrives, the server spawns a new thread, that is, it creates a DataXceiver object to serve that client. DataXceiverServer therefore uses a thread-per-client model: a new thread is created for each connection and interacts with that client, while the main loop of DataXceiverServer simply keeps listening for connection requests through accept(). This model suits the datanode's streaming interface well; it favors bulk data processing and improves throughput. The code is as follows:

        

/**
 * Server used for receiving/sending a block of data.
 * This is created to listen for requests from clients or 
 * other DataNodes.  This small server does not use the 
 * Hadoop IPC mechanism.
 */
class DataXceiverServer implements Runnable, FSConstants {
  public static final Log LOG = DataNode.LOG;
  
  ServerSocket ss;
  DataNode datanode;
  // Record all sockets opend for data transfer
  Map<Socket, Socket> childSockets = Collections.synchronizedMap(
                                       new HashMap<Socket, Socket>());
  
  /**
   * Maximal number of concurrent xceivers per node.
   * Enforcing the limit is required in order to avoid data-node
   * running out of memory.
   */
  static final int MAX_XCEIVER_COUNT = 256;
  int maxXceiverCount = MAX_XCEIVER_COUNT;

  /** A manager to make sure that cluster balancing does not
   * take too much resources.
   * 
   * It limits the number of block moves for balancing and
   * the total amount of bandwidth they can use.
   */
  static class BlockBalanceThrottler extends BlockTransferThrottler {
   private int numThreads;
   
   /**Constructor
    * 
    * @param bandwidth Total amount of bandwidth can be used for balancing 
    */
   private BlockBalanceThrottler(long bandwidth) {
     super(bandwidth);
     LOG.info("Balancing bandwith is "+ bandwidth + " bytes/s");
   }
   
   /** Check if the block move can start. 
    * 
    * Return true if the thread quota is not exceeded and 
    * the counter is incremented; False otherwise.
    */
   synchronized boolean acquire() {
     if (numThreads >= Balancer.MAX_NUM_CONCURRENT_MOVES) {
       return false;
     }
     numThreads++;
     return true;
   }
   
   /** Mark that the move is completed. The thread counter is decremented. */
   synchronized void release() {
     numThreads--;
   }
  }

  BlockBalanceThrottler balanceThrottler;
  
  /**
   * We need an estimate for block size to check if the disk partition has
   * enough space. For now we set it to be the default block size set
   * in the server side configuration, which is not ideal because the
   * default block size should be a client-size configuration. 
   * A better solution is to include in the header the estimated block size,
   * i.e. either the actual block size or the default block size.
   */
  long estimateBlockSize;
  
  
  DataXceiverServer(ServerSocket ss, Configuration conf, 
      DataNode datanode) {
    
    this.ss = ss;
    this.datanode = datanode;
    
    this.maxXceiverCount = conf.getInt("dfs.datanode.max.xcievers",
        MAX_XCEIVER_COUNT);
    
    this.estimateBlockSize = conf.getLong("dfs.block.size", DEFAULT_BLOCK_SIZE);
    
    //set up parameter for cluster balancing
    this.balanceThrottler = new BlockBalanceThrottler(
      conf.getLong("dfs.balance.bandwidthPerSec", 1024L*1024));
  }

  /**
   */
  public void run() {
    while (datanode.shouldRun) {
      try {
        Socket s = ss.accept();
        s.setTcpNoDelay(true);
        new Daemon(datanode.threadGroup, 
            new DataXceiver(s, datanode, this)).start();
      } catch (SocketTimeoutException ignored) {
        // wake up to see if should continue to run
      } catch (AsynchronousCloseException ace) {
          LOG.warn(datanode.dnRegistration + ":DataXceiveServer:"
                  + StringUtils.stringifyException(ace));
          datanode.shouldRun = false;
      } catch (IOException ie) {
        LOG.warn(datanode.dnRegistration + ":DataXceiveServer: IOException due to:"
                                 + StringUtils.stringifyException(ie));
      } catch (Throwable te) {
        LOG.error(datanode.dnRegistration + ":DataXceiveServer: Exiting due to:" 
                                 + StringUtils.stringifyException(te));
        datanode.shouldRun = false;
      }
    }
    try {
      ss.close();
    } catch (IOException ie) {
      LOG.warn(datanode.dnRegistration + ":DataXceiveServer: Close exception due to: "
                               + StringUtils.stringifyException(ie));
    }
    LOG.info("Exiting DataXceiveServer");
  }
  
  void kill() {
    assert datanode.shouldRun == false :
      "shoudRun should be set to false before killing";
    try {
      this.ss.close();
    } catch (IOException ie) {
      LOG.warn(datanode.dnRegistration + ":DataXceiveServer.kill(): " 
                              + StringUtils.stringifyException(ie));
    }

    // close all the sockets that were accepted earlier
    synchronized (childSockets) {
      for (Iterator<Socket> it = childSockets.values().iterator();
           it.hasNext();) {
        Socket thissock = it.next();
        try {
          thissock.close();
        } catch (IOException e) {
        }
      }
    }
  }
}

DataXceiverServer only handles the client's connection request; the actual request processing and data exchange are delegated to DataXceiver. Each DataXceiver object owns its own thread, and that object and its thread handle exactly one client request. DataXceiver implements the Runnable interface; in its run() method it first performs the operations common to all streaming-interface requests and then dispatches to the appropriate DataXceiver member method according to the operation code. The run() method is as follows:

      

/**
 * Thread for processing incoming/outgoing data stream.
 */
class DataXceiver implements Runnable, FSConstants {
  public static final Log LOG = DataNode.LOG;
  static final Log ClientTraceLog = DataNode.ClientTraceLog;
  
  Socket s;
  final String remoteAddress; // address of remote side
  final String localAddress;  // local address of this daemon
  DataNode datanode;
  DataXceiverServer dataXceiverServer;
  
  public DataXceiver(Socket s, DataNode datanode, 
      DataXceiverServer dataXceiverServer) {
    
    this.s = s;
    this.datanode = datanode;
    this.dataXceiverServer = dataXceiverServer;
    dataXceiverServer.childSockets.put(s, s);
    remoteAddress = s.getRemoteSocketAddress().toString();
    localAddress = s.getLocalSocketAddress().toString();
    LOG.debug("Number of active connections is: " + datanode.getXceiverCount());
  }

  /**
   * Read/write data from/to the DataXceiveServer.
   */
  public void run() {
    DataInputStream in=null; 
    try {
      in = new DataInputStream(
          new BufferedInputStream(NetUtils.getInputStream(s), 
                                  SMALL_BUFFER_SIZE));
      short version = in.readShort();
      if ( version != DataTransferProtocol.DATA_TRANSFER_VERSION ) {
        throw new IOException( "Version Mismatch" );
      }
      boolean local = s.getInetAddress().equals(s.getLocalAddress());
      byte op = in.readByte();
      // Make sure the xciver count is not exceeded
      int curXceiverCount = datanode.getXceiverCount();
      if (curXceiverCount > dataXceiverServer.maxXceiverCount) {
        throw new IOException("xceiverCount " + curXceiverCount
                              + " exceeds the limit of concurrent xcievers "
                              + dataXceiverServer.maxXceiverCount);
      }
      long startTime = DataNode.now();
      switch ( op ) {
      case DataTransferProtocol.OP_READ_BLOCK:
        readBlock( in );
        datanode.myMetrics.addReadBlockOp(DataNode.now() - startTime);
        if (local)
          datanode.myMetrics.incrReadsFromLocalClient();
        else
          datanode.myMetrics.incrReadsFromRemoteClient();
        break;
      case DataTransferProtocol.OP_WRITE_BLOCK:
        writeBlock( in );
        datanode.myMetrics.addWriteBlockOp(DataNode.now() - startTime);
        if (local)
          datanode.myMetrics.incrWritesFromLocalClient();
        else
          datanode.myMetrics.incrWritesFromRemoteClient();
        break;
      case DataTransferProtocol.OP_REPLACE_BLOCK: // for balancing purpose; send to a destination
        replaceBlock(in);
        datanode.myMetrics.addReplaceBlockOp(DataNode.now() - startTime);
        break;
      case DataTransferProtocol.OP_COPY_BLOCK:
            // for balancing purpose; send to a proxy source
        copyBlock(in);
        datanode.myMetrics.addCopyBlockOp(DataNode.now() - startTime);
        break;
      case DataTransferProtocol.OP_BLOCK_CHECKSUM: //get the checksum of a block
        getBlockChecksum(in);
        datanode.myMetrics.addBlockChecksumOp(DataNode.now() - startTime);
        break;
      default:
        throw new IOException("Unknown opcode " + op + " in data stream");
      }
    } catch (Throwable t) {
      LOG.error(datanode.dnRegistration + ":DataXceiver",t);
    } finally {
      LOG.debug(datanode.dnRegistration + ":Number of active connections is: "
                               + datanode.getXceiverCount());
      IOUtils.closeStream(in);
      IOUtils.closeSocket(s);
      dataXceiverServer.childSockets.remove(s);
    }
  }
  .....
}


As shown above, DataXceiver.run() first creates the input stream and then checks the version. In the streaming interface introduced earlier, the first field of every request frame is the version number, so DataXceiver can check the request version uniformly in run(). A failed version check throws an exception and triggers the final cleanup: closing the input stream and the Socket. The second check run() performs is whether this request exceeds the datanode's capacity, so as to safeguard the datanode's quality of service. After these two checks pass, DataXceiver.run() reads the operation code and calls the corresponding method; a block read, for example, is handed off to readBlock() for further processing.

Together, DataXceiverServer and DataXceiver implement the datanode's streaming interface. By adopting a thread-per-client approach, they satisfy the interface's particular requirements of bulk reads and writes and high throughput.

Reading Data

A client reads an HDFS file through the streaming interface using operation code 81 (OP_READ_BLOCK). A read request contains the following fields (a hedged client-side sketch of serializing this frame follows the list):

blockId (block ID): identifies the data block to read; the datanode uses it to locate the block.

generationStamp (generation stamp): used for a version check, so that stale or wrong data is not read.

startOffset (offset): the position within the block at which the requested data starts.

length (data length): the number of bytes the client wants to read.

clientName (client name): the name of the client issuing the read request.

accessToken (access token): related to the security features.
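The sketch below shows how a client might serialize this request frame. It is a hedged illustration, not DFSClient code: the class ReadRequestSketch, the method sendReadRequest(), and its parameters are made up for this note, but the field order mirrors exactly what DataXceiver.run() and readBlock() read back on the datanode side.

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

import org.apache.hadoop.hdfs.protocol.DataTransferProtocol;
import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.security.token.Token;

// Hypothetical client-side helper: serializes an OP_READ_BLOCK request frame.
class ReadRequestSketch {
  static void sendReadRequest(Socket sock, long blockId, long generationStamp,
      long startOffset, long length, String clientName,
      Token<BlockTokenIdentifier> accessToken, long writeTimeout) throws IOException {
    DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
        NetUtils.getOutputStream(sock, writeTimeout)));
    out.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION); // version, checked in run()
    out.writeByte(DataTransferProtocol.OP_READ_BLOCK);          // opcode 81
    out.writeLong(blockId);                                     // block ID
    out.writeLong(generationStamp);                             // generation stamp
    out.writeLong(startOffset);                                 // offset within the block
    out.writeLong(length);                                      // number of bytes to read
    Text.writeString(out, clientName);                          // client name
    accessToken.write(out);                                     // access token
    out.flush();
  }
}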

DataXceiver.readBlock() provides the skeleton of the datanode's read-side streaming interface. At the beginning of the method it reads the request fields above from the socket's input stream and constructs a block sender, a BlockSender object; it then sends the data through that object, and once the data has been sent it performs some cleanup.

The BlockSender constructor performs a series of checks; only when they all pass is the object created successfully. Otherwise an exception propagates back to readBlock(), which returns an error code to the client and ends the request. DataXceiver.readBlock() is as follows:

        

/**
   * Read a block from the disk.
   * @param in The stream to read from
   * @throws IOException
   */
  private void readBlock(DataInputStream in) throws IOException {
    //
    // Read in the header
    //
    long blockId = in.readLong();          
    Block block = new Block( blockId, 0 , in.readLong());

    long startOffset = in.readLong();
    long length = in.readLong();
    String clientName = Text.readString(in);
    Token<BlockTokenIdentifier> accessToken = new Token<BlockTokenIdentifier>();
    accessToken.readFields(in);
    OutputStream baseStream = NetUtils.getOutputStream(s, 
        datanode.socketWriteTimeout);
    DataOutputStream out = new DataOutputStream(
                 new BufferedOutputStream(baseStream, SMALL_BUFFER_SIZE));
    
    if (datanode.isBlockTokenEnabled) {
      try {
        datanode.blockTokenSecretManager.checkAccess(accessToken, null, block,
            BlockTokenSecretManager.AccessMode.READ);
      } catch (InvalidToken e) {
        try {
          out.writeShort(DataTransferProtocol.OP_STATUS_ERROR_ACCESS_TOKEN);
          out.flush();
          throw new IOException("Access token verification failed, for client "
              + remoteAddress + " for OP_READ_BLOCK for block " + block);
        } finally {
          IOUtils.closeStream(out);
        }
      }
    }
    // send the block
    BlockSender blockSender = null;
    final String clientTraceFmt =
      clientName.length() > 0 && ClientTraceLog.isInfoEnabled()
        ? String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress,
            "%d", "HDFS_READ", clientName, "%d", 
            datanode.dnRegistration.getStorageID(), block, "%d")
        : datanode.dnRegistration + " Served block " + block + " to " +
            s.getInetAddress();
    try {
      try {
        blockSender = new BlockSender(block, startOffset, length,
            true, true, false, datanode, clientTraceFmt);
      } catch(IOException e) {
        out.writeShort(DataTransferProtocol.OP_STATUS_ERROR);
        throw e;
      }

      out.writeShort(DataTransferProtocol.OP_STATUS_SUCCESS); // send op status
      long read = blockSender.sendBlock(out, baseStream, null); // send data

      if (blockSender.isBlockReadFully()) {
        // See if client verification succeeded. 
        // This is an optional response from client.
        try {
          if (in.readShort() == DataTransferProtocol.OP_STATUS_CHECKSUM_OK  && 
              datanode.blockScanner != null) {
            datanode.blockScanner.verifiedByClient(block);
          }
        } catch (IOException ignored) {}
      }
      
      datanode.myMetrics.incrBytesRead((int) read);
      datanode.myMetrics.incrBlocksRead();
    } catch ( SocketException ignored ) {
      // Its ok for remote side to close the connection anytime.
      datanode.myMetrics.incrBlocksRead();
    } catch ( IOException ioe ) {
      /* What exactly should we do here?
       * Earlier version shutdown() datanode if there is disk error.
       */
      LOG.warn(datanode.dnRegistration +  ":Got exception while serving " + 
          block + " to " +
                s.getInetAddress() + ":\n" + 
                StringUtils.stringifyException(ioe) );
      throw ioe;
    } finally {
      IOUtils.closeStream(out);
      IOUtils.closeStream(blockSender);
    }
  }


One more aspect of readBlock() is worth mentioning. In the final part of the code above, if the client has read the data successfully and verified its checksums, it sends back an additional response code, OP_STATUS_CHECKSUM_OK, to notify the datanode. If the datanode has just sent a complete block, it can use this response code to notify the block scanner and have the block marked as verified by the client. The datanode uses the block scanner to scan its blocks periodically, so that block corruption is discovered as early as possible and the correctness of the stored data is maintained. A block scan has to read both the block data and its checksum file and verify them, which is fairly resource-intensive; when a client has already performed this verification, the datanode can skip the duplicated work and reduce system load.
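As a hedged, client-side illustration (the variable names sockOut, gotEndOfBlock, and checksumsAllOk are made up for this note), the optional trailing status described above would be sent roughly like this; it is what readBlock() picks up with in.readShort():

// After reading the whole block, the client may report successful verification.
if (gotEndOfBlock && checksumsAllOk) {
  sockOut.writeShort(DataTransferProtocol.OP_STATUS_CHECKSUM_OK);
  sockOut.flush();
}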

The block sender does most of the work of a read request, including: preparation, sending the response header for the read request, sending the response data packets, and cleanup.

The preparation is mostly done by the BlockSender constructor. After assigning a series of member variables, the constructor prepares the block's checksum information: it opens the checksum (metadata) file and reads the checksum method and the checksum chunk size from the file header. The relevant BlockSender constructor code is as follows:

       

BlockSender(Block block, long startOffset, long length,
              boolean corruptChecksumOk, boolean chunkOffsetOK,
              boolean verifyChecksum, DataNode datanode, String clientTraceFmt)
      throws IOException {
    try {
      this.block = block;
      this.chunkOffsetOK = chunkOffsetOK;
      this.corruptChecksumOk = corruptChecksumOk;
      this.verifyChecksum = verifyChecksum;
      this.blockLength = datanode.data.getVisibleLength(block);
      this.transferToAllowed = datanode.transferToAllowed;
      this.clientTraceFmt = clientTraceFmt;

      if ( !corruptChecksumOk || datanode.data.metaFileExists(block) ) {
        checksumIn = new DataInputStream(
                new BufferedInputStream(datanode.data.getMetaDataInputStream(block),
                                        BUFFER_SIZE));

        // read and handle the common header here. For now just a version
       BlockMetadataHeader header = BlockMetadataHeader.readHeader(checksumIn);
       short version = header.getVersion();

        if (version != FSDataset.METADATA_VERSION) {
          LOG.warn("Wrong version (" + version + ") for metadata file for "
              + block + " ignoring ...");
        }
        checksum = header.getChecksum();
      } else {
        LOG.warn("Could not find metadata file for " + block);
        // This only decides the buffer size. Use BUFFER_SIZE?
        checksum = DataChecksum.newDataChecksum(DataChecksum.CHECKSUM_NULL,
            16 * 1024);
      }

      /* If bytesPerChecksum is very large, then the metadata file
       * is mostly corrupted. For now just truncate bytesPerchecksum to
       * blockLength.
       */        
      bytesPerChecksum = checksum.getBytesPerChecksum();
      if (bytesPerChecksum > 10*1024*1024 && bytesPerChecksum > blockLength){
        checksum = DataChecksum.newDataChecksum(checksum.getChecksumType(),
                                   Math.max((int)blockLength, 10*1024*1024));
        bytesPerChecksum = checksum.getBytesPerChecksum();        
      }
      checksumSize = checksum.getChecksumSize();

      if (length < 0) {
        length = blockLength;
      }

      endOffset = blockLength;
      if (startOffset < 0 || startOffset > endOffset
          || (length + startOffset) > endOffset) {
        String msg = " Offset " + startOffset + " and length " + length
        + " don't match block " + block + " ( blockLen " + endOffset + " )";
        LOG.warn(datanode.dnRegistration + ":sendBlock() : " + msg);
        throw new IOException(msg);
      }

      
      offset = (startOffset - (startOffset % bytesPerChecksum));
      if (length >= 0) {
        // Make sure endOffset points to end of a checksumed chunk.
        long tmpLen = startOffset + length;
        if (tmpLen % bytesPerChecksum != 0) {
          tmpLen += (bytesPerChecksum - tmpLen % bytesPerChecksum);
        }
        if (tmpLen < endOffset) {
          endOffset = tmpLen;
        }
      }

      // seek to the right offsets
      if (offset > 0) {
        long checksumSkip = (offset / bytesPerChecksum) * checksumSize;
        // note blockInStream is  seeked when created below
        if (checksumSkip > 0) {
          // Should we use seek() for checksum file as well?
          IOUtils.skipFully(checksumIn, checksumSkip);
        }
      }
      seqno = 0;

      blockIn = datanode.data.getBlockInputStream(block, offset); // seek to offset
      memoizedBlock = new MemoizedBlock(blockIn, blockLength, datanode.data, block);
    } catch (IOException ioe) {
      IOUtils.closeStream(this);
      IOUtils.closeStream(blockIn);
      throw ioe;
    }
  }


The code above determines which data must be read from the block file and from the checksum file. Take the block file as an example: the read request supplies two parameters, the offset startOffset and the data length, but because checksums are organized per chunk, the datanode must return every whole chunk covering the requested range so that the client can verify the data.
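The following small, self-contained example works through that chunk-alignment arithmetic with concrete, made-up numbers (bytesPerChecksum = 512, startOffset = 700, length = 100); it simply mirrors the offset/endOffset computation in the constructor above.

// Illustrates how a read range is widened to whole checksum chunks.
public class ChunkAlignmentDemo {
  public static void main(String[] args) {
    long bytesPerChecksum = 512, startOffset = 700, length = 100;
    long blockLength = 64L * 1024 * 1024;          // illustrative block size

    // Round the start down to a chunk boundary: 700 - (700 % 512) = 512.
    long offset = startOffset - (startOffset % bytesPerChecksum);

    // Round the end up to a chunk boundary: 700 + 100 = 800 -> 1024.
    long tmpLen = startOffset + length;
    if (tmpLen % bytesPerChecksum != 0) {
      tmpLen += bytesPerChecksum - tmpLen % bytesPerChecksum;
    }
    long endOffset = Math.min(tmpLen, blockLength);

    // The datanode sends the single covering chunk [512, 1024) and its checksum,
    // even though the client asked only for bytes [700, 800).
    System.out.println("send [" + offset + ", " + endOffset + ")");
  }
}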

         "零拷贝"数据传输

The datanode is an I/O-intensive Java application. To take full advantage of the performance gains offered by Java NIO, BlockSender supports two ways of sending data: a normal mode and an NIO mode. The normal mode uses the stream-based Java APIs to implement the datanode's streaming interface; the NIO mode uses the transferTo() method from Java NIO to implement the same interface efficiently with zero-copy data transfer.

Because BlockSender uses NIO's transferTo() for efficient "zero-copy" transfer, the block data does not pass through the datanode's user-space buffers. This introduces a problem: the datanode loses the ability to verify the data while the client is reading it. BlockSender therefore also supports transfer combined with checksum verification, which is used by the block scanner. The other remedy is to let the client verify the data and report the result: in the cleanup phase of DataXceiver.readBlock(), the datanode accepts the client's additional response code and thereby obtains the client's verification result.
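The sketch below is a minimal, self-contained demonstration of the underlying NIO primitive, FileChannel.transferTo(); it is not the datanode's actual wrapper (BlockSender goes through Hadoop's socket output stream), and the file name and address are illustrative.

import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

// "Zero-copy" send: bytes move from the page cache to the socket without
// passing through user space.
public class ZeroCopySend {
  public static void main(String[] args) throws IOException {
    FileChannel file = new FileInputStream("blk_1234").getChannel();
    SocketChannel sock = SocketChannel.open(new InetSocketAddress("localhost", 50010));
    long pos = 0, remaining = file.size();
    while (remaining > 0) {
      long sent = file.transferTo(pos, remaining, sock);  // kernel-level copy
      pos += sent;
      remaining -= sent;
    }
    file.close();
    sock.close();
  }
}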

Writing Data

The write path of the streaming interface is far more complex than the read path. The operation code a client uses to write HDFS file data is 80 (OP_WRITE_BLOCK), and the request contains the following main fields:

blockId (block ID): identifies the block being written; the datanode uses it to locate the block.

generationStamp (generation stamp): for the version check.

pipelineSize (size of the data pipeline): the number of datanodes participating in the write.

isRecovery (recovery flag): whether this write is part of an error-recovery procedure.

clientName (client name): the name of the client issuing the write request.

hasSrcDataNode (source flag): whether the request carries source-datanode information; if true, srcDataNode follows.

srcDataNode (source info): of type DatanodeInfo, information about the datanode that initiated the write request.

numTargets (size of the target list): how many downstream datanodes this datanode still has to push data to.

targets (target list): the list of downstream datanodes for this datanode.

accessToken (access token): related to the security features.

checksum (checksum information): of type DataChecksum; describes how the subsequent data packets are checksummed.

These fields are read at the start of writeBlock() and stored in the corresponding local variables; a block receiver, a BlockReceiver object, is then constructed. In the BlockReceiver constructor, output streams are opened for the block file and the checksum file through FSDataset.writeToBlock(), which, after a series of checks, returns the output streams to those two files. The code is as follows:

       

 BlockReceiver(Block block, DataInputStream in, String inAddr,
                String myAddr, boolean isRecovery, String clientName, 
                DatanodeInfo srcDataNode, DataNode datanode) throws IOException {
    try{
      this.block = block;
      this.in = in;
      this.inAddr = inAddr;
      this.myAddr = myAddr;
      this.isRecovery = isRecovery;
      this.clientName = clientName;
      this.offsetInBlock = 0;
      this.srcDataNode = srcDataNode;
      this.datanode = datanode;
      this.checksum = DataChecksum.newDataChecksum(in);
      this.bytesPerChecksum = checksum.getBytesPerChecksum();
      this.checksumSize = checksum.getChecksumSize();
      //
      // Open local disk out
      //
      streams = datanode.data.writeToBlock(block, isRecovery,
                              clientName == null || clientName.length() == 0);
      this.finalized = false;
      if (streams != null) {
        this.out = streams.dataOut;
        this.checksumOut = new DataOutputStream(new BufferedOutputStream(
                                                  streams.checksumOut, 
                                                  SMALL_BUFFER_SIZE));
        // If this block is for appends, then remove it from periodic
        // validation.
        if (datanode.blockScanner != null && isRecovery) {
          datanode.blockScanner.deleteBlock(block);
        }
      }
    } catch (BlockAlreadyExistsException bae) {
      throw bae;
    } catch(IOException ioe) {
      IOUtils.closeStream(this);
      cleanupBlock();
      
      // check if there is a disk error
      IOException cause = FSDataset.getCauseIfDiskError(ioe);
      DataNode.LOG.warn("IOException in BlockReceiver constructor. Cause is ",
          cause);
      
      if (cause != null) { // possible disk error
        ioe = cause;
        datanode.checkDiskError(ioe); // may throw an exception here
      }
      
      throw ioe;
    }
  }


In the data pipeline, HDFS file data flows downstream while acknowledgements of the write flow back upstream, so two Socket objects are needed here. The object s communicates with the upstream side of the pipeline; its input and output streams are in and replyOut. The Socket toward the downstream side is mirrorSock, with output stream mirrorOut and input stream mirrorIn.

If the current datanode is not the last node of the pipeline, writeBlock() uses the first entry of the target list to establish a Socket connection to the next datanode. Once the connection is up, it issues a write request to the next datanode over the output stream mirrorOut; apart from the target-list size and target-list fields, which change accordingly, all other fields are identical to the request read from upstream. The writeBlock() method is as follows:

      

/**
   * Write a block to disk.
   * 
   * @param in The stream to read from
   * @throws IOException
   */
  private void writeBlock(DataInputStream in) throws IOException {
    DatanodeInfo srcDataNode = null;
    LOG.debug("writeBlock receive buf size " + s.getReceiveBufferSize() +
              " tcp no delay " + s.getTcpNoDelay());
    //
    // Read in the header
    //
    Block block = new Block(in.readLong(), 
        dataXceiverServer.estimateBlockSize, in.readLong());
    LOG.info("Receiving block " + block + 
             " src: " + remoteAddress +
             " dest: " + localAddress);
    int pipelineSize = in.readInt(); // num of datanodes in entire pipeline
    boolean isRecovery = in.readBoolean(); // is this part of recovery?
    String client = Text.readString(in); // working on behalf of this client
    boolean hasSrcDataNode = in.readBoolean(); // is src node info present
    if (hasSrcDataNode) {
      srcDataNode = new DatanodeInfo();
      srcDataNode.readFields(in);
    }
    int numTargets = in.readInt();
    if (numTargets < 0) {
      throw new IOException("Mislabelled incoming datastream.");
    }
    DatanodeInfo targets[] = new DatanodeInfo[numTargets];
    for (int i = 0; i < targets.length; i++) {
      DatanodeInfo tmp = new DatanodeInfo();
      tmp.readFields(in);
      targets[i] = tmp;
    }
    Token<BlockTokenIdentifier> accessToken = new Token<BlockTokenIdentifier>();
    accessToken.readFields(in);
    DataOutputStream replyOut = null;   // stream to prev target
    replyOut = new DataOutputStream(
                   NetUtils.getOutputStream(s, datanode.socketWriteTimeout));
    if (datanode.isBlockTokenEnabled) {
      try {
        datanode.blockTokenSecretManager.checkAccess(accessToken, null, block, 
            BlockTokenSecretManager.AccessMode.WRITE);
      } catch (InvalidToken e) {
        try {
          if (client.length() != 0) {
            replyOut.writeShort((short)DataTransferProtocol.OP_STATUS_ERROR_ACCESS_TOKEN);
            Text.writeString(replyOut, datanode.dnRegistration.getName());
            replyOut.flush();
          }
          throw new IOException("Access token verification failed, for client "
              + remoteAddress + " for OP_WRITE_BLOCK for block " + block);
        } finally {
          IOUtils.closeStream(replyOut);
        }
      }
    }

    DataOutputStream mirrorOut = null;  // stream to next target
    DataInputStream mirrorIn = null;    // reply from next target
    Socket mirrorSock = null;           // socket to next target
    BlockReceiver blockReceiver = null; // responsible for data handling
    String mirrorNode = null;           // the name:port of next target
    String firstBadLink = "";           // first datanode that failed in connection setup
    short mirrorInStatus = (short)DataTransferProtocol.OP_STATUS_SUCCESS;
    try {
      // open a block receiver and check if the block does not exist
      blockReceiver = new BlockReceiver(block, in, 
          s.getRemoteSocketAddress().toString(),
          s.getLocalSocketAddress().toString(),
          isRecovery, client, srcDataNode, datanode);

      //
      // Open network conn to backup machine, if 
      // appropriate
      //
      if (targets.length > 0) {
        InetSocketAddress mirrorTarget = null;
        // Connect to backup machine
        mirrorNode = targets[0].getName();
        mirrorTarget = NetUtils.createSocketAddr(mirrorNode);
        mirrorSock = datanode.newSocket();
        try {
          int timeoutValue = datanode.socketTimeout +
                             (HdfsConstants.READ_TIMEOUT_EXTENSION * numTargets);
          int writeTimeout = datanode.socketWriteTimeout + 
                             (HdfsConstants.WRITE_TIMEOUT_EXTENSION * numTargets);
          NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
          mirrorSock.setSoTimeout(timeoutValue);
          mirrorSock.setSendBufferSize(DEFAULT_DATA_SOCKET_SIZE);
          mirrorOut = new DataOutputStream(
             new BufferedOutputStream(
                         NetUtils.getOutputStream(mirrorSock, writeTimeout),
                         SMALL_BUFFER_SIZE));
          mirrorIn = new DataInputStream(NetUtils.getInputStream(mirrorSock));

          // Write header: Copied from DFSClient.java!
          mirrorOut.writeShort( DataTransferProtocol.DATA_TRANSFER_VERSION );
          mirrorOut.write( DataTransferProtocol.OP_WRITE_BLOCK );
          mirrorOut.writeLong( block.getBlockId() );
          mirrorOut.writeLong( block.getGenerationStamp() );
          mirrorOut.writeInt( pipelineSize );
          mirrorOut.writeBoolean( isRecovery );
          Text.writeString( mirrorOut, client );
          mirrorOut.writeBoolean(hasSrcDataNode);
          if (hasSrcDataNode) { // pass src node information
            srcDataNode.write(mirrorOut);
          }
          mirrorOut.writeInt( targets.length - 1 );
          for ( int i = 1; i < targets.length; i++ ) {
            targets[i].write( mirrorOut );
          }
          accessToken.write(mirrorOut);

          blockReceiver.writeChecksumHeader(mirrorOut);
          mirrorOut.flush();

          // read connect ack (only for clients, not for replication req)
          if (client.length() != 0) {
            mirrorInStatus = mirrorIn.readShort();
            firstBadLink = Text.readString(mirrorIn);
            if (LOG.isDebugEnabled() || mirrorInStatus != DataTransferProtocol.OP_STATUS_SUCCESS) {
              LOG.info("Datanode " + targets.length +
                       " got response for connect ack " +
                       " from downstream datanode with firstbadlink as " +
                       firstBadLink);
            }
          }

        } catch (IOException e) {
          if (client.length() != 0) {
            replyOut.writeShort((short)DataTransferProtocol.OP_STATUS_ERROR);
            Text.writeString(replyOut, mirrorNode);
            replyOut.flush();
          }
          IOUtils.closeStream(mirrorOut);
          mirrorOut = null;
          IOUtils.closeStream(mirrorIn);
          mirrorIn = null;
          IOUtils.closeSocket(mirrorSock);
          mirrorSock = null;
          if (client.length() > 0) {
            throw e;
          } else {
            LOG.info(datanode.dnRegistration + ":Exception transfering block " +
                     block + " to mirror " + mirrorNode +
                     ". continuing without the mirror.\n" +
                     StringUtils.stringifyException(e));
          }
        }
      }

      // send connect ack back to source (only for clients)
      if (client.length() != 0) {
        if (LOG.isDebugEnabled() || mirrorInStatus != DataTransferProtocol.OP_STATUS_SUCCESS) {
          LOG.info("Datanode " + targets.length +
                   " forwarding connect ack to upstream firstbadlink is " +
                   firstBadLink);
        }
        replyOut.writeShort(mirrorInStatus);
        Text.writeString(replyOut, firstBadLink);
        replyOut.flush();
      }

      // receive the block and mirror to the next target
      String mirrorAddr = (mirrorSock == null) ? null : mirrorNode;
      blockReceiver.receiveBlock(mirrorOut, mirrorIn, replyOut,
                                 mirrorAddr, null, targets.length);

      // if this write is for a replication request (and not
      // from a client), then confirm block. For client-writes,
      // the block is finalized in the PacketResponder.
      if (client.length() == 0) {
        datanode.notifyNamenodeReceivedBlock(block, DataNode.EMPTY_DEL_HINT);
        LOG.info("Received block " + block + 
                 " src: " + remoteAddress +
                 " dest: " + localAddress +
                 " of size " + block.getNumBytes());
      }

      if (datanode.blockScanner != null) {
        datanode.blockScanner.addBlock(block);
      }
      
    } catch (IOException ioe) {
      LOG.info("writeBlock " + block + " received exception " + ioe);
      throw ioe;
    } finally {
      // close all opened streams
      IOUtils.closeStream(mirrorOut);
      IOUtils.closeStream(mirrorIn);
      IOUtils.closeStream(replyOut);
      IOUtils.closeSocket(mirrorSock);
      IOUtils.closeStream(blockReceiver);
    }
  }


DataXceiver delegates the handling of the write data packets to BlockReceiver.receiveBlock(). After these packets have been processed successfully, the subsequent cleanup includes, for replication requests, calling DataNode.notifyNamenodeReceivedBlock() to notify the namenode that the block has been received (for client writes, the block is finalized in the PacketResponder).

The PacketResponder thread

When BlockReceiver handles a client's write request, its receiveBlock() method receives data packets, verifies them, and saves the data into the local block file and checksum file; if the node is in the middle of the pipeline, it also forwards each packet to the next datanode. At the same time, the datanode must receive packet acknowledgements from downstream and forward them upstream. This involves read operations on the two socket input streams mentioned above (in and mirrorIn), so the block receiver introduces a PacketResponder thread that works alongside the thread running BlockReceiver: one receives acknowledgements from downstream while the other receives data from upstream. Why are two threads needed? Reading from an input stream returns immediately if data is available and blocks otherwise. If a single thread read the two input streams in turn, the streams would become coupled: a client that does not send data to the datanode for a long time could block the reception of downstream acknowledgements; at the other extreme, even if the client writes a large amount of data, it cannot be processed while the thread is waiting on mirrorIn, which hurts throughput.

The PacketResponder thread separates the handling of the two input streams: it receives acknowledgements from the downstream datanode and, at the appropriate time, sends acknowledgements upstream. "Appropriate" here means two conditions:

1. The current datanode has finished processing the packet successfully.

2. (When the datanode sits in the middle of the pipeline) the current datanode has received the downstream datanode's acknowledgement for the packet.

When both conditions hold, it means that the current datanode and all datanodes further down the pipeline have finished processing this packet.

Since it is the BlockReceiver thread that processes packets on the current node, it must notify the PacketResponder thread of the result through some mechanism, and the PacketResponder thread then carries out the remaining work.

With these conditions in mind, the implementation of PacketResponder is easy to follow. The code is as follows:

      

/**
   * Processed responses from downstream datanodes in the pipeline
   * and sends back replies to the originator.
   */
  class PacketResponder implements Runnable, FSConstants {   

    //packet waiting for ack
    private LinkedList<Packet> ackQueue = new LinkedList<Packet>(); 
    private volatile boolean running = true;
    private Block block;
    DataInputStream mirrorIn;   // input from downstream datanode
    DataOutputStream replyOut;  // output to upstream datanode
    private int numTargets;     // number of downstream datanodes including myself
    private BlockReceiver receiver; // The owner of this responder.
    private Thread receiverThread; // the thread that spawns this responder

    public String toString() {
      return "PacketResponder " + numTargets + " for Block " + this.block;
    }

    PacketResponder(BlockReceiver receiver, Block b, DataInputStream in, 
                    DataOutputStream out, int numTargets,
                    Thread receiverThread) {
      this.receiver = receiver;
      this.block = b;
      mirrorIn = in;
      replyOut = out;
      this.numTargets = numTargets;
      this.receiverThread = receiverThread;
    }

    /**
     * enqueue the seqno that is still be to acked by the downstream datanode.
     * @param seqno
     * @param lastPacketInBlock
     */
    synchronized void enqueue(long seqno, boolean lastPacketInBlock) {
      if (running) {
        LOG.debug("PacketResponder " + numTargets + " adding seqno " + seqno +
                  " to ack queue.");
        ackQueue.addLast(new Packet(seqno, lastPacketInBlock));
        notifyAll();
      }
    }
     ......
     }


The member variable ackQueue in PacketResponder holds the write-request packets that the BlockReceiver thread has already processed. Each time BlockReceiver.receivePacket() finishes a packet, it calls PacketResponder.enqueue() to put the corresponding information (an inner-class BlockReceiver.Packet, containing the packet's sequence number and a flag for whether it is the last packet of the block) into ackQueue; the entries in ackQueue are then consumed by PacketResponder's run() method. This is a classic producer-consumer pattern; a simplified sketch of the consumer side follows.
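The run() body is elided above ("......"); the following is a hedged, much-simplified sketch of the consumer side of that producer-consumer pair, meant to sit inside the PacketResponder class shown earlier. The real PacketResponder.run() also finalizes the block on the last packet, notifies the namenode, and handles timeouts and error propagation; the exact ack wire format is deliberately left out here.

  // Simplified sketch of PacketResponder.run(): wait for the producer, then ack upstream.
  public void run() {
    while (running) {
      try {
        Packet pkt;
        synchronized (this) {
          while (running && ackQueue.size() == 0) {
            wait();                       // woken by enqueue()'s notifyAll()
          }
          if (!running) {
            break;
          }
          pkt = ackQueue.getFirst();      // condition 1: BlockReceiver finished this packet
        }
        if (numTargets > 0) {
          // condition 2: block here until the downstream datanode's ack for this
          // packet has been read from mirrorIn (wire format omitted in this sketch).
        }
        // Both conditions hold: forward an ack for pkt.seqno upstream on replyOut
        // (again, the exact reply format is omitted here).
        synchronized (this) {
          ackQueue.removeFirst();         // done with this packet
        }
      } catch (Exception e) {
        running = false;
      }
    }
  }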

 


       
Copyright notice: parts of this article are excerpted from the book "Hadoop技术内幕: 深入解析Hadoop Common和HDFS架构设计与实现原理" by Cai Bin and Chen Xiangping. These are study notes intended only for technical exchange; the commercial copyright remains with the original authors. Readers are encouraged to buy the book for further study, and reposts should retain the attribution to the original authors. Thank you!
