The following are some rough notes on the DataNode startup process, put together while tracking down an HDFS problem. (Working notes)
HDFS version: 2.7.4
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
static DataNode::main()
->static secureMain()
->static createDataNode() // create and start a DataNode instance
->static instantiateDataNode() // parse configuration, check the data directories, and construct the DataNode (startDataNode() starts the data node with the specified conf)
->DataNode::runDatanodeDaemon()
->DataNode::blockPoolManager.startAll() // start the per-namespace block pool services (NameNode registration and heartbeats)
->DataNode::dataXceiverServer.start() // the DataNode's main data read/write service thread (dataXceiverServer is an instance of Daemon(ThreadGroup, DataXceiverServer))
->DataNode::ipcServer.start() // IPC server for InterDatanodeProtocol TODO
->DataNode::startPlugins(conf) // plugin services TODO
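All of the services above are started as daemon threads inside the DataNode's single ThreadGroup (wrapping a Runnable this way is what Hadoop's Daemon class does), so they can be tracked and torn down together. A minimal stand-alone sketch of that pattern using only the JDK; the class and method names here are illustrative, not Hadoop's actual code:

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the Daemon-in-ThreadGroup pattern used by
// DataNode.runDatanodeDaemon(): each service (dataXceiverServer,
// ipcServer, ...) runs as a daemon thread in one shared ThreadGroup.
public class DaemonGroupSketch {
    public static int startServices(int count) {
        ThreadGroup group = new ThreadGroup("dataNodeThreads"); // cf. datanode.threadGroup
        CountDownLatch started = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            // Each "service" here just signals the latch and exits.
            Thread t = new Thread(group, started::countDown, "service-" + i);
            t.setDaemon(true); // Hadoop's Daemon wrapper sets this too
            t.start();
        }
        try {
            started.await(); // wait until every service thread has run
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }
}
```

Grouping the threads lets a shutdown path interrupt the whole group at once instead of tracking each service thread individually.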
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiverServer.java
DataNode::dataXceiverServer.start() // starting the Daemon thread eventually invokes DataXceiverServer.run()
->DataXceiverServer.run() // listen on the service port; accept each incoming connection and hand it to a DataXceiver worker thread
->new Daemon(datanode.threadGroup, DataXceiver.create(peer, datanode, this)).start()
->DataXceiver::run() // worker thread: read the request from the connection and wrap it as an op
->DataXceiver::readOp() // read the op type, the opcode byte in the request header
->Receiver::processOp(op) // dispatch the request by op type
->Receiver::opWriteBlock(in) // assuming the op is WRITE_BLOCK
->DataXceiver::writeBlock()
->BlockReceiver::receiveBlock()
->BlockReceiver::receivePacket() // read the packet data from the connection and process it
->packetReceiver.receiveNextPacket(in) // receive the next request packet
->packetReceiver.mirrorPacketTo(mirrorOut) // forward the received packet to the next DataNode in the pipeline
->flushOrSync(syncBlock) // hand the data over to the operating system (left to the filesystem's page cache, or synced to disk if syncBlock is set)
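The write path above (readOp dispatch, then a loop of receiveNextPacket / mirrorPacketTo / flushOrSync) can be sketched with plain JDK streams. The wire format, constants, and names below are simplified illustrations, not HDFS's actual DataTransferProtocol:

```java
import java.io.*;

// Illustrative sketch of the DataXceiver write path: read the opcode,
// then relay length-prefixed packets from the upstream connection to
// both local storage and the mirror (the next DataNode in the pipeline).
// Hypothetical wire format -- not HDFS's real packet layout.
public class WritePipelineSketch {
    static final byte WRITE_BLOCK = 80; // hypothetical opcode constant

    // Returns the number of packets relayed.
    public static int receiveBlock(DataInputStream in, OutputStream disk,
                                   OutputStream mirror) throws IOException {
        if (in.readByte() != WRITE_BLOCK) { // readOp(): dispatch on opcode
            throw new IOException("unexpected op");
        }
        int packets = 0;
        while (true) {
            int len = in.readInt();      // receiveNextPacket(): packet header
            if (len == 0) break;         // zero-length packet marks end of block
            byte[] payload = new byte[len];
            in.readFully(payload);
            mirror.write(payload);       // mirrorPacketTo(mirrorOut)
            disk.write(payload);         // local write
            packets++;
        }
        disk.flush();                    // flushOrSync(): hand data to the OS
        mirror.flush();
        return packets;
    }

    // Self-contained demo: one 3-byte packet followed by the end marker.
    public static int demo() {
        try {
            ByteArrayOutputStream req = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(req);
            out.writeByte(WRITE_BLOCK);
            out.writeInt(3);
            out.write(new byte[]{1, 2, 3});
            out.writeInt(0); // end of block
            return receiveBlock(
                new DataInputStream(new ByteArrayInputStream(req.toByteArray())),
                new ByteArrayOutputStream(), new ByteArrayOutputStream());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Because each packet is mirrored downstream before the block is finalized, the pipeline streams data through all replicas concurrently rather than copying the whole block replica by replica.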