The usual errors when starting Hadoop: today only one of the two datanodes came up. The datanode log showed:
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-406105891-192.168.31.129-1524063772
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.UnregisteredNodeException):993 (Datanode Uuid be0c2693-7746-418e-a862-a3d2ce52d81c) service to hd-master/192.168.31.129:9000 is shutting down Datanode DatanodeRegistration(192.168.31.130:50010, datanodeUuid=be0c2693-7746-418e-a862-a3d2ce52d81c, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-ae02fd9e-59c8-4610-913d-3413684b69d7;nsid=1724398700;c=0) is attempting to report storage ID be0c2693-7746-418e-a862-a3d2ce52d81c. Node 192.168.31.131:50010 is expected to serve this storage.
The exception means the namenode received a registration for a storage ID it already associates with another node (192.168.31.131 was "expected to serve this storage"). Looking under hdfs/data/current, there were VERSION files and BP-...... folders from different points in time.
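A quick way to inspect that directory is to list it by modification time and read the VERSION file, which holds the storage identity the namenode checks at registration. The sketch below runs against a throwaway directory it creates itself; on a real node, DATA_DIR would be whatever dfs.datanode.data.dir points at (the path here is an assumption).

```shell
# Simulated datanode data dir; on a real node use your dfs.datanode.data.dir value
DATA_DIR=$(mktemp -d)/hdfs/data
mkdir -p "$DATA_DIR/current/BP-111-192.168.31.129-1524000000" \
         "$DATA_DIR/current/BP-222-192.168.31.129-1524063772"
# VERSION carries the identity fields seen in the log (values abbreviated here)
printf 'datanodeUuid=be0c2693-7746-418e-a862-a3d2ce52d81c\nclusterID=CID-ae02fd9e-59c8-4610-913d-3413684b69d7\n' \
  > "$DATA_DIR/current/VERSION"
ls -lt "$DATA_DIR/current"       # newest first: VERSION plus the BP-* block-pool dirs
cat "$DATA_DIR/current/VERSION"  # datanodeUuid / clusterID the namenode compares against
```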
So: first, stop Hadoop;
keep only the VERSION file and the BP-...... folder matching this startup, and delete all the rest;
start Hadoop again, and the datanode process appears.
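The cleanup steps above can be sketched as follows. To stay safe to run, this version builds a throwaway directory with two hypothetical block-pool folders and removes every one except the folder named in KEEP; on a real node you would stop HDFS first (e.g. stop-dfs.sh), set KEEP to the block pool from the current startup, and restart afterwards.

```shell
# Throwaway stand-in for the datanode data dir (real path is an assumption)
DATA_DIR=$(mktemp -d)/hdfs/data
CUR="$DATA_DIR/current"
mkdir -p "$CUR/BP-old-192.168.31.129-1523000000" \
         "$CUR/BP-new-192.168.31.129-1524063772"
KEEP="BP-new-192.168.31.129-1524063772"   # the BP-* folder from this startup
for d in "$CUR"/BP-*; do
  # delete every block-pool directory except the one we want to keep
  [ "$(basename "$d")" = "$KEEP" ] || rm -rf "$d"
done
ls "$CUR"   # only the kept block pool remains; then start HDFS again on the real node
```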