dfs namenode -format causes the datanode to fail to connect

PROBLEM

hadoop@potr134pc26:/usr/local/hadoop/bin$ rm -r
/usr/local/hadoop-datastore/
----NOW THERE IS NO HADOOP-DATASTORE FOLDER LOCALLY
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop namenode -format
10/02/10 16:33:50 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = potr134pc26/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build =
http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r
810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
Re-format filesystem in
/home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/name ? (Y or N) Y
10/02/10 16:33:54 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
10/02/10 16:33:54 INFO namenode.FSNamesystem: supergroup=supergroup
10/02/10 16:33:54 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/02/10 16:33:54 INFO common.Storage: Image file of size 96 saved in 0
seconds.
10/02/10 16:33:54 INFO common.Storage: Storage directory
/home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/name has been successfully
formatted.
10/02/10 16:33:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at potr134pc26/127.0.0.1
************************************************************/
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./start-all.sh
starting namenode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-potr134pc26.out
localhost: starting datanode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-potr134pc26.out
localhost: starting secondarynamenode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-potr134pc26.out
starting jobtracker, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-potr134pc26.out
localhost: starting tasktracker, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-potr134pc26.out

hadoop@potr134pc26:/usr/local/hadoop/bin$ jps
27461 Jps
27354 TaskTracker
27158 SecondaryNameNode
27250 JobTracker
26923 NameNode
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: %
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

----(AT THIS POINT, WHEN I CHECKED THE LOG, THE DATANODE STILL WASN'T UP
AND RUNNING)----------
mkdir /usr/local/hadoop-datastore
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: no datanode to stop
localhost: stopping secondarynamenode
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./start-all.sh
starting namenode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-potr134pc26.out
localhost: starting datanode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-potr134pc26.out
localhost: starting secondarynamenode, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-potr134pc26.out
starting jobtracker, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-potr134pc26.out
localhost: starting tasktracker, logging to
/usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-potr134pc26.out
hadoop@potr134pc26:/usr/local/hadoop/bin$ jps
28038 NameNode
28536 Jps
28154 DataNode
28365 JobTracker
28470 TaskTracker
28272 SecondaryNameNode

./hadoop dfs -copyFromLocal /home/hadoop/Desktop/*.txt txtinput
copyFromLocal: `txtinput': specified destination directory doest not exist
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfs -mkdir txtinput
hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfs -copyFromLocal
/home/hadoop/Desktop/*.txt txtinput
10/02/10 16:44:36 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/hadoop/txtinput/20417.txt could only be replicated to 0 nodes,
instead of 1
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

  at org.apache.hadoop.ipc.Client.call(Client.java:739)
  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
  at $Proxy0.addBlock(Unknown Source)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  at $Proxy0.addBlock(Unknown Source)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

10/02/10 16:44:36 WARN hdfs.DFSClient: Error Recovery for block null bad
datanode[0] nodes == null
10/02/10 16:44:36 WARN hdfs.DFSClient: Could not get block locations.
Source file "/user/hadoop/txtinput/20417.txt" - Aborting...
10/02/10 16:44:36 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/hadoop/txtinput/7ldvc10.txt could only be replicated to 0 nodes,
instead of 1
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

  at org.apache.hadoop.ipc.Client.call(Client.java:739)
  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
  at $Proxy0.addBlock(Unknown Source)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  at $Proxy0.addBlock(Unknown Source)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
  at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)

10/02/10 16:44:36 WARN hdfs.DFSClient: Error Recovery for block null bad
datanode[0] nodes == null
10/02/10 16:44:36 WARN hdfs.DFSClient: Could not get block locations.
Source file "/user/hadoop/txtinput/7ldvc10.txt" - Aborting...
copyFromLocal: java.io.IOException: File /user/hadoop/txtinput/20417.txt
could only be replicated to 0 nodes, instead of 1
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

java.io.IOException: File /user/hadoop/txtinput/7ldvc10.txt could only be
replicated to 0 nodes, instead of 1
  at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
  at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:396)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)


hadoop@potr134pc26:/usr/local/hadoop/bin$ ./hadoop dfsadmin -report
Configured Capacity: 0 (0 KB)
Present Capacity: 0 (0 KB)
DFS Remaining: 0 (0 KB)
DFS Used: 0 (0 KB)
DFS Used%: %
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

On Wed, 10 Feb 2010, E. Sammer wrote:

> On 2/10/10 3:57 PM, Nick Klosterman wrote:
>> It appears I have incompatible namespaceIDs. Any thoughts on how to
>> resolve that?
>> This is what the full datanodes log is saying:
>
> Was this data node part of another DFS cluster at some point? It looks like
> you've reformatted the name node since the datanode last connected to it. The
> datanode will refuse to connect to a namenode with a different namespaceId
> because the data node would have blocks (possibly with the same ids) from
> another cluster. It's a stop-gap safety mechanism. You'd have to destroy the
> data directory on the data node to "reinitialize" it so that it picks up the new
> namespaceId from the name node, at which point it will be allowed to connect.
>
> Just to be clear, this will also kill all data that was stored on the data
> node, so don't do this lightly.
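
A minimal sketch of the recovery E. Sammer describes, assuming dfs.data.dir
resolves to the same hadoop-datastore tree as the name directory in the format
log above (the exact path is an assumption, not confirmed in the thread; verify
the property in conf/hdfs-site.xml before deleting anything):

./stop-all.sh
# WARNING: this destroys every block stored on this datanode. It is safe
# here only because the namenode was just reformatted and HDFS holds no
# data worth keeping.
rm -rf /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data
./start-all.sh
# On restart the datanode writes a fresh VERSION file, adopts the
# namenode's new namespaceID, and is allowed to register.
./hadoop dfsadmin -report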

### Answer 1:

The Namenode and Datanode are two key components of the Hadoop Distributed File System (HDFS). The Namenode is the HDFS master node, responsible for managing the filesystem namespace and controlling block replication. The Datanode is an HDFS worker node, responsible for storing and managing data blocks. The Namenode and Datanodes communicate through a heartbeat mechanism to keep the data blocks reliable and consistent. A Hadoop cluster usually has many Datanodes but only one Namenode.

### Answer 2:

Hadoop is a distributed system for processing massive data sets, and it consists of several components. The namenode and datanode are two of the key ones.

1. Namenode

The namenode is a very important HDFS component. It is the master node and manages the metadata for every data block in HDFS: each block's location, size, replica count, the nodes that store it, and so on. The namenode is therefore the central manager of the entire HDFS instance.

The namenode keeps this metadata in memory for fast access, and also persists it to local disk for backup and recovery. Beyond that, the namenode handles block replication and block-size management, deciding how many replicas each block gets and where they are placed.

2. Datanode

The datanode is the other important HDFS component. It is a worker node and stores the actual HDFS data blocks. When a client writes data, the data is first split into blocks, which are stored across multiple datanodes. When a client reads data, the blocks are read from the datanodes and streamed back to the client.

Unlike the namenode, a datanode stores the actual block data rather than metadata. Datanodes also send periodic heartbeats to the namenode so that it can track the state of their blocks.

Because data is spread across many datanodes, HDFS scales well and tolerates failures: even if a datanode fails, the data can be fetched from other datanodes and restored to full health.

In short, the namenode and datanode are two essential HDFS components that cooperate to implement all of HDFS's data storage and management. Their high availability, scalability, and fault tolerance are why HDFS is widely used for large-scale data processing and is a leading choice for enterprise data storage.

### Answer 3:

Hadoop is a distributed computing system with a master/slave architecture made up of two kinds of nodes: namenodes and datanodes. The namenode is the master node of the Hadoop cluster and manages the cluster's metadata, such as file information, block information, and datanode information; the datanodes are the worker nodes, used to store file data blocks and to process data.

In Hadoop, a file is split into a number of data blocks stored on different datanodes, and the namenode maintains the information about those blocks and which datanodes hold them; this is what implements file storage and retrieval. When a client needs to read a file, it sends a request to the namenode, which returns the list of datanodes holding the file's blocks; the client then requests the blocks from those datanodes and reassembles them into the complete file.

Each datanode holds replicas of a portion of the files in HDFS. When a replica is corrupted or lost, the namenode tells other datanodes to re-replicate that block, keeping the file data reliable. Datanodes can also perform computation, running MapReduce tasks or other jobs.

In summary, the namenode is the management node of a Hadoop cluster, responsible for the cluster's metadata, while the datanodes are the worker nodes, responsible for storing file blocks and processing data. Their cooperation lets the Hadoop distributed file system store and retrieve files efficiently and keeps the whole cluster highly available and reliable.
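
As a hedged illustration of the namespaceID handshake discussed in this thread,
the IDs recorded on disk can be compared directly; the paths below assume the
storage layout from the format log above and the current/VERSION layout of
Hadoop 0.20:

grep namespaceID /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
grep namespaceID /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
# If the two IDs differ, the datanode logs "Incompatible namespaceIDs"
# and refuses to register, which is exactly the failure seen above.

The read path described in Answer 3 can also be observed with fsck, which asks
the namenode for each file's block locations:

./hadoop fsck /user/hadoop/txtinput -files -blocks -locations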