This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'

1. Exception Log

2016-02-14 14:19:32,664 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = vhost37/172.30.134.77
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.3.0-cdh5.0.0
STARTUP_MSG:   classpath = /e3base/hadoop/etc/hadoop:/e3base/hadoop/share/hadoop/common/lib/avro-1.7.5-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/e3base/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/e3base/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/e3base/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/e3base/hadoop/share/hadoop/common/lib/xz-1.0.jar:/e3base/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/common/lib/asm-3.2.jar:/e3base/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/e3base/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/e3base/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/e3base/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/e3base/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/e3base/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/e3base/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/e3base/hadoop/share/hadoop/common/lib/activation-1.1.jar:/e3base/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/e3base/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/e3base/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/e3base/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/e3base/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/e3base/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/e3base/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/e3base/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/e3base/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/e3base/hadoop/share/hadoop/common/lib/zookeeper-3.4.5-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/e3base/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/e3base/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/e3base/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/e3base/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/e3bas
e/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/e3base/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/e3base/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/e3base/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/e3base/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/e3base/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/e3base/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/e3base/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/e3base/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/e3base/hadoop/share/hadoop/common/lib/hadoop-annotations-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/e3base/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/e3base/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/e3base/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/e3base/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/e3base/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/e3base/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/e3base/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/e3base/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/e3base/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/e3base/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/e3base/hadoop/share/hadoop/common/lib/hadoop-auth-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/e3base/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/e3base/hadoop/share/hadoop/common/hadoop-nfs-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/common/hadoop-common-2.3.0-cdh5.0.0-tests.jar:/e3base/hadoop/share/hadoop/common/hadoop-common-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/hdfs:/e3base/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-cli-1
.2.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/e3base/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/e3base/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/e3base/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/e3base/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/e3base/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/e3base/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/e3base/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/e3base/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.3.0-cdh5.0.0-tests.jar:/e3base/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/e3base/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/e3base/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/e3base/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/e3base/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/e3base/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/e3base/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/e3base/hadoop/sha
re/hadoop/yarn/lib/activation-1.1.jar:/e3base/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/e3base/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/e3base/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/e3base/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/e3base/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/e3base/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/e3base/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/e3base/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/e3base/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/e3base/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/e3base/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/e3base/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/e3base/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.3.0-cdh5.0.0.jar:/e3base/hado
op/share/hadoop/yarn/hadoop-yarn-common-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/avro-1.7.5-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/e3base/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-
mapreduce-client-core-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0-tests.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0.jar:/e3base/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.3.0-cdh5.0.0.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = git://github.sf.cloudera.com/CDH/cdh.git -r 8e266e052e423af592871e2dfe09d54c03f6a0e8; compiled by 'jenkins' on 2014-03-28T04:29Z
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
2016-02-14 14:19:32,674 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-02-14 14:19:32,677 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2016-02-14 14:19:32,947 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-02-14 14:19:33,047 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-02-14 14:19:33,047 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2016-02-14 14:19:33,322 WARN org.apache.hadoop.conf.Configuration: bad conf file: element not <property>
2016-02-14 14:19:33,364 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2016-02-14 14:19:33,365 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://vhost37:8570
2016-02-14 14:19:33,417 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-02-14 14:19:33,423 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2016-02-14 14:19:33,435 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-02-14 14:19:33,438 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-02-14 14:19:33,438 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-02-14 14:19:33,438 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-02-14 14:19:33,472 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-02-14 14:19:33,473 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-02-14 14:19:33,493 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 8570
2016-02-14 14:19:33,494 INFO org.mortbay.log: jetty-6.1.26
2016-02-14 14:19:33,694 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2016-02-14 14:19:33,738 INFO org.mortbay.log: Started SelectChannelConnector@vhost37:8570
2016-02-14 14:19:33,809 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2016-02-14 14:19:33,841 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
HostSet(
)
2016-02-14 14:19:33,841 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
HostSet(
)
2016-02-14 14:19:33,845 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2016-02-14 14:19:33,845 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2016-02-14 14:19:33,847 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2016-02-14 14:19:33,848 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-02-14 14:19:33,850 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2016-02-14 14:19:33,850 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2016-02-14 14:19:33,856 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2016-02-14 14:19:33,857 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2016-02-14 14:19:33,863 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = e3base (auth:SIMPLE)
2016-02-14 14:19:33,863 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2016-02-14 14:19:33,863 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2016-02-14 14:19:33,864 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: drmcluster
2016-02-14 14:19:33,864 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2016-02-14 14:19:33,866 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2016-02-14 14:19:34,033 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2016-02-14 14:19:34,033 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-02-14 14:19:34,033 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2016-02-14 14:19:34,033 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2016-02-14 14:19:34,035 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2016-02-14 14:19:34,042 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2016-02-14 14:19:34,042 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-02-14 14:19:34,042 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2016-02-14 14:19:34,042 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2016-02-14 14:19:34,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-02-14 14:19:34,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2016-02-14 14:19:34,044 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2016-02-14 14:19:34,045 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2016-02-14 14:19:34,046 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-02-14 14:19:34,048 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2016-02-14 14:19:34,048 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-02-14 14:19:34,048 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2016-02-14 14:19:34,048 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2016-02-14 14:19:34,051 INFO org.apache.hadoop.hdfs.server.namenode.AclConfigFlag: ACLs enabled? false
2016-02-14 14:19:34,059 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /e3base/hdata1/nn/in_use.lock acquired by nodename 2724@vhost37
2016-02-14 14:19:34,161 WARN org.apache.hadoop.conf.Configuration: bad conf file: element not <property>
2016-02-14 14:19:34,171 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /e3base/hdata2/nn/in_use.lock acquired by nodename 2724@vhost37
2016-02-14 14:19:34,262 WARN org.apache.hadoop.conf.Configuration: bad conf file: element not <property>
2016-02-14 14:19:34,317 WARN org.apache.hadoop.conf.Configuration: bad conf file: element not <property>
2016-02-14 14:19:34,431 WARN org.apache.hadoop.conf.Configuration: bad conf file: element not <property>
2016-02-14 14:19:34,718 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2016-02-14 14:19:34,751 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2016-02-14 14:19:34,751 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /e3base/hdata1/nn/current/fsimage_0000000000000000000
2016-02-14 14:19:34,756 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@79b04c7 expecting start txid #1
2016-02-14 14:19:34,756 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da, http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da
2016-02-14 14:19:34,759 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da, http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da' to transaction ID 1
2016-02-14 14:19:34,759 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da' to transaction ID 1
2016-02-14 14:19:34,773 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:e3base (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da failed with status code 403
Response message:
This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'
2016-02-14 14:19:34,774 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da failed with status code 403

Response message:

This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'
2016-02-14 14:19:34,774 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da failed with status code 403
Response message:
This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:414)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:402)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
        at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:442)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:401)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:143)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:243)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:180)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:802)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:662)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:275)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:879)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:440)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:652)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1286)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1352)
2016-02-14 14:19:34,777 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Got error reading edit log input stream http://vhost46:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da; failing over to edit log http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 0; expected file to go up to 1
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:194)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:180)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:802)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:662)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:275)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:879)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:440)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:652)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1286)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1352)
2016-02-14 14:19:34,778 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da' to transaction ID 1
2016-02-14 14:19:34,781 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:e3base (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da failed with status code 403
Response message:
This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'
2016-02-14 14:19:34,781 ERROR org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception initializing http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da
org.apache.hadoop.hdfs.server.namenode.TransferFsImage$HttpGetFailedException: Fetch of http://vhost14:8480/getJournal?jid=drmcluster&segmentTxId=1&storageInfo=-55%3A1590940929%3A0%3ACID-ed77dc50-846a-4f07-b244-776718de48da failed with status code 403
Response message:
This node has namespaceId '1902198261 and clusterId 'CID-0ce44319-7032-49ec-83e5-4df0782a0d4e' but the requesting node expected '1590940929' and 'CID-ed77dc50-846a-4f07-b244-776718de48da'
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:414)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:402)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
        at org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:442)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:401)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:143)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
        at org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:243)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:180)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:802)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:662)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:275)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:879)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:638)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:440)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:652)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1286)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1352)
2016-02-14 14:19:34,782 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Error replaying edit log at offset 0.  Expected transaction ID was 1
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 0; expected file to go up to 1
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:194)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:140)
        at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
        at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:83)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:180)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:802)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:662)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:275)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:879)


2. Exception Analysis

In a Hadoop 2.0 HA cluster you normally configure two NameNodes (one active, one standby). Under normal conditions, after running start-all.sh, jps on each NameNode host shows the following processes:

NN1:

[e3base@vhost45 ~]$ jps
26019 NameNode
26352 DFSZKFailoverController
27146 Jps

NN2:

[e3base@vhost37 logs]$ jps
4511 Jps
3446 NameNode

If the NameNode process on one of the two hosts fails to start with the exception above, HA is broken on that machine: the cluster can no longer tell which node is active and which is standby. In that case inspect the log with tail -1000 hadoop-e3base-namenode-vhost37.log; the output is the exception shown in section 1. The 403 response itself means the JournalNode's stored namespaceId/clusterId differ from the ones this NameNode expects, so the NameNode cannot replay the edit log from the JournalNodes.
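Based on the error text, the mismatch can be confirmed by comparing the namespaceID and clusterID lines in the VERSION files on the NameNode and JournalNode hosts. A minimal sketch, assuming POSIX shell; the helper name is made up, the NameNode path comes from the log above, and the JournalNode path depends on your dfs.journalnode.edits.dir:

```shell
# compare_ids: report whether two Hadoop VERSION files agree on
# namespaceID and clusterID. A hypothetical helper, not part of Hadoop.
compare_ids() {
  nn_ns=$(grep '^namespaceID=' "$1" | cut -d= -f2)
  jn_ns=$(grep '^namespaceID=' "$2" | cut -d= -f2)
  nn_cid=$(grep '^clusterID=' "$1" | cut -d= -f2)
  jn_cid=$(grep '^clusterID=' "$2" | cut -d= -f2)
  if [ "$nn_ns" = "$jn_ns" ] && [ "$nn_cid" = "$jn_cid" ]; then
    echo "MATCH"
  else
    echo "MISMATCH: namenode=$nn_ns/$nn_cid journalnode=$jn_ns/$jn_cid"
  fi
}

# Example (paths are assumptions; adjust to dfs.namenode.name.dir and
# dfs.journalnode.edits.dir on your cluster):
# compare_ids /e3base/hdata1/nn/current/VERSION /e3base/journal/drmcluster/current/VERSION
```

If the IDs disagree, as in the log above, the two sides were formatted at different times and the metadata must be brought back in sync before the NameNode can start.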

3. Solution

Check the configuration file yarn-site.xml and add the following properties:

<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>vhost45</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>vhost37</value>
</property>

Restart the cluster with start-all.sh.

Check the state of both NameNodes:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
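After the restart, exactly one of the two NameNodes should report active and the other standby. A small sketch of that check; the function name is invented, and it simply consumes the two strings printed by the haadmin commands above:

```shell
# ha_healthy: succeeds when exactly one of the two reported states is
# "active". A hypothetical convenience wrapper, not a Hadoop command.
ha_healthy() {
  active=0
  for state in "$1" "$2"; do
    [ "$state" = "active" ] && active=$((active + 1))
  done
  if [ "$active" -eq 1 ]; then
    echo "OK: $1/$2"
  else
    echo "DEGRADED: $1/$2"  # 0 active = no service, 2 active = split brain
  fi
}

# Example:
# ha_healthy "$(hdfs haadmin -getServiceState nn1)" "$(hdfs haadmin -getServiceState nn2)"
```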

Problem solved!

See also: http://www.aboutyun.com/thread-10572-1-1.html



