Fixing the problem of a NameNode failing to start when launching a Hadoop HA cluster with the start-dfs.sh script


When starting HDFS with Hadoop's bundled script sbin/start-dfs.sh, it is common to find that only one of the two HA NameNodes has actually started.

Checking the failed NameNode's error log shows the following:

STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hdp14/192.168.204.14
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.1.4
STARTUP_MSG:   classpath = /opt/bigdata/hadoop-3.1.4/etc/hadoop:... (several hundred jar entries omitted for brevity)
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2021-05-06T07:47Z
STARTUP_MSG:   java = 1.8.0_212
************************************************************/
2021-05-13 15:47:16,437 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-05-13 15:47:16,684 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2021-05-13 15:47:16,981 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2021-05-13 15:47:17,253 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-05-13 15:47:17,253 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2021-05-13 15:47:17,398 INFO org.apache.hadoop.hdfs.server.namenode.NameNodeUtils: fs.defaultFS is hdfs://ns
2021-05-13 15:47:17,403 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients should use ns to access this namenode/service.
2021-05-13 15:47:17,751 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2021-05-13 15:47:17,807 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://hdp14:50070
2021-05-13 15:47:17,829 INFO org.eclipse.jetty.util.log: Logging initialized @2334ms to org.eclipse.jetty.util.log.Slf4jLog
2021-05-13 15:47:18,119 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2021-05-13 15:47:18,152 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2021-05-13 15:47:18,184 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2021-05-13 15:47:18,190 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2021-05-13 15:47:18,190 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2021-05-13 15:47:18,190 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2021-05-13 15:47:18,282 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2021-05-13 15:47:18,285 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2021-05-13 15:47:18,304 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2021-05-13 15:47:18,307 INFO org.eclipse.jetty.server.Server: jetty-9.4.20.v20190813; built: 2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 1.8.0_212-b10
2021-05-13 15:47:18,411 INFO org.eclipse.jetty.server.session: DefaultSessionIdManager workerName=node0
2021-05-13 15:47:18,411 INFO org.eclipse.jetty.server.session: No SessionScavenger set, using defaults
2021-05-13 15:47:18,431 INFO org.eclipse.jetty.server.session: node0 Scavenging every 600000ms
2021-05-13 15:47:18,483 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@d737b89{logs,/logs,file:///opt/bigdata/hadoop-3.1.4/logs/,AVAILABLE}
2021-05-13 15:47:18,485 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.s.ServletContextHandler@3ba987b8{static,/static,file:///opt/bigdata/hadoop-3.1.4/share/hadoop/hdfs/webapps/static/,AVAILABLE}
2021-05-13 15:47:18,690 INFO org.eclipse.jetty.util.TypeUtil: JVM Runtime does not support Modules
2021-05-13 15:47:18,735 INFO org.eclipse.jetty.server.handler.ContextHandler: Started o.e.j.w.WebAppContext@6531a794{hdfs,/,file:///opt/bigdata/hadoop-3.1.4/share/hadoop/hdfs/webapps/hdfs/,AVAILABLE}{file:/opt/bigdata/hadoop-3.1.4/share/hadoop/hdfs/webapps/hdfs}
2021-05-13 15:47:18,759 INFO org.eclipse.jetty.server.AbstractConnector: Started ServerConnector@4d0d9fe7{HTTP/1.1,[http/1.1]}{hdp14:50070}
2021-05-13 15:47:18,760 INFO org.eclipse.jetty.server.Server: Started @3265ms
2021-05-13 15:47:19,969 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2021-05-13 15:47:20,244 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edit logging is async:true
2021-05-13 15:47:20,353 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: KeyProvider: null
2021-05-13 15:47:20,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2021-05-13 15:47:20,360 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2021-05-13 15:47:20,378 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = along (auth:SIMPLE)
2021-05-13 15:47:20,379 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2021-05-13 15:47:20,379 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2021-05-13 15:47:20,379 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: ns
2021-05-13 15:47:20,380 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2021-05-13 15:47:20,562 INFO org.apache.hadoop.hdfs.server.common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2021-05-13 15:47:20,602 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2021-05-13 15:47:20,602 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2021-05-13 15:47:20,619 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2021-05-13 15:47:20,619 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2021 五月 13 15:47:20
2021-05-13 15:47:20,625 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2021-05-13 15:47:20,625 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-05-13 15:47:20,633 INFO org.apache.hadoop.util.GSet: 2.0% max memory 1.3 GB = 26.0 MB
2021-05-13 15:47:20,633 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2021-05-13 15:47:20,770 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable = false
2021-05-13 15:47:20,805 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2021-05-13 15:47:20,806 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2021-05-13 15:47:20,806 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2021-05-13 15:47:20,807 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2021-05-13 15:47:20,815 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2021-05-13 15:47:20,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2021-05-13 15:47:20,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2021-05-13 15:47:20,816 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2021-05-13 15:47:20,817 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2021-05-13 15:47:20,817 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2021-05-13 15:47:20,817 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2021-05-13 15:47:21,014 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
2021-05-13 15:47:21,087 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2021-05-13 15:47:21,088 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-05-13 15:47:21,089 INFO org.apache.hadoop.util.GSet: 1.0% max memory 1.3 GB = 13.0 MB
2021-05-13 15:47:21,093 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2021-05-13 15:47:21,097 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2021-05-13 15:47:21,097 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: POSIX ACL inheritance enabled? true
2021-05-13 15:47:21,097 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2021-05-13 15:47:21,098 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occurring more than 10 times
2021-05-13 15:47:21,115 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2021-05-13 15:47:21,121 INFO org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager: SkipList is disabled
2021-05-13 15:47:21,152 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2021-05-13 15:47:21,153 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-05-13 15:47:21,163 INFO org.apache.hadoop.util.GSet: 0.25% max memory 1.3 GB = 3.3 MB
2021-05-13 15:47:21,174 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2021-05-13 15:47:21,198 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2021-05-13 15:47:21,199 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2021-05-13 15:47:21,199 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2021-05-13 15:47:21,215 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2021-05-13 15:47:21,218 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2021-05-13 15:47:21,234 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2021-05-13 15:47:21,235 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2021-05-13 15:47:21,237 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 1.3 GB = 399.8 KB
2021-05-13 15:47:21,237 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2021-05-13 15:47:21,361 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/bigdata/hadoop-3.1.4/data/dfs/name/in_use.lock acquired by nodename 33469@hdp14
2021-05-13 15:47:24,057 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp16/192.168.204.16:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:24,070 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp17/192.168.204.17:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:24,070 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp18/192.168.204.18:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
... (retries 1–9 to hdp16/hdp17/hdp18 omitted — identical to the lines above except for the retry count, interleaved with QuorumJournalManager messages of the form "Waited N ms (timeout=20000 ms) for a response for selectStreamingInputStreams. No responses yet.") ...
2021-05-13 15:47:33,208 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.204.16:8485, 192.168.204.17:8485, 192.168.204.18:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.204.18:8485: Call From hdp14/192.168.204.14 to hdp18:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.17:8485: Call From hdp14/192.168.204.14 to hdp17:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.16:8485: Call From hdp14/192.168.204.14 to hdp16:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:305)
	at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:143)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectStreamingInputStreams(QuorumJournalManager.java:619)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:535)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:269)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1675)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1708)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1687)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:714)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:336)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1132)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:747)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:652)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:966)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:939)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1772)
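The repeated "Connection refused" errors on port 8485 show that none of the JournalNodes (hdp16/hdp17/hdp18) were listening when the NameNode tried to read its edit log. start-dfs.sh launches the NameNodes and JournalNodes at almost the same time, so a slow-starting JournalNode quorum produces exactly this failure. One way to confirm and work around it is to bring the JournalNodes up by hand before starting HDFS. A sketch, assuming the install path from the log (/opt/bigdata/hadoop-3.1.4), passwordless SSH to the journal hosts, and `nc` available on the local machine:

```shell
# Start each JournalNode first, wait for port 8485 to accept
# connections, then start HDFS as usual.
# Hostnames and paths are taken from the log above.
HADOOP_HOME=/opt/bigdata/hadoop-3.1.4

for h in hdp16 hdp17 hdp18; do
  ssh "$h" "$HADOOP_HOME/bin/hdfs --daemon start journalnode"
done

# Block until all three JournalNodes are reachable on 8485
for h in hdp16 hdp17 hdp18; do
  until nc -z -w 2 "$h" 8485; do
    echo "waiting for JournalNode on $h ..."
    sleep 1
  done
done

"$HADOOP_HOME/sbin/start-dfs.sh"
```

`hdfs --daemon start journalnode` is the Hadoop 3.x form; on older releases the equivalent is `hadoop-daemon.sh start journalnode`. These commands must run on the cluster itself, so they are shown as a runbook fragment rather than a locally runnable script.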
2021-05-13 15:47:33,240 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2021-05-13 15:47:33,241 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/opt/bigdata/hadoop-3.1.4/data/dfs/name/current/fsimage_0000000000000002184, cpktTxId=0000000000000002184)
2021-05-13 15:47:33,450 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 123 INodes.
2021-05-13 15:47:33,682 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2021-05-13 15:47:33,684 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 2184 from /opt/bigdata/hadoop-3.1.4/data/dfs/name/current/fsimage_0000000000000002184
2021-05-13 15:47:33,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
2021-05-13 15:47:33,712 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2021-05-13 15:47:33,714 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 12444 msecs
2021-05-13 15:47:34,299 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to hdp14:8020
2021-05-13 15:47:34,300 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Enable NameNode state context:false
2021-05-13 15:47:34,323 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2021-05-13 15:47:34,457 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8020
2021-05-13 15:47:35,108 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState, ReplicatedBlocksState and ECBlockGroupsState MBeans.
2021-05-13 15:47:35,318 INFO org.apache.hadoop.hdfs.server.namenode.LeaseManager: Number of blocks under construction: 0
2021-05-13 15:47:35,437 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON. 
The reported blocks 0 needs additional 83 blocks to reach the threshold 0.9990 of total blocks 84.
The minimum number of live datanodes is not required. Safe mode will be turned off automatically once the thresholds have been reached.
2021-05-13 15:47:35,603 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: hdp14/192.168.204.14:8020
2021-05-13 15:47:35,599 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: starting
2021-05-13 15:47:35,597 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2021-05-13 15:47:35,658 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for standby state
2021-05-13 15:47:35,734 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on active node every 120 seconds.
2021-05-13 15:47:35,735 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.ha.tail-edits.period.backoff-max(0) assuming SECONDS
2021-05-13 15:47:35,735 INFO org.apache.hadoop.conf.Configuration.deprecation: No unit for dfs.ha.tail-edits.rolledits.timeout(60) assuming SECONDS
2021-05-13 15:47:35,865 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
Checkpointing active NN to possible NNs: [http://hdp15:50070]
Serving checkpoints at http://hdp14:50070
2021-05-13 15:47:36,931 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp17/192.168.204.17:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:36,955 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp18/192.168.204.18:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:36,968 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp16/192.168.204.16:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
... (retries 1–9 to hdp16/hdp17/hdp18 omitted — identical to the lines above except for the retry count, interleaved with QuorumJournalManager messages of the form "Waited N ms (timeout=20000 ms) for a response for selectStreamingInputStreams. No responses yet.") ...
2021-05-13 15:47:46,258 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.204.16:8485, 192.168.204.17:8485, 192.168.204.18:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.204.18:8485: Call From hdp14/192.168.204.14 to hdp18:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.17:8485: Call From hdp14/192.168.204.14 to hdp17:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.16:8485: Call From hdp14/192.168.204.14 to hdp16:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:305)
	at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:143)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectStreamingInputStreams(QuorumJournalManager.java:619)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:535)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:269)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1675)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1708)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:342)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:505)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:451)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:468)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:482)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:464)
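The retry policy in the log (maxRetries=10, sleepTime=1000 ms) matches the Hadoop IPC client defaults. If the JournalNodes are merely slow to start rather than down, another mitigation is to let the NameNode retry longer before giving up. Both keys below are standard Hadoop settings in core-site.xml; the values here are only illustrative, and the change takes effect on the next NameNode restart:

```xml
<!-- core-site.xml: let the NameNode wait longer for the JournalNodes.
     Defaults are 10 retries at 1000 ms; values below are illustrative. -->
<property>
  <name>ipc.client.connect.max.retries</name>
  <value>50</value>
</property>
<property>
  <name>ipc.client.connect.retry.interval</name>
  <value>2000</value>
</property>
```

This only papers over a startup ordering problem; if the JournalNodes never come up at all, the NameNode will still fail once the enlarged retry budget is exhausted.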
2021-05-13 15:47:46,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem write lock held for 10365 ms via java.lang.Thread.getStackTrace(Thread.java:1559)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1058)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:263)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.writeUnlock(FSNamesystemLock.java:215)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.writeUnlock(FSNamesystem.java:1651)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:383)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:505)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:451)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:468)
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:482)
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:464)
	Number of suppressed write-lock reports: 0
	Longest write-lock held interval: 10365.0 
	Total suppressed write-lock held time: 0.0
2021-05-13 15:47:46,299 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2021-05-13 15:47:46,307 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.sleep(EditLogTailer.java:444)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:537)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:451)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:468)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:482)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:464)
2021-05-13 15:47:46,327 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2021-05-13 15:47:46,368 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2021-05-13 15:47:47,396 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp18/192.168.204.18:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:47,397 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp17/192.168.204.17:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2021-05-13 15:47:47,401 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hdp16/192.168.204.16:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
... (retries 1–9 to hdp16/hdp17/hdp18 omitted — identical to the lines above except for the retry count) ...
2021-05-13 15:47:56,838 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.204.16:8485, 192.168.204.17:8485, 192.168.204.18:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.204.17:8485: Call From hdp14/192.168.204.14 to hdp17:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.16:8485: Call From hdp14/192.168.204.14 to hdp16:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.204.18:8485: Call From hdp14/192.168.204.14 to hdp18:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:305)
	at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:143)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:233)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:478)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet$6.apply(JournalSet.java:616)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:385)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:613)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1605)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1258)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1969)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:60)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1813)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1779)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:112)
	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:5409)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
2021-05-13 15:47:56,855 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.204.16:8485, 192.168.204.17:8485, 192.168.204.18:8485], stream=null))
2021-05-13 15:47:56,926 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdp14/192.168.204.14
************************************************************/

During startup, the NameNode has to connect to the JournalNodes and is being refused by them; the JournalNodes themselves are unlikely to be at fault.

Look at the output printed when running the sbin/start-dfs.sh script:

Starting namenodes on [hdp14 hdp15]
Starting datanodes
Starting journal nodes [hdp16 hdp17 hdp18]
Starting ZK Failover Controllers on NN hosts [hdp14 hdp15]

The script does indeed start the NameNodes before the JournalNodes, so a NameNode comes up unable to reach any JournalNode and fails once it exceeds the retry limit (10 attempts by default) or the retry window.
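Before touching any configuration, a simpler workaround is to bring the JournalNodes up manually first, then run sbin/start-dfs.sh. A dry-run sketch, assuming Hadoop 3.x (`hdfs --daemon start journalnode` is the 3.x single-daemon syntax) and the JournalNode hostnames from the log above; the leading `echo` is left in so the loop only prints what it would do:

```shell
# Dry run: print the command that would start a JournalNode on each host.
# Remove the leading `echo` to actually execute them over ssh.
for host in hdp16 hdp17 hdp18; do
  echo ssh "$host" hdfs --daemon start journalnode
done
```

Once all three JournalNodes report up (check with `jps` on each host), start-dfs.sh no longer races them, and both NameNodes should come up.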

Solution: increase the retry count and the retry interval.

Edit the configuration file core-site.xml and add the following:

<!-- Increase the retry count and the retry interval -->
<property>
	<name>ipc.client.connect.max.retries</name>
	<value>100</value>
	<description>Indicates the number of retries a client will make to establish a server connection.</description>
</property>
<property>
	<name>ipc.client.connect.retry.interval</name>
	<value>10000</value>
	<description>Indicates the number of milliseconds a client will wait for before retrying to establish a server connection.</description>
</property>
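As a sanity check on these values (back-of-the-envelope arithmetic, not something computed by Hadoop itself): the policy in the log, RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 ms), gives the NameNode only about a 10-second window to find a JournalNode, while the tuned values stretch that to roughly 16 minutes:

```shell
# Total retry window = max retries x fixed sleep between attempts.
default_window_s=$(( 10 * 1000 / 1000 ))      # default: 10 retries x 1000 ms = 10 s
tuned_window_s=$(( 100 * 10000 / 1000 ))      # tuned:  100 retries x 10000 ms = 1000 s (~16.7 min)
echo "default=${default_window_s}s tuned=${tuned_window_s}s"
```

That gives the JournalNodes ample time to finish starting even on a slow cluster, at the cost of the NameNode taking longer to give up when a JournalNode is genuinely down.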

After restarting the cluster, both NameNodes now start successfully and the problem is solved.
