Note: leaving a virtual machine suspended for a long time can cause all sorts of unpredictable errors; the vast majority of them can be fixed by rebooting.
Record 1: Eclipse installation on Ubuntu 20.04 stuck at 60% on the progress bar
Problem description
Some files could not be downloaded.
Solution:
https://blog.csdn.net/u010692693/article/details/121157741
Record 2: Hadoop cluster connection error
Problem description
$ hdfs namenode -format
22/06/13 22:13:06 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = zhiyue-virtual-machine/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.10.2
STARTUP_MSG: classpath = /opt/hadoop-2.10.2/etc/hadoop:/opt/hadoop-2.10.2/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/zookeeper-3.4.14.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/netty-3.10.6.Final.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/spotbugs-annotations-3.1.9.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/httpcore-4.4.13.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-compress-1.21.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/slf4j-api-1.7.36.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/hadoop-auth-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/httpclient-4.5.13.jar:/opt/hadoop
-2.10.2/share/hadoop/common/lib/reload4j-1.2.18.3.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/json-smart-1.3.3.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/curator-recipes-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/curator-client-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/woodstox-core-5.3.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jsch-0.1.55.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-lang3-3.4.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/nimbus-jose-jwt-7.9.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/stax2-api-4.2.1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jsr305-3.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/hadoop-annotations-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/junit-4.13.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/curator-framework-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.10
.2/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-beanutils-1.9.4.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.10.2/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop-2.10.2/share/hadoop/common/hadoop-nfs-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/hadoop-common-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/common/hadoop-common-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/netty-3.10.6.Final.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jackson-annotations-2.9.10.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/reload4j-1.2.18.3.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/netty-all-4.1.50.Final.jar:/opt/hadoop-2.10
.2/share/hadoop/hdfs/lib/jackson-databind-2.9.10.7.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jsr305-3.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jackson-core-2.9.10.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/xercesImpl-2.12.0.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/xml-apis-1.4.01.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-client-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-native-client-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-rbf-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-nfs-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-client-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-native-client-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/hdfs/hadoop-hdfs-rbf-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jcip-annotations-1.0-1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-net-3.1.jar:/opt/hadoop-2.10.2/share/
hadoop/yarn/lib/guice-3.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/zookeeper-3.4.14.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/netty-3.10.6.Final.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/servlet-api-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/spotbugs-annotations-3.1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-lang-2.6.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/metrics-core-3.0.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-io-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/httpcore-4.4.13.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-compress-1.21.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-configuration-1.6.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/avro-1.7.7.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/activation-1.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/gson-2.2.4.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jetty-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/httpclient-4.5.13.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/reload4j-1.2.18.3.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-cli-1.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jersey-client-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/json-smart-1.3.3.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/op
t/hadoop-2.10.2/share/hadoop/yarn/lib/curator-recipes-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jettison-1.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/curator-client-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jets3t-0.9.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/woodstox-core-5.3.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jsch-0.1.55.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/xmlenc-0.52.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jersey-json-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-lang3-3.4.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/nimbus-jose-jwt-7.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/stax2-api-4.2.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jersey-server-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jsr305-3.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/guava-11.0.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/paranamer-2.3.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jersey-core-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/curator-framework-2.13.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jsp-api-2.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-digester-1.8.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-beanutils-1.9.4.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/asm-3.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/audience-anno
tations-0.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jetty-sslengine-6.1.26.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/commons-codec-1.4.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/lib/snappy-java-1.0.5.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-registry-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-api-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-tests-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-common-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-common-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-client-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/yarn/hadoop-yarn-server-router-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/netty-3.10.6.Final.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/commons-io-2.5.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/commons-compress-1.21.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/avro-1.7.7.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop
-2.10.2/share/hadoop/mapreduce/lib/reload4j-1.2.18.3.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/hadoop-annotations-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/junit-4.13.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/asm-3.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/lib/snappy-java-1.0.5.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.10.2-tests.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.10.2.jar:/opt/hadoop-2.10.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.10.2.jar:/opt/hadoop-2.10.2/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = Unknown -r 965fd380006fa78b2315668fbc7eb432e1d8200f; compiled by ‘ubuntu’ on 2022-05-24T22:35Z
STARTUP_MSG: java = 11.0.15
22/06/13 22:13:06 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
22/06/13 22:13:06 INFO namenode.NameNode: createNameNode [-format]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/hadoop-2.10.2/share/hadoop/common/lib/hadoop-auth-2.10.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Formatting using clusterid: CID-9efa1d71-140d-4b68-b5b3-00e9c16db549
22/06/13 22:13:07 INFO namenode.FSEditLog: Edit logging is async:true
22/06/13 22:13:07 INFO namenode.FSNamesystem: KeyProvider: null
22/06/13 22:13:07 INFO namenode.FSNamesystem: fsLock is fair: true
22/06/13 22:13:07 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
22/06/13 22:13:07 INFO namenode.FSNamesystem: fsOwner = zhiyue (auth:SIMPLE)
22/06/13 22:13:07 INFO namenode.FSNamesystem: supergroup = supergroup
22/06/13 22:13:07 INFO namenode.FSNamesystem: isPermissionEnabled = true
22/06/13 22:13:07 INFO namenode.FSNamesystem: HA Enabled: false
22/06/13 22:13:07 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
22/06/13 22:13:07 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
22/06/13 22:13:07 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
22/06/13 22:13:07 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
22/06/13 22:13:07 INFO blockmanagement.BlockManager: The block deletion will start around 2022 6月 13 22:13:07
22/06/13 22:13:07 INFO util.GSet: Computing capacity for map BlocksMap
22/06/13 22:13:07 INFO util.GSet: VM type = 64-bit
22/06/13 22:13:07 INFO util.GSet: 2.0% max memory 1000 MB = 20 MB
22/06/13 22:13:07 INFO util.GSet: capacity = 2^21 = 2097152 entries
22/06/13 22:13:07 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
22/06/13 22:13:07 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
22/06/13 22:13:07 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
22/06/13 22:13:07 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
22/06/13 22:13:07 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
22/06/13 22:13:07 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
22/06/13 22:13:07 INFO blockmanagement.BlockManager: defaultReplication = 1
22/06/13 22:13:07 INFO blockmanagement.BlockManager: maxReplication = 512
22/06/13 22:13:07 INFO blockmanagement.BlockManager: minReplication = 1
22/06/13 22:13:07 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
22/06/13 22:13:07 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
22/06/13 22:13:07 INFO blockmanagement.BlockManager: encryptDataTransfer = false
22/06/13 22:13:07 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
22/06/13 22:13:07 INFO namenode.FSNamesystem: Append Enabled: true
22/06/13 22:13:07 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
22/06/13 22:13:07 INFO util.GSet: Computing capacity for map INodeMap
22/06/13 22:13:07 INFO util.GSet: VM type = 64-bit
22/06/13 22:13:07 INFO util.GSet: 1.0% max memory 1000 MB = 10 MB
22/06/13 22:13:07 INFO util.GSet: capacity = 2^20 = 1048576 entries
22/06/13 22:13:07 INFO namenode.FSDirectory: ACLs enabled? false
22/06/13 22:13:07 INFO namenode.FSDirectory: XAttrs enabled? true
22/06/13 22:13:07 INFO namenode.NameNode: Caching file names occurring more than 10 times
22/06/13 22:13:07 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
22/06/13 22:13:07 INFO util.GSet: Computing capacity for map cachedBlocks
22/06/13 22:13:07 INFO util.GSet: VM type = 64-bit
22/06/13 22:13:07 INFO util.GSet: 0.25% max memory 1000 MB = 2.5 MB
22/06/13 22:13:07 INFO util.GSet: capacity = 2^18 = 262144 entries
22/06/13 22:13:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
22/06/13 22:13:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
22/06/13 22:13:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
22/06/13 22:13:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
22/06/13 22:13:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
22/06/13 22:13:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
22/06/13 22:13:07 INFO util.GSet: VM type = 64-bit
22/06/13 22:13:07 INFO util.GSet: 0.029999999329447746% max memory 1000 MB = 307.2 KB
22/06/13 22:13:07 INFO util.GSet: capacity = 2^15 = 32768 entries
22/06/13 22:13:07 INFO namenode.FSImage: Allocated new BlockPoolId: BP-948325319-127.0.1.1-1655129587324
22/06/13 22:13:07 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Cannot create directory /opt/hadoop/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:361)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:571)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:592)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1211)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1655)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
22/06/13 22:13:07 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /opt/hadoop/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:361)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:571)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:592)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:185)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1211)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1655)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
22/06/13 22:13:07 INFO util.ExitUtil: Exiting with status 1: java.io.IOException: Cannot create directory /opt/hadoop/tmp/dfs/name/current
22/06/13 22:13:07 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zhiyue-virtual-machine/127.0.1.1
************************************************************/
zhiyue@zhiyue-virtual-machine:/opt/hadoop-2.10.2$
Solution:
The configuration files (or the .bashrc file) were misconfigured; after correcting them, the format command ran successfully.
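For context, the path in the error above is derived from the configuration: by default the NameNode stores its image under ${hadoop.tmp.dir}/dfs/name, so a bad hadoop.tmp.dir (or one the current user cannot write to, e.g. a root-owned directory under /opt) produces exactly this "Cannot create directory" failure. A minimal core-site.xml sketch is shown below; the /opt/hadoop/tmp path and the localhost:9000 URI are assumptions matching this setup, not values taken from the actual config files.

```xml
<configuration>
  <!-- Base directory for Hadoop's working files; the NameNode writes its
       metadata to ${hadoop.tmp.dir}/dfs/name by default. The user running
       `hdfs namenode -format` must have permission to create this path. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

If the configured path looks right but the error persists, directory ownership is the usual culprit: check that the directory (or its parent) is writable by the user running Hadoop.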
Record 4: hdfs commands report errors
Problem description
Running any hdfs command prints the following warnings:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/hadoop-2.10.2/share/hadoop/common/lib/hadoop-auth-2.10.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Solution:
Downgrade the JDK:
sudo apt install openjdk-8-jdk
sudo vim ~/.bashrc
Add the following line:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
Then reload the shell configuration and verify:
source ~/.bashrc
java -version
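These warnings come from the module system introduced in JDK 9, which flags the reflective access Hadoop 2.x performs; on JDK 8 they disappear, which is why the downgrade works. A quick way to see which side of that boundary the active JVM is on is to look at the major release number in the string `java -version` prints. The helper below is a sketch (the function name is illustrative) that handles both the old `1.x.y` scheme and the modern one:

```shell
# Map a JVM version string to its major release number:
# pre-9 JVMs report e.g. "1.8.0_312" (major = 8), newer ones "11.0.15" (major = 11).
parse_major() {
  case "$1" in
    1.*) v=${1#1.}; echo "${v%%.*}" ;;   # old scheme: strip the leading "1."
    *)   echo "${1%%.*}" ;;              # new scheme: first dotted component
  esac
}

parse_major "11.0.15"    # -> 11 (triggers the reflective-access warnings)
parse_major "1.8.0_312"  # -> 8  (no warnings with Hadoop 2.x)
```

In practice you would feed it the real version, e.g. `parse_major "$(java -version 2>&1 | awk -F '"' '/version/ {print $2; exit}')"`.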
Record 5: Error "http://mirrors.aliyun.com/ubuntu xenial InRelease: Temporary failure resolving 'mirrors.aliyun.com'"
Problem description
After some unidentified operation, the Ubuntu VM could no longer reach the package mirror. The initial diagnosis was lost network connectivity: external hosts could not be pinged.
Solution:
Reboot the virtual machine.
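If this recurs, a quick triage before rebooting is to ping a raw IP address and then a hostname: if the IP is reachable but the hostname is not, only DNS resolution is broken (which matches the "failure resolving" message above), whereas if even the IP fails the VM has no network link at all. The decision logic, sketched as a small function (the name and the 0/1 flags are illustrative; in practice the flags would come from the exit codes of the two ping commands):

```shell
# diagnose <ip_ok> <dns_ok>: 1 = the corresponding ping succeeded, 0 = it failed.
# IP reachable but hostname not -> DNS-only problem (check /etc/resolv.conf
# or restart systemd-resolved); IP unreachable -> no network link at all.
diagnose() {
  if [ "$1" = "0" ]; then echo "no network link"
  elif [ "$2" = "0" ]; then echo "DNS failure only"
  else echo "network ok"; fi
}

diagnose 1 0  # -> DNS failure only
```

A real invocation would look like: `ping -c1 223.5.5.5 >/dev/null 2>&1; ip=$?; ping -c1 mirrors.aliyun.com >/dev/null 2>&1; dns=$?; diagnose $((1-ip!=0?0:1)) ...` simplified accordingly; the point is only the classification rule.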
Record 6: Hadoop daemons fail to start properly
Problem description
For unknown reasons, the following appears:
./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
zhiyue@localhost’s password:
localhost: starting namenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-namenode-zhiyue-virtual-machine.out
zhiyue@localhost’s password:
localhost: starting datanode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-datanode-zhiyue-virtual-machine.out
Starting secondary namenodes [0.0.0.0]
zhiyue@0.0.0.0’s password:
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-secondarynamenode-zhiyue-virtual-machine.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.10.2/logs/yarn-zhiyue-resourcemanager-zhiyue-virtual-machine.out
zhiyue@localhost’s password:
localhost: starting nodemanager, logging to /opt/hadoop-2.10.2/logs/yarn-zhiyue-nodemanager-zhiyue-virtual-machine.out
Solution:
After running ./bin/hdfs namenode -format, starting again displayed the following:
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
zhiyue@localhost’s password:
localhost: namenode running as process 48655. Stop it first.
zhiyue@localhost’s password:
localhost: datanode running as process 49436. Stop it first.
Starting secondary namenodes [0.0.0.0]
zhiyue@0.0.0.0’s password:
0.0.0.0: secondarynamenode running as process 50605. Stop it first.
starting yarn daemons
resourcemanager running as process 51083. Stop it first.
zhiyue@localhost’s password:
localhost: nodemanager running as process 51883. Stop it first.
Then the daemons were stopped as follows:
zhiyue@zhiyue-virtual-machine:/opt/hadoop-2.10.2$ ./sbin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
zhiyue@localhost’s password:
localhost: stopping namenode
zhiyue@localhost’s password:
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
zhiyue@0.0.0.0’s password:
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
zhiyue@localhost’s password:
localhost: stopping nodemanager
localhost: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
After restarting, the error reverted to its original form:
zhiyue@zhiyue-virtual-machine:/opt/hadoop-2.10.2$ ./sbin/start-dfs.sh
Starting namenodes on [localhost]
zhiyue@localhost’s password:
localhost: starting namenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-namenode-zhiyue-virtual-machine.out
zhiyue@localhost’s password:
localhost: starting datanode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-datanode-zhiyue-virtual-machine.out
Starting secondary namenodes [0.0.0.0]
zhiyue@0.0.0.0’s password:
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-secondarynamenode-zhiyue-virtual-machine.out
Tried deleting the current directory with rm -rf;
after restarting the Hadoop services, the same error still appeared:
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
zhiyue@localhost’s password:
localhost: starting namenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-namenode-zhiyue-virtual-machine.out
zhiyue@localhost’s password:
localhost: starting datanode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-datanode-zhiyue-virtual-machine.out
Starting secondary namenodes [0.0.0.0]
zhiyue@0.0.0.0’s password:
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.10.2/logs/hadoop-zhiyue-secondarynamenode-zhiyue-virtual-machine.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.10.2/logs/yarn-zhiyue-resourcemanager-zhiyue-virtual-machine.out
zhiyue@localhost’s password:
localhost: starting nodemanager, logging to /opt/hadoop-2.10.2/logs/yarn-zhiyue-nodemanager-zhiyue-virtual-machine.out
After repeatedly inspecting the log /opt/hadoop-2.10.2/logs/hadoop-zhiyue-namenode-zhiyue-virtual-machine.log, the fix from the textbook was applied:
$ stop-dfs.sh            # stop Hadoop
$ rm -r /opt/hadoop/tmp  # delete the tmp folder; note: this erases all existing data in HDFS
$ hdfs namenode -format  # re-format the NameNode
$ start-dfs.sh           # restart
Note: the error message still appeared afterwards, but jps now showed the DataNode, and hdfs commands worked normally.
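Since jps output was the deciding signal here, a small check like the following can confirm at a glance that every expected daemon survived a restart. This is a sketch: the sample listing is illustrative, and in practice you would call it as `check_daemons "$(jps)"`.

```shell
# Verify that each expected Hadoop daemon appears in a jps listing.
# Matching " <name>$" (space + name at end of line) keeps "NameNode"
# from falsely matching the "SecondaryNameNode" line.
check_daemons() {
  jps_out=$1
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    if echo "$jps_out" | grep -q " $d\$"; then
      echo "$d: running"
    else
      echo "$d: MISSING"
    fi
  done
}

# Illustrative sample; replace with: check_daemons "$(jps)"
check_daemons "48655 NameNode
49436 DataNode
50605 SecondaryNameNode"
```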
Running a Java program reported the following error:
Error occurred during initialization of boot layer
java.lang.module.FindException: Unable to derive module descriptor for /opt/hadoop-2.10.2/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar
Caused by: java.lang.module.InvalidModuleDescriptorException: Provider class com.fasterxml.jackson.core.JsonFactory not in module
Solution: delete module-info.java.
Importing packages then reported an error:
The package org.apache.hadoop.fs is accessible from more than one module: hadoop.common, hadoop.hdfs
https://blog.csdn.net/weixin_30784141/article/details/98025951
On errors when HDFS resolves relative paths:
https://blog.csdn.net/qq_43688472/article/details/112526512
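The root cause behind most relative-path errors is simple: HDFS resolves a relative path against the current user's HDFS home directory /user/<username>, which does not exist until it is created (e.g. with `hdfs dfs -mkdir -p /user/zhiyue`). The resolution rule, sketched below (the helper name is illustrative, not an HDFS API):

```shell
# HDFS path resolution: absolute paths are used as-is; relative paths are
# prefixed with the user's HDFS home directory /user/<username>.
resolve_hdfs_path() {
  user=$1; path=$2
  case "$path" in
    /*) echo "$path" ;;
    *)  echo "/user/$user/$path" ;;
  esac
}

resolve_hdfs_path zhiyue input        # -> /user/zhiyue/input
resolve_hdfs_path zhiyue /data/input  # -> /data/input
```

So a command like `hdfs dfs -put file.txt input` will fail with "No such file or directory" until /user/zhiyue exists.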
A program that runs fine inside Eclipse fails after a normal export:
Error: A JNI error has occurred, please check your installation and try again
Exception in thread “main” java.lang.UnsupportedClassVersionError: Test/Task1 has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
First check whether the java and javac versions match;
then check whether the JDK version Eclipse is using matches the runtime version.
Note: adjust the JDK version in Eclipse's project settings, not the one under the Window preferences.
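The two numbers in that error message decode directly: a class-file major version is the Java release plus 44, so version 55.0 means the jar was compiled for Java 11, while a runtime that "only recognizes class file versions up to 52.0" is Java 8. A one-line sketch of the mapping:

```shell
# Class-file major version N corresponds to Java release N - 44
# (52 -> Java 8, 55 -> Java 11).
classfile_to_java() { echo $(( $1 - 44 )); }

classfile_to_java 55  # -> 11
classfile_to_java 52  # -> 8
```

This is why the fix is aligning the project's compiler compliance level with the JRE that will actually run the exported jar.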
HBase error: ERROR: Can't get master address from ZooKeeper; znode data == null
See https://blog.csdn.net/qq_45640525/article/details/109048465, or reboot.
On a MySQL installation where the password dialog never appeared and root could not log in normally:
https://rudon.blog.csdn.net/article/details/121116264?spm=1001.2101.3001.6650.5&utm_medium=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-5-121116264-blog-121552024.pc_relevant_multi_platform_whitelistv2&depth_1-utm_source=distribute.pc_relevant.none-task-blog-2%7Edefault%7EBlogCommendFromBaidu%7Edefault-5-121116264-blog-121552024.pc_relevant_multi_platform_whitelistv2&utm_relevant_index=9
Hive connecting to MySQL reports an error:
mysql> grant all on *.* to hive@localhost identified by 'hive';
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'identified by 'hive'' at line 1
https://blog.csdn.net/weixin_42534009/article/details/105913449
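For reference, that syntax error is expected on MySQL 8: the `GRANT ... IDENTIFIED BY` form was removed, and the user must be created first with its password, then granted privileges separately. A sketch of the equivalent statements (username and password taken from the failing command above; verify the exact form against your MySQL version):

```sql
-- MySQL 8 removed IDENTIFIED BY inside GRANT; create the user first:
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
```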