Hadoop 2.4.1 (QJM HA) + HBase 0.98 Dual-Master Problem Analysis


I. How the problem arose

Following various peers' blog posts, I set out to build an HA cluster based on Hadoop 2.4.1 (QJM HA) + HBase 0.98 (dual master). There are plenty of setup guides everywhere, so I will skip that part and focus on how the problem was tracked down.

Once the cluster was built, every restart produced the error below.

Mon Jan 11 16:22:31 CST 2016 Starting master on M-172-16-73-194

core file size          (blocks, -c) 0

data seg size           (kbytes, -d) unlimited

scheduling priority             (-e) 0

file size               (blocks, -f) unlimited

pending signals                 (-i) 128494

max locked memory       (kbytes, -l) 64

max memory size         (kbytes, -m) unlimited

open files                      (-n) 102400

pipe size            (512 bytes, -p) 8

POSIX message queues     (bytes, -q) 819200

real-time priority              (-r) 0

stack size              (kbytes, -s) 10240

cpu time               (seconds, -t) unlimited

max user processes              (-u) 102400

virtual memory          (kbytes, -v) unlimited

file locks                      (-x) unlimited

2016-01-11 16:22:33,460 INFO  [main] util.VersionInfo: HBase 0.98.9-hadoop2

2016-01-11 16:22:33,462 INFO  [main] util.VersionInfo: Subversion git://acer/usr/src/hbase -r 96878ece501b0643e879254645d7f3a40eaf101f

2016-01-11 16:22:33,462 INFO  [main] util.VersionInfo: Compiled by apurtell on Mon Dec 15 23:00:20 PST 2014

2016-01-11 16:22:34,375 INFO  [main] util.ServerCommandLine: env:TERM=linux

2016-01-11 16:22:34,375 INFO  [main] util.ServerCommandLine:env:JAVA_HOME=/usr/java/jdk1.7.0_67/

2016-01-11 16:22:34,375 INFO  [main] util.ServerCommandLine:env:HBASE_HOME=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/..

2016-01-11 16:22:34,375 INFO  [main] util.ServerCommandLine:env:HBASE_ENV_INIT=true

2016-01-11 16:22:34,375 INFO  [main] util.ServerCommandLine: env:SSH_CLIENT=219.141.184.181 51615 22

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:MAIL=/var/spool/mail/root

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:HOSTNAME=M-172-16-73-194

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:PWD=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:HBASE_ZNODE_FILE=/export/distributed/hbase/hbase-0.98.9-hadoop2/pids/hbase-hadoop-master.znode

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:CVS_RSH=ssh

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:HBASE_MANAGES_ZK=false

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine: env:G_BROKEN_FILENAMES=1

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:HBASE_NICENESS=0

2016-01-11 16:22:34,376 INFO  [main] util.ServerCommandLine:env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:HBASE_REST_OPTS=

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:HBASE_PID_DIR=/export/distributed/hbase/hbase-0.98.9-hadoop2/pids

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:HISTSIZE=1000

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:PATH=/export/distributed/hadoop/hadoop-2.4.1/bin:/usr/lib/scala/bin:/usr/java/jdk1.7.0_67/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/local/apache-maven-3.3.1/bin:/usr/local/apache-ant-1.9.4/bin:/root/bin

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:QTLIB=/usr/lib64/qt-3.3/lib

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:HBASE_SECURITY_LOGGER=INFO,RFAS

2016-01-11 16:22:34,377 INFO  [main] util.ServerCommandLine:env:HBASE_START_FILE=/export/distributed/hbase/hbase-0.98.9-hadoop2/pids/hbase-hadoop-master.autorestart

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine: env:SERVER_GC_OPTS=-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine:env:HBASE_LOGFILE=hbase-hadoop-master-M-172-16-73-194.log

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine: env:SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine: env:SHLVL=5

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine:env:HBASE_LOG_DIR=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../logs

2016-01-11 16:22:34,378 INFO  [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseParNewGC -Xmn1024m -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../logs -Dhbase.log.file=hbase-hadoop-master-M-172-16-73-194.log -Dhbase.home.dir=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/.. -Dhbase.id.str=hadoop -Dhbase.root.logger=INFO,RFA -Djava.library.path=/export/distributed/hadoop/hadoop-2.4.1/lib/native -Dhbase.security.logger=INFO,RFAS

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:HBASE_HEAPSIZE=4192

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:SSH_TTY=/dev/pts/0

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:LOGNAME=hadoop

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:QTDIR=/usr/lib64/qt-3.3

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:SSH_CONNECTION=219.141.184.181 51615 172.16.73.194 22

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:MALLOC_ARENA_MAX=4

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:LD_LIBRARY_PATH=:/export/distributed/hadoop/hadoop-2.4.1/lib/native

2016-01-11 16:22:34,379 INFO  [main] util.ServerCommandLine:env:SHELL=/bin/bash

2016-01-11 16:22:34,380 INFO  [main] util.ServerCommandLine:env:HBASE_ROOT_LOGGER=INFO,RFA

2016-01-11 16:22:34,383 INFO  [main] util.ServerCommandLine:env:CLASSPATH=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../conf:/usr/java/jdk1.7.0_67//lib/tools.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/..:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/activation-1.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/aopalliance-1.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/asm-3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/avro-1.7.4.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-beanutils-1.7.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-beanutils-core-1.8.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-cli-1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-codec-1.7.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-collections-3.2.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-compress-1.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-configuration-1.6.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-daemon-1.0.13.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-digester-1.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-el-1.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-httpclient-3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-io-2.4.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-lang-2.6.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-logging-1.1.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-math-2.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/commons-net-3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/findbugs-annotations-1.3.9-1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/gmbal-a
pi-only-3.0.0-b023.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/grizzly-framework-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/grizzly-http-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/grizzly-http-server-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/grizzly-http-servlet-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/grizzly-rcm-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/guava-12.0.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/guice-3.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/guice-servlet-3.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-annotations-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-auth-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-client-2.2.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-common-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-hdfs-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-app-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-common-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-core-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-hs-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-hs-plugins-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-jobclient-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-client-shuffle-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-mapreduce-examples-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-api-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop
2/bin/../lib/hadoop-yarn-client-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-common-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-server-common-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-server-nodemanager-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-server-resourcemanager-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-server-tests-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hadoop-yarn-server-web-proxy-2.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hamcrest-core-1.3.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-annotations-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-checkstyle-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-client-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-common-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-common-0.98.9-hadoop2-tests.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-examples-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-hadoop2-compat-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-hadoop-compat-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-it-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-it-0.98.9-hadoop2-tests.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-prefix-tree-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-protocol-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-rest-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-server-0.98.9-hadoop2.jar:/export/di
stributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-server-0.98.9-hadoop2-tests.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-shell-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-testing-util-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/hbase-thrift-0.98.9-hadoop2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/high-scale-lib-1.1.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/htrace-core-2.04.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/httpclient-4.1.3.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/httpcore-4.1.3.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jackson-core-asl-1.8.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jackson-jaxrs-1.8.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jackson-mapper-asl-1.8.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jackson-xc-1.8.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jamon-runtime-2.3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jasper-compiler-5.5.23.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jasper-runtime-5.5.23.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/javax.inject-1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/javax.servlet-3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/javax.servlet-api-3.0.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jaxb-api-2.2.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jaxb-impl-2.2.3-1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jcodings-1.0.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-client-1.9.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-core-1.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-grizzly2-1.9.jar:/export/distributed/
hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-guice-1.9.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-json-1.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-server-1.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-test-framework-core-1.9.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jersey-test-framework-grizzly2-1.9.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jets3t-0.6.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jettison-1.3.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jetty-6.1.26.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jetty-sslengine-6.1.26.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jetty-util-6.1.26.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/joni-2.1.2.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jruby-complete-1.6.8.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jsch-0.1.42.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jsp-2.1-6.1.14.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jsp-api-2.1-6.1.14.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/jsr305-1.3.9.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/junit-4.11.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/libthrift-0.9.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/log4j-1.2.17.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/management-api-3.0.0-b012.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/metrics-core-2.2.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/netty-3.6.6.Final.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/paranamer-2.3.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/protobuf-java-2.5.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/servlet-api-2.5-6.1.14.jar:/export/distributed/
hbase/hbase-0.98.9-hadoop2/bin/../lib/slf4j-api-1.6.4.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/slf4j-log4j12-1.6.4.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/snappy-java-1.0.4.1.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/xmlenc-0.52.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/xz-1.0.jar:/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../lib/zookeeper-3.4.6.jar:/export/distributed/hadoop/hadoop-2.4.1/etc/hadoop:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/common/lib/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/common/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/hdfs:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/hdfs/lib/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/hdfs/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/yarn/lib/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/yarn/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/lib/*:/export/distributed/hadoop/hadoop-2.4.1/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/export/distributed/hadoop/hadoop-2.4.1/etc/hadoop

2016-01-11 16:22:34,384 INFO  [main] util.ServerCommandLine:env:HBASE_THRIFT_OPTS=-Xms2g -Xmx2g

2016-01-11 16:22:34,384 INFO  [main] util.ServerCommandLine:env:QTINC=/usr/lib64/qt-3.3/include

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:USER=hadoop

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:HBASE_CLASSPATH=/export/distributed/hadoop/hadoop-2.4.1/etc/hadoop

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:HOME=/home/hadoop

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:HISTCONTROL=ignoredups

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:LESSOPEN=|/usr/bin/lesspipe.sh %s

2016-01-11 16:22:34,385 INFO  [main] util.ServerCommandLine:env:LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:

2016-01-11 16:22:34,386 INFO  [main] util.ServerCommandLine:env:HBASE_LOG_PREFIX=hbase-hadoop-master-M-172-16-73-194

2016-01-11 16:22:34,386 INFO  [main] util.ServerCommandLine:env:LANG=zh_CN.UTF-8

2016-01-11 16:22:34,386 INFO  [main] util.ServerCommandLine:env:HBASE_IDENT_STRING=hadoop

2016-01-11 16:22:34,389 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04

2016-01-11 16:22:34,389 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx4192m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+UseParNewGC, -Xmn1024m, -XX:CMSInitiatingOccupancyFraction=70, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -Dhbase.log.dir=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/../logs, -Dhbase.log.file=hbase-hadoop-master-M-172-16-73-194.log, -Dhbase.home.dir=/export/distributed/hbase/hbase-0.98.9-hadoop2/bin/.., -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/export/distributed/hadoop/hadoop-2.4.1/lib/native, -Dhbase.security.logger=INFO,RFAS]

2016-01-11 16:22:35,184 DEBUG [main] master.HMaster: master/M-172-16-73-194/172.16.73.194:60000 HConnection server-to-server retries=350

2016-01-11 16:22:42,177 INFO  [main] ipc.RpcServer: master/M-172-16-73-194/172.16.73.194:60000: started 10 reader(s).

2016-01-11 16:22:42,595 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties

2016-01-11 16:22:42,762 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2016-01-11 16:22:42,763 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started

2016-01-11 16:22:43,623 ERROR [main] master.HMasterCommandLine: Master exiting

java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3017)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:186)
        at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:135)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3031)
Caused by: java.net.UnknownHostException: HADOOPCLUSTER1
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:144)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:927)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:533)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3012)

II. How it was resolved

1. The first exception

Caused by: java.net.UnknownHostException: HADOOPCLUSTER1

The natural guess was a missing hosts entry.

So I edited the hosts file and pointed HADOOPCLUSTER1 at the active namenode's IP. The cluster then started, HBase active/backup master failover worked, and the problem appeared to be solved.
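Concretely, the (mistaken) workaround was an /etc/hosts entry along these lines; the IP shown is only a placeholder for the then-active namenode:

```
# /etc/hosts -- do NOT do this: HADOOPCLUSTER1 is a logical HA nameservice,
# not a machine, so it must never resolve to a single namenode's address
172.16.73.194   HADOOPCLUSTER1
```

Mapping the logical name to one IP makes startup succeed, but it silently pins every HDFS client to that one namenode, which is exactly what breaks later.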

......

Until one day, a network problem at the xx cloud provider triggered a namenode failover, and every single regionserver went down.

2. The exception the regionservers printed

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)

The cause is obvious: the namenodes had swapped roles, and HMaster knew nothing about it, so it kept talking to the node that had just become standby.
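When the current roles are in doubt after an event like this, HDFS's admin tool reports them directly. The service IDs nn1/nn2 below are placeholders for whatever dfs.ha.namenodes.HADOOPCLUSTER1 actually defines:

```
hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2
```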

Digging through more material, several sources mentioned that creating a symlink to hdfs-site.xml under HBase's conf directory, or adding the path of the Hadoop configuration directory to HBASE_CLASSPATH in hbase-env.sh, would fix the failover problem.

After making those changes, the problem still was not solved.
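For reference, the two commonly suggested changes look like this (paths taken from the environment dump above, which already shows HBASE_CLASSPATH set this way; neither change fixed the failover problem here):

```
# Option 1: symlink the HDFS client config into HBase's conf dir
ln -s /export/distributed/hadoop/hadoop-2.4.1/etc/hadoop/hdfs-site.xml \
      /export/distributed/hbase/hbase-0.98.9-hadoop2/conf/hdfs-site.xml

# Option 2: put the Hadoop conf dir on HBase's classpath (in hbase-env.sh)
export HBASE_CLASSPATH=/export/distributed/hadoop/hadoop-2.4.1/etc/hadoop
```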

3. Reading the source

More searching, plus a help thread on the aboutyun forum, revealed that the "fix" from step 1 was itself wrong: HADOOPCLUSTER1 should never have been given an IP in the hosts file at all. Something else had to be broken, but nobody could say what, so the last resort was to trace the code along the call stack.

org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)

This frame is the giveaway: the client is constructing a non-HA namenode proxy, when in an HA deployment it should clearly be constructing an HA one. Time to read the code.

------------------------------------ createProxy ---------------------------------------------

  @SuppressWarnings("unchecked")
  public static <T> ProxyAndInfo<T> createProxy(Configuration conf,
      URI nameNodeUri, Class<T> xface) throws IOException {
    Class<FailoverProxyProvider<T>> failoverProxyProviderClass =
        getFailoverProxyProviderClass(conf, nameNodeUri, xface);

    if (failoverProxyProviderClass == null) {
      // Non-HA case --- wrong turn: we end up here. See getFailoverProxyProviderClass
      // below for why it returns a null failoverProxyProviderClass.
      return createNonHAProxy(conf, NameNode.getAddress(nameNodeUri), xface,
          UserGroupInformation.getCurrentUser(), true);
    } else {
      // HA case --- this is the branch we should be taking.
      FailoverProxyProvider<T> failoverProxyProvider = NameNodeProxies
          .createFailoverProxyProvider(conf, failoverProxyProviderClass, xface,
              nameNodeUri);
      Conf config = new Conf(conf);
      T proxy = (T) RetryProxy.create(xface, failoverProxyProvider,
          RetryPolicies.failoverOnNetworkException(
              RetryPolicies.TRY_ONCE_THEN_FAIL, config.maxFailoverAttempts,
              config.maxRetryAttempts, config.failoverSleepBaseMillis,
              config.failoverSleepMaxMillis));

      Text dtService = HAUtil.buildTokenServiceForLogicalUri(nameNodeUri);
      return new ProxyAndInfo<T>(proxy, dtService);
    }
  }

 

------------------------------- getFailoverProxyProviderClass -------------------

  public static <T> Class<FailoverProxyProvider<T>> getFailoverProxyProviderClass(
      Configuration conf, URI nameNodeUri, Class<T> xface) throws IOException {
    if (nameNodeUri == null) {
      return null;
    }
    String host = nameNodeUri.getHost();

    String configKey = DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX + "."
        + host;
    try {
      @SuppressWarnings("unchecked")
      Class<FailoverProxyProvider<T>> ret = (Class<FailoverProxyProvider<T>>) conf
          .getClass(configKey, null, FailoverProxyProvider.class);
      // The provider class is looked up by reflection from configKey, and the
      // lookup comes back null --- so the problem is probably configKey itself.
      if (ret != null) {
        // If we found a proxy provider, then this URI should be a logical NN.
        // Given that, it shouldn't have a non-default port number.
        int port = nameNodeUri.getPort();
        if (port > 0 && port != NameNode.DEFAULT_PORT) {
          throw new IOException("Port " + port + " specified in URI "
              + nameNodeUri + " but host '" + host
              + "' is a logical (HA) namenode"
              + " and does not use port information.");
        }
      }
      return ret;
    } catch (RuntimeException e) {
      if (e.getCause() instanceof ClassNotFoundException) {
        throw new IOException("Could not load failover proxy provider class "
            + conf.get(configKey) + " which is configured for authority "
            + nameNodeUri, e);
      } else {
        throw e;
      }
    }
  }

 

---------- DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX -------------

  public static final String DFS_CLIENT_FAILOVER_PROXY_PROVIDER_KEY_PREFIX = "dfs.client.failover.proxy.provider";
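The selection logic above boils down to: build a key from the fixed prefix plus the URI's host (the logical nameservice name), look it up, and silently fall back to the non-HA path when nothing is found. A minimal stand-in sketch of that behavior, not Hadoop's actual code: it uses a plain HashMap instead of Hadoop's Configuration, and the class and method names are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ProxySelectionSketch {
    static final String PREFIX = "dfs.client.failover.proxy.provider";

    // Mimics NameNodeProxies: HA only if a provider is configured for this host.
    static String choosePath(Map<String, String> conf, String nameServiceHost) {
        String providerClass = conf.get(PREFIX + "." + nameServiceHost);
        return providerClass == null ? "non-HA" : "HA via " + providerClass;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Provider configured for the wrong nameservice ("weather"), as in this post.
        conf.put(PREFIX + ".weather",
                 "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // Lookup for HADOOPCLUSTER1 misses, so the client silently goes non-HA.
        System.out.println(choosePath(conf, "HADOOPCLUSTER1"));
        System.out.println(choosePath(conf, "weather"));
    }
}
```

Note that a miss is not an error: the null result just routes the client down the non-HA branch, which is why the only visible symptom was the UnknownHostException on the logical name.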

 

 

Now check the dfs.client.failover.proxy.provider configuration in hdfs-site.xml:

<property>
  <name>dfs.client.failover.proxy.provider.weather</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

Damn... where did "weather" come from? Evidently a leftover nameservice name from whichever blog the configuration was copied from. After changing it to HADOOPCLUSTER1, this cluster's own nameservice name, the problem was solved.
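With the nameservice name corrected, the property reads as follows (the key suffix must match this cluster's dfs.nameservices value, here HADOOPCLUSTER1):

```xml
<property>
  <name>dfs.client.failover.proxy.provider.HADOOPCLUSTER1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With this in place, the getFailoverProxyProviderClass lookup succeeds, the client takes the HA branch, and both HMaster and the regionservers follow namenode failovers automatically, with no hosts-file entry for HADOOPCLUSTER1 needed.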

 
