hadoop-HA / ZooKeeper Cluster Deployment

## 1. Cluster Planning

| hadoop202 | hadoop203 | hadoop204 |
|-----------|-----------|-----------|
| NameNode | NameNode | |
| JournalNode | JournalNode | JournalNode |
| DataNode | DataNode | DataNode |
| ZK | ZK | ZK |
| | ResourceManager | |
| NodeManager | NodeManager | NodeManager |

## 2. Creating and Deleting the hadoop-HA Directory


```shell
[root@hadoop204 ~]# su cevent
[cevent@hadoop204 root]$ cd /opt/module/
[cevent@hadoop204 module]$ ll
total 12
drwxr-xr-x. 11 cevent cevent 4096 Mar 21 14:08 hadoop-2.7.2
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 21:02 zookeeper-3.4.10
[cevent@hadoop204 module]$ mkdir hadoop-HA      # create the directory
[cevent@hadoop204 module]$ ll
total 16
drwxr-xr-x. 11 cevent cevent 4096 Mar 21 14:08 hadoop-2.7.2
drwxrwxr-x.  2 cevent cevent 4096 Apr 20 13:30 hadoop-HA
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 21:02 zookeeper-3.4.10
[cevent@hadoop204 module]$ cp -R hadoop-2.7.2/ hadoop-HA/   # copy into it
[cevent@hadoop204 module]$ rm -rf hadoop-HA/    # delete the directory
[cevent@hadoop204 module]$ ll
total 12
drwxr-xr-x. 11 cevent cevent 4096 Mar 21 14:08 hadoop-2.7.2
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 21:02 zookeeper-3.4.10
```
  
 



## 2. Create hadoop-HA on hadoop202

```shell
[root@hadoop202 ~]# su cevent
[cevent@hadoop202 root]$ cd /opt/module/
[cevent@hadoop202 module]$ ll
total 12
drwxr-xr-x. 12 cevent cevent 4096 Mar 24 18:58 hadoop-2.7.2
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 16:12 zookeeper-3.4.10
[cevent@hadoop202 module]$ mkdir hadoop-HA
[cevent@hadoop202 module]$ ll
total 16
drwxr-xr-x. 12 cevent cevent 4096 Mar 24 18:58 hadoop-2.7.2
drwxrwxr-x.  2 cevent cevent 4096 Apr 20 14:16 hadoop-HA
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 16:12 zookeeper-3.4.10
[cevent@hadoop202 module]$ cp -R hadoop-2.7.2/ hadoop-HA/
[cevent@hadoop202 module]$ cd hadoop-HA/
[cevent@hadoop202 hadoop-HA]$ ll
total 4
drwxr-xr-x. 12 cevent cevent 4096 Apr 20 14:17 hadoop-2.7.2
[cevent@hadoop202 hadoop-HA]$ cd hadoop-2.7.2/
[cevent@hadoop202 hadoop-2.7.2]$ ll
total 63140
drwxr-xr-x. 2 cevent cevent     4096 Apr 20 14:16 bin
drwxrwxr-x. 3 cevent cevent     4096 Apr 20 14:17 data
drwxr-xr-x. 3 cevent cevent     4096 Apr 20 14:17 etc
drwxr-xr-x. 2 cevent cevent     4096 Apr 20 14:17 include
drwxrwxr-x. 2 cevent cevent     4096 Apr 20 14:16 input
drwxr-xr-x. 3 cevent cevent     4096 Apr 20 14:17 lib
drwxr-xr-x. 2 cevent cevent     4096 Apr 20 14:17 libexec
-rw-r--r--. 1 cevent cevent    15429 Apr 20 14:17 LICENSE.txt
drwxrwxr-x. 3 cevent cevent     4096 Apr 20 14:17 logs
-rw-r--r--. 1 cevent cevent 64574641 Apr 20 14:17 map-reduce-driver.jar
-rw-r--r--. 1 cevent cevent       26 Apr 20 14:16 netred.txt
-rw-r--r--. 1 cevent cevent      101 Apr 20 14:17 NOTICE.txt
-rw-r--r--. 1 cevent cevent     1366 Apr 20 14:17 README.txt
drwxr-xr-x. 2 cevent cevent     4096 Apr 20 14:16 sbin
drwxr-xr-x. 4 cevent cevent     4096 Apr 20 14:17 share
-rw-r--r--. 1 cevent cevent       94 Apr 20 14:17 together
-rw-rw-r--. 1 cevent cevent       26 Apr 20 14:16 xishi.txt
[cevent@hadoop202 hadoop-2.7.2]$ pwd   # all three HA servers must use the same path
/opt/module/hadoop-HA/hadoop-2.7.2
```
  
 



## 3. Create hadoop-HA on hadoop203




 
  
```shell
[cevent@hadoop203 root]$ cd /opt/module/
[cevent@hadoop203 module]$ ll
total 12
drwxr-xr-x. 11 cevent cevent 4096 Mar 21 14:03 hadoop-2.7.2
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 21:02 zookeeper-3.4.10
[cevent@hadoop203 module]$ mkdir hadoop-HA/
[cevent@hadoop203 module]$ ll
total 16
drwxr-xr-x. 11 cevent cevent 4096 Mar 21 14:03 hadoop-2.7.2
drwxrwxr-x.  2 cevent cevent 4096 Apr 20 14:25 hadoop-HA
drwxr-xr-x.  8 cevent cevent 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x. 11 cevent cevent 4096 Apr 18 21:02 zookeeper-3.4.10
[cevent@hadoop203 module]$ cp -R hadoop-2.7.2/ hadoop-HA/
[cevent@hadoop203 module]$ cd hadoop-HA/
[cevent@hadoop203 hadoop-HA]$ ll
total 4
drwxr-xr-x. 11 cevent cevent 4096 Apr 20 14:26 hadoop-2.7.2
[cevent@hadoop203 hadoop-HA]$ cd hadoop-2.7.2/
[cevent@hadoop203 hadoop-2.7.2]$ ll
total 64
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 bin
drwxrwxr-x. 3 cevent cevent  4096 Apr 20 14:25 data
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:25 etc
-rw-rw-r--. 1 cevent cevent    48 Apr 20 14:26 hongfei203.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 include
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:25 lib
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 libexec
-rw-r--r--. 1 cevent cevent 15429 Apr 20 14:26 LICENSE.txt
drwxrwxr-x. 3 cevent cevent  4096 Apr 20 14:25 logs
-rw-r--r--. 1 cevent cevent   101 Apr 20 14:25 NOTICE.txt
-rw-r--r--. 1 cevent cevent  1366 Apr 20 14:26 README.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 sbin
drwxr-xr-x. 4 cevent cevent  4096 Apr 20 14:26 share
[cevent@hadoop203 hadoop-2.7.2]$ pwd
/opt/module/hadoop-HA/hadoop-2.7.2
```
  
 


## 4. Create hadoop-HA on hadoop204

…(same steps as above, omitted)

## 5. Configure hadoop-env.sh (make sure JAVA_HOME points at the JDK)




 
  
```shell
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/opt/module/jdk1.7.0_79

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
```
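Since `JAVA_HOME` must resolve on every remote node, a quick local check before syncing the file can save a confusing failure later. This is a small sketch; the `check_java_home` helper is ours, not part of Hadoop:

```shell
# Verify that a JAVA_HOME candidate actually contains an executable java
# binary before writing it into hadoop-env.sh on each server.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "JAVA_HOME OK: $1"
  else
    echo "JAVA_HOME BAD: $1"
  fi
}

check_java_home /opt/module/jdk1.7.0_79
```

Run it on each of the three servers; all of them must report OK before starting any daemon.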
  
 




## 6. Configure core-site.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!-- Group the two NameNode addresses into one logical cluster, ceventcluster -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ceventcluster</value>
    </property>

    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/data/tmp</value>
    </property>

</configuration>
```
   
  
 



## 7. Configure hdfs-site.xml




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ pwd
/opt/module/hadoop-HA/hadoop-2.7.2
[cevent@hadoop202 hadoop-2.7.2]$ vim etc/hadoop/hdfs-site.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>120</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name</value>
    </property>

    <property>
        <name>dfs.hosts</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop/dfs.hosts</value>
    </property>

    <property>
        <name>dfs.hosts.exclude</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop/dfs.hosts.exclude</value>
    </property>

    <!-- Secondary NameNode host -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop202.cevent.com:50090</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///${hadoop.tmp.dir}/dfs/data1,file:///${hadoop.tmp.dir}/dfs/data2</value>
    </property>

    <!-- Logical name of the fully distributed cluster -->
    <property>
        <name>dfs.nameservices</name>
        <value>ceventcluster</value>
    </property>

    <!-- The NameNodes in the cluster -->
    <property>
        <name>dfs.ha.namenodes.ceventcluster</name>
        <value>nn1,nn2</value>
    </property>

    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ceventcluster.nn1</name>
        <value>hadoop202.cevent.com:9000</value>
    </property>

    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ceventcluster.nn2</name>
        <value>hadoop203.cevent.com:9000</value>
    </property>

    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ceventcluster.nn1</name>
        <value>hadoop202.cevent.com:50070</value>
    </property>

    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ceventcluster.nn2</name>
        <value>hadoop203.cevent.com:50070</value>
    </property>

    <!-- Where the NameNode edit log is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop202.cevent.com:8485;hadoop203.cevent.com:8485;hadoop204.cevent.com:8485/ceventcluster</value>
    </property>

    <!-- Fencing: only one NameNode may serve clients at any moment -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <!-- sshfence needs passwordless SSH login -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/cevent/.ssh/id_rsa</value>
    </property>

    <!-- JournalNode storage directory -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/data/jn</value>
    </property>

    <!-- Disable permission checking -->
    <property>
        <name>dfs.permissions.enable</name>
        <value>false</value>
    </property>

    <!-- Client failover proxy: picks the active NameNode of ceventcluster automatically -->
    <property>
        <name>dfs.client.failover.proxy.provider.ceventcluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
</configuration>
```
  
 


## 8. Sync hdfs-site.xml and core-site.xml to the other servers




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ xsync etc/hadoop/core-site.xml
fname=core-site.xml
pdir=/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop
--------------- hadoop203.cevent.com ----------------
sending incremental file list
core-site.xml

sent 510 bytes  received 43 bytes  368.67 bytes/sec
total size is 1127  speedup is 2.04
--------------- hadoop204.cevent.com ----------------
sending incremental file list
core-site.xml

sent 510 bytes  received 43 bytes  1106.00 bytes/sec
total size is 1127  speedup is 2.04
```

Run the same `xsync` command for `etc/hadoop/hdfs-site.xml`.
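`xsync` is a local helper script, not a stock Hadoop tool, and its body is not shown in this post. A minimal sketch that would produce output like the above might look as follows (the host list and the use of `rsync` are assumptions):

```shell
# Minimal xsync sketch: push one file to the same absolute path on each peer.
xsync() {
  local pdir fname
  pdir=$(cd -P "$(dirname "$1")" && pwd)   # absolute parent directory
  fname=$(basename "$1")                   # file name only
  echo "fname=$fname"
  echo "pdir=$pdir"
  for host in hadoop203.cevent.com hadoop204.cevent.com; do
    echo "--------------- $host ----------------"
    rsync -av "$pdir/$fname" "$host:$pdir"
  done
}
```

It relies on the same passwordless SSH setup already used for fencing.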
   
  
 


## 9. Check the synced files on hadoop203 and hadoop204




 
  
```shell
[cevent@hadoop203 hadoop-2.7.2]$ cat etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <property>
        <name>dfs.namenode.checkpoint.period</name>
        <value>120</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name</value>
    </property>

    <property>
        <name>dfs.hosts</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop/dfs.hosts</value>
    </property>

    <property>
        <name>dfs.hosts.exclude</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop/dfs.hosts.exclude</value>
    </property>

    <!-- Secondary NameNode host -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <vlaue>hadoop202.cevent.com:50090</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///${hadoop.tmp.dir}/dfs/data1,file:///${hadoop.tmp.dir}/dfs/data2</value>
    </property>

    <!-- Logical name of the fully distributed cluster -->
```

Note the misspelled `<vlaue>` tag in the `dfs.namenode.secondary.http-address` property above; it is what causes the JournalNode startup error in step 11 below.
  
 


## 10. Delete data/ and logs/, then format the NameNode




 
  
```shell
[root@hadoop204 ~]# su cevent
[cevent@hadoop204 root]$ cd /opt/module/hadoop-HA/hadoop-2.7.2/
[cevent@hadoop204 hadoop-2.7.2]$ ll
total 64
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 bin
-rw-rw-r--. 1 cevent cevent    24 Apr 20 14:27 cece.txt
drwxrwxr-x. 3 cevent cevent  4096 Apr 20 14:27 data
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:27 etc
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 include
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:27 lib
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 libexec
-rw-r--r--. 1 cevent cevent 15429 Apr 20 14:27 LICENSE.txt
drwxrwxr-x. 3 cevent cevent  4096 Apr 20 14:27 logs
-rw-r--r--. 1 cevent cevent   101 Apr 20 14:27 NOTICE.txt
-rw-r--r--. 1 cevent cevent  1366 Apr 20 14:27 README.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 sbin
drwxr-xr-x. 4 cevent cevent  4096 Apr 20 14:27 share
[cevent@hadoop204 hadoop-2.7.2]$ pwd
/opt/module/hadoop-HA/hadoop-2.7.2
[cevent@hadoop204 hadoop-2.7.2]$ rm -rf data/ logs/   # delete the directories
[cevent@hadoop204 hadoop-2.7.2]$ ll
total 56
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 bin
-rw-rw-r--. 1 cevent cevent    24 Apr 20 14:27 cece.txt
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:27 etc
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 include
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:27 lib
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 libexec
-rw-r--r--. 1 cevent cevent 15429 Apr 20 14:27 LICENSE.txt
-rw-r--r--. 1 cevent cevent   101 Apr 20 14:27 NOTICE.txt
-rw-r--r--. 1 cevent cevent  1366 Apr 20 14:27 README.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:27 sbin
drwxr-xr-x. 4 cevent cevent  4096 Apr 20 14:27 share
# format the NameNode (on hadoop202)
[cevent@hadoop202 hadoop-2.7.2]$ bin/hdfs namenode -format
```
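The `data/` and `logs/` directories have to go on all three servers, not only hadoop204, so that the fresh format produces a consistent cluster state everywhere. A hedged sketch of doing it in one loop (host names taken from the plan in step 1; passwordless SSH between nodes is assumed):

```shell
# Remove data/ and logs/ on every HA server before reformatting the NameNode.
HA_HOME=/opt/module/hadoop-HA/hadoop-2.7.2
clean_all() {
  for host in hadoop202.cevent.com hadoop203.cevent.com hadoop204.cevent.com; do
    ssh "$host" "rm -rf $HA_HOME/data $HA_HOME/logs"
  done
}
# clean_all   # uncomment to run against the real cluster
```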
   
  
 


## 11. Starting the JournalNode fails




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode   # start the JournalNode service
starting journalnode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop202.cevent.com.out
[Fatal Error] hdfs-site.xml:49:44: The element type "vlaue" must be terminated by the matching end-tag "</vlaue>".
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/opt/module/hadoop-HA/hadoop-2.7.2/etc/hadoop/hdfs-site.xml; lineNumber: 49; columnNumber: 44; The element type "vlaue" must be terminated by the matching end-tag "</vlaue>".
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2645)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
        at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
        at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
        at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
        at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
```
  
 


{Read the hdfs-site.xml configuration carefully here: the parser is complaining about the misspelled `<vlaue>` tag at line 49 of hdfs-site.xml, and variables such as the cluster name must be identical on every server. Fixing these resolves the bug.}
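A malformed tag like this can be caught before any daemon is started. A small sketch using Python's standard-library XML parser (the `validate_xml` helper is ours, not a Hadoop tool):

```shell
# Report whether a Hadoop site file is well-formed XML, so typos such as
# <vlaue> surface before daemon startup instead of during it.
validate_xml() {
  if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$1" 2>/dev/null; then
    echo "$1: OK"
  else
    echo "$1: MALFORMED"
  fi
}

# demo with the same kind of typo that broke hdfs-site.xml
printf '<configuration><property><vlaue>1</value></property></configuration>' > /tmp/demo-site.xml
validate_xml /tmp/demo-site.xml   # reports: /tmp/demo-site.xml: MALFORMED
```

Running it over every file in `etc/hadoop/` on each server takes seconds and would have caught this error at step 7.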

## 12. JournalNode start reports insufficient permissions




 
  
```shell
[cevent@hadoop203 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
chown: changing ownership of "/opt/module/hadoop-HA/hadoop-2.7.2/logs": Operation not permitted
starting journalnode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop203.cevent.com.out
sbin/hadoop-daemon.sh: line 159: /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop203.cevent.com.out: Permission denied
head: cannot open "/opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop203.cevent.com.out" for reading: No such file or directory
sbin/hadoop-daemon.sh: line 177: /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop203.cevent.com.out: Permission denied
sbin/hadoop-daemon.sh: line 178: /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-journalnode-hadoop203.cevent.com.out: Permission denied
[cevent@hadoop203 hadoop-2.7.2]$ ll
total 60
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 bin
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:25 etc
-rw-rw-r--. 1 cevent cevent    48 Apr 20 14:26 hongfei203.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 include
drwxr-xr-x. 3 cevent cevent  4096 Apr 20 14:25 lib
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 libexec
-rw-r--r--. 1 cevent cevent 15429 Apr 20 14:26 LICENSE.txt
drwxr-xr-x. 2 root   root    4096 Apr 21 20:29 logs
-rw-r--r--. 1 cevent cevent   101 Apr 20 14:25 NOTICE.txt
-rw-r--r--. 1 cevent cevent  1366 Apr 20 14:26 README.txt
drwxr-xr-x. 2 cevent cevent  4096 Apr 20 14:25 sbin
drwxr-xr-x. 4 cevent cevent  4096 Apr 20 14:26 share
# note: logs is owned by root, so the cevent user cannot write to it; delete logs (or fix its ownership)
```
  
 


## 13. Fix the permissions on every server (or xsync the corrected files)

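The original commands were lost from this section (it contained only a CSDN placeholder). Based on the listing in step 12, where `logs/` is owned by root, a plausible fix is to re-own that directory as the `cevent` user on each server. This is a sketch, not the author's exact commands:

```shell
# Re-own a logs directory that root created, so hadoop-daemon.sh run as a
# normal user can write its .out files into it.
fix_logs_owner() {
  # $1 = hadoop home, $2 = user; run with root privileges on each server
  chown -R "$2:$2" "$1/logs"
}
# e.g. on hadoop203:
# sudo bash -c 'chown -R cevent:cevent /opt/module/hadoop-HA/hadoop-2.7.2/logs'
```

Alternatively, simply `rm -rf logs/` as root and let the daemon recreate it with the right owner.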

## 14. NameNode format fails




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ bin/hdfs namenode -format
20/04/21 21:03:17 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop202.cevent.com/192.168.1.202
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.2
20/04/21 21:03:19 INFO namenode.FSNamesystem: HA Enabled: false
20/04/21 21:03:19 WARN namenode.FSNamesystem: Configured NNs:

20/04/21 21:03:19 ERROR namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
20/04/21 21:03:19 INFO namenode.FSNamesystem: Stopping services started for active state
20/04/21 21:03:19 INFO namenode.FSNamesystem: Stopping services started for standby state
20/04/21 21:03:19 WARN namenode.NameNode: Encountered exception during format:
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
20/04/21 21:03:19 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Invalid configuration: a shared edits dir must not be specified if HA is not enabled.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:762)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:697)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:984)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
20/04/21 21:03:19 INFO util.ExitUtil: Exiting with status 1
20/04/21 21:03:19 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop202.cevent.com/192.168.1.202
************************************************************/
```
   
  
 


## 15. Fix core-site.xml




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ vim etc/hadoop/core-site.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!-- Group the two NameNode addresses into one logical cluster, ceventcluster -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ceventcluster</value>
    </property>

    <property>
        <name>dfs.nameservices</name>
        <value>ceventcluster</value>
    </property>

    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-HA/hadoop-2.7.2/data/tmp</value>
    </property>

</configuration>
```
  
 


## 16. Format and start the NameNode on hadoop202




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ bin/hdfs namenode -format
20/04/22 15:15:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/04/22 15:15:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
20/04/22 15:15:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
20/04/22 15:15:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/04/22 15:15:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/04/22 15:15:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/04/22 15:15:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/04/22 15:15:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/04/22 15:15:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/04/22 15:15:43 INFO util.GSet: VM type       = 64-bit
20/04/22 15:15:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
20/04/22 15:15:43 INFO util.GSet: capacity      = 2^15 = 32768 entries
20/04/22 15:15:45 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1826555003-192.168.1.202-1587539745872
20/04/22 15:15:45 INFO common.Storage: Storage directory /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name has been successfully formatted.
20/04/22 15:15:47 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/04/22 15:15:47 INFO util.ExitUtil: Exiting with status 0
20/04/22 15:15:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop202.cevent.com/192.168.1.202
************************************************************/
[cevent@hadoop202 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-namenode-hadoop202.cevent.com.out
[cevent@hadoop202 hadoop-2.7.2]$ jps
3319 NameNode
3127 JournalNode
3395 Jps
```
  
 


## 17. Sync nn1's NameNode metadata to hadoop203 (start the NameNode on hadoop202 first: `sbin/hadoop-daemon.sh start namenode`)

```shell
[cevent@hadoop203 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby   # copy nn1 (202) metadata to nn2 (203)
20/04/22 15:26:51 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/04/22 15:26:51 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
20/04/22 15:26:51 WARN common.Util: Path /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
20/04/22 15:26:51 WARN common.Util: Path /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: ceventcluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://hadoop202.cevent.com:50070
  Other NN's IPC  address: hadoop202.cevent.com/192.168.1.202:9000
             Namespace ID: 250059450
            Block pool ID: BP-1826555003-192.168.1.202-1587539745872
               Cluster ID: CID-c81b20f0-9a11-4c83-9b96-295b76f5fee4
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
20/04/22 15:26:52 INFO common.Storage: Storage directory /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name has been successfully formatted.
20/04/22 15:26:52 WARN common.Util: Path /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
20/04/22 15:26:52 WARN common.Util: Path /opt/module/hadoop-HA/hadoop-2.7.2/data/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
20/04/22 15:26:53 INFO namenode.TransferFsImage: Opening connection to http://hadoop202.cevent.com:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:250059450:0:CID-c81b20f0-9a11-4c83-9b96-295b76f5fee4
20/04/22 15:26:53 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
20/04/22 15:26:53 INFO namenode.TransferFsImage: Transfer took 0.00s at 0.00 KB/s
20/04/22 15:26:53 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 352 bytes.
20/04/22 15:26:53 INFO util.ExitUtil: Exiting with status 0
20/04/22 15:26:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop203.cevent.com/192.168.1.203
************************************************************/
[cevent@hadoop203 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode   # start nn2 (203)
starting namenode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-namenode-hadoop203.cevent.com.out
[cevent@hadoop203 hadoop-2.7.2]$ jps
3713 NameNode
3351 JournalNode
3789 Jps
```

 

## 18. Verify the startup

Open hadoop202 from Windows: http://hadoop202.cevent.com:50070/dfshealth.html#tab-overview

(screenshot: hadoop202 NameNode web UI is up)

Open hadoop203 from Windows: http://hadoop203.cevent.com:50070/dfshealth.html#tab-overview

(screenshot: hadoop203 NameNode web UI is up)

## 19. Start the DataNodes from hadoop202 (`hadoop-daemons.sh start datanode`)




 
  
```shell
[cevent@hadoop202 hadoop-2.7.2]$ sbin/hadoop-daemons.sh start datanode
hadoop203.cevent.com: starting datanode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop203.cevent.com.out
hadoop204.cevent.com: starting datanode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop204.cevent.com.out
hadoop202.cevent.com: starting datanode, logging to /opt/module/hadoop-HA/hadoop-2.7.2/logs/hadoop-cevent-datanode-hadoop202.cevent.com.out
hadoop205.cevent.com: ssh: connect to host hadoop205.cevent.com port 22: No route to host
[cevent@hadoop202 hadoop-2.7.2]$ jps
3319 NameNode
3763 DataNode
3851 Jps
3127 JournalNode
```

(The `No route to host` for hadoop205 can be ignored here; that host is offline and is presumably still listed in the slaves file.)
   
  
 


## 20. Transition nn1 to active (out of standby)

```shell
[cevent@hadoop202 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
```

Visit http://hadoop202.cevent.com:50070/dfshealth.html#tab-overview; hadoop202 now reports itself as active.

(screenshot: transition to active succeeded)
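The same check can be done without the browser; `-getServiceState` is a standard `hdfs haadmin` subcommand in Hadoop 2.x. Wrapped in a helper here only so the snippet can be pasted as a unit:

```shell
# Query the HA state of both NameNodes (run from the hadoop-2.7.2 directory).
check_ha_states() {
  bin/hdfs haadmin -getServiceState nn1   # expected after the transition: active
  bin/hdfs haadmin -getServiceState nn2   # expected: standby
}
# check_ha_states
```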
