Installing hadoop-3.1.2 on Ubuntu 16.04

Jump host: tian-1

tian-10   master

tian-11   datanode

tian-12  datanode

vi /etc/hosts, and append these lines:

192.168.18.253  tian-10
192.168.18.178  tian-11
192.168.18.81   tian-12
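A quick sanity check I add here (not in the original notes) to confirm the names resolve from the node you are on:

for h in tian-10 tian-11 tian-12; do ping -c 1 $h; done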


Set up passwordless SSH (mutual trust) from tian-10 to the other nodes:

ssh-keygen 
ssh-copy-id -i /root/.ssh/id_rsa.pub root@tian-11
ssh-copy-id -i /root/.ssh/id_rsa.pub root@tian-12
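To verify passwordless SSH before moving on (my own check, assuming the root account is used throughout), each of these should print the remote hostname without prompting for a password:

for h in tian-11 tian-12; do ssh root@$h hostname; done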


Run the following on every node:

Disable the firewall:

systemctl stop firewalld

systemctl disable firewalld
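Note that Ubuntu 16.04 ships with ufw rather than firewalld, so the commands above only matter if firewalld was installed separately. On a stock Ubuntu 16.04 box the rough equivalent would be:

ufw status
ufw disable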

Install ZooKeeper before installing Hadoop.

On tian-10:

Install the JDK: download JDK 1.8 or later from the official site http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. I already had it downloaded, so I used it directly:

 tar xf jdk-8u162-ea-bin-b01-linux-x64-04_oct_2017.tar.gz  -C /usr/local/

vi /etc/profile   and append these lines:

export JAVA_HOME=/usr/local/jdk1.8.0_162
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14/
export JAVA_BIN=$JAVA_HOME/bin

export PATH=$PATH:$JAVA_HOME/bin

source /etc/profile

root@tian-10:~# java -version
java version "1.8.0_162-ea"
Java(TM) SE Runtime Environment (build 1.8.0_162-ea-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b01, mixed mode)


tar xf zookeeper-3.4.14.tar.gz -C /usr/local/

vi /etc/profile

export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.14/
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$SPARK_HOME/bin

source /etc/profile

ln -sv /usr/local/zookeeper-3.4.14/     /usr/local/zookeeper

cd /usr/local/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg    # zoo.cfg is not shipped; copy it from the sample first

vi zoo.cfg   and set/append these lines:

dataDir=/usr/local/zookeeper/tmp
dataLogDir=/usr/local/zookeeper/logs

server.1=tian-10:2888:3888
server.2=tian-11:2888:3888
server.3=tian-12:2888:3888

mkdir  /usr/local/zookeeper/{tmp,logs}
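For reference, after these edits the full zoo.cfg should look roughly like the sketch below; tickTime, initLimit, syncLimit and clientPort are the zoo_sample.cfg defaults, not values I tuned:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/local/zookeeper/tmp
dataLogDir=/usr/local/zookeeper/logs
server.1=tian-10:2888:3888
server.2=tian-11:2888:3888
server.3=tian-12:2888:3888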

scp   -r /usr/local/zookeeper/  tian-11:/usr/local/

scp   -r /usr/local/zookeeper/  tian-12:/usr/local/

echo 1 >  /usr/local/zookeeper/tmp/myid

On tian-11, run:  echo 2 >  /usr/local/zookeeper/tmp/myid

On tian-12, run:  echo 3 >  /usr/local/zookeeper/tmp/myid

On all three nodes, run:

          zkServer.sh start

          jps

jps should now show a QuorumPeerMain process.

          zkServer.sh status

status should report one leader and two followers:

root@tian-10:/usr/local/zookeeper# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower


root@tian-11:~# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

root@tian-12:~# zkServer.sh  status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
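As an extra check (not in the original notes), the ensemble can be exercised with the ZooKeeper CLI from any node:

zkCli.sh -server tian-10:2181

At the prompt, "ls /" should return at least [zookeeper], and "quit" exits.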

To stop ZooKeeper, use  zkServer.sh stop

Install Hadoop:

    Download Hadoop:

        On the jump host:

          cd /usr/local/src

           screen

           wget  https://www-eu.apache.org/dist/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz

   Then scp the tarball to tian-10, tian-11, and tian-12.

On all nodes:

        adduser hadoop    

         passwd hadoop    # set the hadoop user's password; it will be needed when the services are started
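If you prefer to script the user creation instead of answering the adduser prompts, a non-interactive sketch for Ubuntu would be the following; CHANGE_ME is a placeholder password you must replace:

adduser --disabled-password --gecos "" hadoop
echo 'hadoop:CHANGE_ME' | chpasswd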

   On tian-10:

      tar xf  /usr/local/src/hadoop-3.1.2.tar.gz -C /usr/local/

      ln -sv  /usr/local/hadoop-3.1.2/ /usr/local/hadoop

      cd /usr/local/hadoop/etc/hadoop/

      vi  hadoop-env.sh

           export JAVA_HOME=/usr/local/jdk1.8.0_162/
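Since every daemon here is meant to run as the hadoop user, Hadoop 3 also allows declaring that in hadoop-env.sh. This is my optional addition, not part of the original notes:

           export HDFS_NAMENODE_USER=hadoop
           export HDFS_DATANODE_USER=hadoop
           export HDFS_SECONDARYNAMENODE_USER=hadoop
           export YARN_RESOURCEMANAGER_USER=hadoop
           export YARN_NODEMANAGER_USER=hadoop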

       vi core-site.xml          

<configuration>

    <!-- NameNode address; fs.default.name is the deprecated alias of fs.defaultFS -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://tian-10:9000</value>
    </property>

    <!-- Directory for Hadoop temporary data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp/</value>
    </property>

    <!-- ZooKeeper quorum (2181 is the default client port) -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>tian-10:2181,tian-11:2181,tian-12:2181</value>
    </property>

</configuration>

mkdir    -p /home/hadoop/app/tmp/

vi hdfs-site.xml

<configuration>
    <!-- Replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
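Without explicit settings, the NameNode and DataNode keep their data under hadoop.tmp.dir. If you want dedicated paths, a commonly added pair of properties (inside the same <configuration> block; the paths below are placeholders I chose, not from the original setup) is:

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/app/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/app/dfs/data</value>
    </property>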

mapred-site.xml was left unchanged; the defaults are fine for now.
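That said, if MapReduce jobs should actually run on YARN (the Hadoop 3 default for mapreduce.framework.name is local), the usual minimal mapred-site.xml would be something like the following; treat it as an optional extra, not part of the original setup:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>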

vi yarn-site.xml

<configuration>

    <!-- Enable YARN ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>

    <!-- Enable automatic failover -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <!-- Cluster ID for YARN HA -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarncluster</value>
    </property>

    <!-- Logical IDs of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <!-- Hosts for rm1 and rm2 -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>tian-10</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>tian-11</value>
    </property>

    <!-- ZooKeeper quorum -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>tian-10:2181,tian-11:2181,tian-12:2181</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>tian-10:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>tian-10:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>tian-10:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>tian-10:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>tian-10:8088</value>
    </property>

</configuration>


vi  /etc/profile

export HADOOP_HOME=/usr/local/hadoop

export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$SPARK_HOME/bin

source /etc/profile
vi  workers      (Hadoop 3.x renamed the slaves file to workers)

tian-10
tian-11
tian-12

scp  /etc/profile   tian-11:/etc

scp  /etc/profile   tian-12:/etc

scp -r  /usr/local/hadoop-3.1.2/   tian-11:/usr/local/
scp -r  /usr/local/hadoop-3.1.2/   tian-12:/usr/local/

On all three nodes, run:

source  /etc/profile

 chown -R hadoop. /usr/local/hadoop-3.1.2/  /home/hadoop/

On tian-11 and tian-12, create the hadoop symlink. If an old link already exists (mine pointed at a previous hadoop-2.2.7 install), delete it first:

rm /usr/local/hadoop

ln -sv  /usr/local/hadoop-3.1.2/ /usr/local/hadoop

source  /etc/profile

chown -R hadoop. /usr/local/hadoop-3.1.2/
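One step these notes skip: on a brand-new cluster HDFS needs the NameNode formatted once before the first start, otherwise it will not come up. On tian-10, as the hadoop user (my addition; run it only once, since it wipes NameNode metadata):

su - hadoop
hdfs namenode -format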

Start the services (as the hadoop user):

cd /usr/local/hadoop/sbin/

./start-all.sh

It fails with the following errors:

hadoop@tian-10:~$ /usr/local/hadoop/sbin/start-all.sh 
WARNING: HADOOP_PREFIX has been replaced by HADOOP_HOME. Using value of HADOOP_PREFIX.
WARNING: Attempting to start all Apache Hadoop daemons as hadoop in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [tian-10]
/usr/local/hadoop/sbin/start-dfs.sh: line 94: hadoop_uservar_su: command not found
Starting datanodes
/usr/local/hadoop/sbin/start-dfs.sh: line 107: hadoop_uservar_su: command not found
/usr/local/hadoop/bin/hdfs: line 239: hadoop_abs: command not found
/usr/local/hadoop/bin/hdfs: line 248: hadoop_need_reexec: command not found
/usr/local/hadoop/bin/hdfs: line 256: hadoop_verify_user_perm: command not found
/usr/local/hadoop/bin/hdfs: line 267: hadoop_add_client_opts: command not found
/usr/local/hadoop/bin/hdfs: line 274: hadoop_subcommand_opts: command not found
/usr/local/hadoop/bin/hdfs: line 277: hadoop_generic_java_subcmd_handler: command not found
Starting resourcemanagers on []
/usr/local/hadoop/sbin/start-yarn.sh: line 68: hadoop_uservar_su: command not found
Starting nodemanagers
/usr/local/hadoop/sbin/start-yarn.sh: line 79: hadoop_uservar_su: command not found

 

Not resolved yet.
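A direction worth trying (my guess from the log above, not a confirmed fix): the "HADOOP_PREFIX has been replaced by HADOOP_HOME" warning suggests HADOOP_PREFIX is still set from the old Hadoop 2.x environment, so the 3.1.2 scripts may be sourcing the old libexec, which lacks functions such as hadoop_uservar_su. Checking and clearing it before starting might help:

echo $HADOOP_PREFIX
unset HADOOP_PREFIX
grep HADOOP_PREFIX /etc/profile ~/.bashrc ~/.profile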
      

      

     
