Installing CentOS in Docker and Deploying a Fully Distributed Hadoop Cluster

Add the Docker repository on CentOS

yum -y install yum-utils

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum -y install docker-ce

Start Docker
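The screenshot of this step did not survive; starting and enabling the service is normally done with systemd:

systemctl start docker
systemctl enable docker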

Check the Docker version
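Likewise, the version check is presumably:

docker version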

 

Pull the CentOS 7 image
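The pull command is not shown; given the image name used below, presumably:

docker pull centos:centos7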

Containers:
list: docker ps -a
force-remove: docker rm -f <CONTAINER ID>
stop: docker stop <CONTAINER ID>
start: docker start <CONTAINER ID>
enter: docker exec -it df5a5d453876 /bin/bash

Images:
list: docker images
remove: docker rmi <IMAGE ID>

Start the containers (one per node; --privileged plus /usr/sbin/init so systemd works inside, and each container's SSH port 22 is mapped to a different host port):

docker run -d --name master --privileged=true -p 10022:22  centos:centos7 /usr/sbin/init
docker run -d --name slave1 --privileged=true -p 10023:22  centos:centos7 /usr/sbin/init
docker run -d --name slave2 --privileged=true -p 10024:22  centos:centos7 /usr/sbin/init

Enter the master container

docker exec -it master /bin/bash

Check the container's OS version
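The screenshot is missing; inside the container the release can be checked with:

cat /etc/redhat-release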

Install commonly used packages and the build environment (run on all nodes)

yum install -y net-tools bash* iproute openssh-server openssh-clients vim lrzsz wget gcc-c++ pcre pcre-devel zlib zlib-devel ruby openssl openssl-devel patch bash-completion zlib.i686 libstdc++.i686 lsof unzip zip

Start SSH (run on all nodes)

systemctl start sshd

systemctl enable sshd

Host mappings on master, slave1, and slave2: add every node's IP address and hostname to /etc/hosts on each of the three containers (a sample mapping follows).
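The /etc/hosts screenshots are missing. On the default Docker bridge the containers usually get sequential addresses, and the NameNode log further down shows the master at 172.17.0.2, so the mapping is presumably similar to the following (verify each container's address with ip addr or docker inspect before adding it to /etc/hosts on every node):

172.17.0.2 master
172.17.0.3 slave1
172.17.0.4 slave2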

Change the root password (run on all nodes)

 passwd root

Configure passwordless SSH (openssh-server and openssh-clients were already installed above)

ssh-keygen -t rsa    (run on all three nodes)

ssh-copy-id master

ssh-copy-id slave1

ssh-copy-id slave2

Copy the Hadoop packages from the host machine into the master container

docker cp /root/software/ master:/opt/

Install the JDK

tar -xf jdk-8u171-linux-x64.tar.gz -C /opt/module/
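Unpacking the Hadoop tarball is not shown, but HADOOP_HOME below points at /opt/module/hadoop-2.7.6, so presumably:

tar -xf hadoop-2.7.6.tar.gz -C /opt/module/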

Environment variables (Hadoop's are configured together with the JDK's)

export JAVA_HOME=/opt/module/jdk1.8.0_171
export HADOOP_HOME=/opt/module/hadoop-2.7.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
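Assuming these lines are appended to /etc/profile (the original does not name the file), reload it afterwards:

source /etc/profile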

Configure Hadoop

 vi /opt/module/hadoop-2.7.6/etc/hadoop/hadoop-env.sh 

export JAVA_HOME=/opt/module/jdk1.8.0_171

vi /opt/module/hadoop-2.7.6/etc/hadoop/core-site.xml 

<property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
</property>
<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.7.6/tmp</value>
</property>
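hdfs-site.xml is not shown in the original, but the namenode -format output below writes the fsimage to /opt/module/hadoop-2.7.6/hdfs/name, which suggests something along these lines was also configured (the replication factor here is an assumption):

vi /opt/module/hadoop-2.7.6/etc/hadoop/hdfs-site.xml

<property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/hadoop-2.7.6/hdfs/name</value>
</property>
<property>
        <name>dfs.replication</name>
        <value>2</value>
</property>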

vi /opt/module/hadoop-2.7.6/etc/hadoop/mapred-site.xml

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

vi /opt/module/hadoop-2.7.6/etc/hadoop/yarn-site.xml  

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>

vi /opt/module/hadoop-2.7.6/etc/hadoop/slaves

slave1
slave2
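Syncing the JDK, Hadoop, and /etc/profile to the slaves is not shown in the original; with the paths used above it would typically be done with scp, for example:

scp -r /opt/module root@slave1:/opt/
scp -r /opt/module root@slave2:/opt/
scp /etc/profile root@slave1:/etc/
scp /etc/profile root@slave2:/etc/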

Format the NameNode

hadoop namenode -format

22/10/20 08:15:52 INFO namenode.FSImageFormatProtobuf: Image file /opt/module/hadoop-2.7.6/hdfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.

22/10/20 08:15:52 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

22/10/20 08:15:52 INFO util.ExitUtil: Exiting with status 0

22/10/20 08:15:52 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at 69d57f8abb48/172.17.0.2

************************************************************/

Start the cluster
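The start-up output was only in a screenshot; with Hadoop's sbin directory on PATH the usual commands are:

start-dfs.sh
start-yarn.sh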

jps script (check Java processes on all nodes at once)

 vi /usr/local/bin/xc.sh
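The script body is not shown in the original; a minimal sketch that runs jps on all three nodes over SSH could look like this (it sources /etc/profile because non-interactive SSH sessions do not load it):

#!/bin/bash
# print the Java processes on every node
for host in master slave1 slave2; do
        echo "--------- $host ---------"
        ssh "$host" "source /etc/profile && jps"
done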

chmod +x xc.sh
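With HDFS running, the NameNode web UI on port 50070 responds: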

[root@master bin]# curl master:50070
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="REFRESH" content="0;url=dfshealth.html" />
<title>Hadoop Administration</title>
</head>
</html>

A note on one error encountered

[root@master hadoop]# start-dfs.sh 
Starting namenodes on [master]
master: starting namenode, logging to /opt/module/hadoop-2.7.6/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /opt/module/hadoop-2.7.6/logs/hadoop-root-datanode-slave1.out
slave2: starting datanode, logging to /opt/module/hadoop-2.7.6/logs/hadoop-root-datanode-slave2.out
slave1: /opt/module/hadoop-2.7.6/bin/hdfs: line 28: which: command not found
slave1: dirname: missing operand
slave1: Try 'dirname --help' for more information.
slave1: /opt/module/hadoop-2.7.6/bin/hdfs: line 35: /opt/module/hadoop-2.7.6/../libexec/hdfs-config.sh: No such file or directory
slave1: /opt/module/hadoop-2.7.6/bin/hdfs: line 304: exec: : not found
slave2: /opt/module/hadoop-2.7.6/bin/hdfs: line 28: which: command not found
slave2: dirname: missing operand
slave2: Try 'dirname --help' for more information.
slave2: /opt/module/hadoop-2.7.6/bin/hdfs: line 35: /opt/module/hadoop-2.7.6/../libexec/hdfs-config.sh: No such file or directory
slave2: /opt/module/hadoop-2.7.6/bin/hdfs: line 304: exec: : not found
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/module/hadoop-2.7.6/logs/hadoop-root-secondarynamenode-master.out

Cause: the stripped-down CentOS image is missing basic tools and the build environment (for example, which is not installed on the slaves). Fix: install the dependencies on every node with the yum command shown earlier.
