1. Base image
java:openjdk-8u111-jre
JRE paths:
/usr/lib/jvm/java-8-openjdk-amd64
/usr/lib/jvm/java-1.8.0-openjdk-amd64
2. Install SSH
docker run -it --name hadoop-test java:openjdk-8u111-jre bash
apt-get update
apt-get install openssh-server
service ssh start
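Before moving on, it is worth confirming that sshd actually started (a minimal check; on this Debian-based image the server package is openssh-server, which also pulls in openssh-client for ssh and ssh-keygen):
pgrep -l sshd        # should print the sshd process id
service ssh status   # fallback check via the init script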
3. Generate SSH keys
ssh-keygen -t rsa
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
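A quick test that key-based login works inside the container (a sketch; StrictHostKeyChecking is overridden on the command line here because the config change only comes in the next step):
ssh -o StrictHostKeyChecking=no root@localhost hostname
# should print the container hostname without prompting for a password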
4. Adjust the SSH configuration
In the client configuration file /etc/ssh/ssh_config, change StrictHostKeyChecking ask to StrictHostKeyChecking no, and GSSAPIAuthentication yes to GSSAPIAuthentication no.
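The same change can be scripted, which is convenient when rebuilding the image from scratch (a sketch; the patterns assume the two directives appear in /etc/ssh/ssh_config, possibly commented out, as they do on this image):
sed -i 's/^[# ]*StrictHostKeyChecking.*/    StrictHostKeyChecking no/' /etc/ssh/ssh_config
sed -i 's/^[# ]*GSSAPIAuthentication.*/    GSSAPIAuthentication no/' /etc/ssh/ssh_config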
5. Download the Hadoop package from the official site and extract it under /home in the container
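For reference, the download and extraction can also be done directly inside the container (a sketch; the Apache archive URL below is one possible download location for the 2.10.1 release, and wget may first need to be installed with apt-get install wget):
cd /home
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.10.1/hadoop-2.10.1.tar.gz
tar -xzf hadoop-2.10.1.tar.gz   # results in /home/hadoop-2.10.1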
6. Write run.sh and place it under /home
#!/bin/bash
# restart sshd so the cluster nodes can reach each other over SSH
service ssh restart
cd /home/hadoop-2.10.1/sbin
# only the master starts the cluster; it brings up the slaves via SSH
if [[ "$TYPE" == "master" ]]; then
    ./start-all.sh
fi
# keep the container in the foreground so it does not exit
while true; do
    sleep 1m
done
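The ENTRYPOINT in step 8 invokes the script through bash, so the execute bit is not strictly required, but marking it executable does no harm:
chmod +x /home/run.sh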
7. Commit the image
docker commit hadoop-test hadoop:2.10.1
8. Write the Dockerfile
FROM hadoop:2.10.1
WORKDIR /home
EXPOSE 9000
ENTRYPOINT ["bash", "run.sh"]
9. Build the image:
docker build . --rm --tag=hadoop:2.10.1-u1
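A quick sanity check that the committed and rebuilt images are both present (a small sketch):
docker images | grep hadoop
# expect both hadoop:2.10.1 and hadoop:2.10.1-u1 in the listing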
10. Configure Hadoop
Reference: https://blog.csdn.net/m0_67390969/article/details/126553657
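The linked article covers the configuration step by step. As a rough illustration only, the sketch below writes a minimal HDFS configuration into the host directory that step 12 later mounts at /home/hadoop-2.10.1/etc/hadoop. Every property value here is an assumption chosen to match the hostnames, port 9000, and the /hadoop/tmp, /hadoop/name and /hadoop/data volume paths used elsewhere in this post; YARN/MapReduce settings are omitted, so follow the reference for the full configuration. It also assumes the original hadoop-test container still exists.
# seed the host conf dir with the defaults shipped in the image first
mkdir -p /opt/docker/hadoop/conf
docker cp hadoop-test:/home/hadoop-2.10.1/etc/hadoop/. /opt/docker/hadoop/conf/
# core-site.xml: default filesystem and temp dir
cat > /opt/docker/hadoop/conf/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://master:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/hadoop/tmp</value></property>
</configuration>
EOF
# hdfs-site.xml: NameNode/DataNode directories and replication factor
cat > /opt/docker/hadoop/conf/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>/hadoop/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>/hadoop/data</value></property>
  <property><name>dfs.replication</name><value>2</value></property>
</configuration>
EOF
# slaves: Hadoop 2.x reads the worker hostnames from etc/hadoop/slaves
printf 'slave1\nslave2\n' > /opt/docker/hadoop/conf/slaves
# hadoop-env.sh must point at the JRE path noted in step 1
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> /opt/docker/hadoop/conf/hadoop-env.sh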
11. Format the NameNode
Enter /home/hadoop-2.10.1/bin and run:
./hdfs namenode -format
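The format has to see the final configuration and the mounted name directory that the master will use, so one option is to run it through the master container once it is up (a sketch; the container name matches step 12, and formatting is only needed once):
docker exec -it hadoop-master /home/hadoop-2.10.1/bin/hdfs namenode -format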
12. Start the containers
# create a dedicated bridge network for the cluster
docker network create -d bridge hadoop
# start the slave containers
docker run -d --name hadoop-slave1 --net=hadoop -h slave1 -e TYPE="slave" \
  -v /opt/docker/hadoop/conf:/home/hadoop-2.10.1/etc/hadoop/ \
  -v /opt/docker/hadoop/logs:/home/hadoop-2.10.1/logs \
  -v /opt/docker/hadoop/slave1/tmp:/hadoop/tmp \
  -v /opt/docker/hadoop/slave1/data:/hadoop/data \
  -v /opt/docker/hadoop/slave1/name:/hadoop/name \
  -v /opt/docker/hadoop/ssh/authorized_keys:/root/.ssh/authorized_keys \
  hadoop:2.10.1-u1
docker run -d --name hadoop-slave2 --net=hadoop -h slave2 -e TYPE="slave" \
  -v /opt/docker/hadoop/conf:/home/hadoop-2.10.1/etc/hadoop/ \
  -v /opt/docker/hadoop/logs:/home/hadoop-2.10.1/logs \
  -v /opt/docker/hadoop/slave2/tmp:/hadoop/tmp \
  -v /opt/docker/hadoop/slave2/data:/hadoop/data \
  -v /opt/docker/hadoop/slave2/name:/hadoop/name \
  -v /opt/docker/hadoop/ssh/authorized_keys:/root/.ssh/authorized_keys \
  hadoop:2.10.1-u1
# start the master container (publishes the NameNode RPC port 9000)
docker run -d --name hadoop-master -p 9000:9000 --net=hadoop -h master -e TYPE="master" \
  -v /opt/docker/hadoop/conf:/home/hadoop-2.10.1/etc/hadoop/ \
  -v /opt/docker/hadoop/logs:/home/hadoop-2.10.1/logs \
  -v /opt/docker/hadoop/master/tmp:/hadoop/tmp \
  -v /opt/docker/hadoop/master/data:/hadoop/data \
  -v /opt/docker/hadoop/master/name:/hadoop/name \
  -v /opt/docker/hadoop/ssh/authorized_keys:/root/.ssh/authorized_keys \
  hadoop:2.10.1-u1
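Once all three containers are running, a quick way to check that the DataNodes registered with the NameNode (a sketch using the standard dfsadmin report):
docker exec hadoop-master /home/hadoop-2.10.1/bin/hdfs dfsadmin -report
# the report should list slave1 and slave2 as live datanodes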
13. Verify the cluster
Enter /home/hadoop-2.10.1/bin and run the following commands:
./hadoop fs -ls /
./hadoop fs -mkdir /tmp
./hadoop fs -ls /
./hadoop fs -df /
If the second listing shows the newly created /tmp directory and the df output reports the cluster capacity, HDFS is working correctly.
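If the commands instead fail with "Call From ... to master:9000 failed on connection exception: java.net.ConnectException: Connection refused" (see the last reference below), the NameNode usually did not start. A sketch of the first things to check, assuming the log file follows the usual hadoop-<user>-namenode-<hostname>.log naming:
# the NameNode log is visible on the host through the logs volume
tail -n 50 /opt/docker/hadoop/logs/hadoop-root-namenode-master.log
# a missed or failed format (step 11) is a common cause; re-run it and restart the master
docker restart hadoop-master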

References:
https://blog.csdn.net/m0_67390969/article/details/126553657
https://devpress.csdn.net/cloudnative/62f442f67e668234661883e1.html
https://www.shuzhiduo.com/A/D854NPDVzE/
https://blog.csdn.net/zhangvalue/article/details/103748438
https://blog.csdn.net/wejack/article/details/126368643
Hadoop: Call From / to :9000 failed on connection exception: java.net.ConnectException: Connection refused | 码农家园

This post walks through installing and configuring a Hadoop environment inside Docker containers: starting from a Java base image, installing SSH, generating keys, adjusting the SSH configuration, downloading and extracting the Hadoop package, writing the startup script, committing and building the image, starting the containers, and dealing with possible connection problems.