Docker scripts for a Kerberos-authenticated Hadoop 3.2 distributed deployment

The setup comprises a DNS server, an NTP server, a Kerberos 5 server, Hadoop 3.2 HDFS, and Hadoop 3.2 YARN, each captured in a Dockerfile. Every resource is downloaded from the internet during the build, so nothing needs to exist locally beforehand; the built images start without any further modification and immediately form a cluster. The main files are listed below.
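The docker-compose.yml itself is not reproduced in this post. As a minimal sketch, assuming the build-context directories (./hadoop, ./gate, ./kerberos, ./master, ./slave are placeholders) and the image names, network and volume implied by the docker run commands at the end of this post, it could look roughly like this, used only to build the images:

# docker-compose.yml (sketch; build contexts are assumptions)
version: "2.4"
services:
  hadoop:                 # Hadoop base image (first Dockerfile below); build this first
    build: ./hadoop
    image: hadoop
  gate:                   # DNS + NTP server
    build: ./gate
    image: gate
  kerberos:               # Kerberos 5 KDC
    build: ./kerberos
    image: kerberos
  master:                 # FROM hadoop
    build: ./master
    image: master
  slave:                  # FROM hadoop
    build: ./slave
    image: slave

# The containers are started by hand (see the docker run commands below), so the
# user-defined network and the shared keytab volume are created separately:
#   docker network create --subnet 192.168.1.0/24 br0
#   docker volume create keytab

If docker-compose does not build the base image before master and slave, run docker-compose build hadoop first.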

Hadoop base image Dockerfile:

FROM centos:7

ARG HADOOP_DOWNLOAD=https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz 
ARG HADOOP_GROUP=hadoop
ARG HADOOP_VERSION=3.2.1

ENV JAVA_HOME /usr/lib/jvm/jre-1.8.0-openjdk
ENV HADOOP_HOME /opt/hadoop-$HADOOP_VERSION
ENV HADOOP_USER_HDFS hdfs
ENV HADOOP_USER_YARN yarn
ENV HADOOP_USER_MAPRED mapred

RUN 	yum -y update \
	&& yum install -y wget openssl net-tools \
	&& yum install -y openssh-server openssh-clients \
	&& yum install -y krb5-workstation \
	&& yum install -y java-1.8.0-openjdk \
	&& groupadd -g 1001 $HADOOP_GROUP \
	&& useradd -d /home/$HADOOP_USER_HDFS -m -u 1001 -g $HADOOP_GROUP -s /bin/bash -p $HADOOP_USER_HDFS $HADOOP_USER_HDFS \
	&& useradd -d /home/$HADOOP_USER_YARN -m -u 1002 -g $HADOOP_GROUP -s /bin/bash -p $HADOOP_USER_YARN $HADOOP_USER_YARN \
	&& useradd -d /home/$HADOOP_USER_MAPRED -m -u 1003 -g $HADOOP_GROUP -s /bin/bash -p $HADOOP_USER_MAPRED $HADOOP_USER_MAPRED \
	&& echo 'export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk' > /etc/profile.d/hadoop.sh  \
	&& echo "export HDFS_NAMENODE_USER=$HADOOP_USER_HDFS" >> /etc/profile.d/hadoop.sh \
	&& echo "export HDFS_DATANODE_USER=$HADOOP_USER_HDFS" >> /etc/profile.d/hadoop.sh \
	&& echo "export HDFS_SECONDARYNAMENODE_USER=$HADOOP_USER_HDFS" >> /etc/profile.d/hadoop.sh \
	&& source /etc/profile.d/hadoop.sh \
# config kerberos 
	&& sed -i 's/EXAMPLE.COM/DEV/' /etc/krb5.conf \
	&& sed -i 's/example.com/dev/' /etc/krb5.conf \
	&& sed -i '/default_realm/s/^#//' /etc/krb5.conf \
	&& sed -i '/DEV/s/^#//' /etc/krb5.conf \
	&& sed -i '/dev/s/^#//' /etc/krb5.conf \
	&& sed -i '/ }/s/^#//' /etc/krb5.conf \
	&& sed -i '/default_ccache_name/s/^ /# /' /etc/krb5.conf \
# config ssh
	&& ssh-keygen -t rsa -P '' -f /etc/ssh/ssh_host_rsa_key \
	&& ssh-keygen -t ecdsa -P '' -f /etc/ssh/ssh_host_ecdsa_key \
	&& ssh-keygen -t ed25519 -P '' -f /etc/ssh/ssh_host_ed25519_key \
	&& su - $HADOOP_USER_HDFS -c "ssh-keygen -t rsa -P '' -f /home/$HADOOP_USER_HDFS/.ssh/id_rsa" \
	&& su - $HADOOP_USER_YARN -c "ssh-keygen -t rsa -P '' -f /home/$HADOOP_USER_YARN/.ssh/id_rsa" \
	&& su - $HADOOP_USER_MAPRED -c "ssh-keygen -t rsa -P '' -f /home/$HADOOP_USER_MAPRED/.ssh/id_rsa" \
	&& cp /home/$HADOOP_USER_HDFS/.ssh/id_rsa.pub  /home/$HADOOP_USER_HDFS/.ssh/authorized_keys \
	&& cp /home/$HADOOP_USER_YARN/.ssh/id_rsa.pub  /home/$HADOOP_USER_YARN/.ssh/authorized_keys \
	&& cp /home/$HADOOP_USER_MAPRED/.ssh/id_rsa.pub  /home/$HADOOP_USER_MAPRED/.ssh/authorized_keys \
	&& echo 'StrictHostKeyChecking no' >> /etc/ssh/ssh_config \
# ADD   hadoop-3.2.1.tar.gz  /opt/
# config hadoop
	&& wget -P /tmp  $HADOOP_DOWNLOAD \
	&& tar -zxf /tmp/hadoop-$HADOOP_VERSION.tar.gz -C /opt/ \
	&& chown -R $HADOOP_USER_HDFS:$HADOOP_GROUP $HADOOP_HOME \
	&& mkdir /mnt/data \
	&& chown $HADOOP_USER_HDFS:$HADOOP_GROUP /mnt/data \
	&& mkdir /mnt/name \
	&& chown $HADOOP_USER_HDFS:$HADOOP_GROUP /mnt/name \
	&& echo export JAVA_HOME=$JAVA_HOME >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh \
	&& echo export HDFS_NAMENODE_USER=$HADOOP_USER_HDFS >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh \
	&& echo export HDFS_DATANODE_USER=$HADOOP_USER_HDFS >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh \
	&& echo export HDFS_SECONDARYNAMENODE_USER=$HADOOP_USER_HDFS >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh \
# core-site.xml
	&& sed -i '19a\  <property><name>hadoop.security.authentication</name><value>kerberos</value></property>' $HADOOP_HOME/etc/hadoop/core-site.xml \
	&& sed -i '19a\  <property><name>hadoop.security.authorization</name><value>true</value></property>' $HADOOP_HOME/etc/hadoop/core-site.xml \
	&& sed -i '19a\  <property><name>hadoop.tmp.dir</name><value>/tmp</value></property>' $HADOOP_HOME/etc/hadoop/core-site.xml \
	&& sed -i '19a\  <property><name>fs.defaultFS</name><value>hdfs://master.dev:9000</value></property>' $HADOOP_HOME/etc/hadoop/core-site.xml \
	&& sed -i '19a\  <property><name>hadoop.security.auth_to_local</name><value>RULE:[2:$1](namenode)s/.*/hdfs/ RULE:[2:$1](secondary)s/.*/hdfs/ RULE:[2:$1](datanode)s/.*/hdfs/ RULE:[2:$1](http)s/.*/hdfs/ RULE:[2:$1](resourcemanager)s/.*/yarn/ RULE:[2:$1](nodemanager)s/.*/yarn/ RULE:[2:$1](jobhistory)s/.*/mapred/ </value></property>' $HADOOP_HOME/etc/hadoop/core-site.xml \
# yarn-site.xml
	&& sed -i '15a\  <property><name>yarn.nodemanager.keytab</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/yarn-site.xml \
	&& sed -i '15a\  <property><name>yarn.nodemanager.principal</name><value>nodemanager/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/yarn-site.xml \
	&& sed -i '15a\  <property><name>yarn.resourcemanager.keytab</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/yarn-site.xml \
	&& sed -i '15a\  <property><name>yarn.resourcemanager.principal</name><value>resourcemanager/master.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/yarn-site.xml \
	&& sed -i '15a\  <property><name>yarn.resourcemanager.hostname</name><value>master.dev</value></property>' $HADOOP_HOME/etc/hadoop/yarn-site.xml \
# ssl
	&& cp $HADOOP_HOME/etc/hadoop/ssl-client.xml.example $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& sed -i '23c <value>/mnt/keytab/truststore.jks</value>' $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& sed -i '31c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& sed -i '53c <value>/mnt/keytab/keystore.jks</value>' $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& sed -i '61c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& sed -i '68c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-client.xml \
	&& cp $HADOOP_HOME/etc/hadoop/ssl-server.xml.example $HADOOP_HOME/etc/hadoop/ssl-server.xml \
	&& sed -i '23c <value>/mnt/keytab/truststore.jks</value>' $HADOOP_HOME/etc/hadoop/ssl-server.xml \
	&& sed -i '30c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-server.xml \
	&& sed -i '52c <value>/mnt/keytab/keystore.jks</value>' $HADOOP_HOME/etc/hadoop/ssl-server.xml \
	&& sed -i '59c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-server.xml \
	&& sed -i '66c <value>123456</value>' $HADOOP_HOME/etc/hadoop/ssl-server.xml \
# config startup script
	&& echo '#!/bin/bash' > /opt/startup.sh \
	&& echo '/usr/sbin/sshd -D &' >> /opt/startup.sh \
	&& chmod +x /opt/startup.sh
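After these sed edits, the Kerberos client configuration in the image should end up roughly as follows (a sketch based on the stock CentOS 7 /etc/krb5.conf template; the kdc/admin_server host name comes from that template with example.com rewritten to dev, and must be resolvable via the DNS served by the gate container). Commenting out default_ccache_name makes clients fall back to file-based credential caches, which Hadoop's Java GSS code handles better than the kernel keyring:

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = DEV
# default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 DEV = {
  kdc = kerberos.dev
  admin_server = kerberos.dev
 }

[domain_realm]
 .dev = DEV
 dev = DEV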

Hadoop master node Dockerfile:

FROM hadoop

ARG HADOOP_VERSION=3.2.1
ENV HADOOP_HOME /opt/hadoop-$HADOOP_VERSION
ENV HADOOP_USER_HDFS hdfs

RUN	sed -i '19a\  <property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.web.authentication.kerberos.principal</name><value>http/master.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.web.authentication.kerberos.keytab</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.keytab.file</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.kerberos.https.principal</name><value>http/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.kerberos.principal</name><value>datanode/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.data.dir</name><value>file:/mnt/data/</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.secondary.namenode.keytab.file</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.secondary.namenode.kerberos.principal</name><value>secondary/master.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.namenode.keytab.file</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.namenode.kerberos.https.principal</name><value>http/master.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.namenode.kerberos.principal</name><value>namenode/master.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.namenode.name.dir</name><value>file:/mnt/name/</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.permissions.supergroup</name><value>hadoop</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.block.access.token.enable</name><value>true</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.replication</name><value>3</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& printf "slave1.dev\nslave2.dev\nslave3.dev\nslave4.dev\n" > $HADOOP_HOME/etc/hadoop/workers \
	&& echo 'su - $HADOOP_USER_HDFS -c "$HADOOP_HOME/bin/hdfs namenode -format"' >> /opt/startup.sh \
	&& echo 'su - $HADOOP_USER_HDFS -c $HADOOP_HOME/sbin/start-dfs.sh' >> /opt/startup.sh \
	&& echo '/bin/sh -c "while true; do sleep 60; done"' >> /opt/startup.sh

EXPOSE 8088
EXPOSE 9871

CMD /opt/startup.sh
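For reference, the /opt/startup.sh baked into the master image ends up containing the lines below; $HADOOP_USER_HDFS and $HADOOP_HOME are expanded when the container starts, from the ENV values defined above. Note that hdfs namenode -format is invoked on every container start, so this image is meant for disposable test clusters rather than persistent data. The slave image, by contrast, only runs sshd: start-dfs.sh on the master ssh'es into every host listed in the workers file to start its DataNode.

#!/bin/bash
/usr/sbin/sshd -D &
su - $HADOOP_USER_HDFS -c "$HADOOP_HOME/bin/hdfs namenode -format"
su - $HADOOP_USER_HDFS -c $HADOOP_HOME/sbin/start-dfs.sh
/bin/sh -c "while true; do sleep 60; done"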

 


Hadoop slave node Dockerfile:


FROM hadoop

RUN	   sed -i '19a\  <property><name>dfs.data.transfer.protection</name><value>integrity</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.http.policy</name><value>HTTPS_ONLY</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.web.authentication.kerberos.keytab</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.web.authentication.kerberos.principal</name><value>http/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.keytab.file</name><value>/mnt/keytab/hadoop.keytab</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.kerberos.https.principal</name><value>http/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.kerberos.principal</name><value>datanode/slave.dev@DEV</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.datanode.data.dir</name><value>file:/mnt/data/</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.block.access.token.enable</name><value>true</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
	&& sed -i '19a\  <property><name>dfs.replication</name><value>3</value></property>' $HADOOP_HOME/etc/hadoop/hdfs-site.xml

EXPOSE 9865

CMD /usr/sbin/sshd -D 

After building the images with docker-compose build, start the gate (DNS/NTP) and Kerberos containers first, then the slave nodes, and finally the master node:

docker-compose build
docker run --network br0 --ip 192.168.1.5 -p53:53 -p123:123 --hostname gate --name gate -d gate 
docker run --network br0 --ip 192.168.1.7 -p88:88 -p750:750 --hostname kerberos --dns 192.168.1.5 --name kerberos -v keytab:/mnt/keytab -d kerberos

docker run --network br0 --ip 192.168.1.21 -p 9865:9865 --hostname slave1 --dns 192.168.1.5 --name slave1 -v keytab:/mnt/keytab:ro -d slave 
docker run --network br0 --ip 192.168.1.22 -p 9866:9865 --hostname slave2 --dns 192.168.1.5 --name slave2 -v keytab:/mnt/keytab:ro -d slave
docker run --network br0 --ip 192.168.1.23 -p 9867:9865 --hostname slave3 --dns 192.168.1.5 --name slave3 -v keytab:/mnt/keytab:ro -d slave
docker run --network br0 --ip 192.168.1.24 -p 9868:9865 --hostname slave4 --dns 192.168.1.5 --name slave4 -v keytab:/mnt/keytab:ro -d slave
docker run --network br0 --ip 192.168.1.25 -p 9869:9865 --hostname slave5 --dns 192.168.1.5 --name slave5 -v keytab:/mnt/keytab:ro -d slave
docker run --network br0 --ip 192.168.1.26 -p 9870:9865 --hostname slave6 --dns 192.168.1.5 --name slave6 -v keytab:/mnt/keytab:ro -d slave
docker run --network br0 --ip 192.168.1.10 -p 9871:9871 -p 8088:8088 --hostname master --dns 192.168.1.5 --name master -v keytab:/mnt/keytab:ro -d master
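The Hadoop containers mount the shared keytab volume at /mnt/keytab, which must already hold the hadoop.keytab, keystore.jks and truststore.jks files referenced by the configuration above. The kerberos and gate Dockerfiles are not reproduced in this post; the following is only a sketch of how that content could be produced inside the kerberos container. The principal names, file paths and the 123456 password are taken from the configuration above; the key alias and certificate DN are placeholders:

# Create the service principals and export them into one shared keytab
kadmin.local -q "addprinc -randkey namenode/master.dev@DEV"
kadmin.local -q "addprinc -randkey secondary/master.dev@DEV"
kadmin.local -q "addprinc -randkey resourcemanager/master.dev@DEV"
kadmin.local -q "addprinc -randkey http/master.dev@DEV"
kadmin.local -q "addprinc -randkey datanode/slave.dev@DEV"
kadmin.local -q "addprinc -randkey nodemanager/slave.dev@DEV"
kadmin.local -q "addprinc -randkey http/slave.dev@DEV"
kadmin.local -q "ktadd -k /mnt/keytab/hadoop.keytab namenode/master.dev@DEV secondary/master.dev@DEV resourcemanager/master.dev@DEV http/master.dev@DEV datanode/slave.dev@DEV nodemanager/slave.dev@DEV http/slave.dev@DEV"
chmod 644 /mnt/keytab/hadoop.keytab   # hdfs, yarn and mapred all read this one keytab in this setup

# Self-signed keystore/truststore for the HTTPS_ONLY web UIs (password 123456, as in ssl-*.xml)
keytool -genkeypair -alias hadoop -keyalg RSA -keysize 2048 -validity 3650 \
        -dname "CN=*.dev" -keystore /mnt/keytab/keystore.jks \
        -storepass 123456 -keypass 123456
keytool -exportcert -alias hadoop -keystore /mnt/keytab/keystore.jks \
        -storepass 123456 -file /tmp/hadoop.crt
keytool -importcert -noprompt -alias hadoop -file /tmp/hadoop.crt \
        -keystore /mnt/keytab/truststore.jks -storepass 123456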

Run the slave start command once for each slave node you need; note that a slave's host name must also be listed in the master image's workers file (the printf line above lists slave1.dev through slave4.dev), otherwise start-dfs.sh will not start a DataNode on it. Once the cluster is up, the web UIs are reachable on the published ports (9871 for the NameNode over HTTPS, 8088 for the ResourceManager).
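A quick way to check that Kerberos authentication is actually enforced: without a ticket, HDFS access should be rejected with a GSS/Kerberos credentials error; after kinit against the shared keytab it should succeed. The commands below are a sketch using one of the service principals created above:

# Without a ticket: fails with a GSSException (no valid Kerberos credentials)
docker exec -it master su - hdfs -c "/opt/hadoop-3.2.1/bin/hdfs dfs -ls /"

# Obtain a ticket from the shared keytab, then retry
docker exec -it master su - hdfs -c \
  "kinit -kt /mnt/keytab/hadoop.keytab namenode/master.dev@DEV && /opt/hadoop-3.2.1/bin/hdfs dfs -ls /"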

 

 
