Important notes:
- After all Hadoop nodes have been created and started, if the NameNode on master will not come up:
- delete the data directory on every node
rm -rf /hadoop/tmp/dfs/data
- then reformat the NameNode
/usr/local/hadoop-2.7.0/bin/hadoop namenode -format
- SSH must be configured as the root user, i.e.
sudo -i
1. Pull the image and create a custom network
docker pull sequenceiq/hadoop-docker
docker network create --subnet=172.18.0.0/16 mynetwork
2. Run the containers
- Create the master node
docker run --name hadoop1 -d -h master \
--network mynetwork --ip 172.18.0.101 -p 50070:50070 -p 9000:9000 \
--add-host=master:172.18.0.101 --add-host=slave1:172.18.0.102 \
--add-host=slave2:172.18.0.103 sequenceiq/hadoop-docker
Parameter notes:
-h      set the container's hostname
--name  set the container name
-d      run the container in the background
- Create the slave1 and slave2 nodes in the same way
docker run --name hadoop2 -d -h slave1 \
--network mynetwork --ip 172.18.0.102 --add-host=master:172.18.0.101 \
--add-host=slave1:172.18.0.102 --add-host=slave2:172.18.0.103 \
sequenceiq/hadoop-docker
docker run --name hadoop3 -d -h slave2 \
--network mynetwork --ip 172.18.0.103 --add-host=master:172.18.0.101 \
--add-host=slave1:172.18.0.102 --add-host=slave2:172.18.0.103 \
sequenceiq/hadoop-docker
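The two slave containers differ only in container name, hostname, and IP, so they can also be generated in a loop. A sketch (a hypothetical `create_slaves` helper, assuming the image and network names used above; with `DRY_RUN=1`, the default, it only prints the commands so it can be inspected before touching Docker):

```shell
# Sketch: generate the docker run commands for the slave nodes.
# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them.
create_slaves() {
  i=2
  for slave in slave1 slave2; do
    ip="172.18.0.10${i}"
    cmd="docker run --name hadoop${i} -d -h ${slave} --network mynetwork --ip ${ip} --add-host=master:172.18.0.101 --add-host=slave1:172.18.0.102 --add-host=slave2:172.18.0.103 sequenceiq/hadoop-docker"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "$cmd"     # dry run: show what would be executed
    else
      eval "$cmd"
    fi
    i=$((i + 1))
  done
}

create_slaves
```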
3. Configuration
- (1). Enter the hadoop1 container
docker exec -it -u root hadoop1 bash
- (2). Configure SSH: generate a key pair
//start sshd
/etc/init.d/sshd start
//generate an RSA key pair
ssh-keygen -t rsa
//write the public key into authorized_keys
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys
- (3). Repeat steps (1) and (2) on every node. Then copy each node's public key into the other nodes' authorized_keys as well; in other words, every authorized_keys file ends up holding the same three public keys.
//copy the files from the containers to the CentOS host
docker cp hadoop1:/root/.ssh/authorized_keys /home/docker/authorized_keys_master
docker cp hadoop2:/root/.ssh/authorized_keys /home/docker/authorized_keys_slave1
docker cp hadoop3:/root/.ssh/authorized_keys /home/docker/authorized_keys_slave2
cat /home/docker/authorized_keys_master /home/docker/authorized_keys_slave1 /home/docker/authorized_keys_slave2 > /home/docker/authorized_keys
- (4). Copy the merged file from the host back into every container
docker cp /home/docker/authorized_keys hadoop1:/root/.ssh/authorized_keys
docker cp /home/docker/authorized_keys hadoop2:/root/.ssh/authorized_keys
docker cp /home/docker/authorized_keys hadoop3:/root/.ssh/authorized_keys
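The merge step in the middle of this dance can be made idempotent with `sort -u`, so re-running it never piles up duplicate keys. A runnable sketch using stand-in key files (the real files come from the `docker cp` commands above; the key strings below are placeholders):

```shell
# Sketch: merge per-node authorized_keys files into one deduplicated file.
workdir=$(mktemp -d)

# stand-ins for the three files pulled with docker cp (placeholder keys)
echo "ssh-rsa AAAA_master root@master" > "$workdir/authorized_keys_master"
echo "ssh-rsa AAAA_slave1 root@slave1" > "$workdir/authorized_keys_slave1"
echo "ssh-rsa AAAA_slave2 root@slave2" > "$workdir/authorized_keys_slave2"

# merge and deduplicate: safe to re-run after adding a node
sort -u "$workdir"/authorized_keys_* > "$workdir/authorized_keys"
```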
- (5). Enter the hadoop1 container and configure core-site.xml (skip if the property already exists)
//edit the file
vi /usr/local/hadoop-2.7.0/etc/hadoop/core-site.xml
//add the property
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
</property>
- (6). Enter the hadoop1 container and configure hdfs-site.xml. Note that the number of slaves must be greater than or equal to the replication factor, otherwise errors will occur (skip if the property already exists)
//edit the file
vi /usr/local/hadoop-2.7.0/etc/hadoop/hdfs-site.xml
//add the property
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
- (7). Enter the hadoop1 container and configure slaves
//edit the file
vi /usr/local/hadoop-2.7.0/etc/hadoop/slaves
//add the entries
slave1
slave2
- (8). Enter the hadoop1 container and configure masters
//edit the file
vi /usr/local/hadoop-2.7.0/etc/hadoop/masters
//add the entry
master
- (9). In the hadoop1 container, mapred-site.xml (which sets MapReduce to run on YARN) is already configured in the image, so configure yarn-site.xml instead
//edit the file
vi /usr/local/hadoop-2.7.0/etc/hadoop/yarn-site.xml
//add the properties
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8089</value>
</property>
- (10). Send the updated configuration files to the other nodes, slave1 and slave2
scp /usr/local/hadoop-2.7.0/etc/hadoop/* 172.18.0.102:/usr/local/hadoop-2.7.0/etc/hadoop/
scp /usr/local/hadoop-2.7.0/etc/hadoop/* 172.18.0.103:/usr/local/hadoop-2.7.0/etc/hadoop/
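With more slaves, distributing the configuration is easier as a loop over the node IPs. A dry-run sketch (a hypothetical `push_conf` helper; with `DRY_RUN=1`, the default, it prints the scp commands instead of executing them, since the real transfer needs the running containers):

```shell
# Sketch: push the hadoop conf directory to every slave node.
# DRY_RUN=1 (default) prints the commands; DRY_RUN=0 executes them.
push_conf() {
  conf_dir=/usr/local/hadoop-2.7.0/etc/hadoop
  for ip in 172.18.0.102 172.18.0.103; do
    cmd="scp ${conf_dir}/* ${ip}:${conf_dir}/"
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$cmd"; else eval "$cmd"; fi
  done
}

push_conf
```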
4. Run Hadoop
- Start Hadoop on the master node; the slave nodes will be started automatically.
- Format the NameNode on master
/usr/local/hadoop-2.7.0/bin/hadoop namenode -format
- Start the cluster on master: stop everything first, then start
/usr/local/hadoop-2.7.0/sbin/stop-all.sh
/usr/local/hadoop-2.7.0/sbin/start-all.sh
- Run jps to check the processes; if they are listed, the cluster is up
jps
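What jps should show depends on the node: with this configuration and start-all.sh, the master runs NameNode, SecondaryNameNode, and ResourceManager, while each slave runs DataNode and NodeManager. A small hypothetical checker that scans jps output for the expected daemons:

```shell
# Sketch: check that the expected daemons appear in `jps` output.
# Usage on a node:  jps | check_daemons NameNode SecondaryNameNode ResourceManager
check_daemons() {
  output=$(cat)              # read jps output from stdin
  for daemon in "$@"; do
    if ! echo "$output" | grep -q "$daemon"; then
      echo "missing: $daemon"
      return 1
    fi
  done
  echo "all daemons running"
}

# example with mocked jps output:
printf '1001 NameNode\n1002 SecondaryNameNode\n1003 ResourceManager\n' |
  check_daemons NameNode SecondaryNameNode ResourceManager
# prints "all daemons running"
```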
5. Test
/usr/local/hadoop-2.7.0/bin/hadoop fs -mkdir /test
/usr/local/hadoop-2.7.0/bin/hadoop fs -ls /
Visit host-IP:50070 in a browser.
The web UI shows the NameNode, DataNodes, stored files, and other information.