Building a Basic Hadoop Cluster on Docker

Setting up a Hadoop cluster on Docker is not much different from a pseudo-distributed setup. In pseudo-distributed mode, every IP or hostname in the configuration files points to the same node; in fully distributed mode you only need to point the namenode and resourcemanager entries at the correct node IPs and list all datanode IPs or hostnames in the slaves file. The Hadoop distribution used here is hadoop-2.6.0-cdh5.4.5.tar.gz.

One container on Docker serves as the master node and runs the namenode and resourcemanager; the remaining containers are slaves, each running a datanode and a nodemanager. All nodes use exactly the same configuration files, so everything can be baked into a single Docker image and that one image can be used to run multiple containers. On this machine Docker assigns container IPs sequentially: the first container gets 172.17.0.2, and each container created after it gets the next address. The namenode and resourcemanager addresses in the configuration files can therefore be set to 172.17.0.2, and the remaining container IPs go into the slaves file.
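As a concrete illustration of that IP layout (the image name hadoop:2.6.0-cdh5.4.5 below is just a placeholder for whatever the built image is called), starting the containers in order gives the addresses used in the configs:

docker run -itd --name master hadoop:2.6.0-cdh5.4.5 /bin/bash   # first container, gets 172.17.0.2
docker run -itd --name slave1 hadoop:2.6.0-cdh5.4.5 /bin/bash   # gets 172.17.0.3
docker run -itd --name slave2 hadoop:2.6.0-cdh5.4.5 /bin/bash   # gets 172.17.0.4
docker run -itd --name slave3 hadoop:2.6.0-cdh5.4.5 /bin/bash   # gets 172.17.0.5
docker run -itd --name slave4 hadoop:2.6.0-cdh5.4.5 /bin/bash   # gets 172.17.0.6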

The configuration files in detail:

core-site.xml:

<configuration>
  <property>
		<name>fs.defaultFS</name>
		<value>hdfs://172.17.0.2:9000</value>
	</property>
	<property>
		<name>io.file.buffer.size</name>
		<value>131072</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>file:/data/tmp</value>
	</property>
</configuration>

hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hdfs/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml:

<configuration>
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
</configuration>

yarn-site.xml:

<configuration>
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>
<property>
	<name>yarn.resourcemanager.resource-tracker.address</name>
	<value>172.17.0.2:8031</value>
</property>
<property>
	<name>yarn.resourcemanager.address</name>
	<value>172.17.0.2:8032</value>
</property>
<property>
	<name>yarn.resourcemanager.scheduler.address</name>
	<value>172.17.0.2:8034</value>
</property>
<property>
	<name>yarn.resourcemanager.webapp.address</name>
	<value>172.17.0.2:8088</value>
</property>
</configuration>

slaves:

172.17.0.3
172.17.0.4
172.17.0.5
172.17.0.6

 

Detailed setup steps:

 

1. JDK installation

mkdir /usr/java
tar -zxvf jdk-7u79-linux-x64.tar.gz -C /usr/java

After extracting the JDK, the related environment variables need to be set. Configuring environment variables under Docker differs somewhat from the traditional approach; see my previous post, 《Docker镜像之Java环境搭建(四)》, for how to do it.
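Whatever mechanism is used to inject them (the referenced post covers the Docker-specific part), these are the variables Hadoop needs to see; the JDK path below is an assumption based on the directory the tarball normally unpacks to, so adjust it to the actual location:

# hypothetical JDK directory, adjust to the actual extracted path
export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH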

2. Passwordless SSH login

# typing cd with no arguments and pressing Enter returns to the user's home directory
cd
mkdir .ssh
ssh-keygen -t rsa
cd .ssh
cp id_rsa.pub authorized_keys
cd
chmod 700 .ssh
chmod 600 .ssh/*
After running the commands above, passwordless SSH login works. One inconvenience remains: the first connection to each host still asks for a "yes" confirmation. To avoid this, uncomment the StrictHostKeyChecking line in /etc/ssh/ssh_config and change ask to no.
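A one-line way to make that edit, assuming the stock commented-out entry "#   StrictHostKeyChecking ask" is present in /etc/ssh/ssh_config (otherwise simply append a "StrictHostKeyChecking no" line under the "Host *" section):

sed -i 's/^#[[:space:]]*StrictHostKeyChecking ask/    StrictHostKeyChecking no/' /etc/ssh/ssh_config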

3. Hadoop installation

mkdir /var/hadoop
tar -zxvf hadoop-2.6.0-cdh5.4.5.tar.gz -C /var/hadoop
cd /var/hadoop
mv hadoop-2.6.0-cdh5.4.5/  hadoop-2.6.0

mkdir -p /data/hdfs/name
mkdir -p /data/hdfs/data
mkdir -p /data/tmp
# Environment variables: normally these would go straight into .bashrc, but under Docker the configuration differs somewhat (see the JDK step above)
HADOOP_HOME=/var/hadoop/hadoop-2.6.0
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_HOME  PATH
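A quick sanity check (not part of the original steps) once HADOOP_HOME and PATH are in place:

# should print the 2.6.0-cdh5.4.5 build information if the variables are set correctly
hadoop version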

 

4. Configuration files

Simply copy the five files listed above (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and slaves) into $HADOOP_HOME/etc/hadoop.
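For example, if the five files sit in the current directory:

cp core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml slaves $HADOOP_HOME/etc/hadoop/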

5. Format the namenode

hdfs namenode -format
The line "Storage directory /data/hdfs/name has been successfully formatted" in the log indicates the format succeeded.
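To spot that message without reading the whole output, the command can be piped through grep; the redirect is there because the format log is written to stderr:

hdfs namenode -format 2>&1 | grep "successfully formatted"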

6. Start Hadoop

Start the cluster on the master node (172.17.0.2):
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
jps
Once startup succeeds, running jps on the master node shows the NameNode, ResourceManager and SecondaryNameNode processes (plus the JobHistoryServer started above), while the slaves show DataNode and NodeManager processes.
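Because passwordless SSH is already configured, the slave processes can be checked from the master in one loop; this assumes jps is on the PATH of the remote non-interactive shell (otherwise call it via its full path under the JDK's bin directory):

for ip in 172.17.0.3 172.17.0.4 172.17.0.5 172.17.0.6; do
    echo "== $ip =="
    ssh $ip jps
done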

7. Test

hdfs dfs -mkdir /wordcount
hdfs dfs -put wordcount.txt /wordcount/wordcount.txt
yarn jar /var/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.5.jar wordcount /wordcount/wordcount.txt /wordcount/output
The commands above count the words in wordcount.txt; if the job runs successfully, output like the following is printed (a command for reading back the result follows the log):
17/01/10 17:08:56 INFO mapreduce.Job: Running job: job_1484068103061_0001
17/01/10 17:09:04 INFO mapreduce.Job: Job job_1484068103061_0001 running in uber mode : false
17/01/10 17:09:04 INFO mapreduce.Job:  map 0% reduce 0%
17/01/10 17:09:12 INFO mapreduce.Job:  map 100% reduce 0%
17/01/10 17:09:19 INFO mapreduce.Job:  map 100% reduce 100%
17/01/10 17:09:20 INFO mapreduce.Job: Job job_1484068103061_0001 completed successfully
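To look at the computed word counts, read back the reducer output; part-r-00000 is the default output file name for a single-reducer job:

hdfs dfs -cat /wordcount/output/part-r-00000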

 

8. Problems encountered during the setup

8.1. When starting with start-dfs.sh, many datanodes failed to come up; it took several, sometimes a dozen or so, attempts before they all started. The log showed the following error:

2017-01-08 03:36:29,815 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.net.UnknownHostException: 26b72653d296: 26b72653d296: unknown error
	at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
	at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:187)
	at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:207)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2289)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2338)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2515)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2539)
Caused by: java.net.UnknownHostException: 26b72653d296: unknown error
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1500)

Judging from the log, the failure looked like a hostname-resolution problem. Replacing all hostnames in the slaves file with IP addresses fixed it.

8.2. start-dfs.sh reported that all datanodes started successfully, but the web UI on port 50070 showed no nodes at all. The log contained this error:

2017-01-08 05:39:48,880 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1803144284-192.168.1.1-1483792012201 (Datanode Uuid null) service to 172.17.0.2/172.17.0.2:9000 Datanode denied communication with namenode because hostname cannot be resolved (ip=172.17.0.3, hostname=172.17.0.3): DatanodeRegistration(0.0.0.0, datanodeUuid=c19b9c4d-0e64-43ba-b458-281ae1af4738, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-40010481-40ed-4830-8547-27eabc5af90f;nsid=517203298;c=0)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:904)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:5088)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1141)
	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:93)
	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28293)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

Adding the following to hdfs-site.xml solves it:

<property>
        <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
        <value>false</value>
</property>

8.3. Do not run hdfs namenode -format repeatedly. At one point the 50070 page showed only a single node whose IP changed on every refresh (unfortunately the corresponding log was not saved). Deleting everything under /data/hdfs/name, /data/hdfs/data and /data/tmp, reformatting, and restarting with start-dfs.sh made the 50070 page show all nodes again. After a later reformat, however, start-dfs.sh could no longer bring up the datanodes; the log is below, and the cleanup sketch after it shows the recovery steps:

2017-01-08 11:27:03,961 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /172.17.0.2:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:479)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1398)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1363)
	at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:228)
	at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:845)
	at java.lang.Thread.run(Thread.java:745)
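A minimal recovery sketch based on the steps described above. The usual cause of this error is a clusterID mismatch between the namenode and the datanodes after a reformat, so the data and tmp directories have to be cleared on every node, not just the master; the loop below reuses the slave IPs from the slaves file:

# run on the master: stop HDFS, wipe metadata/data dirs on all nodes, then reformat
stop-dfs.sh
rm -rf /data/hdfs/name/* /data/hdfs/data/* /data/tmp/*
for ip in 172.17.0.3 172.17.0.4 172.17.0.5 172.17.0.6; do
    ssh $ip "rm -rf /data/hdfs/data/* /data/tmp/*"
done
hdfs namenode -format
start-dfs.sh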

8.4. The wordcount example fails with an error

17/01/09 15:15:05 INFO mapreduce.Job: Job job_1483969899500_0004 failed with state FAILED due to: Application application_1483969899500_0004 failed 2 times due to Error launching appattempt_1483969899500_0004_000002. Got exception: java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "6696a6544d4c/172.17.0.2"; destination host is: "3e2a08477956":39523;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy32.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:119)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:254)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:786)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
        at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        ... 9 more
Caused by: java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:101)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
In the log above, the destination appears only as a hostname, destination host is: "3e2a08477956":39523, with no IP, whereas the local side shows both: local host is: "6696a6544d4c/172.17.0.2". So the hostname 3e2a08477956 apparently could not be resolved to an IP address. Adding the IPs and their corresponding hostnames to /etc/hosts fixed it, and the job ran successfully.
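A sketch of that fix; the hostnames below are placeholders (each container's real hostname can be read by running hostname inside it), and the same entries need to exist in /etc/hosts on every node:

cat >> /etc/hosts <<'EOF'
172.17.0.2  master-container-hostname
172.17.0.3  slave1-container-hostname
172.17.0.4  slave2-container-hostname
172.17.0.5  slave3-container-hostname
172.17.0.6  slave4-container-hostname
EOF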
