Hadoop 2.6.0 on Ubuntu 14.04 in Docker

Start a base container

docker run -it ubuntu:14.04

The container is now running; next comes installing Java and Hadoop and doing the related configuration.

Switch apt sources: Aliyun Ubuntu 14.04 mirror

Replace the contents of /etc/apt/sources.list with the following entries:

deb http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
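
To apply the mirror switch inside the container, one minimal approach is to back up the original list, overwrite it with the entries above, and refresh the package index (a sketch; the heredoc should contain all ten Aliyun lines listed above):

cp /etc/apt/sources.list /etc/apt/sources.list.bak
cat > /etc/apt/sources.list <<'EOF'
deb http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
# ...paste the remaining Aliyun entries from above here...
EOF
apt-get update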

Installing Java

Run the following commands in order:

sudo apt-get -y install software-properties-common python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install oracle-java8-installer

Notes:

  • This installs Java 8.
  • The installer uses Ubuntu's official sources by default; if the download is slow, switch to a faster mirror (for example, the Aliyun sources above).
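
Once the installer finishes, a quick sanity check (both commands should report a 1.8 version):

java -version
javac -version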

It is also worth saving the image with Java installed as a snapshot, so other images can be built on top of it later. The commands:

root@122a2cecdd14:~# exit
docker commit -m "java install" 122a2cecdd14 ubuntu:java

In the command above, 122a2cecdd14 is the ID of the current container, and ubuntu:java names the new image: ubuntu is the repository name and java is the tag.

How to find the container ID:

  • A quick shortcut is the string after the @ in the container's shell prompt; this only works if the container was started without an explicit hostname.
  • Or list the running containers with docker ps on the host and read the ID from the output (see the sketch below).
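
For example, run this on the host (the first column of docker ps output is the container ID; -q prints only the IDs):

docker ps
docker ps -q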

Installing Hadoop

We're slowly getting to the real topic. O(∩_∩)O~

Start a container from the image that already has Java installed:

docker run -ti ubuntu:java

With the container up, we can install Hadoop. Here we download the release tarball directly with wget.

1. Install wget first:

sudo apt-get -y install wget

2. Download and unpack the release:

root@8ef06706f88d:~# cd /usr/local
root@8ef06706f88d:/usr/local# wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
root@8ef06706f88d:/usr/local# tar xvzf hadoop-2.6.0.tar.gz
root@8ef06706f88d:/usr/local# mv hadoop-2.6.0 hadoop
root@8ef06706f88d:/usr/local# rm hadoop-2.6.0.tar.gz

Note: the version installed here is Hadoop 2.6.0. If you need a different version, find its link on the Apache download mirrors and adjust the command accordingly.

3. Set environment variables

Edit the ~/.bashrc file and append the following at the end:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin

Note: Java was installed with apt-get; if you are not sure where it ended up, check with the command below:

root@8ef06706f88d:~# update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-8-oracle/jre/bin/java
Nothing to configure.
root@8ef06706f88d:~#   
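
After editing ~/.bashrc, the new variables can be applied to the current shell and verified, for example:

source ~/.bashrc
echo $JAVA_HOME      # should print /usr/lib/jvm/java-8-oracle
hadoop version       # should report Hadoop 2.6.0 once PATH and JAVA_HOME are set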

4. Configure Hadoop

Now we edit Hadoop's configuration files, mainly core-site.xml, hdfs-site.xml, and mapred-site.xml.

Before editing them, run the following commands:

root@8ef06706f88d:~# cd $HADOOP_HOME/
root@8ef06706f88d:/usr/local/hadoop# mkdir tmp
root@8ef06706f88d:/usr/local/hadoop# cd tmp/
root@8ef06706f88d:/usr/local/hadoop/tmp# pwd
/usr/local/hadoop/tmp
root@8ef06706f88d:/usr/local/hadoop/tmp# cd ../
root@8ef06706f88d:/usr/local/hadoop# mkdir namenode
root@8ef06706f88d:/usr/local/hadoop# cd namenode/
root@8ef06706f88d:/usr/local/hadoop/namenode# pwd
/usr/local/hadoop/namenode
root@8ef06706f88d:/usr/local/hadoop/namenode# cd ../
root@8ef06706f88d:/usr/local/hadoop# mkdir datanode
root@8ef06706f88d:/usr/local/hadoop# cd datanode/
root@8ef06706f88d:/usr/local/hadoop/datanode# pwd
/usr/local/hadoop/datanode
root@8ef06706f88d:/usr/local/hadoop/datanode# cd $HADOOP_CONFIG_HOME/
root@8ef06706f88d:/usr/local/hadoop/etc/hadoop# cp mapred-site.xml.template mapred-site.xml
root@8ef06706f88d:/usr/local/hadoop/etc/hadoop# nano hdfs-site.xml

Three directories are created here; they will be referenced by the configuration below:

  1. tmp: Hadoop's temporary directory
  2. namenode: the NameNode storage directory
  3. datanode: the DataNode storage directory
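
The directory creation can also be done in one step, equivalent to the interactive session above:

cd $HADOOP_HOME
mkdir -p tmp namenode datanode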

1) core-site.xml configuration

Edit core-site.xml with nano core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
            <name>hadoop.tmp.dir</name>
            <value>/usr/local/hadoop/tmp</value>
            <description>A base for other temporary directories.</description>
    </property>

    <property>
            <name>fs.default.name</name>
            <value>hdfs://master:9000</value>
            <final>true</final>
            <description>The name of the default file system.  A URI whose
            scheme and authority determine the FileSystem implementation.  The
            uri's scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class.  The uri's authority is used to
            determine the host, port, etc. for a filesystem.</description>
    </property>
</configuration>

Notes:

  • hadoop.tmp.dir points at the tmp directory created with the commands above.
  • fs.default.name is set to hdfs://master:9000, i.e. the host of the Master node (we will configure that node later when building the cluster; it is written here in advance).

2) hdfs-site.xml configuration

Edit the file with nano hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <final>true</final>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified in create time.
        </description>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/namenode</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/datanode</value>
        <final>true</final>
    </property>
</configuration>

Notes:

  • The cluster we build later will have one Master node and two Slave nodes, so dfs.replication is set to 2.
  • dfs.namenode.name.dir and dfs.datanode.data.dir point at the NameNode and DataNode directories created earlier.

3) mapred-site.xml configuration

The Hadoop distribution only ships a mapred-site.xml.template, which is why we ran cp mapred-site.xml.template mapred-site.xml earlier to create mapred-site.xml. Now edit that file with nano mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
        <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map
        and reduce task.
        </description>
    </property>
</configuration>

There is only one property here, mapred.job.tracker, which points at the master node.

4) Set JAVA_HOME in hadoop-env.sh

Open the file with nano hadoop-env.sh and set the following:

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
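
If you prefer a non-interactive edit, a sed one-liner like this should work (a sketch; it assumes the file's JAVA_HOME line starts with "export JAVA_HOME="):

sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-8-oracle|' $HADOOP_CONFIG_HOME/hadoop-env.sh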

5. Format the NameNode

This is an important step: run hadoop namenode -format.
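
For reference, in Hadoop 2.x the hadoop namenode script is deprecated in favor of the hdfs command; either form works here:

hdfs namenode -format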

6. Install SSH

A cluster setup naturally needs SSH; password-less access makes it much more convenient to reach the cluster machines.

root@8ef06706f88d:~# sudo apt-get -y install ssh
root@8ef06706f88d:~# service ssh start

Because we are running inside a Docker container, the SSH service will not start automatically; after the container starts we would have to launch it by hand with /usr/sbin/sshd, which quickly gets tedious. To make life easier, add the command to ~/.bashrc. Edit the file with nano ~/.bashrc (install nano first if it is missing, or use vi) and append the following:

#autorun
/usr/sbin/sshd
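
One hedged caveat: on some base images sshd will not start without its privilege separation directory, /var/run/sshd. If you hit that error, create the directory in ~/.bashrc just before launching sshd:

mkdir -p /var/run/sshd   # only needed if sshd complains about a missing privilege separation directory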

7. Generate SSH keys

root@8ef06706f88d:/# cd ~/
root@8ef06706f88d:~# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
root@8ef06706f88d:~# cd .ssh
root@8ef06706f88d:~/.ssh# cat id_dsa.pub >> authorized_keys

Note: the idea here is to generate the key pair once and bake it into the image, so we don't have to generate keys in every container and copy public keys back and forth. This is only for learning; in a real deployment you would not do this, because every container would end up with the same key pair!
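
For completeness, in a per-container setup each node would generate its own key pair and push its public key to the others, for example with ssh-copy-id (a sketch only; it assumes the target account accepts password logins, which is not the default sshd configuration for root on Ubuntu):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id root@slave1
ssh-copy-id root@slave2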

8. Save an image snapshot

Here we save the image with Hadoop installed as a snapshot.

root@8ef06706f88d:~# exit
king@king:~$ docker commit -m "hadoop install" 8ef06706f88d ubuntu:hadoop

Building the Hadoop distributed cluster

Now for the important part!

A basic Hadoop cluster uses one master node, which runs the namenode, secondarynamenode and jobtracker (renamed in newer versions) daemons. The other two nodes are slaves, one of which exists purely for redundancy; without redundancy it could hardly be called Hadoop. So a simulated Hadoop cluster needs at least three nodes.

With the Hadoop image built, the next step is to use it to bring up the Master and Slave nodes:

Node     hostname   IP         Roles                                     Docker launch command
Master   master     10.0.0.5   namenode, secondaryNamenode, jobTracker   docker run -ti -h master ubuntu:hadoop
Slave    slave1     10.0.0.6   datanode, taskTracker                     docker run -ti -h slave1 ubuntu:hadoop
Slave    slave2     10.0.0.7   datanode, taskTracker                     docker run -ti -h slave2 ubuntu:hadoop

Starting the Docker containers

As a reminder, containers are started with the run command:

docker run -ti ubuntu:hadoop

There are a couple of problems here:

  1. The IP address inside a Docker container is assigned automatically at startup and cannot be changed by hand (a way to read the assigned address from the host is sketched after this list).
  2. hostname and hosts changes made inside a container only last for that container's lifetime; if the container exits and is restarted, both are reset, and neither can be baked into an image with commit.
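
For example, the address assigned to a container can be read from the host with docker inspect (a sketch; <container-id> is whatever docker ps shows for the container, and the template below uses the classic single-network field):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id>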

When building the cluster we need to set each node's hostname and its hosts entries. The hostname can be set directly with the -h option of docker run. The hosts side is more awkward: docker run's --link option can inject name resolution, but our cluster needs every pair of machines to be able to ping each other, and when one container has not started yet its IP is unknown, so --link is not practical for this scenario. Solving it properly would probably require a dedicated name-resolution service, used via the --dns option (see the Docker documentation).

Since this is just for learning, we won't go to that much trouble and will simply edit hosts by hand. It does have to be redone every time the containers restart; my Docker knowledge is limited and I haven't found a fix for this yet. There is surely a better way, and I'd be grateful to anyone who can point it out.

Start the master container

docker run -ti -h master ubuntu:hadoop

Start the slave1 container

docker run -ti -h slave1 ubuntu:hadoop

Start the slave2 container

docker run -ti -h slave2 ubuntu:hadoop

Configure hosts

  1. Get each node's IP with ifconfig. The addresses depend on your environment; on my machine they were: 
    • master: 10.0.0.5
    • slave1: 10.0.0.6
    • slave2: 10.0.0.7
  2. Run sudo nano /etc/hosts on each node and write the entries below into the file, adjusting the IP addresses to your own (a one-shot variant follows after this list):

    10.0.0.5        master
    10.0.0.6        slave1
    10.0.0.7        slave2
    
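
If you would rather not open an editor on every node, the same entries can be appended in one shot (adjust the addresses to whatever ifconfig reported):

cat >> /etc/hosts <<'EOF'
10.0.0.5        master
10.0.0.6        slave1
10.0.0.7        slave2
EOF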

Configure slaves

Next we declare which nodes are slaves. Older Hadoop versions had both a masters file and a slaves file; newer versions keep only the slaves file.

In the master container, run:

root@master:~# cd $HADOOP_CONFIG_HOME/
root@master:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# nano slaves 

Write the hostnames of the slave nodes into the file:

slave1
slave2
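
Equivalently, without an editor:

printf 'slave1\nslave2\n' > $HADOOP_CONFIG_HOME/slaves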

Start Hadoop

On the master node, run start-all.sh to start Hadoop.

The exciting moment...


If you see output like the following, the startup succeeded:

root@master:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/hadoop-root-namenode-master.out
slave1: starting datanode, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-slave1.out
slave2: starting datanode, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-slave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/yarn--resourcemanager-master.out
slave1: starting nodemanager, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-slave1.out
slave2: starting nodemanager, logging to /root/soft/apache/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-slave2.out

Run jps on each node; the results:

master node

root@master:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# jps
1223 Jps
992 SecondaryNameNode
813 NameNode
1140 ResourceManager

slave1 node

root@slave1:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# jps
258 NodeManager
352 Jps
159 DataNode

slave2 node

root@slave2:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# jps
371 Jps
277 NodeManager
178 DataNode

Next, on the master node, use hdfs dfsadmin -report to check whether the DataNodes started correctly:

root@master:~/soft/apache/hadoop/hadoop-2.6.0/etc/hadoop# hdfs dfsadmin -report
Configured Capacity: 167782006784 (156.26 GB)
Present Capacity: 58979344384 (54.93 GB)
DFS Remaining: 58979295232 (54.93 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 10.0.0.7:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 83891003392 (78.13 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 54401331200 (50.67 GB)
DFS Remaining: 29489647616 (27.46 GB)
DFS Used%: 0.00%
DFS Remaining%: 35.15%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Feb 28 07:27:05 UTC 2015


Name: 10.0.0.6:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 83891003392 (78.13 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 54401331200 (50.67 GB)
DFS Remaining: 29489647616 (27.46 GB)
DFS Used%: 0.00%
DFS Remaining%: 35.15%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sat Feb 28 07:27:05 UTC 2015

You can also check the NameNode and DataNode status in the web UI at http://10.0.0.5:50070/ (my host machine has no hosts entry for master, so I can only reach it by IP; replace the IP with your own master container's address):
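
From the host, a quick reachability check of the NameNode web UI might look like this (again, replace the address with your master container's IP):

curl -s http://10.0.0.5:50070/ | head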

