The official installation documentation is here:
http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0/hadoop-project-dist/hadoop-common/SingleCluster.html
Environment:
- CentOS 7.3 (for setting up a CentOS VM, see https://blog.csdn.net/babyxue/article/details/80970526)
- JDK 1.8, with the Java environment variables configured
- Hadoop 2.6.0-cdh5.7.0
- ssh
- rsync
1. Download and install Hadoop
[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
[root@localhost /usr/local/src]# tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C /usr/local/
Note: if the download is very slow on Linux, you can fetch the same link on Windows with a download manager (e.g. Thunder/Xunlei) and then upload the tarball to the Linux machine, which is usually faster.
After extraction, change into the extracted directory; the Hadoop directory layout looks like this:
[root@localhost /usr/local/src]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0]# ls
bin cloudera examples include libexec NOTICE.txt sbin src
bin-mapreduce1 etc examples-mapreduce1 lib LICENSE.txt README.txt share
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0]#
A brief note on what a few of these directories contain:
- bin: executable binaries;
- etc: configuration files;
- sbin: scripts for starting and stopping the services;
- share: jar packages and documentation.
2. Configuration
2.1. Configure the Java environment variable
With the steps above, Hadoop itself is installed. Next, edit its configuration file to set JAVA_HOME:
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0]# cd etc/
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0/etc]# cd hadoop
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8/  # adjust to match your own JDK path
2.2. Configure the core configuration files
Since we are building a single-node pseudo-distributed environment, two more files need to be edited: core-site.xml and hdfs-site.xml, as follows:
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim core-site.xml  # add the following content
<configuration>
  <property>
    <!-- the default filesystem URI (address and port) -->
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.77.130:8020</value>
  </property>
  <property>
    <!-- the directory for Hadoop's temporary files -->
    <name>hadoop.tmp.dir</name>
    <value>/data/tmp/</value>
  </property>
</configuration>
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# mkdir -p /data/tmp/
[root@localhost /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hdfs-site.xml  # add the following content
<configuration>
  <property>
    <!-- keep only one replica of each block, since this is a single node -->
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
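As a sketch, the two XML snippets above can also be written with heredocs and sanity-checked with Python's XML parser instead of being typed by hand in vim. The address 192.168.77.130 is this tutorial's VM; adjust it for your environment. A temporary directory stands in for etc/hadoop so the snippet is safe to run anywhere:

```shell
# Sketch: generate core-site.xml / hdfs-site.xml via heredocs, then verify
# each file is well-formed XML (python3 exits non-zero on a parse error).
conf_dir=$(mktemp -d)   # stand-in for /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop

cat > "$conf_dir/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.77.130:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/tmp/</value>
  </property>
</configuration>
EOF

cat > "$conf_dir/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

for f in "$conf_dir"/*.xml; do
  python3 -c "import xml.dom.minidom,sys; xml.dom.minidom.parse(sys.argv[1])" "$f" \
    && echo "$(basename "$f"): OK"
done
```

Note that comments inside these files must use XML syntax (`<!-- -->`); shell-style `#` comments would make the file unparseable.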
2.3. Configure passwordless SSH login
Next, generate a key pair and set up passwordless login to the local machine; this step is required for a pseudo-distributed setup:
[root@localhost ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
c2:41:89:65:bd:04:9e:3e:3f:f9:a7:51:cd:e9:cf:1e root@localhost
The key's randomart image is:
+--[ DSA 1024]----+
| o=+ |
| .+..o |
| +. . |
| o .. o . |
| = S . + |
| + . . . |
| + . .E |
| o .. o.|
| oo .+|
+-----------------+
[root@localhost ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@localhost ~]# ssh localhost
ssh_exchange_identification: read: Connection reset by peer
[root@localhost ~]#
As shown above, the local passwordless-login test failed with an ssh_exchange_identification: read: Connection reset by peer error. Troubleshooting showed that /etc/hosts.allow restricts which hosts may connect, so editing that file resolves it, as follows:
[root@localhost ~]# vim /etc/hosts.allow  # change the sshd line to: sshd: ALL
[root@localhost ~]# service sshd restart
[root@localhost ~]# ssh localhost  # the login test now succeeds
Last login: Sat Mar 24 21:56:38 2018 from localhost
[root@localhost ~]# logout
Connection to localhost closed.
[root@localhost ~]#
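One caveat: newer OpenSSH releases disable DSA keys by default, so on a more recent system the `-t dsa` command above may be rejected and an RSA (or ed25519) key is needed instead. The sketch below shows the same setup with an RSA key, done in a temporary directory so it does not touch the real ~/.ssh; it also sets the 600 permission that sshd requires on authorized_keys:

```shell
# Sketch: passwordless-SSH key setup with an RSA key, in a throwaway dir.
ssh_dir=$(mktemp -d)                          # stand-in for ~/.ssh
ssh-keygen -t rsa -N '' -f "$ssh_dir/id_rsa" -q
cat "$ssh_dir/id_rsa.pub" >> "$ssh_dir/authorized_keys"
chmod 600 "$ssh_dir/authorized_keys"          # sshd ignores group/world-writable files
stat -c '%a' "$ssh_dir/authorized_keys"       # prints 600
```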
2.4. Start HDFS
Now HDFS can be started, but before the first start the filesystem must be formatted:
[root@localhost ~]# /usr/local/hadoop-2.6.0-cdh5.7.0/bin/hdfs namenode -format
Note: formatting is only needed before the very first start; reformatting later wipes the existing HDFS metadata.
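Because an accidental reformat destroys NameNode metadata, a small guard can help. With hadoop.tmp.dir set to /data/tmp/ as above, the NameNode metadata normally ends up under /data/tmp/dfs/name; in this sketch a fresh temporary directory stands in for that path so it can run anywhere:

```shell
# Sketch: only format when the NameNode metadata dir has no VERSION file yet.
nn_dir=$(mktemp -d)   # stand-in for /data/tmp/dfs/name
if [ -f "$nn_dir/current/VERSION" ]; then
  echo "already formatted, skipping"
else
  echo "not formatted yet, safe to run: hdfs namenode -format"
fi
```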
Start the services with the startup script:
[root@localhost ~]# /usr/local/hadoop-2.6.0-cdh5.7.0/sbin/start-dfs.sh
18/03/24 21:59:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [192.168.77.130]
192.168.77.130: namenode running as process 8928. Stop it first.
localhost: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 63:74:14:e8:15:4c:45:13:9e:16:56:92:6a:8c:1a:84.
Are you sure you want to continue connecting (yes/no)? yes # on the first start you are asked to confirm the host key
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-secondarynamenode-localhost.out
18/03/24 21:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@localhost ~]# jps # check for the three daemons below; if any one is missing, the startup did not succeed
8928 NameNode
9875 Jps
9578 DataNode
9757 SecondaryNameNode
[root@localhost ~]# netstat -lntp |grep java # check the listening ports
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 9757/java
tcp 0 0 192.168.77.130:8020 0.0.0.0:* LISTEN 8928/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 8928/java
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 9578/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 9578/java
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 9578/java
tcp 0 0 127.0.0.1:53703 0.0.0.0:* LISTEN 9578/java
[root@localhost ~]#
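The jps check above can also be scripted. Since this sketch cannot assume a running cluster, the jps output is hard-coded to the listing shown above; on a real machine, replace it with `jps_out=$(jps)`:

```shell
# Sketch: verify that all three HDFS daemons appear in jps output.
jps_out='8928 NameNode
9578 DataNode
9757 SecondaryNameNode'   # simulated; use jps_out=$(jps) on a real node
missing=''
for proc in NameNode DataNode SecondaryNameNode; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  echo "$jps_out" | grep -qw "$proc" || missing="$missing $proc"
done
if [ -z "$missing" ]; then
  echo "HDFS daemons OK"
else
  echo "missing:$missing"
fi
```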
Then add the Hadoop installation directory to the environment variables, so its commands can be used directly afterwards:
[root@localhost ~]# vim ~/.bash_profile # add the following two lines
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.0/
export PATH=$HADOOP_HOME/bin:$PATH
[root@localhost ~]# source !$ # i.e. source ~/.bash_profile
[root@localhost ~]#
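To confirm that the new entry actually took effect in the current shell, a quick check (the install path is the one used throughout this tutorial; the trailing slash is dropped here to avoid a double slash in PATH):

```shell
# Sketch: apply the two lines from ~/.bash_profile and confirm the Hadoop
# bin directory is now first on PATH.
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$PATH
echo "$PATH" | cut -d: -f1   # prints /usr/local/hadoop-2.6.0-cdh5.7.0/bin
```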
Once the services are confirmed up, open 192.168.77.130:50070 in a browser to reach the NameNode web UI.
Click Live Nodes to see the live DataNodes:
As shown above, the node information is visible. At this point, our pseudo-distributed Hadoop cluster is fully set up.
Note: if the page cannot be reached, check the firewall status; see https://blog.csdn.net/Borntodieee/article/details/80564476 for reference.