Hadoop Cluster Setup

Hadoop is a distributed system infrastructure developed by the Apache Foundation.
It lets users write distributed programs without knowing the details of the underlying distributed layer, making full use of a cluster's power for high-speed computation and storage.
Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault tolerant and is designed to run on low-cost hardware; it provides high-throughput access to application data and suits applications with very large data sets. HDFS relaxes some POSIX requirements and allows streaming access to the data in the file system.
The core of the Hadoop framework is HDFS plus MapReduce: HDFS provides storage for massive amounts of data, while MapReduce provides computation over it.

Single node

Create the hadoop user and switch to it:

[root@server1 ~]# useradd -u 800 hadoop
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ls
hadoop-2.7.3.tar.gz  jdk-7u79-linux-x64.tar.gz

Extract the JDK and create a symlink:

[hadoop@server1 ~]$ tar zxf jdk-7u79-linux-x64.tar.gz 
[hadoop@server1 ~]$ ln -s jdk1.7.0_79/ java

Extract Hadoop and set its environment variables:

[hadoop@server1 ~]$ tar zxf hadoop-2.7.3.tar.gz 
[hadoop@server1 ~]$ cd hadoop-2.7.3
[hadoop@server1 hadoop-2.7.3]$ vim etc/hadoop/hadoop-env.sh 

(Screenshot: hadoop-env.sh being edited.)
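The screenshot of this edit is not available; the usual change in etc/hadoop/hadoop-env.sh at this point is setting JAVA_HOME, a minimal sketch assuming it points at the java symlink created above:

export JAVA_HOME=/home/hadoop/java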

Set global environment variables:

[hadoop@server1 ~]$ vim .bash_profile 
[hadoop@server1 ~]$ source .bash_profile 

(Screenshot: .bash_profile being edited.)
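The screenshot is not available; a plausible addition to ~/.bash_profile, assuming the trees extracted above (later sessions also reference a ~/hadoop symlink to hadoop-2.7.3, presumably created with ln -s hadoop-2.7.3/ hadoop):

export JAVA_HOME=$HOME/java
export PATH=$PATH:$JAVA_HOME/bin:$HOME/hadoop-2.7.3/bin:$HOME/hadoop-2.7.3/sbin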

Create the input directory and run the grep example:

[hadoop@server1 ~]$ cd hadoop-2.7.3
[hadoop@server1 hadoop-2.7.3]$ mkdir input
[hadoop@server1 hadoop-2.7.3]$ cp etc/hadoop/*.xml input/
[hadoop@server1 hadoop-2.7.3]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'

[hadoop@server1 input]$ cd ..
[hadoop@server1 hadoop-2.7.3]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat part-r-00000 
1	dfsadmin

Set up the Hadoop configuration files:

[hadoop@server1 hadoop-2.7.3]$ vim etc/hadoop/hdfs-site.xml 

(Screenshot: hdfs-site.xml being edited.)
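The screenshot is not available; for a single-node setup the edit is normally just setting the replication factor to 1 in hdfs-site.xml (a sketch under that assumption):

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>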

[hadoop@server1 hadoop-2.7.3]$ vim etc/hadoop/hadoop-env.sh 

(Screenshot: hadoop-env.sh being edited.)

[hadoop@server1 hadoop]$ cd etc/hadoop/
[hadoop@server1 hadoop]$ vim core-site.xml 

(Screenshot: core-site.xml being edited.)
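The screenshot is not available; a typical core-site.xml for this setup points fs.defaultFS at the NameNode on server1 (the port 9000 value is an assumption; the address matches the server1 IP used throughout):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.17.1:9000</value>
    </property>
</configuration>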

[hadoop@server1 hadoop]$ vim slaves 

(Screenshot: slaves file being edited.)
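The screenshot is not available; judging from the DataNode that start-dfs.sh later starts on 172.25.17.1, the slaves file presumably contains just that address:

172.25.17.1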

Set up passwordless SSH:

[hadoop@server1 ~]$ ssh-keygen 
[hadoop@server1 ~]$ cd .ssh/
[hadoop@server1 .ssh]$ cp id_rsa.pub authorized_keys 

Test passwordless login:

[hadoop@server1 .ssh]$ ssh 172.25.17.1
[hadoop@server1 .ssh]$ ssh localhost
[hadoop@server1 .ssh]$ ssh server1
[hadoop@server1 .ssh]$ ssh 0.0.0.0

Initialize (format the NameNode):

[hadoop@server1 .ssh]$ cd
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

Start the services:

[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.17.1: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out

Check the service status:

[hadoop@server1 hadoop]$ jps
1882 NameNode
1975 DataNode
2267 Jps
2158 SecondaryNameNode

Access from a browser:
(Screenshot: NameNode web UI.)
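In Hadoop 2.x the NameNode web UI listens on port 50070 by default, so the page shown in the screenshot can also be checked from the shell (assuming the default dfs.namenode.http-address):

[hadoop@server1 hadoop]$ curl -I http://172.25.17.1:50070/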

Create the HDFS user directory and upload the input data:

[hadoop@server1 hadoop]$ bin/hdfs  dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs  dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2018-08-26 15:35 input

Access from a browser:
(Screenshot: HDFS web UI showing the uploaded input directory.)

Setting up a master-slave cluster

Stop the services on server1 and switch back to the root user:

[hadoop@server1 hadoop]$ sbin/stop-dfs.sh 
Stopping namenodes on [server1]
server1: stopping namenode
172.25.17.1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[hadoop@server1 hadoop]$ exit
logout

Install the nfs-utils package on server1, server2 and server3:

[root@server1 ~]# yum install nfs-utils -y

Edit the exports file:

[root@server1 ~]# vim /etc/exports 

(Screenshot: /etc/exports being edited.)
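The screenshot is not available; given the showmount output below and the uid-800 hadoop user, a plausible /etc/exports entry (the rw and anonuid/anongid options are assumptions) is:

/home/hadoop    *(rw,anonuid=800,anongid=800)
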
Start the services:

[root@server1 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server1 ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

Check the export list:

[root@server1 ~]# showmount -e
Export list for server1:
/home/hadoop *

Likewise, install the packages and create the hadoop user on server2 and server3, start the services, and mount the directory exported from server1:

[root@server2 ~]# yum install nfs-utils rpcbind -y
[root@server2 ~]# /etc/init.d/rpcbind start
Starting rpcbind:                                          [  OK  ]
[root@server2 ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@server2 ~]# useradd -u 800 hadoop
[root@server2 ~]# id hadoop
uid=800(hadoop) gid=800(hadoop) groups=800(hadoop)
[root@server2 ~]# mount 172.25.17.1:/home/hadoop /home/hadoop
[root@server2 ~]# su - hadoop
[hadoop@server2 ~]$ ls
hadoop  hadoop-2.7.3  hadoop-2.7.3.tar.gz  java  jdk1.7.0_79  jdk-7u79-linux-x64.tar.gz

SSH test:

[hadoop@server1 ~]$ ssh 172.25.17.2
[hadoop@server1 ~]$ ssh 172.25.17.3

On server1, switch to the hadoop user and edit the configuration files:

[hadoop@server1 ~]$ cd hadoop/etc/hadoop/
[hadoop@server1 hadoop]$ vim hdfs-site.xml 

(Screenshot: hdfs-site.xml being edited.)
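The screenshot is not available; with two DataNodes the usual change is raising dfs.replication to 2, which also matches the hdfs-site.xml shown later in the decommissioning section:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>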

Set the slave nodes:

[hadoop@server1 hadoop]$ vim slaves 

(Screenshot: slaves file being edited.)
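The screenshot is not available; judging from the DataNodes that start-dfs.sh starts below, the slaves file now presumably lists the two slave addresses:

172.25.17.2
172.25.17.3
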
Re-initialize (re-format the NameNode):

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

Start the services:

[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.17.2: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server2.out
172.25.17.3: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out

Check the service status:

[hadoop@server1 hadoop]$ jps
3626 Jps
3329 NameNode
3517 SecondaryNameNode

After re-formatting, the HDFS user directories have to be recreated:

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output

View the output:

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*

Adding a node

To add node server4 to the cluster, install nfs-utils on server4 in the same way, start the services, create the user, and mount the shared directory:

[root@server4 ~]# yum install nfs-utils -y
[root@server4 ~]# /etc/init.d/rpcbind start
[root@server4 ~]# /etc/init.d/nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@server4 ~]# useradd -u 800 hadoop
[root@server4 ~]# mount 172.25.17.1:/home/hadoop/ /home/hadoop/

Switch to the hadoop user and edit the slaves file:

[root@server4 ~]# su - hadoop
[hadoop@server4 ~]$ ls
hadoop  hadoop-2.7.3  hadoop-2.7.3.tar.gz  java  jdk1.7.0_79  jdk-7u79-linux-x64.tar.gz
[hadoop@server4 ~]$ cd hadoop/etc/hadoop/
[hadoop@server4 hadoop]$ vim slaves 

Add node server4 to the list:
(Screenshot: slaves file with server4 added.)
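The screenshot is not available; the slaves file presumably now lists all three DataNode addresses:

172.25.17.2
172.25.17.3
172.25.17.4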

Start the DataNode:

[hadoop@server4 ~]$ cd hadoop
[hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server4.out

**Verify the result:** server4 has been added successfully

[hadoop@server4 hadoop]$ bin/hdfs dfsadmin -report
Configured Capacity: 58780667904 (54.74 GB)
Present Capacity: 52842000384 (49.21 GB)
DFS Remaining: 52841705472 (49.21 GB)
DFS Used: 294912 (288 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 172.25.17.2:50010 (server2)
Hostname: server2
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 135168 (132 KB)
Non DFS Used: 1958719488 (1.82 GB)
DFS Remaining: 17634701312 (16.42 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:02:10 CST 2018


Name: 172.25.17.3:50010 (server3)
Hostname: server3
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 135168 (132 KB)
Non DFS Used: 1954742272 (1.82 GB)
DFS Remaining: 17638678528 (16.43 GB)
DFS Used%: 0.00%
DFS Remaining%: 90.02%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:02:07 CST 2018


Name: 172.25.17.4:50010 (server4)
Hostname: server4                                                 <---------------- here
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2025205760 (1.89 GB)
DFS Remaining: 17568325632 (16.36 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.66%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:02:09 CST 2018

The browser UI also shows that the number of live datanodes is now 3:
(Screenshots: web UI overview and datanode list showing three live nodes.)

Removing a node

To test the effect, add a large file to the filesystem from server1:

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=300
300+0 records in
300+0 records out
314572800 bytes (315 MB) copied, 2.26836 s, 139 MB/s
[hadoop@server1 hadoop]$ bin/hdfs dfs -put bigfile 

The browser shows that the file has been added to the filesystem:
(Screenshot: HDFS file browser showing bigfile.)

Check the node information:
server3 now holds 302.49 MB of data.

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report
Configured Capacity: 58780667904 (54.74 GB)
Present Capacity: 52842024974 (49.21 GB)
DFS Remaining: 52207644672 (48.62 GB)
DFS Used: 634380302 (604.99 MB)
DFS Used%: 1.20%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):

Name: 172.25.17.2:50010 (server2)
Hostname: server2
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 46637056 (44.48 MB)
Non DFS Used: 1958719488 (1.82 GB)
DFS Remaining: 17588199424 (16.38 GB)
DFS Used%: 0.24%
DFS Remaining%: 89.77%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:09:19 CST 2018


Name: 172.25.17.3:50010 (server3)
Hostname: server3
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 317181952 (302.49 MB)                      <-------------- here
Non DFS Used: 1954742272 (1.82 GB)
DFS Remaining: 17321631744 (16.13 GB)
DFS Used%: 1.62%
DFS Remaining%: 88.40%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:09:19 CST 2018


Name: 172.25.17.4:50010 (server4)
Hostname: server4
Decommission Status : Normal
Configured Capacity: 19593555968 (18.25 GB)
DFS Used: 270561294 (258.03 MB)
Non DFS Used: 2025181170 (1.89 GB)
DFS Remaining: 17297813504 (16.11 GB)
DFS Used%: 1.38%
DFS Remaining%: 88.28%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Aug 26 17:09:18 CST 2018

Edit the slaves file and remove server3:
(Screenshot: slaves file with server3 removed.)

Create a new file, hosts-exclude:

[hadoop@server1 hadoop]$ vim hosts-exclude

Write in the IP of the node to be removed:
(Screenshot: hosts-exclude file.)
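The screenshot is not available; since server3 (172.25.17.3) is the node being decommissioned, the file presumably contains just that address:

172.25.17.3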

Edit etc/hadoop/hdfs-site.xml:

[hadoop@server1 hadoop]$ vim etc/hadoop/hdfs-site.xml 

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/home/hadoop/hadoop/etc/hadoop/hosts-exclude</value>
    </property>
</configuration>

Refresh the node list:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -refreshNodes
Refresh nodes successful

Check the cluster status:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report

server3 is being decommissioned and its data migrated:
(Screenshot: datanode report showing server3 decommissioning.)

(Screenshot: datanode report during migration.)

Once migration finishes, server3's data has been moved over to server4:
(Screenshot: datanode report after decommissioning.)

Then stop the DataNode on server3:

[hadoop@server3 hadoop]$ sbin/hadoop-daemon.sh stop datanode
stopping datanode
