Hadoop cluster setup

Part 1: Preparation
1. Install the virtual machine and CentOS (CentOS-7-x86_64-Minimal-1611); readers can install these on their own. Create an administrative account (master) with the password (hadoop).
The Hadoop version used is 2.7.3.
2. Configure a static IP

[master@localhost ~]$ vim /etc/sysconfig/network-scripts/ifcfg-eno16777736 

Note: the file name (ifcfg-eno16777736) may differ on your machine; in the example below the interface is ens33. The modified settings are:

TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=4bee876c-9cf5-4ed0-94c1-cdf8439a08e0
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.146.101
NETMASK=255.255.255.0
GATEWAY=192.168.146.2
DNS1=1.2.4.8
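
After saving the file, the change only takes effect once the network service is restarted. A minimal check, assuming the standard CentOS 7 network service and the ens33 interface shown above:

sudo systemctl restart network
ip addr show ens33    # the interface should now report 192.168.146.101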

3. Fix the missing ifconfig command (the Minimal install does not ship net-tools):

sudo yum install net-tools

4. Install vim

       1. sudo yum -y install vim
       2. Configure line numbers (note: on an English-only locale, non-ASCII characters in this file can cause errors)
        sudo vim /etc/vimrc
        " show line numbers
        set nu
        " set the tab width
        set tabstop=4

5. Use WinSCP to copy the Hadoop, JDK, and Scala packages onto the virtual machine (an scp alternative is sketched below)
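
If WinSCP is not at hand, scp from the host works as well. A sketch, assuming the packages sit in the host's current directory and are copied to /home/master/software/ on the VM (the directory the later symlinks point into):

scp hadoop-2.7.3.tar.gz jdk-8u60-linux-x64.tar.gz scala-2.12.1.tgz \
    master@192.168.146.101:/home/master/software/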
6. Disable the firewall

[master@namenode jdk1.8.0_60]$ sudo systemctl stop firewalld
[sudo] password for master: 
[master@namenode jdk1.8.0_60]$ sudo systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
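
An optional check that the firewall really is stopped and will not come back at boot:

sudo systemctl status firewalld    # should report "inactive (dead)" and "disabled"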

7. Install and configure the JDK

  • Extract the JDK: tar -zxvf jdk-8u60-linux-x64.tar.gz
  • Create a symlink
[master@namenode jdk1.8.0_60]$ sudo ln -sf /home/master/software/jdk1.8.0_60 /opt/jdk
  • Add the following to /etc/profile:
export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin
  • Test
[master@localhost jdk-1.8]$ source /etc/profile
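
A quick way to confirm the JDK is now on the PATH:

java -version    # should print java version "1.8.0_60"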

8. Install and configure Scala

[master@localhost software]$ tar -zxvf scala-2.12.1.tgz 
[master@namenode scala-2.12.1]$ sudo ln -sf /home/master/software/scala-2.12.1 /opt/scala

Edit /etc/profile:
export JAVA_HOME=/opt/jdk
export SCALA_HOME=/opt/scala
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin
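
As with the JDK, re-source the profile and check that Scala resolves:

source /etc/profile
scala -version    # should print Scala code runner version 2.12.1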

9. Install and configure Hadoop

  • Extract: tar -zxvf hadoop-2.7.3.tar.gz

Create a symlink and add Hadoop to /etc/profile:
[master@namenode hadoop-2.7.3]$ sudo ln -sf  /home/master/software/hadoop-2.7.3 /opt/hadoop
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  • Test
[master@namenode hadoop-2.7.3]$ hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /home/master/software/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar

Part 2: Installing and configuring the cluster
1. Change the IP addresses (after cloning the VM, each clone's static IP must be changed to match the plan below; a sketch follows the list)

192.168.146.101 namenode
192.168.146.102 datanode1
192.168.146.103 datanode2
192.168.146.104 SecondNamenode
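
A sketch of the per-clone change, using datanode1 as an example (only IPADDR differs from the master's file shown in Part 1):

[master@datanode1 ~]$ sudo vim /etc/sysconfig/network-scripts/ifcfg-eno16777736
# change IPADDR to this node's address, e.g.
IPADDR=192.168.146.102
[master@datanode1 ~]$ sudo systemctl restart network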

2. Map IPs to hostnames (configure this on every node)

sudo vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.146.101 namenode
192.168.146.102 datanode1
192.168.146.103 datanode2
192.168.146.104 SecondNamenode
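
Each node's own hostname should also match the table above. On CentOS 7 this can be set with hostnamectl; run the matching command on each node, for example:

sudo hostnamectl set-hostname namenode        # on 192.168.146.101
sudo hostnamectl set-hostname datanode1       # on 192.168.146.102
sudo hostnamectl set-hostname datanode2       # on 192.168.146.103
sudo hostnamectl set-hostname SecondNamenode  # on 192.168.146.104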

3. Test that the mapping works

[master@namenode opt]$ ping SecondNamenode
PING SecondNamenode (192.168.146.104) 56(84) bytes of data.
64 bytes from SecondNamenode (192.168.146.104): icmp_seq=1 ttl=64 time=5.33 ms
64 bytes from SecondNamenode (192.168.146.104): icmp_seq=2 ttl=64 time=0.890 ms
64 bytes from SecondNamenode (192.168.146.104): icmp_seq=3 ttl=64 time=0.695 ms
64 bytes from SecondNamenode (192.168.146.104): icmp_seq=4 ttl=64 time=0.951 ms
^C
--- SecondNamenode ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3011ms
rtt min/avg/max/mdev = 0.695/1.967/5.334/1.946 ms

4. Passwordless SSH login (the steps below are shown for one node; repeat them on every other node)

[master@datanode2 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/master/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/master/.ssh/id_rsa.
Your public key has been saved in /home/master/.ssh/id_rsa.pub.
The key fingerprint is:
0a:9b:0b:20:c9:1b:2a:99:69:1b:86:0a:8b:b2:0a:92 master@datanode2
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|..               |
|+o  .   S        |
|+*o  + .         |
|E=. o .          |
|@.o. .           |
|Oo  .            |
+-----------------+
[master@datanode2 ~]$ ssh-copy-id namenode
The authenticity of host 'namenode (192.168.146.101)' can't be established.
ECDSA key fingerprint is 87:50:6b:6d:d4:5e:ea:19:d2:05:ad:51:ac:62:03:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
master@namenode's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'namenode'"
and check to make sure that only the key(s) you wanted were added.

[master@datanode2 ~]$ ssh-copy-id datanode1
The authenticity of host 'datanode1 (192.168.146.102)' can't be established.
ECDSA key fingerprint is 87:50:6b:6d:d4:5e:ea:19:d2:05:ad:51:ac:62:03:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
master@datanode1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'datanode1'"
and check to make sure that only the key(s) you wanted were added.

[master@datanode2 ~]$ ssh-copy-id datanode2
The authenticity of host 'datanode2 (192.168.146.103)' can't be established.
ECDSA key fingerprint is 87:50:6b:6d:d4:5e:ea:19:d2:05:ad:51:ac:62:03:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
master@datanode2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'datanode2'"
and check to make sure that only the key(s) you wanted were added.

[master@datanode2 ~]$ ssh-copy-id SecondNamenode
The authenticity of host 'secondnamenode (192.168.146.104)' can't be established.
ECDSA key fingerprint is 87:50:6b:6d:d4:5e:ea:19:d2:05:ad:51:ac:62:03:dd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
master@secondnamenode's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'SecondNamenode'"
and check to make sure that only the key(s) you wanted were added.

5. Test that SSH login works


[master@datanode2 ~]$ ssh namenode
Last login: Sat Apr 22 22:36:40 2017 from secondnamenode
[master@namenode ~]$ ssh datanode1
Last login: Sat Apr 22 22:30:42 2017 from namenode
[master@datanode1 ~]$ ssh SecondNamenode
Last login: Sat Apr 22 22:33:00 2017 from namenode

6. It is best to take a VM snapshot at this point so that you can roll back later.
Part 3: Hadoop configuration files
1. Edit the slaves file (in /opt/hadoop/etc/hadoop) and add the slave hostnames

datanode1
datanode2
SecondNamenode

2. Edit core-site.xml

        <!-- Default filesystem URI (the NameNode address) -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://namenode:9000</value>
        </property>
        <!-- Size of read/write buffer used in SequenceFiles. -->
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
        <!-- Hadoop temporary directory; create it yourself -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/master/hadoop/tmp</value>
        </property>
# Note: make sure the directory in the value exists (it is created below)
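
For example, on every node:

mkdir -p /home/master/hadoop/tmp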

3. Edit hdfs-site.xml

<configuration>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>32m</value>
        <description>
            The default block size for new files, in bytes.
            You can use the following suffix (case insensitive):
            k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.),
            Or provide complete size in bytes (such as 134217728 for 128 MB).
        </description>
    </property>

    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster-sulei</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/master/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/home/master/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>/home/master/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/master/hadoop/hdfs/dn</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.146.104:50090</value>
    </property>
</configuration>
# 192.168.146.104 is the IP of the SecondNamenode node
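
The name, checkpoint, and data directories above also need to exist. The simplest (if slightly wasteful) option is to create all of them on every node:

mkdir -p /home/master/hadoop/hdfs/{name,snn,dn}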

4. Edit yarn-site.xml

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.146.101</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/master/hadoop/yarn/nm</value>
    </property>
# 192.168.146.101 is the ResourceManager node of the YARN cluster. It can be the same host as the namenode or a different one; there is no necessary link between the two.
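
The NodeManager local directory above also needs to exist on every node, for example:

mkdir -p /home/master/hadoop/yarn/nm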

5. Copy mapred-site.xml.template to mapred-site.xml and edit it (Hadoop only reads mapred-site.xml; see the command below)
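
A sketch of the copy step, run in /opt/hadoop/etc/hadoop:

cp mapred-site.xml.template mapred-site.xml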

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

6. Edit hadoop-env.sh
Find the line export JAVA_HOME=${JAVA_HOME} and change it to:

export JAVA_HOME=/opt/jdk
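
If the configuration files in this part were edited on the namenode only, they can be pushed to the other nodes before formatting. A sketch, assuming the /opt/hadoop symlink from Part 1 exists on every node (it should, since the nodes were cloned after Part 1):

for node in datanode1 datanode2 SecondNamenode; do
    scp -r /opt/hadoop/etc/hadoop/* master@$node:/opt/hadoop/etc/hadoop/
done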

7. Format HDFS (preferably run the command below once on each node; strictly speaking only the NameNode requires it)

[master@datanode1 hadoop]$ hdfs namenode -format

8. Start the cluster and check the processes with jps (run from /opt/hadoop/sbin)

[master@namenode sbin]$ ./start-all.sh
[master@namenode sbin]$ jps

Results (jps on each node): the namenode should show NameNode and ResourceManager; datanode1 and datanode2 should show DataNode and NodeManager; SecondNamenode should show SecondaryNameNode as well as DataNode and NodeManager, since it is also listed in slaves.

9. Verify the results:

Open 192.168.146.101:50070 in a browser (the HDFS NameNode web UI).

Open 192.168.146.101:8088 in a browser (the YARN ResourceManager web UI).
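
The cluster can also be exercised from the command line; a couple of simple checks (the /test path is arbitrary):

hdfs dfsadmin -report    # should list 3 live datanodes
hdfs dfs -mkdir /test
hdfs dfs -ls /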

References: http://www.cnblogs.com/boyzgw/p/6227425.html

http://blog.csdn.net/dream_an/article/details/52946840