1. Downloads
CentOS 7
JDK 1.7 or 1.8
hadoop-2.7.2.tar.gz
2. Preparation
Host plan:

Home IP         | Office IP     | Hostname | User   | Roles
192.168.116.134 | 10.169.60.36  | cancer01 | hadoop | namenode resourcemanager zkfc
192.168.116.136 | 10.169.60.47  | cancer02 | hadoop | namenode resourcemanager zkfc
192.168.116.135 | 10.169.60.187 | cancer03 | hadoop | datanode nodemanager journalnode quorumpeermain
192.168.116.131 | 10.169.60.127 | cancer04 | hadoop | datanode nodemanager journalnode quorumpeermain
192.168.116.128 | 10.169.60.111 | cancer05 | hadoop | datanode nodemanager journalnode quorumpeermain
Add the hadoop user and group on every machine:
useradd hadoop
passwd hadoop
Set the hostname on every machine (run the command matching that host):
vim /etc/sysconfig/network
vim /etc/hosts
hostnamectl set-hostname cancer01
hostnamectl set-hostname cancer02
hostnamectl set-hostname cancer03
hostnamectl set-hostname cancer04
hostnamectl set-hostname cancer05
Add the following entries to /etc/hosts on every machine:
127.0.0.1 localhost
10.169.60.36 cancer01
10.169.60.47 cancer02
10.169.60.187 cancer03
10.169.60.127 cancer04
10.169.60.111 cancer05
192.168.116.134 cancer01
192.168.116.136 cancer02
192.168.116.135 cancer03
192.168.116.131 cancer04
192.168.116.128 cancer05
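The entries can be staged in a temp file and reviewed before appending; a minimal sketch (the temp path is arbitrary). Note that /etc/hosts uses the first matching line for a name, so in practice keep only the address set (home or office) that applies on the current network:

```shell
# Stage the cluster host entries, then append to /etc/hosts as root.
cat > /tmp/hosts.cluster <<'EOF'
10.169.60.36    cancer01
10.169.60.47    cancer02
10.169.60.187   cancer03
10.169.60.127   cancer04
10.169.60.111   cancer05
EOF
# As root, on each machine: cat /tmp/hosts.cluster >> /etc/hosts
wc -l < /tmp/hosts.cluster   # 5 entries
```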
Disable the firewall on every machine:
systemctl stop firewalld.service      # stop firewalld (CentOS 7)
systemctl disable firewalld.service   # keep firewalld from starting at boot (CentOS 7)
Disable Transparent Huge Pages on every machine.
Check the current state:
cat /sys/kernel/mm/transparent_hugepage/enabled
Output:
[always] madvise never
To disable permanently (on CentOS 7, also run chmod +x /etc/rc.d/rc.local so the script executes at boot):
vim /etc/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
Or run the following commands directly (takes effect immediately, but not across reboots):
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
Reboot the machine, then check the state again:
cat /sys/kernel/mm/transparent_hugepage/enabled
Output:
always madvise [never]
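The bracketed word in that file is the active setting. A small sketch that extracts it (using a literal example string here so it runs anywhere; in practice read the sysfs file):

```shell
# The kernel marks the active THP mode with brackets, e.g. "always madvise [never]".
line="always madvise [never]"   # in practice: line=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
active=$(echo "$line" | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "$active"   # prints: never
```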
Grant the hadoop user sudo rights on every machine:
vim /etc/sudoers
hadoop ALL=(ALL) ALL
3. Configure SSH
Enable passwordless SSH login among machines 01-05.
Edit the sshd configuration on every machine:
vim /etc/ssh/sshd_config
Uncomment the following lines:
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
Switch to the hadoop user:
su hadoop
Check whether SSH is installed, on every machine:
rpm -qa | grep ssh
If it is missing, install it (yum on CentOS, apt-get on Debian/Ubuntu):
yum install openssh-server openssh-clients
apt-get install openssh-server
In the hadoop user's home directory on every machine, generate a key pair (press Enter at every prompt; leave the passphrase empty):
ssh-keygen -t rsa
Create the authorized_keys file on machine 01:
scp ~/.ssh/id_rsa.pub hadoop@cancer01:/home/hadoop/.ssh/authorized_keys
Manually append the contents of id_rsa.pub from the other four machines to authorized_keys on machine 01.
From machine 01, copy authorized_keys into the hadoop user's .ssh directory on every other machine:
scp ~/.ssh/authorized_keys hadoop@cancer02:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer03:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer04:/home/hadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop@cancer05:/home/hadoop/.ssh/authorized_keys
Set permissions, on every machine:
chmod 600 .ssh/authorized_keys
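The collect-and-append step can be scripted once all the id_rsa.pub files are on cancer01. A sketch with placeholder key material (real public keys are one long line each; adjust paths to wherever you copied them):

```shell
# Merge per-host public key files into a single authorized_keys with 600 perms.
mkdir -p /tmp/keys
printf 'ssh-rsa AAAB3...k1 hadoop@cancer01\n' > /tmp/keys/cancer01.pub
printf 'ssh-rsa AAAB3...k2 hadoop@cancer02\n' > /tmp/keys/cancer02.pub
cat /tmp/keys/*.pub > /tmp/keys/authorized_keys
chmod 600 /tmp/keys/authorized_keys
wc -l < /tmp/keys/authorized_keys   # 2 lines, one key per host
```

In practice, running `ssh-copy-id hadoop@cancer01` from each machine automates the same append-and-chmod sequence.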
From each machine, verify that login works without a password:
ssh localhost
ssh hadoop@cancer01
ssh hadoop@cancer02
ssh hadoop@cancer03
ssh hadoop@cancer04
ssh hadoop@cancer05
Problem: the first connection prompts "The authenticity of host 'cancer04 (192.168.1.116)' can't be established. ECDSA key fingerprint is 86:c2:6b:12:68:b0:f8:5d:9b:96:35:e0:f1:8e:75:3a. Are you sure you want to continue connecting (yes/no)?" Answer yes.
Solution: skip the prompt with ssh -o StrictHostKeyChecking=no 192.168.116.128
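Rather than passing -o on every call, the same behavior can be set once per user in ~/.ssh/config; a sketch (on newer OpenSSH, 7.6+, `accept-new` is a safer value than `no`, but stock CentOS 7 ships an older version):

```
# ~/.ssh/config -- applies to all five cluster hosts
Host cancer01 cancer02 cancer03 cancer04 cancer05
    User hadoop
    StrictHostKeyChecking no
```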
4. Install the JDK, on every machine
Download jdk-8u101-linux-x64.rpm and upload it with the rz command.
1. Before installing, remove the OpenJDK that ships with the OS:
(1) Run java -version; if it reports OpenJDK, run rpm -qa | grep jdk to list the installed OpenJDK packages;
(2) Run rpm -e --nodeps <OpenJDK package name> to remove each one;
2. Install the downloaded RPM: rpm -ivh jdk-8u101-linux-x64.rpm
3. Run vim /etc/profile and append the following lines:
export JAVA_HOME=/usr/java/jdk1.8.0_101
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
4. Run source /etc/profile to apply the changes;
5. Run java -version and check the output.
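A quick sanity check that the two exports took effect in the current shell; a sketch, assuming the RPM installed under /usr/java/jdk1.8.0_101 as above:

```shell
# Re-create the exports and confirm PATH now contains $JAVA_HOME/bin.
export JAVA_HOME=/usr/java/jdk1.8.0_101
export PATH="$PATH:$JAVA_HOME/bin"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH missing $JAVA_HOME/bin" ;;
esac
```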
Alternatively, install via yum:
wget -O jdk-8u72-linux-x64.rpm http://download.oracle.com/otn-pub/java/jdk/8u72-b15/jdk-8u72-linux-x64.rpm?AuthParam=1453706601_fb0540fefe22922be611d401fbbf4d75
Install it with yum (-O above keeps the AuthParam query string out of the saved filename):
yum localinstall jdk-8u72-linux-x64.rpm
Set the JAVA_HOME environment variable:
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_72   # match the version actually installed
5. Install ZooKeeper
Upload zookeeper-3.4.9.tar.gz to cancer03, cancer04, and cancer05.
su hadoop
Extract on each of those machines (into /usr/local):
tar -xvf zookeeper-3.4.9.tar.gz -C /usr/local
cd /usr/local/zookeeper-3.4.9/conf
cp zoo_sample.cfg zoo.cfg
Set dataDir:
sudo vim zoo.cfg
data
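A typical three-node zoo.cfg for cancer03-05 might look like the sketch below; the dataDir path, ports, and server IDs are assumptions to adjust, and each node also needs its own ID written to dataDir/myid:

```
# zoo.cfg sketch for a 3-node ensemble on cancer03/04/05
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.4.9/data
clientPort=2181
server.1=cancer03:2888:3888
server.2=cancer04:2888:3888
server.3=cancer05:2888:3888
```

For example, on cancer03: echo 1 > /usr/local/zookeeper-3.4.9/data/myid (2 on cancer04, 3 on cancer05).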