1. Change the hostname
Edit /etc/hostname on each node (or run hostnamectl set-hostname <name>); the hosts file itself is handled in step 2.
Reboot so the new hostname takes effect.
2. Edit the hosts file and add hostname-to-IP mappings
vim /etc/hosts
(The VM network adapter is host-only.)
The default loopback entries must be commented out:
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
Then add the following mappings:
192.168.1.101 COLBY-NN-101
192.168.1.102 COLBY-NN-102
192.168.1.111 COLBY-DN-111
192.168.1.112 COLBY-DN-112
192.168.1.113 COLBY-DN-113
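The same mapping block has to land on every node. A minimal sketch of generating it (written to a local scratch file here so it can be tried safely; on a real node the target would be /etc/hosts, then pushed out with scp):

```shell
#!/bin/sh
# Sketch: build the host/IP mapping block and append it to a hosts file.
# HOSTS_FILE defaults to a scratch copy for safe experimentation.
HOSTS_FILE="${HOSTS_FILE:-./hosts.test}"
: > "$HOSTS_FILE"   # start from an empty scratch file for the demo

cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.101 COLBY-NN-101
192.168.1.102 COLBY-NN-102
192.168.1.111 COLBY-DN-111
192.168.1.112 COLBY-DN-112
192.168.1.113 COLBY-DN-113
EOF

# On the real cluster, the finished /etc/hosts would then be pushed out:
# for h in COLBY-NN-102 COLBY-DN-111 COLBY-DN-112 COLBY-DN-113; do
#   scp /etc/hosts "$h":/etc/hosts
# done
grep -c 'COLBY-' "$HOSTS_FILE"   # prints 5
```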
3. Install the JDK
Unpack the JDK to /usr/local/jdk1.8.0_171
vi /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_171
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
Run source /etc/profile to apply the changes.
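A common slip in these profile lines is dropping the `$` before JRE_HOME, which leaves a literal `JRE_HOME/lib` in the classpath instead of the expanded path. A quick sketch of the difference:

```shell
# Why the `$` matters: without it, the shell keeps "JRE_HOME" as literal
# text instead of expanding the variable.
JAVA_HOME=/usr/local/jdk1.8.0_171
JRE_HOME=$JAVA_HOME/jre

BAD=.:$JAVA_HOME/lib:JRE_HOME/lib     # missing $ -> literal "JRE_HOME/lib"
GOOD=.:$JAVA_HOME/lib:$JRE_HOME/lib   # expands as intended

echo "$BAD"
echo "$GOOD"
```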
Distribute the JDK (use the same directory name on every node so JAVA_HOME stays valid):
scp -r /usr/local/jdk1.8.0_171 COLBY-NN-102:/usr/local/
scp -r /usr/local/jdk1.8.0_171 COLBY-DN-111:/usr/local/
scp -r /usr/local/jdk1.8.0_171 COLBY-DN-112:/usr/local/
scp -r /usr/local/jdk1.8.0_171 COLBY-DN-113:/usr/local/
4. Passwordless SSH login
Run ssh-keygen -t rsa on each of the five machines:
COLBY-NN-101
COLBY-NN-102
COLBY-DN-111
COLBY-DN-112
COLBY-DN-113
Then, on 101 and 102, run:
ssh-copy-id COLBY-NN-101
ssh-copy-id COLBY-NN-102
ssh-copy-id COLBY-DN-111
ssh-copy-id COLBY-DN-112
ssh-copy-id COLBY-DN-113
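After distributing the keys, it is worth confirming that login really is passwordless from the NameNode hosts. A sketch (dry run: the commands are only printed here; drop the `echo` to execute them on a real node):

```shell
# Sketch: check passwordless login from a NameNode host to every node.
# BatchMode=yes makes ssh fail instead of prompting when a key is missing.
for h in COLBY-NN-101 COLBY-NN-102 COLBY-DN-111 COLBY-DN-112 COLBY-DN-113; do
  echo ssh -o BatchMode=yes "$h" hostname
done | tee ssh-check.txt
```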
5. Install ZooKeeper
cp zoo_sample.cfg zoo.cfg
In zoo.cfg, set dataDir=/app/bigdata/zookeeper/tmp
mkdir -p /app/bigdata/zookeeper/tmp
Append at the end of zoo.cfg (an ensemble of three ZooKeeper servers is enough):
server.1=COLBY-NN-101:2888:3888
server.2=COLBY-NN-102:2888:3888
server.3=COLBY-DN-111:2888:3888
Then create an empty myid file:
touch /app/bigdata/zookeeper/tmp/myid
Finally, write this node's ID into it:
echo 1 > /app/bigdata/zookeeper/tmp/myid
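Each node's myid must match its server.N line in zoo.cfg, which is why the later steps write 2 and 3 on the other two nodes. A sketch of deriving all three (written into a local scratch directory; on a real node the target is /app/bigdata/zookeeper/tmp/myid on that host):

```shell
#!/bin/sh
# Sketch: derive each node's myid from the server.N lines in zoo.cfg.
OUT=./zk-scratch
mkdir -p "$OUT"

i=1
for h in COLBY-NN-101 COLBY-NN-102 COLBY-DN-111; do
  mkdir -p "$OUT/$h"
  echo "server.$i=$h:2888:3888" >> "$OUT/zoo.cfg.servers"
  echo "$i" > "$OUT/$h/myid"   # myid must match server.N for this host
  i=$((i + 1))
done

cat "$OUT/COLBY-DN-111/myid"   # prints 3
```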
Copy the configured ZooKeeper to the other nodes (first create the target directory on COLBY-NN-102 and COLBY-DN-111: mkdir -p /app/bigdata):
scp -r /app/bigdata/zookeeper/ COLBY-NN-102:/app/bigdata/
scp -r /app/bigdata/zookeeper/ COLBY-DN-111:/app/bigdata/
Note: update the myid on COLBY-NN-102 and COLBY-DN-111 (/app/bigdata/zookeeper/tmp/myid) to match their server.N entries:
On COLBY-NN-102:
echo 2 > /app/bigdata/zookeeper/tmp/myid
On COLBY-DN-111:
echo 3 > /app/bigdata/zookeeper/tmp/myid
scp /etc/profile COLBY-NN-102:/etc/profile
scp /etc/profile COLBY-DN-111:/etc/profile
scp -r /app/bigdata/zookeeper/conf/zoo.cfg COLBY-NN-102:/app/bigdata/zookeeper/conf/
scp -r /app/bigdata/zookeeper/conf/zoo.cfg COLBY-DN-111:/app/bigdata/zookeeper/conf/
Start ZooKeeper on each of the three servers:
zkServer.sh start
Check each server's state:
zkServer.sh status
6. Configure Hadoop
Edit core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, hadoop-env.sh, and workers (all under etc/hadoop).
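The notes list these files but not their contents. For a dual-NameNode (HA) setup, the core of hdfs-site.xml might look like the fragment below; the nameservice name `mycluster`, the port numbers, and the JournalNode host list are illustrative assumptions, not values taken from the original notes:

```xml
<!-- Hypothetical hdfs-site.xml HA fragment; "mycluster" and all
     host/port choices are assumptions for illustration. -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>COLBY-NN-101:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>COLBY-NN-102:8020</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://COLBY-NN-101:8485;COLBY-NN-102:8485;COLBY-DN-111:8485/mycluster</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Automatic failover additionally needs ha.zookeeper.quorum in core-site.xml pointing at the three ZooKeeper servers configured above.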
Distribute the Hadoop files:
scp -r /app/bigdata/hadoop COLBY-NN-102:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-111:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-112:/app/bigdata/
scp -r /app/bigdata/hadoop COLBY-DN-113:/app/bigdata/
scp -r /app/bigdata/hadoop/etc/hadoop/* COLBY-NN-102:/app/bigdata/hadoop/etc/hadoop/
scp -r /app/bigdata/hadoop/etc/hadoop/* COLBY-DN-111:/app/bigdata/hadoop/etc/hadoop/
scp -r /app/bigdata/hadoop/hdfs/name/* COLBY-NN-102:/app/bigdata/hadoop/hdfs/name/
scp -r /app/bigdata/hadoop/tmp/ COLBY-NN-102:/app/bigdata/hadoop/
Start a JournalNode on every server:
hdfs --daemon start journalnode
Then format the NameNode (on one NameNode only):
hdfs namenode -format
Then format the failover controller's znode in ZooKeeper:
hdfs zkfc -formatZK
Note on command changes between versions:
hdfs --daemon start journalnode    (Hadoop 3.x)
hadoop-daemon.sh start journalnode (Hadoop 2.x)
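Putting the steps above in first-start order gives roughly the following sequence. This is an order sketch, not a script to run on one machine: each command runs on the host(s) named in the comment, and the bootstrapStandby / start-dfs.sh lines are the standard alternative to hand-copying the name directory, not commands from the original notes:

```shell
# First-start order sketch, assembled from the steps above.
zkServer.sh start                  # on COLBY-NN-101, COLBY-NN-102, COLBY-DN-111
hdfs --daemon start journalnode    # on every server
hdfs namenode -format              # on COLBY-NN-101 only
hdfs --daemon start namenode       # on COLBY-NN-101
hdfs namenode -bootstrapStandby    # on COLBY-NN-102 (or copy the name dir, as above)
hdfs zkfc -formatZK                # on COLBY-NN-101
start-dfs.sh                       # starts NameNodes, DataNodes, and ZKFCs
```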
hadoop-3.1.0 dual-NameNode cluster installation notes - colby陈伦