Hadoop High-Availability Environment Setup (Condensed)

The configuration files are available here; the individual files are here (extraction code: faka).

Configure the Network

Make sure the machine can ping the other hosts and reach the external network.

vi /etc/sysconfig/network-scripts/ifcfg-ens

IPADDR=192.168.***.***
NETMASK=255.255.***.0
GATEWAY=192.168.***.***
DNS1=8.8.8.8

service network restart
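
A quick connectivity check (the targets below are examples):

ping -c 3 8.8.8.8        # external reachability (the DNS server configured above)
ping -c 3 www.baidu.com  # name resolution (any public domain works)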

The steps below are performed over an Xshell connection.

Install MySQL

ll /etc/my.cnf
rpm -qa | grep mariadb

rpm -e mariadb-libs            # may fail because postfix depends on it
rpm -e mariadb-libs postfix    # in that case, remove both together

ll /etc/my.cnf
cd /opt/soft
rpm -ivh perl/perl-*
rpm -ivh  MySQL-server-5.1.73-1.glibc23.x86_64.rpm 
rpm -ivh  MySQL-client-5.1.73-1.glibc23.x86_64.rpm
service mysql start

chkconfig mysql on
mysql -uroot
show databases;
set password for 'root'@'localhost'=password('123');
grant all privileges on *.* to 'root'@'master' identified by '123';
flush privileges;
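
A quick sanity check that the new password works (assuming the password '123' set above):

mysql -uroot -p123 -e "show databases;"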

Time Synchronization

Install the NTP service on each of the three hosts (master, slave01, slave02):

yum install ntp -y  

systemctl start ntpd.service

systemctl enable ntpd.service 

systemctl status ntpd.service
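
To confirm that ntpd is actually synchronizing, query its peer list (output varies by environment). If the slaves should take their time from master rather than public pools, that would be a `server master` line in /etc/ntp.conf on slave01/slave02; that setup is an assumption, not part of the original steps:

ntpq -p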

Basic Settings

vi /etc/hostname
Replace the existing contents with:
master

vi /etc/hosts
Append the following:
192.168.100.200 master
192.168.100.201 slave01
192.168.100.202 slave02

Disable the Firewall

systemctl stop firewalld

systemctl disable firewalld

systemctl status firewalld

Create a soft directory under /opt and upload the installation files into it:
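
mkdir -p /opt/soft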

Using Xftp, copy hadoop-eco.sh into /etc/profile.d (replacing the existing file) to save repeated manual edits in the steps that follow.

Extract the JDK and Configure Environment Variables

cd /opt
tar -zxvf soft/jdk-8u212-linux-x64.tar.gz 
mv jdk1.8.0_212/ jdk

vi /etc/profile.d/hadoop-eco.sh
export JAVA_HOME=/opt/jdk
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile.d/hadoop-eco.sh
java -version

Extract Hadoop and Configure Environment Variables

cd /opt
tar -zxvf soft/hadoop-2.7.3.tar.gz 
mv hadoop-2.7.3/ hadoop

vi /etc/profile.d/hadoop-eco.sh
Add:
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source /etc/profile.d/hadoop-eco.sh
mkdir -p /opt/hadoop-record/{name,secondary,data,tmp}

Using Xftp, copy the prepared Hadoop configuration files into /opt/hadoop/etc/hadoop, overwriting the existing ones (7 files in total).
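
Those files are not reproduced here, but for orientation, the HA-specific part of core-site.xml typically looks like the sketch below. The nameservice name mycluster is an assumption; the tmp directory and ZooKeeper quorum match what is set up in this guide:

<!-- core-site.xml (sketch; nameservice name "mycluster" is assumed) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoop-record/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>master:2181,slave01:2181,slave02:2181</value>
</property>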

hadoop version

Clone the Virtual Machines

Power on the cloned virtual machines one at a time, from the last to the first, and change each machine's IP address and hostname:

vi /etc/hostname
vi /etc/sysconfig/network-scripts/ifcfg-e
rm -f /etc/udev/rules.d/*.rules
service network restart
ip a
reboot

Passwordless SSH

ssh-keygen -t rsa
ssh-copy-id -i root@slave01
ssh root@slave01
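
For convenience, the key is typically generated on each of the three nodes and copied to every host (including itself); a small loop covering all targets, run on each node:

for host in master slave01 slave02; do
    ssh-copy-id -i root@$host
done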

Extract ZooKeeper and Configure Environment Variables

cd /opt
tar -zxvf soft/zookeeper-3.4.13.tar.gz 
mv zookeeper-3.4.13/ zookeeper

vi /etc/profile.d/hadoop-eco.sh
Append:
ZOOKEEPER_HOME=/opt/zookeeper
PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile.d/hadoop-eco.sh
cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
vi /opt/zookeeper/conf/zoo.cfg
Append the following (each server.N number must match the myid written on that host below):
dataDir=/opt/zookeeper/data
server.100=master:2888:3888
server.101=slave01:2888:3888
server.102=slave02:2888:3888

mkdir -m 777 zookeeper/data
touch zookeeper/data/myid
echo 100 > zookeeper/data/myid

Copy ZooKeeper to the Slaves

scp -r /opt/zookeeper slave01:/opt
scp -r /opt/zookeeper slave02:/opt
scp /etc/profile.d/hadoop-eco.sh slave01:/etc/profile.d/
scp /etc/profile.d/hadoop-eco.sh slave02:/etc/profile.d/
slave01:
echo 101 > /opt/zookeeper/data/myid
source /etc/profile.d/hadoop-eco.sh

slave02:
echo 102 > /opt/zookeeper/data/myid
source /etc/profile.d/hadoop-eco.sh

Verify the ID on each host:
cat /opt/zookeeper/data/myid

Extract HBase and Configure Environment Variables

master:
cd /opt
tar -zxvf soft/hbase-1.3.1-bin.tar.gz 
mv hbase-1.3.1/ hbase

vi /etc/profile.d/hadoop-eco.sh 
Append:
export HBASE_HOME=/opt/hbase
export PATH=$HBASE_HOME/bin:$PATH

source /etc/profile.d/hadoop-eco.sh
mkdir /opt/hbase/tmp   # to be confirmed

Using Xftp, copy the prepared HBase configuration files into /opt/hbase/conf, overwriting the existing ones (3 files in total).
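
For orientation, those three files are typically hbase-env.sh, regionservers, and hbase-site.xml. A sketch of the key hbase-site.xml properties, assuming the HDFS nameservice mycluster from above and an externally managed ZooKeeper:

<!-- hbase-site.xml (sketch) -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>master,slave01,slave02</value>
</property>

Under the same assumptions, hbase-env.sh would set export JAVA_HOME=/opt/jdk and export HBASE_MANAGES_ZK=false, since ZooKeeper is started separately, and regionservers would list slave01 and slave02.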

Copy HBase to the Slaves

scp -r /opt/hbase slave01:/opt/
scp -r /opt/hbase slave02:/opt/
scp /etc/profile.d/hadoop-eco.sh  slave01:/etc/profile.d/
scp /etc/profile.d/hadoop-eco.sh  slave02:/etc/profile.d/
slave01:
source /etc/profile.d/hadoop-eco.sh
chown -R root:root /opt/hbase/

slave02:
source /etc/profile.d/hadoop-eco.sh
chown -R root:root /opt/hbase/

hbase version
Note: Hadoop and HBase have not been started at this point; this is a good moment to take a snapshot of each virtual machine.

Initialize the Hadoop Cluster

master/slave01/slave02:
zkServer.sh start
zkServer.sh status
hadoop-daemon.sh start journalnode

master:
hdfs namenode -format
scp -r /opt/hadoop-record/tmp/ slave01:/opt/hadoop-record/
hdfs zkfc -formatZK
start-dfs.sh
start-yarn.sh
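
Copying hadoop-record/tmp seeds the standby NameNode's metadata. A commonly used alternative, run on slave01 while the freshly formatted NameNode on master is up, is the bootstrap command (a sketch; not part of the original steps):

hdfs namenode -bootstrapStandby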

slave02:
yarn-daemon.sh start resourcemanager
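
To confirm both ResourceManagers are running, query their HA state (rm1 and rm2 are assumed to be the ResourceManager IDs configured in yarn-site.xml):

yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2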

Example output:

[root@master opt]# jps
3266 DFSZKFailoverController
2659 QuorumPeerMain
2727 JournalNode
3368 ResourceManager
3640 Jps
2957 NameNode

[root@slave01 opt]# jps
3088 Jps
2611 JournalNode
2773 DataNode
2983 NodeManager
2537 QuorumPeerMain
2683 NameNode
2893 DFSZKFailoverController

[root@slave02 ~]# jps
2689 NodeManager
2434 QuorumPeerMain
2502 JournalNode
2841 Jps
2572 DataNode
Test: open the web UI on a host running a NameNode, e.g.:
http://192.168.100.201:50070
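
The active/standby state can also be checked from the command line (nn1 and nn2 are assumed to be the NameNode IDs defined in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2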

Start HBase

Check and, if necessary, leave Hadoop safe mode:
hdfs dfsadmin -safemode get
hdfs dfsadmin -safemode leave

master:
start-hbase.sh

Example output:

[root@master opt]# jps
3632 Jps
2946 ResourceManager
2323 JournalNode
3368 HMaster
2554 NameNode
2862 DFSZKFailoverController
2255 QuorumPeerMain

[root@slave01 ~]# jps
2338 JournalNode
3058 Jps
2500 DataNode
2612 DFSZKFailoverController
2264 QuorumPeerMain
2410 NameNode
2702 NodeManager
2831 HRegionServer

[root@slave02 ~]# jps
2371 DataNode
2488 NodeManager
2235 QuorumPeerMain
2924 Jps
2669 HRegionServer
2303 JournalNode
Test:
http://192.168.100.200:16010/master-status

Stopping and Starting the Cluster

Cluster start order: ZooKeeper -> Hadoop -> HBase

master/slave01/slave02:
zkServer.sh start
zkServer.sh status
hadoop-daemon.sh start journalnode

master:
start-all.sh
start-hbase.sh
hbase-daemon.sh start thrift

mysql -uroot -p
hbase shell
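
A quick smoke test inside the HBase shell (the table name test is arbitrary):

status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'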

Cluster shutdown order: HBase -> Hadoop -> ZooKeeper

master:
stop-hbase.sh
stop-all.sh

master/slave01/slave02:
zkServer.sh stop