一、LVS
1. Prepare 3 VMs:
192.168.9.100 (VIP)  192.168.9.12 (RS)  192.168.9.13 (RS)
2. First configure networking on all 3 VMs:
eth0 on each, all in one subnet
DIP and RIP in the same subnet
3. Configure the VIP on the LVS director (lost on reboot):
ifconfig eth0:0 192.168.9.100/24
echo "1" > /proc/sys/net/ipv4/ip_forward
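Both commands above are lost on reboot, as the note says. A minimal sketch of persisting the forwarding flag via a sysctl.conf entry (the demo writes to a /tmp file so it is self-contained; on the real director, append the same line to /etc/sysctl.conf and run `sysctl -p`):

```shell
# Persist ip_forward across reboots with a sysctl.conf-style entry.
# Demo path /tmp/sysctl-demo.conf stands in for /etc/sysctl.conf.
conf=/tmp/sysctl-demo.conf
echo "net.ipv4.ip_forward = 1" > "$conf"
grep -q '^net.ipv4.ip_forward = 1$' "$conf" && echo "persisted"
```

The VIP itself (`ifconfig eth0:0 ...`) can likewise be made persistent with an ifcfg-eth0:0 file or an rc.local entry.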
4. Tune the ARP response and announce levels on the RS (set on every RS) (lost on reboot):
echo 1 > /proc/sys/net/ipv4/conf/ens33/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/ens33/arp_announce
5. Configure the VIP on each RS (set on every RS) (lost on reboot):
ifconfig lo:8 192.168.9.100 netmask 255.255.255.255
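Since steps 4 and 5 also vanish on reboot, a common approach is to collect them into one small boot script on each RS. A sketch (interface, VIP, and netmask copied from the notes; the script path /tmp/rs-vip.sh is an arbitrary example):

```shell
# Collect the per-RS setup (ARP tuning + loopback VIP) into one script.
cat > /tmp/rs-vip.sh <<'EOF'
#!/bin/sh
echo 1 > /proc/sys/net/ipv4/conf/ens33/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/ens33/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
ifconfig lo:8 192.168.9.100 netmask 255.255.255.255
EOF
chmod +x /tmp/rs-vip.sh
grep -c arp_ /tmp/rs-vip.sh   # the 4 ARP lines are in place
```

Note the order inside the script: the ARP parameters are set before the VIP is added to lo, so the RS never answers ARP for the VIP.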
6. Start httpd on each RS (real server):
yum install httpd -y
cd /var/www/html
vi index.html
from ooxxip
service httpd start
Client-side check: http://RIP:80 shows the page;
http://VIP:80 does not yet (the director has no IPVS rules at this point)
7. On the director, configure LVS with ipvsadm:
yum install ipvsadm -y
ipvsadm -A -t 192.168.9.100:80 -s rr
ipvsadm -a -t 192.168.9.100:80 -r 192.168.9.12 -g
ipvsadm -a -t 192.168.9.100:80 -r 192.168.9.13 -g
ipvsadm -ln
Refresh in a browser: access the VIP
ipvsadm -lnc
netstat -natp
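The ipvsadm rules above are in-memory only and disappear on reboot. ipvsadm can dump and reload them (`ipvsadm -S` / `ipvsadm -R`, or the ipvsadm-save/ipvsadm-restore wrappers). A sketch of a saved-rules file, written by hand here since it mirrors the rules created above (the exact format is assumed to match `ipvsadm -S -n` output):

```shell
# Hand-written rules file in ipvsadm's save format (assumption: matches `ipvsadm -S -n`).
cat > /tmp/ipvs.rules <<'EOF'
-A -t 192.168.9.100:80 -s rr
-a -t 192.168.9.100:80 -r 192.168.9.12:80 -g -w 1
-a -t 192.168.9.100:80 -r 192.168.9.13:80 -g -w 1
EOF
# reload after a reboot with:  ipvsadm -R < /tmp/ipvs.rules
wc -l < /tmp/ipvs.rules
```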
二、keepalived
1. ipvsadm does not have to be installed (keepalived drives IPVS directly; install it only to inspect the rules, e.g. with ipvsadm -lnc)
keepalived (manages the IPVS rules, and is itself highly available)
yum install keepalived
service keepalived start    start the service
/etc/keepalived/keepalived.conf    main config file
tail /var/log/messages    view the logs
三、LVS + keepalived high availability
1. Install keepalived on both LVS director servers for high availability:
yum install keepalived ipvsadm
vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    ...
    virtual_ipaddress {
        192.168.9.100/24 dev eth0 label eth0:3
    }
    ...
}
# keep only one virtual_server block, as follows:
virtual_server 192.168.9.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    persistence_timeout 50    # keeps one client on the same RS for 50s, which masks rr when refreshing
    protocol TCP
    real_server 192.168.9.12 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.9.13 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
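The `...` inside vrrp_instance above stands for the usual VRRP fields, which these notes elide. A sketch of a typical pair of directors (all values below are illustrative assumptions, not from the notes): the MASTER gets the higher priority, and the BACKUP director uses the same virtual_router_id and auth_pass with `state BACKUP` and a lower priority.

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby director
    interface eth0
    virtual_router_id 51      # example value; must match on both directors
    priority 100              # e.g. 50 on the BACKUP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111        # example secret; must match on both
    }
    virtual_ipaddress {
        192.168.9.100/24 dev eth0 label eth0:3
    }
}
```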
HA role layout:
       | NN-1 | NN-2 | DN | ZK | ZKFC | JNN
node01 |  *   |      |    |    |  *   |  *
node02 |      |  *   | *  | *  |  *   |  *
node03 |      |      | *  | *  |      |  *
node04 |      |      | *  | *  |      |
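Before the Hadoop configuration, all four nodes need mutual hostname resolution (and passwordless root ssh between the two NameNode hosts, since sshfence is configured later). A sketch of the /etc/hosts entries; the IPs are placeholder examples, not from the notes:

```shell
# Generate example /etc/hosts entries for the 4 nodes (IPs are placeholders;
# demo writes to /tmp instead of /etc/hosts).
: > /tmp/hosts-demo
for i in 1 2 3 4; do
  echo "192.168.9.2$i node0$i" >> /tmp/hosts-demo
done
cat /tmp/hosts-demo
```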
四、hadoop 3.1.2 (4 nodes)
Configure the HADOOP_HOME environment variable
1. Pseudo-distributed
vi hadoop-env.sh
export JAVA_HOME=
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_ZKFC_USER=root
export HDFS_JOURNALNODE_USER=root
vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:9820</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/xiang/hadoop/full</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
</configuration>
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node01:9868</value>
</property>
</configuration>
vi workers
node01
2. Fully distributed
vi hadoop-env.sh    same as above
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>namenode1,namenode2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.namenode1</name>
<value>node01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.namenode2</name>
<value>node02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.namenode1</name>
<value>node01:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.namenode2</name>
<value>node02:9870</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/xiang/hadoop/ha/journalnode</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/xiang/hadoop/ha</value>
</property>
<property>
<name>hadoop.http.staticuser.user</name>
<value>root</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node02:2181,node03:2181,node04:2181</value>
</property>
</configuration>
vi workers
node02
node03
node04
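With the config files in place, the first HA start-up has a strict order (this is the standard Hadoop 3 / QJM procedure, not spelled out in the notes; the node assignments follow the role table above). Printed as a checklist so the ordering is explicit:

```shell
# First-start order for the HA cluster. The commands are standard Hadoop 3 /
# ZooKeeper CLIs; the node annotations follow the role table in these notes.
: > /tmp/ha-start.txt
i=0
for step in \
  "zkServer.sh start                 (node02-04)" \
  "hdfs --daemon start journalnode   (node01-03)" \
  "hdfs namenode -format             (node01, first time only)" \
  "hdfs --daemon start namenode      (node01)" \
  "hdfs namenode -bootstrapStandby   (node02)" \
  "hdfs zkfc -formatZK               (node01, first time only)" \
  "start-dfs.sh                      (node01)"
do
  i=$((i+1))
  echo "$i. $step" >> /tmp/ha-start.txt
done
cat /tmp/ha-start.txt
```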
五、zookeeper
Configure the ZOOKEEPER_HOME environment variable
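The notes stop at the environment variable, but each ZK node (node02-04, matching ha.zookeeper.quorum above) also needs the server list in zoo.cfg and a unique myid file in its dataDir. A sketch under a temp dataDir (the real path is whatever dataDir you set in zoo.cfg; the server.N lines are a conventional but assumed layout):

```shell
# Each ZK server needs dataDir/myid matching its server.N line in zoo.cfg.
# Demo uses /tmp/zk-demo in place of the real dataDir.
base=/tmp/zk-demo
for id in 1 2 3; do
  node="node0$((id + 1))"          # node02 -> id 1, node03 -> 2, node04 -> 3
  mkdir -p "$base/$node"
  echo "$id" > "$base/$node/myid"
done
# corresponding zoo.cfg lines (identical on every node):
#   server.1=node02:2888:3888
#   server.2=node03:2888:3888
#   server.3=node04:2888:3888
cat /tmp/zk-demo/node03/myid
```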