RHCS

RHCS:

1. du & df
    du: walks the file system from user space (files, directories, links, etc.)
    df: reads the block counters from the filesystem superblock (see the sketch after this list)

2. Given the same total size, which is faster to read: one large file or many small files? (The single large file: sequential I/O, one open and one set of metadata lookups instead of many.)
3. OSPF | LVS (layer-4 load balancing)
4. Differences between the kvm, qemu, and libvirtd tools
    kvm: virtualizes CPU and memory
    qemu: emulates I/O devices
    libvirtd: the virtualization management daemon
5. ls -S sorts by file size
6. How to check memory usage: free -m (be able to explain the fields | buff: write cache | cache: read cache)
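
A quick way to see the du/df split from question 1: a file deleted while a process still holds it open vanishes from du (no directory entry left to walk) but still occupies blocks that df reports. A minimal shell sketch (paths and sizes are illustrative):

[root@server1 ~]# dd if=/dev/zero of=/tmp/big bs=1M count=100
[root@server1 ~]# tail -f /tmp/big &     # keep an open file descriptor on it
[root@server1 ~]# rm -f /tmp/big         # directory entry is gone
[root@server1 ~]# du -sh /tmp            # the 100 MB no longer shows up here
[root@server1 ~]# df -h /tmp             # ...but df still counts the blocks
[root@server1 ~]# kill %1                # close the fd; df drops back down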

High Availability

Purpose: eliminate single points of failure

RHCS|Pacemaker

1.RHCS

    http://wzlinux.blog.51cto.com/8021085/1725373
conga -> luci (manager, web UI at https://ip:8084) | ricci (agent, port 11111)

# Configure the yum repositories
[root@server1 ~]# cat /etc/yum.repos.d/rhel-source.repo 
[Server]
name=Red Hat Enterprise Linux Server
baseurl=http://172.25.66.250/rhel6.5
gpgcheck=0

[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.66.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.66.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.66.250/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.66.250/rhel6.5/ScalableFileSystem
gpgcheck=0
[root@server1 ~]#
*** scp the repo file to server2 (see the sketch below)
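
A sketch of that copy, assuming server2 is 172.25.66.2 as in the fence-key copy later in these notes:

[root@server1 ~]# scp /etc/yum.repos.d/rhel-source.repo root@172.25.66.2:/etc/yum.repos.d/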

# Do the same on both server1 and server2

Install ricci | enable it at boot | set its initial password

[root@server1 ~]# yum install ricci -y
[root@server1 ~]# chkconfig ricci --list
ricci           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig ricci on       # enable at boot
[root@server1 ~]# chkconfig ricci --list
ricci           0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# 
[root@server1 ~]# echo westos | passwd --stdin ricci   # set the initial password
Changing password for user ricci.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server1 ~]#



# Install luci on either server1 or server2
[root@server1 ~]# yum install luci -y
[root@server1 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server1:8084 (or equivalent) to access luci
[root@server1 ~]# chkconfig --list luci
luci            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@server1 ~]# chkconfig  luci on
[root@server1 ~]# chkconfig --list luci
luci            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@server1 ~]# 

**** Add the cluster and its nodes in the web UI
    https://server1:8084 (the hostname must resolve locally; see the /etc/hosts sketch below)
    Refer to the screenshots or the RHEL 6.5 documentation
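
A minimal /etc/hosts sketch for that resolution, on the browser host and on both nodes (IPs assumed from the fence-key copy below):

172.25.66.1   server1
172.25.66.2   server2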



# Check the cluster status
[root@server1 ~]# clustat  # this command only exists once the cluster packages are installed
Cluster Status for lucci @ Sat Sep 23 10:43:53 2017
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1                                      1 Online, Local
 server2                                      2 Online

[root@server1 ~]# 

Fence Device

EMC storage | iSCSI | NFS
NFS | Fence | IOE
Fencing: primarily prevents split-brain

# Configure fencing on the physical host
[root@foundation66 ~]# rpm -qa|grep fence
fence-virtd-multicast-0.3.2-2.el7.x86_64
fence-virtd-libvirt-0.3.2-2.el7.x86_64
fence-virtd-0.3.2-2.el7.x86_64
[root@foundation66 ~]# ll -d /etc/cluster/
drwxr-xr-x. 2 root root 26 May 23 14:58 /etc/cluster/
[root@foundation66 ~]# fence_virtd -c
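# fence_virtd -c is interactive; from memory (verify against the actual
# prompts): listener module "multicast", the default multicast address
# 225.0.0.12 and port 1229, the bridge the VMs are attached to as the
# interface, key file /etc/cluster/fence_xvm.key, backend module "libvirt".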
[root@foundation66 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.00142596 s, 89.8 kB/s
[root@foundation66 ~]# file /etc/cluster/fence_xvm.key 
/etc/cluster/fence_xvm.key: data
[root@foundation66 ~]# systemctl restart fence_virtd.service 
[root@foundation66 ~]# netstat -anulp | grep :1229
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           14721/fence_virtd   
[root@foundation66 ~]# scp /etc/cluster/fence_xvm.key root@172.25.66.1:/etc/cluster/
[root@foundation66 ~]# scp /etc/cluster/fence_xvm.key root@172.25.66.2:/etc/cluster/
[root@foundation66 ~]# 
***** Add fencing for server1 and server2 in the web UI
    First add the fence device
    Then attach it to server1 and server2



# On the cluster VMs
[root@server1 cluster]# cat /etc/cluster/cluster.conf   # inspect the fence configuration
[root@server1 ~]# fence_node server2   # fence (forcibly reboot) server2
fence server2 success
[root@server1 ~]# 

*****Failover Domains|Resources|Service group
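
These three pieces all land in /etc/cluster/cluster.conf. A sketch of the resource-manager section the web UI steps produce, using the VIP 172.25.66.100 seen below; the domain name "webfail" and the priorities are illustrative:

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" ordered="1" restricted="1">
            <failoverdomainnode name="server1" priority="1"/>
            <failoverdomainnode name="server2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.66.100" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" name="Apache" recovery="relocate">
        <ip ref="172.25.66.100"/>
        <script ref="httpd"/>
    </service>
</rm>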

Testing

Stop httpd on both nodes; once the cluster takes over, the service is started automatically
    ** The VIP is added and httpd started automatically (the resource order defined in the Apache service group decides what starts first)
[root@server1 ~]# clustat 
Cluster Status for lucci @ Sat Sep 23 14:09:28 2017
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1                                      1 Online, Local, rgmanager
 server2                                      2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:Apache                  server1                         started       
[root@server1 ~]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@server1 ~]# clustat 
Cluster Status for lucci @ Sat Sep 23 14:11:18 2017
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1                                      1 Online, Local, rgmanager
 server2                                      2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:Apache                  server2                         started  

[root@server2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:61:92:98 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.2/24 brd 172.25.66.255 scope global eth1
    inet 172.25.66.100/24 scope global secondary eth1
    inet6 fe80::5054:ff:fe61:9298/64 scope link 
       valid_lft forever preferred_lft forever
[root@server2 ~]# 
*** When one node goes down, the VIP and the service migrate over automatically, though the failover takes a moment

Adding shared storage

Bring up server3 and attach a virtual disk to it

* Target side (server3)
[root@server3 ~]# fdisk -l
[root@server3 ~]# yum install scsi-* -y
[root@server3 tgt]# vim /etc/tgt/targets.conf

Define the target around line 38 of targets.conf (see the sketch below)
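
A sketch of that stanza; the IQN matches the discovery output below, while the backing device /dev/vdb (the freshly attached virtual disk) and the tgtd start are assumptions:

<target iqn.2017-09.com.example:server.target1>
    backing-store /dev/vdb
</target>

[root@server3 ~]# /etc/init.d/tgtd start
[root@server3 ~]# tgt-admin -s    # confirm the target and LUN are exported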

On server1 and server2:
[root@server1 ~]# yum install iscsi-* -y
[root@server1 ~]# /etc/init.d/iscsi start
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.66.3
Starting iscsid:                                           [  OK  ]
172.25.66.3:3260,1 iqn.2017-09.com.example:server.target1
[root@server1 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.66.3,3260] (multiple)
Login to [iface: default, target: iqn.2017-09.com.example:server.target1, portal: 172.25.66.3,3260] successful.

[root@server1 ~]# fdisk -l
* If the export was wrong and you fix it, restart the client-side iscsi stack (see the sketch below)
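
A sketch of that client-side reset, reusing the iscsiadm calls from the teardown section at the end of these notes:

[root@server1 ~]# iscsiadm -m node -u                          # log out of the stale session
[root@server1 ~]# iscsiadm -m node -o delete                   # drop the cached node record
[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.66.3   # rediscover
[root@server1 ~]# iscsiadm -m node -l                          # log back in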

ext4 is a local file system: it is not cluster-aware, so the nodes do not see each other's changes

Clustered LVM

[root@server2 ~]# pvcreate /dev/sda 
  Physical volume "/dev/sda" successfully created
[root@server2 ~]# vgcreate md0 /dev/sda 
  Clustered volume group "md0" successfully created
[root@server2 ~]# lvcreate -L 2G -n lv0 md0 
  Logical volume "lv0" created
[root@server2 ~]# lvextend  -L +2G /dev/md0/lv0 
  Extending logical volume lv0 to 4.00 GiB
  Logical volume lv0 successfully resized
[root@server2 ~]# 
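
The "Clustered volume group" message relies on cluster locking being enabled. A sketch of what sets that up, in case it is not already in place (lvm2-cluster comes from the ResilientStorage channel configured earlier; run on every node):

[root@server2 ~]# yum install lvm2-cluster -y
[root@server2 ~]# lvmconf --enable-cluster     # sets locking_type = 3 in /etc/lvm/lvm.conf
[root@server2 ~]# /etc/init.d/clvmd start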


# Watch the changes propagate to the other node
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/vda2  VolGroup lvm2 a--  8.51g    0 
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda   md0      lvm2 a--  8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  8.51g    0 
[root@server1 ~]# pvs
  PV         VG       Fmt  Attr PSize PFree
  /dev/sda   md0      lvm2 a--  8.00g 6.00g
  /dev/vda2  VolGroup lvm2 a--  8.51g    0 
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  lv0     md0      -wi-a-----   2.00g                                             
[root@server1 ~]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----   7.61g                                             
  lv_swap VolGroup -wi-ao---- 920.00m                                             
  lv0     md0      -wi-a-----   4.00g                                             
[root@server1 ~]# 


*** Local file system
# Format the LV, then mount it on both nodes and compare what each sees
[root@server2 ~]# mkfs.ext4 /dev/md0/lv0 
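
A sketch of the comparison that shows why ext4 fails here (mounting ext4 on two nodes at once is unsafe and is only done to make the point):

[root@server1 ~]# mount /dev/md0/lv0 /mnt && touch /mnt/from-server1
[root@server2 ~]# mount /dev/md0/lv0 /mnt
[root@server2 ~]# ls /mnt    # from-server1 does not appear: each node caches its own metadata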

Add the shared storage under Resources

Disable the Apache service first -- add the virtual IP -- add the shared storage -- add httpd (the order matters; see the fs sketch below)
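
A sketch of the filesystem resource this adds to cluster.conf; the resource name "webdata" is illustrative, while the device and mount point match the df output below:

<fs device="/dev/md0/lv0" fstype="ext4" mountpoint="/var/www/html" name="webdata" force_unmount="1"/>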

[root@server1 ~]# clusvcadm -d Apache
Local machine disabling service:Apache...Success
[root@server1 ~]# clusvcadm -e Apache
Local machine trying to enable service:Apache...Success
service:Apache is now running on server1
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1092548   6362268  15% /
tmpfs                           510200   25656    484544   6% /dev/shm
/dev/vda1                       495844   33469    436775   8% /boot
/dev/mapper/md0-lv0            4128448  139256   3779480   4% /var/www/html
[root@server1 ~]# vim /var/www/html/index.html
[root@server1 ~]# clustat 
Cluster Status for lucci @ Sat Sep 23 16:31:14 2017
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1                                      1 Online, Local, rgmanager
 server2                                      2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:Apache                  server1                         started       
[root@server1 ~]# ifdown eth0

Failover backed by the shared storage

[root@server2 ~]# clustat 
Cluster Status for lucci @ Sat Sep 23 16:31:43 2017
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 server1                                      1 Offline
 server2                                      2 Online, Local, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:Apache                  server2                         starting      
[root@server2 ~]# 

GFS2 cluster file system

First remove the ext4 file system resource (remove it from the service group first, then delete the resource itself)
[root@server1 ~]# mkfs.gfs2 -j 3 -p lock_dlm -t lucci:mygfs2 /dev/md0/lv0 
This will destroy any data on /dev/md0/lv0.
It appears to contain: symbolic link to `../dm-2'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/md0/lv0
Blocksize:                 4096
Device Size                4.00 GB (1048576 blocks)
Filesystem Size:           4.00 GB (1048575 blocks)
Journals:                  3
Resource Groups:           16
Locking Protocol:          "lock_dlm"
Lock Table:                "lucci:mygfs2"
UUID:                      69d19ccc-1022-1c13-6c10-852a0e46a1a7
[root@server1 ~]# mount /dev/md0/lv0 /mnt/
[root@server1 mnt]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   7853764 1090504   6364312  15% /
tmpfs                           510200   31816    478384   7% /dev/shm
/dev/vda1                       495844   33469    436775   8% /boot
/dev/mapper/md0-lv0            4193856  397148   3796708  10% /mnt
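
To have every node mount the GFS2 volume at boot instead of by hand, an /etc/fstab sketch (_netdev because the device arrives over iSCSI; mount point per the Apache docroot used earlier):

/dev/md0/lv0   /var/www/html   gfs2   _netdev,defaults   0 0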

Shutting down the cluster cleanly

leave
    then verify with clustat
delete
    removes the configuration file and disables the boot-time services
iscsiadm -m node -u
iscsiadm -m node -o delete
remove the configuration files