How To Install and Configure DRBD 9 (without LVM)

How To Install and Configure DRBD 9 Cluster on RHEL7 / CentOS7

written by Lotfi Waderni September 22, 2017

 

 

What is DRBD (Distributed Replicated Block Device)?

DRBD (Distributed Replicated Block Device) is a Linux-based software component to mirror or replicate individual storage devices (such as hard disks or partitions) from one node to the other(s) over a network connection. DRBD makes it possible to maintain consistency of data among multiple systems in a network. DRBD also ensures high availability (HA) for Linux applications.
DRBD supports three distinct replication modes, allowing three degrees of replication synchronicity.

  • Protocol A: asynchronous replication. A write is reported complete as soon as it has reached the local disk and the local TCP send buffer.
  • Protocol B: memory-synchronous (semi-synchronous) replication. A write is reported complete when it has reached the local disk and the peer node's memory.
  • Protocol C: synchronous replication. A write is reported complete only when it has reached the disks of both nodes.
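The protocol is selected per connection in the net section of the DRBD configuration. This tutorial uses Protocol C, as you will see later in /etc/drbd.d/global_common.conf; a minimal excerpt looks like this:

net {
  protocol C;   # fully synchronous: a write completes only once both nodes have it
}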

In this tutorial, we are going to create and configure a DRBD cluster across two servers. Both servers have an empty disk, /dev/sdb, attached.

Environment

Servers:

  • drbd1 – 192.168.111.132 – CentOS 7
  • drbd2 – 192.168.111.190 – CentOS 7
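Throughout the guide the nodes refer to each other by hostname, so both hosts should be able to resolve drbd1 and drbd2. If DNS is not available, entries like the following in /etc/hosts on both nodes (matching the addresses above) are sufficient:

192.168.111.132   drbd1
192.168.111.190   drbd2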

Disable SELinux on the system

# setenforce 0

The setenforce 0 command only switches SELinux to permissive mode for the running session. To disable SELinux permanently, edit the configuration file /etc/selinux/config and set SELINUX to disabled; the change takes effect after a reboot.
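For example, assuming the file still contains the default SELINUX=enforcing entry, a one-line sed edit achieves the same result as editing the file by hand:

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config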

 

 

[root@localhost ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

 

# sestatus 

SELinux status:                 disabled

Installing DRBD

1. In order to install DRBD, you will need to enable the ELRepo repository on both nodes, because this software package is not distributed through the standard CentOS and Red Hat Enterprise Linux repositories.
Use the following commands to import the GPG key and install the ELRepo repository on both nodes:

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

2. Run the following command on both nodes to install the DRBD software and all the necessary kernel modules:

# yum install drbd90-utils kmod-drbd90
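Optionally, confirm the installed packages and their versions:

# rpm -q drbd90-utils kmod-drbd90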

– Once the installation is complete, check whether the kernel module is loaded correctly using this command:

# lsmod | grep -i drbd

If it is not loaded automatically, you can load the module into the kernel on both nodes using the following command:

# modprobe drbd
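You can also check which DRBD version the loaded kernel module provides (an optional sanity check):

# modinfo drbd | grep -i '^version'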

Note that the modprobe command only loads the kernel module for the current session. In order for it to be loaded at boot, make use of the systemd-modules-load service by creating a file inside /etc/modules-load.d/ so that the DRBD module is loaded each time the system boots:

# echo drbd > /etc/modules-load.d/drbd.conf
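You can optionally verify that the file is in place and that the module loads through that path:

# cat /etc/modules-load.d/drbd.conf
drbd
# systemctl restart systemd-modules-load.service
# lsmod | grep -i drbd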

Configuring DRBD

After having successfully installed DRBD on both nodes, we need to modify the DRBD global and common settings by editing the file /etc/drbd.d/global_common.conf.
1. Let’s back up the original settings on both nodes with the following command:

# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig

2. Create a new global_common.conf file on both nodes with the following contents:

# vi /etc/drbd.d/global_common.conf

global {
 usage-count no;
}

common {
 net {
  protocol C;
 }
}

3. Next, we will need to create a new configuration file called /etc/drbd.d/drbd0.res for the new resource named drbd0, with the following contents:

# vi /etc/drbd.d/drbd0.res

resource drbd0 {
        disk /dev/sdb;
        device /dev/drbd0;
        meta-disk internal;
        on drbd1 {
               address 192.168.111.132:7789;
        }
        on drbd2 {
               address 192.168.111.190:7789;
        }
}

In the above resource file, we define a new resource named drbd0: 192.168.111.132 and 192.168.111.190 are the IP addresses of our two nodes, 7789 is the TCP port used for replication traffic, and the backing disk /dev/sdb is exposed as the new device /dev/drbd0.
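Two quick, optional checks before moving on: the names used after the on keyword must match each node's hostname as reported by uname -n (on the first node this should print drbd1), and drbdadm dump confirms that the resource file parses cleanly:

# uname -n

# drbdadm dump drbd0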

4. Initialize the metadata storage by executing the following command on both nodes:

# drbdadm create-md drbd0

5. Start and enable the DRBD daemon on both nodes:

# systemctl start drbd

# systemctl enable drbd

6. Let’s define the first node “drbd1” as the DRBD primary node:

# drbdadm up drbd0

# drbdadm primary drbd0

Note:
On a freshly created resource both disks start out as Inconsistent, so the first attempt to promote the node to primary may be refused. In that case, use the following command to force the promotion and start the initial synchronization:

# drbdadm primary drbd0 --force

7. On the secondary node “drbd2”, run the following command to bring up drbd0:

# drbdadm up drbd0

8. You can check the current status of the synchronization while it is being performed. The cat /proc/drbd command, which showed the synchronization progress in DRBD 8, only reports the loaded module version under DRBD 9:

# cat /proc/drbd
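The resource state and synchronization progress are reported by drbdadm status instead (a full status listing appears at the end of this article):

# drbdadm status drbd0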

9. Adjust the firewall on both nodes to allow DRBD replication traffic from the peer node, replacing ip_address with the other node's IP address:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="ip_address" port port="7789" protocol="tcp" accept'

# firewall-cmd --reload
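If you do not need to restrict the source address, simply opening the DRBD port on both nodes also works:

# firewall-cmd --permanent --add-port=7789/tcp

# firewall-cmd --reload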

Test the DRBD

In order to test DRBD functionality, we need to create a file system, mount the volume, write some data on the primary node “drbd1”, and finally switch the primary role to “drbd2”.

– Run the following commands on the primary node to create an XFS filesystem on /dev/drbd0 and mount it on the /mnt directory:

# mkfs.xfs /dev/drbd0

# mount /dev/drbd0 /mnt

– Create some test files and list them using the following commands:

# touch /mnt/file{1..5}

# ls -l /mnt/

total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5
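Optionally, also write a file with real content and record its checksum, so that data integrity can be verified after the switchover (the file name testfile is just an example, not part of the original steps):

# echo "hello from drbd1" > /mnt/testfile

# md5sum /mnt/testfile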

– Let’s now switch the primary role from “drbd1” to the second node “drbd2” to check whether data replication works.

First, we have to unmount the volume drbd0 on the first drbd cluster node “drbd1”.

# umount /mnt

Demote the first cluster node “drbd1” from primary to secondary:

# drbdadm secondary drbd0

Promote the second cluster node “drbd2” from secondary to primary:

# drbdadm primary drbd0

Mount the volume and check whether the data is available:

# mount /dev/drbd0 /mnt

# ls -l  /mnt

total 0
-rw-r--r--. 1 root root 0 Sep 22 21:43 file1
-rw-r--r--. 1 root root 0 Sep 22 21:43 file2
-rw-r--r--. 1 root root 0 Sep 22 21:43 file3
-rw-r--r--. 1 root root 0 Sep 22 21:43 file4
-rw-r--r--. 1 root root 0 Sep 22 21:43 file5
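If you created the checksummed test file earlier, its hash on “drbd2” should match the value recorded on “drbd1”, confirming the data was replicated intact:

# md5sum /mnt/testfile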

We hope this tutorial was helpful. If you need more information or have any questions, just comment below and we will be glad to assist you!

 

After the role switch, drbdadm status on “drbd1” reports the following:
[root@drbd1 ~]# drbdadm status
drbd0 role:Secondary
  disk:UpToDate
  drbd2 role:Primary
    peer-disk:UpToDate

More detailed per-volume statistics are available from drbdsetup:

# drbdsetup status --verbose --statistics
drbd0 node-id:1 role:Secondary suspended:no
    write-ordering:flush
  volume:0 minor:0 disk:UpToDate quorum:yes
      size:153087388 read:2120 written:0 al-writes:0 bm-writes:0 upper-pending:0
      lower-pending:0 al-suspended:no blocked:no
  drbd1 node-id:0 connection:Connected role:Secondary congested:no
      ap-in-flight:0 rs-in-flight:0
    volume:0 replication:Established peer-disk:UpToDate
        resync-suspended:no
        received:0 sent:0 out-of-sync:0 pending:0 unacked:0
