Using sanlock with libvirt

For the test environment, please refer to the previous article.

Install packages (Test1 and Test2 nodes)

$ yum install -y qemu-kvm qemu-img virt-manager libvirt 
$ yum install -y libvirt-python python-virtinst libvirt-client 
$ yum install -y virt-install virt-viewer libvirt-lock-sanlock
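The libvirt sanlock plugin talks to the sanlock daemon, so the daemon must be running before libvirt can register any lockspaces. As a sanity check (assuming the systemd unit name sanlock, as shipped by the CentOS/RHEL packages), make sure it is enabled and started on both nodes:

$ systemctl enable sanlock
$ systemctl start sanlock
$ systemctl status sanlock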

Mount the NFS directory (Test1 and Test2 nodes)

  • Comment out the auto-mount entry added earlier:
$ vi /etc/fstab
#192.168.195.131:/mnt/nfs    /mnt/nfs   nfs   defaults   1 1
  • Add the new mount point:
$ echo "192.168.195.131:/mnt/nfs /var/lib/libvirt/sanlock nfs hard,nointr 0 0" >> /etc/fstab
  • Remount the NFS directory; if this fails, reboot the node:
$ umount /mnt/nfs
$ mount /var/lib/libvirt/sanlock
  • Change the directory ownership (a quick check follows below):
$ chown -R sanlock:sanlock /var/lib/libvirt/sanlock
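Before moving on, it is worth confirming that the share really is mounted at the path libvirt will use and that the sanlock user owns it; a quick check with standard tools:

$ mount | grep /var/lib/libvirt/sanlock
$ ls -ld /var/lib/libvirt/sanlock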

Note

Alternatively, you can point libvirt's sanlock lease directory at the previously used NFS directory instead of changing the existing mount path:

$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/disk_lease_dir "/mnt/nfs"

Configure the Host ID

  • Set the Test1 node to use Host ID 1:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/host_id 1
  • Set the Test2 node to use Host ID 2 (a quick check follows below):
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/host_id 2
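Since every host in a lockspace must use a unique host_id, it is worth reading the value back on each node; augtool can query the same path it just wrote:

$ augtool get /files/etc/libvirt/qemu-sanlock.conf/host_id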

Modify libvirt's sanlock configuration (Test1 and Test2 nodes)

  • Enable automatic leases:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/auto_disk_leases 1

With this option, libvirt creates a resource lease for each disk in the disk_lease_dir directory (default "/var/lib/libvirt/sanlock"), named after the MD5 hash of the disk's path.
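If you want to map a lease file back to the disk it protects, you can reproduce its name by hashing the disk path yourself. A minimal sketch, assuming the /dev/mapper/raw disk used later in this article (the output should match the lease file name shown in the lock directory if the lease really is keyed on the path's MD5):

$ echo -n "/dev/mapper/raw" | md5sum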

  • Set the group and user:
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/group sanlock
$ augtool -s set /files/etc/libvirt/qemu-sanlock.conf/user sanlock
  • Configure libvirt to use sanlock as its lock manager:
$ augtool -s set /files/etc/libvirt/qemu.conf/lock_manager sanlock
  • Restart the service for the configuration to take effect:
$ systemctl restart libvirtd.service
  • Check whether the configuration succeeded; if it did, the __LIBVIRT__DISKS__ file appears in the root of the NFS directory:
$ ls /var/lib/libvirt/sanlock/
__LIBVIRT__DISKS__
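You can also ask the sanlock daemon directly whether it has joined the lockspace; sanlock client status lists the lockspaces and resources the local daemon currently holds:

$ sanlock client status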

Prepare the virtual machine image (Test1 or Test2 node)

  • Download the disk image and write it to the shared storage:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/mapper/raw
  • Test the virtual machine image:
$ yum install -y tigervnc
$ qemu-system-x86_64 -vnc :2 -monitor stdio -hda /dev/mapper/raw -m 256M
$ vncviewer :2

Finally, shut down the virtual machine.
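Because the domain XML below declares driver type 'qcow2', it may be worth confirming that the data now sitting on the shared device really is a qcow2 image (the CirrOS file is distributed in qcow2 format and dd copies it byte for byte):

$ qemu-img info /dev/mapper/raw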

Create the virtual machine (Test1 and Test2 nodes)

$ vi test_sanlock.xml
<domain type='kvm'>
	<name>test_sanlock</name>
	<memory>262144</memory>
	<vcpu>1</vcpu>

	<os>
		<type arch='x86_64' machine='pc'>hvm</type>
		<boot dev='hd'/>
	</os>

	<devices>
		<disk type='file' device='disk'>
			<driver name='qemu' type='qcow2'/>
			<source file='/dev/mapper/raw'/>
			<target dev='hda' bus='ide'/>
		</disk>
		<input type='tablet' bus='usb'/>
		<input type='mouse' bus='ps2'/>

		<graphics type='vnc' port='-1' listen = '0.0.0.0' autoport='yes' keymap='en-us'/>
	</devices>
</domain>
$ virt-xml-validate test_sanlock.xml
$ virsh define test_sanlock.xml
$ virsh list --all
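Before exercising the lock, you can list the disks attached to the new domain to see exactly which path the automatic lease will be derived from:

$ virsh domblklist test_sanlock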

Running a virtual machine that uses the same disk (/dev/mapper/raw) on both nodes at the same time

  1. Run the virtual machine on the Test1 node.
  • Start the virtual machine:
$ virsh start test_sanlock
Domain test_sanlock started
  • View the virtual machine:
$ virsh vncdisplay test_sanlock
$ vncviewer :0
  • Inspect the lock directory:
$ ls -l /var/lib/libvirt/sanlock/
-rw------- 1 sanlock sanlock 1048576 Apr 20 16:24 7edb5b6820e56426339607637d18e871
-rw-rw---- 1 sanlock sanlock 1048576 Apr 20 16:35 __LIBVIRT__DISKS__
  • Inspect the lock information:
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__       322ee34a-aa6e-4224-971f-5612072ca6c0.Test1 0000018489 0001 0001
00000512                   __LIBVIRT__DISKS__       d9441ec2-e9a1-4657-add9-728148e11f40.Test2 0000018468 0002 0001

$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 7edb5b6820e56426339607637d18e871 0000017710 0001 0001 1
  2. Run the virtual machine on the Test2 node.

If we simply try to start it, the virtual machine fails to start:

$ virsh start test_sanlock
error: Failed to start domain test_sanlock
error: resource busy: Failed to acquire lock: error -243
  3. Shut down the virtual machine on the Test1 node, then start it on Test2 again.
$ virsh start test_sanlock
Domain test_sanlock started

This time it starts successfully.

  • Inspect the lock information:
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__       322ee34a-aa6e-4224-971f-5612072ca6c0.Test1 0000018694 0001 0001
00000512                   __LIBVIRT__DISKS__       d9441ec2-e9a1-4657-add9-728148e11f40.Test2 0000018673 0002 0001

$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 7edb5b6820e56426339607637d18e871 0000018630 0002 0001 2

Ownership of the disk lease has been taken over by Test2 (the own column in the dump now reads 0002, Test2's host ID).

  4. Start the virtual machine on Test1 again.
$ virsh start test_sanlock
error: Failed to start domain test_sanlock
error: resource busy: Failed to acquire lock: error -243

The start fails, as expected.

Testing multiple disk devices on LVM volumes on the shared storage

Create a libvirt LVM storage pool (Test1 or Test2 node)

  • Create the storage pool configuration file:
$ vi pool_sanlock.xml
<pool type="logical">
	<name>storage</name>
	<source>
		<device path="/dev/mapper/lvm"/>
	</source>
	<target>
	<path>/storage</path>
	</target>
</pool>
  • Define the storage pool:
$ virsh pool-define pool_sanlock.xml

$ virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 storage              inactive   no
  • Build the storage pool:
$ virsh pool-build storage
Pool storage built

$ vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               100.00 GiB
  PE Size               4.00 MiB
  Total PE              25599
  Alloc PE / Size       0 / 0   
  Free  PE / Size       25599 / 100.00 GiB
  VG UUID               AC7lkm-ve65-Wy87-WTzU-BESp-tw2U-s0FviB
  • Start the storage pool:
$ virsh pool-start storage
Pool storage started

$ virsh pool-list --all
 Name                 State      Autostart
-------------------------------------------
 storage              active     no

  • Create the volumes on either node:
$ virsh vol-create-as --pool storage --name test1 --capacity 500M
$ virsh vol-create-as --pool storage --name test2 --capacity 500M
$ virsh vol-create-as --pool storage --name test3 --capacity 500M
$ virsh vol-create-as --pool storage --name test4 --capacity 500M
  • On the other node, rescan LVM and activate the volumes (see the check after this list):
$ pvscan --cache
$ lvchange -ay storage
  • Initialize the system volumes:
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test1
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test2
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test3
$ dd if=cirros-0.3.4-x86_64-disk.img of=/dev/storage/test4
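After copying the images, you can verify on both nodes that the four volumes are active and carry a qcow2 image; a quick check with standard LVM and QEMU tools (storage is the VG created by pool-build above):

$ lvs storage
$ qemu-img info /dev/storage/test1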

Create the virtual machines (Test1 and Test2 nodes)

  • Create the test1_sanlock virtual machine:
$ vi test1_sanlock.xml
<domain type='kvm'>
    <name>test1_sanlock</name>
    <memory>262144</memory>
    <vcpu>1</vcpu>

    <os>
        <type arch='x86_64' machine='pc'>hvm</type>
        <boot dev='hd'/>
    </os>

    <devices>
        <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/dev/storage/test1'/>
            <target dev='hda' bus='ide'/>
        </disk>
        <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/dev/storage/test2'/>
            <target dev='hdb' bus='ide'/>
        </disk>
        <input type='tablet' bus='usb'/>
        <input type='mouse' bus='ps2'/>

        <graphics type='vnc' port='-1' listen = '0.0.0.0' autoport='yes' keymap='en-us'/>
    </devices>
</domain>
  • Create the test2_sanlock virtual machine:
$ vi test2_sanlock.xml
<domain type='kvm'>
    <name>test2_sanlock</name>
    <memory>262144</memory>
    <vcpu>1</vcpu>

    <os>
        <type arch='x86_64' machine='pc'>hvm</type>
        <boot dev='hd'/>
    </os>

    <devices>
        <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/dev/storage/test3'/>
            <target dev='hda' bus='ide'/>
        </disk>
        <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/dev/storage/test4'/>
            <target dev='hdb' bus='ide'/>
        </disk>
        <input type='tablet' bus='usb'/>
        <input type='mouse' bus='ps2'/>

        <graphics type='vnc' port='-1' listen = '0.0.0.0' autoport='yes' keymap='en-us'/>
    </devices>
</domain>
  • Define the two virtual machines:
$ virt-xml-validate test1_sanlock.xml
test1_sanlock.xml validates

$ virt-xml-validate test2_sanlock.xml
test2_sanlock.xml validates

$ virsh define test1_sanlock.xml
Domain test1_sanlock defined from test1_sanlock.xml

$ virsh define test2_sanlock.xml
Domain test2_sanlock defined from test2_sanlock.xml

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     test1_sanlock                  shut off
 -     test2_sanlock                  shut off
 -     test_sanlock                   shut off

Run the virtual machines

  • Run both virtual machines on the Test1 node:
$ virsh start test1_sanlock
Domain test1_sanlock started

$ virsh start test2_sanlock
Domain test2_sanlock started

Both start successfully.

  • Inspect the lease files:
$ ll /var/lib/libvirt/sanlock/
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:55 1199371e4095b4aeb587631d5e61ea06
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:54 3f8518ff5358a6757f0a5918e3ec7be2
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:55 51d27a61a6a3dd58637b6e00bb719cae
-rw------- 1 sanlock sanlock 1048576 Apr 20 19:54 6cfae8f6c4541a92e4aa52f14e9977a5
-rw------- 1 sanlock sanlock 1048576 Apr 20 16:40 7edb5b6820e56426339607637d18e871
-rw-rw---- 1 sanlock sanlock 1048576 Apr 20 19:56 __LIBVIRT__DISKS__
  • Inspect the lock information:
$ sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__       322ee34a-aa6e-4224-971f-5612072ca6c0.Test1 0000030443 0001 0001
00000512                   __LIBVIRT__DISKS__       d9441ec2-e9a1-4657-add9-728148e11f40.Test2 0000030419 0002 0001

$ sanlock direct dump /var/lib/libvirt/sanlock/1199371e4095b4aeb587631d5e61ea06
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 1199371e4095b4aeb587631d5e61ea06 0000030379 0001 0001 1

$ sanlock direct dump /var/lib/libvirt/sanlock/3f8518ff5358a6757f0a5918e3ec7be2
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 3f8518ff5358a6757f0a5918e3ec7be2 0000030322 0001 0001 1

$ sanlock direct dump /var/lib/libvirt/sanlock/51d27a61a6a3dd58637b6e00bb719cae
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 51d27a61a6a3dd58637b6e00bb719cae 0000030379 0001 0001 1

$ sanlock direct dump /var/lib/libvirt/sanlock/6cfae8f6c4541a92e4aa52f14e9977a5
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 6cfae8f6c4541a92e4aa52f14e9977a5 0000030322 0001 0001 1

# This is the lock for /dev/mapper/raw, whose VM is currently running on the Test2 node.
$ sanlock direct dump /var/lib/libvirt/sanlock/7edb5b6820e56426339607637d18e871
  offset                            lockspace                                         resource  timestamp  own  gen lver
00000000                   __LIBVIRT__DISKS__                 7edb5b6820e56426339607637d18e871 0000018630 0002 0001 2
  • Run the two virtual machines on the Test2 node:
$ virsh start test1_sanlock
error: Failed to start domain test1_sanlock
error: resource busy: Failed to acquire lock: error -243

$ virsh start test2_sanlock
error: Failed to start domain test2_sanlock
error: resource busy: Failed to acquire lock: error -243

Both fail to start, as expected.

Summary

  • All of libvirt's resources live in the single __LIBVIRT__DISKS__ lockspace, and each disk file corresponds to one lock, which occupies 1 MB of space.
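Since every lease is a fixed 1 MB file in the lease directory, the space used there grows linearly with the number of protected disks; a quick way to see the current usage:

$ du -sh /var/lib/libvirt/sanlock/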

Reposted from: https://my.oschina.net/LastRitter/blog/1539257
