Recovering a lost LVM PV on CentOS 8: method and steps

After a reboot, the vmstore volume group was gone: pvscan, vgscan, and lvscan all failed to find it.
Running fdisk -l to inspect the disks showed that /dev/sdb and /dev/sdc, the two disks backing vmstore, had each been claimed as a multipath device under /dev/mapper, which keeps LVM from using them directly.
Step 1: remove the multipath mappings

[root@node103 ~]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  Total: 2 [836.78 GiB] / in use: 2 [836.78 GiB] / in no VG: 0 [0   ]
[root@node103 ~]# vgcfgrestore vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot restore Volume Group vmstore with 2 PVs marked as missing.
  Restore failed.
[root@node103 ~]# cd /etc/lvm/backup/
[root@node103 backup]# ls
gluster_vg_sda  onn  vmstore
[root@node103 backup]# cd vmstore
-bash: cd: vmstore: Not a directory
[root@node103 backup]# ls
gluster_vg_sda  onn  vmstore
[root@node103 backup]# pvcreate --uuid 'm8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF' --restorefile /etc/lvm/backup/vmstore /dev/sdb
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot use /dev/sdb: device is a multipath component
[root@node103 backup]# fdisk -l
Disk /dev/sdd: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe6ac2d12

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sdd1  *       2048   2099199   2097152     1G 83 Linux
/dev/sdd2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/sda: 558.9 GiB, 600127266816 bytes, 1172123568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: 169 GiB, 181466562560 bytes, 354426880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-swap: 15.7 GiB, 16840130560 bytes, 32890880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/350014ee7aaad2514: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/350014ee7000283fc: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn-var_log_audit: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_log: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_crash: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-tmp: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-home: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_engine: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_data: 480 GiB, 515396075520 bytes, 1006632960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
[root@node103 backup]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/350014ee7aaad2514: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/350014ee7000283fc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"

blkid shows that the PV UUIDs referenced by the vmstore backup live on /dev/sdb and /dev/sdc, but multipath has wrapped those disks as 350014ee7aaad2514 and 350014ee7000283fc, which is why pvcreate rejected /dev/sdb as "a multipath component". Remove the two multipath maps so LVM can see the raw disks again:

[root@node103 backup]# dmsetup remove 350014ee7aaad2514
[root@node103 backup]# dmsetup remove 350014ee7000283fc

Disk UUIDs can also be checked with lsblk, blkid, and similar commands.
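As a small sketch of that lookup (the sample line is the blkid output for /dev/sdb captured above; on another host the device and UUID will differ), the PV UUID can be pulled out of a blkid line with sed:

```shell
#!/bin/sh
# Sketch: extract the LVM2_member UUID from a blkid-style line. We parse a
# captured sample instead of calling blkid, so no root access is needed.
sample='/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"'

# Keep only the value between UUID=" and the next double quote.
uuid=$(printf '%s\n' "$sample" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"   # prints m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF
```

Matching these UUIDs against the pv0/pv1 ids recorded in /etc/lvm/backup/vmstore tells you which physical disk each PV should be recreated on.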
Step 2: look for the vmstore volume group again; it is still missing

[root@node103 ~]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  Total: 2 [836.78 GiB] / in use: 2 [836.78 GiB] / in no VG: 0 [0   ]
[root@node103 ~]# vgcfgrestore vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot restore Volume Group vmstore with 2 PVs marked as missing.
  Restore failed.
[root@node103 ~]# cd /etc/lvm/backup/
[root@node103 backup]# ls
gluster_vg_sda  onn  vmstore
[root@node103 backup]# more vmstore
# Generated by LVM2 version 2.03.12(2)-RHEL8 (2021-05-19): Tue Jun  7 10:42:40 2022

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvcreate -l 100%VG -n lv_store vmstore'"

creation_host = "node103"       # Linux node103 4.18.0-365.el8.x86_64 #1 SMP Thu Feb 10 16:11:23 UTC 2022 x86_64
creation_time = 1654569760      # Tue Jun  7 10:42:40 2022

vmstore {
        id = "HmgGJN-YzQf-8Kks-k36u-3kU1-yBRj-bMM6ME"
        seqno = 2
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF"
                        device = "/dev/sdb"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1758174768   # 838.363 Gigabytes
                        pe_start = 2048
                        pe_count = 214620       # 838.359 Gigabytes
                }

                pv1 {
                        id = "pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s"
                        device = "/dev/sdc"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1758174768   # 838.363 Gigabytes
                        pe_start = 2048
                        pe_count = 214620       # 838.359 Gigabytes
                }
        }

        logical_volumes {

                lv_store {
                        id = "KP7KUD-n7ZO-XMmY-d6De-dd8j-dR15-jdPF4X"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        creation_time = 1654569760      # 2022-06-07 10:42:40 +0800
                        creation_host = "node103"
                        segment_count = 2

                        segment1 {
                                start_extent = 0
                                extent_count = 214620   # 838.359 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv0", 0
                                ]
                        }
                        segment2 {
                                start_extent = 214620
                                extent_count = 214620   # 838.359 Gigabytes

                                type = "striped"
                                stripe_count = 1        # linear

                                stripes = [
                                        "pv1", 0
                                ]
                        }
                }
        }

}
[root@node103 backup]#

[root@node103 backup]# lsblk
NAME                                        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                           8:0    0 558.9G  0 disk
├─gluster_vg_sda-gluster_lv_engine          253:14   0   100G  0 lvm   /gluster_bricks/engine
├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tmeta
│                                           253:15   0     3G  0 lvm
│ └─gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool
│                                           253:17   0 452.9G  0 lvm
│   ├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda
│   │                                       253:18   0 452.9G  1 lvm
│   └─gluster_vg_sda-gluster_lv_data        253:19   0   480G  0 lvm   /gluster_bricks/data
└─gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tdata
                                            253:16   0 452.9G  0 lvm
  └─gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool
                                            253:17   0 452.9G  0 lvm
    ├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda
    │                                       253:18   0 452.9G  1 lvm
    └─gluster_vg_sda-gluster_lv_data        253:19   0   480G  0 lvm   /gluster_bricks/data
sdb                                           8:16   0 838.4G  0 disk
└─350014ee7aaad2514                         253:5    0 838.4G  0 mpath
sdc                                           8:32   0 838.4G  0 disk
└─350014ee7000283fc                         253:6    0 838.4G  0 mpath
sdd                                           8:48   0 278.9G  0 disk
├─sdd1                                        8:49   0     1G  0 part  /boot
└─sdd2                                        8:50   0 277.9G  0 part
  ├─onn-pool00_tmeta                        253:0    0     1G  0 lvm
  │ └─onn-pool00-tpool                      253:2    0   206G  0 lvm
  │   ├─onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1
  │   │                                     253:3    0   169G  0 lvm   /
  │   ├─onn-pool00                          253:7    0   206G  1 lvm
  │   ├─onn-var_log_audit                   253:8    0     2G  0 lvm   /var/log/audit
  │   ├─onn-var_log                         253:9    0     8G  0 lvm   /var/log
  │   ├─onn-var_crash                       253:10   0    10G  0 lvm   /var/crash
  │   ├─onn-var                             253:11   0    15G  0 lvm   /var
  │   ├─onn-tmp                             253:12   0     1G  0 lvm   /tmp
  │   └─onn-home                            253:13   0     1G  0 lvm   /home
  ├─onn-pool00_tdata                        253:1    0   206G  0 lvm
  │ └─onn-pool00-tpool                      253:2    0   206G  0 lvm
  │   ├─onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1
  │   │                                     253:3    0   169G  0 lvm   /
  │   ├─onn-pool00                          253:7    0   206G  1 lvm
  │   ├─onn-var_log_audit                   253:8    0     2G  0 lvm   /var/log/audit
  │   ├─onn-var_log                         253:9    0     8G  0 lvm   /var/log
  │   ├─onn-var_crash                       253:10   0    10G  0 lvm   /var/crash
  │   ├─onn-var                             253:11   0    15G  0 lvm   /var
  │   ├─onn-tmp                             253:12   0     1G  0 lvm   /tmp
  │   └─onn-home                            253:13   0     1G  0 lvm   /home
  └─onn-swap                                253:4    0  15.7G  0 lvm   [SWAP]
sr0                                          11:0    1  1024M  0 rom
[root@node103 backup]# vgstore
-bash: vgstore: command not found
[root@node103 backup]# vgscan
  Found volume group "onn" using metadata type lvm2
  Found volume group "gluster_vg_sda" using metadata type lvm2
[root@node103 backup]#
[root@node103 backup]# pv
-bash: pv: command not found
[root@node103 backup]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdd2
  VG Name               onn
  PV Size               277.87 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71135
  Free PE               13871
  Allocated PE          57264
  PV UUID               DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW

  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               gluster_vg_sda
  PV Size               558.91 GiB / not usable 1.96 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              143081
  Free PE               0
  Allocated PE          143081
  PV UUID               kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8

[root@node103 backup]# vgdisplay
  --- Volume group ---
  VG Name               onn
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  35
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                11
  Open LV               8
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               277.87 GiB
  PE Size               4.00 MiB
  Total PE              71135
  Alloc PE / Size       57264 / <223.69 GiB
  Free  PE / Size       13871 / 54.18 GiB
  VG UUID               mhgstB-bHti-qcij-ngzI-AkR3-1VNC-vnjlSd

  --- Volume group ---
  VG Name               gluster_vg_sda
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               558.91 GiB
  PE Size               4.00 MiB
  Total PE              143081
  Alloc PE / Size       143081 / 558.91 GiB
  Free  PE / Size       0 / 0
  VG UUID               la8gWm-TG0a-OqT8-VgUG-Px30-E1RU-Hz2N8t

[root@node103 backup]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                    16G     0   16G   0% /dev
tmpfs                                                       16G  4.0K   16G   1% /dev/shm
tmpfs                                                       16G   26M   16G   1% /run
tmpfs                                                       16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1  169G  5.7G  164G   4% /
/dev/mapper/gluster_vg_sda-gluster_lv_engine               100G   99G  1.3G  99% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data                 480G   14G  467G   3% /gluster_bricks/data
/dev/sdd1                                                 1014M  351M  664M  35% /boot
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var                                         15G  364M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  333M  7.7G   5% /var/log
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var_log_audit                              2.0G   56M  2.0G   3% /var/log/audit
ovirt106.com:/data                                         200G   14G  187G   7% /rhev/data-center/mnt/glusterSD/ovirt106.com:_data
ovirt106.com:/engine                                       100G  100G  216M 100% /rhev/data-center/mnt/glusterSD/ovirt106.com:_engine
ovirt106.com:/gs_store                                     1.7T  529G  1.2T  32% /rhev/data-center/mnt/glusterSD/ovirt106.com:_gs__store
tmpfs                                                      3.2G     0  3.2G   0% /run/user/0
[root@node103 backup]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/350014ee7aaad2514: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/350014ee7000283fc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"
[root@node103 backup]# lvdisplay
  --- Logical volume ---
  LV Name                pool00
  VG Name                onn
  LV UUID                alhaPW-6MOs-8TaT-nf4f-DaBI-t1gn-OUxq1t
  LV Write Access        read/write (activated read only)
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:25 +0800
  LV Pool metadata       pool00_tmeta
  LV Pool data           pool00_tdata
  LV Status              available
  # open                 0
  LV Size                206.00 GiB
  Allocated pool data    2.15%
  Allocated metadata     1.89%
  Current LE             52737
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/onn/var_log_audit
  LV Name                var_log_audit
  VG Name                onn
  LV UUID                R6oVXy-Bzcb-EkZu-rHPt-323t-yRT7-RAYrCg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:27 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            0.97%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8

  --- Logical volume ---
  LV Path                /dev/onn/var_log
  LV Name                var_log
  VG Name                onn
  LV UUID                sv1pji-SQgd-Pfv0-Ryng-26ht-gRYB-zZSmni
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:28 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            3.19%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9

  --- Logical volume ---
  LV Path                /dev/onn/var_crash
  LV Name                var_crash
  VG Name                onn
  LV UUID                aFv0hR-7UXQ-sCjz-P1rF-kG9Y-dAne-ND8MXz
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:28 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Mapped size            0.11%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

  --- Logical volume ---
  LV Path                /dev/onn/var
  LV Name                var
  VG Name                onn
  LV UUID                zaejSr-0aHo-dFoZ-i7Ew-hJG6-C0WU-QB11Jh
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:29 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            3.31%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:11

  --- Logical volume ---
  LV Path                /dev/onn/tmp
  LV Name                tmp
  VG Name                onn
  LV UUID                1SxaTq-XZZB-Al7q-kKm2-7GZM-i6uW-Qrl7Kf
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:30 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            1.20%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:12

  --- Logical volume ---
  LV Path                /dev/onn/home
  LV Name                home
  VG Name                onn
  LV UUID                zTY0Ka-V7QS-CaOt-22KN-YeVI-4fzx-tv5Tb4
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:30 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            1.04%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:13

  --- Logical volume ---
  LV Path                /dev/onn/root
  LV Name                root
  VG Name                onn
  LV UUID                sULbd5-cDnn-d2I5-kqnN-ZxJJ-lglL-mRvHue
  LV Write Access        read only
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:31 +0800
  LV Pool name           pool00
  LV Status              NOT available
  LV Size                169.00 GiB
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/onn/swap
  LV Name                swap
  VG Name                onn
  LV UUID                IegeqC-UWJ8-zybL-mcGi-PPVp-qwc6-JQWSLz
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:31 +0800
  LV Status              available
  # open                 2
  LV Size                15.68 GiB
  Current LE             4015
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0
  LV Name                ovirt-node-ng-4.4.10.2-0.20220303.0
  VG Name                onn
  LV UUID                32AE3J-8DaJ-VoY7-wFHr-89bD-UNSN-viKlJs
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-13 22:52:28 +0800
  LV Pool name           pool00
  LV Thin origin name    root
  LV Status              NOT available
  LV Size                169.00 GiB
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1
  LV Name                ovirt-node-ng-4.4.10.2-0.20220303.0+1
  VG Name                onn
  LV UUID                239gDL-tE5w-2lEj-pLP5-XWof-o0a8-W2RFEu
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-13 22:52:32 +0800
  LV Pool name           pool00
  LV Thin origin name    ovirt-node-ng-4.4.10.2-0.20220303.0
  LV Status              available
  # open                 1
  LV Size                169.00 GiB
  Mapped size            2.00%
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

  --- Logical volume ---
  LV Name                gluster_thinpool_gluster_vg_sda
  VG Name                gluster_vg_sda
  LV UUID                vCGnuS-5q64-7DVT-geDw-487S-WmeE-dd8sUk
  LV Write Access        read/write (activated read only)
  LV Creation host, time node103, 2022-05-31 11:09:45 +0800
  LV Pool metadata       gluster_thinpool_gluster_vg_sda_tmeta
  LV Pool data           gluster_thinpool_gluster_vg_sda_tdata
  LV Status              available
  # open                 0
  LV Size                452.91 GiB
  Allocated pool data    2.19%
  Allocated metadata     0.57%
  Current LE             115945
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:18

  --- Logical volume ---
  LV Path                /dev/gluster_vg_sda/gluster_lv_engine
  LV Name                gluster_lv_engine
  VG Name                gluster_vg_sda
  LV UUID                g77xyh-CZGH-dZdu-lUp0-Guh2-WyTi-w1ct4E
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-31 11:08:51 +0800
  LV Status              available
  # open                 1
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:14

  --- Logical volume ---
  LV Path                /dev/gluster_vg_sda/gluster_lv_data
  LV Name                gluster_lv_data
  VG Name                gluster_vg_sda
  LV UUID                rHtxS7-jd1O-HzHu-54KP-9D8p-6Msq-LLNXCf
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-31 11:11:15 +0800
  LV Pool name           gluster_thinpool_gluster_vg_sda
  LV Status              available
  # open                 1
  LV Size                480.00 GiB
  Mapped size            2.06%
  Current LE             122880
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:19

[root@node103 backup]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                    16G     0   16G   0% /dev
tmpfs                                                       16G  4.0K   16G   1% /dev/shm
tmpfs                                                       16G   26M   16G   1% /run
tmpfs                                                       16G     0   16G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1  169G  5.7G  164G   4% /
/dev/mapper/gluster_vg_sda-gluster_lv_engine               100G   99G  1.3G  99% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data                 480G   14G  467G   3% /gluster_bricks/data
/dev/sdd1                                                 1014M  351M  664M  35% /boot
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var                                         15G  364M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  334M  7.7G   5% /var/log
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var_log_audit                              2.0G   56M  2.0G   3% /var/log/audit
ovirt106.com:/data                                         200G   14G  187G   7% /rhev/data-center/mnt/glusterSD/ovirt106.com:_data
ovirt106.com:/engine                                       100G  100G  216M 100% /rhev/data-center/mnt/glusterSD/ovirt106.com:_engine
ovirt106.com:/gs_store                                     1.7T  529G  1.2T  32% /rhev/data-center/mnt/glusterSD/ovirt106.com:_gs__store
tmpfs                                                      3.2G     0  3.2G   0% /run/user/0
[root@node103 backup]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old LVM2_member signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x798b6755.

Command (m for help): p
Disk /dev/sdb: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x798b6755

Command (m for help): q

[root@node103 backup]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdd2
  VG Name               onn
  PV Size               277.87 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71135
  Free PE               13871
  Allocated PE          57264
  PV UUID               DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW

  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               gluster_vg_sda
  PV Size               558.91 GiB / not usable 1.96 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              143081
  Free PE               0
  Allocated PE          143081
  PV UUID               kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8

[root@node103 backup]# pvs
  PV         VG             Fmt  Attr PSize   PFree
  /dev/sda   gluster_vg_sda lvm2 a--  558.91g     0
  /dev/sdd2  onn            lvm2 a--  277.87g 54.18g
[root@node103 backup]# pvcreate /dev/sdb
  Cannot use /dev/sdb: device is a multipath component
[root@node103 backup]# vi /etc/multipath.conf
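The transcript does not show the edit itself. Since pvcreate keeps failing with "device is a multipath component", a common fix (a sketch here, not the author's confirmed edit) is to blacklist both disks by WWID in /etc/multipath.conf; the WWIDs are the map names visible under /dev/mapper in the blkid output:

```
blacklist {
    wwid 350014ee7aaad2514
    wwid 350014ee7000283fc
}
```

After saving, restart the daemon (systemctl restart multipathd) and flush unused maps with multipath -F so /dev/sdb and /dev/sdc are released to LVM.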
[root@node103 backup]# lvm
lvm> pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdd2
  VG Name               onn
  PV Size               277.87 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71135
  Free PE               13871
  Allocated PE          57264
  PV UUID               DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW

  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               gluster_vg_sda
  PV Size               558.91 GiB / not usable 1.96 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              143081
  Free PE               0
  Allocated PE          143081
  PV UUID               kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8

lvm> pvs
  PV         VG             Fmt  Attr PSize   PFree
  /dev/sda   gluster_vg_sda lvm2 a--  558.91g     0
  /dev/sdd2  onn            lvm2 a--  277.87g 54.18g
lvm> vgreduce --removemissing vmstore
  Volume group "vmstore" not found
  Cannot process volume group vmstore
lvm> pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdd2
  VG Name               onn
  PV Size               277.87 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              71135
  Free PE               13871
  Allocated PE          57264
  PV UUID               DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW

  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               gluster_vg_sda
  PV Size               558.91 GiB / not usable 1.96 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              143081
  Free PE               0
  Allocated PE          143081
  PV UUID               kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8

lvm> lvm
lvm> exit
  Exiting.

Step 3: confirm the physical volume UUIDs from the matching VG file (VolumeGroupName_xxxx.vg) under /etc/lvm/archive, then recreate each physical volume with pvcreate's --uuid and --restorefile options.
Restoring the LVM metadata:
vgcfgrestore datavg    # restore the VG metadata for datavg
vgchange -ay datavg    # activate the VG
lvdisplay
# With any luck, the lost volume comes right back.

# Replace vmstore with your VG's name; this backup file holds your PV and LV metadata, which we can use for the recovery
more /etc/lvm/backup/vmstore
# Replace iidlad-LH1W-azxX-vtY5-54Z8-KXLK-YLp8Ik with your PV's UUID, taken from the file above. Restore the PV:
pvcreate /dev/sdb -u iidlad-LH1W-azxX-vtY5-54Z8-KXLK-YLp8Ik --restorefile /etc/lvm/backup/vmstore
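Concretely, the PV UUIDs live in the physical_volumes section of the backup file. A minimal sketch of pulling out each device/UUID pair (the sample below is a trimmed, hypothetical excerpt of such a file):

```shell
# Trimmed, hypothetical excerpt of an LVM metadata backup file
cat > /tmp/vmstore.sample <<'EOF'
vmstore {
    physical_volumes {
        pv0 {
            id = "m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF"
            device = "/dev/sdb"
        }
        pv1 {
            id = "pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s"
            device = "/dev/sdc"
        }
    }
}
EOF
# Remember each "id" value, print it when the matching "device" line appears
awk '$1=="id"{gsub(/"/,"",$3); u=$3} $1=="device"{gsub(/"/,"",$3); print $3, u}' /tmp/vmstore.sample
```

Against the real system, point the same awk at /etc/lvm/backup/vmstore (or the newest file under /etc/lvm/archive/).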

[root@node103 backup]# pvcreate /dev/sdb -u m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot use /dev/sdb: device is a multipath component
[root@node103 backup]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  Total: 2 [836.78 GiB] / in use: 2 [836.78 GiB] / in no VG: 0 [0   ]
[root@node103 backup]# lvs
  LV                                    VG             Attr       LSize   Pool                            Origin                              Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data                       gluster_vg_sda Vwi-aot--- 480.00g gluster_thinpool_gluster_vg_sda                                     2.06
  gluster_lv_engine                     gluster_vg_sda -wi-ao---- 100.00g
  gluster_thinpool_gluster_vg_sda       gluster_vg_sda twi-aot--- 452.91g                                                                     2.19   0.57
  home                                  onn            Vwi-aotz--   1.00g pool00                                                              1.04
  ovirt-node-ng-4.4.10.2-0.20220303.0   onn            Vwi---tz-k 169.00g pool00                          root
  ovirt-node-ng-4.4.10.2-0.20220303.0+1 onn            Vwi-aotz-- 169.00g pool00                          ovirt-node-ng-4.4.10.2-0.20220303.0 2.00
  pool00                                onn            twi-aotz-- 206.00g                                                                     2.15   1.89
  root                                  onn            Vri---tz-k 169.00g pool00
  swap                                  onn            -wi-ao----  15.68g
  tmp                                   onn            Vwi-aotz--   1.00g pool00                                                              1.20
  var                                   onn            Vwi-aotz--  15.00g pool00                                                              3.31
  var_crash                             onn            Vwi-aotz--  10.00g pool00                                                              0.11
  var_log                               onn            Vwi-aotz--   8.00g pool00                                                              3.21
  var_log_audit                         onn            Vwi-aotz--   2.00g pool00                                                              0.97
[root@node103 backup]# lvdisplay
  --- Logical volume ---
  LV Name                pool00
  VG Name                onn
  LV UUID                alhaPW-6MOs-8TaT-nf4f-DaBI-t1gn-OUxq1t
  LV Write Access        read/write (activated read only)
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:25 +0800
  LV Pool metadata       pool00_tmeta
  LV Pool data           pool00_tdata
  LV Status              available
  # open                 0
  LV Size                206.00 GiB
  Allocated pool data    2.15%
  Allocated metadata     1.89%
  Current LE             52737
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7

  --- Logical volume ---
  LV Path                /dev/onn/var_log_audit
  LV Name                var_log_audit
  VG Name                onn
  LV UUID                R6oVXy-Bzcb-EkZu-rHPt-323t-yRT7-RAYrCg
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:27 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Mapped size            0.97%
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8

  --- Logical volume ---
  LV Path                /dev/onn/var_log
  LV Name                var_log
  VG Name                onn
  LV UUID                sv1pji-SQgd-Pfv0-Ryng-26ht-gRYB-zZSmni
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:28 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                8.00 GiB
  Mapped size            3.21%
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9

  --- Logical volume ---
  LV Path                /dev/onn/var_crash
  LV Name                var_crash
  VG Name                onn
  LV UUID                aFv0hR-7UXQ-sCjz-P1rF-kG9Y-dAne-ND8MXz
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:28 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Mapped size            0.11%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

  --- Logical volume ---
  LV Path                /dev/onn/var
  LV Name                var
  VG Name                onn
  LV UUID                zaejSr-0aHo-dFoZ-i7Ew-hJG6-C0WU-QB11Jh
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:29 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            3.31%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:11

  --- Logical volume ---
  LV Path                /dev/onn/tmp
  LV Name                tmp
  VG Name                onn
  LV UUID                1SxaTq-XZZB-Al7q-kKm2-7GZM-i6uW-Qrl7Kf
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:30 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            1.20%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:12

  --- Logical volume ---
  LV Path                /dev/onn/home
  LV Name                home
  VG Name                onn
  LV UUID                zTY0Ka-V7QS-CaOt-22KN-YeVI-4fzx-tv5Tb4
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:30 +0800
  LV Pool name           pool00
  LV Status              available
  # open                 1
  LV Size                1.00 GiB
  Mapped size            1.04%
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:13

  --- Logical volume ---
  LV Path                /dev/onn/root
  LV Name                root
  VG Name                onn
  LV UUID                sULbd5-cDnn-d2I5-kqnN-ZxJJ-lglL-mRvHue
  LV Write Access        read only
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:31 +0800
  LV Pool name           pool00
  LV Status              NOT available
  LV Size                169.00 GiB
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/onn/swap
  LV Name                swap
  VG Name                onn
  LV UUID                IegeqC-UWJ8-zybL-mcGi-PPVp-qwc6-JQWSLz
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2022-05-13 22:43:31 +0800
  LV Status              available
  # open                 2
  LV Size                15.68 GiB
  Current LE             4015
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4

  --- Logical volume ---
  LV Path                /dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0
  LV Name                ovirt-node-ng-4.4.10.2-0.20220303.0
  VG Name                onn
  LV UUID                32AE3J-8DaJ-VoY7-wFHr-89bD-UNSN-viKlJs
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-13 22:52:28 +0800
  LV Pool name           pool00
  LV Thin origin name    root
  LV Status              NOT available
  LV Size                169.00 GiB
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1
  LV Name                ovirt-node-ng-4.4.10.2-0.20220303.0+1
  VG Name                onn
  LV UUID                239gDL-tE5w-2lEj-pLP5-XWof-o0a8-W2RFEu
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-13 22:52:32 +0800
  LV Pool name           pool00
  LV Thin origin name    ovirt-node-ng-4.4.10.2-0.20220303.0
  LV Status              available
  # open                 1
  LV Size                169.00 GiB
  Mapped size            2.00%
  Current LE             43265
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

  --- Logical volume ---
  LV Name                gluster_thinpool_gluster_vg_sda
  VG Name                gluster_vg_sda
  LV UUID                vCGnuS-5q64-7DVT-geDw-487S-WmeE-dd8sUk
  LV Write Access        read/write (activated read only)
  LV Creation host, time node103, 2022-05-31 11:09:45 +0800
  LV Pool metadata       gluster_thinpool_gluster_vg_sda_tmeta
  LV Pool data           gluster_thinpool_gluster_vg_sda_tdata
  LV Status              available
  # open                 0
  LV Size                452.91 GiB
  Allocated pool data    2.19%
  Allocated metadata     0.57%
  Current LE             115945
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:18

  --- Logical volume ---
  LV Path                /dev/gluster_vg_sda/gluster_lv_engine
  LV Name                gluster_lv_engine
  VG Name                gluster_vg_sda
  LV UUID                g77xyh-CZGH-dZdu-lUp0-Guh2-WyTi-w1ct4E
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-31 11:08:51 +0800
  LV Status              available
  # open                 1
  LV Size                100.00 GiB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:14

  --- Logical volume ---
  LV Path                /dev/gluster_vg_sda/gluster_lv_data
  LV Name                gluster_lv_data
  VG Name                gluster_vg_sda
  LV UUID                rHtxS7-jd1O-HzHu-54KP-9D8p-6Msq-LLNXCf
  LV Write Access        read/write
  LV Creation host, time node103, 2022-05-31 11:11:15 +0800
  LV Pool name           gluster_thinpool_gluster_vg_sda
  LV Status              available
  # open                 1
  LV Size                480.00 GiB
  Mapped size            2.06%
  Current LE             122880
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:19

[root@node103 backup]# lvs
  LV                                    VG             Attr       LSize   Pool                            Origin                              Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_data                       gluster_vg_sda Vwi-aot--- 480.00g gluster_thinpool_gluster_vg_sda                                     2.06
  gluster_lv_engine                     gluster_vg_sda -wi-ao---- 100.00g
  gluster_thinpool_gluster_vg_sda       gluster_vg_sda twi-aot--- 452.91g                                                                     2.19   0.57
  home                                  onn            Vwi-aotz--   1.00g pool00                                                              1.04
  ovirt-node-ng-4.4.10.2-0.20220303.0   onn            Vwi---tz-k 169.00g pool00                          root
  ovirt-node-ng-4.4.10.2-0.20220303.0+1 onn            Vwi-aotz-- 169.00g pool00                          ovirt-node-ng-4.4.10.2-0.20220303.0 2.00
  pool00                                onn            twi-aotz-- 206.00g                                                                     2.15   1.89
  root                                  onn            Vri---tz-k 169.00g pool00
  swap                                  onn            -wi-ao----  15.68g
  tmp                                   onn            Vwi-aotz--   1.00g pool00                                                              1.20
  var                                   onn            Vwi-aotz--  15.00g pool00                                                              3.31
  var_crash                             onn            Vwi-aotz--  10.00g pool00                                                              0.11
  var_log                               onn            Vwi-aotz--   8.00g pool00                                                              3.21
  var_log_audit                         onn            Vwi-aotz--   2.00g pool00                                                              0.97
[root@node103 backup]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/350014ee7aaad2514: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/350014ee7000283fc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"
[root@node103 backup]# vgcfgrestore vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot restore Volume Group vmstore with 2 PVs marked as missing.
  Restore failed.
[root@node103 backup]# vgreduce --removemissing vgname
  Volume group "vgname" not found
  Cannot process volume group vgname
[root@node103 backup]# vgreduce --removemissing vmstore
  Volume group "vmstore" not found
  Cannot process volume group vmstore
[root@node103 backup]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  Total: 2 [836.78 GiB] / in use: 2 [836.78 GiB] / in no VG: 0 [0   ]
[root@node103 backup]# lvscan
  ACTIVE            '/dev/onn/pool00' [206.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn/var_crash' [10.00 GiB] inherit
  ACTIVE            '/dev/onn/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn/home' [1.00 GiB] inherit
  inactive          '/dev/onn/root' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/swap' [15.68 GiB] inherit
  inactive          '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1' [169.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_thinpool_gluster_vg_sda' [452.91 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_engine' [100.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_data' [480.00 GiB] inherit
[root@node103 backup]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/350014ee7aaad2514: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/mapper/350014ee7000283fc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"

Use dmsetup to remove the mapper devices
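Before removing anything, it helps to confirm which device-mapper nodes are sitting on top of the disks. A minimal sketch (requires root; not part of the original session):

```shell
dmsetup ls                  # list all device-mapper node names
dmsetup deps -o devname     # show which block devices each map is built on
# The two 350014ee7... WWID maps claiming /dev/sdb and /dev/sdc are the
# ones that need to be removed, as done in the session below.
```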

[root@node103 ~]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  Total: 2 [836.78 GiB] / in use: 2 [836.78 GiB] / in no VG: 0 [0   ]
[root@node103 ~]# vgscan
  Found volume group "onn" using metadata type lvm2
  Found volume group "gluster_vg_sda" using metadata type lvm2
[root@node103 ~]# pvcreate /dev/sdb -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF  --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Cannot use /dev/sdb: device is a multipath component
[root@node103 ~]# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) help
  align-check TYPE N                        check partition N for TYPE(min|opt) alignment
  help [COMMAND]                           print general help, or help on COMMAND
  mklabel,mktable LABEL-TYPE               create a new disklabel (partition table)
  mkpart PART-TYPE [FS-TYPE] START END     make a partition
  name NUMBER NAME                         name partition NUMBER as NAME
  print [devices|free|list,all|NUMBER]     display the partition table, available devices,
        free space, all found partitions, or a particular partition
  quit                                     exit program
  rescue START END                         rescue a lost partition near START and END
  resizepart NUMBER END                    resize partition NUMBER
  rm NUMBER                                delete partition NUMBER
  select DEVICE                            choose the device to edit
  disk_set FLAG STATE                      change the FLAG on selected device
  disk_toggle [FLAG]                       toggle the state of FLAG on selected device
  set NUMBER FLAG STATE                    change the FLAG on partition NUMBER
  toggle [NUMBER [FLAG]]                   toggle the state of FLAG on partition NUMBER
  unit UNIT                                set the default unit to UNIT
  version                                  display the version number and copyright
        information of GNU Parted
(parted) p
Error: /dev/sdb: unrecognised disk label
Model: WD WD9001BKHG (scsi)
Disk /dev/sdb: 900GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
(parted) q
[root@node103 ~]# fdisk -l
Disk /dev/sdd: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe6ac2d12

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sdd1  *       2048   2099199   2097152     1G 83 Linux
/dev/sdd2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/sda: 558.9 GiB, 600127266816 bytes, 1172123568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: 169 GiB, 181466562560 bytes, 354426880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-swap: 15.7 GiB, 16840130560 bytes, 32890880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/350014ee7aaad2514: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/350014ee7000283fc: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn-var_log_audit: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_log: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_crash: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-tmp: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-home: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_engine: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_data: 480 GiB, 515396075520 bytes, 1006632960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
[root@node103 ~]# ll /dev/mapper/350014ee7aaad2514
lrwxrwxrwx. 1 root root 7 Jun  9 11:59 /dev/mapper/350014ee7aaad2514 -> ../dm-5
[root@node103 ~]# ll /dev/mapper/Volume-lv_store
ls: cannot access '/dev/mapper/Volume-lv_store': No such file or directory
[root@node103 ~]# dmsetup remove 350014ee7aaad2514
[root@node103 ~]# dmsetup remove 350014ee7000283fc
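Note that multipathd can recreate these maps on the next boot. One common precaution (an assumption, not shown in the original session) is to blacklist the two disks' WWIDs in /etc/multipath.conf:

```
blacklist {
    wwid "350014ee7aaad2514"
    wwid "350014ee7000283fc"
}
```

After editing, restart the daemon with `systemctl restart multipathd` so the change takes effect.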

Recover the PVs from their UUIDs, then restore and activate the VG:
vgcfgrestore vmstore   # restore the vmstore VG metadata
vgchange -ay vmstore   # activate the VG
lvdisplay              # verify the logical volumes
## If all goes well, the lost volume comes right back
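The whole procedure for this host can be consolidated as the sketch below (every command needs root; -ff forcibly re-initializes the PV labels, so run it only when the backup in /etc/lvm/backup really matches these disks):

```shell
# 1. Re-create the PV labels with their original UUIDs, taken from
#    /etc/lvm/backup/vmstore (pv0 -> /dev/sdb, pv1 -> /dev/sdc).
pvcreate /dev/sdb -u m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF \
    --restorefile /etc/lvm/backup/vmstore -ff
pvcreate /dev/sdc -u pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s \
    --restorefile /etc/lvm/backup/vmstore -ff

# 2. Restore the VG metadata and activate the LV.
vgcfgrestore vmstore
vgchange -ay vmstore

# 3. Verify.
lvscan | grep vmstore
```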

[root@node103 ~]# pvcreate /dev/sdb -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF  --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Can't initialize physical volume "/dev/sdb" of volume group "vmstore" without -ff
  /dev/sdb: physical volume not initialized.
[root@node103 ~]# fdisk -l
Disk /dev/sdd: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe6ac2d12

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sdd1  *       2048   2099199   2097152     1G 83 Linux
/dev/sdd2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/sda: 558.9 GiB, 600127266816 bytes, 1172123568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 838.4 GiB, 900185481216 bytes, 1758174768 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes




Disk /dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: 169 GiB, 181466562560 bytes, 354426880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-swap: 15.7 GiB, 16840130560 bytes, 32890880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/onn-var_log_audit: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_log: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var_crash: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-var: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-tmp: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/onn-home: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_engine: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/gluster_vg_sda-gluster_lv_data: 480 GiB, 515396075520 bytes, 1006632960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
[root@node103 ~]# pvcreate /dev/sdb -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF  --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Can't initialize physical volume "/dev/sdb" of volume group "vmstore" without -ff
  /dev/sdb: physical volume not initialized.
[root@node103 ~]# lsblk
NAME                                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                           8:0    0 558.9G  0 disk
├─gluster_vg_sda-gluster_lv_engine          253:14   0   100G  0 lvm  /gluster_bricks/engine
├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tmeta
│                                           253:15   0     3G  0 lvm
│ └─gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool
│                                           253:17   0 452.9G  0 lvm
│   ├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda
│   │                                       253:18   0 452.9G  1 lvm
│   └─gluster_vg_sda-gluster_lv_data        253:19   0   480G  0 lvm  /gluster_bricks/data
└─gluster_vg_sda-gluster_thinpool_gluster_vg_sda_tdata
                                            253:16   0 452.9G  0 lvm
  └─gluster_vg_sda-gluster_thinpool_gluster_vg_sda-tpool
                                            253:17   0 452.9G  0 lvm
    ├─gluster_vg_sda-gluster_thinpool_gluster_vg_sda
    │                                       253:18   0 452.9G  1 lvm
    └─gluster_vg_sda-gluster_lv_data        253:19   0   480G  0 lvm  /gluster_bricks/data
sdb                                           8:16   0 838.4G  0 disk
sdc                                           8:32   0 838.4G  0 disk
sdd                                           8:48   0 278.9G  0 disk
├─sdd1                                        8:49   0     1G  0 part /boot
└─sdd2                                        8:50   0 277.9G  0 part
  ├─onn-pool00_tmeta                        253:0    0     1G  0 lvm
  │ └─onn-pool00-tpool                      253:2    0   206G  0 lvm
  │   ├─onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1
  │   │                                     253:3    0   169G  0 lvm  /
  │   ├─onn-pool00                          253:7    0   206G  1 lvm
  │   ├─onn-var_log_audit                   253:8    0     2G  0 lvm  /var/log/audit
  │   ├─onn-var_log                         253:9    0     8G  0 lvm  /var/log
  │   ├─onn-var_crash                       253:10   0    10G  0 lvm  /var/crash
  │   ├─onn-var                             253:11   0    15G  0 lvm  /var
  │   ├─onn-tmp                             253:12   0     1G  0 lvm  /tmp
  │   └─onn-home                            253:13   0     1G  0 lvm  /home
  ├─onn-pool00_tdata                        253:1    0   206G  0 lvm
  │ └─onn-pool00-tpool                      253:2    0   206G  0 lvm
  │   ├─onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1
  │   │                                     253:3    0   169G  0 lvm  /
  │   ├─onn-pool00                          253:7    0   206G  1 lvm
  │   ├─onn-var_log_audit                   253:8    0     2G  0 lvm  /var/log/audit
  │   ├─onn-var_log                         253:9    0     8G  0 lvm  /var/log
  │   ├─onn-var_crash                       253:10   0    10G  0 lvm  /var/crash
  │   ├─onn-var                             253:11   0    15G  0 lvm  /var
  │   ├─onn-tmp                             253:12   0     1G  0 lvm  /tmp
  │   └─onn-home                            253:13   0     1G  0 lvm  /home
  └─onn-swap                                253:4    0  15.7G  0 lvm  [SWAP]
sr0                                          11:0    1  1024M  0 rom
[root@node103 ~]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"
[root@node103 ~]# pvcreate /dev/sdb -u  "m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF"  --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Can't initialize physical volume "/dev/sdb" of volume group "vmstore" without -ff
  /dev/sdb: physical volume not initialized.
[root@node103 ~]# pvcreate /dev/sdb
  Can't initialize physical volume "/dev/sdb" of volume group "vmstore" without -ff
  /dev/sdb: physical volume not initialized.


[root@node103 ~]# pvcreate --help
  pvcreate - Initialize physical volume(s) for use by LVM

  pvcreate PV ...
        [ -f|--force ]
        [ -M|--metadatatype lvm2 ]
        [ -u|--uuid String ]
        [ -Z|--zero y|n ]
        [    --dataalignment Size[k|UNIT] ]
        [    --dataalignmentoffset Size[k|UNIT] ]
        [    --bootloaderareasize Size[m|UNIT] ]
        [    --labelsector Number ]
        [    --pvmetadatacopies 0|1|2 ]
        [    --metadatasize Size[m|UNIT] ]
        [    --metadataignore y|n ]
        [    --norestorefile ]
        [    --setphysicalvolumesize Size[m|UNIT] ]
        [    --reportformat basic|json ]
        [    --restorefile String ]
        [ COMMON_OPTIONS ]

  Common options for lvm:
        [ -d|--debug ]
        [ -h|--help ]
        [ -q|--quiet ]
        [ -v|--verbose ]
        [ -y|--yes ]
        [ -t|--test ]
        [    --commandprofile String ]
        [    --config String ]
        [    --driverloaded y|n ]
        [    --nolocking ]
        [    --lockopt String ]
        [    --longhelp ]
        [    --profile String ]
        [    --version ]
        [    --devicesfile String ]
        [    --devices PV ]

  Use --longhelp to show all options and advanced commands.
[root@node103 ~]# pvcreate /dev/sdb /dev/sdc -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF -upHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s  --restorefile /etc/lvm/backup/vmstore
  Option -u/--uuid may not be repeated.
  Error during parsing of command line.
[root@node103 ~]# pvcreate /dev/sdb  -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF   --restorefile /etc/lvm/backup/vmstore
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
  Can't initialize physical volume "/dev/sdb" of volume group "vmstore" without -ff
  /dev/sdb: physical volume not initialized.
[root@node103 ~]# pvcreate /dev/sdb  -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF   --restorefile /etc/lvm/backup/vmstore -ff
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
Really INITIALIZE physical volume "/dev/sdb" of volume group "vmstore" [y/n]? y
  WARNING: Forcing physical volume creation on /dev/sdb of volume group "vmstore"
  Physical volume "/dev/sdb" successfully created.
[root@node103 ~]# pvcreate /dev/sd^C -u  m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF   --restorefile /etc/lvm/backup/vmstore -ff
[root@node103 ~]# cd /etc/lvm/backup/
[root@node103 backup]# ls
gluster_vg_sda  onn  vmstore
[root@node103 backup]# more vmstore
# Generated by LVM2 version 2.03.12(2)-RHEL8 (2021-05-19): Tue Jun  7 10:42:40 2022

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvcreate -l 100%VG -n lv_store vmstore'"

creation_host = "node103"       # Linux node103 4.18.0-365.el8.x86_64 #1 SMP Thu Feb 10 16:11:23 UTC 2022 x86_64
creation_time = 1654569760      # Tue Jun  7 10:42:40 2022

vmstore {
        id = "HmgGJN-YzQf-8Kks-k36u-3kU1-yBRj-bMM6ME"
        seqno = 2
        format = "lvm2"                 # informational
        status = ["RESIZEABLE", "READ", "WRITE"]
        flags = []
        extent_size = 8192              # 4 Megabytes
        max_lv = 0
        max_pv = 0
        metadata_copies = 0

        physical_volumes {

                pv0 {
                        id = "m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF"
                        device = "/dev/sdb"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1758174768   # 838.363 Gigabytes
                        pe_start = 2048
                        pe_count = 214620       # 838.359 Gigabytes
                }

                pv1 {
                        id = "pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s"
                        device = "/dev/sdc"     # Hint only

                        status = ["ALLOCATABLE"]
                        flags = []
                        dev_size = 1758174768   # 838.363 Gigabytes
                        pe_start = 2048
                        pe_count = 214620       # 838.359 Gigabytes
                }
        }

        logical_volumes {

                lv_store {
                        id = "KP7KUD-n7ZO-XMmY-d6De-dd8j-dR15-jdPF4X"
                        status = ["READ", "WRITE", "VISIBLE"]
                        flags = []
                        creation_time = 1654569760      # 2022-06-07 10:42:40 +0800
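The backup file records each missing PV's UUID together with a device hint, which is exactly what the "Couldn't find device with uuid ..." warnings refer to. A small helper (a sketch using standard awk; `list_backup_pvs` is a name introduced here) to pull those pairs out instead of eyeballing the file:

```shell
# Print "UUID device-hint" for each PV recorded in an LVM metadata backup,
# so the UUIDs in the warnings can be matched to the right disks.
list_backup_pvs() {
    awk -F'"' '
        /pv[0-9]+ [{]/     { inpv = 1 }                  # entering a pvN stanza
        inpv && /id =/     { uuid = $2 }                 # remember the PV UUID
        inpv && /device =/ { print uuid, $2; inpv = 0 }  # print with its hint
    ' "$1"
}

# Example (path from this host):
# list_backup_pvs /etc/lvm/backup/vmstore
```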
[root@node103 backup]# pvcreate /dev/sdc  -u  pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s   --restorefile /etc/lvm/backup/vmstore -ff
  WARNING: Couldn't find device with uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF.
  WARNING: Couldn't find device with uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s.
Really INITIALIZE physical volume "/dev/sdc" of volume group "vmstore" [y/n]? y
  WARNING: Forcing physical volume creation on /dev/sdc of volume group "vmstore"
  Physical volume "/dev/sdc" successfully created.
[root@node103 backup]# pvscan
  PV /dev/sdd2   VG onn              lvm2 [277.87 GiB / 54.18 GiB free]
  PV /dev/sda    VG gluster_vg_sda   lvm2 [558.91 GiB / 0    free]
  PV /dev/sdb                        lvm2 [838.36 GiB]
  PV /dev/sdc                        lvm2 [838.36 GiB]
  Total: 4 [2.45 TiB] / in use: 2 [836.78 GiB] / in no VG: 2 [<1.64 TiB]

The key step: restore the VG metadata

[root@node103 backup]# vgcfgrestore vmstore
  Restored volume group vmstore.
[root@node103 backup]# pvs -o pv_name,uuid
  PV         PV UUID
  /dev/sda   kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8
  /dev/sdb   m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF
  /dev/sdc   pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s
  /dev/sdd2  DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW
[root@node103 backup]# lvscan
  ACTIVE            '/dev/onn/pool00' [206.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn/var_crash' [10.00 GiB] inherit
  ACTIVE            '/dev/onn/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn/home' [1.00 GiB] inherit
  inactive          '/dev/onn/root' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/swap' [15.68 GiB] inherit
  inactive          '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1' [169.00 GiB] inherit
  inactive          '/dev/vmstore/lv_store' [<1.64 TiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_thinpool_gluster_vg_sda' [452.91 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_engine' [100.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_data' [480.00 GiB] inherit
[root@node103 backup]# lvchange -ay /dev/vmstore/lv_store

Activate the volume, then verify that lv_store is now ACTIVE:

[root@node103 backup]# lvscan
  ACTIVE            '/dev/onn/pool00' [206.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn/var_crash' [10.00 GiB] inherit
  ACTIVE            '/dev/onn/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn/home' [1.00 GiB] inherit
  inactive          '/dev/onn/root' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/swap' [15.68 GiB] inherit
  inactive          '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1' [169.00 GiB] inherit
  ACTIVE            '/dev/vmstore/lv_store' [<1.64 TiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_thinpool_gluster_vg_sda' [452.91 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_engine' [100.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_data' [480.00 GiB] inherit
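With lv_store active again, the last step is to mount it. The filesystem type and mount point are not shown in the session, so the paths below are assumptions to adapt to your environment:

```shell
blkid /dev/vmstore/lv_store           # confirm the filesystem label survived
mkdir -p /vmstore                     # hypothetical mount point
mount /dev/vmstore/lv_store /vmstore  # remount the recovered volume
df -h /vmstore                        # sanity-check size and usage
```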
  ACTIVE            '/dev/onn/pool00' [206.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn/var_crash' [10.00 GiB] inherit
  ACTIVE            '/dev/onn/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn/home' [1.00 GiB] inherit
  inactive          '/dev/onn/root' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/swap' [15.68 GiB] inherit
  inactive          '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1' [169.00 GiB] inherit
  inactive          '/dev/vmstore/lv_store' [<1.64 TiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_thinpool_gluster_vg_sda' [452.91 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_engine' [100.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_data' [480.00 GiB] inherit
[root@node103 backup]# lvchange -ay /dev/vmstore/lv_store
[root@node103 backup]# lvscan
  ACTIVE            '/dev/onn/pool00' [206.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn/var_crash' [10.00 GiB] inherit
  ACTIVE            '/dev/onn/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn/home' [1.00 GiB] inherit
  inactive          '/dev/onn/root' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/swap' [15.68 GiB] inherit
  inactive          '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0' [169.00 GiB] inherit
  ACTIVE            '/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1' [169.00 GiB] inherit
  ACTIVE            '/dev/vmstore/lv_store' [<1.64 TiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_thinpool_gluster_vg_sda' [452.91 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_engine' [100.00 GiB] inherit
  ACTIVE            '/dev/gluster_vg_sda/gluster_lv_data' [480.00 GiB] inherit
[root@node103 backup]#
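The whole recovery sequence above can be condensed into the following sketch. The UUIDs and /dev/sdX names are specific to this host (they come from /etc/lvm/backup/vmstore); substitute your own before running for real. Since pvcreate -ff and vgcfgrestore are destructive, the sketch only prints the commands unless DRY_RUN=0 is set.

```shell
# Sketch of the PV/VG recovery sequence shown in the transcript above.
# DRY_RUN=1 (the default) prints each command instead of executing it.
: "${DRY_RUN:=1}"
PLAN=""
run() {
    PLAN="$PLAN$1;"             # record each step name for inspection
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"             # dry run: show what would execute
    else
        "$@"
    fi
}

BACKUP=/etc/lvm/backup/vmstore  # metadata backup written by LVM

# Re-create each missing PV in place, reusing its old UUID so the
# restored VG metadata matches the on-disk PV headers again.
run pvcreate --uuid m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF --restorefile "$BACKUP" -ff /dev/sdb
run pvcreate --uuid pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s --restorefile "$BACKUP" -ff /dev/sdc

# Restore the VG metadata from the backup, then activate the LV.
run vgcfgrestore vmstore
run lvchange -ay /dev/vmstore/lv_store
```

Note the UUID reused for each PV must be the one recorded for that device in the backup file, otherwise vgcfgrestore will still report missing PVs.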

Check the UUIDs so the recovered filesystem can be added back to /etc/fstab

[root@node103 backup]# blkid
/dev/mapper/onn-var_crash: UUID="00050fc2-5319-46da-b6c2-b0557f602845" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd2: UUID="DWXIm1-1QSa-qU1q-0urm-1sRl-BEiz-NaemIW" TYPE="LVM2_member" PARTUUID="e6ac2d12-02"
/dev/sdd1: UUID="840d0f88-c5f0-4a98-a7dd-73ebf347a85b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e6ac2d12-01"
/dev/sda: UUID="kMvSz0-GoE5-Q0vf-qrjM-3H5m-NCLg-fSTZb8" TYPE="LVM2_member"
/dev/sdb: UUID="m8ZIeU-j2eU-8ZMK-q52W-Zcjd-hqij-JSG8XF" TYPE="LVM2_member"
/dev/sdc: UUID="pHlCKc-wvGK-4DKX-m5IF-IXig-BZJM-YqX38s" TYPE="LVM2_member"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="8bef5e42-0fd1-4595-8b3c-079127a2c237" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="18284c2e-3596-4596-b0e8-59924944c417" TYPE="swap"
/dev/mapper/onn-var_log_audit: UUID="928237b1-7c78-42d0-afca-dfd362225740" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="cb6aafa3-9a97-4dc5-81f6-d2da3b0fdcd9" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="47b2e20c-f623-404d-9a12-3b6a31b71096" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="599eda13-d184-4a59-958c-93e37bc6fdaa" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ef67a11a-c786-42d9-89d0-ee518b50fce5" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_engine: UUID="3b99a374-baa4-4783-8da8-8ecbfdb227a4" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/gluster_vg_sda-gluster_lv_data: UUID="eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/vmstore-lv_store: UUID="0ab91915-a144-4e81-bca9-996daf719c5c" BLOCK_SIZE="512" TYPE="xfs"
[root@node103 backup]# vi /etc/fstab
[root@node103 backup]# vi /etc/fstab
[root@node103 backup]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri May 13 14:46:50 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/onn/ovirt-node-ng-4.4.10.2-0.20220303.0+1 / xfs defaults,discard 0 0
UUID=840d0f88-c5f0-4a98-a7dd-73ebf347a85b /boot                   xfs     defaults        0 0
/dev/mapper/onn-home /home xfs defaults,discard 0 0
/dev/mapper/onn-tmp /tmp xfs defaults,discard 0 0
/dev/mapper/onn-var /var xfs defaults,discard 0 0
/dev/mapper/onn-var_crash /var/crash xfs defaults,discard 0 0
/dev/mapper/onn-var_log /var/log xfs defaults,discard 0 0
/dev/mapper/onn-var_log_audit /var/log/audit xfs defaults,discard 0 0
/dev/mapper/onn-swap    none                    swap    defaults        0 0
UUID=3b99a374-baa4-4783-8da8-8ecbfdb227a4 /gluster_bricks/engine xfs inode64,noatime,nodiratime 0 0
UUID=eb5ef30a-cb48-4491-99c6-3e6c6bfe3f69 /gluster_bricks/data xfs inode64,noatime,nodiratime 0 0
UUID=0ab91915-a144-4e81-bca9-996daf719c5c /gluster_bricks/xgvmstore xfs defaults 0 0
[root@node103 backup]#
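The fstab entry added above uses the xfs filesystem UUID reported by blkid, not the PV UUID. A small sketch of how that line is built (the mount point and options mirror the entry above; on the real host the UUID would come from `blkid -s UUID -o value /dev/mapper/vmstore-lv_store`):

```shell
# Helper to build an fstab line for an xfs filesystem by UUID.
fstab_line() {
    # $1 = filesystem UUID, $2 = mount point
    printf 'UUID=%s %s xfs defaults 0 0\n' "$1" "$2"
}

# UUID taken from the blkid output above; on another host, query it with:
#   blkid -s UUID -o value /dev/mapper/vmstore-lv_store
LINE=$(fstab_line 0ab91915-a144-4e81-bca9-996daf719c5c /gluster_bricks/xgvmstore)
echo "$LINE"
# On the real host this line is appended to /etc/fstab, then `mount -a`.
```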
[root@node103 backup]# mount -a
[root@node103 backup]# dh -HT
-bash: dh: command not found
[root@node103 backup]# df -HT
Filesystem                                                Type            Size  Used Avail Use% Mounted on
devtmpfs                                                  devtmpfs         17G     0   17G   0% /dev
tmpfs                                                     tmpfs            17G  4.1k   17G   1% /dev/shm
tmpfs                                                     tmpfs            17G   19M   17G   1% /run
tmpfs                                                     tmpfs            17G     0   17G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1 xfs             182G  6.1G  176G   4% /
/dev/mapper/gluster_vg_sda-gluster_lv_engine              xfs             108G  107G  1.3G  99% /gluster_bricks/engine
/dev/mapper/gluster_vg_sda-gluster_lv_data                xfs             516G   14G  502G   3% /gluster_bricks/data
/dev/sdd1                                                 xfs             1.1G  368M  696M  35% /boot
/dev/mapper/onn-home                                      xfs             1.1G   42M  1.1G   4% /home
/dev/mapper/onn-tmp                                       xfs             1.1G   42M  1.1G   4% /tmp
/dev/mapper/onn-var                                       xfs              17G  382M   16G   3% /var
/dev/mapper/onn-var_crash                                 xfs              11G  110M   11G   2% /var/crash
/dev/mapper/onn-var_log                                   xfs             8.6G  372M  8.3G   5% /var/log
/dev/mapper/onn-var_log_audit                             xfs             2.2G   59M  2.1G   3% /var/log/audit
ovirt106.com:/data                                        fuse.glusterfs  215G   15G  201G   7% /rhev/data-center/mnt/glusterSD/ovirt106.com:_data
ovirt106.com:/engine                                      fuse.glusterfs  108G  108G  227M 100% /rhev/data-center/mnt/glusterSD/ovirt106.com:_engine
ovirt106.com:/gs_store                                    fuse.glusterfs  1.8T  568G  1.3T  32% /rhev/data-center/mnt/glusterSD/ovirt106.com:_gs__store
tmpfs                                                     tmpfs           3.4G     0  3.4G   0% /run/user/0
/dev/mapper/vmstore-lv_store                              xfs             1.8T  206G  1.6T  12% /gluster_bricks/xgvmstore

Restart the gluster volume. Note that gluster has no "restart" subcommand, so the offline brick is brought back with a stop/start cycle.

[root@node103 backup]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/data/data 49153     0          Y       259716
Brick node103.com:/gluster_bricks/data/data 49152     0          Y       3159
Brick ovirt106.com:/gluster_bricks/data/dat
a                                           49152     0          Y       2567
Self-heal Daemon on localhost               N/A       N/A        Y       3378
Self-heal Daemon on node101.com             N/A       N/A        Y       2974
Self-heal Daemon on ovirt106.com            N/A       N/A        Y       2671

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/engine/en
gine                                        49154     0          Y       259727
Brick node103.com:/gluster_bricks/engine/en
gine                                        49153     0          Y       3177
Brick ovirt106.com:/gluster_bricks/engine/e
ngine                                       49153     0          Y       2582
Self-heal Daemon on localhost               N/A       N/A        Y       3378
Self-heal Daemon on node101.com             N/A       N/A        Y       2974
Self-heal Daemon on ovirt106.com            N/A       N/A        Y       2671

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gs_store
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/xgvmstore 49152     0          Y       123771
Brick node103.com:/gluster_bricks/xgvmstore N/A       N/A        N       N/A
Brick ovirt106.com:/gluster_bricks/xgvmstor
e                                           49154     0          Y       2625

Task Status of Volume gs_store
------------------------------------------------------------------------------
There are no active volume tasks

[root@node103 backup]# gluster volume start gs_store
volume start: gs_store: failed: Volume gs_store already started
[root@node103 backup]# gluster volume restart gs_store
unrecognized word: restart (position 1)

 Usage: gluster [options] <help> <peer> <pool> <volume>
 Options:
 --help  Shows the help information
 --version  Shows the version
 --print-logdir  Shows the log directory
 --print-statedumpdir Shows the state dump directory

[root@node103 backup]# gluster volume stop gs_store
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gs_store: success
[root@node103 backup]# gluster volume start gs_store
volume start: gs_store: success
[root@node103 backup]# gluster volume status
Status of volume: data
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/data/data 49153     0          Y       259716
Brick node103.com:/gluster_bricks/data/data 49152     0          Y       3159
Brick ovirt106.com:/gluster_bricks/data/dat
a                                           49152     0          Y       2567
Self-heal Daemon on localhost               N/A       N/A        Y       3378
Self-heal Daemon on node101.com             N/A       N/A        Y       2974
Self-heal Daemon on ovirt106.com            N/A       N/A        Y       2671

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/engine/en
gine                                        49154     0          Y       259727
Brick node103.com:/gluster_bricks/engine/en
gine                                        49153     0          Y       3177
Brick ovirt106.com:/gluster_bricks/engine/e
ngine                                       49153     0          Y       2582
Self-heal Daemon on localhost               N/A       N/A        Y       3378
Self-heal Daemon on ovirt106.com            N/A       N/A        Y       2671
Self-heal Daemon on node101.com             N/A       N/A        Y       2974

Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gs_store
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node101.com:/gluster_bricks/xgvmstore 49155     0          Y       357072
Brick node103.com:/gluster_bricks/xgvmstore 49154     0          Y       15077
Brick ovirt106.com:/gluster_bricks/xgvmstor
e                                           49155     0          Y       193157

Task Status of Volume gs_store
------------------------------------------------------------------------------
There are no active volume tasks

[root@node103 backup]#
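After the restart it is worth confirming that no brick is still offline (Online column = N). A sketch of one way to scan the status output; here it is fed a captured sample line from the transcript so it can run anywhere, but on a live host you would pipe in `gluster volume status gs_store` instead (and note gluster wraps long brick lines, which this simple parse does not handle):

```shell
# Detect offline bricks: a "Brick" row whose second-to-last column is N.
status_sample='Brick node103.com:/gluster_bricks/xgvmstore N/A       N/A        N       N/A'

# On a live host: gluster volume status gs_store | awk '...'
offline=$(printf '%s\n' "$status_sample" | awk '$1 == "Brick" && $(NF-1) == "N" {print $2}')
echo "offline bricks: ${offline:-none}"
```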
