oVirt Installation Notes

This post records a GlusterFS rebuild on Linux: deleting the old storage domain, bringing the host UP, creating a new partition, changing permissions, formatting the disk and mounting the data volume, then configuring and tuning the glusterd service. Problems hit along the way, such as mount failures and an already-existing volume, are resolved by deleting the old volume and adjusting the mount path.

(screenshot)
(screenshot)
(screenshot)
Delete the old domain.

(screenshot)
Host is UP.
(screenshot)
Create the new domain.
(screenshot)
(screenshot)

Change the directory permissions

[root@ovirt108 gluster_bricks]# umount /gluster_bricks/data/
[root@ovirt108 gluster_bricks]# fusermount-glusterfs -uz /gluster_bricks/data/
fusermount-glusterfs: /gluster_bricks/data not mounted
[root@ovirt108 gluster_bricks]# gluster volume stop gv_data
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv_data: success
[root@ovirt108 gluster_bricks]# cd ..
[root@ovirt108 /]# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary
File system type?  [ext2]? xfs
Start? 0%
End? 100%
Warning: You requested a partition from 0.00B to 75.2GB (sectors 0..146800639).
The closest location we can manage is 512B to 1048kB (sectors 1..2047).
Is this still acceptable to you?
Yes/No? yes
Warning: The resulting partition is not properly aligned for best performance: 1s % 2048s != 0s
Ignore/Cancel? I
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 75.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 2      512B    1049kB  1048kB  primary  xfs          lba
 1      1049kB  75.2GB  75.2GB  primary  xfs

(parted)
align-check  help         mktable      quit         resizepart   set          version
disk_set     mklabel      name         rescue       rm           toggle
disk_toggle  mkpart       print        resize       select       unit
(parted) rm
Partition number? 2
(parted) rm
Partition number?
Partition number? 1
(parted) mkpart
Partition type?  primary/extended? primary
File system type?  [ext2]? xfs
Start? 0%
End? 100%
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 75.2GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  75.2GB  75.2GB  primary  xfs          lba

(parted)
align-check  help         mktable      quit         resizepart   set          version
disk_set     mklabel      name         rescue       rm           toggle
disk_toggle  mkpart       print        resize       select       unit
(parted) quit
Information: You may need to update /etc/fstab.

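For reference, the same repartitioning can be scripted instead of typed interactively; a minimal sketch using parted's -s (script) mode, assuming the same /dev/sdb layout as above:

parted -s /dev/sdb rm 2                          # remove the stray misaligned partition
parted -s /dev/sdb rm 1                          # remove the old data partition
parted -s /dev/sdb mkpart primary xfs 0% 100%    # recreate one full-size partition, auto-aligned
parted -s /dev/sdb print                         # verify the layout
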
[root@ovirt108 /]# mkfs.xfs -i size=512 /dev/sdb1
mkfs.xfs: /dev/sdb1 contains a mounted filesystem
Usage: mkfs.xfs

[root@ovirt108 /]# mkfs.xfs -f  /dev/sdb1
mkfs.xfs: /dev/sdb1 contains a mounted filesystem
Usage: mkfs.xfs
/* blocksize */         [-b size=num]
/* metadata */          [-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1,
                            inobtcount=0|1,bigtime=0|1]
/* data subvol */       [-d agcount=n,agsize=n,file,name=xxx,size=num,
                            (sunit=value,swidth=value|su=num,sw=num|noalign),
                            sectsize=num
/* force overwrite */   [-f]
/* inode size */        [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
                            projid32bit=0|1,sparse=0|1]
/* no discard */        [-K]
/* log subvol */        [-l agnum=n,internal,size=num,logdev=xxx,version=n
                            sunit=value|su=num,sectsize=num,lazy-count=0|1]
/* label */             [-L label (maximum 12 characters)]
/* naming */            [-n size=num,version=2|ci,ftype=0|1]
/* no-op info only */   [-N]
/* prototype file */    [-p fname]
/* quiet */             [-q]
/* realtime subvol */   [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */        [-s size=num]
/* version */           [-V]
                        devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
      xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
[root@ovirt108 /]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                   3.9G     0  3.9G   0% /dev
tmpfs                                                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                                                      3.9G   25M  3.9G   1% /run
tmpfs                                                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1   22G  6.6G   16G  30% /
/dev/sda1                                                 1014M  350M  665M  35% /boot
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var                                         15G  394M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  166M  7.9G   3% /var/log
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var_log_audit                              2.0G   54M  2.0G   3% /var/log/audit
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
tmpfs                                                      792M     0  792M   0% /run/user/0
/dev/sdb1                                                   70G  532M   70G   1% /gluster_bricks/data
[root@ovirt108 /]# mkfs.xfs
No device name specified
Usage: mkfs.xfs
(same usage output as above)
[root@ovirt108 /]# mkfs.xfs -i /dev/sdb1
unknown option -i /dev/sdb1
Usage: mkfs.xfs
Format after unmounting (mkfs.xfs refuses to touch a mounted filesystem, hence the failures above):


[root@ovirt108 /]# umount /gluster_bricks/data/

[root@ovirt108 /]# mkfs.xfs  /dev/sdb1 -f
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4587456 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=18349824, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=8959, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@ovirt108 /]# blkid
/dev/mapper/onn-var_crash: UUID="6030a2ae-fb93-4270-b803-3290d795b3cc" BLOCK_SIZE="512" TYPE="xfs"
/dev/sda2: UUID="JFvK2K-XchK-WrhO-MAUS-Wxx1-271A-juwQp2" TYPE="LVM2_member" PARTUUID="bb2642f5-02"
/dev/sda1: UUID="22b64f77-42f9-49fe-a72e-0c620e704cf6" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="bb2642f5-01"
/dev/sdb1: UUID="f3d02c66-e2cd-41c1-9ada-485cbcd67037" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="af52d656-01"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="fc6c1e69-d3c8-414c-bb4c-f851ac6f6722" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="951aa20c-6f8b-44b4-82da-ae2768bbb674" TYPE="swap"
/dev/mapper/onn-var_log_audit: UUID="73658248-7810-45e8-85a1-3f9716f13dc6" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="f6635362-4cf5-4237-bc12-66dcc3d0a479" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="92cb454c-1cfb-4f1a-9b5b-4123e5a48734" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="413bad74-04ae-4d4b-8e95-4fb04b4ebbae" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="ecdfcc80-f9c8-4075-893b-dd358f377ec0" BLOCK_SIZE="512" TYPE="xfs"
[root@ovirt108 /]# vi /etc/fstab
[root@ovirt108 /]# vi /etc/fstab
[root@ovirt108 /]# mount -a && mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=3997348k,nr_inodes=999337,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1 on / type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=33,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=26650)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/onn-home on /home type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var on /var type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_log on /var/log type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-tmp on /tmp type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_log_audit on /var/log/audit type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_crash on /var/crash type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=810124k,mode=700)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sdb1 on /gluster_bricks/data type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
[root@ovirt108 /]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                   3.9G     0  3.9G   0% /dev
tmpfs                                                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                                                      3.9G   25M  3.9G   1% /run
tmpfs                                                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1   22G  6.6G   16G  30% /
/dev/sda1                                                 1014M  350M  665M  35% /boot
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var                                         15G  394M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  166M  7.9G   3% /var/log
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var_log_audit                              2.0G   56M  2.0G   3% /var/log/audit
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
tmpfs                                                      792M     0  792M   0% /run/user/0
/dev/sdb1                                                   70G  532M   70G   1% /gluster_bricks/data
[root@ovirt108 /]#

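The /etc/fstab edit itself is not captured above; a plausible entry for the new filesystem, using the UUID that blkid reported for /dev/sdb1, would be:

UUID=f3d02c66-e2cd-41c1-9ada-485cbcd67037  /gluster_bricks/data  xfs  defaults  0 0
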
[root@node110 /]# parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
align-check  help         mktable      quit         resizepart   set          version
disk_set     mklabel      name         rescue       rm           toggle
disk_toggle  mkpart       print        resize       select       unit
(parted) mkpart
Partition name?  []?
File system type?  [ext2]? ^C
Error: Expecting a file system type.
(parted) mkpart primary
File system type?  [ext2]? xfs
Start? 0%
End? 100%
Warning: You requested a partition from 0.00B to 75.2GB (sectors 0..146800639).
The closest location we can manage is 17.4kB to 1048kB (sectors 34..2047).
Is this still acceptable to you?
Yes/No? yes
Warning: The resulting partition is not properly aligned for best performance: 34s % 2048s != 0s
Ignore/Cancel? I
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 75.2GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 2      17.4kB  1049kB  1031kB  xfs          primary
 1      1049kB  75.2GB  75.2GB  xfs          primary

(parted) rm
Partition number? 2
(parted) rm
Partition number? 1
Warning: Partition /dev/sdb1 is being used. Are you sure you want to continue?
Yes/No? yes
Error: Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change,
probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot
now before making further changes.
Ignore/Cancel? I
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 75.2GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

(parted) mkpart primary
File system type?  [ext2]? xfs
Start? 0%
End? 100%
(parted) p
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 75.2GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  75.2GB  75.2GB  xfs          primary

(parted) q
Information: You may need to update /etc/fstab.

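The "unable to inform the kernel" warning above occurred because /dev/sdb1 was still mounted while its partition was deleted. Once the mount is released, the kernel can be told to re-read the partition table without a reboot:

partprobe /dev/sdb    # re-read the partition table (ships with parted)
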
[root@node110 /]# umount /gluster_bricks/data/
[root@node110 /]# mkfs.xfs /dev/sdb1 -f
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=4587392 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=18349568, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=8959, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@node110 /]# blkid
/dev/sda1: UUID="0c3b78f9-431b-47e8-9b14-9c9b58c97932" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="29902ec3-01"
/dev/sda2: UUID="DuV2GD-R4jV-MVch-9Okw-AOyq-yE71-59mvOs" TYPE="LVM2_member" PARTUUID="29902ec3-02"
/dev/sdb1: UUID="e27fb3a0-5e2b-4fd3-81e7-0888df63f73a" BLOCK_SIZE="512" TYPE="xfs" PARTLABEL="primary" PARTUUID="8825f74a-0f20-4c09-8848-c5edd11bf70b"
/dev/sr0: BLOCK_SIZE="2048" UUID="2022-03-03-08-20-27-00" LABEL="CentOS-Stream-8-x86_64-dvd" TYPE="iso9660" PTUUID="250e8015" PTTYPE="dos"
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1: UUID="08903906-6000-4038-b224-c9ad76505867" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-swap: UUID="d984a2f6-da82-496f-a755-e9aaf596c86d" TYPE="swap"
/dev/mapper/onn-var_log_audit: UUID="ece1a2a3-0fcb-4fb5-a68a-3c92384a8ddc" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_log: UUID="1f4963af-d0b8-4c69-92c4-f424bbf75805" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var_crash: UUID="89a62c3c-7537-4b6b-92f3-0526a0148e16" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-var: UUID="7ed192e6-48e4-47c4-9895-267f132ae459" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-tmp: UUID="b019f539-b01d-4fc4-ad5d-3c0cfbdb3984" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/onn-home: UUID="9bf6c9e7-bbcd-4da6-bc04-b3794e7a1b8d" BLOCK_SIZE="512" TYPE="xfs"
[root@node110 /]# vi /etc/fstab
[root@node110 /]# mount -a && mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=3997568k,nr_inodes=999392,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1 on / type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=44,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=24331)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/onn-tmp on /tmp type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-home on /home type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var on /var type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_log on /var/log type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_crash on /var/crash type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
/dev/mapper/onn-var_log_audit on /var/log/audit type xfs (rw,relatime,attr2,discard,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=810168k,mode=700)
/dev/sdb1 on /gluster_bricks/data type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
[root@node110 /]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                   3.9G     0  3.9G   0% /dev
tmpfs                                                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                                                      3.9G   25M  3.9G   1% /run
tmpfs                                                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1   22G  3.5G   19G  16% /
/dev/sda1                                                 1014M  350M  665M  35% /boot
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var                                         15G  361M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  135M  7.9G   2% /var/log
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
/dev/mapper/onn-var_log_audit                              2.0G   53M  2.0G   3% /var/log/audit
tmpfs                                                      792M     0  792M   0% /run/user/0
/dev/sdb1                                                   70G  532M   70G   1% /gluster_bricks/data
[root@node110 /]#

[root@ovirt108 /]# systemctl enable --now glusterd
[root@ovirt108 /]# gluster peer status
Number of Peers: 2

Hostname: node109.com
Uuid: 166bd34c-a652-4f5f-b87f-9d40c0961c6c
State: Peer in Cluster (Connected)
Other names:
172.16.100.109

Hostname: node110.com
Uuid: 36352480-6e7f-45eb-9b69-c2118b7c15b9
State: Peer in Cluster (Connected)
Other names:
172.16.100.110
[root@ovirt108 /]# gluster volume start gv_data
volume start: gv_data: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /gluster_bricks/data. Reason : No data available
[root@ovirt108 /]# gluster volume create gv_data replica 3 transport tcp ovirt108.com:/gluster_bricks/data/ node109.com:/gluster_bricks/data/ node110.com:/gluster_bricks/data/
volume create: gv_data: failed: Volume gv_data already exists

First, on the clients, umount the directories that are still mounted:

rm /test1/* -rf    # wipe the test data before umounting

umount /test1

umount /test2

umount /test3

umount /test4

On any one storage node, stop gv0 with the command below (no need to repeat it on the other nodes):

gluster volume stop gv0

Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: success
On any one storage node, delete gv0 with the command below (again, no need to repeat it on the other nodes):

gluster volume delete gv0

Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv0: success
On every storage node you can now verify that gv0 no longer appears, confirming the volume was deleted:

[root@ovirt108 /]# gluster volume delete gv_data
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv_data: success
[root@ovirt108 /]#

gluster volume info

Volume gv0 does not exist
(The gv0 stop/delete procedure above is quoted from CSDN blogger RunningTeenager, under the CC 4.0 BY-SA license: https://blog.csdn.net/birdie_l/article/details/78186010)
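The "Failed to get extended attribute trusted.glusterfs.volume-id" error seen earlier is typical when a brick directory from an old volume is reused or reformatted. A common cleanup before reusing such a brick (a sketch, not from the original session; run on each node, with the paths from this setup):

setfattr -x trusted.glusterfs.volume-id /gluster_bricks/data   # drop the stale volume id (ignore errors if the attribute is absent)
setfattr -x trusted.gfid /gluster_bricks/data                  # drop the stale gfid
rm -rf /gluster_bricks/data/.glusterfs                         # remove Gluster's internal bookkeeping directory
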
Mount-point error: gv_data cannot be created

[root@ovirt108 /]# gluster volume create gv_data replica 3 transport tcp ovirt108.com:/gluster_bricks/data/ node109.com:/gluster_bricks/data/ node110.com:/gluster_bricks/data/
volume create: gv_data: failed: The brick ovirt108.com:/gluster_bricks/data is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
[root@ovirt108 /]# gluster volume create gv_data replica 3 transport tcp ovirt108.com:/gluster_bricks/data/ n^Ce109.com:/gluster_bricks/data/ node110.com:/gluster_bricks/data/
[root@ovirt108 /]# umount /gluster_bricks/data/
[root@ovirt108 /]#
[root@ovirt108 /]# gluster volume create gv_data replica 3 transport tcp ovirt108.com:/gluster_bricks/data/ node109.com:/gluster_bricks/data/ node110.com:/gluster_bricks/data/
volume create: gv_data: failed: The brick ovirt108.com:/gluster_bricks/data is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
[root@ovirt108 /]# gluster volume create gv_data replica 3 transport tcp ovirt108.com:/gluster_bricks/data/ node109.com:/gluster_bricks/data/ node110.com:/gluster_bricks/data/ force
volume create: gv_data: success: please start the volume to access data
[root@ovirt108 /]#

Change the mount directory and add a data subdirectory.

[root@ovirt108 data]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                   3.9G     0  3.9G   0% /dev
tmpfs                                                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                                                      3.9G   33M  3.9G   1% /run
tmpfs                                                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1   22G  6.6G   16G  30% /
/dev/sda1                                                 1014M  350M  665M  35% /boot
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var                                         15G  394M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  167M  7.9G   3% /var/log
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var_log_audit                              2.0G   56M  2.0G   3% /var/log/audit
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
tmpfs                                                      792M     0  792M   0% /run/user/0
/dev/sdb1                                                   70G  532M   70G   1% /gluster_bricks/data
[root@ovirt108 data]# vi /etc/fstab
[root@ovirt108 data]# /gluster_bricks/data^C
[root@ovirt108 data]# vi /etc/fstab
[root@ovirt108 data]# umount /gluster_bricks/data/
umount: /gluster_bricks/data/: target is busy.
[root@ovirt108 data]# cd ..
[root@ovirt108 gluster_bricks]# cd ..
[root@ovirt108 /]# umount /gluster_bricks/data/
[root@ovirt108 /]# mount /gluster_bricks/
mount: /gluster_bricks/: can't find in /etc/fstab.
[root@ovirt108 /]# mount /dev/sdb1 /gluster_bricks/
[root@ovirt108 /]# cd /gluster_bricks/
[root@ovirt108 gluster_bricks]# ls
[root@ovirt108 gluster_bricks]# mkdir -p data
[root@ovirt108 gluster_bricks]# chown -R vdsm:kvm data/
[root@ovirt108 gluster_bricks]# ll
total 0
drwxr-xr-x 2 vdsm kvm 6 May 21 06:33 data
[root@ovirt108 gluster_bricks]# df -h
Filesystem                                                 Size  Used Avail Use% Mounted on
devtmpfs                                                   3.9G     0  3.9G   0% /dev
tmpfs                                                      3.9G  4.0K  3.9G   1% /dev/shm
tmpfs                                                      3.9G   57M  3.9G   2% /run
tmpfs                                                      3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/onn-ovirt--node--ng--4.4.10.2--0.20220303.0+1   22G  6.6G   16G  30% /
/dev/sda1                                                 1014M  350M  665M  35% /boot
/dev/mapper/onn-home                                      1014M   40M  975M   4% /home
/dev/mapper/onn-var                                         15G  395M   15G   3% /var
/dev/mapper/onn-var_log                                    8.0G  160M  7.9G   2% /var/log
/dev/mapper/onn-tmp                                       1014M   40M  975M   4% /tmp
/dev/mapper/onn-var_log_audit                              2.0G   55M  2.0G   3% /var/log/audit
/dev/mapper/onn-var_crash                                   10G  105M  9.9G   2% /var/crash
tmpfs                                                      792M     0  792M   0% /run/user/0
/dev/sdb1                                                   70G  532M   70G   1% /gluster_bricks
[root@ovirt108 gluster_bricks]#

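Since /dev/sdb1 is now mounted at /gluster_bricks rather than /gluster_bricks/data, the fstab entry needs the same adjustment (hypothetical entry, same UUID as before):

UUID=f3d02c66-e2cd-41c1-9ada-485cbcd67037  /gluster_bricks  xfs  defaults  0 0
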
(screenshots)

[root@ovirt108 gluster_bricks]# service glusterd start
Redirecting to /bin/systemctl start glusterd.service
[root@ovirt108 gluster_bricks]# systemctl start glusterd
[root@ovirt108 gluster_bricks]# chkconfig glusterd on
Note: Forwarding request to 'systemctl enable glusterd.service'.
[root@ovirt108 gluster_bricks]# systemctl enable glusterd
[root@ovirt108 gluster_bricks]# gluster peer probe node109.com
peer probe: Host node109.com port 24007 already in peer list
[root@ovirt108 gluster_bricks]# gluster peer probe node110.com
peer probe: Host node110.com port 24007 already in peer list
[root@ovirt108 gluster_bricks]# gluster volume start gv_data
volume start: gv_data: failed: Volume gv_data already started
[root@ovirt108 gluster_bricks]# gluster volume set gv_data diagnostics.count-fop-hits on
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data diagnostics.latency-measurement on
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data storage.owner-gid 36
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data storage.owner-uid 36
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data cluster.server-quorum-type server
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data cluster.quorum-type auto
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data network.remote-dio enable
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data cluster.eager-lock enable
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data performance.stat-prefetch off
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data performance.io-cache off
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data performance.read-ahead off
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data performance.quick-read off
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data auth.allow \*
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data user.cifs enable
volume set: success
[root@ovirt108 gluster_bricks]# gluster volume set gv_data nfs.disable off
Gluster NFS is being deprecated in favor of NFS-Ganesha Enter "yes" to continue using Gluster NFS (y/n) yes
volume set: success
[root@ovirt108 gluster_bricks]#

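To confirm the tuning took effect, the standard checks are:

gluster volume info gv_data      # lists the reconfigured options
gluster volume status gv_data    # shows brick and self-heal daemon status
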
Still not working. Switching to iSCSI shared storage instead.

(screenshots)
