1. RAID0, RAID1, RAID5, RAID6, RAID10, RAID01, RAID50, RAID60: characteristics and use cases, all in one place
2. Partitioning and mounting regular physical disks
3. Growing and shrinking LVM logical volume mounts without losing data
4. Thoughts on disk usage in enterprise production environments
1. Comparison of RAID0, RAID1, RAID5, RAID6, RAID10, RAID01, RAID50 and RAID60
RAID level | Min. disks | Capacity efficiency | Read/write speed | Disk failures tolerated | Characteristics | Use cases |
---|---|---|---|---|---|---|
RAID0 | 2 | N/N = 100% | S = N*s (s = single-disk speed) | 0 | Pure performance boost; provides no data redundancy whatsoever | Personal drives, gaming drives, cache drives |
RAID1 | 2 (typically exactly 2 disks) | 1/2 = 50% | Write ≈ s (every mirror writes the same data); reads can be served from all N disks | 1 (for a 2-disk mirror) | Highest cost per unit of usable capacity in the RAID family, but very high data safety and availability | Important data: servers, database storage, etc. |
RAID5 | 3 | (N-1)/N | Read ≈ (N-1)*s; writes pay a parity penalty | 1 | Good reliability at modest cost | A balanced compromise between performance, data safety and storage cost |
RAID6 | 4 | (N-2)/N | Read ≈ (N-2)*s; larger write penalty than RAID5 | 2 | Double parity, so write performance is worse than RAID5 | Data centers and other environments with strict data-safety requirements |
RAID10 | 4 | 50% | Write ≈ (N/2)*s (N/2 disks striped); reads from all N disks | At least 1; up to N/2 if each failure lands in a different mirror pair | Combines the strengths of RAID0 and RAID1, but because it mirrors rather than using RAID5-style parity, utilization is only 50% | High-performance, high-reliability workloads such as databases |
RAID01 | 4 | 50% | Same read/write performance as RAID10 | 1 guaranteed; a second failure in the other stripe set destroys the array | Same cost as RAID10 with strictly worse fault tolerance | Not recommended for production |
RAID50 | 6 | (N-2)/N (two RAID5 groups) | Read scales with the stripe width | 1 per RAID5 group, up to 2 in total | High performance, good reliability, good utilization | Applications needing reliable storage with high read and transfer rates: large database servers, application servers, file servers |
RAID60 | 8 | (N-4)/N (two RAID6 groups) | Read scales with the stripe width | 2 per RAID6 group, up to 4 in total | Higher fault tolerance than RAID50, with good read performance | Rarely deployed; the combination is still considered relatively immature |
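The utilization column can be sanity-checked with a tiny calculator. A minimal sketch, assuming RAID50/RAID60 are split into exactly two parity groups:

```shell
#!/bin/bash
# Usable capacity (same unit as s) for N identical disks of size s.
# Assumption: raid50/raid60 use exactly two parity groups.
raid_usable() {
  local level=$1 n=$2 s=$3
  case $level in
    raid0)          echo $(( n * s )) ;;
    raid1)          echo "$s" ;;                 # all mirrors hold one copy
    raid5)          echo $(( (n - 1) * s )) ;;   # one disk's worth of parity
    raid6)          echo $(( (n - 2) * s )) ;;   # two disks' worth of parity
    raid10|raid01)  echo $(( n * s / 2 )) ;;     # mirrored stripes: 50%
    raid50)         echo $(( (n - 2) * s )) ;;   # one parity disk per group
    raid60)         echo $(( (n - 4) * s )) ;;   # two parity disks per group
    *)              echo "unknown level: $level" >&2; return 1 ;;
  esac
}

raid_usable raid5 4 10    # 4 x 10G in RAID5  -> 30
raid_usable raid10 4 10   # 4 x 10G in RAID10 -> 20
```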
References:
- "Raid 0/1/5/10: principles, characteristics and performance differences" (gpcsy, CSDN blog)
- "Double your HDD speed in one simple step? An easy guide to building a RAID 0 array" (smzdm.com)
1.1 Lab scenario 1: create a 10G RAID1 with a 128K chunk, an ext4 filesystem and one spare disk, mounted automatically at /backup on boot
1.1.1 Add disks (see section 2.1 for the procedure)
First work out how much disk space to add: RAID1 needs two 10G disks (or equally sized partitions), plus one more 10G disk to act as the spare, i.e. 30G in total.
1.1.2 Use software RAID for the experiment; install the software-RAID tool mdadm first
rpm -q mdadm
yum install -y mdadm
1.1.3 With the disks added, have the kernel detect the new devices without rebooting
[root@localhost scsi_host]# for i in {0..2}; do echo '- - -' >> /sys/class/scsi_host/host$i/scan; done
[root@localhost scsi_host]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 29G 0 part
├─centos-root 253:0 0 26G 0 lvm /
└─centos-swap 253:1 0 3G 0 lvm [SWAP]
sdb 8:16 0 31G 0 disk
├─sdb1 8:17 0 5G 0 part
├─sdb2 8:18 0 5G 0 part
├─sdb3 8:19 0 5G 0 part
├─sdb4 8:20 0 1K 0 part
├─sdb5 8:21 0 5G 0 part
└─sdb6 8:22 0 11G 0 part
sdc 8:32 0 11G 0 disk
sdd 8:48 0 11G 0 disk
1.1.4 Build the RAID1 with mdadm; the RAID1 device is /dev/md0
[root@localhost scsi_host]# mdadm -C /dev/md0 -a yes -l 1 -n 2 -x 1 /dev/sdb6 /dev/sdc /dev/sdd -c 128
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
sdb6 is a partition; sdc and sdd are whole disks that have not been partitioned. (Note that RAID1 does not stripe, so the -c/--chunk setting has no practical effect at this level.)
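Right after creation it is worth confirming the mirror state and the spare. A small sketch, guarded so it only inspects devices that actually exist:

```shell
#!/bin/bash
# Inspect a freshly created md array; harmless if the device is absent.
check_md() {
  local dev=$1
  [ -e /proc/mdstat ] && cat /proc/mdstat   # look for [UU] = both mirrors in sync
  if [ -b "$dev" ]; then
    mdadm --detail "$dev"                   # sdd should be listed as "spare"
  else
    echo "no such md device: $dev"
  fi
}
check_md /dev/md0
```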
1.1.5 Create the ext4 filesystem
[root@localhost scsi_host]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
文件系统标签=
OS type: Linux
块大小=4096 (log=2)
分块大小=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
720896 inodes, 2880512 blocks
144025 blocks (5.00%) reserved for the super user
第一个数据块=0
Maximum filesystem blocks=2151677952
88 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: 完成
正在写入inode表: 完成
Creating journal (32768 blocks): 完成
Writing superblocks and filesystem accounting information: 完成
1.1.6 Create the mount directory and mount it persistently
[root@localhost scsi_host]# mkdir /backup
[root@localhost scsi_host]# blkid
/dev/sda1: UUID="9a542371-d07c-4db7-96af-0ab52515377a" TYPE="xfs"
/dev/sda2: UUID="1aO698-u0oL-zHnQ-8yBo-Jo2D-x02X-EYdpFw" TYPE="LVM2_member"
/dev/sdb6: UUID="34703f04-0ddb-b34a-b8c5-b7d7e8c36977" UUID_SUB="f4b4d91f-08da-1830-475c-3a7859257982" LABEL="localhost.cc:0" TYPE="linux_raid_member"
/dev/mapper/centos-root: UUID="9db7283f-e22d-4544-a7f2-8c2b1359317c" TYPE="xfs"
/dev/mapper/centos-swap: UUID="b327430e-918f-487b-9035-f8764248fc2a" TYPE="swap"
/dev/sdc: UUID="34703f04-0ddb-b34a-b8c5-b7d7e8c36977" UUID_SUB="b3095e27-8cb9-019d-77f9-3cb2c16b2301" LABEL="localhost.cc:0" TYPE="linux_raid_member"
/dev/md0: UUID="3ffe1428-8c99-4226-ae22-dd29bea76792" TYPE="ext4"
/dev/sdd: UUID="34703f04-0ddb-b34a-b8c5-b7d7e8c36977" UUID_SUB="305785cb-518d-4dd1-599e-44e75f16b51e" LABEL="localhost.cc:0" TYPE="linux_raid_member"
[root@localhost scsi_host]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 20 00:14:59 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=9a542371-d07c-4db7-96af-0ab52515377a /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
UUID="3ffe1428-8c99-4226-ae22-dd29bea76792" /backup ext4 defaults 0 0
[root@localhost scsi_host]# mount -a
[root@localhost scsi_host]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 979M 0 979M 0% /dev
tmpfs tmpfs 991M 0 991M 0% /dev/shm
tmpfs tmpfs 991M 9.5M 981M 1% /run
tmpfs tmpfs 991M 0 991M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 26G 1.6G 25G 7% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 199M 0 199M 0% /run/user/0
/dev/md0 ext4 11G 41M 11G 1% /backup
[root@localhost scsi_host]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 29G 0 part
├─centos-root 253:0 0 26G 0 lvm /
└─centos-swap 253:1 0 3G 0 lvm [SWAP]
sdb 8:16 0 31G 0 disk
├─sdb1 8:17 0 5G 0 part
├─sdb2 8:18 0 5G 0 part
├─sdb3 8:19 0 5G 0 part
├─sdb4 8:20 0 1K 0 part
├─sdb5 8:21 0 5G 0 part
└─sdb6 8:22 0 11G 0 part
└─md0 9:0 0 11G 0 raid1 /backup
sdc 8:32 0 11G 0 disk
└─md0 9:0 0 11G 0 raid1 /backup
sdd 8:48 0 11G 0 disk
└─md0 9:0 0 11G 0 raid1 /backup
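Two follow-ups worth doing on the lab machine (a sketch, run manually as root): persist the array so /dev/md0 keeps its name across reboots, and simulate a member failure to confirm the spare actually takes over.

```shell
#!/bin/bash
# Helpers for the /dev/md0 array built above (device names are this lab's).
persist_md0() {
  # Record the array in mdadm.conf so it assembles with the same name at boot.
  mdadm --detail --scan >> /etc/mdadm.conf
}
test_spare_takeover() {
  mdadm /dev/md0 -f /dev/sdb6   # mark sdb6 failed; spare sdd starts rebuilding
  cat /proc/mdstat              # watch the recovery progress
  mdadm /dev/md0 -r /dev/sdb6   # remove the failed member once rebuilt
}
echo "helpers defined: persist_md0 test_spare_takeover"
```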
Reference: "Linux disk management, part 2: implementing software RAID" (LinuxPanda, cnblogs.com)
1.2 Lab scenario 2: create a RAID10 device with 10G of usable space, a 256K chunk and an ext4 filesystem, mounted automatically at /mydata on boot
1.2.1 Work out the capacity and add disks
Build two RAID1s first, then one RAID0 on top of them. The RAID1 layer needs 5G × 2 × 2 = 20G of disk.
1.2.2 Create the two RAID1 devices, /dev/md4 and /dev/md5
[root@localhost scsi_host]# mdadm -C /dev/md4 -a yes -l 1 -n 2 /dev/sdb{1,2} -c 128
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Dec 3 00:28:54 2022
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdb2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Dec 3 00:28:54 2022
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md4 started.
[root@localhost scsi_host]# mdadm -C /dev/md5 -a yes -l 1 -n 2 /dev/sdb{3,5} -c 128
mdadm: /dev/sdb3 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Dec 3 00:29:12 2022
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdb5 appears to be part of a raid array:
level=raid1 devices=2 ctime=Sat Dec 3 00:29:12 2022
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
/dev/sdb1 and /dev/sdb2 are two partitions of sdb; together they form the RAID1 device /dev/md4.
/dev/sdb3 and /dev/sdb5 are two more partitions of sdb; they form the RAID1 device /dev/md5.
1.2.3 Create a RAID0 from the two RAID1 devices
[root@localhost scsi_host]# mdadm -C /dev/md6 -a yes -l 0 -n 2 /dev/md{4,5} -c 256
mdadm: /dev/md4 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Dec 3 00:15:05 2022
mdadm: /dev/md5 appears to be part of a raid array:
level=raid0 devices=2 ctime=Sat Dec 3 00:15:05 2022
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
[root@localhost scsi_host]# mdadm --detail /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Sat Dec 3 00:40:58 2022
Raid Level : raid0
Array Size : 10465280 (9.98 GiB 10.72 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Dec 3 00:40:58 2022
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 256K
Consistency Policy : none
Name : n72-1.centos7:6 (local to host n72-1.centos7)
UUID : 1bca98af:5063735f:39948cae:4ad902af
Events : 0
Number Major Minor RaidDevice State
0 9 4 0 active sync /dev/md4
1 9 5 1 active sync /dev/md5
1.2.4 Create an ext4 filesystem on the RAID0 device /dev/md6 and mount it
[root@localhost scsi_host]# mkfs.ext4 /dev/md6
mke2fs 1.42.9 (28-Dec-2013)
文件系统标签=
OS type: Linux
块大小=4096 (log=2)
分块大小=4096 (log=2)
Stride=64 blocks, Stripe width=128 blocks
654080 inodes, 2616320 blocks
130816 blocks (5.00%) reserved for the super user
第一个数据块=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: 完成
正在写入inode表: 完成
Creating journal (32768 blocks): 完成
Writing superblocks and filesystem accounting information: 完成
[root@localhost scsi_host]# vim /etc/fstab
[root@localhost scsi_host]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Sep 20 00:14:59 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=9a542371-d07c-4db7-96af-0ab52515377a /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
UUID="3ffe1428-8c99-4226-ae22-dd29bea76792" /backup ext4 defaults 0 0
UUID="80a7f560-2424-4acc-bba9-e9751a345cc9" /mydata ext4 defaults 0 0
[root@localhost scsi_host]# mount -a
[root@localhost scsi_host]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 979M 0 979M 0% /dev
tmpfs tmpfs 991M 0 991M 0% /dev/shm
tmpfs tmpfs 991M 9.5M 981M 1% /run
tmpfs tmpfs 991M 0 991M 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 26G 1.6G 25G 7% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 199M 0 199M 0% /run/user/0
/dev/md0 ext4 11G 41M 11G 1% /backup
/dev/md6 ext4 9.7G 37M 9.2G 1% /mydata
Troubleshooting: if you misconfigure something and need to start over, stop the affected device:
[root@localhost scsi_host]# mdadm -S /dev/md3
mdadm: stopped /dev/md3
Note: stop nested arrays in the reverse of their creation order (top-level array first); otherwise the lower-level devices will report as busy.
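For the nested RAID10 above, a complete teardown can be sketched as follows (device names are the ones from this lab):

```shell
#!/bin/bash
# Tear down the nested RAID10 stack top-down, then wipe the md metadata so
# the partitions can be reused without "appears to be part of a raid array"
# prompts on the next mdadm -C.
teardown_raid10() {
  umount /mydata 2>/dev/null || true
  mdadm -S /dev/md6                            # stop the top-level RAID0 first
  mdadm -S /dev/md4 /dev/md5                   # then the underlying RAID1s
  mdadm --zero-superblock /dev/sdb{1,2,3,5}    # erase the raid superblocks
}
echo "helper defined: teardown_raid10"
```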
Reference for this failure mode: "mdadm: Cannot open /dev/xxx: Device or resource busy" troubleshooting (Whistleྂ, CSDN blog)
2. Partitioning and mounting regular physical disks
Environment: CentOS 7.9.2009 x86_64
2.1 Add a new disk in VMware
2.2 Rescan so the system detects the new disk
[root@localhost scsi_host]# for i in {0..2}; do echo '- - -' >> /sys/class/scsi_host/host$i/scan; done
2.3 Partition the disk
[root@localhost ~]# fdisk /dev/sdb
欢迎使用 fdisk (util-linux 2.23.2)。
更改将停留在内存中,直到您决定将更改写入磁盘。
使用写入命令前请三思。
Device does not contain a recognized partition table
使用磁盘标识符 0x59cc425d 创建新的 DOS 磁盘标签。
命令(输入 m 获取帮助):m
命令操作
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
g create a new empty GPT partition table
G create an IRIX (SGI) partition table
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
命令(输入 m 获取帮助):n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
分区号 (1-4,默认 1):
起始 扇区 (2048-41943039,默认为 2048):
将使用默认值 2048
Last 扇区, +扇区 or +size{K,M,G} (2048-41943039,默认为 41943039):+10G
分区 1 已设置为 Linux 类型,大小设为 10 GiB
命令(输入 m 获取帮助):p
磁盘 /dev/sdb:21.5 GB, 21474836480 字节,41943040 个扇区
Units = 扇区 of 1 * 512 = 512 bytes
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节
磁盘标签类型:dos
磁盘标识符:0x59cc425d
设备 Boot Start End Blocks Id System
/dev/sdb1 2048 20973567 10485760 83 Linux
命令(输入 m 获取帮助):w
The partition table has been altered!
Calling ioctl() to re-read partition table.
正在同步磁盘。
[root@localhost ~]# lsblk -a
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk
└─sdb1 8:17 0 10G 0 part
2.4 Create an xfs filesystem and mount it persistently
This step is the same as 1.1.5 and 1.2.4; the only difference is running `mkfs.xfs /dev/sdb1` instead.
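For completeness, that xfs flow can be sketched as below. The /data mountpoint is a hypothetical example, not from the article:

```shell
#!/bin/bash
# Create xfs on /dev/sdb1 and mount it persistently by UUID.
# /data is an example mountpoint; adjust to taste.
make_and_mount_xfs() {
  mkfs.xfs /dev/sdb1
  mkdir -p /data
  local uuid
  uuid=$(blkid -s UUID -o value /dev/sdb1)         # stable across device renames
  echo "UUID=$uuid /data xfs defaults 0 0" >> /etc/fstab
  mount -a                                         # also verifies the fstab entry
}
echo "helper defined: make_and_mount_xfs"
```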
3. Dynamic expansion and shrinking with LVM
LVM uses the Linux kernel's device-mapper to virtualize storage, making partitions independent of the underlying hardware. With LVM you can abstract the storage space and create virtual partitions on top of it, grow and shrink those partitions easily, and add or remove them without worrying about whether any single disk has enough contiguous space. It avoids the pain of repartitioning disks that are in use, and of moving other partitions just to resize one, so it manages disks far more flexibly than traditional partitioning.
3.1 The building blocks of LVM
- Physical volume (PV): a block device that can hold LVM storage, such as a disk partition (MBR or GPT), a SAN disk, a RAID array, or a LUN.
- Volume group (VG): a collection of one or more physical volumes, shown in the device tree as /dev/VG_NAME.
- Logical volume (LV): the final device the system actually uses. LVs are created and managed inside a volume group, are made up of physical extents, behave like virtual partitions, appear as /dev/VG_NAME/LV_NAME, and usually carry a filesystem.
- Physical extent (PE): the smallest contiguous allocation unit in a volume group (4 MiB by default); a logical volume is assembled from many physical extents. Think of a PE as a slice of a physical volume that can be handed to a logical volume.
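Besides the verbose pvdisplay/vgdisplay/lvdisplay commands used below, the terse table views are often enough for a quick look at each layer:

```shell
#!/bin/bash
# One-line-per-object summaries of each LVM layer (run as root on the lab box).
lvm_summary() {
  pvs   # physical volumes: size, owning VG, free space
  vgs   # volume groups: PV/LV counts, total and free size
  lvs   # logical volumes: size, attributes, snapshot usage
}
command -v pvs >/dev/null 2>&1 && lvm_summary
echo "helper defined: lvm_summary"
```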
A more detailed reference: "Linux disk management with LVM explained, with common LVM disk commands" (yg@hunter, CSDN blog)
3.2 Lab scenario 1: create a 20G VG named testvg from at least two PVs, with a PE size of 16M; then create a 5G logical volume testlv inside it and mount it at /users
3.2.1 Add disks and partitions
Implemented with one new 10G disk plus one 10G partition; see section 2 for the procedure.
3.2.2 Create the physical volumes
[root@n72 ~]# pvcreate /dev/sdc /dev/sdd1
[root@n72 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name centos
PV Size <19.00 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4863
Free PE 0
Allocated PE 4863
PV UUID 1gPX51-tuuH-XBxA-uZQg-X5uD-TVLG-i5hOgM
--- Physical volume ---
PV Name /dev/sdb1
VG Name centos
PV Size 10.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 2559
Free PE 0
Allocated PE 2559
PV UUID tKLgPp-n0U7-WZtU-EThD-dcaa-eKsJ-3AktNk
--- Physical volume ---
PV Name /dev/sdc
VG Name testvg
PV Size 10.00 GiB / not usable 16.00 MiB
Allocatable yes
PE Size 16.00 MiB
Total PE 639
Free PE 319
Allocated PE 320
PV UUID sdUxQQ-CCnD-McYW-PxT3-AkNn-csMB-FYldcD
--- Physical volume ---
PV Name /dev/sdd1
VG Name testvg
PV Size 20.00 GiB / not usable 16.00 MiB
Allocatable yes
PE Size 16.00 MiB
Total PE 1279
Free PE 1279
Allocated PE 0
PV UUID 75fvMX-3wls-buUj-r4pq-Sv91-6oiq-rT7EEv
3.2.3 Create the volume group testvg with a 16M PE size
[root@n72 ~]# vgcreate -s 16M testvg /dev/sdc /dev/sdd1
[root@n72 ~]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 2
Act PV 2
VG Size 28.99 GiB
PE Size 4.00 MiB
Total PE 7422
Alloc PE / Size 7422 / 28.99 GiB
Free PE / Size 0 / 0
VG UUID fe4Qzi-xffX-Ekod-ZxCG-2Vnq-BHuW-y1aTnm
--- Volume group ---
VG Name testvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size <29.97 GiB
PE Size 16.00 MiB
Total PE 1918
Alloc PE / Size 320 / 5.00 GiB
Free PE / Size 1598 / <24.97 GiB
VG UUID Kr92jl-jpge-CbE7-5OLV-wguT-FmYn-bgoB6a
3.2.4 Create the logical volume testlv
[root@n72 ~]# lvcreate -L 5G -n testlv testvg
3.2.5 Format testlv and mount it at /users
[root@n72 ~]# mkfs.xfs /dev/testvg/testlv
[root@n72 ~]# mkdir /users
[root@n72 ~]# mount /dev/testvg/testlv /users
[root@n72 ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 12M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 27G 3.4G 24G 13% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/testvg-testlv xfs 5.0G 33M 5.0G 1% /users
[root@n72 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Sep 8 18:46:21 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=79cd1e38-ee63-49f1-8797-062b43b7b094 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
/dev/mapper/testvg-testlv /users xfs defaults 0 0
[root@n72 ~]# mount -a
3.3 Lab scenario 2: extend testlv to 7G without losing the archlinux user's files (home directory /users/archlinux)
3.3.1 Create the user archlinux with home directory /users/archlinux, then su to archlinux and copy /etc/pam.d/ into the home directory
[root@n72 ~]# useradd -d /users/archlinux archlinux
[root@n72 ~]# su - archlinux
[archlinux@n72 ~]$ pwd
/users/archlinux
[archlinux@n72 ~]$ cp -r /etc/pam.d ./
[archlinux@n72 ~]$ ll
总用量 4
drwxr-xr-x 2 archlinux archlinux 4096 12月 2 12:38 pam.d
3.3.2 Extend testlv to 7G
[root@n72 ~]# lvextend -L +2G /dev/testvg/testlv
Size of logical volume testvg/testlv changed from 5.00 GiB (320 extents) to 7.00 GiB (448 extents).
Logical volume testvg/testlv successfully resized.
[root@n72 ~]# xfs_growfs /users
meta-data=/dev/mapper/testvg-testlv isize=512 agcount=4, agsize=327680 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0 spinodes=0
data = bsize=4096 blocks=1310720, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 1310720 to 1835008
[root@n72 ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 12M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 27G 3.4G 24G 13% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/testvg-testlv xfs 7.0G 33M 7.0G 1% /users
[root@n72 ~]# du /users
104 /users/archlinux/pam.d
120 /users/archlinux
120 /users
Growing requires no service downtime and no unmounting.
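A small convenience worth knowing: lvextend can grow the filesystem in the same command via -r/--resizefs, which invokes fsadm (and thus xfs_growfs or resize2fs) for you. A sketch with the sizes from this scenario:

```shell
#!/bin/bash
# Grow the LV and its filesystem in one step; online for both xfs and ext4.
grow_testlv() {
  lvextend -r -L +2G /dev/testvg/testlv
}
echo "helper defined: grow_testlv"
```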
3.4 Lab scenario 3: shrink testlv to 3G without losing the archlinux user's files
ext4 supports shrinking, but xfs does not, so with xfs the only route is to back up, re-create and restore. Only the xfs route is tested here; the ext4 route was not, though the overall flow is similar. In production, back up before shrinking even on ext4, where in-place shrinking is supported.
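The untested ext4 route would look roughly like this. A sketch under the assumption that testlv carried ext4 instead of xfs: ext4 can shrink in place, but only offline, and the filesystem must be shrunk before the LV.

```shell
#!/bin/bash
# Offline ext4 shrink: filesystem first, then the logical volume.
shrink_ext4_lv() {
  local lv=$1 size=$2 mnt=$3     # e.g. /dev/testvg/testlv 3G /users
  umount "$mnt"
  e2fsck -f "$lv"                # mandatory consistency check before resize2fs
  resize2fs "$lv" "$size"        # shrink the filesystem to the target size
  lvreduce -L "$size" "$lv"      # then shrink the LV to match
  mount "$lv" "$mnt"
}
echo "helper defined: shrink_ext4_lv"
```

`lvreduce -r` would combine the resize2fs and lvreduce steps, at the cost of less explicit control over each stage.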
3.4.1 Shrinking relies on the xfs backup/restore tool xfsdump
[root@n72 ~]# yum install -y xfsdump
3.4.2 Record the current file details for later comparison, then back up the data under /users with xfsdump
[root@n72 archlinux]# touch reducetest.txt
[root@n72 archlinux]# ll
总用量 4
drwxr-xr-x 2 archlinux archlinux 4096 12月 2 12:38 pam.d
-rw-r--r-- 1 root root 0 12月 2 12:54 reducetest.txt
[root@n72 ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 12M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 27G 3.4G 24G 13% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/testvg-testlv xfs 7.0G 33M 7.0G 1% /users
[root@n72 ~]# xfsdump -f users.img /users -L users_full -M media0
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of n72.demo:/users
xfsdump: dump date: Fri Dec 2 13:28:35 2022
xfsdump: session id: c351ad3c-5105-47ce-8ab8-debd79d9c927
xfsdump: session label: "users_full"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 151744 bytes
xfsdump: /var/lib/xfsdump/inventory created
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 54152 bytes
xfsdump: dump size (non-dir files) : 20896 bytes
xfsdump: dump complete: 0 seconds elapsed
xfsdump: Dump Summary:
xfsdump: stream 0 /root/users.img OK (success)
xfsdump: Dump Status: SUCCESS
Note: store the backup on a different mounted partition than the one being shrunk.
3.4.3 Unmount /users
[root@n72 ~]# umount /users
3.4.4 Shrink the LV and re-create the filesystem
[root@n72 ~]# lvreduce -L 3G /dev/testvg/testlv
WARNING: Reducing active logical volume to 3.00 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce testvg/testlv? [y/n]: y
Size of logical volume testvg/testlv changed from 7.00 GiB (448 extents) to 3.00 GiB (192 extents).
Logical volume testvg/testlv successfully resized.
[root@n72 ~]# mkfs.xfs -f /dev/testvg/testlv
meta-data=/dev/testvg/testlv isize=512 agcount=4, agsize=196608 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=786432, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
3.4.5 Remount and restore
[root@n72 ~]# mount -a
[root@n72 ~]# df -Th
文件系统 类型 容量 已用 可用 已用% 挂载点
devtmpfs devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs tmpfs 2.0G 12M 2.0G 1% /run
tmpfs tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/centos-root xfs 27G 3.4G 24G 13% /
/dev/sda1 xfs 1014M 168M 847M 17% /boot
tmpfs tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/testvg-testlv xfs 3.0G 33M 3.0G 2% /users
[root@n72 ~]# xfsrestore -f users.img /users
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: n72.demo
xfsrestore: mount point: /users
xfsrestore: volume: /dev/mapper/testvg-testlv
xfsrestore: session time: Fri Dec 2 13:28:35 2022
xfsrestore: level: 0
xfsrestore: session label: "users_full"
xfsrestore: media label: "media0"
xfsrestore: file system id: 435d5154-a157-4b7c-a7ad-610f2701ed4e
xfsrestore: session id: c351ad3c-5105-47ce-8ab8-debd79d9c927
xfsrestore: media id: 870398f5-1e4e-440e-8aae-b21659130af0
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 3 directories and 38 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 1 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore: stream 0 /root/users.img OK (success)
xfsrestore: Restore Status: SUCCESS
[root@n72 ~]# ll /users/archlinux/
总用量 4
drwxr-xr-x 2 archlinux archlinux 4096 12月 2 12:38 pam.d
-rw-r--r-- 1 root root 0 12月 2 12:54 reducetest.txt
Comparison shows the files are exactly as they were before the shrink. Experiment successful.
3.5 Lab scenario 4: create a snapshot of testlv, try backing up data from the snapshot, and verify how snapshots behave
3.5.1 Check the file layout before taking the snapshot
[root@n72 /]# tree -h /users
/users
├── [ 154] archlinux
│ ├── [100M] 111.img
│ ├── [4.0K] pam.d
│ │ ├── [ 192] chfn
│ │ ├── [ 192] chsh
│ │ ├── [ 232] config-util
│ │ ├── [ 287] crond
│ │ ├── [ 19] fingerprint-auth -> fingerprint-auth-ac
│ │ ├── [ 702] fingerprint-auth-ac
│ │ ├── [ 796] login
│ │ ├── [ 154] other
│ │ ├── [ 188] passwd
│ │ ├── [ 16] password-auth -> password-auth-ac
│ │ ├── [1.0K] password-auth-ac
│ │ ├── [ 155] polkit-1
│ │ ├── [ 12] postlogin -> postlogin-ac
│ │ ├── [ 330] postlogin-ac
│ │ ├── [ 681] remote
│ │ ├── [ 143] runuser
│ │ ├── [ 138] runuser-l
│ │ ├── [ 17] smartcard-auth -> smartcard-auth-ac
│ │ ├── [ 752] smartcard-auth-ac
│ │ ├── [ 25] smtp -> /etc/alternatives/mta-pam
│ │ ├── [ 76] smtp.postfix
│ │ ├── [ 904] sshd
│ │ ├── [ 540] su
│ │ ├── [ 200] sudo
│ │ ├── [ 178] sudo-i
│ │ ├── [ 137] su-l
│ │ ├── [ 14] system-auth -> system-auth-ac
│ │ ├── [1.0K] system-auth-ac
│ │ ├── [ 129] systemd-user
│ │ ├── [ 84] vlock
│ │ └── [ 159] vmtoolsd
│ ├── [ 66] reducetest.txt
│ └── [ 32] snaptest1.txt
└── [ 33] snaptest.txt
2 directories, 35 files
3.5.2 Create the snapshot (i.e. capture everything up to this point)
[root@n72 ~]# lvcreate -L 3G -s -n testlv-snapshot /dev/testvg/testlv
Logical volume "testlv-snapshot" created.
[root@n72 ~]# mkdir -p /mnt/snap
[root@n72 ~]# mount -o ro,nouuid /dev/testvg/testlv-snapshot /mnt/snap
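With the snapshot mounted read-only, the "back up from the snapshot" part of the scenario can be done with any ordinary tool, since the snapshot view stays frozen even while /users keeps changing. A sketch (the tarball path is a hypothetical example):

```shell
#!/bin/bash
# Archive the frozen snapshot view mounted at /mnt/snap.
snap_backup() {
  tar -czf /root/users-snap.tar.gz -C /mnt/snap .
}
grep -qs ' /mnt/snap ' /proc/mounts && snap_backup
echo "helper defined: snap_backup"
```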
3.5.3 Modify the test files after taking the snapshot
[root@n72 users]# cd /users/archlinux/
[root@n72 archlinux]# ls
111.img pam.d reducetest.txt snaptest1.txt
[root@n72 archlinux]# rm -f 111.img snaptest1.txt
[root@n72 archlinux]# cat reducetest.txt
#!/bin/bash
#this change is after the snapshot
echo "here we go"
[root@n72 archlinux]# vim reducetest.txt
[root@n72 archlinux]# cat reducetest.txt
#!/bin/bash
#this change is before the snapshot
echo "here we go"
#below is newly type in
echo "if you see me after merge the snapshot"
echo "then say snapshot is NOT ok. you need try it again."
3.5.4 Unmount both the snapshot mount and the target mount
[root@n72 archlinux]# umount /mnt/snap/
[root@n72 archlinux]# umount /users/
umount: /users:目标忙。
(有些情况下通过 lsof(8) 或 fuser(1) 可以
找到有关使用该设备的进程的有用信息)
[root@n72 archlinux]# lsof /users
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 1450 root cwd DIR 253,2 133 4214848 /users/archlinux
lsof 2545 root cwd DIR 253,2 133 4214848 /users/archlinux
lsof 2547 root cwd DIR 253,2 133 4214848 /users/archlinux
[root@n72 archlinux]# cd /
[root@n72 /]# umount /users
3.5.5 Merge the snapshot back and verify
[root@n72 /]# lvconvert --merge /dev/testvg/testlv-snapshot
Merging of volume testvg/testlv-snapshot started.
testvg/testlv: Merged: 100.00%
[root@n72 /]# mount /dev/testvg/testlv /users
[root@n72 /]# tree -h /users
/users
├── [ 154] archlinux
│ ├── [100M] 111.img
│ ├── [4.0K] pam.d
│ │ ├── [ 192] chfn
│ │ ├── [ 192] chsh
│ │ ├── [ 232] config-util
│ │ ├── [ 287] crond
│ │ ├── [ 19] fingerprint-auth -> fingerprint-auth-ac
│ │ ├── [ 702] fingerprint-auth-ac
│ │ ├── [ 796] login
│ │ ├── [ 154] other
│ │ ├── [ 188] passwd
│ │ ├── [ 16] password-auth -> password-auth-ac
│ │ ├── [1.0K] password-auth-ac
│ │ ├── [ 155] polkit-1
│ │ ├── [ 12] postlogin -> postlogin-ac
│ │ ├── [ 330] postlogin-ac
│ │ ├── [ 681] remote
│ │ ├── [ 143] runuser
│ │ ├── [ 138] runuser-l
│ │ ├── [ 17] smartcard-auth -> smartcard-auth-ac
│ │ ├── [ 752] smartcard-auth-ac
│ │ ├── [ 25] smtp -> /etc/alternatives/mta-pam
│ │ ├── [ 76] smtp.postfix
│ │ ├── [ 904] sshd
│ │ ├── [ 540] su
│ │ ├── [ 200] sudo
│ │ ├── [ 178] sudo-i
│ │ ├── [ 137] su-l
│ │ ├── [ 14] system-auth -> system-auth-ac
│ │ ├── [1.0K] system-auth-ac
│ │ ├── [ 129] systemd-user
│ │ ├── [ 84] vlock
│ │ └── [ 159] vmtoolsd
│ ├── [ 66] reducetest.txt
│ └── [ 32] snaptest1.txt
└── [ 33] snaptest.txt
2 directories, 35 files
After the merge, the files and their contents are all back to the pre-snapshot state. Verified.
4. Thoughts on disk usage in enterprise production environments
As the old saying goes, preparedness averts peril, and for disk management in production that is the literal truth. Everything from the initial storage plan, to the server partitioning scheme, to later growing and shrinking of volumes constrains everything else. Production experience says the root filesystem and the application storage areas deserve particularly careful planning up front; otherwise you end up boxed in later, cleaning up a mess.
My knowledge of storage is still shallow; if you spot mistakes in this write-up, please point them out in the comments. For a systems administrator, storage is an area that rewards continuous study, understanding and hands-on practice.