Linux Disk and Filesystem Management: LVM and Disk Quotas

For Linux users, getting the partition layout right matters. If the initial sizing estimate is off and a partition fills up, you are forced to back up and delete data, and in the worst case to re-plan the partitions and reinstall the system. This article shows how to use LVM logical volumes on Linux so that partition capacity can be adjusted freely.

I. LVM Logical Volume Management

Overview

LVM (Logical Volume Manager) lets you resize partitions freely. Its purpose: adjust disk capacity dynamically, which makes disk management far more flexible.

Note: the /boot partition, which holds the boot loader files, cannot be placed on LVM.

Graphical front end: system-config-lvm

LVM management commands

PV (Physical Volume)

An entire disk, or an ordinary partition created with a tool such as fdisk

Divided into PEs (Physical Extents), 4 MB each by default

VG (Volume Group)

One or more physical volumes combined into a single storage pool

LV (Logical Volume)

A slice of space carved out of a volume group, on which a filesystem is built

Summary of the LVM command set:

Function   PV (physical volume)   VG (volume group)   LV (logical volume)
Scan       pvscan                 vgscan              lvscan
Create     pvcreate               vgcreate            lvcreate
Display    pvdisplay              vgdisplay           lvdisplay
Remove     pvremove               vgremove            lvremove
Extend     -                      vgextend            lvextend
Reduce     -                      vgreduce            lvreduce
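Before stepping through each command, here is a minimal end-to-end sketch of the usual workflow, using the same names as the walkthrough below (device names and sizes are illustrative):

pvcreate /dev/sdb                  # initialize a disk (or partition) as a physical volume
vgcreate vg_new /dev/sdb           # pool it into a volume group named vg_new
lvcreate -L 50G -n lv_new vg_new   # carve a 50 GiB logical volume out of the pool
mkfs.ext4 /dev/vg_new/lv_new       # build a filesystem on the logical volume
mount /dev/vg_new/lv_new /date     # mount it like any other block device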

LVM in practice

To prepare for the LVM work below, I added two disks to the system, /dev/sdb and /dev/sdc. You could also partition a disk with fdisk first and turn the partitions into PVs; here we use the newly added whole disks directly.

1. Creating and removing physical volumes (PV)

Create physical volumes [pvcreate device1 device2 ...]

[root@localhost ~]# pvcreate /dev/sdb /dev/sdc

Physical volume "/dev/sdb" successfully created

Physical volume "/dev/sdc" successfully created

[root@localhost ~]# pvscan

PV VG Fmt Attr PSize PFree

/dev/sda2 VolGroup lvm2 a-- 49.51g 0

/dev/sdb lvm2 a-- 100.00g 100.00g

/dev/sdc lvm2 a-- 50.00g 50.00g

[root@localhost ~]# pvdisplay

--- Physical volume ---

"/dev/sdb" is a new physical volume of "100.00 GiB"

--- NEW Physical volume ---

PV Name /dev/sdb

VG Name

PV Size 100.00 GiB

Allocatable NO

PE Size 0

Total PE 0

Free PE 0

Allocated PE 0

PV UUID su9PDS-jtXa-SKWH-n4Pf-TM0j-w8SW-swgd6d

"/dev/sdc" is a new physical volume of "50.00 GiB"

--- NEW Physical volume ---

PV Name /dev/sdc

VG Name

PV Size 50.00 GiB

Allocatable NO

PE Size 0

Total PE 0

Free PE 0

Allocated PE 0

PV UUID AASOe8-1yje-2rjn-O33d-0y27-PmIw-YoYCpF

Remove a physical volume [pvremove device1 device2 ...]; the PV must not currently belong to a volume group. (The transcript below was taken after the vg_new volume group from section 2 had been created, which is why /dev/sdb shows as partly allocated.)

[root@localhost ~]# pvremove /dev/sdc

Labels on physical volume "/dev/sdc" successfully wiped

[root@localhost ~]# pvs

PV VG Fmt Attr PSize PFree

/dev/sdb vg_new lvm2 a-- 100.00g 50.00g

[root@localhost ~]# pvdisplay

--- Physical volume ---

PV Name /dev/sdb

VG Name vg_new

PV Size 100.00 GiB / not usable 4.00 MiB

Allocatable yes

PE Size 4.00 MiB

Total PE 25599

Free PE 12799

Allocated PE 12800

PV UUID su9PDS-jtXa-SKWH-n4Pf-TM0j-w8SW-swgd6d

2. Creating, extending, reducing, and removing volume groups (VG)

Create a volume group [vgcreate vg_name pv1 pv2 ...]

[root@localhost ~]# vgcreate vg_new /dev/sdb /dev/sdc

Volume group "vg_new" successfully created

[root@localhost ~]# vgscan

Reading all physical volumes. This may take a while...

Found volume group "vg_new" using metadata type lvm2

Found volume group "VolGroup" using metadata type lvm2

[root@localhost ~]# vgdisplay

--- Volume group ---

VG Name vg_new

System ID

Format lvm2

Metadata Areas 2

Metadata Sequence No 1

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 0

Open LV 0

Max PV 0

Cur PV 2

Act PV 2

VG Size 149.99 GiB

PE Size 4.00 MiB

Total PE 38398

Alloc PE / Size 0 / 0

Free PE / Size 38398 / 149.99 GiB

VG UUID qZmTmm-g2fV-La4E-vzcN-yN9S-giyQ-eWra2G

Extend a volume group [vgextend vg_name pv_name]. (In the transcript below vg_new momentarily holds only /dev/sdc, so adding /dev/sdb grows it from 50 GiB to roughly 150 GiB.)

[root@localhost ~]# vgs

VG #PV #LV #SN Attr VSize VFree

vg_new 1 0 0 wz--n- 50.00g 50.00g

[root@localhost ~]# vgextend vg_new /dev/sdb

Volume group "vg_new" successfully extended

[root@localhost ~]# vgs

VG #PV #LV #SN Attr VSize VFree

vg_new 2 0 0 wz--n- 149.99g 149.99g

[root@localhost ~]# vgdisplay

--- Volume group ---

VG Name vg_new

System ID

Format lvm2

Metadata Areas 2

Metadata Sequence No 2

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 0

Open LV 0

Max PV 0

Cur PV 2

Act PV 2

VG Size 149.99 GiB

PE Size 4.00 MiB

Total PE 38398

Alloc PE / Size 0 / 0

Free PE / Size 38398 / 149.99 GiB

VG UUID 31tIMa-ldzd-caOd-0lsl-tZnY-QN47-H6mYO9

Reduce a volume group [vgreduce vg_name pv_name]

[root@localhost ~]# vgreduce vg_new /dev/sdb

Removed "/dev/sdb" from volume group "vg_new"

[root@localhost ~]# vgs

VG #PV #LV #SN Attr VSize VFree

vg_new 1 0 0 wz--n- 50.00g 50.00g

[root@localhost ~]# vgdisplay

--- Volume group ---

VG Name vg_new

System ID

Format lvm2

Metadata Areas 1

Metadata Sequence No 3

VG Access read/write

VG Status resizable

MAX LV 0

Cur LV 0

Open LV 0

Max PV 0

Cur PV 1

Act PV 1

VG Size 50.00 GiB

PE Size 4.00 MiB

Total PE 12799

Alloc PE / Size 0 / 0

Free PE / Size 12799 / 50.00 GiB

VG UUID 31tIMa-ldzd-caOd-0lsl-tZnY-QN47-H6mYO9

Remove a volume group [vgremove vg_name]

[root@localhost ~]# vgs

VG #PV #LV #SN Attr VSize VFree

VolGroup 1 2 0 wz--n- 49.51g 0

vg_new 1 0 0 wz--n- 50.00g 50.00g

[root@localhost ~]# vgremove vg_new

Volume group "vg_new" successfully removed

3. Creating, extending, reducing, and removing logical volumes (LV)

Create a logical volume [lvcreate -L size -n lv_name vg_name]

[root@localhost ~]# lvcreate -L 50G -n lv_new vg_new

Logical volume "lv_new" created

[root@localhost ~]# lvscan

ACTIVE '/dev/vg_new/lv_new' [50.00 GiB] inherit

[root@localhost ~]# lvdisplay

--- Logical volume ---

LV Path /dev/vg_new/lv_new

LV Name lv_new

VG Name vg_new

LV UUID wxH3Nn-GKXH-UrRX-Xotl-1bVi-GWUC-VQejzZ

LV Write Access read/write

LV Creation host, time localhost, 2016-04-17 13:17:13 -0400

LV Status available

# open 0

LV Size 50.00 GiB

Current LE 12800

Segments 1

Allocation inherit

Read ahead sectors auto

- currently set to 256

Block device 253:2
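The new volume is raw space until a filesystem is created on it. That step is not captured in the transcript, but the df output later in this article shows the volume formatted as ext4 and mounted on /date, so the intervening commands would have been roughly:

mkfs.ext4 /dev/vg_new/lv_new     # create an ext4 filesystem on the new LV
mkdir /date                      # create the mount point
mount /dev/vg_new/lv_new /date   # mount the volume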

Extend a logical volume [lvextend -L +size /dev/vg_name/lv_name]

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

lv_new vg_new -wi-a---- 50.00g

[root@localhost ~]# lvextend -L +20G /dev/vg_new/lv_new

Extending logical volume lv_new to 70.00 GiB

Logical volume lv_new successfully resized

[root@localhost ~]# lvdisplay

--- Logical volume ---

LV Path /dev/vg_new/lv_new

LV Name lv_new

VG Name vg_new

LV UUID 1JAHRk-JShy-jsqE-IpOL-Bzbq-VGOS-0HhVtz

LV Write Access read/write

LV Creation host, time localhost, 2016-04-17 13:56:42 -0400

LV Status available

# open 0

LV Size 70.00 GiB

Current LE 17920

Segments 1

Allocation inherit

Read ahead sectors auto

- currently set to 256

Block device 253:2
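Note that lvextend only enlarges the block device; a mounted ext4 filesystem will not report the extra space until it is grown to match. For ext4 this can be done online with resize2fs, which with no size argument grows the filesystem to fill the device (the 69G /date filesystem in the next section suggests this step was run off-camera):

resize2fs /dev/vg_new/lv_new   # grow the ext4 filesystem into the enlarged LV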

Shrinking a logical volume [caution: the following steps must be performed in this exact order]

1. Unmount the logical volume lv_new first; a mounted ext4 filesystem cannot be shrunk.

[root@localhost ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 45G 3.2G 40G 8% /

tmpfs tmpfs 491M 0 491M 0% /dev/shm

/dev/sda1 ext4 485M 38M 423M 9% /boot

/dev/mapper/vg_new-lv_new

ext4 69G 180M 66G 1% /date

[root@localhost ~]#

[root@localhost ~]#

[root@localhost ~]# umount /date

2. Run e2fsck -f to check the filesystem; resize2fs refuses to shrink a filesystem that has not just been checked.

[root@localhost ~]# e2fsck -f /dev/vg_new/lv_new

e2fsck 1.41.12 (17-May-2010)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/vg_new/lv_new: 139/4587520 files (0.0% non-contiguous), 334336/18350080 blocks

3. Shrink the filesystem to 50 GB with resize2fs.

[root@localhost ~]# resize2fs /dev/mapper/vg_new-lv_new 50G

resize2fs 1.41.12 (17-May-2010)

Resizing the filesystem on /dev/mapper/vg_new-lv_new to 13107200 (4k) blocks.

The filesystem on /dev/mapper/vg_new-lv_new is now 13107200 blocks long.

4. Then shrink the logical volume to 50 GB with lvreduce.

[root@localhost ~]# lvreduce -L 50G /dev/vg_new/lv_new

WARNING: Reducing active logical volume to 50.00 GiB

THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce lv_new? [y/n]: Y

Reducing logical volume lv_new to 50.00 GiB

Logical volume lv_new successfully resized

[root@localhost ~]# mount /dev/vg_new/lv_new /date

[root@localhost ~]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 45G 3.2G 40G 8% /

tmpfs tmpfs 491M 0 491M 0% /dev/shm

/dev/sda1 ext4 485M 38M 423M 9% /boot

/dev/mapper/vg_new-lv_new

ext4 50G 182M 47G 1% /date

Note: the filesystem size and the logical volume size must be kept in step. If the logical volume is larger than the filesystem, the extra space is wasted because it is not part of the filesystem; if the logical volume ends up smaller than the filesystem, data corruption follows.
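As an aside, newer LVM releases can fold steps 2 through 4 into one command: the -r (--resizefs) option of lvreduce calls fsadm to check and shrink the filesystem before shrinking the volume, which removes the risk of getting the order wrong. A sketch using the same names as above:

umount /date                             # the ext4 filesystem still must be unmounted first
lvreduce -r -L 50G /dev/vg_new/lv_new    # shrink the filesystem, then the LV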

Remove a logical volume [lvremove /dev/vg_new/lv_new]

[root@localhost ~]# lvs

LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert

lv_new vg_new -wi-a---- 50.00g

[root@localhost ~]# lvremove /dev/vg_new/lv_new

Do you really want to remove active logical volume lv_new? [y/n]: Y

Logical volume "lv_new" successfully removed

If a physical disk or partition is about to fail, here is how to migrate its data onto the other physical volumes in the same volume group.

1. Move the data off the failing device with pvmove.

[root@localhost ~]# pvs

PV VG Fmt Attr PSize PFree

/dev/sda2 VolGroup lvm2 a-- 49.51g 0

/dev/sdb vg0 lvm2 a-- 100.00g 100.00g

/dev/sdc vg0 lvm2 a-- 50.00g 50.00g

[root@localhost ~]# pvmove /dev/sdb /dev/sdc

2. Remove the failing disk or partition from the volume group (vg0 in this transcript) with vgreduce.

[root@localhost ~]# vgreduce vg0 /dev/sdb

3. Wipe the LVM label off the failing disk or partition with pvremove.

[root@localhost ~]# pvremove /dev/sdb

4. Physically remove the disk, or repair the partition with other tools.
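Put together, evacuating a failing PV is a short sequence (names follow the transcript above):

pvmove /dev/sdb         # migrate all allocated extents onto the other PVs in vg0
vgreduce vg0 /dev/sdb   # detach the now-empty PV from the volume group
pvremove /dev/sdb       # wipe the LVM label so the disk can be pulled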

II. Setting Disk Quotas

Prerequisites for disk quotas

1. Quota support in the Linux kernel

2. The quota software package installed

Characteristics of Linux disk quotas

Scope: applies to a designated filesystem (partition)

Targets: user accounts and group accounts

Limit types:

Disk capacity (default unit: KB)

Number of files

Enforcement levels:

Soft limit: a user may exceed this threshold for a limited grace period (one week by default; it can be changed with edquota -t, using days, hours, minutes, or seconds as the unit). As long as the hard limit is not exceeded, the user can keep using space, but the system issues a warning (warning messages are configured in /etc/warnquota.conf). If the user is still over the soft limit when the grace period expires, the system stops granting more space.

Hard limit: the absolute maximum amount of disk space or number of files a user may hold; it can never be exceeded.

Enable quota support on the filesystem

Add the usrquota and grpquota mount options (see the fstab example in the walkthrough below)

Scan the filesystem and create the quota files

Use the quotacheck command to create the quota files

quotacheck -ugcv filesystem

quotacheck -augcv

-u, -g: check user / group quotas; -c: create the quota data files; -v: show progress; -a: check all mounted quota-enabled filesystems

Edit quota settings for user and group accounts

Use the edquota command to edit the settings

edquota -u username

edquota -g groupname

Turn the filesystem's quota enforcement on and off

Use the quotaon and quotaoff commands

Verify that the quota works

You must operate inside the quota-enabled partition (its mount directory)

Create a given number of files: use touch or cp

Create files of a given size: use dd or cp

Check quota usage

From the user or group perspective: use the quota command

quota -u username

quota -g groupname

From the filesystem perspective: use repquota
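For example, to check first one user and then the whole filesystem (names taken from the walkthrough below):

quota -u hunter   # per-user view: blocks and inodes used vs. soft/hard limits
repquota -a       # per-filesystem report covering every quota-enabled mount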

Practice steps:

Create the user hunter; the disk quota will be configured for this user.

[root@localhost ~]# useradd hunter

[root@localhost ~]# passwd hunter

Changing password for user hunter.

New password:

Retype new password:

passwd: all authentication tokens updated successfully.

Create a logical partition; the quota will be applied to it.

[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

switch off the mode (command 'c') and change display units to

sectors (command 'u').

Command (m for help): n

Command action

e extended

p primary partition (1-4)

e

Partition number (1-4): 4

First cylinder (1-13054, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):

Using default value 13054

Command (m for help):

Command (m for help): p

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xb8d88355

Device Boot Start End Blocks Id System

/dev/sdb4 1 13054 104856223+ 5 Extended

Command (m for help): n

Command action

l logical (5 or over)

p primary partition (1-4)

l

First cylinder (1-13054, default 1):

Using default value 1

Last cylinder, +cylinders or +size{K,M,G} (1-13054, default 13054):

Using default value 13054

Command (m for help): p

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

255 heads, 63 sectors/track, 13054 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xb8d88355

Device Boot Start End Blocks Id System

/dev/sdb4 1 13054 104856223+ 5 Extended

/dev/sdb5 1 13054 104856192 83 Linux

Command (m for help): w

[root@localhost ~]# ls /dev/sdb*

/dev/sdb /dev/sdb4 /dev/sdb5

[root@localhost ~]# partx -a /dev/sdb


[root@localhost ~]# partx -a /dev/sdb

BLKPG: Device or resource busy

error adding partition 4

BLKPG: Device or resource busy

error adding partition 5

(These BLKPG errors are harmless here: they just mean the kernel had already registered the new partitions on the first partx run.)

[root@localhost ~]# ls /dev/sdb*

/dev/sdb /dev/sdb4 /dev/sdb5

Create the mount point and build a filesystem on the partition

[root@localhost ~]# mkdir /quota

[root@localhost ~]# mkfs.ext4 /dev/sdb5

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

6553600 inodes, 26214048 blocks

1310702 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=4294967296

800 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 25 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount the partition on /quota (the df output below shows it mounted), then edit the configuration file so the mount persists across reboots.

[root@localhost date]# df -Th

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 45G 3.2G 40G 8% /

tmpfs tmpfs 491M 0 491M 0% /dev/shm

/dev/sda1 ext4 485M 38M 423M 9% /boot

/dev/sdb5 ext4 99G 188M 94G 1% /quota

[root@localhost date]# vim /etc/fstab

#

# /etc/fstab

# Created by anaconda on Mon Jun 15 08:16:31 2015

#

# Accessible filesystems, by reference, are maintained under '/dev/disk'

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info

#

/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1

UUID=1773339b-7194-409d-872d-6a850058e748 /boot ext4 defaults 1 2

/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0

tmpfs /dev/shm tmpfs defaults 0 0

devpts /dev/pts devpts gid=5,mode=620 0 0

sysfs /sys sysfs defaults 0 0

proc /proc proc defaults 0 0
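Note that the listing above does not yet contain an entry for /dev/sdb5. For the mount to persist, and for the usrquota and grpquota options this exercise needs to take effect, a line along the following lines would have to be appended (this exact line is not in the original transcript):

/dev/sdb5 /quota ext4 defaults,usrquota,grpquota 0 0

The mount -a below then picks it up.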

[root@localhost date]# mount -a

[root@localhost date]# df -TH

Filesystem Type Size Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root

ext4 49G 3.5G 43G 8% /

tmpfs tmpfs 515M 0 515M 0% /dev/shm

/dev/sda1 ext4 508M 39M 443M 9% /boot

/dev/sdb5 ext4 106G 197M 101G 1% /quota

Use quotacheck to generate the quota database files. If you hit "Permission denied" errors like the ones below, temporarily disabling SELinux with setenforce 0 is enough to get past them. Once quotacheck succeeds, two new files appear under /quota/. Quota enforcement on the filesystem is then switched on with quotaon /quota/.

[root@localhost date]# quotacheck -cug /quota

quotacheck: Cannot create new quotafile /quota/aquota.user.new: Permission denied

quotacheck: Cannot initialize IO on new quotafile: Permission denied

quotacheck: Cannot create new quotafile /quota/aquota.group.new: Permission denied

quotacheck: Cannot initialize IO on new quotafile: Permission denied

[root@localhost date]# setenforce 0 ---> temporarily put SELinux in permissive mode

[root@localhost date]# quotacheck -cug /quota

[root@localhost date]# ls -ltr

total 0

[root@localhost date]# ls -ltr /quota

total 32

drwx------. 2 root root 16384 Apr 17 16:39 lost+found

-rw-------. 1 root root 6144 Apr 17 18:46 aquota.user

-rw-------. 1 root root 6144 Apr 17 18:46 aquota.group
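The transcript does not capture switching enforcement on, but per the note above the command would simply be:

quotaon /quota   # enable user and group quota enforcement on this mount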

Configure hunter's quota on this partition with edquota -u hunter. A group quota can likewise be set with edquota -g groupname.

[root@localhost date]# edquota -u hunter

Disk quotas for user hunter (uid 501):

Filesystem blocks soft hard inodes soft hard

/dev/sdb5 2 10240 20480 2 5 10

Filesystem   the filesystem the limits apply to
blocks       blocks already used (1 KB per block)
soft         soft limit on blocks; 0 means disabled
hard         hard limit on blocks; 0 means disabled
inodes       number of files already created; a trailing * means the soft limit is exceeded
soft         soft limit on the number of files; 0 means disabled
hard         hard limit on the number of files; 0 means disabled

edquota -p user1 user2 copies user1's quota settings to user2.

Here hunter is limited on this partition to a soft limit of 10 MB and a hard limit of 20 MB of capacity (regardless of how much free space the partition itself has), and to a soft limit of 5 files and a hard limit of 10 files.

Before testing, give the hunter user write permission on the /quota directory.

Now let's test whether the quota we just configured for hunter actually takes effect.

[root@localhost date]# su - hunter # switch to the hunter user

[hunter@localhost ~]$ cd /quota/

[hunter@localhost quota]$ touch user{1..5} # create 5 empty files

[hunter@localhost quota]$ ls -ltr

total 32

drwx------. 2 root root 16384 Apr 17 16:39 lost+found

-rw-------. 1 root root 7168 Apr 17 19:59 aquota.user

-rw-rw-r--. 1 hunter hunter 0 Apr 17 20:06 user5

-rw-rw-r--. 1 hunter hunter 0 Apr 17 20:06 user4

-rw-rw-r--. 1 hunter hunter 0 Apr 17 20:06 user3

-rw-rw-r--. 1 hunter hunter 0 Apr 17 20:06 user2

-rw-rw-r--. 1 hunter hunter 0 Apr 17 20:06 user1

-rw-------. 1 root root 7168 Apr 17 20:06 aquota.group

[hunter@localhost quota]$ touch user6 # this trips the file-count soft limit (5 files)

sdb5: warning, user file quota exceeded.

[hunter@localhost quota]$ ls

aquota.group aquota.user lost+found user1 user2 user3 user4 user5 user6

[hunter@localhost quota]$ touch user{7..15} # now we run into the hard limit of 10 files

sdb5: write failed, user file limit reached.

touch: cannot touch `user11': Disk quota exceeded

touch: cannot touch `user12': Disk quota exceeded

touch: cannot touch `user13': Disk quota exceeded

touch: cannot touch `user14': Disk quota exceeded

touch: cannot touch `user15': Disk quota exceeded

[hunter@localhost quota]$ rm -rf user* # delete everything, then test the capacity limit

[hunter@localhost quota]$ ls -ltr

total 32

drwx------. 2 root root 16384 Apr 17 16:39 lost+found

-rw-------. 1 root root 7168 Apr 17 19:59 aquota.user

-rw-------. 1 root root 7168 Apr 17 20:06 aquota.group

[hunter@localhost quota]$ dd if=/dev/zero of=text.txt bs=1M count=11

sdb5: warning, user block quota exceeded. # the 10 MB block soft limit is triggered here, though writing can still proceed

11+0 records in

11+0 records out

11534336 bytes (12 MB) copied, 0.373587 s, 30.9 MB/s

[hunter@localhost quota]$ dd if=/dev/zero of=text.txt bs=1M count=21

sdb5: warning, user block quota exceeded. # the 20 MB hard limit is hit here and nothing more can be written; the per-user quota works as intended

sdb5: write failed, user block limit reached.

dd: writing `text.txt': Disk quota exceeded

21+0 records in

20+0 records out

20971520 bytes (21 MB) copied, 0.351462 s, 59.7 MB/s

Switch back to root and run repquota -a to see quota usage on every quota-enabled filesystem. The report below shows hunter over the block soft limit (the +- flag) with the 6-day grace period counting down.

[root@localhost date]# repquota -a

*** Report for user quotas on device /dev/sdb5
Block grace time: 7days; Inode grace time: 7days

                      Block limits                 File limits
User          used    soft    hard   grace   used  soft  hard  grace
---------------------------------------------------------------------
root    --      20       0       0              2     0     0
hunter  +-   11264   10240   20480   6days      1     5    10

Note that once a user trips a soft limit, the grace timer starts counting down. If the user does not free enough space to get back under the soft limit before the grace period (7 days by default) runs out, the soft limit starts being enforced like a hard limit. This window is called the grace period, and edquota -t sets it separately for capacity and for file count.

[root@localhost date]# edquota -t

Grace period before enforcing soft limits for users:

Time units may be: days, hours, minutes, or seconds

Filesystem Block grace period Inode grace period

/dev/sdb5 7days 7days

LVM logical volumes and disk quotas are both well worth mastering; comments and corrections are welcome!

