I. tuned: Adjusting tuning profiles
Tuning the system:
System administrators can optimize system performance by adjusting various device settings for different workload use cases. The tuned daemon applies tuning adjustments, both statically and dynamically, using tuning profiles that reflect the requirements of particular workloads.
Dynamic tuning: the tuned daemon monitors system activity and adjusts settings as runtime behavior changes. Starting from the initial settings declared in the selected tuning profile, dynamic tuning continually adapts them to the current workload.
If the package is not present, install it manually and enable the service:
[root@localhost ~]# yum install tuned -y
Updating Subscription Management repositories.
baseos 2.0 MB/s | 2.8 kB 00:00
AppStream 1.3 MB/s | 3.2 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 3.0 kB/s | 4.5 kB 00:01
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 3.9 kB/s | 4.1 kB 00:01
Package tuned-2.18.0-2.el8.noarch is already installed.
Dependencies resolved.
========================================================================================================
Package             Architecture       Version                  Repository                        Size
========================================================================================================
Upgrading:
tuned noarch 2.18.0-2.el8_6.1 rhel-8-for-x86_64-baseos-rpms 316 k
Transaction Summary
========================================================================================================
Upgrade  1 Package
Total download size: 316 k
Downloading Packages:
tuned-2.18.0-2.el8_6.1.noarch.rpm 117 kB/s | 316 kB 00:02
--------------------------------------------------------------------------------------------------------
Total                                                                   117 kB/s | 316 kB     00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing        :                                          1/1
Running scriptlet: tuned-2.18.0-2.el8_6.1.noarch            1/1
Upgrading        : tuned-2.18.0-2.el8_6.1.noarch            1/2
Running scriptlet: tuned-2.18.0-2.el8_6.1.noarch            1/2
Running scriptlet: tuned-2.18.0-2.el8.noarch                2/2
Cleanup          : tuned-2.18.0-2.el8.noarch                2/2
Running scriptlet: tuned-2.18.0-2.el8.noarch                2/2
Running scriptlet: tuned-2.18.0-2.el8_6.1.noarch            2/2
Running scriptlet: tuned-2.18.0-2.el8.noarch                2/2
Verifying        : tuned-2.18.0-2.el8_6.1.noarch            1/2
Verifying        : tuned-2.18.0-2.el8.noarch                2/2
Installed products updated.
Upgraded:
tuned-2.18.0-2.el8_6.1.noarch
Complete!
[root@localhost ~]# systemctl is-active tuned
active
1. Selecting a tuning profile:
The profiles provided with tuned fall into the following categories:
Power-saving profiles
Performance-boosting profiles
The performance-boosting profiles include profiles that focus on:
Low latency for storage and network
High throughput for storage and network
Virtual machine performance
Virtualization host performance
Tuning profiles distributed with Red Hat Enterprise Linux 8:
Tuning profile | Purpose |
balanced | Ideal for systems that need a compromise between power saving and performance |
desktop | Derived from the balanced profile. Speeds up the response of interactive applications |
throughput-performance | Tunes the system for maximum throughput |
latency-performance | Ideal for server systems that require low latency at the expense of power consumption |
network-latency | Derived from the latency-performance profile. Enables additional network tuning parameters to provide low network latency |
network-throughput | Derived from the throughput-performance profile. Applies additional network tuning parameters for maximum network throughput |
powersave | Tunes the system for maximum power saving |
oracle | Based on the throughput-performance profile, optimized for Oracle database workloads |
virtual-guest | Tunes the system for best performance when it runs in a virtual machine |
virtual-host | Tunes the system for best performance when it acts as a host for virtual machines |
2. Managing profiles from the command line
The tuned-adm command changes the settings of the tuned daemon. It can query the current settings, list the available profiles, recommend a tuning profile for the system, switch profiles directly, or turn tuning off.
System administrators determine the currently active tuning profile with tuned-adm active.
The tuned-adm list command lists all available tuning profiles, both the built-in profiles and any custom profiles created by the system administrator.
Use tuned-adm profile profilename to switch the active profile to a different one that better matches the system's current tuning requirements.
The output of tuned-adm recommend is based on various system characteristics, including whether the system is a virtual machine and other predefined categories selected during system installation.
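As a sketch of the custom profiles mentioned above: a profile is a directory under /etc/tuned containing a tuned.conf, which can include and extend one of the shipped profiles. The profile name and the sysctl override below are hypothetical examples, not taken from this system:

```ini
# /etc/tuned/myprofile/tuned.conf -- hypothetical custom profile
[main]
summary=Throughput tuning with an extra sysctl override
include=throughput-performance

[sysctl]
# Applied on top of the settings inherited from throughput-performance
vm.dirty_ratio=40
```

Once this file exists, tuned-adm list shows myprofile among the available profiles, and tuned-adm profile myprofile activates it.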
II. Managing layered storage with Stratis
Stratis makes advanced storage features such as thin provisioning, snapshots, and pool-based management and monitoring convenient to use.
Example
1. Configure the yum repository and install the packages
[root@localhost ~]# yum install stratisd -y
[root@localhost ~]# yum install stratis-cli
2. Start the stratisd service
[root@localhost ~]# systemctl enable stratisd --now
[root@localhost ~]# stratis --version
2.4.2
3. Create a storage pool, list its filesystems, and extend the pool
[root@localhost ~]# stratis pool create pool1 /dev/nvme0n2
[root@kongd ~]# stratis filesystem list
[root@kongd ~]# stratis pool add-data pool1 /dev/nvme0n3
[root@kongd ~]# stratis pool list
4. Persistent mounting
[root@localhost ~]# tail -1 /etc/fstab
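The entry itself is not shown in the transcript. A representative /etc/fstab line for a Stratis filesystem (the UUID and mount point below are placeholders, not values from this system) must also delay the mount until stratisd is running:

```
# Placeholder UUID and mount point -- substitute the values for your filesystem
UUID=<stratis-fs-uuid>  /data  xfs  defaults,x-systemd.requires=stratisd.service  0 0
```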
5. Snapshots
[root@localhost ~]# stratis filesystem snapshot redhat rhce snap01
III. VDO: compressing storage and deduplicating data
Example:
1. Mount the installation media and install the packages
[root@localhost ~]# mount /dev/sr0 /mnt/
mount: /mnt: WARNING: device write-protected, mounted read-only.
[root@localhost ~]# yum install vdo kmod-kvdo -y
Updating Subscription Management repositories.
Last metadata expiration check: 0:47:16 ago on Mon 07 Nov 2022 07:31:43 PM.
Package kmod-kvdo-6.2.6.14-84.el8.x86_64 is already installed.
2. Create a VDO volume
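The creation command itself was not captured in this transcript. A sketch consistent with the vdo status output below (the volume name, backing device, and 15 G logical size are taken from that output) would be:

```
# Reconstructed from the status output that follows; not the original command line
vdo create --name=vdo1 --device=/dev/nvme0n3 --vdoLogicalSize=15G
```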
3. Analyze a VDO volume (deduplication and compression)
[root@localhost ~]# vdo status --name vdo1
VDO status:
Date: '2022-11-07 20:26:50-08:00'
Node: localhost.localdomain
Kernel module:
Loaded: true
Name: kvdo
Version information:
kvdo version: 6.2.6.14
Configuration:
File: /etc/vdoconf.yml
Last modified: '2022-11-07 20:21:54'
VDOs:
vdo1:
Acknowledgement threads: 1
Activate: enabled
Bio rotation interval: 64
Bio submission threads: 4
Block map cache size: 128M
Block map period: 16380
Block size: 4096
CPU-work threads: 2
Compression: enabled
Configured write policy: auto
Deduplication: enabled
Device mapper status: 0 31457280 vdo /dev/nvme0n3 normal - online online 786786 1310720
Emulate 512 byte: disabled
Hash zone threads: 1
Index checkpoint frequency: 0
Index memory setting: 0.25
Index parallel factor: 0
Index sparse: disabled
Index status: online
Logical size: 15G
Logical threads: 1
Max discard size: 4K
Physical size: 5G
Physical threads: 1
Slab size: 2G
Storage device: /dev/disk/by-id/nvme-VMware_Virtual_NVMe_Disk_VMware_NVME_0000
UUID: VDO-ee8d5d29-da00-43f9-8019-70794c38b39d
VDO statistics:
/dev/mapper/vdo1:
1K-blocks: 5242880
1K-blocks available: 2095736
1K-blocks used: 3147144
512 byte emulation: false
KVDO module bytes used: 408542976
KVDO module peak bytes used: 408542976
bios acknowledged discard: 0
bios acknowledged flush: 0
bios acknowledged fua: 0
bios acknowledged partial discard: 0
bios acknowledged partial flush: 0
bios acknowledged partial fua: 0
bios acknowledged partial read: 0
bios acknowledged partial write: 0
bios acknowledged read: 261
bios acknowledged write: 0
bios in discard: 0
bios in flush: 0
bios in fua: 0
bios in partial discard: 0
bios in partial flush: 0
bios in partial fua: 0
bios in partial read: 0
bios in partial write: 0
bios in progress discard: 0
bios in progress flush: 0
bios in progress fua: 0
bios in progress read: 0
bios in progress write: 0
bios in read: 261
bios in write: 0
bios journal completed discard: 0
bios journal completed flush: 0
bios journal completed fua: 0
bios journal completed read: 0
bios journal completed write: 0
bios journal discard: 0
bios journal flush: 0
bios journal fua: 0
bios journal read: 0
bios journal write: 0
bios meta completed discard: 0
bios meta completed flush: 0
bios meta completed fua: 0
bios meta completed read: 4
bios meta completed write: 65
bios meta discard: 0
bios meta flush: 1
bios meta fua: 1
bios meta read: 4
bios meta write: 65
bios out completed discard: 0
bios out completed flush: 0
bios out completed fua: 0
bios out completed read: 0
bios out completed write: 0
bios out discard: 0
bios out flush: 0
bios out fua: 0
bios out read: 0
bios out write: 0
bios page cache completed discard: 0
bios page cache completed flush: 0
bios page cache completed fua: 0
bios page cache completed read: 0
bios page cache completed write: 0
bios page cache discard: 0
bios page cache flush: 0
bios page cache fua: 0
bios page cache read: 0
bios page cache write: 0
block map cache pressure: 0
block map cache size: 134217728
block map clean pages: 0
block map dirty pages: 0
block map discard required: 0
block map failed pages: 0
block map failed reads: 0
block map failed writes: 0
block map fetch required: 0
block map flush count: 0
block map found in cache: 0
block map free pages: 32768
block map incoming pages: 0
block map outgoing pages: 0
block map pages loaded: 0
block map pages saved: 0
block map read count: 0
block map read outgoing: 0
block map reclaimed: 0
block map wait for page: 0
block map write count: 0
block size: 4096
completed recovery count: 0
compressed blocks written: 0
compressed fragments in packer: 0
compressed fragments written: 0
concurrent data matches: 0
concurrent hash collisions: 0
current VDO IO requests in progress: 0
current dedupe queries: 0
data blocks used: 0
dedupe advice stale: 0
dedupe advice timeouts: 0
dedupe advice valid: 0
entries indexed: 0
flush out: 0
instance: 0
invalid advice PBN count: 0
journal blocks batching: 0
journal blocks committed: 0
journal blocks started: 0
journal blocks writing: 0
journal blocks written: 0
journal commits requested count: 0
journal disk full count: 0
journal entries batching: 0
journal entries committed: 0
journal entries started: 0
journal entries writing: 0
journal entries written: 0
logical blocks: 3932160
logical blocks used: 0
maximum VDO IO requests in progress: 63
maximum dedupe queries: 0
no space error count: 0
operating mode: normal
overhead blocks used: 786786
physical blocks: 1310720
posts found: 0
posts not found: 0
queries found: 0
queries not found: 0
read only error count: 0
read-only recovery count: 0
recovery progress (%): N/A
reference blocks written: 0
release version: 133524
saving percent: N/A
slab count: 1
slab journal blocked count: 0
slab journal blocks written: 0
slab journal disk full count: 0
slab journal flush count: 0
slab journal tail busy count: 0
slab summary blocks written: 0
slabs opened: 0
slabs reopened: 0
updates found: 0
updates not found: 0
used percent: 60
version: 31
write amplification ratio: 0.0
write policy: sync
4. Create an XFS filesystem on vdo1 and mount it at /d1
[root@localhost ~]# mkfs.xfs -k /dev/mapper/vdo1
mkfs.xfs: invalid option -- 'k'
unknown option -k
Usage: mkfs.xfs
/* blocksize */ [-b size=num]
/* metadata */ [-m crc=0|1,finobt=0|1,uuid=xxx,rmapbt=0|1,reflink=0|1,
inobtcount=0|1,bigtime=0|1]
/* data subvol */ [-d agcount=n,agsize=n,file,name=xxx,size=num,
(sunit=value,swidth=value|su=num,sw=num|noalign),
sectsize=num
/* force overwrite */ [-f]
/* inode size */ [-i log=n|perblock=n|size=num,maxpct=n,attr=0|1|2,
projid32bit=0|1,sparse=0|1]
/* no discard */ [-K]
/* log subvol */ [-l agnum=n,internal,size=num,logdev=xxx,version=n
sunit=value|su=num,sectsize=num,lazy-count=0|1]
/* label */ [-L label (maximum 12 characters)]
/* naming */ [-n size=num,version=2|ci,ftype=0|1]
/* no-op info only */ [-N]
/* prototype file */ [-p fname]
/* quiet */ [-q]
/* realtime subvol */ [-r extsize=num,size=num,rtdev=xxx]
/* sectorsize */ [-s size=num]
/* version */ [-V]
devicename
<devicename> is required unless -d name=xxx is given.
<num> is xxx (bytes), xxxs (sectors), xxxb (fs blocks), xxxk (xxx KiB),
xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<value> is xxx (512 byte blocks).
[root@localhost ~]#
The -K option prevents the unused blocks in the filesystem from being discarded immediately, which makes the command return faster. Note that the option is uppercase: the lowercase -k above was rejected. Formatting with the correct option:
[root@localhost ~]# mkfs.xfs -K /dev/mapper/vdo1
[root@localhost ~]# mkdir /d1
5. View the volume's initial statistics and status with the vdostats command
[root@localhost ~]# vdostats --human-readable
Device Size Used Available Use% Space saving%
/dev/mapper/vdo1 5.0G 3.0G 2.0G 60% N/A
6. Copy a file to the mount point, then check the result
[root@localhost ~]# mount /dev/mapper/vdo1 /d1
[root@localhost ~]# cd /d1
[root@localhost d1]# ll
total 0
[root@localhost d1]# cd
[root@localhost ~]# ll /mnt/images/
total 763861
-r--r--r--. 1 root root  10078208 Jun 28 00:19 efiboot.img
-r--r--r--. 1 root root 772112384 Jun 28 00:09 install.img
dr-xr-xr-x. 2 root root      2048 Jun 28 00:50 pxeboot
-r--r--r--. 1 root root       446 Jun 28 00:50 TRANS.TBL
[root@localhost ~]# cp /mnt/images/install.img /d1
[root@localhost ~]# vdostats --human-readable
Device Size Used Available Use% Space saving%
/dev/mapper/vdo1 5.0G 3.7G 1.3G 74% 1%
7. Copy the same file a second time
[root@localhost ~]# cp /mnt/images/install.img /d1/install.img
cp: overwrite '/d1/install.img'? y
[root@localhost ~]# vdostats --human-readable
Device Size Used Available Use% Space saving%
/dev/mapper/vdo1 5.0G 3.7G 1.3G 74% 45%
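The jump in Space saving% from 1% to 45% is deduplication at work: the second copy of install.img shares nearly all of its blocks with the first, so it consumes almost no new physical space. A hypothetical back-of-the-envelope calculation (the file size is rounded to 737 MiB; the figures are illustrative, not read from vdostats):

```shell
# Two identical copies written by the filesystem; VDO stores the data once.
logical_mib=$((2 * 737))   # MiB the filesystem wrote (two copies of install.img)
physical_mib=737           # MiB actually stored after deduplication
saving=$(( (logical_mib - physical_mib) * 100 / logical_mib ))
echo "${saving}%"          # prints 50%
```

vdostats reports 45% rather than the ideal 50% because filesystem metadata and non-duplicate blocks dilute the ratio.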