Summary
This article describes how to set up an NVMe over TCP environment between two FEMU instances.
Background
This article will not go into depth on what NVMe over TCP is; in short, it uses the TCP protocol to deliver NVMe commands to a remote NVMe device.
In an NVMe over TCP setup, the side that sends NVMe commands is called the host, and the side that receives them is called the target.
By default, FEMU starts with the host machine's IP address. For the two FEMU instances to communicate with each other, each needs an independent IP; this can be achieved by creating a virtual bridge and virtual NICs (tap devices) on the host machine.
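As a rough sketch, the bridge and tap devices mentioned above could be created on the host machine as follows. The names br0, tap0/tap1 and the 192.168.123.0/24 subnet are assumptions for illustration, not prescribed by FEMU; adjust them to your environment.

```shell
# Create a bridge and two tap devices so each FEMU VM can get its own IP.
# br0, tap0/tap1, and the subnet are illustrative names.
sudo ip link add br0 type bridge
sudo ip addr add 192.168.123.1/24 dev br0
sudo ip link set br0 up

for t in tap0 tap1; do
    sudo ip tuntap add dev "$t" mode tap
    sudo ip link set "$t" master br0
    sudo ip link set "$t" up
done
```

Each FEMU/QEMU instance would then be started with a `-netdev tap,...` option pointing at one of the tap devices, and given a static address on that subnet (e.g. 192.168.123.3 and 192.168.123.4) inside the guest.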
1. Check the kernel configuration
Before starting, make sure that kernel options such as NVME_TCP and NVME_TARGET are set:
$ cat /boot/config-`uname -r` | grep NVME
# NVME Support
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
# CONFIG_NVME_MULTIPATH is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
CONFIG_NVME_FC=m
CONFIG_NVME_TCP=m
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_FC=m
# CONFIG_NVME_TARGET_FCLOOP is not set
CONFIG_NVME_TARGET_TCP=m
# end of NVME Support
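Instead of eyeballing the grep output, the check can be scripted. The following is a small sketch; the list of options covers only the ones this guide relies on (TCP host and target support).

```shell
#!/bin/sh
# check_nvme_config FILE -- verify that the kernel options this guide relies
# on are built in (=y) or available as modules (=m) in the given config file.
check_nvme_config() {
    for opt in CONFIG_NVME_TCP CONFIG_NVME_FABRICS \
               CONFIG_NVME_TARGET CONFIG_NVME_TARGET_TCP; do
        # The "=" in the pattern keeps CONFIG_NVME_TARGET from also
        # matching CONFIG_NVME_TARGET_TCP.
        if grep -Eq "^${opt}=[ym]" "$1"; then
            echo "$opt: ok"
        else
            echo "$opt: MISSING"
        fi
    done
}

# Typical invocation against the running kernel's config:
# check_nvme_config "/boot/config-$(uname -r)"
```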
2. Install nvme-cli on the host and target
Install nvme-cli on both the host and the target:
sudo apt-get install nvme-cli
3. Target-side configuration
- First, load the nvmet and nvmet-tcp modules on the target:
sudo modprobe nvmet
sudo modprobe nvmet-tcp
- Then create an NVMe target subsystem and namespace with the following commands:
cd /sys/kernel/config/nvmet/subsystems
sudo mkdir nvme-test-target
cd nvme-test-target/
echo 1 | sudo tee -a attr_allow_any_host > /dev/null
sudo mkdir namespaces/1
cd namespaces/1
- Find the NVMe device name on the target side; here it is /dev/nvme0n1:
femu@fvm:~$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 vSSD0 FEMU BlackBox-SSD Controller 1 4.29 GB / 4.29 GB 512 B + 0 B 1.0
- Associate the NVMe device /dev/nvme0n1 with the subsystem just created:
echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
echo 1 | sudo tee -a enable > /dev/null
- Create a port and configure its IP and other parameters:
sudo mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo 192.168.123.3 | sudo tee -a addr_traddr > /dev/null
echo tcp | sudo tee -a addr_trtype > /dev/null
echo 4420 | sudo tee -a addr_trsvcid > /dev/null
echo ipv4 | sudo tee -a addr_adrfam > /dev/null
- Finally, bind the port to the subsystem:
sudo ln -s /sys/kernel/config/nvmet/subsystems/nvme-test-target/ /sys/kernel/config/nvmet/ports/1/subsystems/nvme-test-target
- In dmesg you should see that the nvmet_tcp port has been enabled:
femu@fvm:/sys/kernel/config/nvmet/ports/1$ dmesg |grep "nvmet_tcp"
[ 1487.076439] nvmet_tcp: enabling port 1 (192.168.123.3:4420)
This concludes the target-side configuration.
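The target-side steps above can be collected into a single sketch. The subsystem name, device path, and IP mirror this guide and are assumptions to adjust; on a real target this must run as root against /sys/kernel/config/nvmet, where configfs pre-creates the attribute files.

```shell
#!/bin/sh
# setup_nvmet_target ROOT -- consolidated sketch of the target-side steps:
# create the subsystem and namespace, attach the backing device, configure
# the TCP port, and bind the port to the subsystem.
setup_nvmet_target() {
    root="$1"
    subsys="nvme-test-target"

    mkdir -p "$root/subsystems/$subsys/namespaces/1"
    echo 1 > "$root/subsystems/$subsys/attr_allow_any_host"
    printf '%s' /dev/nvme0n1 > "$root/subsystems/$subsys/namespaces/1/device_path"
    echo 1 > "$root/subsystems/$subsys/namespaces/1/enable"

    mkdir -p "$root/ports/1/subsystems"
    echo 192.168.123.3 > "$root/ports/1/addr_traddr"
    echo tcp  > "$root/ports/1/addr_trtype"
    echo 4420 > "$root/ports/1/addr_trsvcid"
    echo ipv4 > "$root/ports/1/addr_adrfam"

    # Bind the port to the subsystem (the ln -s step above).
    ln -s "$root/subsystems/$subsys" "$root/ports/1/subsystems/$subsys"
}

# On the target (as root):
# setup_nvmet_target /sys/kernel/config/nvmet
```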
4. Host-side configuration
- Load the nvme and nvme-tcp modules:
sudo modprobe nvme
sudo modprobe nvme-tcp
- On the host, run nvme discover to check whether the target can be found:
kjay@kjay:~$ sudo nvme discover -t tcp -a 192.168.123.3 -s 4420
[sudo] password for kjay:
Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype: tcp
adrfam: ipv4
subtype: nvme subsystem
treq: not specified, sq flow control disable supported
portid: 1
trsvcid: 4420
subnqn: nvme-test-target
traddr: 192.168.123.3
sectype: none
- Connect to the NVMe subsystem:
sudo nvme connect -t tcp -n nvme-test-target -a 192.168.123.3 -s 4420 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1b4e28ba-2fa1-11d2-883f-0016d3ccabcd
- The remote namespace now shows up as /dev/nvme1n1:
kjay@kjay:~$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 BTPY72440ASR128A INTEL SSDPEKKR128G7 1 128.04 GB / 128.04 GB 512 B + 0 B PSF109E
/dev/nvme1n1 713dcb7e1fef4c5d Linux 1 4.29 GB / 4.29 GB 512 B + 0 B 5.4.0-77
At this point nvme-cli can be used to issue NVMe commands to /dev/nvme1n1.
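A few examples of such commands are sketched below; /dev/nvme1n1 is the device name from the listing above and may differ on your system.

```shell
# Each of these travels over the NVMe/TCP connection to the remote target.
sudo nvme id-ctrl /dev/nvme1n1     # identify-controller data of the target
sudo nvme smart-log /dev/nvme1n1   # SMART / health information
# Read one logical block (LBA 0) from the remote namespace:
sudo nvme read /dev/nvme1n1 --start-block=0 --block-count=0 --data-size=512
```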
5. Disconnecting the host from the target
sudo nvme disconnect /dev/nvme1n1 -n nvme-test-target
6. Unloading the nvmet module on the target
This problem bothered me for quite a while, and I finally solved it together with a senior classmate. I won't go into the details here; if you are interested, look into the difference between rm -r and rmdir.
Use the following commands:
sudo rm -rf /sys/kernel/config/nvmet/ports/1/subsystems/nvme-test-target
echo 0 | sudo tee /sys/kernel/config/nvmet/subsystems/nvme-test-target/namespaces/1/enable > /dev/null
echo -n 0 | sudo tee /sys/kernel/config/nvmet/subsystems/nvme-test-target/namespaces/1/device_path > /dev/null
sudo rmdir --ignore-fail-on-non-empty /sys/kernel/config/nvmet/subsystems/nvme-test-target/namespaces/1
sudo rmdir --ignore-fail-on-non-empty /sys/kernel/config/nvmet/subsystems/nvme-test-target
sudo rmdir --ignore-fail-on-non-empty /sys/kernel/config/nvmet/ports/1
sudo rmmod nvmet-tcp
sudo rmmod nvmet
Note that rmdir, not rm -rf, is used to remove the directories created earlier; the port-to-subsystem symlink is the only entry removed with rm.