Ceph over IPv6

Motivation

As IPv6 adoption keeps growing, more and more network infrastructure has to support it. For Ceph, a distributed storage system designed for large-scale deployment, IPv6 support is a must. This article covers running Ceph over IPv6 and verifying its functionality.

Environment

Test environment: one Ceph cluster plus one extra host for functional verification.

  • Ceph cluster: ceph version 12.2.11 luminous (stable)
  • Client machine: CentOS Linux release 7.5.1804 (Core)
  • Every cluster node has two NICs, both configured with IPv6 addresses (a quick check follows this list)
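A quick way to confirm the IPv6 setup on a node is to list the addresses and ping another node over IPv6 (a sketch; interface names and output will vary):

[cephfsd@ceph1 ~]$ ip -6 addr show                    # both NICs should carry a global IPv6 address
[cephfsd@ceph1 ~]$ ping6 -c 3 2001:470:18:ac4::3      # IPv6 reachability to another cluster node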

Cluster status

For simplicity, the configuration is kept minimal.

[cephfsd@ceph1 ~]$ sudo ceph -s
  cluster:
    id:     db45806c-b322-450d-8f8a-3c07cdcd0b8e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3
    mgr: ceph1(active), standbys: ceph2
    mds: firstcephfs-1/1/1 up  {0=ceph1=up:active}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 3 daemons active

  data:
    pools:   8 pools, 176 pgs
    objects: 2.11k objects, 18.5MiB
    usage:   3.41GiB used, 20.6GiB / 24.0GiB avail
    pgs:     176 active+clean

The cluster was deployed with ceph-deploy, so most of the configuration file was generated automatically. If you deploy by hand, pay particular attention to the ms_bind_ipv6 and rgw_frontends settings:

[global]
fsid = db45806c-b322-450d-8f8a-3c07cdcd0b8e
ms_bind_ipv6 = true
mon_initial_members = ceph1, ceph2, ceph3
mon_host = [2001:470:18:ac4::2],[2001:470:18:ac4::3],[2001:470:18:ac4::4]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public_network = 2001:470:18:ac4::2/64
cluster_network = 2002:470:18:ac4::2/64

rgw_frontends = "civetweb port=[::]:7480"
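With ms_bind_ipv6 = true the monitors and the other daemons should end up listening on IPv6 sockets. A quick sanity check on one of the nodes (a sketch; PIDs and exact output will differ):

[cephfsd@ceph1 ~]$ sudo ss -6 -tlnp | grep ceph-mon   # mon bound to [2001:470:18:ac4::2]:6789
[cephfsd@ceph1 ~]$ sudo ss -6 -tlnp | grep 7480       # civetweb (rgw) listening on [::]:7480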

Client-side functional verification

In practice, as long as the cluster is up and healthy, the features themselves work; a few details are worth recording.

Object storage

Testing with the Python boto package. Note how the endpoint is specified: without the [] around the IPv6 address it will fail.

#!/usr/bin/env python
import boto
import time
import boto.s3.connection

access_key = 'aaaaaa'
secret_key = 'bbbbbbbbb'
endpoint = '[2001:470:18:ac4::2]'
conn = boto.connect_s3(aws_access_key_id = access_key,
                       aws_secret_access_key = secret_key,
                       host = endpoint,
                       port = 7480,
                       is_secure=False,
                       calling_format = boto.s3.connection.OrdinaryCallingFormat())
conn.create_bucket('my-bucket3')
my_bucket = conn.get_bucket('my-bucket3')
newobj = my_bucket.new_key(str(time.time())+'.rst')
newobj.set_contents_from_filename('README.rst')
conn.close()

Likewise, when pointing s3cmd at an IPv6 endpoint, the IPv6 address has to be wrapped in [], otherwise it errors out.

[tanweijie@openattic ~]$ cat  .s3cfg |grep 'host_'
host_base = [2001:470:18:ac4::2]:7480
host_bucket = [2001:470:18:ac4::2]:7480/%(bucket)
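With that in place, the usual s3cmd operations work against the IPv6 endpoint (bucket and file names below are just examples):

[tanweijie@openattic ~]$ s3cmd mb s3://my-bucket4
[tanweijie@openattic ~]$ s3cmd put README.rst s3://my-bucket4/
[tanweijie@openattic ~]$ s3cmd ls s3://my-bucket4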

CephFS

CephFS needed one tweak first: the chooseleaf_vary_r and chooseleaf_stable tunables had to be disabled in the crushmap before the test machine could mount the filesystem. After that, writing and reading files both work; the only thing to watch is that the IPv6 address in the mount source still has to be wrapped in [].
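The notes do not record the exact commands used to change the tunables; one way is to decompile the crushmap, edit the tunable lines, and inject it back (a sketch, run from an admin node):

[cephfsd@ceph1 ~]$ sudo ceph osd getcrushmap -o crushmap.bin
[cephfsd@ceph1 ~]$ crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt set:
#   tunable chooseleaf_vary_r 0
#   tunable chooseleaf_stable 0
[cephfsd@ceph1 ~]$ crushtool -c crushmap.txt -o crushmap-new.bin
[cephfsd@ceph1 ~]$ sudo ceph osd setcrushmap -i crushmap-new.bin

With the tunables adjusted, the mount itself: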

[root@openattic tanweijie]# mount -t ceph [2001:470:18:ac4::2]:6789:/ /media/ -o name=admin,secret=AQAq+3Rc******
[root@openattic tanweijie]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/centos-root       17G  2.3G   15G  14% /
devtmpfs                     484M     0  484M   0% /dev
tmpfs                        496M     0  496M   0% /dev/shm
tmpfs                        496M  6.8M  490M   2% /run
tmpfs                        496M     0  496M   0% /sys/fs/cgroup
/dev/sda1                   1014M  130M  885M  13% /boot
tmpfs                        100M     0  100M   0% /run/user/0
tmpfs                        100M     0  100M   0% /run/user/1001
[2001:470:18:ac4::2]:6789:/  6.6G     0  6.6G   0% /media
[root@openattic tanweijie]# cp -r openattic /media/
[root@openattic tanweijie]# cd /media
[root@openattic media]# ls
openattic
[root@openattic media]# rm -rf openattic
[root@openattic media]# ls
[root@openattic media]#

RBD

RBD is verified by exposing it through an iSCSI gateway, built here from the components of the ceph-iscsi project. The process is full of pitfalls.

For simplicity, HA mode is not used and the iSCSI gateway runs on a single node. First install the necessary dependencies and packages on that node:

#!/bin/bash

yums='libnl3 
libnl3-devel 
kmod 
kmod-devel 
librbd1
pyparsing
python-kmod
python-pyudev
python-gobject
python-urwid
python-pyparsing
python-rados
python-rbd
python-netaddr
python-netifaces
python-crypto
python-requests
python-flask
pyOpenSSL
git
gcc
cmake
rpm-build'

for x in $yums;
do
    sudo yum install -y $x
done

git clone https://github.com/open-iscsi/tcmu-runner
git clone https://github.com/open-iscsi/rtslib-fb.git
git clone https://github.com/open-iscsi/targetcli-fb.git
git clone https://github.com/ceph/ceph-iscsi-config.git
git clone https://github.com/ceph/ceph-iscsi-cli.git

Note that five git projects were cloned; install them one by one.

cd tcmu-runner
sudo ./extra/install_dep.sh
cd ./extra && ./make_runnerrpms.sh   # optional
cd ..
cmake -Dwith-glfs=false -Dwith-qcow=false -DSUPPORT_SYSTEMD=ON -DCMAKE_INSTALL_PREFIX=/usr
make
make install
systemctl daemon-reload
systemctl enable tcmu-runner
systemctl start tcmu-runner
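# sanity check (not in the original notes): make sure the handler daemon is actually running
systemctl status tcmu-runner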

cd rtslib-fb
python setup.py install

cd targetcli-fb
python setup.py install
mkdir /etc/target
mkdir /var/target

cd ceph-iscsi-config
python setup.py install --install-scripts=/usr/bin
cp usr/lib/systemd/system/rbd-target-gw.service /lib/systemd/system

systemctl daemon-reload
systemctl enable rbd-target-gw
systemctl start rbd-target-gw

cd ceph-iscsi-cli
python setup.py install --install-scripts=/usr/bin
cp usr/lib/systemd/system/rbd-target-api.service /lib/systemd/system
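
# the API service presumably needs the same treatment as rbd-target-gw (assumption, not in the original notes)
systemctl daemon-reload
systemctl enable rbd-target-api
systemctl start rbd-target-api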

Once the services are up, move on to configuration. First create a pool for RBD:

ceph osd pool create rbd 64 64
rbd pool init rbd
rbd create rbd/mydisk0 --size 10G
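It doesn't hurt to confirm the image is there before wiring it into the gateway (a sketch):

rbd ls rbd
rbd info rbd/mydisk0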

Then configure the target with targetcli:

targetcli
cd /backstores/user:rbd
create cfgstring=rbd/mydisk0 name=disk0 size=10G
cd /iscsi
create iqn.2018-03.com.redhat.iscsi-gw:iscsi
cd iqn.2018-03.com.redhat.iscsi-gw:iscsi/tpg1/portals
create 2001:470:18:ac4::4 3260
cd ..
luns/ create /backstores/user:rbd/disk0
acls/ create iqn.1994-05.com.redhat:rh7-client
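Before moving on it is worth listing the resulting configuration to confirm the disk, the IPv6 portal and the ACL are all in place (a sketch, run from the shell):

targetcli ls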

The server side is now ready; next prepare the client. One detail: the InitiatorName configured on the client below has to match the ACL created in targetcli above, so adjust one side or the other if they differ.

yum install iscsi-initiator-utils device-mapper-multipath

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2018-03.com.redhat.iscsi-gw:client

cat /etc/iscsi/iscsid.conf   # authentication can be left disabled

iscsiadm -m discovery -t st -p 2001:470:18:ac4::4
iscsiadm -m node -T iqn.2018-03.com.redhat.iscsi-gw:iscsi -p 2001:470:18:ac4::4 -l

Once the login succeeds, the new disk shows up locally; check with lsblk -l.
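From there the disk can be used like any local block device; a minimal sketch (the device name /dev/sdb is hypothetical, take the real one from the lsblk output):

lsblk -l
mkfs.xfs /dev/sdb                 # hypothetical device exposed by the iSCSI login
mkdir -p /mnt/rbd-iscsi
mount /dev/sdb /mnt/rbd-iscsi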

If you are interested in Ceph, follow the WeChat official account "奋斗的cepher" for more articles.
