Ceph Deployment Manual

This manual covers deploying, operating, and using Ceph in detail.
Deployment: Ceph resource planning, component installation & configuration, and status checks, resulting in a high-performance, highly reliable, multi-purpose storage cluster;
Operations: scaling out, decommissioning nodes, common problems and failures, troubleshooting, and so on;
Usage: detailed walkthroughs of the block device, object storage, and file system interfaces, as well as using Ceph as Kubernetes persistent storage (PV, PVC, StorageClass);

I. Deployment

This part explains the steps for deploying a luminous-release Ceph cluster with the ceph-deploy tool.

The host plan is as follows:

IP             Hostname    Roles
172.27.132.65  kube-node1  mgr, mon, osd
172.27.132.66  kube-node2  osd
172.27.132.67  kube-node3  mds, osd

1. Node Initialization

Node initialization
Configure the package repository
sudo yum install -y epel-release
cat << EOM | sudo tee /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
Install required packages
sudo yum install -y ntp ntpdate ntp-doc openssh-server
Create and configure the ceph account
Create a dedicated account for running Ceph on every Ceph node:

sudo useradd -d /home/ceph -m ceph
sudo passwd ceph # set the password to ceph here
Grant the ceph user sudo privileges:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
Configure host aliases
Set /etc/hosts on all nodes so that each Ceph node can reach the others by hostname. Note that a node's own hostname must not resolve to 127.0.0.1:

$ grep node /etc/hosts
172.27.132.65 kube-node1 kube-node1
172.27.132.66 kube-node2 kube-node2
172.27.132.67 kube-node3 kube-node3
Disable SELinux
Disable SELinux; otherwise Kubernetes may later fail to mount directories with Permission denied:

$ sudo setenforce 0
$ grep SELINUX /etc/selinux/config
SELINUX=disabled
Also edit the config file so the change persists across reboots (see the example below);
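For example, the permanent change can be made with sed (a minimal sketch; it assumes the stock /etc/selinux/config still reads SELINUX=enforcing):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config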
Other
Disable requiretty: edit /etc/sudoers and comment out Defaults requiretty, or set it to: Defaults:ceph !requiretty (see the example below)
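One way to apply the second form without editing /etc/sudoers directly is to append it to the drop-in file created earlier (a sketch, assuming /etc/sudoers.d/ceph from the previous step exists):

echo 'Defaults:ceph !requiretty' | sudo tee -a /etc/sudoers.d/ceph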

Initialize the Ceph deploy node
According to the plan, 172.27.132.65 (kube-node1) will serve as the deploy node.

Configure the ceph account on kube-node1 for passwordless SSH login to all nodes (including itself):

su -l ceph
ssh-keygen -t rsa
ssh-copy-id ceph@kube-node1
ssh-copy-id ceph@kube-node2
ssh-copy-id ceph@kube-node3
Configure kube-node1's ceph account to log in to the other nodes as the ceph user by default:

cat >>/home/ceph/.ssh/config <<EOF
Host kube-node1
Hostname kube-node1
User ceph
Host kube-node2
Hostname kube-node2
User ceph
Host kube-node3
Hostname kube-node3
User ceph
EOF
chmod 600 ~/.ssh/config
Install the ceph-deploy tool:

sudo yum update
sudo yum install ceph-deploy

2. Deploying the Monitor Node

Create the Ceph cluster and deploy the monitor node
Unless stated otherwise, all operations in this document are performed on the deploy node as the ceph user, in its home directory (/home/ceph).

Create the deploy working directory, which holds the files generated during installation:

su -l ceph
mkdir my-cluster
cd my-cluster
Create the Ceph cluster
Create a cluster named ceph:

[ceph@kube-node1 my-cluster]$ ceph-deploy new kube-node1 # the argument is the initial monitor node (this only generates ceph.conf and ceph.mon.keyring in the current directory)
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy new kube-node1

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[kube-node1][DEBUG ] connection detected need for sudo
[kube-node1][DEBUG ] connected to host: kube-node1
[kube-node1][DEBUG ] detect platform information from remote host
[kube-node1][DEBUG ] detect machine type
[kube-node1][DEBUG ] find the location of an executable
[kube-node1][INFO ] Running command: sudo /usr/sbin/ip link show
[kube-node1][INFO ] Running command: sudo /usr/sbin/ip addr show
[kube-node1][DEBUG ] IP addresses found: [u’172.30.53.0’, u’172.30.53.1’, u’172.27.132.65’]
[ceph_deploy.new][DEBUG ] Resolving host kube-node1
[ceph_deploy.new][DEBUG ] Monitor kube-node1 at 172.27.132.65
[ceph_deploy.new][DEBUG ] Monitor initial members are [‘kube-node1’]
[ceph_deploy.new][DEBUG ] Monitor addrs are [‘172.27.132.65’]
[ceph_deploy.new][DEBUG ] Creating a random mon key…
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring…
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf…
When the command finishes, the cluster configuration file ceph.conf, a log file, and the ceph.mon.keyring file used to bootstrap the monitor have been generated in the current working directory:

[ceph@kube-node1 my-cluster]$ ls .
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
Modify the default settings in ceph.conf; the final result looks like this:

[ceph@kube-node1 my-cluster]$ cat ceph.conf

[global]
fsid = 0dca8efc-5444-4fa0-88a8-2c0751b47d28

# Initial monitor node(s)
mon_initial_members = kube-node1
mon_host = 172.27.132.65

# cephx authentication and authorization
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# Number of replicas; should be <= the number of OSDs
osd pool default size = 3

# Minimum number of replicas
osd pool default min size = 1

# Default PG and PGP counts
osd pool default pg num = 128
osd pool default pgp num = 128

# Enable only the layering feature, which the CentOS kernel supports
rbd_default_features = 1

osd crush chooseleaf type = 1
max mds = 5
mds max file size = 100000000000000
mds cache size = 1000000

# Filesystem tuning
osd_mkfs_type = xfs
osd_mount_options_xfs = rw,noatime,inode64,logbsize=256k,delaylog
osd_mkfs_options_xfs = -f -i size=2048

# Journal tuning
journal_max_write_entries = 1000
journal_queue_max_ops = 3000
journal_max_write_bytes = 1048576000
journal_queue_max_bytes = 1048576000

# Op tracker
osd_enable_op_tracker = false

# OSD client
osd_client_message_size_cap = 0
osd_client_message_cap = 0

# Objecter
objecter_inflight_ops = 102400
objecter_inflight_op_bytes = 1048576000

# Throttles
ms_dispatch_throttle_bytes = 1048576000

# OSD threads
osd_op_threads = 32
osd_op_num_shards = 5
osd_op_num_threads_per_shard = 2

# Network settings for hosts with multiple NICs
public network = 10.0.0.0/8   # traffic between Ceph clients and the cluster
cluster network = 10.0.0.0/8  # replication and heartbeat traffic between OSDs
Install the Ceph packages (ceph and ceph-radosgw) on all nodes:

The --release option pins the installed version to luminous (the default is jewel when omitted):

[ceph@kube-node1 my-cluster]$ ceph-deploy install --release luminous kube-node1 kube-node2 kube-node3
Deploy the monitor node
Initialize the initial monitor node specified by the earlier ceph-deploy new kube-node1 command:

[ceph@kube-node1 my-cluster]$ ceph-deploy mon create-initial # create-initial/stat/remove
Output:

[kube-node1][INFO ] monitor: mon.kube-node1 is running
[kube-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.kube-node1.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.kube-node1

[kube-node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.kube-node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.kube-node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys…

[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.kube-node1.asok mon_status
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get client.admin
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get client.bootstrap-mds
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get client.bootstrap-mgr
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get client.bootstrap-osd
[kube-node1][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-kube-node1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring ‘ceph.mon.keyring’ already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpw0Skr7
When the command completes, it generates the keyring files {cluster-name}.bootstrap-{type}.keyring in the current directory, used to bootstrap mds, osd, and rgw daemons, and it also creates the client.admin user and its keyring. These keyrings are stored in the cluster as well, for use when the corresponding daemons are deployed later (a quick check follows the listings below):

[ceph@kube-node1 my-cluster]$ ls -l
total 476
-rw------- 1 ceph ceph 113 Jul 5 16:08 ceph.bootstrap-mds.keyring
-rw------- 1 ceph ceph 71 Jul 5 16:08 ceph.bootstrap-mgr.keyring
-rw------- 1 ceph ceph 113 Jul 5 16:08 ceph.bootstrap-osd.keyring
-rw------- 1 ceph ceph 113 Jul 5 16:08 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph ceph 129 Jul 5 16:08 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph ceph 201 Jul 5 14:15 ceph.conf
-rw-rw-r-- 1 ceph ceph 456148 Jul 5 16:08 ceph-deploy-ceph.log
-rw------- 1 ceph ceph 73 Jul 5 14:15 ceph.mon.keyring

[ceph@kube-node1 my-cluster]$ ls -l /var/lib/ceph/
total 0
drwxr-x--- 2 ceph ceph 26 Apr 24 00:59 bootstrap-mds
drwxr-x--- 2 ceph ceph 26 Apr 24 00:59 bootstrap-mgr
drwxr-x--- 2 ceph ceph 26 Apr 24 00:59 bootstrap-osd
drwxr-x--- 2 ceph ceph 6 Apr 24 00:59 bootstrap-rbd
drwxr-x--- 2 ceph ceph 26 Apr 24 00:59 bootstrap-rgw
drwxr-x--- 2 ceph ceph 6 Apr 24 00:59 mds
drwxr-x--- 3 ceph ceph 29 Apr 24 00:59 mgr
drwxr-x--- 3 ceph ceph 29 Apr 24 00:59 mon
drwxr-x--- 2 ceph ceph 6 Apr 24 00:59 osd
drwxr-xr-x 2 root root 6 Apr 24 00:59 radosgw
drwxr-x--- 2 ceph ceph 6 Apr 24 00:59 tmp

[ceph@kube-node1 my-cluster]$ ls /var/lib/ceph/*/
/var/lib/ceph/bootstrap-mds/ceph.keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring

/var/lib/ceph/mon/ceph-kube-node1:
done keyring kv_backend store.db systemd
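Once the admin keyring has been pushed out in the next step, the same entities can be confirmed from the cluster side (a hedged check, not part of the original procedure):

[ceph@kube-node1 my-cluster]$ sudo ceph auth get client.admin
[ceph@kube-node1 my-cluster]$ sudo ceph auth get client.bootstrap-osd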
Push the admin keyring and the ceph.conf cluster configuration file to all nodes (into /etc/ceph/), so that subsequent ceph commands do not need to specify the monitor address or the path to ceph.client.admin.keyring:

[ceph@kube-node1 my-cluster]$ ceph-deploy admin kube-node1 kube-node2 kube-node3
Deploy the manager node (only luminous and later require a manager node)
[ceph@kube-node1 my-cluster]$ ceph-deploy mgr create kube-node1
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf

[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts kube-node1:kube-node1
[kube-node1][DEBUG ] connection detected need for sudo
[kube-node1][DEBUG ] connected to host: kube-node1
[kube-node1][DEBUG ] detect platform information from remote host
[kube-node1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to kube-node1
[kube-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[kube-node1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.kube-node1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-kube-node1/keyring
[kube-node1][INFO ] Running command: sudo systemctl enable ceph-mgr@kube-node1
[kube-node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@kube-node1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[kube-node1][INFO ] Running command: sudo systemctl start ceph-mgr@kube-node1
[kube-node1][INFO ] Running command: sudo systemctl enable ceph.target
Check the cluster status
Switch to the monitor node kube-node1 and adjust the keyring file permissions so that non-root users can read it:

$ ssh ceph@kube-node1

[ceph@kube-node1 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmp018nTi

[ceph@kube-node1 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw------- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node1 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node1 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring
Check the current cluster status:

[ceph@kube-node1 my-cluster]$ ceph -s
cluster:
id: b7b9e370-ea9b-4cc0-8b09-17167c876c24
health: HEALTH_ERR
64 pgs are stuck inactive for more than 60 seconds
64 pgs stuck inactive
64 pgs stuck unclean
no osds

services:
mon: 1 daemons, quorum kube-node1
mgr: kube-node1(active)
osd: 0 osds: 0 up, 0 in

data:
pools: 1 pools, 64 pgs
objects: 0 objects, 0 bytes
usage: 0 kB used, 0 kB / 0 kB avail
pgs: 100.000% pgs not active
64 creating
View the monitor node information:

[ceph@kube-node1 my-cluster]$ ceph mon dump
dumped monmap epoch 2
epoch 2
fsid b7b9e370-ea9b-4cc0-8b09-17167c876c24
last_changed 2018-07-05 16:34:09.194222
created 2018-07-05 16:07:57.975307
0: 172.27.132.65:6789/0 mon.kube-node1
Scale out the monitor nodes
Install the luminous Ceph packages on the new node:

[ceph@kube-node1 my-cluster]$ ceph-deploy install --release luminous kube-node4
Create the new mon node:

[ceph@kube-node1 my-cluster]$ ceph-deploy mon create kube-node4
Edit ceph.conf to add the new kube-node4 node:

[ceph@kube-node1 my-cluster]$ cat ceph.conf

mon_initial_members = kube-node1,kube-node4
mon_host = 172.27.132.65,172.27.132.68

Push the updated ceph.conf to all nodes:

[ceph@kube-node1 my-cluster]$ ceph-deploy config push kube-node1 kube-node2 kube-node3 kube-node4
Log in to every monitor node and restart the ceph-mon service:

[ceph@kube-node1 my-cluster]$ sudo systemctl restart 'ceph-mon@kube-node1'
Remove a monitor node
Stop the monitor service:

[ceph@kube-node4 ~]$ sudo systemctl stop 'ceph-mon@kube-node4'
Remove the monitor kube-node4 from the cluster:

[ceph@kube-node4 ~]$ ceph mon remove kube-node4
Edit ceph.conf and remove the monitor entry for kube-node4:

$ cat ceph.conf

mon_initial_members = kube-node1
mon_host = 172.27.132.65

Push the updated ceph.conf to all nodes:

[ceph@kube-node1 my-cluster]$ ceph-deploy config push kube-node1 kube-node2 kube-node3 kube-node4
Log in to every monitor node and restart the ceph-mon service:

[ceph@kube-node1 my-cluster]$ sudo systemctl restart 'ceph-mon@kube-node1'

3. Deploying the OSD Nodes

Deploy the OSD nodes
Install the Ceph packages (ceph and ceph-radosgw); --release can pin the version:

[ceph@kube-node1 my-cluster]$ ceph-deploy install --release luminous kube-node1 kube-node2 kube-node3
[ceph@kube-node1 my-cluster]$ ceph-deploy config push kube-node1 kube-node2 kube-node3
Prepare the OSD data disks
An OSD needs a whole disk or a partition to store its data. If the machine already has the data disk mounted, unmount it first:

[ceph@kube-node1 my-cluster]$ df -h |grep /mnt
/dev/vda3 923G 33M 923G 1% /mnt/disk01

[ceph@kube-node1 my-cluster]$ sudo umount /dev/vda3 # unmount the data partition
Check and unmount the data disk partitions on all OSD nodes; if a partition previously held data, it can also be wiped (see the example below);
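A hedged example of wiping a previously used partition before giving it to an OSD (ceph-deploy disk zap wraps ceph-volume lvm zap in ceph-deploy 2.0; it destroys all data on the device, so double-check the device name):

[ceph@kube-node1 my-cluster]$ ceph-deploy disk zap kube-node1 /dev/vda3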
Deploy the OSD nodes
[ceph@kube-node1 my-cluster]$ ceph-deploy osd create --data /dev/vda3 kube-node1 kube-node2 kube-node3 # --data specifies the data partition
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf

[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vda3
[kube-node2][DEBUG ] connection detected need for sudo
[kube-node2][DEBUG ] connected to host: kube-node2
[kube-node2][DEBUG ] detect platform information from remote host
[kube-node2][DEBUG ] detect machine type
[kube-node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to kube-node2
[kube-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[kube-node2][WARNIN] osd keyring does not exist yet, creating one
[kube-node2][DEBUG ] create a keyring file
[kube-node2][DEBUG ] find the location of an executable
[kube-node2][INFO ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vda3
[kube-node2][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[kube-node2][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 149b2781-077f-4146-82f3-4d8061d24043
[kube-node2][DEBUG ] Running command: vgcreate --force --yes ceph-4f6edf77-637a-4752-888e-df74d102cd4e /dev/vda3
[kube-node2][DEBUG ] stdout: Wiping xfs signature on /dev/vda3.
[kube-node2][DEBUG ] stdout: Physical volume “/dev/vda3” successfully created.
[kube-node2][DEBUG ] stdout: Volume group “ceph-4f6edf77-637a-4752-888e-df74d102cd4e” successfully created
[kube-node2][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-149b2781-077f-4146-82f3-4d8061d24043 ceph-4f6edf77-637a-4752-888e-df74d102cd4e
[kube-node2][DEBUG ] stdout: Logical volume “osd-block-149b2781-077f-4146-82f3-4d8061d24043” created.
[kube-node2][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[kube-node2][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[kube-node2][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-0
[kube-node2][DEBUG ] Running command: ln -s /dev/ceph-4f6edf77-637a-4752-888e-df74d102cd4e/osd-block-149b2781-077f-4146-82f3-4d8061d24043 /var/lib/ceph/osd/ceph-1/block
[kube-node2][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[kube-node2][DEBUG ] stderr: got monmap epoch 2
[kube-node2][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQD24T1btVJ+MhAAu2BxGrSrv5uhSMvRIzGf3A==
[kube-node2][DEBUG ] stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[kube-node2][DEBUG ] stdout: added entity osd.1 auth auth(auid = 18446744073709551615 key=AQD24T1btVJ+MhAAu2BxGrSrv5uhSMvRIzGf3A== with 0 caps)
[kube-node2][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[kube-node2][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[kube-node2][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 149b2781-077f-4146-82f3-4d8061d24043 --setuser ceph --setgroup ceph
[kube-node2][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/vda3
[kube-node2][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-4f6edf77-637a-4752-888e-df74d102cd4e/osd-block-149b2781-077f-4146-82f3-4d8061d24043 --path /var/lib/ceph/osd/ceph-1
[kube-node2][DEBUG ] Running command: ln -snf /dev/ceph-4f6edf77-637a-4752-888e-df74d102cd4e/osd-block-149b2781-077f-4146-82f3-4d8061d24043 /var/lib/ceph/osd/ceph-1/block
[kube-node2][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-0
[kube-node2][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[kube-node2][DEBUG ] Running command: systemctl enable ceph-volume@lvm-1-149b2781-077f-4146-82f3-4d8061d24043
[kube-node2][DEBUG ] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-149b2781-077f-4146-82f3-4d8061d24043.service to /usr/lib/systemd/system/ceph-volume@.service.
[kube-node2][DEBUG ] Running command: systemctl start ceph-osd@1
[kube-node2][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 1
[kube-node2][DEBUG ] --> ceph-volume lvm create successful for: /dev/vda3
[kube-node2][INFO ] checking OSD status…
[kube-node2][DEBUG ] find the location of an executable
[kube-node2][INFO ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host kube-node2 is now ready for osd use.

It invokes ceph-volume --cluster ceph lvm create --bluestore --data /dev/vda3 to create the LVM VG and LV;
it then starts the OSD service.
List the OSDs
[ceph@kube-node1 my-cluster]$ ceph-deploy osd list kube-node1
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy osd list kube-node1
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1b37400ab8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : [‘kube-node1’]
[ceph_deploy.cli][INFO ] func : <function osd at 0x7f1b37438230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[kube-node1][DEBUG ] connected to host: kube-node1
[kube-node1][DEBUG ] detect platform information from remote host
[kube-node1][DEBUG ] detect machine type
[kube-node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on kube-node1…
[kube-node1][DEBUG ] find the location of an executable
[kube-node1][INFO ] Running command: /usr/sbin/ceph-volume lvm list
[kube-node1][DEBUG ]
[kube-node1][DEBUG ]
[kube-node1][DEBUG ] ====== osd.0 =======
[kube-node1][DEBUG ]
[kube-node1][DEBUG ] [block] /dev/ceph-7856263d-442b-4e33-9737-b94faa68621b/osd-block-9f987e76-8640-4c3b-a9fd-06f701b63903
[kube-node1][DEBUG ]
[kube-node1][DEBUG ] type block
[kube-node1][DEBUG ] osd id 0
[kube-node1][DEBUG ] cluster fsid b7b9e370-ea9b-4cc0-8b09-17167c876c24
[kube-node1][DEBUG ] cluster name ceph
[kube-node1][DEBUG ] osd fsid 9f987e76-8640-4c3b-a9fd-06f701b63903
[kube-node1][DEBUG ] encrypted 0
[kube-node1][DEBUG ] cephx lockbox secret
[kube-node1][DEBUG ] block uuid bjGqMD-HRZe-vfXP-p2ma-Mofx-j4lK-Gcu0hA
[kube-node1][DEBUG ] block device /dev/ceph-7856263d-442b-4e33-9737-b94faa68621b/osd-block-9f987e76-8640-4c3b-a9fd-06f701b63903
[kube-node1][DEBUG ] vdo 0
[kube-node1][DEBUG ] crush device class None
Check the cluster's OSD status
Switch to an OSD node:

$ ssh kube-node2
Check the OSD process and its command-line arguments:

[ceph@kube-node2 ~]$ ps -elf|grep ceph-osd|grep -v grep
4 S ceph 23498 1 0 80 0 - 351992 futex_ Jul09 ? 00:08:43 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
Adjust the keyring file's read permission so that non-root accounts can read it:

$ ssh ceph@kube-node2

[ceph@kube-node2 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmp018nTi

[ceph@kube-node2 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw------- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node2 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node2 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring
Check the Ceph cluster status:

[ceph@kube-node1 my-cluster]$ ceph -s
cluster:
id: b7b9e370-ea9b-4cc0-8b09-17167c876c24
health: HEALTH_OK

services:
mon: 1 daemons, quorum kube-node1
mgr: kube-node1(active)
osd: 3 osds: 3 up, 3 in

data:
pools: 1 pools, 64 pgs
objects: 0 objects, 0 bytes
usage: 3080 MB used, 2765 GB / 2768 GB avail
pgs: 64 active+clean
Check the OSD status:

[ceph@kube-node2 ~]$ ceph osd stat
3 osds: 3 up, 3 in
View the OSD tree:

[ceph@kube-node2 ~]$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 2.70419 root default
-9 0 host ambari
-2 0.90140 host kube-node1
0 hdd 0.90140 osd.0 up 1.00000 1.00000
-3 0.90140 host kube-node2
1 hdd 0.90140 osd.1 up 1.00000 1.00000
-4 0.90140 host kube-node3
2 hdd 0.90140 osd.2 up 1.00000 1.00000
Dump detailed OSD information:

[ceph@kube-node2 ~]$ ceph osd dump
epoch 180
fsid b7b9e370-ea9b-4cc0-8b09-17167c876c24
created 2018-07-05 16:07:58.315940
modified 2018-07-10 15:45:17.050188
flags sortbitwise,recovery_deletes,purged_snapdirs
crush_version 10
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client firefly
min_compat_client firefly
require_osd_release luminous
pool 0 ‘rbd’ replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 39 flags hashpspool stripe_width 0 application rbd
removed_snaps [1~3]
max_osd 4
osd.0 up in weight 1 up_from 151 up_thru 177 down_at 148 last_clean_interval [121,147) 172.27.132.65:6800/41937 172.27.132.65:6801/41937 172.27.132.65:6802/41937 172.27.132.65:6803/41937 exists,up 9f987e76-8640-4c3b-a9fd-06f701b63903
osd.1 up in weight 1 up_from 139 up_thru 177 down_at 136 last_clean_interval [129,135) 172.27.132.66:6800/23498 172.27.132.66:6801/23498 172.27.132.66:6802/23498 172.27.132.66:6803/23498 exists,up 149b2781-077f-4146-82f3-4d8061d24043
osd.2 up in weight 1 up_from 175 up_thru 177 down_at 172 last_clean_interval [124,171) 172.27.132.67:6801/28744 172.27.132.67:6802/28744 172.27.132.67:6803/28744 172.27.132.67:6804/28744 exists,up 9aa9379e-a0a9-472f-9c5d-0c36d02c9ebf
Test the OSD object store
[ceph@kube-node2 ~]$ echo {Test-data} > testfile.txt # create a test file
[ceph@kube-node2 ~]$ ceph osd pool create mytest 8 # create a pool
[ceph@kube-node2 ~]$ rbd pool init mytest # initialize the pool
[ceph@kube-node2 ~]$ rados put test-object-1 testfile.txt --pool=mytest # store the file in the pool
[ceph@kube-node2 ~]$ rados -p mytest ls # list the objects in the pool
test-object-1
[ceph@kube-node2 ~]$ ceph osd map mytest test-object-1 # show the PG and OSDs the object maps to
osdmap e25 pool 'mytest' (1) object 'test-object-1' -> pg 1.74dc35e2 (1.2) -> up ([1,2,0], p1) acting ([1,2,0], p1)
[ceph@kube-node2 ~]$ rados rm test-object-1 --pool=mytest # delete the object
[ceph@kube-node2 ~]$ ceph osd pool rm mytest mytest --yes-i-really-really-mean-it # delete the pool (requires mon_allow_pool_delete=true)
Scale out the OSD nodes
$ ceph-deploy install kube-node4 # install the packages
$ ceph-deploy osd create --data /dev/vda3 kube-node4 # --data specifies the data partition, which must be umounted first
$ ceph-deploy config push kube-node4 # push the updated ceph config file to the listed hosts
Remove an OSD node
$ sudo systemctl stop 'ceph-osd@<ID>' # <ID> is the ID of the OSD on this node
$ ceph-disk deactivate /dev/vda3
$ ceph-volume lvm zap /dev/vda3
$ ceph-disk destroy /dev/sdb # remove the disk from ceph
$ ceph osd out <ID> # tell the mons this OSD can no longer serve, so its data is recovered onto other OSDs
$ ceph osd crush remove osd.<ID> # remove it from CRUSH entirely so the cluster recalculates placement; otherwise it still holds its crush weight and skews the host's crush weight
$ ceph auth del osd.<ID> # delete this OSD's entry from authentication
$ ceph osd rm <ID> # delete this OSD's record from the cluster
$ cd my-cluster # enter the ceph-deploy working directory
$ # edit ceph.conf, then push it to all the other nodes
$ ceph-deploy --overwrite-conf config push ceph-1 ceph-2 ceph-3
Restart the OSD service
Find the OSD process ID:

[ceph@kube-node2 ~]$ ps -elf|grep osd|grep -v grep
4 S ceph 23498 1 0 80 0 - 351992 futex_ Jul09 ? 00:08:47 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
Restart the OSD service by its ID:

[ceph@kube-node2 ~]$ sudo systemctl restart 'ceph-osd@1' # ceph-osd@N, where N is the OSD ID

4. Using RBD from a Client

Using RBD from a client
Install the Ceph packages on the client node and copy the client.admin keyring file to it:

[ceph@kube-node1 my-cluster]$ ceph-deploy install --release luminous kube-node2
[ceph@kube-node1 my-cluster]$ ceph-deploy config push kube-node2
Adjust the keyring file's read permission so that non-root accounts can read it:

$ ssh ceph@kube-node2

[ceph@kube-node2 ~]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap tmp018nTi

[ceph@kube-node2 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw------- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node2 ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

[ceph@kube-node2 ~]$ ls -l /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 129 Mar 11 23:43 /etc/ceph/ceph.client.admin.keyring
Create a pool and an image
Create a new pool:

[ceph@kube-node2 ~]$ ceph osd pool create mytest 8
[ceph@kube-node2 ~]$ rbd pool init mytest
[ceph@kube-node2 ~]$ ceph osd lspools
0 rbd,1 mytest,
Create a 4 GB image in the pool:

[ceph@kube-node2 ~]$ rbd create foo --size 4096 -p mytest
[ceph@kube-node2 ~]$ rbd list -p mytest
foo
[ceph@kube-node2 ~]$ rbd info foo -p mytest
rbd image 'foo':
size 4096 MB in 1024 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.374974b0dc51
format: 2
features: layering
flags:
create_timestamp: Thu Jul 5 17:45:09 2018
Use the RBD image
Map the RBD image foo to a local block device:

[ceph@kube-node2 ~]$ sudo rbd map mytest/foo
/dev/rbd0
Check the RBD device parameters (the values are shown below):

[ceph@kube-node2 ~]$ cat /sys/block/rbd0/queue/optimal_io_size
4194304
[ceph@kube-node2 ~]$ cat /sys/block/rbd0/alignment_offset
0
[ceph@kube-node2 ~]$ cat /sys/block/rbd0/queue/physical_block_size
512
Format the block device and mount it:

[ceph@kube-node2 ~]$ sudo mkfs.ext4 -m0 /dev/rbd0
[ceph@kube-node2 ~]$ sudo mkdir /mnt/ceph-block-device
[ceph@kube-node2 ~]$ sudo mount /dev/rbd0 /mnt/ceph-block-device
[ceph@kube-node2 ~]$ cd /mnt/ceph-block-device
Check the local mappings:

[ceph@kube-node2 ~]$ rbd showmapped
id pool image snap device
0 mytest foo - /dev/rbd0
Delete the RBD image
[ceph@kube-node2 ~]$ sudo umount /dev/rbd0 # unmount the filesystem
[ceph@kube-node2 ~]$ sudo rbd unmap mytest/foo # remove the device mapping
[ceph@kube-node2 ~]$ rbd rm mytest/foo # delete the image
Removing image: 100% complete...done.
Use rbdmap to map and unmap RBD devices automatically
After a client maps an RBD image, if it shuts down without first running rbd unmap, the shutdown hangs while umounting that RBD device.

rbdmap solves this. rbdmap is a shell script; its configuration file is /etc/ceph/rbdmap, with the following format:

[ceph@kube-node2 ~]$ cat /etc/ceph/rbdmap 
# RbdDevice        Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
mytest/foo --id admin --keyring /etc/ceph/ceph.client.admin.keyring
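With the entry in place, the rbdmap systemd service (shipped with the Ceph packages) can be enabled so that the configured images are mapped at boot and unmapped cleanly at shutdown; a minimal, hedged sketch:

[ceph@kube-node2 ~]$ sudo systemctl enable rbdmap
[ceph@kube-node2 ~]$ sudo systemctl start rbdmap
[ceph@kube-node2 ~]$ rbd showmapped # mytest/foo should now appear, e.g. as /dev/rbd0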

5. Deploying the RGW Node

Deploy the RGW node
Install the ceph-radosgw package; --release can pin the version:

[ceph@kube-node1 ~]$ ceph-deploy install --rgw kube-node2
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /bin/ceph-deploy install --rgw kube-node2
[kube-node2][WARNIN] altered ceph.repo priorities to contain: priority=1
[kube-node2][INFO ] Running command: sudo yum -y install ceph-radosgw
[kube-node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[kube-node2][WARNIN] Not using downloaded repomd.xml because it is older than what we have:
[kube-node2][WARNIN] Current : Thu Apr 26 21:06:11 2018
[kube-node2][WARNIN] Downloaded: Fri Oct 6 01:41:59 2017
[kube-node2][WARNIN] Not using downloaded repomd.xml because it is older than what we have:
[kube-node2][WARNIN] Current : Thu Apr 26 21:02:10 2018
[kube-node2][WARNIN] Downloaded: Fri Oct 6 01:38:33 2017
[kube-node2][WARNIN] Not using downloaded repomd.xml because it is older than what we have:
[kube-node2][WARNIN] Current : Thu Apr 26 21:02:30 2018
[kube-node2][WARNIN] Downloaded: Fri Oct 6 01:38:39 2017
[kube-node2][DEBUG ] Loading mirror speeds from cached hostfile
[kube-node2][DEBUG ] * epel: mirrors.huaweicloud.com
[kube-node2][DEBUG ] 8 packages excluded due to repository priority protections
[kube-node2][DEBUG ] Package 2:ceph-radosgw-12.2.5-0.el7.x86_64 already installed and latest version
[kube-node2][DEBUG ] Nothing to do
[kube-node2][INFO ] Running command: sudo ceph --version
[kube-node2][DEBUG ] ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
Configure and enable the RGW node
[ceph@kube-node1 my-cluster]$ ceph-deploy rgw create kube-node2 # rgw listens on port 7480 by default
Output:

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf

[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts kube-node2:rgw.kube-node2
[kube-node2][DEBUG ] connection detected need for sudo
[kube-node2][DEBUG ] connected to host: kube-node2
[kube-node2][DEBUG ] detect platform information from remote host
[kube-node2][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.3.1611 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to kube-node2
[kube-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[kube-node2][WARNIN] rgw keyring does not exist yet, creating one
[kube-node2][DEBUG ] create a keyring file
[kube-node2][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.kube-node2 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.kube-node2/keyring
[kube-node2][INFO ] Running command: sudo systemctl enable ceph-radosgw@rgw.kube-node2
[kube-node2][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.kube-node2.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[kube-node2][INFO ] Running command: sudo systemctl start ceph-radosgw@rgw.kube-node2
[kube-node2][INFO ] Running command: sudo systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host kube-node2 and default port 7480
It creates and starts the ceph-radosgw service;
Check the listening port
$ ssh root@kube-node2
[root@kube-node2 ~]# netstat -lnpt|grep radosgw
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 33110/radosgw
Note: the address RGW listens on can be changed in the ceph.conf configuration file; restart the rgw process for the change to take effect:

[client]
rgw frontends = civetweb port=80

$ sudo systemctl restart ceph-radosgw@rgw.kube-node2
Test the object store
Test port connectivity:

[ceph@kube-node1 ~]$ curl http://172.27.132.66:7480

<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Create an object storage user:

[ceph@kube-node1 ~]$ radosgw-admin user create --uid=demo --display-name="ceph sgw demo user"
Output:

{
“user_id”: “demo”,
“display_name”: “ceph sgw demo user”,
“email”: “”,
“suspended”: 0,
“max_buckets”: 1000,
“auid”: 0,
“subusers”: [],
“keys”: [
{
“user”: “demo”,
“access_key”: “BY5C4TRTAH755NH8B8K8”,
“secret_key”: “bdsqOAntwrMJAWTVGngxDPAMXrx7zalSQk8YUwIq”
}
],
“swift_keys”: [],
“caps”: [],
“op_mask”: “read, write, delete”,
“default_placement”: “”,
“placement_tags”: [],
“bucket_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“user_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“temp_url_keys”: [],
“type”: “rgw”
}
Create a subuser:

[ceph@kube-node1 ~]$ radosgw-admin subuser create --uid demo --subuser=demo:swift --access=full --secret=secretkey --key-type=swift
Output:

{
“user_id”: “demo”,
“display_name”: “ceph sgw demo user”,
“email”: “”,
“suspended”: 0,
“max_buckets”: 1000,
“auid”: 0,
“subusers”: [
{
“id”: “demo:swift”,
“permissions”: “full-control”
}
],
“keys”: [
{
“user”: “demo”,
“access_key”: “BY5C4TRTAH755NH8B8K8”,
“secret_key”: “bdsqOAntwrMJAWTVGngxDPAMXrx7zalSQk8YUwIq”
}
],
“swift_keys”: [
{
“user”: “demo:swift”,
“secret_key”: “secretkey”
}
],
“caps”: [],
“op_mask”: “read, write, delete”,
“default_placement”: “”,
“placement_tags”: [],
“bucket_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“user_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“temp_url_keys”: [],
“type”: “rgw”
}
Generate a key for the subuser:

[ceph@kube-node1 ~]$ radosgw-admin key create --subuser=demo:swift --key-type=swift --gen-secret
Output:

{
“user_id”: “demo”,
“display_name”: “ceph sgw demo user”,
“email”: “”,
“suspended”: 0,
“max_buckets”: 1000,
“auid”: 0,
“subusers”: [
{
“id”: “demo:swift”,
“permissions”: “full-control”
}
],
“keys”: [
{
“user”: “demo”,
“access_key”: “BY5C4TRTAH755NH8B8K8”,
“secret_key”: “bdsqOAntwrMJAWTVGngxDPAMXrx7zalSQk8YUwIq”
}
],
“swift_keys”: [
{
“user”: “demo:swift”,
“secret_key”: “ttQcU1O17DFQ4I9xzKqwgUe7WIYYX99zhcIfU9vb”
}
],
“caps”: [],
“op_mask”: “read, write, delete”,
“default_placement”: “”,
“placement_tags”: [],
“bucket_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“user_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
},
“temp_url_keys”: [],
“type”: “rgw”
}
Test an S3 bucket
Install the client library:

[ceph@kube-node1 ~]$ sudo yum install python-boto
Create the S3 test script:

import boto.s3.connection

access_key = 'BY5C4TRTAH755NH8B8K8'
secret_key = 'bdsqOAntwrMJAWTVGngxDPAMXrx7zalSQk8YUwIq'
conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='172.27.132.66', port=7480,
    is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
Run the S3 test script:

[ceph@kube-node2 ~]$ python s3test.py
my-new-bucket 2018-07-11T12:26:05.329Z
Test Swift
Install the Swift client:

[ceph@kube-node1 ~]$ sudo yum install python2-pip
[ceph@kube-node1 ~]$ sudo pip install python-swiftclient
List the buckets:

[ceph@kube-node2 ~]$ swift -V 1.0 -A http://kube-node2:7480/auth -U demo:swift -K ttQcU1O17DFQ4I9xzKqwgUe7WIYYX99zhcIfU9vb list
my-new-bucket
View the bucket's status and statistics:

[ceph@kube-node2 ~]$ radosgw-admin bucket stats --bucket my-new-bucket
{
“bucket”: “my-new-bucket”,
“zonegroup”: “b71dde3d-8d8d-4240-ad99-1c85182d3e9b”,
“placement_rule”: “default-placement”,
“explicit_placement”: {
“data_pool”: “”,
“data_extra_pool”: “”,
“index_pool”: “”
},
“id”: “1f3f02c4-fe58-4626-992b-c6c0fe4c8acf.64249.1”,
“marker”: “1f3f02c4-fe58-4626-992b-c6c0fe4c8acf.64249.1”,
“index_type”: “Normal”,
“owner”: “demo”,
“ver”: “0#1”,
“master_ver”: “0#0”,
“mtime”: “2018-07-11 20:26:05.332943”,
“max_marker”: “0#”,
“usage”: {},
“bucket_quota”: {
“enabled”: false,
“check_on_raw”: false,
“max_size”: -1,
“max_size_kb”: 0,
“max_objects”: -1
}
}

6. Deploying the MDS Node

Deploy the metadata node
Install the Ceph packages on the node and push the client.admin keyring file to it:

[ceph@kube-node1 my-cluster]$ ceph-deploy install --release luminous kube-node3
[ceph@kube-node1 my-cluster]$ ceph-deploy config push kube-node3
Deploy the metadata server
[ceph@kube-node1 my-cluster]$ ceph-deploy mds create kube-node3
Check the MDS service
Log in to the kube-node3 node where it was deployed and check the service and port status:

$ ssh root@kube-node3
[root@kube-node3 ~]# systemctl status 'ceph-mds@kube-node3.service' | grep 'Active:'
Active: active (running) since 一 2018-07-09 15:45:08 CST; 2 days ago
[root@kube-node3 ~]# netstat -lnpt|grep ceph-mds
tcp 0 0 0.0.0.0:6800 0.0.0.0:* LISTEN 1251/ceph-mds
Make sure the status is running and that port 6800 is listening;
Create a CephFS filesystem
Create two pools; the trailing number is the PG count:

[ceph@kube-node1 my-cluster]# ceph osd pool create cephfs_data 100
pool 'cephfs_data' created
[ceph@kube-node1 my-cluster]# ceph osd pool create cephfs_metadata 20
pool 'cephfs_metadata' created
Create the cephfs filesystem; note that a single Ceph cluster can host only one cephfs filesystem:

[ceph@kube-node1 my-cluster]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
Create the secret file
Look up the key created for client.admin:

[ceph@kube-node1 my-cluster]$ cd ~/my-cluster/
[ceph@kube-node1 my-cluster]$ grep key ceph.client.admin.keyring
key = AQDe0T1babvPNhAApxQdjXNU20vYqkDG+YWACw==
Save the key above into a file, e.g. admin.secret; for the kernel mount's secretfile option the file must contain only the key itself:

$ cat admin.secret
AQDe0T1babvPNhAApxQdjXNU20vYqkDG+YWACw==
Mount and use cephfs
There are two ways to mount CephFS:

Kernel driver:

[ceph@kube-node3 ~]$ sudo mkdir /mnt/mycephfs
[ceph@kube-node3 ~]$ vi admin.secret
[ceph@kube-node3 ~]$ sudo mount -t ceph kube-node1:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret
FUSE:

[ceph@kube-node3 ~]$ sudo yum install ceph-fuse
[ceph@kube-node3 ~]$ sudo mkdir ~/mycephfs
[ceph@kube-node3 ~]$ sudo ceph-fuse -k ./ceph.client.admin.keyring -m kube-node1:6789 ~/mycephfs
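To make the kernel-driver mount persistent across reboots, a CephFS entry can be added to /etc/fstab (a hedged sketch; it assumes the key was saved to /home/ceph/admin.secret as above, and uses _netdev so mounting waits for the network):

kube-node1:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/home/ceph/admin.secret,noatime,_netdev 0 2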

II. Operations

Troubleshooting

1. ceph-deploy new fails with ImportError: No module named pkg_resources
Symptom:

[ceph@kube-node1 my-cluster]$ ceph-deploy new kube-node1
Traceback (most recent call last):
File “/bin/ceph-deploy”, line 18, in
from ceph_deploy.cli import main
File “/usr/lib/python2.7/site-packages/ceph_deploy/cli.py”, line 1, in
import pkg_resources
ImportError: No module named pkg_resources
Cause:

The python2-pip package is missing from the system (installing it pulls in python-setuptools, which provides pkg_resources).

Solution:

[ceph@kube-node1 my-cluster]$ sudo yum install python2-pip
2. ceph-deploy disk zap fails with AttributeError: 'Namespace' object has no attribute 'debug'
Symptom:

[ceph@kube-node1 my-cluster]$ ceph-deploy disk zap kube-node3 /dev/vda3

[kube-node3][DEBUG ] find the location of an executable
[ceph_deploy][ERROR ] Traceback (most recent call last):
[ceph_deploy][ERROR ] File “/usr/lib/python2.7/site-packages/ceph_deploy/util/decorators.py”, line 69, in newfunc
[ceph_deploy][ERROR ] return f(*a, **kw)
[ceph_deploy][ERROR ] File “/usr/lib/python2.7/site-packages/ceph_deploy/cli.py”, line 164, in _main
[ceph_deploy][ERROR ] return args.func(args)
[ceph_deploy][ERROR ] File “/usr/lib/python2.7/site-packages/ceph_deploy/osd.py”, line 438, in disk
[ceph_deploy][ERROR ] disk_zap(args)
[ceph_deploy][ERROR ] File “/usr/lib/python2.7/site-packages/ceph_deploy/osd.py”, line 336, in disk_zap
[ceph_deploy][ERROR ] if args.debug:
[ceph_deploy][ERROR ] AttributeError: ‘Namespace’ object has no attribute ‘debug’
Cause:

A bug in the ceph-deploy code;

Solution:

sudo vim /usr/lib/python2.7/site-packages/ceph_deploy/osd.py

Change line 336 from if args.debug: to if False:
3. ceph commands fail with ERROR: missing keyring, cannot use cephx for authentication
Symptom:

[root@kube-node2 ~]# sudo ceph-volume lvm create --data /dev/vda3
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 987317ae-38c1-4236-a621-387d30ff9d36
stderr: 2018-07-05 17:08:49.355424 7fcec57d1700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
stderr: 2018-07-05 17:08:49.355441 7fcec57d1700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
stderr: 2018-07-05 17:08:49.355443 7fcec57d1700 0 librados: client.bootstrap-osd initialization error (2) No such file or directory
stderr: [errno 2] error connecting to the cluster
–> RuntimeError: Unable to create a new OSD id
Cause:

The /var/lib/ceph/bootstrap-XX/*.keyring files are missing; by default they exist only on the deploy node.

Solution:

Run the ceph-deploy osd create --data /dev/vda3 kube-node2 command from the deploy node.

4. all OSDs are running luminous or later but require_osd_release < luminous
Symptom:

[ceph@kube-node1 my-cluster]$ sudo ceph health
HEALTH_WARN all OSDs are running luminous or later but require_osd_release < luminous

[ceph@kube-node1 my-cluster]$ ceph -s
cluster:
id: b7b9e370-ea9b-4cc0-8b09-17167c876c24
health: HEALTH_WARN
all OSDs are running luminous or later but require_osd_release < luminous

services:
mon: 1 daemons, quorum kube-node1
mgr: kube-node1(active)
osd: 3 osds: 3 up, 3 in

data:
pools: 1 pools, 64 pgs
objects: 0 objects, 0 bytes
usage: 3079 MB used, 2765 GB / 2768 GB avail
pgs: 64 active+clean
Cause:

The cluster's recorded required OSD release has not been raised to match the running version;

Solution:

[ceph@kube-node1 my-cluster]$ ceph osd require-osd-release luminous
recovery_deletes is set
5. ceph osd pool create complains that the pg_num value is too large
Symptom:

[ceph@kube-node1 ~]$ ceph osd pool create k8s 128 128
Error ERANGE: pg_num 128 size 3 would mean 960 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
Cause:

There are too few OSDs, or the mon_max_pg_per_osd value is too small;

Solution:

Add more OSDs (see the OSD scale-out section above);
or raise mon_max_pg_per_osd:
[ceph@kube-node1 my-cluster]$ ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd=350'
injectargs:mon_max_pg_per_osd = '350' (not observed, change may require restart)
References:

https://www.wanglf.net/ceph-pg-num-is-too-large.html
http://blog.51cto.com/michaelkang/1727667

6. application not enabled on 1 pool(s)
Symptom:

After creating a pool, the cluster health becomes abnormal:

[root@kube-node1 my-cluster]# ceph -s
cluster:
id: b7b9e370-ea9b-4cc0-8b09-17167c876c24
health: HEALTH_WARN
application not enabled on 1 pool(s)

services:
mon: 1 daemons, quorum kube-node1
mgr: kube-node1(active), standbys: kube-node2
osd: 3 osds: 3 up, 3 in

data:
pools: 2 pools, 72 pgs
objects: 1 objects, 12 bytes
usage: 3081 MB used, 2765 GB / 2768 GB avail
pgs: 72 active+clean
Cause:

The newly created pool was not automatically tagged with an application;
or the pool was not initialized;
Solution:

Tag the pool with an application:

[root@kube-node1 my-cluster]# ceph osd pool application enable mytest rbd # mytest is the pool name
enabled application 'rbd' on pool 'mytest'
Or initialize the pool right after creating it:

ceph osd pool create my-pool 8
rbd pool init my-pool
Reference:

https://ceph.com/community/new-luminous-pool-tags/

7. Mapping an RBD image to a local device fails: rbd: sysfs write failed
Symptom:

Mapping the RBD image to a local device fails:

[root@kube-node1 my-cluster]# sudo rbd map foo -p mytest
rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with “rbd feature disable”.
In some cases useful info is found in syslog - try “dmesg | tail”.
rbd: map failed: (6) No such device or address
Cause:

The CentOS kernel does not support the full feature set: [layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten, journaling, data-pool]

Solution:

Disable the unsupported RBD features:
[root@kube-node1 my-cluster]# rbd feature disable mytest/foo fast-diff,object-map,exclusive-lock,deep-flatten
[root@kube-node1 my-cluster]# rbd info mytest/foo |grep features
features: layering
Alternatively, add rbd_default_features = 3 to /etc/ceph/ceph.conf so that newly created RBD images enable only the kernel-supported features (layering and stripingv2):
The features value is the sum of the following flags:

+1 for layering, +2 for stripingv2, +4 for exclusive lock, +8 for object map +16 for fast-diff, +32 for deep-flatten, +64 for journaling

Linux kernels up to and including v4.6 support only layering and stripingv2.
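A quick worked example of the flag arithmetic:

layering (+1) + stripingv2 (+2) = 3 -> rbd_default_features = 3 (the setting above)
layering (+1) alone = 1 -> rbd_default_features = 1 (the value used in ceph.conf earlier)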

Reference:

http://www.zphj1987.com/2016/06/07/rbd%E6%97%A0%E6%B3%95map-rbd-feature-disable/

8. Kubernetes fails to mount the RBD image behind a PV: rbd: map failed exit status 6 rbd: sysfs write failed
Symptom:

Kubernetes fails to mount the RBD image backing the PV, reporting rbd: map failed exit status 6 rbd: sysfs write failed:
[k8s@kube-node1 ~]$ kubectl get pods|grep prometheus-server
wishful-ladybird-prometheus-server-f744b8794-mxr5l 0/2 Init:0/1 0 55s

[k8s@kube-node1 ~]$ kubectl describe pods wishful-ladybird-prometheus-server-f744b8794-mxr5l|tail -10


Normal Scheduled 2m default-scheduler Successfully assigned wishful-ladybird-prometheus-server-f744b8794-mxr5l to kube-node2
Normal SuccessfulMountVolume 2m kubelet, kube-node2 MountVolume.SetUp succeeded for volume “config-volume”
Normal SuccessfulMountVolume 2m kubelet, kube-node2 MountVolume.SetUp succeeded for volume “wishful-ladybird-prometheus-server-token-8grpr”
Warning FailedMount 59s (x8 over 2m) kubelet, kube-node2 MountVolume.SetUp failed for volume “ceph-pv-8g” : rbd: map failed exit status 6 rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with “rbd feature disable”.
In some cases useful info is found in syslog - try “dmesg | tail”.
rbd: map failed: (6) No such device or address
Warning FailedMount 2s kubelet, kube-node2 Unable to mount volumes for pod “wishful-ladybird-prometheus-server-f744b8794-mxr5l_default(7cdc39e7-80f0-11e8-9331-525400ce676d)”: timeout expired waiting for volumes to attach/mount for pod “default”/“wishful-ladybird-prometheus-server-f744b8794-mxr5l”. list of unattached/unmounted volumes=[storage-volume]
Warning FailedSync 2s kubelet, kube-node2 Error syncing pod

[k8s@kube-node1 ~]$ kubectl get pv ceph-pv-8g -o yaml|grep image
image: prometheus-server
Disabling the features directly fails with Read-only file system:
[k8s@kube-node1 ~]$ rbd feature disable prometheus-server fast-diff,object-map,exclusive-lock,deep-flatten
rbd: failed to update image features: (30) Read-only file system
Deleting the RBD image fails:
[k8s@ambari ~]$ rbd rm prometheus-server
2018-07-06 18:05:04.968455 7f9ce1ffb700 -1 librbd::image::RemoveRequest: 0x7f9d05abc0f0 handle_exclusive_lock: cannot obtain exclusive lock - not removing
Removing image: 0% complete…failed.
rbd: error: image still has watchers
This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.

[k8s@ambari ~]$ rbd status prometheus-server
Watchers: none
Cause:

The CentOS kernel does not support the full feature set: [layering, striping, exclusive-lock, object-map, fast-diff, deep-flatten, journaling, data-pool]

Solution:

Remove the lock on the RBD image, then delete the image:
[k8s@ambari ~]$ rbd lock list prometheus-server
There is 1 exclusive lock on this image.
Locker ID Address
client.44142 kubelet_lock_magic_ambari.hadoop 172.27.132.67:0/710596833

[k8s@ambari ~]$ rbd lock rm prometheus-server kubelet_lock_magic_ambari.hadoop client.44142
[k8s@ambari ~]$ rbd rm prometheus-server
Removing image: 100% complete…done.
[k8s@ambari ~]$
9. rbd unmap fails
Symptom:

[root@kube-node2 ~]# sudo rbd unmap foo
rbd: sysfs write failed
rbd: unmap failed: (16) Device or resource busy
Cause:

Unknown;

Solution:

Force the unmap with -o force:

[root@kube-node2 ~]# sudo rbd unmap -o force foo
Reference:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-July/019160.html

10. Mapping the same RBD image twice and reading/writing concurrently corrupts the filesystem: Input/output error
Symptom:

[root@kube-node2 ~]# sudo rbd map foo # mapping the same RBD a second time
[root@kube-node2 ~]# sudo mkdir /mnt/ceph-block-device
[root@kube-node2 ~]# sudo mount /dev/rbd0 /mnt/ceph-block-device
[root@kube-node2 ~]# ls /mnt/ceph-block-device/
ls: cannot access /mnt/ceph-block-device/test: Input/output error
lost+found test

[root@kube-node2 ~]# dmesg|tail
[598628.843913] rbd: loaded (major 251)
[598628.850614] libceph: mon0 172.27.132.65:6789 session established
[598628.851312] libceph: client14187 fsid b7b9e370-ea9b-4cc0-8b09-17167c876c24
[598628.860645] rbd: rbd0: capacity 4294967296 features 0x1
[598639.147675] EXT4-fs (rbd0): recovery complete
[598639.147682] EXT4-fs (rbd0): mounted filesystem with ordered data mode. Opts: (null)
[598675.693950] EXT4-fs (rbd0): mounted filesystem with ordered data mode. Opts: (null)
[598678.177275] EXT4-fs error (device rbd0): ext4_lookup:1441: inode #2: comm ls: deleted inode referenced: 12
[598680.603020] EXT4-fs error (device rbd0): ext4_lookup:1441: inode #2: comm ls: deleted inode referenced: 12
[598701.607815] EXT4-fs error (device rbd0): ext4_lookup:1441: inode #2: comm ls: deleted inode referenced: 12
Cause:

RBD does not support concurrent access through multiple mounts.

Solution:

Repair the filesystem with fsck.

[root@kube-node2 ~]# umount /dev/rbd1
[root@kube-node2 ~]# fsck -y /dev/rbd1
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/rbd1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Entry ‘test’ in / (2) has deleted/unused inode 12. Clear? yes

Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Inode bitmap differences: -12
Fix? yes

Free inodes count wrong for group #0 (8180, counted=8181).
Fix? yes

Free inodes count wrong (262132, counted=262133).
Fix? yes

/dev/rbd1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/rbd1: 11/262144 files (0.0% non-contiguous), 53326/1048576 blocks
[root@kube-node2 ~]# mount /dev/rbd1 /mnt/ceph-block-device/
[root@kube-node2 ~]# ls -l /mnt/ceph-block-device/
total 16
drwx------ 2 root root 16384 Jul 5 18:07 lost+found
[root@kube-node2 ~]#
11. too few PGs per OSD (16 < min 30)
Symptom:

$ ceph -s
cluster 85510587-14c6-4526-9636-83179bda2751
health HEALTH_WARN
too few PGs per OSD (16 < min 30)
monmap e3: 3 mons at {controller-01=10.90.3.7:6789/0,controller-02=10.90.3.2:6789/0,controller-03=10.90.3.5:6789/0}
election epoch 8, quorum 0,1,2 controller-02,controller-03,controller-01
osdmap e74: 12 osds: 12 up, 12 in
pgmap v38670: 1408 pgs, 10 pools, 18592 MB data, 4304 objects
56379 MB used, 20760 GB / 20815 GB avail
1408 active+clean
client io 5127 B/s wr, 2 op/s
Cause:

Ceph currently warns when an OSD holds fewer than 30 PGs; as soon as the per-OSD PG count drops below 30 it reports "too few PGs per OSD":

$ ceph --show-config | grep mon_pg_warn_min_per_osd
mon_pg_warn_min_per_osd = 30
Solution:

Check the Ceph pools:
$ ceph osd lspools
0 rbd,

A freshly created cluster has only the rbd pool.

Check the PG and PGP values of the rbd pool:
$ ceph osd pool get rbd pg_num
pg_num: 64

$ ceph osd pool get rbd pgp_num
pgp_num: 64

Check the pool's replica count:
$ ceph osd dump | grep size

pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

With pg_num 64 and 3 replicas, the 12 OSDs each carry 64 / 12 * 3 = 16 PGs, below the minimum of 30.

Adjust the rbd pool's PG count
pg_num cannot be set arbitrarily; neither too large nor too small works. If it is too large, backfill and recovery put too much load on the cluster; if it is too small, data cannot be distributed evenly. Total PGs = (number of OSDs x 100) / max replica count.

The result must be rounded to the nearest power of 2 (see the worked example below).

Change the pool's PG and PGP:
$ ceph osd pool set rbd pg_num 512
$ ceph osd pool set rbd pgp_num 512

Do not raise it too far, or you will hit the too many PGs per OSD (352 > max 300) error instead.
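Worked example with the numbers above:

Total PGs = (12 OSDs x 100) / 3 replicas = 400, and the nearest power of 2 is 512, hence pg_num = pgp_num = 512.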

12. too many PGs per OSD (352 > max 300)
Symptom:

$ ceph -s
cluster 85510587-14c6-4526-9636-83179bda2751
health HEALTH_WARN
too many PGs per OSD (352 > max 300)
monmap e3: 3 mons at {controller-01=10.90.3.7:6789/0,controller-02=10.90.3.2:6789/0,controller-03=10.90.3.5:6789/0}
election epoch 8, quorum 0,1,2 controller-02,controller-03,controller-01
osdmap e74: 12 osds: 12 up, 12 in
pgmap v38670: 1408 pgs, 10 pools, 18592 MB data, 4304 objects
56379 MB used, 20760 GB / 20815 GB avail
1408 active+clean
client io 5127 B/s wr, 2 op/s
Cause:

Ceph currently warns when an OSD holds more than 300 PGs; as soon as the per-OSD PG count exceeds 300 it reports "too many PGs per OSD":

$ ceph --show-config | grep mon_pg_warn_max_per_osd
mon_pg_warn_max_per_osd = 300
Solution:

Check the current PG distribution
PGs per OSD: sum the PG counts of all pools, multiply by the replica size, and divide by the number of OSDs to get the average number of PGs per OSD (a shortcut command is shown below).
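A hedged shortcut: since luminous, ceph osd df prints a PGS column with the placement-group count on each OSD, so the per-OSD number can be read off directly:

$ ceph osd df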

Edit ceph.conf on all monitor nodes
$ vim /etc/ceph/ceph.conf

[global]

mon_pg_warn_max_per_osd = 0 # add this line at the end of the global section

Restart the mon service on all monitor nodes
$ sudo systemctl restart ceph-mon.target

If editing the configuration file feels cumbersome, a single command also works:
$ ceph tell 'mon.*' injectargs '--mon_pg_warn_max_per_osd 0'

A setting changed with tell is only temporary and is lost when the mon service restarts. The permanent fix is to add the option to the Ceph mon nodes' configuration file and then restart the mon service.

13. rgw fails to start
Symptom:

[root@kube-node2 ~]# journalctl -u ceph-radosgw@rgw.kube-node2|tail
7月 05 20:04:04 kube-node2 radosgw[29818]: 2018-07-05 20:04:04.799739 7fe6b8908e80 -1 ERROR: failed to initialize watch: (34) Numerical result out of range
7月 05 20:04:04 kube-node2 radosgw[29818]: 2018-07-05 20:04:04.802725 7fe6b8908e80 -1 Couldn’t init storage provider (RADOS)
7月 05 20:04:04 kube-node2 systemd[1]: ceph-radosgw@rgw.kube-node2.service: main process exited, code=exited, status=5/NOTINSTALLED
7月 05 20:04:04 kube-node2 systemd[1]: Unit ceph-radosgw@rgw.kube-node2.service entered failed state.
7月 05 20:04:04 kube-node2 systemd[1]: ceph-radosgw@rgw.kube-node2.service failed.
7月 05 20:04:05 kube-node2 systemd[1]: ceph-radosgw@rgw.kube-node2.service holdoff time over, scheduling restart.
7月 05 20:04:05 kube-node2 systemd[1]: start request repeated too quickly for ceph-radosgw@rgw.kube-node2.service
7月 05 20:04:05 kube-node2 systemd[1]: Failed to start Ceph rados gateway.
7月 05 20:04:05 kube-node2 systemd[1]: Unit ceph-radosgw@rgw.kube-node2.service entered failed state.
7月 05 20:04:05 kube-node2 systemd[1]: ceph-radosgw@rgw.kube-node2.service failed.
Cause:

The pg_num, pgp_num, mon_max_pg_per_osd, etc. values are set incorrectly or too low.

Solution:

[ceph@kube-node1 my-cluster]$ ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd=350' # it should also be added to ceph.conf
Reference:

http://tracker.ceph.com/issues/22351#note-11

14. s3test.py returns a 416 error
Running the s3test.py program from http://docs.ceph.com/docs/master/install/install-ceph-gateway/ fails:

[k8s@kube-node1 cert]$ python s3test.py
Traceback (most recent call last):
File “s3test.py”, line 12, in
bucket = conn.create_bucket(‘my-new-bucket’)
File “/usr/lib/python2.7/site-packages/boto/s3/connection.py”, line 625, in create_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 416 Requested Range Not Satisfiable
Cause:

The pg_num, pgp_num, mon_max_pg_per_osd, etc. values are set incorrectly or too low.

Solution:

[ceph@kube-node1 my-cluster]$ ceph tell 'mon.*' injectargs '--mon_max_pg_per_osd=350' # it should also be added to ceph.conf
Reference:

https://tracker.ceph.com/issues/21497

15. rbd map fails with a timeout
Symptom:

$ rbd feature disable foo fast-diff,object-map,exclusive-lock,deep-flatten
$ rbd map foo
rbd: sysfs write failed
In some cases useful info is found in syslog - try “dmesg | tail”.
rbd: map failed: (110) Connection timed out
Kernel log:

$ dmesg|tail
[2937489.640621] libceph: mon1 172.27.128.101:6789 feature set mismatch, my 106b84a842a42 < server’s 40106b84a842a42, missing 400000000000000
[2937489.643198] libceph: mon1 172.27.128.101:6789 missing required protocol features
[2937742.427929] libceph: mon2 172.27.128.102:6789 feature set mismatch, my 106b84a842a42 < server’s 40106b84a842a42, missing 400000000000000
[2937742.430234] libceph: mon2 172.27.128.102:6789 missing required protocol features
[2937752.725957] libceph: mon2 172.27.128.102:6789 feature set mismatch, my 106b84a842a42 < server’s 40106b84a842a42, missing 400000000000000
[2937752.728805] libceph: mon2 172.27.128.102:6789 missing required protocol features
[2937762.737960] libceph: mon2 172.27.128.102:6789 feature set mismatch, my 106b84a842a42 < server’s 40106b84a842a42, missing 400000000000000
[2937762.740282] libceph: mon2 172.27.128.102:6789 missing required protocol features
[2937772.722343] libceph: mon2 172.27.128.102:6789 feature set mismatch, my 106b84a842a42 < server’s 40106b84a842a42, missing 400000000000000
[2937772.724659] libceph: mon2 172.27.128.102:6789 missing required protocol features
Cause:

The 400000000000000 feature bit (CRUSH_TUNABLES5) is only supported starting with Linux kernel v4.5;

Solution:

Upgrade the kernel to 4.5 or later;
or change the cluster's feature flags so that 400000000000000 is no longer required;
for the latter, run the command ceph osd crush tunables hammer;

References:

http://cephnotes.ksperis.com/blog/2014/01/21/feature-set-mismatch-error-on-ceph-kernel-client
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-July/019387.html

16. A Kubernetes PVC fails to get a PV provisioned through its StorageClass
Symptom:

$ kubectl get pv # no PV has been created automatically
No resources found.

$ kubectl describe pvc | tail
Warning ProvisioningFailed 25m persistentvolume-controller Failed to provision volume with StorageClass “ceph”: failed to create rbd image: exit status 1, command output: 2018-07-26 17:35:05.749211 7f4e037687c0 -1 did not load config file, using default settings.
rbd: extraneous parameter --image-feature

$ kubectl get endpoints -n kube-system kube-controller-manager -o yaml|grep leader
control-plane.alpha.kubernetes.io/leader: ‘{“holderIdentity”:“m7-devops-128123”,“leaseDurationSeconds”:15,“acquireTime”:“2018-07-25T14:00:31Z”,“renewTime”:“2018-07-26T10:02:01Z”,“leaderTransitions”:3}’

$ journalctl -u kube-controller-manager |tail -4 # check the logs on the leader, m7-devops-128123
7月 26 18:01:20 m7-devops-128123 kube-controller-manager[27948]: E0726 18:01:20.798621 27948 rbd.go:367] rbd: create volume failed, err: failed to create rbd image: exit status 1, command output: 2018-07-26 18:01:20.765861 7f912376db00 -1 did not load config file, using default settings.
7月 26 18:01:20 m7-devops-128123 kube-controller-manager[27948]: rbd: extraneous parameter --image-feature
7月 26 18:01:20 m7-devops-128123 kube-controller-manager[27948]: I0726 18:01:20.798671 27948 pv_controller.go:1317] failed to provision volume for claim “default/pvc-test-claim” with StorageClass “ceph”: failed to create rbd image: exit status 1, command output: 2018-07-26 18:01:20.765861 7f912376db00 -1 did not load config file, using default settings.
7月 26 18:01:20 m7-devops-128123 kube-controller-manager[27948]: rbd: extraneous parameter --image-feature
Cause:

The ceph-common version installed on the Kubernetes nodes is too old and is incompatible with the Ceph cluster.

Solution:

Point the nodes' YUM repository at the release that matches the Ceph cluster, then update/reinstall ceph-common.

$ cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://download.ceph.com/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1

$ yum clean all && yum update
17. Pods using a PVC get stuck in ContainerCreating or Init:0/1 (multi-container case)
Symptom:

[root@m7-devops-128071 prometheus]# kubectl get pods -n devops-monitoring -o wide|grep -v Running
NAME READY STATUS RESTARTS AGE IP NODE
monitoring-prometheus-alertmanager-67674fb84b-pxfbz 0/2 ContainerCreating 0 16m m7-devops-128107
monitoring-prometheus-server-5777d76b75-l5bdv 0/2 Init:0/1 0 16m m7-devops-128107
Both prometheus-alertmanager and prometheus-server use PVCs backed by the StorageClass named ceph;
the pods never reach Running;
Inspect the ceph StorageClass:

[root@m7-devops-128107 ~]# kubectl get storageclass ceph -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
creationTimestamp: 2018-07-26T09:49:30Z
name: ceph
resourceVersion: “462349”
selfLink: /apis/storage.k8s.io/v1/storageclasses/ceph
uid: 3116e9a4-90b9-11e8-b43c-0cc47a2af650
parameters:
adminId: admin
adminSecretName: ceph-secret-admin
adminSecretNamespace: default
imageFeatures: layering
imageFormat: “2”
monitors: 172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789
pool: rbd
userId: admin
userSecretName: ceph-secret-admin
provisioner: kubernetes.io/rbd
reclaimPolicy: Delete
The adminSecret used by this StorageClass, ceph-secret-admin, lives in the default namespace specified by adminSecretNamespace;
The kubelet log on the affected node:

[root@m7-devops-128107 ~]# journalctl -u kubelet -f
– Logs begin at 一 2018-07-30 18:42:12 CST. –
8月 01 17:19:41 m7-devops-128107 kubelet[6120]: E0801 17:19:41.578011 6120 rbd.go:504] failed to get secret from [“devops-monitoring”/“ceph-secret-admin”]
8月 01 17:19:41 m7-devops-128107 kubelet[6120]: E0801 17:19:41.578092 6120 rbd.go:126] Couldn’t get secret from devops-monitoring/&LocalObjectReference{Name:ceph-secret-admin,}
Cause:

The userSecretName referenced by the PVC's StorageClass must exist in the PVC's own namespace. ceph-secret-admin is not defined in the devops-monitoring namespace, hence the error.

Solution:

Create the ceph-secret-admin secret in the PVC's namespace, devops-monitoring; note that its type must be kubernetes.io/rbd:

[root@m7-devops-128071 k8s]# Secret=$(awk '/key = / {print $3}' /etc/ceph/ceph.client.admin.keyring | base64)
[root@m7-devops-128071 k8s]# cat > ceph-secret-admin.yaml <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/rbd
metadata:
name: ceph-secret-admin
namespace: devops-monitoring
data:
key: $Secret
EOF
[root@m7-devops-128071 k8s]# kubectl create -f ceph-secret-admin.yaml

Reference: https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd
