Deploying Ceph as a Storage Platform for Kubernetes

0 Preliminaries

The setup in this article reuses the existing Kubernetes nodes, so system tuning and base configuration are not covered again here.

Note: unless otherwise stated, the following steps are executed on all three nodes.

1 Add a new disk

Add a new disk to the virtual machine. Since I had already added one disk earlier, the new device shows up as sdc:

/dev/sdc
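
Before handing the disk to Ceph, it may be worth confirming that it is visible and carries no leftover partitions or signatures; a quick check (the device name sdc is specific to my setup) could look like this:

lsblk /dev/sdc                 # should list the bare disk with no partitions
sudo wipefs -n /dev/sdc        # dry run: prints any existing filesystem signatures, removes nothing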

2 Configure the yum repositories

vim /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
gpgcheck=0

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
gpgcheck=0

vim /etc/yum.repos.d/epel.repo

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
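
With both repo files in place, refreshing the yum cache and confirming the new repositories show up can save debugging later:

yum clean all
yum makecache
yum repolist | grep -iE 'ceph|epel'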

3 Create a regular user and configure passwordless sudo

groupadd -g 3000 ceph
useradd -u 3000 -g ceph ceph
echo "ceph" | passwd --stdin ceph
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph
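
A quick sanity check that the new account works and that sudo really is passwordless:

su - ceph
sudo whoami    # should print "root" without asking for a password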

4 Set up passwordless SSH login for the new user

On the master node:

su - ceph
ssh-keygen
ssh-copy-id ceph@k8s-master
ssh-copy-id ceph@k8s-node1
ssh-copy-id ceph@k8s-node2
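
Key-based login can be verified from the master before running ceph-deploy, for example:

ssh ceph@k8s-node1 hostname    # should print the node's hostname without a password prompt
ssh ceph@k8s-node2 hostname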

5 Install the software

sudo su - root    # on the master, switch from the ceph user back to root
yum install ceph-deploy -y
wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 https://archive.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7
yum install python-pip -y
yum install ceph ceph-osd ceph-mds ceph-mon ceph-radosgw -y
yum install ntp -y
systemctl start ntpd
systemctl enable ntpd

Tip: the time synchronization service is installed to prevent the cluster health from later flipping from OK to WARN because of clock drift between nodes.
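
Before moving on, the installed version and the time synchronization can be spot-checked on each node (output will vary by environment):

ceph --version    # should report a Nautilus (14.x) build, matching the repo configured above
ntpq -p           # once synced, one peer is marked with '*' as the selected time source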

6 Create the cluster

On the master node:

su - ceph
mkdir cephcluster
cd cephcluster/
# Initialize and create the Ceph cluster
ceph-deploy new --cluster-network 192.168.0.0/24 --public-network 192.168.0.0/24 k8s-master k8s-node1 k8s-node2
# Initialize the monitor service
ceph-deploy mon create-initial
# Copy the configuration to all three nodes
ceph-deploy admin k8s-master k8s-node1 k8s-node2
sudo chown -R ceph:ceph /etc/ceph
chown -R ceph:ceph /etc/ceph    # run on the other nodes
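
At this point the working directory should hold the generated cluster configuration and keyrings; the file names below are the ceph-deploy defaults and may differ slightly:

ls ~/cephcluster
# typically: ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
# plus ceph.client.admin.keyring and the ceph.bootstrap-*.keyring files after "mon create-initial"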

Check the status:

ceph -s
  cluster:
    id:     14450b7d-84ce-40c4-8a1e-46af50457fc6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum k8s-master,k8s-node1,k8s-node2 (age 65s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

7 Configure the mgr service

On the master node:

ceph-deploy mgr create k8s-master k8s-node1 k8s-node2

Check the status:

ceph -s
  cluster:
    id:     14450b7d-84ce-40c4-8a1e-46af50457fc6
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum k8s-master,k8s-node1,k8s-node2 (age 99s)
    mgr: k8s-master(active, since 23s), standbys: k8s-node2, k8s-node1
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

8 Configure the osd service

On the master node:

ceph-deploy osd create --data /dev/sdc k8s-master
ceph-deploy osd create --data /dev/sdc k8s-node1
ceph-deploy osd create --data /dev/sdc k8s-node2
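
Once the three OSDs are created, their placement in the CRUSH map can be checked; IDs and weights depend on the disks used:

ceph osd tree
# expected: one host bucket per node, each containing a single osd in the "up" state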

9 Configure the mon service

On the master node:

First check the status of the mon service in the Ceph cluster:

ceph mon stat

e1: 3 mons at {k8s-master=[v2:192.168.0.234:3300/0,v1:192.168.0.234:6789/0],k8s-node1=[v2:192.168.0.235:3300/0,v1:192.168.0.235:6789/0],k8s-node2=[v2:192.168.0.236:3300/0,v1:192.168.0.236:6789/0]}, election epoch 10, leader 0 k8s-master, quorum 0,1,2 k8s-master,k8s-node1,k8s-node2
ceph mon_status --format json-pretty

{
    "name": "k8s-master",
    "rank": 0,
    "state": "leader",
    "election_epoch": 10,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_age": 495,
    "features": {
        "required_con": "2449958747315912708",
        "required_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ],
        "quorum_con": "4611087854035861503",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ]
    },
    "outside_quorum": [],
    "extra_probe_peers": [
        {
            "addrvec": [
                {
                    "type": "v2",
                    "addr": "192.168.0.235:3300",
                    "nonce": 0
                },
                {
                    "type": "v1",
                    "addr": "192.168.0.235:6789",
                    "nonce": 0
                }
            ]
        },
        {
            "addrvec": [
                {
                    "type": "v2",
                    "addr": "192.168.0.236:3300",
                    "nonce": 0
                },
                {
                    "type": "v1",
                    "addr": "192.168.0.236:6789",
                    "nonce": 0
                }
            ]
        }
    ],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "14450b7d-84ce-40c4-8a1e-46af50457fc6",
        "modified": "2021-03-02 18:32:42.613085",
        "created": "2021-03-02 18:32:42.613085",
        "min_mon_release": 14,
        "min_mon_release_name": "nautilus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "k8s-master",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.0.234:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.0.234:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.0.234:6789/0",
                "public_addr": "192.168.0.234:6789/0"
            },
            {
                "rank": 1,
                "name": "k8s-node1",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.0.235:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.0.235:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.0.235:6789/0",
                "public_addr": "192.168.0.235:6789/0"
            },
            {
                "rank": 2,
                "name": "k8s-node2",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "192.168.0.236:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "192.168.0.236:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "192.168.0.236:6789/0",
                "public_addr": "192.168.0.236:6789/0"
            }
        ]
    },
    "feature_map": {
        "mon": [
            {
                "features": "0x3ffddff8ffecffff",
                "release": "luminous",
                "num": 1
            }
        ],
        "osd": [
            {
                "features": "0x3ffddff8ffecffff",
                "release": "luminous",
                "num": 2
            }
        ],
        "client": [
            {
                "features": "0x3ffddff8ffecffff",
                "release": "luminous",
                "num": 2
            }
        ],
        "mgr": [
            {
                "features": "0x3ffddff8ffecffff",
                "release": "luminous",
                "num": 1
            }
        ]
    }
}

There are already three mon daemons running, so no further configuration is needed.

10 Check the service status

On the master node:

systemctl list-units | grep ceph-mon
ceph-mon@k8s-master.service                                                                                                           loaded active running   Ceph cluster monitor daemon
ceph-mon.target                                                                                                                       loaded active active    ceph target allowing to start/stop all ceph-mon@.service instances at once

systemctl list-units | grep ceph-mgr
ceph-mgr@k8s-master.service                                                                                                           loaded active running   Ceph cluster manager daemon
ceph-mgr.target                                                                                                                       loaded active active    ceph target allowing to start/stop all ceph-mgr@.service instances at once

systemctl list-units | grep ceph-osd
var-lib-ceph-osd-ceph\x2d0.mount                                                                                                      loaded active mounted   /var/lib/ceph/osd/ceph-0
ceph-osd@0.service                                                                                                                    loaded active running   Ceph object storage daemon osd.0
ceph-osd.target                                                                                                                       loaded active active    ceph target allowing to start/stop all ceph-osd@.service instances at once

Check the status:

ceph -s
  cluster:
    id:     14450b7d-84ce-40c4-8a1e-46af50457fc6
    health: HEALTH_WARN
            clock skew detected on mon.k8s-node1

  services:
    mon: 3 daemons, quorum k8s-master,k8s-node1,k8s-node2 (age 15m)
    mgr: k8s-master(active, since 41s), standbys: k8s-node2, k8s-node1
    osd: 3 osds: 3 up (since 10m), 3 in (since 10m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 597 GiB / 600 GiB avail
    pgs:

The health has dropped to HEALTH_WARN (clock skew detected on mon.k8s-node1). The fix:

su - ceph
echo "mon clock drift allowed = 2" >> ~/cephcluster/ceph.conf
echo "mon clock drift warn backoff = 30" >> ~/cephcluster/ceph.conf
ceph-deploy --overwrite-conf config push k8s-master k8s-node1 k8s-node2
sudo systemctl restart ceph-mon.target

Check the status again:

ceph -s
  cluster:
    id:     14450b7d-84ce-40c4-8a1e-46af50457fc6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum k8s-master,k8s-node1,k8s-node2 (age 2m)
    mgr: k8s-master(active, since 5m), standbys: k8s-node2, k8s-node1
    osd: 3 osds: 3 up (since 16m), 3 in (since 16m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 597 GiB / 600 GiB avail
    pgs:

The status is now healthy.
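
Note that "mon clock drift allowed" only relaxes the warning threshold; it may still be worth confirming that ntpd is actually keeping the nodes in sync, for example:

timedatectl status | grep -i 'ntp synchronized'    # should report "yes" on every node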

11 Configure the dashboard

On the master node:

yum -y install ceph-mgr-dashboard    # install on all three nodes
echo "mgr initial modules = dashboard" >> ~/cephcluster/ceph.conf
ceph-deploy --overwrite-conf config push k8s-master k8s-node1 k8s-node2
sudo systemctl restart ceph-mgr@k8s-master
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin ceph123
******************************************************************
***          WARNING: this command is deprecated.              ***
*** Please use the ac-user-* related commands to manage users. ***
******************************************************************
Username and password updated
ceph mgr services
{
    "dashboard": "https://k8s-master:8443/"
}

Open a browser and go to https://192.168.0.234:8443/

Log in with the username admin and the password ceph123.
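
If a browser is not handy, the dashboard endpoint can also be probed from the command line; -k skips verification of the self-signed certificate created above:

curl -k -I https://192.168.0.234:8443/    # expect an HTTP 200 or a redirect to the login page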

12 Usage examples

https://kubernetes.io/zh/docs/concepts/storage/volumes/#cephfs
https://github.com/kubernetes/examples/tree/master/volumes/cephfs
https://github.com/kubernetes/examples/blob/master/volumes/cephfs/cephfs.yaml
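
As a rough illustration of how the linked examples are used, the sketch below mounts CephFS into a pod through the in-tree cephfs volume plugin. It assumes an MDS and a CephFS filesystem have already been created on this cluster (not covered in this article); the Secret name ceph-secret and the pod name cephfs-test are placeholders.

# Store the admin key in a Secret first (the cephfs plugin expects it under the "key" field)
kubectl create secret generic ceph-secret \
  --from-literal=key="$(ceph auth get-key client.admin)"

# Minimal pod that mounts CephFS at /mnt/cephfs
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cephfs
      mountPath: /mnt/cephfs
  volumes:
  - name: cephfs
    cephfs:
      monitors:                      # the mon addresses of this cluster
      - 192.168.0.234:6789
      - 192.168.0.235:6789
      - 192.168.0.236:6789
      user: admin
      secretRef:
        name: ceph-secret
      readOnly: false
EOF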

13 References

https://www.cnblogs.com/weiwei2021/p/14060186.html
https://blog.csdn.net/weixin_43902588/article/details/109147778
https://www.cnblogs.com/huchong/p/12435957.html
https://www.cnblogs.com/sisimi/p/7700608.html
