K8s (Dynamic Storage) with Ceph 14.2.13 Nautilus & Ceph Cluster Deployment (Part 2)

1. Set the hostname and add /etc/hosts entries (all machines)
hostnamectl set-hostname ceph-node01
[root@ceph-node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.8.100  ceph-node01
192.168.8.101  ceph-node02
192.168.8.102   ceph-node03
2. Generate an SSH key and distribute it to every node in the Ceph cluster
# Generate the key pair
ssh-keygen -t rsa -P ""
# Copy the public key to each node
ssh-copy-id -i .ssh/id_rsa.pub <node-name>
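For a three-node cluster the key distribution can be looped. A small sketch, assuming the hostnames from step 1 resolve via /etc/hosts:

# Push the public key to every node (including the local one)
for node in ceph-node01 ceph-node02 ceph-node03; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@${node}
done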
3. Install the NTP service and synchronize time
yum -y install ntp
[root@ceph-node01 ~]# egrep -v "#|^$" /etc/ntp.conf 
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1 
restrict ::1
server ntp1.aliyun.com <<<<--------- Aliyun's public NTP server
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

# The configuration above is enough to verify that NTP works. Only one node needs to sync against an external source; point the other nodes at it, as in the sketch below.
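For example, on ceph-node02 and ceph-node03 the server line can point at ceph-node01 instead of a public server. A minimal sketch, assuming ceph-node01 is the chosen time source:

# On ceph-node02 / ceph-node03: comment out the default "server ..." lines, then:
echo "server ceph-node01 iburst" >> /etc/ntp.conf
systemctl enable ntpd
systemctl restart ntpd
ntpq -p        # verify the peer and offset after a minute or two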
4. Configure the yum repositories (sync to all machines)
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Configure the Ceph repo; this one you write yourself, based on the Aliyun mirror, as follows:
cat <<END >/etc/yum.repos.d/ceph.repo
[noarch]
name=noarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch/
enabled=1
gpgcheck=0

[x86_64]
name=x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=0
END
# Clean and rebuild the yum cache
yum clean all
yum makecache
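A quick sanity check (not part of the original steps) that the new repos are active and that the Nautilus packages are visible:

yum repolist enabled | grep -E 'noarch|x86_64'
yum list available ceph --showduplicates | grep 14.2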
5. Install ceph-deploy on the admin node
# The whole Ceph cluster deployment can be driven from the admin node with ceph-deploy. First install ceph-deploy and the packages it depends on; note in particular that python-setuptools is required.
yum install ceph-deploy python-setuptools python2-subprocess32
Deploy the RADOS storage cluster
# Create a dedicated working directory
mkdir /ceph-deploy && cd /ceph-deploy
Initialize the first MON node in preparation for creating the cluster
ceph-deploy new --cluster-network 192.168.8.0/24 --public-network 192.168.8.0/24 ceph-node01
--cluster-network: used for internal replication/recovery traffic between OSDs;
--public-network: used for client-facing traffic;
This generates three files: ceph.conf (the configuration file), ceph-deploy-ceph.log (the deployment log), and ceph.mon.keyring (the monitor keyring).
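The generated ceph.conf will look roughly like the sketch below; the fsid is the one shown later by ceph -s, and the exact key names and ordering may differ slightly between ceph-deploy versions:

[global]
fsid = 8cba000d-bafe-4d00-839a-663d6086052e
public_network = 192.168.8.0/24
cluster_network = 192.168.8.0/24
mon_initial_members = ceph-node01
mon_host = 192.168.8.100
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx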
Install the Ceph packages on all nodes

ceph-deploy install {ceph-node} {…}
Installing this way pushes the packages out automatically, but it is not ideal: ceph-deploy rewrites the yum configuration, including the EPEL repo and the Ceph repo, to point at its built-in upstream sources, so downloads go overseas and are very slow. Manual installation is recommended instead; run the following on every machine:

yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-radosgw ceph-mon
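After the packages install, it is worth confirming that every node ended up on the same Nautilus build (this article targets 14.2.13). A small check from the admin node, assuming the SSH keys from step 2:

for node in ceph-node01 ceph-node02 ceph-node03; do
  ssh ${node} "ceph --version"
done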
6. Copy the config file and admin keyring to every node in the Ceph cluster
Note that ceph.client.admin.keyring is only generated by the mon create-initial step below; the ceph-deploy admin command there distributes both files automatically, but they can also be copied manually, for example:
scp -r ceph.conf ceph.client.admin.keyring root@192.168.8.x:/etc/ceph/

Initialize the cluster

ceph-deploy mon create-initial
ceph-deploy admin ceph-node01 ceph-node02 ceph-node03
ceph-deploy mgr create ceph-node01
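ceph-deploy admin pushes ceph.conf and ceph.client.admin.keyring into /etc/ceph/ on each listed node. If the ceph CLI will also be run there by non-root users, the keyring needs to be made readable (an optional convenience, not in the original article):

# Run on each node that should allow non-root ceph commands
chmod +r /etc/ceph/ceph.client.admin.keyring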

Check the cluster status

[root@ceph-node01 ceph-deploy]# ceph -s
  cluster:
    id:     8cba000d-bafe-4d00-839a-663d6086052e
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-node01 (age 2m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     

Zap (wipe) the data disk

ceph-deploy disk zap ceph-node01 /dev/sdb
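The data disks on the other two nodes need the same treatment. A sketch assuming each node also exposes its spare disk as /dev/sdb, matching the lsblk output below and the osd create commands in the next step:

for node in ceph-node02 ceph-node03; do
  ceph-deploy disk zap ${node} /dev/sdb
done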

Check the disk layout

[root@ceph-node01 ceph-deploy]# lsblk
NAME                                                                                                  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda                                                                                                     8:0    0  20G  0 disk 
├─sda1                                                                                                  8:1    0   1G  0 part /boot
├─sda2                                                                                                  8:2    0   2G  0 part [SWAP]
└─sda3                                                                                                  8:3    0  17G  0 part /
sdb                                                                                                     8:16   0   3G  0 disk 
└─ceph--6f173688--90b3--4656--a5a1--cd729b689185-osd--block--9a7a6909--6318--42a2--9910--e829ef139943 253:0    0   3G  0 lvm  
sr0                                                                                                    11:0    1   1G  0 rom  
7. Add OSDs
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node01 --data /dev/sdb
。。。
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node02 --data /dev/sdb
。。。
[root@ceph-node01 ceph-deploy]# ceph-deploy osd create ceph-node03 --data /dev/sdb
。。。

Check the OSDs

[root@ceph-node01 ceph-deploy]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF 
-1       0.00870 root default                                 
-3       0.00290     host ceph-node01                         
 0   hdd 0.00290         osd.0            up  1.00000 1.00000 
-5       0.00290     host ceph-node02                         
 1   hdd 0.00290         osd.1            up  1.00000 1.00000 
-7       0.00290     host ceph-node03                         
 2   hdd 0.00290         osd.2            up  1.00000 1.00000 

Or:

[root@ceph-node01 ceph-deploy]# ceph osd status
+----+-------------+-------+-------+--------+---------+--------+---------+-----------+
| id |     host    |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+-------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph-node01 | 1026M | 2041M |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph-node02 | 1026M | 2041M |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph-node03 | 1026M | 2041M |    0   |     0   |    0   |     0   | exists,up |
+----+-------------+-------+-------+--------+---------+--------+---------+-----------+
[root@ceph-node01 ceph-deploy]# ceph osd stat
3 osds: 3 up (since 5h), 3 in (since 22h); epoch: e317
[root@ceph-node01 ceph-deploy]# ceph osd dump
epoch 317
fsid 8cba000d-bafe-4d00-839a-663d6086052e
created 2020-11-11 18:43:55.982898
modified 2020-11-12 17:03:37.217045
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 7
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 1 'k8s' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 316 lfor 0/316/314 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
	removed_snaps [1~3]
pool 2 'ceph-demon' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 37 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
	removed_snaps [1~3]
max_osd 3
osd.0 up   in  weight 1 up_from 32 up_thru 300 down_at 31 last_clean_interval [5,28) [v2:192.168.8.100:6800/984,v1:192.168.8.100:6801/984] [v2:192.168.8.100:6802/984,v1:192.168.8.100:6803/984] exists,up 9a7a6909-6318-42a2-9910-e829ef139943
osd.1 up   in  weight 1 up_from 35 up_thru 316 down_at 34 last_clean_interval [9,28) [v2:192.168.8.101:6800/1490,v1:192.168.8.101:6801/1490] [v2:192.168.8.101:6802/1490,v1:192.168.8.101:6803/1490] exists,up a553b9d5-9ddf-4f04-ba73-1afcf5e6c437
osd.2 up   in  weight 1 up_from 30 up_thru 312 down_at 29 last_clean_interval [13,28) [v2:192.168.8.102:6800/1468,v1:192.168.8.102:6801/1468] [v2:192.168.8.102:6802/1468,v1:192.168.8.102:6803/1468] exists,up 5d3573c5-e39b-4c03-81da-3554d158c625

8. Scale out the MONs
[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node02
[root@ceph-node01 ceph-deploy]# ceph-deploy mon add ceph-node03

Because the MONs elect a leader with the Paxos algorithm, the election status can be checked:

[root@ceph-node01 ceph-deploy]# ceph quorum_status
{"election_epoch":66,"quorum":[0,1,2],"quorum_names":["ceph-node01","ceph-node02","ceph-node03"],"quorum_leader_name":"ceph-node01","quorum_age":12145,"monmap":{"epoch":3,"fsid":"8cba000d-bafe-4d00-839a-663d6086052e","modified":"2020-11-11 19:03:23.550345","created":"2020-11-11 18:43:55.614361","min_mon_release":14,"min_mon_release_name":"nautilus","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"ceph-node01","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.8.100:3300","nonce":0},{"type":"v1","addr":"192.168.8.100:6789","nonce":0}]},"addr":"192.168.8.100:6789/0","public_addr":"192.168.8.100:6789/0"},{"rank":1,"name":"ceph-node02","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.8.101:3300","nonce":0},{"type":"v1","addr":"192.168.8.101:6789","nonce":0}]},"addr":"192.168.8.101:6789/0","public_addr":"192.168.8.101:6789/0"},{"rank":2,"name":"ceph-node03","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.8.102:3300","nonce":0},{"type":"v1","addr":"192.168.8.102:6789","nonce":0}]},"addr":"192.168.8.102:6789/0","public_addr":"192.168.8.102:6789/0"}]}}

Check the MON status

[root@ceph-node01 ceph-deploy]# ceph mon stat
e3: 3 mons at {ceph-node01=[v2:192.168.8.100:3300/0,v1:192.168.8.100:6789/0],ceph-node02=[v2:192.168.8.101:3300/0,v1:192.168.8.101:6789/0],ceph-node03=[v2:192.168.8.102:3300/0,v1:192.168.8.102:6789/0]}, election epoch 66, leader 0 ceph-node01, quorum 0,1,2 ceph-node01,ceph-node02,ceph-node03

Show the MON details

[root@ceph-node01 ceph-deploy]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 8cba000d-bafe-4d00-839a-663d6086052e
last_changed 2020-11-11 19:03:23.550345
created 2020-11-11 18:43:55.614361
min_mon_release 14 (nautilus)
0: [v2:192.168.8.100:3300/0,v1:192.168.8.100:6789/0] mon.ceph-node01
1: [v2:192.168.8.101:3300/0,v1:192.168.8.101:6789/0] mon.ceph-node02
2: [v2:192.168.8.102:3300/0,v1:192.168.8.102:6789/0] mon.ceph-node03
9. Create an RBD image for Kubernetes to use
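The commands below assume an RBD pool named k8s already exists; it appears in the ceph osd dump output above, but its creation is not shown in this article. A minimal sketch of creating it, with the pg/pgp counts taken from that dump:

ceph osd pool create k8s 64 64
ceph osd pool application enable k8s rbd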

Create a 2 GiB image in the k8s pool

[root@ceph-node01 ceph-deploy]# rbd create -p k8s --image rbd-demo2.img --size 2G

List the images in the pool

[root@ceph-node01 ceph-deploy]# rbd -p k8s ls
rbd-demo2.img
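The CentOS 7 3.10 kernel's rbd client cannot map images that carry the newer feature bits Nautilus enables by default, so it is common to strip them before handing the image to Kubernetes. This is only needed if the map fails with a feature-mismatch error; on older kernels exclusive-lock may have to be disabled as well:

rbd feature disable k8s/rbd-demo2.img object-map fast-diff deep-flatten
rbd info k8s/rbd-demo2.img        # confirm which features remain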

Create a user and keyring for Kubernetes

[root@ceph-node01 ceph-deploy]# ceph auth get-or-create client.kube mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=k8s'
[client.kube]
	key = AQCoCaxfomqKCBAA7htV95TZtECWZOsOH5dnCA==
	
[root@ceph-node01 ceph-deploy]# ceph auth get-key client.admin | base64
QVFCcndLdGZtMCtaT2hBQWFZMVpZdlJZVEhXbE5TNS82SmlVY0E9PQ==

[root@ceph-node01 ceph-deploy]# ceph auth get-key client.kube | base64
QVFDb0NheGZvbXFLQ0JBQTdodFY5NVRadEVDV1pPc09INWRuQ0E9PQ==

Copy them to all of the k8s nodes

scp -r ceph.client.admin.keyring ceph.conf root@192.168.8.111:/etc/ceph/
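On the Kubernetes side these keys normally end up in Secrets referenced by the RBD StorageClass (the full procedure is in Part 1). A minimal sketch using the in-tree kubernetes.io/rbd provisioner, with hypothetical secret names in the kube-system namespace; --from-literal takes the raw key (kubectl base64-encodes it itself), which is why the plain ceph auth get-key output is used rather than the base64 values shown above:

# Hypothetical names; run on a node with ceph-common and the admin keyring available
kubectl -n kube-system create secret generic ceph-admin-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)"
kubectl -n kube-system create secret generic ceph-kube-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.kube)"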
Part 1 of this series covers the detailed steps for integrating with k8s.