Deploying Ceph on bare metal

Deploying a 3-node Ceph cluster on CentOS 8 Stream

Manual bare-metal deployment

Reference posts

  1. CephFS deployment plan

    Hostname               Services                                   Version
    ceph0 (172.31.6.78)    mds.ceph0, mgr.ceph0, mon.ceph0, client    16.2.2
    ceph1 (172.31.6.79)    osd.0                                      16.2.2
    ceph2 (172.31.6.80)    osd.1                                      16.2.2
  2. Notes:

    1. CentOS 8 Stream runs chronyd by default; verify that this time-synchronization service is running on every server (a quick check is shown after this list).
    2. yum install yum-plugin-priorities reports that no such package exists, and the file /etc/yum/pluginconf.d/priorities.conf is absent as well. This package does not need to be installed.
    3. The configuration files and shell scripts in the conf folder are required.
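
    A quick way to run the chronyd check from note 1 on all nodes, once the Ansible inventory below is in place (a minimal sketch; chronyd is the stock time service on CentOS 8 Stream):

    # Confirm chronyd is active and its time sources are reachable on every node
    ansible ceph -m shell -a "systemctl is-active chronyd && chronyc sources"
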
  3. Install Ansible and configure the Ansible hosts inventory as follows:

    cat /etc/ansible/hosts|grep -A 5 ceph
    [ceph]
    172.31.6.78
    172.31.6.79
    172.31.6.80
    [ceph0]
    172.31.6.78
    [ceph1]
    172.31.6.79
    [ceph2]
    172.31.6.80
    
  4. Set up password-less SSH login for the root account between the servers (a sketch follows).
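
    A minimal sketch of one way to do this from the Ansible control node (assumes the root password is still usable for the initial key copy; the IPs come from the inventory above):

    # Generate a key pair on the control node (skip if /root/.ssh/id_rsa already exists)
    ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
    # Push the public key to every Ceph node, then verify password-less login works
    for ip in 172.31.6.78 172.31.6.79 172.31.6.80; do
        ssh-copy-id root@$ip
        ssh root@$ip hostname
    done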

  5. Complete the basic server setup

    ansible ceph -m shell -a "rpm --import 'https://download.ceph.com/keys/release.asc'"
    ansible ceph -m shell -a "yum install -y epel-release"
    ansible ceph -m shell -a "yum install -y snappy leveldb gdisk gperftools-libs"
    ansible ceph -m copy -a "src=./ceph.repo dest=/etc/yum.repos.d/"
    # Elapsed time: 2021-05-08 18:16:51 to 18:26:04 CST
    ansible ceph -m shell -a "yum install -y ceph"
    ansible ceph0 -m shell -a "hostnamectl set-hostname ceph0"
    ansible ceph1 -m shell -a "hostnamectl set-hostname ceph1"
    ansible ceph2 -m shell -a "hostnamectl set-hostname ceph2"
    ansible ceph -m copy -a "src=./ceph_perpare_env.sh dest=/root/"
    ansible ceph -m shell -a "bash /root/ceph_perpare_env.sh"  
    
    ansible ceph -m copy -a "src=./ceph_hosts.conf dest=/root/"
    ansible ceph -m shell -a "cat /root/ceph_hosts.conf >> /etc/hosts"
    
    ansible ceph -m copy -a "src=./ceph.conf dest=/etc/ceph/"
    # Edit the uuid, host, and ip entries in ceph.conf first (a sketch follows this block)
    ansible ceph -m shell -a "chown -R ceph:ceph /etc/ceph/"
    
  6. Deploy the primary-node roles: mon, mgr, mds

    ansible ceph0 -m copy -a "src=./ceph_key.sh dest=/root/"
    ansible ceph0 -m shell -a "bash /root/ceph_key.sh" -vv
    
    ansible ceph0 -m copy -a "src=./ceph_deploy.sh dest=/root/"
    ansible ceph0 -m shell -a "bash /root/ceph_deploy.sh" -vv
    
    ansible ceph0 -m copy -a "src=./ceph_pool_cephfs.sh dest=/root/"
    
    ansible ceph0 -m shell -a "bash /root/ceph_pool_cephfs.sh" -vv
    
  7. Deploy the secondary-node role: osd

    ansible ceph1 -m copy -a "src=./ceph_osd.sh dest=/root/" 
    ansible ceph2 -m copy -a "src=./ceph_osd.sh dest=/root/" 
    ansible ceph1 -m shell -a "bash /root/ceph_osd.sh" -vv 
    ansible ceph2 -m shell -a "bash /root/ceph_osd.sh" -vv 
    
  8. Deploy and test the client role on the test server

    1. Deployment

      ansible ceph0 -m copy -a "src=./ceph_mount_cephfs.sh dest=/root/"
      ansible ceph0 -m copy -a "src=./ceph_dashboard_env.sh dest=/root/"
      
      ansible ceph0 -m shell -a "bash /root/ceph_mount_cephfs.sh" -vv
      ansible ceph0 -m shell -a "bash /root/ceph_dashboard_env.sh" -vv
      
    2. Check the mount

      [root@ceph0 /]#  df -i |grep ceph
      172.31.6.78:/            129     -        -     - /home/echocephfs
      [root@ceph0 /]#  df -h |grep ceph  
      172.31.6.78:/         95G  8.0M   95G   1% /home/echocephfs
      
    3. Test the CephFS write speed

      [root@ceph0 echocephfs]# dd if=/dev/zero of=./test count=2 bs=2M 
      2+0 records in
      2+0 records out
      4194304 bytes (4.2 MB, 4.0 MiB) copied, 1.02905 s, 4.1 MB/s
      
    4. Log in to the dashboard at https://ceph0:8443/#/dashboard (user/password: admin/admin)

  9. Status-check commands

[root@ceph0 echocephfs]# ceph mds stat
cephfs:1 {0=ceph0=up:active}
[root@ceph0 echocephfs]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph0 echocephfs]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         0.19537  root default                             
-3         0.09769      host ceph1                           
  0    hdd  0.09769          osd.0       up   1.00000  1.00000
-5         0.09769      host ceph2                           
  1    hdd  0.09769          osd.1       up   1.00000  1.00000
[root@ceph0 echocephfs]# ceph mon stat
e1: 1 mons at {ceph0=v1:172.31.6.78:6789/0}, election epoch 3, leader 0 ceph0, quorum 0 ceph0
[root@ceph0 echocephfs]# ceph mgr stat
{
  "epoch": 1652,
  "available": false,
  "active_name": "",
  "num_standby": 0
}
[root@ceph0 echocephfs]# systemctl list-units --type=service|grep ceph
ceph-crash.service                 loaded active running Ceph crash dump collector                                                    
ceph-mon@ceph0.service             loaded active running Ceph cluster monitor daemon
[root@ceph0 echocephfs]# systemctl list-unit-files|grep enabled|grep ceph
  ceph-crash.service                         enabled  
  ceph-mds.target                            enabled  
  ceph-mgr.target                            enabled  
  ceph-mon.target                            enabled  
  ceph-osd.target                            enabled  
  ceph.target                                enabled  

  [root@ceph1 ~]# ls /var/lib/ceph/osd/
  ceph-0
  [root@ceph1 ~]# mount |grep osd
  tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime,seclabel)
  10. Resolved issues:

    1. The ceph-* services did not start properly after deployment. Fix: reboot the servers (reboot now).

    2. The ceph-osd service would not start. Fix: check whether the ceph account has permission on the files under /etc/ceph/.

    3. mgr would not start even though the keyring permissions were correct. Fix: check the keyring file's size and contents, then copy the keyring file over again.

    4. An interrupted ceph-osd deployment could not be resumed. Fix: unmount the disk and delete its partitions. Reference command:

    dmsetup remove --force /dev/mapper/ceph--2d09a493--fe93--4a28--9300--1e92004a268c-osd--block--d76081df--2f4a--490f--a6b2--c2a61e5ae523
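
    If the disk still holds leftover LVM or partition metadata, a hedged alternative is to have ceph-volume wipe it (the device name is an assumption):

    ceph-volume lvm zap --destroy /dev/sdb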
    
    5. Finding the dashboard URL. Fix: look up the access address with ceph mgr services.

    6. Commands to enable msgr2 and start mon, mgr, and mds:

    ceph mon enable-msgr2
    ceph-mgr -i ceph0
    ceph-mds --cluster ceph -i ceph0 -m ceph0:6789
    
  11. Pending issues

    1. Auto-mount at boot (a possible fstab entry is sketched below)
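
      A hedged sketch of an /etc/fstab entry for the kernel CephFS client (the secretfile path is an assumption; that file should contain only the admin key, and _netdev defers the mount until the network is up):

      172.31.6.78:6789:/  /home/echocephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0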

Manual container deployment

Reference posts

Advantages: fast to deploy, simple to operate

Disadvantages: the configuration is baked in, so changes and adjustments are cumbersome
