Distributed Storage -- Quick Ceph Deployment

Below are brief notes on quickly deploying a Ceph environment on CentOS 7 with ceph-deploy:

1. Environment Overview

Hostname      Internal IP      Role
ceph_admin    172.16.32.2      admin
ceph_mon1     172.16.32.4      mon
ceph_node1    172.16.32.10     osd
ceph_node2    172.16.32.12     osd

Disks

Hostname      Internal IP      Device
ceph_node1    172.16.32.10     /dev/vdb
ceph_node2    172.16.32.12     /dev/vdb

OS: CentOS 7.7
Ceph version: 10.2.11 (jewel)

2. Environment Preparation

2.1 Passwordless SSH between servers

[root@localhost ~]# ssh-keygen -t rsa 
[root@localhost ~]# ssh-copy-id  -i /root/.ssh/id_rsa.pub 172.16.32.4
[root@localhost ~]# ssh-copy-id  -i /root/.ssh/id_rsa.pub 172.16.32.10
[root@localhost ~]# ssh-copy-id  -i /root/.ssh/id_rsa.pub 172.16.32.12
[root@localhost ~]# ssh-copy-id  -i /root/.ssh/id_rsa.pub 172.16.32.2
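
A quick way to confirm the keys were copied correctly is a non-interactive ssh against each host; every command should print the remote hostname without asking for a password (a minimal check using the IPs from the table above):

[root@ceph_admin ~]# for ip in 172.16.32.2 172.16.32.4 172.16.32.10 172.16.32.12; do ssh -o BatchMode=yes root@$ip hostname; done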

2.2 Set the hostnames

# Run the corresponding command on each host to set its hostname
[root@localhost ~]# hostnamectl set-hostname ceph_admin
[root@localhost ~]# hostnamectl set-hostname ceph_mon1
[root@localhost ~]# hostnamectl set-hostname ceph_node1
[root@localhost ~]# hostnamectl set-hostname ceph_node2

2.3 Install ansible on the ceph_admin host and generate /etc/ansible/hosts

# Install ansible
[root@ceph_admin ~]# yum -y install ansible
# Configure the ansible inventory
[root@ceph_admin ~]# cat > /etc/ansible/hosts <<EOF
[ceph_admins]
ceph_admin ansible_ssh_host=172.16.32.2

[ceph_mons]
ceph_mon1 ansible_ssh_host=172.16.32.4

[ceph_nodes]
ceph_node1 ansible_ssh_host=172.16.32.10
ceph_node2 ansible_ssh_host=172.16.32.12
EOF

# Update the system hosts file
[root@ceph_admin ~]# echo '172.16.32.2 ceph_admin' >> /etc/hosts
[root@ceph_admin ~]# echo '172.16.32.4 ceph_mon1'>> /etc/hosts
[root@ceph_admin ~]# echo '172.16.32.10 ceph_node1'>> /etc/hosts
[root@ceph_admin ~]# echo '172.16.32.12 ceph_node2'>> /etc/hosts
# Distribute the hosts file to all hosts
[root@ceph_admin ~]# ansible all -m copy -a "src=/etc/hosts dest=/etc/hosts"
# Disable SELinux and the firewall
[root@ceph_admin ~]# ansible all -m shell -a "setenforce 0"
[root@ceph_admin ~]# ansible all -m shell -a "systemctl disable firewalld && systemctl stop firewalld"

# Time synchronization
[root@ceph_admin ~]# ansible all -m shell -a "yum -y install ntp ntpdate ntp-doc && systemctl restart ntpd && systemctl enable ntpd"

2.4 Configure the Aliyun YUM repos and the Ceph YUM repo

# Download the Aliyun base and epel repos
[root@ceph_admin ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@ceph_admin ~]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
 
# Add the Ceph repo
[root@ceph_admin ~]# cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority =1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority =1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
EOF
# Distribute the YUM repo files to all hosts
[root@ceph_admin ~]# ansible all -m copy -a "src=/etc/yum.repos.d/epel.repo dest=/etc/yum.repos.d/epel.repo"
[root@ceph_admin ~]# ansible all -m copy -a "src=/etc/yum.repos.d/CentOS-Base.repo dest=/etc/yum.repos.d/CentOS-Base.repo"
[root@ceph_admin ~]# ansible all -m copy -a "src=/etc/yum.repos.d/ceph.repo dest=/etc/yum.repos.d/ceph.repo"

3. Preparation Before Building the Ceph Cluster

3.1 Create the deployment user

[root@ceph_admin ~]# ansible all -m shell -a "useradd -d /home/cephuser -m cephuser &&echo 'cephuser'|passwd --stdin cephuser && echo 'cephuser ALL = (root) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/cephuser&& chmod 0440 /etc/sudoers.d/cephuser&& sed -i s'/Defaults requiretty/#Defaults requiretty'/g /etc/sudoers"

3.2 Passwordless SSH for the cephuser user

[root@ceph_admin ~]# su - cephuser
[cephuser@ceph_admin ~]$ ssh-keygen -t rsa 
[cephuser@ceph_admin ~]$ ssh-copy-id  -i /home/cephuser/.ssh/id_rsa.pub 172.16.32.4
[cephuser@ceph_admin ~]$ ssh-copy-id  -i /home/cephuser/.ssh/id_rsa.pub 172.16.32.10
[cephuser@ceph_admin ~]$ ssh-copy-id  -i /home/cephuser/.ssh/id_rsa.pub 172.16.32.12
[cephuser@ceph_admin ~]$ ssh-copy-id  -i /home/cephuser/.ssh/id_rsa.pub 172.16.32.2

3.3 Prepare the disks

[cephuser@ceph_admin ~]$ ansible ceph_node1 -m shell -a " mkfs.xfs -f /dev/vdb  &&  blkid -o value -s TYPE /dev/vdb "
[cephuser@ceph_admin ~]$ ansible ceph_node2 -m shell -a " mkfs.xfs -f /dev/vdb  &&  blkid -o value -s TYPE /dev/vdb "

4. Deploy the Ceph Cluster

4.1 On the admin node, create a working directory and generate a cluster with ceph-deploy

[cephuser@ceph_admin ~]$ mkdir -p cluster
[cephuser@ceph_admin ~]$ cd cluster/
[cephuser@ceph_admin cluster]$ sudo yum update -y && sudo yum install ceph-deploy -y
# The argument after new is the hostname of the mon node. When it finishes, use ls and cat in the current directory to check the ceph-deploy output: there should be a Ceph configuration file, a monitor keyring, and a log file. See ceph-deploy new -h for details.
[cephuser@ceph_admin cluster]$ ceph-deploy new ceph_mon1
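
For reference, after ceph-deploy new finishes the working directory typically contains something like the following (file names taken from a jewel-era ceph-deploy run; they may differ slightly between versions):

[cephuser@ceph_admin cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring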

# Change the default number of replicas in the Ceph configuration file from 3 to 2, so the cluster can reach the active+clean state with only two OSDs. Add the following lines to the [global] section. (Note: mon_host must be on the same subnet as public network!)
[cephuser@ceph_admin cluster]$ cat ceph.conf 
[global]
public network = 172.16.32.0/20
osd pool default size = 2
fsid = 79b1519d-e4fe-43c3-ba2e-179a0b41f106
mon_initial_members = ceph_mon1
mon_host = 172.16.32.4
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

4.2 Install Ceph

# This takes a while; please be patient
[cephuser@ceph_admin cluster]$ ceph-deploy install ceph_admin ceph_mon1 ceph_node1 ceph_node2 
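
If ceph-deploy rewrites the repo files on the target hosts and pulls a different release than the Aliyun jewel repo configured earlier, it can be told to leave the existing repositories alone; --no-adjust-repos is a standard ceph-deploy install option, but check ceph-deploy install -h for your version before relying on it:

[cephuser@ceph_admin cluster]$ ceph-deploy install --no-adjust-repos ceph_admin ceph_mon1 ceph_node1 ceph_node2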

4.3 Create the initial monitor(s) and gather all keys

[cephuser@ceph_admin cluster]$ ceph-deploy mon create-initial
(After this completes, the following keyrings should appear in the current directory:)
ceph.bootstrap-mds.keyring  
ceph.bootstrap-osd.keyring  
ceph.bootstrap-rgw.keyring  
ceph.client.admin.keyring
(Note from the official docs: if this step fails with a message like "Unable to find /etc/ceph/ceph.client.admin.keyring", make sure the IP specified for the monitor in ceph.conf is the public IP, not a private one.)

4.4 Add OSDs to the cluster

# List all available disks on the OSD nodes
[cephuser@ceph_admin cluster]$ ceph-deploy disk list ceph_node1 ceph_node2 

# Zap all existing partitions on the OSD node disks
[cephuser@ceph_admin cluster]$ ceph-deploy disk zap ceph_node1:/dev/vdb ceph_node2:/dev/vdb

# Prepare the OSDs (prepare command)
[cephuser@ceph_admin cluster]$ ceph-deploy osd prepare ceph_node1:/dev/vdb ceph_node2:/dev/vdb 

# Activate the OSDs (note: ceph partitions the disk, so the data partition of /dev/vdb is /dev/vdb1)
[cephuser@ceph_admin cluster]$ ceph-deploy osd activate ceph_node1:/dev/vdb1 ceph_node2:/dev/vdb1 

# On the two OSD nodes, confirm that the disks were mounted successfully:
[root@ceph_admin ~]# ansible ceph_node1 -m shell -a "lsblk"
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user configurable on deprecation. This feature will be removed in version 
2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
ceph_node1 | CHANGED | rc=0 >>
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   30G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   29G  0 part 
  ├─centos-root 253:0    0   27G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  
vdb               8:16   0   20G  0 disk 
├─vdb1            8:17   0   15G  0 part /var/lib/ceph/osd/ceph-0   ## this mount point shows the disk was mounted successfully
└─vdb2            8:18   0    5G  0 part 
sr0              11:0    1  942M  0 rom

4.5 Check the OSDs

[cephuser@ceph_admin cluster]$ ceph-deploy disk list ceph_node1 ceph_node2

4.6 Copy the configuration and admin key to the admin and OSD nodes

# Use ceph-deploy to push the configuration file and admin keyring to the admin node and the Ceph nodes, so you no longer need to specify the monitor address and ceph.client.admin.keyring every time you run a ceph command

[cephuser@ceph_admin cluster]$ ceph-deploy admin ceph_admin ceph_node1 ceph_node2

# Adjust the keyring permissions
[cephuser@ceph_admin cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

5. Verify the Ceph Cluster

# Check the ceph health status
[cephuser@ceph_admin cluster]$ sudo ceph health
HEALTH_OK
[cephuser@ceph_admin cluster]$ sudo ceph -s
    cluster 33bfa421-8a3b-40fa-9f14-791efca9eb96
     health HEALTH_OK
     monmap e1: 1 mons at {ceph_admin=192.168.10.220:6789/0}
            election epoch 3, quorum 0 ceph_admin
     osdmap e14: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v29: 64 pgs, 1 pools, 0 bytes data, 0 objects
            100 MB used, 45946 MB / 46046 MB avail
                  64 active+clean

# Check the ceph osd status
[cephuser@ceph_admin cluster]$ ceph osd stat

# View the osd tree
[cephuser@ceph_admin cluster]$ ceph osd tree
(At this point, a complete Ceph distributed storage cluster has been set up.)

6. Create an MDS (Metadata Server); at least one metadata server must be deployed before the CephFS filesystem can be used

# Choosing pg_num is mandatory because it cannot be calculated automatically. Some commonly used values:
# Fewer than 5 OSDs: set pg_num to 128
# 5 to 10 OSDs: set pg_num to 512
# 10 to 50 OSDs: set pg_num to 4096
# More than 50 OSDs: you need to understand the trade-offs and calculate pg_num yourself
# The pgcalc tool can help when calculating pg_num yourself
[cephuser@ceph_admin cluster]$ ceph-deploy mds create ceph_node1
[cephuser@ceph_admin cluster]$ ceph osd pool create cephfs_data 128
[cephuser@ceph_admin cluster]$ ceph osd pool create cephfs_metadata 128
[cephuser@ceph_admin cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data
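
To confirm the MDS is running and the filesystem exists, the standard status commands can be used:

[cephuser@ceph_admin cluster]$ ceph mds stat
[cephuser@ceph_admin cluster]$ ceph fs ls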

7. Mount CephFS on a Ceph Client

[root@localhost ~]# cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
priority =1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
priority =1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
gpgcheck=0
priority=1
EOF

[root@localhost ~]# yum -y install ceph-common
[root@localhost ~]# mkdir -p /etc/ceph/ /mnt/mycephfs
[root@localhost ~]# cat > /etc/ceph/admin.secret <<EOF
AQATSKdNGBnwLhAAnNDKnH65FmVKpXZJVasUeQ==
EOF
# Mount (secretfile points to the file containing the admin key from ceph.client.admin.keyring)
[root@localhost ~]# mount -t ceph 172.16.32.4:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# Unmount
[root@localhost ~]# umount /mnt/mycephfs
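
To make the mount survive a reboot, a typical /etc/fstab entry for the kernel CephFS client looks like the sketch below (adjust the monitor address and secret file path to your own environment):

# /etc/fstab
172.16.32.4:6789:/    /mnt/mycephfs    ceph    name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev    0    0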

8. References

Official documentation: Storage Cluster Quick Start
