Deploying a Ceph Environment on CentOS 7.6

Test environment:

Node name   Node IP          Disk       Role
Node-1      10.10.1.10/24    /dev/sdb   Monitor node
Node-2      10.10.1.20/24    /dev/sdb   OSD node
Node-3      10.10.1.30/24    /dev/sdb   OSD node

Steps:

  1. Host configuration

1.1. Set the hostname on each of the three hosts

[root@Node-1 ~]# hostnamectl set-hostname Node-1

[root@Node-2 ~]# hostnamectl set-hostname Node-2

[root@Node-3 ~]# hostnamectl set-hostname Node-3

1.2. Add the following entries to /etc/hosts on all three hosts:

[root@Node-1 ~]# vi /etc/hosts

10.10.1.10  Node-1

10.10.1.20  Node-2

10.10.1.30  Node-3

1.3. Disable the firewall and SELinux on all three hosts

[root@Node-1 ~]# systemctl stop firewalld.service

[root@Node-1 ~]# systemctl disable firewalld.service

[root@Node-1 ~]# vi /etc/sysconfig/selinux

SELINUX=disabled
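
The SELINUX=disabled setting only takes effect after a reboot. A common companion step (not shown above) is to stop enforcement immediately on each node:

[root@Node-1 ~]# setenforce 0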

1.4. Create the cluster user cephd on each host

[root@Node-1 ~]# useradd cephd
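
ceph-deploy later runs commands over SSH as cephd, so the user normally also needs a password and passwordless sudo on every node. A minimal sketch of that setup, along the lines of the usual ceph-deploy preflight (the sudoers policy below is an assumption, adjust to local policy):

[root@Node-1 ~]# passwd cephd

[root@Node-1 ~]# echo "cephd ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/cephd

[root@Node-1 ~]# chmod 0440 /etc/sudoers.d/cephd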

1.5. Configure passwordless SSH access for cephd from the primary node

[root@Node-1 ~]# su - cephd

[cephd@node-1 ~]$ ssh-keygen -t rsa

[cephd@node-1 ~]$ ssh-copy-id cephd@Node-2

[cephd@node-1 ~]$ ssh-copy-id cephd@Node-3

[cephd@node-1 ~]$ cd .ssh/

[cephd@node-1 .ssh]$ vi config

Host Node-1
    Hostname Node-1
    User     cephd

Host Node-2
    Hostname Node-2
    User     cephd

Host Node-3
    Hostname Node-3
    User     cephd
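
ssh refuses to use a per-user config file that is writable by the group or other users, so tightening its permissions after editing avoids a "Bad owner or permissions" error:

[cephd@node-1 .ssh]$ chmod 600 config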

1.6. Switch to the Aliyun yum mirror

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

yum clean all

yum makecache

1.7. Install ceph

[root@Node-1 ~]# yum -y install ceph

1.8. Install ceph-deploy

[root@Node-1 ~]# yum -y install ceph-deploy
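
Neither ceph nor ceph-deploy ships in the stock CentOS 7 repositories, so the two installs above usually also need EPEL plus a Ceph repository. A minimal sketch of such a repo file, assuming the Mimic release and the standard download.ceph.com layout (release name and URLs are assumptions, adjust to the version being deployed):

[root@Node-1 ~]# yum -y install epel-release

[root@Node-1 ~]# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-mimic/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-mimic/el7/noarch/
gpgcheck=0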

1.9. Create the cluster directory and generate the initial cluster configuration

[cephd@node-1 ~]$ mkdir cluster

[cephd@node-1 ~]$ cd cluster

[cephd@node-1 cluster]$ ceph-deploy new Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ vi ceph.conf

[global]

fsid = 77472f89-02d6-4424-8635-67482f090b09 

mon_initial_members = Node-1

mon_host = 10.10.1.10

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd_pool_default_size = 2

public_network = 10.10.1.0/24

2.0. Install Ceph on all nodes with ceph-deploy

[cephd@node-1 cluster]$ sudo ceph-deploy install Node-1 Node-2 Node-3

2.1. Configure the initial monitor

[cephd@node-1 cluster]$ sudo ceph-deploy mon create-initial

[cephd@node-1 cluster]$ ls -l

total 164

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-mds.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-mgr.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-osd.keyring

-rw------- 1 cephd cephd     71 Jun 21 10:31 ceph.bootstrap-rgw.keyring

-rw------- 1 cephd cephd     63 Jun 21 10:31 ceph.client.admin.keyring

-rw-rw-r-- 1 cephd cephd    249 Jun 21 10:20 ceph.conf

-rw-rw-r-- 1 cephd cephd 139148 Jul  5 19:20 ceph-deploy-ceph.log

-rw------- 1 cephd cephd     73 Jun 21 10:18 ceph.mon.keyring

[cephd@node-1 cluster]$
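
Two steps that typically follow mon create-initial (not shown above, assumed here) are pushing ceph.conf and the admin keyring to every node so that plain ceph commands work there, and creating manager daemons, which Luminous and later releases require:

[cephd@node-1 cluster]$ sudo ceph-deploy admin Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ sudo ceph-deploy mgr create Node-1 Node-2 Node-3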

2.2. Check the cluster status

[cephd@node-1 cluster]$ ceph -s

  cluster:

    id:     77472f89-02d6-4424-8635-67482f090b09

    health: HEALTH_OK

 

  services:

    mon: 1 daemons, quorum Node-1

    mgr: Node-1(active), standbys: Node-2, Node-3

    mds: bjdocker-1/1/1 up  {0=Node-1=up:active}, 2 up:standby

    osd: 3 osds: 3 up, 3 in

 

  data:

    pools:   2 pools, 128 pgs

    objects: 23 objects, 5.02MiB

    usage:   3.07GiB used, 207GiB / 210GiB avail

    pgs:     128 active+clean

 

[cephd@node-1 cluster]$

2.3. Create pools

[cephd@node-1 cluster]$ ceph osd pool create  store 64

[cephd@node-1 cluster]$ ceph osd pool create  app 64

[root@node-1 ~]# rados df

POOL_NAME USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD      WR_OPS WR     

app            0B       0      0      0                  0       0        0      0      0B  47077 91.8GiB

store     5.02MiB      23      0     46                  0       0        0    126 13.9MiB   3698 6.78MiB

total_objects    23

total_used       3.07GiB

total_avail      207GiB

total_space      210GiB

[root@node-1 ~]#

2.4. Create the OSDs

ceph-deploy osd create --data /dev/sdb Node-1

ceph-deploy osd create --data /dev/sdb Node-2

ceph-deploy osd create --data /dev/sdb Node-3
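
After the three runs complete, the OSDs should report as up and in (this is what produces the "osd: 3 osds: 3 up, 3 in" line in the status output above); a quick check from the admin node:

[cephd@node-1 cluster]$ ceph osd tree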

2.5. Create the mount point /data on each host

[root@node-1 ~]# mkdir /data

2.6. Create the CephFS filesystem

[cephd@node-1 cluster]$ sudo ceph-deploy mds create Node-1 Node-2 Node-3

[cephd@node-1 cluster]$ sudo ceph fs new bjdocker app store

[cephd@node-1 cluster]$ ceph mds stat

bjdocker-1/1/1 up  {0=Node-1=up:active}, 2 up:standby

[cephd@node-1 cluster]$
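
Note the argument order of ceph fs new: the metadata pool comes first and the data pool second, so in this layout the app pool holds the CephFS metadata and the store pool holds the file data:

ceph fs new <fs-name> <metadata-pool> <data-pool>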

2.7. Mount the CephFS filesystem

mount -t ceph 10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/ /data -o name=admin,secret=AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==

[cephd@node-1 cluster]$ cat ceph.client.admin.keyring

[client.admin]

        key = AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==

[cephd@node-1 cluster]$

[cephd@node-1 cluster]$ df -h

Filesystem                                         Size  Used Avail Use% Mounted on

/dev/mapper/centos-root                             50G  2.8G   48G   6% /

devtmpfs                                           3.9G     0  3.9G   0% /dev

tmpfs                                              3.9G     0  3.9G   0% /dev/shm

tmpfs                                              3.9G  8.9M  3.9G   1% /run

tmpfs                                              3.9G     0  3.9G   0% /sys/fs/cgroup

/dev/mapper/centos-home                             67G   33M   67G   1% /home

/dev/sda1                                         1014M  163M  852M  17% /boot

tmpfs                                              799M     0  799M   0% /run/user/0

10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/   99G     0   99G   0% /data

[cephd@node-1 cluster]$
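
Passing the admin key on the mount command line leaves it in the shell history and process list. mount.ceph also accepts a secretfile option, and the mount can be made persistent through /etc/fstab; a sketch reusing the key shown above (the /etc/ceph/admin.secret path is an arbitrary choice):

[root@node-1 ~]# echo "AQBO6gxdoWbLMBAAJlpIoLRpHlBFNCyVAejV+g==" > /etc/ceph/admin.secret

[root@node-1 ~]# chmod 600 /etc/ceph/admin.secret

[root@node-1 ~]# mount -t ceph 10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/ /data -o name=admin,secretfile=/etc/ceph/admin.secret

A matching /etc/fstab entry would look like:

10.10.1.10:6789,10.10.1.20:6789,10.10.1.30:6789:/  /data  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0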

Calculating cluster PG counts

Total PGs = (total OSDs × 100) / max replica count

PG count per pool:

PGs per pool = ((total OSDs × 100) / max replica count) / number of pools
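
Plugging in this cluster's numbers as a sanity check: (3 OSDs × 100) / 2 replicas = 150, taken down to the nearest power of two gives 128, matching the 128 pgs reported by ceph -s; dividing by the 2 pools gives 75, rounded to 64 PGs per pool, which is the value passed to ceph osd pool create above.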

Cleaning up after a failed installation:

ceph-deploy purgedata [HOST] [HOST...]

ceph-deploy forgetkeys
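
ceph-deploy also has a purge subcommand that removes the Ceph packages themselves in addition to the data, which is useful before a completely fresh retry:

ceph-deploy purge [HOST] [HOST...]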

Useful commands:

[root@node-1 ceph]# ceph -s          // cluster health status

[root@node-1 ceph]# ceph osd tree    // list the OSDs

 

Reposted from: https://www.cnblogs.com/networking/p/11144620.html
