Ceph deployment and Cinder integration

1. Environment

OS: CentOS 7.6

node1
  public-ip: 10.0.0.130
  cluster-ip: 192.168.2.130
  Disks: sda, sdb, sdc (sda is the system disk; sdb and sdc are data disks)
  Roles: ceph-deploy, monitor, mgr, osd

node2
  public-ip: 10.0.0.131
  cluster-ip: 192.168.2.131
  Disks: sda, sdb, sdc (sda is the system disk; sdb and sdc are data disks)
  Roles: monitor, mgr, osd

node3
  public-ip: 10.0.0.132
  cluster-ip: 192.168.2.132
  Disks: sda, sdb, sdc (sda is the system disk; sdb and sdc are data disks)
  Roles: monitor, mgr, osd

2. Set hostnames

Set the hostname on each of the three hosts by running the corresponding commands:

node1

[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# hostname node1

node2

[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# hostname node2

node3

[root@localhost ~]# hostnamectl set-hostname node3
[root@localhost ~]# hostname node3

Once done, close the current shell session and open a new one to see the new hostname take effect in the prompt.

3. Configure the hosts file

Run the following on all three machines to add the name mappings:

echo "10.0.0.130 node1" >> /etc/hosts
echo "10.0.0.131 node2" >> /etc/hosts
echo "10.0.0.132 node3" >> /etc/hosts

4. Create a user and set up passwordless SSH

Create the user (run on all three machines):

useradd -d /home/admin -m admin
echo "123456" | passwd admin --stdin
#grant sudo privileges
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
sudo chmod 0440 /etc/sudoers.d/admin

Set up passwordless login (node1 only):

[root@node1 ~]# su - admin
[admin@node1 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa):
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qfWhuboKeoHQOOMLOIB5tjK1RPjgw/Csl4r6A1FiJYA admin@admin.ops5.bbdops.com
The key's randomart image is:
(randomart image omitted)
[admin@node1 ~]$ ssh-copy-id admin@node1
[admin@node1 ~]$ ssh-copy-id admin@node2
[admin@node1 ~]$ ssh-copy-id admin@node3
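
An optional check that passwordless login works from node1 (each command should print the remote hostname without prompting for a password):

ssh admin@node2 hostname
ssh admin@node3 hostname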

5. Configure time synchronization

Run on all three nodes:

yum -y install ntpdate
ntpdate -u cn.ntp.org.cn

crontab -e
#add the following line to resync every 20 minutes:
*/20 * * * * ntpdate -u cn.ntp.org.cn > /dev/null 2>&1

systemctl reload crond.service
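
Optionally confirm the time source is reachable and the cron entry is in place (ntpdate -q only queries the server, it does not change the clock):

ntpdate -q cn.ntp.org.cn
crontab -l | grep ntpdate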

6. Install ceph-deploy and the Ceph packages

Configure the Tsinghua Ceph mirror repo:
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirror.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirror.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
EOF
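
Optionally refresh the yum metadata and confirm the new repos are visible before installing anything:

sudo yum clean all
sudo yum makecache
sudo yum repolist | grep -i ceph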

Install ceph-deploy
[root@node1 ~]# sudo yum install ceph-deploy

Initialize the mon nodes
Ceph needs packages from the EPEL repo, so run yum install epel-release on every node where Ceph will be installed.

[admin@node1 ~]$ mkdir my-cluster
[admin@node1 ~]$ cd my-cluster

[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
Traceback (most recent call last):
File "/bin/ceph-deploy", line 18, in <module>
from ceph_deploy.cli import main
File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
import pkg_resources
ImportError: No module named pkg_resources
#The error above means pkg_resources (part of python setuptools, pulled in with python-pip) is missing; install it:
[admin@node1 my-cluster]$ sudo yum install epel-release
[admin@node1 my-cluster]$ sudo yum install python-pip
#re-run the initialization
[admin@node1 my-cluster]$ ceph-deploy new node1 node2 node3
[admin@node1 my-cluster]$ ls
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
[admin@node1 my-cluster]$ cat ceph.conf
[global]
fsid = a1132f78-cdc5-43d0-9ead-5b590c60c53d
mon_initial_members = node1, node2, node3
mon_host = 10.0.0.130,10.0.0.131,10.0.0.132
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Edit ceph.conf and add the following configuration (the public/cluster networks must match the environment in section 1):

public network = 10.0.0.0/24
cluster network = 192.168.2.0/24
osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128
osd pool default crush rule = 0
osd crush chooseleaf type = 1
max open files = 131072
ms bind ipv6 = false
[mon]
mon clock drift allowed = 10
mon clock drift warn backoff = 30
mon osd full ratio = .95
mon osd nearfull ratio = .85
mon osd down out interval = 600
mon osd report timeout = 300
mon allow pool delete = true
[osd]
osd recovery max active = 3
osd max backfills = 5
osd max scrubs = 2
osd mkfs type = xfs
osd mkfs options xfs = -f -i size=1024
osd mount options xfs = rw,noatime,inode64,logbsize=256k,delaylog
filestore max sync interval = 5
osd op threads = 2

Install the Ceph software on the specified nodes
[admin@node1 my-cluster]$ ceph-deploy install --no-adjust-repos node1 node2 node3
--no-adjust-repos makes ceph-deploy use the locally configured repos as they are instead of generating the official upstream repo.

Deploy the initial monitors and collect the keys

[admin@node1 my-cluster]$ ceph-deploy mon create-initial

After this step, the following keyrings appear in the current directory:

[admin@node1 my-cluster]$ ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring

Copy the configuration file and key to every cluster node

The configuration file is the generated ceph.conf, and the key is ceph.client.admin.keyring, the default key a Ceph client uses when connecting to the cluster. Copy both to all nodes:

[admin@node1 my-cluster]$ ceph-deploy admin node1 node2 node3
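
An optional sanity check that the files actually landed on the other nodes:

ssh node2 ls /etc/ceph
#should list ceph.conf and ceph.client.admin.keyring (same check for node3)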

7. Deploy ceph-mgr
#The manager daemon was added in the Luminous (L) release; the following command deploys a Manager daemon
[admin@node1 my-cluster]$ ceph-deploy mgr create node1
8. Create OSDs

Run the following on node1.

Disable the firewall (on all three nodes):

systemctl stop firewalld && systemctl disable firewalld

If a node is still missing the Ceph client packages (the ceph-deploy install step above normally covers this), they can also be installed after switching to the Aliyun mirror:

yum -y install centos-release-ceph-luminous && sudo yum -y install ceph-common

Create an OSD. Note: with the ceph-deploy 2.x shipped for recent releases, the device is passed with --data rather than the old node:/dev/sdX form:

ceph-deploy osd create --data /dev/sdb node1
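
The status output below shows six OSDs, one per data disk on each node. A sketch of the remaining commands for this environment, assuming the sdb/sdc layout from the environment table (run from the my-cluster directory on node1):

ceph-deploy osd create --data /dev/sdc node1
ceph-deploy osd create --data /dev/sdb node2
ceph-deploy osd create --data /dev/sdc node2
ceph-deploy osd create --data /dev/sdb node3
ceph-deploy osd create --data /dev/sdc node3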

Check the OSD status
[admin@node1 ~]$ sudo ceph health
HEALTH_OK
[admin@node1 ~]$ sudo ceph -s
cluster:
id: af6bf549-45be-419c-92a4-8797c9a36ee8
health: HEALTH_OK

services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node1(active)
osd: 6 osds: 6 up, 6 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.0 GiB used, 108 GiB / 114 GiB avail
pgs:

By default ceph.client.admin.keyring has mode 600 and is owned by root:root, so running the ceph command directly as the admin user on a cluster node fails with a "missing keyring" error for /etc/ceph/ceph.client.admin.keyring because of insufficient permissions.

Running sudo ceph avoids the problem; to use the ceph command directly without sudo, relax the permissions to 644. Run the following on node1 as the admin user.

[admin@node1 my-cluster]$ ceph -s
2020-03-08 07:59:36.062 7f52d08e0700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2020-03-08 07:59:36.062 7f52d08e0700 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
[errno 2] error connecting to the cluster
[admin@node1 my-cluster]$ sudo chmod 644 /etc/ceph/ceph.client.admin.keyring
[admin@node1 my-cluster]$ ceph -s
cluster:
id: af6bf549-45be-419c-92a4-8797c9a36ee8
health: HEALTH_OK

services:
mon: 3 daemons, quorum node1,node2,node3
mgr: node1(active)
osd: 6 osds: 6 up, 6 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 6.1 GiB used, 108 GiB / 114 GiB avail
pgs:

[admin@node1 my-cluster]$

View the OSDs
[admin@node1 ~]$ sudo ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.11151 root default
-3 0.03717 host node1
0 hdd 0.01859 osd.0 up 1.00000 1.00000
3 hdd 0.01859 osd.3 up 1.00000 1.00000
-5 0.03717 host node2
1 hdd 0.01859 osd.1 up 1.00000 1.00000
4 hdd 0.01859 osd.4 up 1.00000 1.00000
-7 0.03717 host node3
2 hdd 0.01859 osd.2 up 1.00000 1.00000
5 hdd 0.01859 osd.5 up 1.00000 1.00000

9. Enable the MGR dashboard module
Method 1: by command
ceph mgr module enable dashboard
If the command fails with the following error:

Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement
it is because ceph-mgr-dashboard is not installed; install it on the mgr node:

yum install ceph-mgr-dashboard
Method 2: by configuration file

Edit ceph.conf:

vi ceph.conf
[mon]
mgr initial modules = dashboard
#push the configuration to all nodes
[admin@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node1 node2 node3
#restart the mgr
sudo systemctl restart ceph-mgr@node1

Web login configuration
By default, all HTTP connections to the dashboard are protected with SSL/TLS.

#To get the dashboard up and running quickly, generate and install a self-signed certificate with this built-in command:
[root@node1 my-cluster]# ceph dashboard create-self-signed-cert
Self-signed certificate created

#Create a user with the administrator role:
[root@node1 my-cluster]# ceph dashboard set-login-credentials admin admin
Username and password updated

#Check the ceph-mgr services:
[root@node1 my-cluster]# ceph mgr services
{
    "dashboard": "https://node1:8443/"
}
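
Optionally confirm the dashboard is actually listening on the mgr node before opening it in a browser (8443 is the default SSL port shown above):

ss -tlnp | grep 8443
curl -k https://node1:8443/
#then browse to https://node1:8443/ and log in with the admin / admin credentials set above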


10. Initialize the environment for integrating Ceph with OpenStack
Reference:
https://blog.csdn.net/CN_TangZheng/article/details/104745364/

Switch to the admin user: su - admin

(1) On the control node, create client.cinder and set its permissions
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images'

[client.cinder]
key = AQDrobJfdko9BRAAtITBbg0777yof2vFxuGFNA==

(2) On the control node, create client.glance and set its permissions
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=images'
[client.glance]
key = AQBAorJfXXdbAxAAWs95j7rmKWjNJmBSbAi3vA==

(3) Distribute the key to the node being integrated. Since Glance itself runs on the control node, the key does not need to be sent to any other node.
chown glance.glance /etc/ceph/ceph.client.glance.keyring  //set the owner and group

(4) Distribute the client.cinder key to the node being integrated. Since Cinder is also installed on the controller by default, it does not need to be sent to other nodes.

ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQBQ7GRett8kAhAA4Q2fFNQybe0RJaEubK8eFQ==
chown cinder.cinder /etc/ceph/ceph.client.cinder.keyring
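
Optionally verify the two new Ceph users and their caps (the output should match the caps granted above):

sudo ceph auth get client.glance
sudo ceph auth get client.cinder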

11. OpenStack integration

Reference: https://www.cnblogs.com/zengzhihua/p/9995456.html

1. On the OpenStack side
    1. Install the Ceph client:
        yum install -y ceph
        
    2. Copy the Ceph configuration file and ceph.client.admin.keyring from any Ceph cluster node to all OpenStack nodes (139.159.3.12 is a Ceph node):
        scp root@139.159.3.12:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
        
        scp root@139.159.3.12:/etc/ceph/ceph.conf /etc/ceph/
    
    3. Create the storage pools on Ceph
        Create the pools for Glance, Nova and Cinder:
            ceph osd pool create images 2
            ceph osd pool create vms 2
            ceph osd pool create volumes 2
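
        On Luminous and later, newly created pools should also be tagged with the application that will use them, otherwise ceph health warns that the application is not enabled on the pools. For these RBD-backed pools that means (run on a Ceph node):
            ceph osd pool application enable images rbd
            ceph osd pool application enable vms rbd
            ceph osd pool application enable volumes rbd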
        
    4. Configure pool authentication
        On the control node, create client.cinder and set its permissions:
        sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=volumes,allow rwx pool=vms,allow rx pool=images' -o /etc/ceph/ceph.client.cinder.keyring
        On the control node, create client.glance and set its permissions:
        sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children,allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring

    5. Copy the generated key files to all other OpenStack nodes:
        scp *.keyring root@43.255.84.107:/etc/ceph/
        
    6. On the OpenStack control node, change the owner of each key file to the corresponding service user:
        chown glance:glance /etc/ceph/ceph.client.glance.keyring
        chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
        
        
    7. On the nodes running nova-compute, add the key to libvirt and remove the temporary key file:
        ceph auth get-key client.cinder | tee client.cinder.key
        
        uuidgen
        1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a
        // Note: uuidgen only needs to be run once; every place that needs a uuid reuses this generated value

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF

// Note: copy and run the whole cat block above as one unit

        Run:
        virsh secret-define --file secret.xml
       
        Result: Secret 1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a created
        
        Run:
        virsh secret-set-value --secret 1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
        
        // Note: when rm asks for confirmation, type y and press Enter
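
        An optional check that libvirt now knows the secret (the uuid should appear in the list, and the stored value should be the cinder key in base64):
        virsh secret-list
        virsh secret-get-value 1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a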
        
    8. Configure Cinder on OpenStack

openstack-config --set /etc/cinder/cinder.conf DEFAULT "enabled_backends" "ceph"
openstack-config --set /etc/cinder/cinder.conf ceph "volume_driver" "cinder.volume.drivers.rbd.RBDDriver"
openstack-config --set /etc/cinder/cinder.conf ceph "volume_backend_name" "ceph"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_pool" "volumes"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_ceph_conf" "/etc/ceph/ceph.conf"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_flatten_volume_from_snapshot" "false"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_max_clone_depth" "5"
openstack-config --set /etc/cinder/cinder.conf ceph "rados_connect_timeout" "-1"
openstack-config --set /etc/cinder/cinder.conf ceph "glance_api_version" "2"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_user" "cinder"
openstack-config --set /etc/cinder/cinder.conf ceph "rbd_secret_uuid" "1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a"

        -bash: openstack-config: command not found
        If this error appears, install the utilities:
            yum install -y openstack-utils
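
        For reference, after these commands the relevant parts of /etc/cinder/cinder.conf should look roughly like this (only the keys set above are shown):

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 1b96c3fa-0eb2-4d1d-9e16-53d555ec2d6a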
            
    9. Restart the OpenStack services
    
    Restart Cinder:
        systemctl restart openstack-cinder-api
        systemctl restart openstack-cinder-scheduler
        systemctl restart openstack-cinder-volume
        
    
    On the control node, restart the services:

    sudo service openstack-glance-api restart
    sudo service openstack-nova-api restart
    sudo service openstack-cinder-api restart
    sudo service openstack-cinder-scheduler restart
    On the compute nodes, restart the Nova service:

    sudo service openstack-nova-compute restart
    On the storage nodes, restart the Cinder service:

    sudo service openstack-cinder-volume restart                
            
    10. On each OpenStack node, confirm access to the cluster:
       Run ceph status (ceph -s produces the same output):

cluster:
id: 037e0e71-a470-4673-93ad-42f66c97d511
health: HEALTH_WARN
too few PGs per OSD (3 < min 30)

services:
mon: 3 daemons, quorum node3,node1,node2
mgr: node1(active)
osd: 6 osds: 6 up, 6 in

data:
pools: 3 pools, 6 pgs
objects: 0 objects, 0 B
usage: 7.2 GiB used, 2.9 TiB / 2.9 TiB avail
pgs: 6 active+clean

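
       The HEALTH_WARN "too few PGs per OSD" is expected here because the pools were created with pg_num 2. If desired, it can be cleared by raising pg_num/pgp_num on the pools, for example for the volumes pool (the values are only an illustration; pick them for the real cluster size):

       ceph osd pool set volumes pg_num 64
       ceph osd pool set volumes pgp_num 64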

    11. Check which Cinder volume types already exist

        [root@ct ceph(keystone_admin)]# cinder type-list

+--------------------------------------+-------+-------------+-----------+
| ID                                   | Name  | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| c33f5b0a-b74e-47ab-92f0-162be0bafa1f | iscsi | -           | True      |
+--------------------------------------+-------+-------------+-----------+

    12. From the command line, create the volume type for the Ceph storage backend

    [root@ct ceph(keystone_admin)]# cinder type-create ceph  '//create the ceph type'

+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| e1137ee4-0a45-4a21-9c16-55dfb7e3d737 | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+
[root@ct ceph(keystone_admin)]# cinder type-list '//list the types'
+--------------------------------------+-------+-------------+-----------+
| ID                                   | Name  | Description | Is_Public |
+--------------------------------------+-------+-------------+-----------+
| c33f5b0a-b74e-47ab-92f0-162be0bafa1f | iscsi | -           | True      |
| e1137ee4-0a45-4a21-9c16-55dfb7e3d737 | ceph  | -           | True      |
+--------------------------------------+-------+-------------+-----------+
[root@ct ceph(keystone_admin)]# cinder type-key ceph set volume_backend_name=ceph '//set the backend storage type; volume_backend_name=ceph must be typed exactly as shown, with no extra spaces'
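
    A quick end-to-end test of the new backend: create a small volume of type ceph and check that it shows up as an RBD image in the volumes pool (--name may be --display-name on older cinderclient versions; the volume name is just an example):

    cinder create --volume-type ceph --name ceph-test 1
    cinder list
    rbd -p volumes ls    //the new volume appears as volume-<id>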

Vim search-and-replace (e.g. to swap the OpenStack node IP used in the commands above):

%s/43.255.84.107/43.247.89.91/g
