Installing Ceph (rbd + rgw) with ceph-ansible

Lab environment:

3 monitor nodes: CentOS 7.6

3 OSD nodes: CentOS 7.6, 20 disks per node

A Ceph yum repo (luminous release) is configured on every node

Cluster layout: only the rbd and rgw services are enabled (block storage and the object storage gateway); the 3 monitor nodes also act as rgw and mgr nodes

1. Download ceph-ansible-stable-3.2 from GitHub and unpack it

ceph-ansible

2. Install ansible 2.6.0 as required by requirements.txt

pip install "ansible==2.6.0"
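
Alternatively, from inside the unpacked ceph-ansible directory, the dependencies pinned in requirements.txt can be installed in one go (a convenience only, assuming pip is available on the ansible host):

pip install -r requirements.txt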

3. Create the inventory file

[root@ansible002 ceph-ansible-stable-3.2]# cat ceph-host 
mon1 ansible_host=192.168.1.201
mon2 ansible_host=192.168.1.202
mon3 ansible_host=192.168.1.203
osd1 ansible_host=192.168.1.137
osd2 ansible_host=192.168.1.138
osd3 ansible_host=192.168.1.139

[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3

[rgws]
mon1
mon2
mon3

[mgrs]
mon1
mon2
mon3

[all:vars]
ansible_user=root
ansible_ssh_pass=123456
#ansible_sudo_pass=123456
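
Password-based SSH in ansible relies on sshpass being installed on the ansible host. Before deploying, connectivity can be verified with an ad-hoc ping (an optional sanity check, not part of the original run):

yum install -y sshpass
ansible -i ceph-host all -m ping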

4. Create the playbook

mv site.yml.sample site.yml

Adjust the playbook to the needs of the cluster. This lab only installs Ceph rbd and rgw and does not deploy CephFS, so under - hosts: only mons, osds, rgws and mgrs are kept:

---
# Defines deployment design and assigns role to server groups

- hosts:
  - mons
  - osds
  #- mdss
  - rgws
  #- nfss
  #- restapis
  #- rbdmirrors
  #- clients
  - mgrs
  #- iscsigws
  #- iscsi-gws # for backward compatibility only!

  gather_facts: false
  any_errors_fatal: true
  become: true
....

5. Configure the Ceph cluster

[root@ansible002 ceph-ansible-stable-3.2]# cat group_vars/all.yml
public_network: "192.168.1.0/24"
cluster_network: "192.168.1.0/24"
devices:
  - '/dev/sdb'
  - '/dev/sdc'
  - '/dev/sdd'
  - '/dev/sde'
  - '/dev/sdf'
  - '/dev/sdg'
  - '/dev/sdh'
  - '/dev/sdi'
  - '/dev/sdj'
  - '/dev/sdk'
  - '/dev/sdl'
  - '/dev/sdm'
  - '/dev/sdn'
  - '/dev/sdo'
  - '/dev/sdp'
  - '/dev/sdq'
  - '/dev/sdr'
  - '/dev/sds'
  - '/dev/sdt'
  - '/dev/sdu'
osd_scenario: lvm
cluster: ceph

mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mgr_group_name: mgrs
centos_package_dependencies: # dependency packages on CentOS
  - python-pycurl
  - epel-release
  - python-setuptools
  - libselinux-python
ceph_origin: distro   # use the yum repo already configured on each node
ceph_stable_release: luminous # ceph release
monitor_interface: eth0 # NIC of the mon nodes
osd_objectstore: bluestore

## Rados Gateway options
radosgw_frontend_type: civetweb # for additional frontends see: http://docs.ceph.com/docs/mimic/radosgw/frontends/

radosgw_civetweb_port: 8080 # rgw port
radosgw_civetweb_num_threads: 512
radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"
radosgw_frontend_port: "{{ radosgw_civetweb_port if radosgw_frontend_type == 'civetweb' else '8080' }}"
radosgw_frontend_options: "{{ radosgw_civetweb_options if radosgw_frontend_type == 'civetweb' else '' }}"
radosgw_thread_pool_size: 512
radosgw_interface: eth0
rgw_multisite: false
rgw_zone: default

Note: with osd_scenario: collocated, bluestore is not used by default; this deployment uses osd_scenario: lvm.
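
Before running the playbook it is worth confirming that the names in the devices list actually exist on every OSD node; one way to check, as a rough sketch using an ansible ad-hoc command:

ansible -i ceph-host osds -m shell -a 'lsblk -d -o NAME,SIZE,TYPE'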

6. Run the installation

[root@ansible002 ceph-ansible-stable-3.2]# ansible-playbook -i ceph-host site.yml

7. Check the Ceph cluster status

[root@rgw01-backup ~]# ceph -s
  cluster:
    id:     0e38e7c6-a704-4132-b0e3-76b87f18d8fa
    health: HEALTH_WARN
            too few PGs per OSD (1 < min 30)
            clock skew detected on mon.rgw02-backup
 
  services:
    mon: 3 daemons, quorum rgw01-backup,rgw02-backup,rgw03-backup
    mgr: rgw01-backup(active), standbys: rgw03-backup, rgw02-backup
    osd: 60 osds: 60 up, 60 in
    rgw: 3 daemons active

  data:
    pools:   4 pools, 32 pgs
    objects: 222 objects, 2.77KiB
    usage:   61.0GiB used, 98.2TiB / 98.2TiB avail
    pgs:     32 active+clean
[root@rgw01-backup ~]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
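
Both warnings are expected at this stage: the too-few-PGs warning clears once pools with enough placement groups are created, and the clock skew normally goes away after the mon nodes are time-synced, for example (assuming chronyd is installed on the mons):

ansible -i ceph-host mons -m shell -a 'systemctl restart chronyd && chronyc makestep'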

8. Test rbd block storage (run on a mon node)

[root@rgw01-backup ~]# ceph osd pool create uat 8 8
pool 'uat' created
[root@rgw01-backup ~]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
uat
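
On luminous, new pools should also be tagged with the application that will use them, otherwise ceph -s reports an "application not enabled on pool" warning. For the uat pool:

ceph osd pool application enable uat rbd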

Create an rbd image in the uat pool

[root@rgw01-backup ~]# rbd create rbd1 -p uat --size 1024 --image-format 1
rbd: image format 1 is deprecated

If this fails with librbd: pool not configured for self-managed RBD snapshot support, create the image with pool validation disabled:

rbd create rbd1 -p uat --size 1024 --image-format 1 --rbd_validate_pool=false

Check the result

[root@rgw01-backup ~]# rbd list -p uat
rbd1
[root@rgw01-backup ~]# rbd info uat/rbd1
rbd image 'rbd1':
	size 1GiB in 256 objects
	order 22 (4MiB objects)
	block_name_prefix: rb.0.1200.327b23c6
	format: 1

Map the block device; lsblk now shows an additional /dev/rbd0 device

[root@rgw01-backup ~]# rbd map uat/rbd1
/dev/rbd0
[root@rgw01-backup ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   500G  0 disk 
├─sda1   8:1    0     1G  0 part /boot
├─sda2   8:2    0 498.8G  0 part /
└─sda3   8:3    0   200M  0 part /boot/efi
sr0     11:0    1  1024M  0 rom  
rbd0   253:0    0     1G  0 disk

Show the mapped block devices

[root@rgw01-backup ~]# rbd showmapped
id pool image snap device    
0  uat  rbd1  -    /dev/rbd0

Create a filesystem and mount it

[root@rgw01-backup ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@rgw01-backup ~]# mkdir bb
[root@rgw01-backup ~]# mount /dev/rbd0 bb
[root@rgw01-backup ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       499G  1.8G  497G   1% /
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           7.8G  8.9M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1      1014M  137M  878M  14% /boot
/dev/sda3       200M  8.0K  200M   1% /boot/efi
tmpfs           1.6G     0  1.6G   0% /run/user/0
/dev/rbd0      1014M   33M  982M   4% /root/bb
[root@rgw01-backup ~]# vim bb/uat.txy

Unmount and unmap

[root@rgw01-backup ~]# umount bb
[root@rgw01-backup ~]# rbd unmap uat/rbd1
[root@rgw01-backup ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   500G  0 disk 
├─sda1   8:1    0     1G  0 part /boot
├─sda2   8:2    0 498.8G  0 part /
└─sda3   8:3    0   200M  0 part /boot/efi
sr0     11:0    1  1024M  0 rom
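
If the test image is no longer needed, it can be removed afterwards, e.g.:

rbd rm uat/rbd1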

9. Test radosgw object storage

Create a radosgw user; the main point is to obtain its access_key and secret_key

[root@rgw01-backup ~]# radosgw-admin user create --uid=uat --display-name="My Test"
{
    "user_id": "uat",
    "display_name": "My Test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "uat",
            "access_key": "0MKBBALAM9C5UO7BP7M5",
            "secret_key": "FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
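
If the keys are needed again later, they can be re-displayed at any time (purely a convenience):

radosgw-admin user info --uid=uat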

Test with s3cmd

[root@rgw01-backup ~]# yum install -y s3cmd

Configure s3cmd; these five settings are the ones that matter:

  Access Key: 0MKBBALAM9C5UO7BP7M5
  Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
  S3 Endpoint: 192.168.1.201:8080
  Use HTTPS protocol [Yes]: NO
  DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ''
[root@rgw01-backup ~]#  s3cmd --configure
...
Access Key: 0MKBBALAM9C5UO7BP7M5
Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.1.201:8080
...
Encryption password: 
...
Use HTTPS protocol [Yes]: NO
...
HTTP Proxy server name: 

New settings:
  Access Key: 0MKBBALAM9C5UO7BP7M5
  Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
  Default Region: US
  S3 Endpoint: 192.168.1.201:8080
  DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ''
...
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Create a bucket

[root@rgw01-backup ~]# s3cmd ls s3://
[root@rgw01-backup ~]# s3cmd mb s3://my-bucket
Bucket 's3://my-bucket/' created

Upload an object

[root@rgw01-backup ~]# echo "hello ceph" > a.txt
[root@rgw01-backup ~]# s3cmd put a.txt s3://my-bucket/a.txt
upload: 'a.txt' -> 's3://my-bucket/a.txt'  [1 of 1]
 11 of 11   100% in    1s     6.63 B/s  done

Download the object

[root@rgw01-backup ~]# s3cmd get s3://my-bucket/a.txt b.txt
download: 's3://my-bucket/a.txt' -> 'b.txt'  [1 of 1]
 11 of 11   100% in    0s   252.99 B/s  done
[root@rgw01-backup ~]# cat b.txt 
hello ceph
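
Listing and cleaning up follow the same pattern, for example:

s3cmd ls s3://my-bucket
s3cmd del s3://my-bucket/a.txt
s3cmd rb s3://my-bucket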

Alternatively, test with a Python script using boto

[root@bakmtr01 ~]# cat s3test.py 
import boto
import boto.s3.connection

# access/secret key of a radosgw user created with radosgw-admin
access_key = '6LBCLUQYZ9BPWKCI5VXM'
secret_key = 'wBcWEOCIrtckj7A6RHxLyhoqqNI05lJZTAFVpzzt'

# connect to the rgw endpoint over plain HTTP with path-style addressing
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='192.168.1.85', port=6780,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
       )

# create a bucket, then list every bucket owned by this user
bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print("{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    ))
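
The script depends on the boto library; one way to run it on CentOS 7 (the package name is an assumption, adjust for your repos):

yum install -y python-boto
python s3test.py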

10. Create a namespace under a pool

Create a client whose OSD caps are restricted to namespace ns1 of pool uat, then check that it can reach the cluster:

ceph auth get-or-create-key client.joker mon 'allow r' osd 'allow rw pool=uat namespace=ns1' -o joker.keyring
ceph -n client.joker --keyring=joker.keyring health

Currently, namespaces are only useful for applications written on top of librados. Ceph clients such as block device and object storage do not currently support this feature.
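
Since the rados CLI is itself built on librados, the namespace-restricted caps of client.joker can be exercised with it. A rough check (object and file names here are only examples):

rados -p uat -N ns1 -n client.joker --keyring=joker.keyring put obj1 a.txt
rados -p uat -N ns1 -n client.joker --keyring=joker.keyring ls
rados -p uat -n client.joker --keyring=joker.keyring put obj2 a.txt   # outside ns1, should be rejected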


References:

ceph-ansible Installation

User Management
