Installing Ceph (RBD + RGW) with ceph-ansible

Lab environment:

3 monitor nodes, running CentOS 7.6

3 OSD nodes, running CentOS 7.6, with 20 data disks each

A Ceph yum repo (Luminous release) is already configured on every node
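
A minimal sketch of such a repo file, assuming the default upstream mirror (adjust baseurl/gpgkey if you use a local mirror):

# /etc/yum.repos.d/ceph.repo -- example layout only
[ceph]
name=Ceph packages for x86_64
baseurl=http://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc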

Cluster layout: only RBD and RGW (block storage and the object storage gateway) are enabled; the 3 monitor nodes also act as RGW and MGR nodes

1. Download ceph-ansible stable-3.2 from GitHub (https://github.com/ceph/ceph-ansible) and extract it

2. Install Ansible 2.6.0, as required by requirements.txt

pip install "ansible==2.6.0"

3. Create the inventory file

[root@ansible002 ceph-ansible-stable-3.2]# cat ceph-host 
mon1 ansible_host=192.168.1.201
mon2 ansible_host=192.168.1.202
mon3 ansible_host=192.168.1.203
osd1 ansible_host=192.168.1.137
osd2 ansible_host=192.168.1.138
osd3 ansible_host=192.168.1.139

[mons]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3

[rgws]
mon1
mon2
mon3

[mgrs]
mon1
mon2
mon3

[all:vars]
ansible_user=root
ansible_ssh_pass=123456
#ansible_sudo_pass=123456
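
Before running the playbook it is worth confirming that Ansible can actually reach every host with these credentials; a quick check using the inventory above:

ansible -i ceph-host all -m ping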

4. Create the playbook

mv site.yml.sample site.yml

Adjust the playbook to match the cluster. This lab only deploys RBD and RGW (no CephFS), so only mons, osds, rgws, and mgrs are kept under hosts:

---
# Defines deployment design and assigns role to server groups

- hosts:
  - mons
  - osds
  #- mdss
  - rgws
  #- nfss
  #- restapis
  #- rbdmirrors
  #- clients
  - mgrs
  #- iscsigws
  #- iscsi-gws # for backward compatibility only!

  gather_facts: false
  any_errors_fatal: true
  become: true
....

5. Configure the Ceph cluster

[root@ansible002 ceph-ansible-stable-3.2]# cat group_vars/all.yml
public_network: "192.168.1.0/24"
cluster_network: "192.168.1.0/24"
devices:
  - '/dev/sdb'
  - '/dev/sdc'
  - '/dev/sdd'
  - '/dev/sde'
  - '/dev/sdf'
  - '/dev/sdg'
  - '/dev/sdh'
  - '/dev/sdi'
  - '/dev/sdj'
  - '/dev/sdk'
  - '/dev/sdl'
  - '/dev/sdm'
  - '/dev/sdn'
  - '/dev/sdo'
  - '/dev/sdp'
  - '/dev/sdq'
  - '/dev/sdr'
  - '/dev/sds'
  - '/dev/sdt'
  - '/dev/sdu'
osd_scenario: lvm
cluster: ceph

mon_group_name: mons
osd_group_name: osds
rgw_group_name: rgws
mgr_group_name: mgrs
centos_package_dependencies: # CentOS dependency packages
  - python-pycurl
  - epel-release
  - python-setuptools
  - libselinux-python
ceph_origin: distro   # use the yum repo already configured on each node
ceph_stable_release: luminous # Ceph release
monitor_interface: eth0 # NIC used by the mon nodes
osd_objectstore: bluestore

## Rados Gateway options
radosgw_frontend_type: civetweb # For additional frontends see: http://docs.ceph.com/docs/mimic/radosgw/frontends/

radosgw_civetweb_port: 8080 # RGW listening port
radosgw_civetweb_num_threads: 512
radosgw_civetweb_options: "num_threads={{ radosgw_civetweb_num_threads }}"
radosgw_frontend_port: "{{ radosgw_civetweb_port if radosgw_frontend_type == 'civetweb' else '8080' }}"
radosgw_frontend_options: "{{ radosgw_civetweb_options if radosgw_frontend_type == 'civetweb' else '' }}"
radosgw_thread_pool_size: 512
radosgw_interface: eth0
rgw_multisite: false
rgw_zone: default

Note: osd_scenario: collocated does not use bluestore by default, which is why osd_scenario: lvm and osd_objectstore: bluestore are set explicitly above.

6. Run the installation

[root@ansible002 ceph-ansible-stable-3.2]# ansible-playbook -i ceph-host site.yml

7. Check the cluster status

[root@rgw01-backup ~]# ceph -s
  cluster:
    id:     0e38e7c6-a704-4132-b0e3-76b87f18d8fa
    health: HEALTH_WARN
            too few PGs per OSD (1 < min 30)
            clock skew detected on mon.rgw02-backup
 
  services:
    mon: 3 daemons, quorum rgw01-backup,rgw02-backup,rgw03-backup
    mgr: rgw01-backup(active), standbys: rgw03-backup, rgw02-backup
    osd: 60 osds: 60 up, 60 in
    rgw: 3 daemons active

  data:
    pools:   4 pools, 32 pgs
    objects: 222 objects, 2.77KiB
    usage:   61.0GiB used, 98.2TiB / 98.2TiB avail
    pgs:     32 active+clean
[root@rgw01-backup ~]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
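
The "too few PGs per OSD" warning can be cleared by raising pg_num on the pools (or by creating the data pools with more PGs in the first place). A rough rule of thumb: total PGs ≈ (number of OSDs × 100) / replica count, rounded to a power of two; for this cluster that is (60 × 100) / 3 ≈ 2000 → 2048 PGs spread across all pools. A sketch of bumping a single pool (the pool name and the value 64 are only illustrative):

ceph osd pool set default.rgw.log pg_num 64
ceph osd pool set default.rgw.log pgp_num 64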

8. Test RBD block storage (run on a mon node)

[root@rgw01-backup ~]# ceph osd pool create uat 8 8
pool 'uat' created
[root@rgw01-backup ~]# ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
uat

Create an RBD image in the uat pool

[root@rgw01-backup ~]# rbd create rbd1 -p uat --size 1024 --image-format 1
rbd: image format 1 is deprecated

If this fails with "librbd: pool not configured for self-managed RBD snapshot support", retry with:

rbd create rbd1 -p uat --size 1024 --image-format 1 --rbd_validate_pool=false

Check the result

[root@rgw01-backup ~]# rbd list -p uat
rbd1
[root@rgw01-backup ~]# rbd info uat/rbd1
rbd image 'rbd1':
	size 1GiB in 256 objects
	order 22 (4MiB objects)
	block_name_prefix: rb.0.1200.327b23c6
	format: 1
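
Since format 1 is deprecated on Luminous, a likely cleaner alternative is to tag the pool for RBD use and create a default format-2 image. A sketch (the image name rbd2 is illustrative; layering is kept as the only feature so the CentOS 7.6 kernel client can still map it):

ceph osd pool application enable uat rbd
rbd create uat/rbd2 --size 1024 --image-feature layering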

Map the block device; lsblk now shows a new /dev/rbd0 device

[root@rgw01-backup ~]# rbd map uat/rbd1
/dev/rbd0
[root@rgw01-backup ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   500G  0 disk 
├─sda1   8:1    0     1G  0 part /boot
├─sda2   8:2    0 498.8G  0 part /
└─sda3   8:3    0   200M  0 part /boot/efi
sr0     11:0    1  1024M  0 rom  
rbd0   253:0    0     1G  0 disk

List the mapped devices

[root@rgw01-backup ~]# rbd showmapped
id pool image snap device    
0  uat  rbd1  -    /dev/rbd0

Create a filesystem and mount it

[root@rgw01-backup ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=8, agsize=32768 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@rgw01-backup ~]# mkdir bb
[root@rgw01-backup ~]# mount /dev/rbd0 bb
[root@rgw01-backup ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       499G  1.8G  497G   1% /
devtmpfs        7.8G     0  7.8G   0% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           7.8G  8.9M  7.8G   1% /run
tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1      1014M  137M  878M  14% /boot
/dev/sda3       200M  8.0K  200M   1% /boot/efi
tmpfs           1.6G     0  1.6G   0% /run/user/0
/dev/rbd0      1014M   33M  982M   4% /root/bb
[root@rgw01-backup ~]# vim bb/uat.txy

Unmount and unmap

[root@rgw01-backup ~]# umount bb
[root@rgw01-backup ~]# rbd unmap uat/rbd1
[root@rgw01-backup ~]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   500G  0 disk 
├─sda1   8:1    0     1G  0 part /boot
├─sda2   8:2    0 498.8G  0 part /
└─sda3   8:3    0   200M  0 part /boot/efi
sr0     11:0    1  1024M  0 rom
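
If a mapping should come back after a reboot, the rbdmap service shipped with ceph-common can handle it; a sketch (the admin keyring path is an assumption, adjust to the credentials you actually use):

echo "uat/rbd1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
systemctl enable rbdmap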

9. Test RGW object storage

Create an RGW user, mainly to obtain its access_key and secret_key

[root@rgw01-backup ~]# radosgw-admin user create --uid=uat --display-name="My Test"
{
    "user_id": "uat",
    "display_name": "My Test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "uat",
            "access_key": "0MKBBALAM9C5UO7BP7M5",
            "secret_key": "FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw"
}
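
If the keys are needed again later, they can be re-read without recreating the user:

radosgw-admin user info --uid=uat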

Test with s3cmd

[root@rgw01-backup ~]# yum install -y s3cmd

Configure s3cmd; these five settings are the ones that matter:

  Access Key: 0MKBBALAM9C5UO7BP7M5
  Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
  S3 Endpoint: 192.168.1.201:8080
  Use HTTPS protocol [Yes]: NO
  DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ''
[root@rgw01-backup ~]#  s3cmd --configure
...
Access Key: 0MKBBALAM9C5UO7BP7M5
Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.1.201:8080
...
Encryption password: 
...
Use HTTPS protocol [Yes]: NO
...
HTTP Proxy server name: 

New settings:
  Access Key: 0MKBBALAM9C5UO7BP7M5
  Secret Key: FtG7RAB1ya8hZNbdfnzosC8ZNb6Vqthc6xxVqIre
  Default Region: US
  S3 Endpoint: 192.168.1.201:8080
  DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: ''
...
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'

Create a bucket

[root@rgw01-backup ~]# s3cmd ls s3://
[root@rgw01-backup ~]# s3cmd mb s3://my-bucket
Bucket 's3://my-bucket/' created

Upload an object

[root@rgw01-backup ~]# echo "hello ceph" > a.txt
[root@rgw01-backup ~]# s3cmd put a.txt s3://my-bucket/a.txt
upload: 'a.txt' -> 's3://my-bucket/a.txt'  [1 of 1]
 11 of 11   100% in    1s     6.63 B/s  done

Download the object

[root@rgw01-backup ~]# s3cmd get s3://my-bucket/a.txt b.txt
download: 's3://my-bucket/a.txt' -> 'b.txt'  [1 of 1]
 11 of 11   100% in    0s   252.99 B/s  done
[root@rgw01-backup ~]# cat b.txt 
hello ceph
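
To list what is in the bucket and clean up afterwards (a quick sketch):

s3cmd ls s3://my-bucket
s3cmd del s3://my-bucket/a.txt
s3cmd rb s3://my-bucket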

You can also test with a Python (boto) script

[root@bakmtr01 ~]# cat s3test.py 
import boto.s3.connection

access_key = '6LBCLUQYZ9BPWKCI5VXM'
secret_key = 'wBcWEOCIrtckj7A6RHxLyhoqqNI05lJZTAFVpzzt'
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='192.168.1.85', port=6780,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
       )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
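
The script above uses the legacy boto (v2) library under Python 2 (typically available as the python-boto package). A roughly equivalent sketch with boto3, assuming it is pip-installed and reusing the same illustrative endpoint and keys:

import boto3

# Point the S3 client at the RGW civetweb endpoint instead of AWS.
s3 = boto3.client(
    's3',
    endpoint_url='http://192.168.1.85:6780',
    aws_access_key_id='6LBCLUQYZ9BPWKCI5VXM',
    aws_secret_access_key='wBcWEOCIrtckj7A6RHxLyhoqqNI05lJZTAFVpzzt',
)

# Create a bucket, then list all buckets with their creation dates.
s3.create_bucket(Bucket='my-new-bucket')
for b in s3.list_buckets()['Buckets']:
    print(b['Name'], b['CreationDate'])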

10. Create a namespace under a pool

ceph auth get-or-create client.joker mon 'allow r' osd 'allow rw pool=uat namespace=ns1' -o joker.keyring
ceph -n client.joker --keyring=joker.keyring health

Currently, namespaces are only useful for applications written on top of librados. Ceph clients such as block device and object storage do not currently support this feature.

(Quoted from the Ceph documentation, User Management.)
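
Since namespaces are only reachable through librados, here is a minimal python-rados sketch that authenticates as client.joker and reads/writes inside the ns1 namespace (the object name and payload are illustrative; it assumes python-rados is installed and that the binding exposes set_namespace()):

import rados

# Connect as the restricted user created above, using its keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      name='client.joker',
                      conf=dict(keyring='./joker.keyring'))
cluster.connect()

ioctx = cluster.open_ioctx('uat')   # pool the user is allowed to access
ioctx.set_namespace('ns1')          # confine all I/O to the ns1 namespace

ioctx.write_full('demo-object', b'hello from ns1')
print(ioctx.read('demo-object'))

ioctx.close()
cluster.shutdown()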

References:

ceph-ansible Installation

User Management
