Ceph: Object Storage Environment Deployment

A Brief Introduction to Object Storage

Unlike a disk formatted with a file system, object storage cannot be accessed directly by the operating system; it is accessed only through an application-level API, such as the S3 API used later in this post.
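
A rough illustration of that access model, using the RGW endpoint set up later in this post: the storage is never mounted like a block device or a file system; it is reached over HTTP with S3-style requests.

$ curl http://192.168.47.128:7480       # anonymous S3 "list buckets" request against the gateway
$ s3cmd ls                              # or through an S3 client such as s3cmd (configured later in this post)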

Deploying Object Storage

# 1. Install ceph-radosgw on every node of the cluster
$ sudo yum -y install ceph-radosgw

# 1.1 Verify that the package was installed
$ rpm -qa|grep ceph-radosgw
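
# Optional check: the ceph-radosgw package also installs the systemd unit template
# (ceph-radosgw@.service) used for the restarts later on; confirm it is present
$ systemctl list-unit-files | grep ceph-radosgw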

# 2. Run the deployment from the deploy (admin) node
# The my-cluster directory was created when the Ceph cluster was first deployed;
# it holds the cluster configuration and the keyrings for each node. See the linked post for details.
$ cd my-cluster
$ ceph-deploy rgw create ceph-master ceph-node01 ceph-node02 ceph-node03
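
# A quick sanity check (run on any gateway node): each RGW daemon should now be running
# and listening on civetweb's default port 7480
$ sudo systemctl status ceph-radosgw@rgw.ceph-master
$ sudo ss -tlnp | grep 7480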

# 3. Query the gateway on the master node; it listens on port 7480 by default
$ curl 192.168.47.128:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
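
# The same check can be repeated against the other gateways (the IPs below are the ones listed in mon_host):
$ for h in 192.168.47.140 192.168.47.129 192.168.47.130; do curl -s http://$h:7480; echo; done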

# 4. Change the RGW port to 80
$ cat ceph.conf
[global]
fsid = b6c9b4e1-1197-4535-a889-7eaa1a83f46b
mon_initial_members = ceph-master, ceph-node01, ceph-node02, ceph-node03
mon_host = 192.168.47.128,192.168.47.140,192.168.47.129,192.168.47.130
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[client.rgw.ceph-master]
rgw_frontends = "civetweb port=80"

[client.rgw.ceph-node01]
rgw_frontends = "civetweb port=80"

[client.rgw.ceph-node02]
rgw_frontends = "civetweb port=80"

[client.rgw.ceph-node03]
rgw_frontends = "civetweb port=80"

# 5. Push the updated configuration to all nodes
$ ceph-deploy --overwrite-conf config push ceph-master ceph-node01 ceph-node02 ceph-node03
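
# Optionally confirm the pushed file landed on a node (this reuses the passwordless ssh already set up for ceph-deploy):
$ ssh ceph-node01 grep -A1 'client.rgw' /etc/ceph/ceph.conf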

# 6. Restart the RGW services (each command below is run on its corresponding node)
$ systemctl restart ceph-radosgw@rgw.ceph-master.service
$ systemctl restart ceph-radosgw@rgw.ceph-node01.service
$ systemctl restart ceph-radosgw@rgw.ceph-node02.service
$ systemctl restart ceph-radosgw@rgw.ceph-node03.service
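
# Verify that the gateways have moved from 7480 to port 80
$ sudo ss -tlnp | grep radosgw
$ curl http://192.168.47.128:80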

# 7. Create the pools
# Download two helper files: pool (the list of pool names) and create_pool.sh
$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/pool
$ wget https://raw.githubusercontent.com/aishangwei/ceph-demo/master/ceph-deploy/rgw/create_pool.sh
$ chmod +x create_pool.sh

# Adjust the parameters in create_pool.sh as needed, then run it; its contents are shown below for reference
$ ./create_pool.sh
$ cat create_pool.sh
PG_NUM=8
PGP_NUM=8
SIZE=4								# SIZE is the replica count; the cluster has 4 hosts, so it is set to 4
for i in `cat /home/cephadm/my-cluster/pool`
        do
        ceph osd pool create $i $PG_NUM
        ceph osd pool set $i size $SIZE
        done

for i in `cat /home/cephadm/my-cluster/pool`
        do
        ceph osd pool set $i pgp_num $PGP_NUM
        done
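
# A quick check that the pools were created with the intended settings
# (the pool names come from the downloaded pool file)
$ ceph osd pool ls
$ ceph osd dump | grep 'replicated size'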

# 8. Test that the cluster can be reached with the RGW credentials
# Note: the --name argument must match the section name in ceph.conf ([client.rgw.ceph-master] on the master),
# and the keyring path (.../ceph-rgw.ceph-master/keyring) must match that --name as well.
$  ceph -s -k /var/lib/ceph/radosgw/ceph-rgw.ceph-master/keyring --name client.rgw.ceph-master
  cluster:
    id:     b6c9b4e1-1197-4535-a889-7eaa1a83f46b
    health: HEALTH_WARN
            11 pools have pg_num > pgp_num
            too many PGs per OSD (272 > max 250)

  services:
    mon: 4 daemons, quorum ceph-master,ceph-node02,ceph-node03,ceph-node01
    mgr: ceph-master(active), standbys: ceph-node03, ceph-node01, ceph-node02
    osd: 12 osds: 11 up, 11 in
    rgw: 4 daemons active

  data:
    pools:   17 pools, 998 pgs
    objects: 189  objects, 2.2 KiB
    usage:   11 GiB used, 539 GiB / 550 GiB avail
    pgs:     998 active+clean
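
# The HEALTH_WARN above is not fatal for this test: the "too many PGs per OSD" message comes from the
# number of pools created with these pg_num/size values on 11 OSDs, and the pg_num > pgp_num message
# can be cleared by bringing pgp_num in line with pg_num for every pool, e.g. with a loop like this (a sketch):
$ for p in $(ceph osd pool ls); do ceph osd pool set $p pgp_num $(ceph osd pool get $p pg_num | awk '{print $2}'); done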

Testing Object Storage Access through the S3 API

# 1. Create a radosgw user. This does not have to be done on the master; here another node, ceph-node01, is used.
$ sudo radosgw-admin user create --uid=radosgw --display-name='radosgw' -k /var/lib/ceph/radosgw/ceph-rgw.ceph-node01/keyring --name client.rgw.ceph-node01
{
    "user_id": "radosgw",
    "display_name": "radosgw",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "radosgw",
            "access_key": "661YYBNBJESEXVIEVDO1",
            "secret_key": "CV3Y63PzO38MMy1BhaXEivDoeECrRjDNN4HA9DyU"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
# Note: keep the "access_key" and "secret_key" from the output above. If they are lost, they can be
# retrieved with radosgw-admin user info (with --uid, -k and --name), as shown next:
$ radosgw-admin user info --uid radosgw -k /var/lib/ceph/radosgw/ceph-rgw.ceph-master/keyring --name client.rgw.ceph-master
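
# Or filter just the keys out of that JSON output:
$ radosgw-admin user info --uid radosgw -k /var/lib/ceph/radosgw/ceph-rgw.ceph-master/keyring --name client.rgw.ceph-master | grep -E 'access_key|secret_key'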

# 2. Install the s3cmd client
$ yum install s3cmd -y				# installed here on the master as well as on the other nodes

# 3. Generate the configuration file
# The Access Key and Secret Key are the ones produced when the radosgw user was created above.
# In this test environment, answer No to "Use HTTPS protocol"; testing with the supplied credentials should then succeed, after which the settings are saved.
$  s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [661YYBNBJESEXVIEVDO1]:
Secret Key [CV3Y63PzO38MMy1BhaXEivDoeECrRjDNN4HA9DyU]:
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [192.168.47.128:80]:

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket).192.168.47.128:80]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
  Access Key: 661YYBNBJESEXVIEVDO1
  Secret Key: CV3Y63PzO38MMy1BhaXEivDoeECrRjDNN4HA9DyU
  Default Region: US
  S3 Endpoint: 192.168.47.128:80
  DNS-style bucket+hostname:port template for accessing a bucket: %(bucket).192.168.47.128:80
  Encryption password:
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
# Create a bucket and upload a file as a test
$ s3cmd mb s3://first-bucket
Bucket 's3://first-bucket/' created

# List the buckets
$ s3cmd ls
2020-06-09 12:16  s3://first-bucket

# Upload a file into the bucket
$ s3cmd put /etc/hosts s3://first-bucket
upload: '/etc/hosts' -> 's3://first-bucket/hosts'  [1 of 1]
 449 of 449   100% in    1s   247.56 B/s  done

# List the objects in the bucket
$ s3cmd ls s3://first-bucket
2020-06-09 12:22          449  s3://first-bucket/hosts
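
# Optional round trip: download the object again, then clean up (get, del and rb are standard s3cmd subcommands)
$ s3cmd get s3://first-bucket/hosts /tmp/hosts.copy
$ s3cmd del s3://first-bucket/hosts
$ s3cmd rb s3://first-bucket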

References

[Related error messages can be looked up in this article](https://www.cnblogs.com/flytor/p/11380026.html)
