Ceph cluster map

    The Ceph monitor is responsible for monitoring the health of the entire cluster and for maintaining cluster membership state, the state of peer nodes, and the cluster's configuration information. The cluster map is a combination of several maps: the monitor map, OSD map, PG map, CRUSH map, and MDS map.

1    monitor map: it contains end-to-end information about the monitor nodes, including the Ceph cluster ID (fsid), the monitor node names, and their IP addresses and ports.

    ceph mon dump

[root@ceph-admin opt]# ceph mon dump
dumped monmap epoch 4
epoch 4
fsid 53fe37a5-7ee7-4190-a8ea-a0221648294c
last_changed 2017-09-27 10:14:53.474525
created 2017-09-04 15:17:43.852911
0: 172.18.1.231:6789/0 mon.ceph-admin
1: 172.18.1.232:6789/0 mon.ceph-node1
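
The monitor map can also be exported to a file and inspected offline with the standard ceph and monmaptool CLIs; a minimal sketch (the output path /tmp/monmap is an arbitrary choice):

    ceph mon getmap -o /tmp/monmap
    monmaptool --print /tmp/monmap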

2    OSD map: it stores some commonly used information, including the cluster ID, the OSD map's latest epoch since creation and its last modification time, and pool-related information such as the pool names, IDs, state, replica size, and PG counts. It also stores OSD information such as the OSD count, status, weight, last clean interval, and OSD host details.

    ceph osd dump

[root@ceph-admin opt]# ceph osd dump
epoch 2473
fsid 53fe37a5-7ee7-4190-a8ea-a0221648294c
created 2017-09-04 15:17:50.966548
modified 2017-10-12 13:55:20.042095
flags sortbitwise,require_jewel_osds
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1621 flags hashpspool stripe_width 0
	removed_snaps [1~3]
pool 9 'cephfs_data' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 1623 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 10 'cephfs_metadata' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 10 pgp_num 10 last_change 1625 flags hashpspool stripe_width 0
pool 12 'test_pool7' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 1628 flags hashpspool stripe_width 0
	removed_snaps [1~3]
pool 17 'm8beta' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 1682 flags hashpspool stripe_width 0
	removed_snaps [1~3]
pool 18 'm8dev' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 80 pgp_num 80 last_change 1689 flags hashpspool stripe_width 0
	removed_snaps [1~3]
pool 19 '.rgw.root' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2366 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 20 'default.rgw.control' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2368 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 21 'default.rgw.data.root' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2370 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 22 'default.rgw.gc' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2371 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 23 'default.rgw.log' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2372 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 24 'default.rgw.users.uid' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2375 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 25 'default.rgw.users.email' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2377 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 26 'default.rgw.users.keys' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 2379 owner 18446744073709551615 flags hashpspool stripe_width 0
pool 27 'vmpool1' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 2469 flags hashpspool stripe_width 0
	removed_snaps [1~3]
max_osd 16
osd.0 up   in  weight 1 up_from 1961 up_thru 2466 down_at 1959 last_clean_interval [1809,1958) 172.18.1.232:6800/19718 172.18.1.232:6801/19718 172.18.1.232:6802/19718 172.18.1.232:6803/19718 exists,up 68aa9d74-3d45-49df-8c94-7ae5f0a8c48b
osd.1 up   in  weight 1 up_from 2315 up_thru 2466 down_at 2311 last_clean_interval [2310,2314) 172.18.1.233:6800/2950 172.18.1.233:6804/1002950 172.18.1.233:6805/1002950 172.18.1.233:6806/1002950 exists,up 9bd0bc80-853c-46ab-8790-656827285750
osd.2 up   in  weight 1 up_from 1962 up_thru 2466 down_at 1955 last_clean_interval [1945,1954) 172.18.1.231:6800/11239 172.18.1.231:6801/11239 172.18.1.231:6802/11239 172.18.1.231:6803/11239 exists,up 6de000f4-63bd-4b10-b076-29c0de5b4364
blacklist 172.18.2.4:0/1125443 expires 2017-10-12 14:55:19.169719
blacklist 172.18.2.9:0/1028870 expires 2017-10-12 14:54:20.118506
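
For a quicker overview of OSD state, the standard ceph CLI also provides summary subcommands, e.g.:

    ceph osd stat
    ceph osd tree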

3    PG map: it stores the PG version and its timestamp, the latest OSD map epoch, the full and near-full capacity ratios, and, for each PG, its ID, object count, state, and state timestamp.

    ceph pg dump
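
The full dump is very large on a real cluster; the CLI can also print a one-line summary or restrict the dump to one section (pgs_brief is one of the sections ceph pg dump accepts), e.g.:

    ceph pg stat
    ceph pg dump pgs_brief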

4    crush map: it stores the cluster's device list, the bucket list, the failure-domain hierarchy, and the rules defined over those failure domains.

    ceph osd crush dump

[root@ceph-admin opt]# ceph osd crush dump
{
    "devices": [
        {
            "id": 0,
            "name": "osd.0"
        },
        {
            "id": 1,
            "name": "osd.1"
        },
        {
            "id": 2,
            "name": "osd.2"
        }
    ],
    "types": [
        {
            "type_id": 0,
            "name": "osd"
        },
        {
            "type_id": 1,
            "name": "host"
        },
        {
            "type_id": 2,
            "name": "chassis"
        },
        {
            "type_id": 3,
            "name": "rack"
        },
        {
            "type_id": 4,
            "name": "row"
        },
        {
            "type_id": 5,
            "name": "pdu"
        },
        {
            "type_id": 6,
            "name": "pod"
        },
        {
            "type_id": 7,
            "name": "room"
        },
        {
            "type_id": 8,
            "name": "datacenter"
        },
        {
            "type_id": 9,
            "name": "region"
        },
        {
            "type_id": 10,
            "name": "root"
        }
    ],
    "buckets": [
        {
            "id": -1,
            "name": "default",
            "type_id": 10,
            "type_name": "root",
            "weight": 38337,
            "alg": "straw",
            "hash": "rjenkins1",
            "items": [
                {
                    "id": -2,
                    "weight": 12779,
                    "pos": 0
                },
                {
                    "id": -3,
                    "weight": 12779,
                    "pos": 1
                },
                {
                    "id": -4,
                    "weight": 12779,
                    "pos": 2
                }
            ]
        },
        {
            "id": -2,
            "name": "ceph-admin",
            "type_id": 1,
            "type_name": "host",
            "weight": 12779,
            "alg": "straw",
            "hash": "rjenkins1",
            "items": [
                {
                    "id": 2,
                    "weight": 12779,
                    "pos": 0
                }
            ]
        },
        {
            "id": -3,
            "name": "ceph-node1",
            "type_id": 1,
            "type_name": "host",
            "weight": 12779,
            "alg": "straw",
            "hash": "rjenkins1",
            "items": [
                {
                    "id": 0,
                    "weight": 12779,
                    "pos": 0
                }
            ]
        },
        {
            "id": -4,
            "name": "ceph-node2",
            "type_id": 1,
            "type_name": "host",
            "weight": 12779,
            "alg": "straw",
            "hash": "rjenkins1",
            "items": [
                {
                    "id": 1,
                    "weight": 12779,
                    "pos": 0
                }
            ]
        }
    ],
    "rules": [
        {
            "rule_id": 0,
            "rule_name": "replicated_ruleset",
            "ruleset": 0,
            "type": 1,
            "min_size": 1,
            "max_size": 10,
            "steps": [
                {
                    "op": "take",
                    "item": -1,
                    "item_name": "default"
                },
                {
                    "op": "chooseleaf_firstn",
                    "num": 0,
                    "type": "host"
                },
                {
                    "op": "emit"
                }
            ]
        }
    ],
    "tunables": {
        "choose_local_tries": 0,
        "choose_local_fallback_tries": 0,
        "choose_total_tries": 50,
        "chooseleaf_descend_once": 1,
        "chooseleaf_vary_r": 1,
        "chooseleaf_stable": 0,
        "straw_calc_version": 1,
        "allowed_bucket_algs": 22,
        "profile": "firefly",
        "optimal_tunables": 0,
        "legacy_tunables": 0,
        "minimum_required_version": "firefly",
        "require_feature_tunables": 1,
        "require_feature_tunables2": 1,
        "has_v2_rules": 0,
        "require_feature_tunables3": 1,
        "has_v3_rules": 0,
        "has_v4_buckets": 0,
        "require_feature_tunables5": 0,
        "has_v5_rules": 0
    }
}
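
The CRUSH map can also be fetched in its compiled binary form and decompiled into an editable text file with crushtool; a minimal sketch (the file paths are arbitrary):

    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt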

5    mds map: it stores the current MDS map epoch, the map's creation and last modification times, the data and metadata pool IDs, the number of MDS servers in the cluster, and the MDS states.

    ceph mds dump
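
A brief MDS status line is also available via:

    ceph mds stat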

Reposted from: https://my.oschina.net/wangzilong/blog/1549599
