Common Ceph commands

rgw

View bucket information

radosgw-admin bucket stats

Write the output to a file so the information can be filtered

radosgw-admin bucket stats > /tmp/bucket-stats.txt

Number of buckets

grep '"bucket":' /tmp/bucket-stats.txt |wc -l
257

Number of bucket owners

grep '"owner":' /tmp/bucket-stats.txt | uniq
        "owner": "mys3-user",
        "owner": "obc-default-ceph-bkt-openbayes-juicefs-6a2b2c57-d393-4529-8620-c0af6c9c30f8",
        "owner": "mys3-user",

Number of buckets owned by "USER.NAME" (replace USER.NAME with an existing user name).
For example:

grep '"owner": "mys3-user"' /tmp/bucket-stats.txt | wc -l
256

Number of buckets that contain data

grep '"size_actual":' /tmp/bucket-stats.txt | wc -l
257

Number of empty buckets

grep '"usage": {}' /tmp/bucket-stats.txt | wc -l
0

List all scheduled reshard operations

radosgw-admin reshard list
[]
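
If a bucket does need to be resharded manually (the queue above is empty), a sketch of queueing and running one reshard; <bucket-name> and <new-shard-count> are placeholders:

radosgw-admin reshard add --bucket=<bucket-name> --num-shards=<new-shard-count>   # queue the reshard
radosgw-admin reshard process                                                     # process queued entries now
radosgw-admin reshard status --bucket=<bucket-name>                               # check progress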

List users

radosgw-admin user list
[
    "dashboard-admin",
    "obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0",
    "cosi",
    "rgw-admin-ops-user"
]

View user info

This shows the user's quotas and other details.

radosgw-admin user info --uid=obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0
{
    "user_id": "obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0",
    "display_name": "obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0",
    "email": "",
    "suspended": 0,
    "max_buckets": 1,
    "subusers": [],
    "keys": [
        {
            "user": "obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0",
            "access_key": "YK9RFGZ2PBMSJ3X8PQJQ",
            "secret_key": "6RhGSydhqL3tAQJJTKuB20AEbZyqU9XV2gXSMQbY"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": true,
        "check_on_raw": false,
        "max_size": 2000000000,
        "max_size_kb": 1953125,
        "max_objects": 1000
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Change a user's object-count quota

-1 means an unlimited quota.

radosgw-admin quota set --quota-scope=user   --uid=obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0 --max-objects=-1
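
The same subcommand can also set a size quota. A sketch reusing the uid above, with the size given in bytes:

radosgw-admin quota set --quota-scope=user --uid=obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0 --max-size=2000000000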

Enable the user quota

radosgw-admin quota enable --quota-scope=user --uid=obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0

Enable the bucket quota

radosgw-admin quota enable --quota-scope=bucket --uid=obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0

Change the maximum number of buckets a user may own

radosgw-admin user modify --uid obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0 --max-buckets 10
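
To confirm the change, re-read the user record and filter for the field; a sketch reusing the uid above:

radosgw-admin user info --uid obc-default-ceph-bucket-43c712bc-2cb5-46ef-b9da-76f442aea3b0 | grep max_buckets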

View the RGW zone

The output shows a lot of detail, for example which pool each type of data is stored in.

radosgw-admin zone get

Output

{
    "id": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "name": "mys3",
    "domain_root": "mys3.rgw.meta:root",
    "control_pool": "mys3.rgw.control",
    "gc_pool": "mys3.rgw.log:gc",
    "lc_pool": "mys3.rgw.log:lc",
    "log_pool": "mys3.rgw.log",
    "intent_log_pool": "mys3.rgw.log:intent",
    "usage_log_pool": "mys3.rgw.log:usage",
    "roles_pool": "mys3.rgw.meta:roles",
    "reshard_pool": "mys3.rgw.log:reshard",
    "user_keys_pool": "mys3.rgw.meta:users.keys",
    "user_email_pool": "mys3.rgw.meta:users.email",
    "user_swift_pool": "mys3.rgw.meta:users.swift",
    "user_uid_pool": "mys3.rgw.meta:users.uid",
    "otp_pool": "mys3.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "mys3.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "mys3.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "mys3.rgw.buckets.non-ec",
                "index_type": 0,
                "inline_data": "true"
            }
        }
    ],
    "realm_id": "",
    "notif_pool": "mys3.rgw.log:notif"
}
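
The pool:namespace notation above (for example mys3.rgw.meta:users.uid) refers to a RADOS namespace inside that pool. A sketch of peeking into such namespaces, assuming the pool names of this zone:

rados -p mys3.rgw.meta -N users.uid ls | head -5   # RGW user metadata objects
rados -p mys3.rgw.log -N gc ls | head -5           # garbage-collection objects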

CephFS mount options

Official documentation:
https://docs.ceph.com/en/reef/man/8/mount.ceph/?highlight=dns

recover_session=<no|clean>

Set auto reconnect mode in the case where the client is blocklisted. The available modes are no and clean. The default is no.

no: never attempt to reconnect when client detects that it has been blocklisted. Blocklisted clients will not attempt to reconnect and their operations will fail too.

clean: client reconnects to the Ceph cluster automatically when it detects that it has been blocklisted. During reconnect, client drops dirty data/metadata, invalidates page caches and writable file handles. After reconnect, file locks become stale because the MDS loses track of them. If an inode contains any stale file locks, read/write on the inode is not allowed until applications release all stale file locks.
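
A kernel-mount sketch that sets this option; the monitor address, client name and secret file path are placeholders:

mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,recover_session=clean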

List images in a pool

rbd ls -p replicapool
csi-snap-3af1fae6-f5ea-4390-b533-65e5749e2371
csi-vol-05025e3b-2e41-46b5-a6c2-5a4e1f4216a5
csi-vol-1a7eb761-e8e8-4a75-8cf3-dcf34b93d96a
csi-vol-2cba6bf7-4cc0-492a-9b82-0f3c4f4b7b35
csi-vol-2ef5219f-9b17-46b3-b8ff-565d39d21cc2
csi-vol-2f721cbc-68f8-484a-bf87-3a618fe6987c
csi-vol-30c50463-ad86-4d3c-9a96-bda6d1c449f4
csi-vol-4c3c2752-af71-402c-8332-47c1caade5d6
csi-vol-571d504a-6358-40b8-b4fa-d6363e87e46a-bak
csi-vol-68f6f81a-fe76-4c50-bfd9-980c1b249600
csi-vol-6971f688-d195-4862-a652-6a1ebd19f9bd
csi-vol-6a5a704d-7851-40bb-ae04-e1d083074997
csi-vol-70bc9b49-c4d1-4d13-9189-1b408dcba9f8
csi-vol-82721c30-d7e6-491a-bb25-25d38d48a4ee
csi-vol-8d088751-5b00-487a-8a92-bf56e24af398-bak
csi-vol-adfd99c7-b375-4cef-b8c6-728d2cc91665
csi-vol-cf145f83-b55c-4e21-9460-ccffcda1e142
csi-vol-cfd4cec3-e0e5-4daa-afa9-1e60513ca1f7
csi-vol-d3f4e281-4dc0-492e-a7b3-a45fb3b4cc68
csi-vol-da689ca0-99fe-4800-a266-22bcc638d13e
csi-vol-e56fa92b-3be5-4656-8896-a951506718c3
csi-vol-e5ae1c58-3d17-4202-9aac-0e222000eceb
csi-vol-ea426b9f-0582-4613-8602-b39a2e7e1ad1
csi-vol-eacda6e3-1941-46f0-bd60-e0cf5c1a9d76

List objects in an RBD pool

rados ls  -p replicapool|head -10
rbd_data.8478f5902ee4d6.00000000000172b6
rbd_data.59ba7f32e8d138.0000000000006cde
rbd_data.8478f5902ee4d6.00000000000292f1
rbd_data.59ba7f32e8d138.0000000000000a35
rbd_data.8478f5902ee4d6.0000000000018ba0
rbd_data.59ba7f32e8d138.0000000000033380
rbd_data.3ecccff586abee.00000000000042c9
rbd_data.8478f5902ee4d6.000000000002137f
rbd_data.8478f5902ee4d6.000000000002b366
rbd_data.59ba7f32e8d138.00000000000144ac

Find which PG an object maps to

ceph osd map replicapool rbd_data.8478f5902ee4d6.00000000000172b6

Output

osdmap e233278 pool 'replicapool' (18) object 'rbd_data.8478f5902ee4d6.00000000000172b6' -> pg 18.f1c00000 (18.0) -> up ([9,29], p9) acting ([9,29], p9)
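
The reverse lookup, from a PG to its OSDs, is also available; a sketch using the PG 18.0 reported above:

ceph pg map 18.0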

View image info

rbd info replicapool/csi-snap-3af1fae6-f5ea-4390-b533-65e5749e2371

Output

rbd image 'csi-snap-3af1fae6-f5ea-4390-b533-65e5749e2371':
	size 40 GiB in 10240 objects
	order 22 (4 MiB objects)
	snapshot_count: 1
	id: 8478f57a18329a
	block_name_prefix: rbd_data.8478f57a18329a
	format: 2
	features: layering, deep-flatten, operations
	op_features: clone-parent, clone-child, snap-trash
	flags:
	create_timestamp: Wed Jun  5 06:10:44 2024
	access_timestamp: Wed Jun  5 06:10:44 2024
	modify_timestamp: Wed Jun  5 06:10:44 2024
	parent: replicapool/csi-vol-cfd4cec3-e0e5-4daa-afa9-1e60513ca1f7@04933c7d-1b3c-4d05-8e2d-c6c46ad4a0f3
	overlap: 40 GiB
rbd info replicapool/csi-vol-05025e3b-2e41-46b5-a6c2-5a4e1f4216a5

Output

rbd image 'csi-vol-05025e3b-2e41-46b5-a6c2-5a4e1f4216a5':
	size 1 TiB in 262144 objects
	order 22 (4 MiB objects)
	snapshot_count: 0
	id: 89f487d8e91a4
	block_name_prefix: rbd_data.89f487d8e91a4
	format: 2
	features: layering
	op_features:
	flags:
	create_timestamp: Tue Nov 21 15:56:22 2023
	access_timestamp: Tue Nov 21 15:56:22 2023
	modify_timestamp: Tue Nov 21 15:56:22 2023

Change an OSD's device class

ceph osd crush rm-device-class osd.2
ceph osd crush set-device-class nvme osd.2
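
To verify the result, the CRUSH device classes and their members can be listed; a short sketch:

ceph osd crush class ls            # all device classes in the CRUSH map
ceph osd crush class ls-osd nvme   # OSDs assigned to the nvme class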

Remove an OSD from the cluster

ceph osd reweight osd.1 0
ceph osd crush rm osd.1
ceph osd out osd.1
Stop the OSD daemon on its host, then:
ceph osd rm osd.1
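
On recent releases, ceph osd purge combines the crush rm, auth del and osd rm steps. A sketch for osd.1; the systemd unit name is an assumption and does not apply to Rook-managed OSDs:

ceph osd out osd.1
systemctl stop ceph-osd@1                      # on the host that runs the OSD (assumed systemd unit)
ceph osd purge osd.1 --yes-i-really-mean-it    # removes the CRUSH entry, auth key and OSD id together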

Force-recreate a lost PG (any data in it is discarded)

ceph osd force-create-pg <pgid> --yes-i-really-mean-it
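
A usage sketch; the pgid (here 2.5, purely illustrative) would normally be taken from ceph pg ls output:

ceph pg ls incomplete                                 # find PGs stuck incomplete, if any
ceph osd force-create-pg 2.5 --yes-i-really-mean-it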

View configuration

bash-4.4$ ceph config get  mds

Output

WHO     MASK  LEVEL     OPTION                                VALUE         RO
global        basic     log_to_file                           true
global        advanced  mds_beacon_grace                      360.000000
global        basic     mds_cache_memory_limit                107374182400
mds           advanced  mds_cache_trim_decay_rate             0.100000
mds           advanced  mds_cache_trim_threshold              2560000
mds           advanced  mds_log_max_segments                  2048
global        advanced  mon_allow_pool_delete                 true
global        advanced  mon_allow_pool_size_one               true
global        advanced  mon_cluster_log_file
global        advanced  mon_data_avail_warn                   20
global        advanced  osd_fast_shutdown                     false
global        advanced  osd_op_thread_suicide_timeout         900
global        advanced  osd_op_thread_timeout                 300
global        basic     rgw_dynamic_resharding                false
global        basic     rgw_max_concurrent_requests           8192
global        advanced  rgw_max_dynamic_shards                9973
global        dev       rgw_override_bucket_index_max_shards  9973
bash-4.4$ ceph config get  mds mds_cache_memory_limit

Output

107374182400
bash-4.4$ ceph config show mon.a mds_cache_memory_limit

Output

107374182400
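
config get reads the monitor's central configuration database, while config show reports what a running daemon is actually using. Values are changed with config set and removed with config rm; a sketch reusing the option above:

ceph config set mds mds_cache_memory_limit 107374182400   # persist in the monitor config database
ceph config rm mds mds_cache_memory_limit                 # drop the override again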

Common bucket commands

View realms

radosgw-admin realm list

Output

{
    "default_info": "43c462f5-5634-496e-ad4e-978d28c2x9090",
    "realms": [
        "myrgw"
    ]
}
radosgw-admin realm get
{
    "id": "2cfc7b36-43b6-4a9b-a89e-2a2264f54733",
    "name": "mys3",
    "current_period": "4999b859-83e2-42f9-8d3c-c7ae4b9685ff",
    "epoch": 2
}

View zonegroups

radosgw-admin zonegroups list

or

radosgw-admin zonegroup list

Output

{
    "default_info": "f3a96381-12e2-4e7e-8221-c1d79708bc59",
    "zonegroups": [
        "myrgw"
    ]
}
radosgw-admin zonegroup get
{
    "id": "ad97bbae-61f1-41cb-a585-d10dd54e86e4",
    "name": "mys3",
    "api_name": "mys3",
    "is_master": "true",
    "endpoints": [
        "http://rook-ceph-rgw-mys3.rook-ceph.svc:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "zones": [
        {
            "id": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
            "name": "mys3",
            "endpoints": [
                "http://rook-ceph-rgw-mys3.rook-ceph.svc:80"
            ],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "2cfc7b36-43b6-4a9b-a89e-2a2264f54733",
    "sync_policy": {
        "groups": []
    }
}

View zones

radosgw-admin zone list
{
    "default_info": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "zones": [
        "mys3",
        "default"
    ]
}
radosgw-admin zone get
{
    "id": "9b5c0c9f-541d-4176-8527-89b4dae02ac2",
    "name": "mys3",
    "domain_root": "mys3.rgw.meta:root",
    "control_pool": "mys3.rgw.control",
    "gc_pool": "mys3.rgw.log:gc",
    "lc_pool": "mys3.rgw.log:lc",
    "log_pool": "mys3.rgw.log",
    "intent_log_pool": "mys3.rgw.log:intent",
    "usage_log_pool": "mys3.rgw.log:usage",
    "roles_pool": "mys3.rgw.meta:roles",
    "reshard_pool": "mys3.rgw.log:reshard",
    "user_keys_pool": "mys3.rgw.meta:users.keys",
    "user_email_pool": "mys3.rgw.meta:users.email",
    "user_swift_pool": "mys3.rgw.meta:users.swift",
    "user_uid_pool": "mys3.rgw.meta:users.uid",
    "otp_pool": "mys3.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "mys3.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "mys3.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "mys3.rgw.buckets.non-ec",
                "index_type": 0,
                "inline_data": "true"
            }
        }
    ],
    "realm_id": "",
    "notif_pool": "mys3.rgw.log:notif"
}

List bucket names

radosgw-admin bucket list

Show detailed information for a specific bucket
Note: this includes the bucket ID, the number of objects, storage quotas, and more.

radosgw-admin bucket stats --bucket=ceph-bkt-9
{
    "bucket": "ceph-bkt-9",
    "num_shards": 9973,
    "tenant": "",
    "zonegroup": "f3a96381-12e2-4e7e-8221-c1d79708bc59",
    "placement_rule": "default-placement",
    "explicit_placement": {
        "data_pool": "",
        "data_extra_pool": "",
        "index_pool": ""
    },
    "id": "af3f8be9-99ee-44b7-9d17-5b616dca80ff.45143.53",
    "marker": "af3f8be9-99ee-44b7-9d17-5b616dca80ff.45143.53",
    "index_type": "Normal",
    "owner": "mys3-juicefs",
    
    "ver": "0#536,1#475,省略",
    "master_ver": "0#0,1#0,2#0,3#0,4#0,省略",
    "mtime": "0.000000",
    "creation_time": "2023-11-03T16:58:09.692764Z",
    "max_marker": "0#,1#,2#,3#,省略",
    "usage": {
        "rgw.main": {
            "size": 88057775893,
            "size_actual": 99102711808,
            "size_utilized": 88057775893,
            "size_kb": 85993922,
            "size_kb_actual": 96779992,
            "size_kb_utilized": 85993922,
            "num_objects": 4209803
        }
    },
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    }
}
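
To see the objects inside the bucket rather than its statistics, the same tool can enumerate them; a sketch limited to a few entries:

radosgw-admin bucket list --bucket=ceph-bkt-9 --max-entries=5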

View bucket configuration, such as the index shard count

radosgw-admin bucket limit check

Note: the output is long, so only the first 50 lines are shown.
radosgw-admin bucket limit check|head -50

[
    {
        "user_id": "dashboard-admin",
        "buckets": []
    },
    {
        "user_id": "obc-default-ceph-bkt-openbayes-juicefs-6a2b2c57-d393-4529-8620-c0af6c9c30f8",
        "buckets": [
            {
                "bucket": "ceph-bkt-20d5f58a-7501-4084-baca-98d9e68a7e57",
                "tenant": "",
                "num_objects": 355,
                "num_shards": 11,
                "objects_per_shard": 32,
                "fill_status": "OK"
            }
        ]
    },
    {
        "user_id": "rgw-admin-ops-user",
        "buckets": []
    },
    {
        "user_id": "mys3-user",
        "buckets": [
            {
                "bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c0",
                "tenant": "",
                "num_objects": 80,
                "num_shards": 11,
                "objects_per_shard": 7,
                "fill_status": "OK"
            },
            {
                "bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c1",
                "tenant": "",
                "num_objects": 65,
                "num_shards": 11,
                "objects_per_shard": 5,
                "fill_status": "OK"
            },
            {
                "bucket": "ceph-bkt-caa8a9d1-c278-4015-ba2d-354e142c10",
                "tenant": "",
                "num_objects": 83,
                "num_shards": 11,
                "objects_per_shard": 7,
                "fill_status": "OK"
            },
            {

Commands for checking storage usage

ceph df

Output

--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    900 GiB  834 GiB  66 GiB    66 GiB       7.29
TOTAL  900 GiB  834 GiB  66 GiB    66 GiB       7.29
 
--- POOLS ---
POOL                     ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                      1    1  449 KiB        2  1.3 MiB      0    255 GiB
replicapool               2   32   19 GiB    5.87k   56 GiB   6.79    255 GiB
myfs-metadata             3   16   34 MiB       33  103 MiB   0.01    255 GiB
myfs-replicated           4   32  1.9 MiB        9  5.8 MiB      0    255 GiB
.rgw.root                26    8  5.6 KiB       20  152 KiB      0    383 GiB
default.rgw.log          27   32    182 B        2   24 KiB      0    255 GiB
default.rgw.control      28   32      0 B        8      0 B      0    255 GiB
default.rgw.meta         29   32      0 B        0      0 B      0    255 GiB
ceph osd df

Output

ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE  VAR   PGS  STATUS
 0    hdd  0.09769   1.00000  100 GiB  6.4 GiB  4.8 GiB  3.2 MiB  1.5 GiB   94 GiB  6.35  0.87   89      up
 3    hdd  0.19530   1.00000  200 GiB   15 GiB   14 GiB   42 MiB  1.1 GiB  185 GiB  7.61  1.04  152      up
 1    hdd  0.09769   1.00000  100 GiB  7.3 GiB  5.3 GiB  1.5 MiB  1.9 GiB   93 GiB  7.27  1.00   78      up
 4    hdd  0.19530   1.00000  200 GiB   15 GiB   14 GiB  4.2 MiB  1.1 GiB  185 GiB  7.32  1.00  157      up
 2    hdd  0.09769   1.00000  100 GiB  9.9 GiB  7.6 GiB  1.2 MiB  2.3 GiB   90 GiB  9.94  1.36   73      up
 5    hdd  0.19530   1.00000  200 GiB   12 GiB   11 GiB   43 MiB  1.1 GiB  188 GiB  6.18  0.85  158      up
                       TOTAL  900 GiB   66 GiB   57 GiB   95 MiB  9.1 GiB  834 GiB  7.31                   
MIN/MAX VAR: 0.85/1.36  STDDEV: 1.24
rados df
POOL_NAME                   USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED    RD_OPS       RD     WR_OPS       WR  USED COMPR  UNDER COMPR
.mgr                     1.3 MiB        2       0       6                   0        0         0   2696928  5.5 GiB     563117   29 MiB         0 B          0 B
.rgw.root                152 KiB       20       0      40                   0        0         0       428  443 KiB         10    7 KiB         0 B          0 B
default.rgw.control          0 B        8       0      24                   0        0         0         0      0 B          0      0 B         0 B          0 B
default.rgw.log           24 KiB        2       0       6                   0        0         0         0      0 B          0      0 B         0 B          0 B
default.rgw.meta             0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
myfs-metadata            103 MiB       33       0      99                   0        0         0  18442579   10 GiB     272672  194 MiB         0 B          0 B
myfs-replicated          5.8 MiB        9       0      27                   0        0         0        24   24 KiB         33  1.9 MiB         0 B          0 B
mys3.rgw.buckets.data    307 MiB    18493       0   36986                   0        0         0    767457  942 MiB    2713288  1.2 GiB         0 B          0 B
mys3.rgw.buckets.index    20 MiB     2827       0    5654                   0        0         0   7299856  6.2 GiB    1208180  598 MiB         0 B          0 B
mys3.rgw.buckets.non-ec      0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
mys3.rgw.control             0 B        8       0      16                   0        0         0         0      0 B          0      0 B         0 B          0 B
mys3.rgw.log              76 MiB      342       0     684                   0        0         0   4944901  4.5 GiB    3764847  1.1 GiB         0 B          0 B
mys3.rgw.meta            4.3 MiB      526       0    1052                   0        0         0   4617928  3.8 GiB     658074  321 MiB         0 B          0 B
mys3.rgw.otp                 0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
replicapool               56 GiB     5873       0   17619                   0        0         0   4482521   65 GiB  132312964  1.3 TiB         0 B          0 B

total_objects    28143
total_used       65 GiB
total_avail      835 GiB
total_space      900 GiB
ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         0.87895  root default                              
-5         0.29298      host node01                           
 0    hdd  0.09769          osd.0        up   1.00000  1.00000
 3    hdd  0.19530          osd.3        up   1.00000  1.00000
-3         0.29298      host node02                           
 1    hdd  0.09769          osd.1        up   1.00000  1.00000
 4    hdd  0.19530          osd.4        up   1.00000  1.00000
-7         0.29298      host node03                           
 2    hdd  0.09769          osd.2        up   1.00000  1.00000
 5    hdd  0.19530          osd.5        up   1.00000  1.00000
ceph osd find 1

Note: 1 is the OSD ID.

{
    "osd": 1,
    "addrs": {
        "addrvec": [
            {
                "type": "v2",
                "addr": "10.96.12.109:6800",
                "nonce": 701714258
            },
            {
                "type": "v1",
                "addr": "10.96.12.109:6801",
                "nonce": 701714258
            }
        ]
    },
    "osd_fsid": "9b165ff1-1116-4dd8-ab04-59abb6e5e3b5",
    "host": "node02",
    "pod_name": "rook-ceph-osd-1-5cd7b7fd9b-pq76v",
    "pod_namespace": "rook-ceph",
    "crush_location": {
        "host": "node02",
        "root": "default"
    }
}

Common PG commands

ceph pg ls-by-osd 0|head -20
PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES      OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION        REPORTED       UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
2.5        185         0          0        0  682401810           75           8  4202  active+clean     7h  89470'3543578  89474:3900145  [2,4,0]p2  [2,4,0]p2  2023-11-15T01:09:50.285147+0000  2023-11-15T01:09:50.285147+0000                    4  periodic scrub scheduled @ 2023-11-16T12:33:31.356087+0000     
2.9        178         0          0        0  633696256          423          13  2003  active+clean   117m  89475'2273592  89475:2445503  [4,0,5]p4  [4,0,5]p4  2023-11-15T06:22:49.007240+0000  2023-11-12T21:00:41.277161+0000                    1  periodic scrub scheduled @ 2023-11-16T13:29:45.298222+0000     
2.c        171         0          0        0  607363106          178          12  4151  active+clean    14h  89475'4759653  89475:4985220  [2,4,0]p2  [2,4,0]p2  2023-11-14T17:41:46.959311+0000  2023-11-13T07:10:45.084379+0000                    1  periodic scrub scheduled @ 2023-11-15T23:58:48.840924+0000     
2.f        174         0          0        0  641630226          218           8  4115  active+clean    12h  89475'4064519  89475:4177515  [2,0,4]p2  [2,0,4]p2  2023-11-14T20:11:34.002882+0000  2023-11-13T13:19:50.306895+0000                    1  periodic scrub scheduled @ 2023-11-16T02:52:50.646390+0000     
2.11       172         0          0        0  637251602            0           0  3381  active+clean     7h  89475'4535730  89475:4667861  [0,4,5]p0  [0,4,5]p0  2023-11-15T00:41:28.325584+0000  2023-11-08T22:50:59.120985+0000                    1  periodic scrub scheduled @ 2023-11-16T05:10:15.810837+0000     
2.13       198         0          0        0  762552338          347          19  1905  active+clean     5h  89475'6632536  89475:6895777  [5,0,4]p5  [5,0,4]p5  2023-11-15T03:06:33.483129+0000  2023-11-15T03:06:33.483129+0000                    5  periodic scrub scheduled @ 2023-11-16T10:29:19.975736+0000     
2.16       181         0          0        0  689790976           75           8  3427  active+clean    18h  89475'5897648  89475:6498260  [0,2,1]p0  [0,2,1]p0  2023-11-14T14:07:00.475337+0000  2023-11-13T08:59:03.104478+0000                    1  periodic scrub scheduled @ 2023-11-16T01:55:30.581835+0000     
2.1b       181         0          0        0  686268416          437          16  1956  active+clean     5h  89475'4001434  89475:4376306  [5,0,4]p5  [5,0,4]p5  2023-11-15T02:36:36.002761+0000  2023-11-15T02:36:36.002761+0000                    4  periodic scrub scheduled @ 2023-11-16T09:15:09.271395+0000     
3.2          0         0          0        0          0            0           0    68  active+clean     4h       67167'68    89474:84680  [4,5,0]p4  [4,5,0]p4  2023-11-15T04:01:14.378817+0000  2023-11-15T04:01:14.378817+0000                    1  periodic scrub scheduled @ 2023-11-16T09:26:55.350003+0000     
3.3          2         0          0        0         34         4880          10    71  active+clean     6h       71545'71    89474:97438  [0,4,5]p0  [0,4,5]p0  2023-11-15T01:55:57.633258+0000  2023-11-12T07:28:22.391454+0000                    1  periodic scrub scheduled @ 2023-11-16T02:46:05.613867+0000     
3.6          1         0          0        0          0            0           0  1987  active+clean    91m    89475'54154   89475:145435  [4,0,5]p4  [4,0,5]p4  2023-11-15T06:48:38.818739+0000  2023-11-08T20:05:08.257800+0000                    1  periodic scrub scheduled @ 2023-11-16T15:08:59.546203+0000     
3.8          0         0          0        0          0            0           0    44  active+clean    16h       83074'44    89474:84245  [5,1,0]p5  [5,1,0]p5  2023-11-14T15:26:04.057142+0000  2023-11-13T03:51:42.271364+0000                    1  periodic scrub scheduled @ 2023-11-15T19:49:15.168863+0000     
3.b          3         0          0        0    8388608            0           0  2369  active+clean    24h    29905'26774  89474:3471652  [4,0,5]p4  [4,0,5]p4  2023-11-14T07:50:38.682896+0000  2023-11-10T20:06:19.530705+0000                    1  periodic scrub scheduled @ 2023-11-15T12:35:50.298157+0000     
3.f          4         0          0        0    4194880            0           0  4498  active+clean    15h    42287'15098   89474:905369  [0,5,4]p0  [0,5,4]p0  2023-11-14T17:15:38.681549+0000  2023-11-10T14:00:49.535978+0000                    1  periodic scrub scheduled @ 2023-11-15T22:26:56.705010+0000     
4.6          0         0          0        0          0            0           0   380  active+clean     2h      20555'380    89474:84961  [5,1,0]p5  [5,1,0]p5  2023-11-15T05:29:28.833076+0000  2023-11-09T09:41:36.198863+0000                    1  periodic scrub scheduled @ 2023-11-16T11:28:34.901957+0000     
4.a          0         0          0        0          0            0           0   274  active+clean    16h      20555'274    89474:91274  [0,1,2]p0  [0,1,2]p0  2023-11-14T16:09:50.743410+0000  2023-11-14T16:09:50.743410+0000                    1  periodic scrub scheduled @ 2023-11-15T18:12:35.709178+0000     
4.b          0         0          0        0          0            0           0   352  active+clean     6h      20555'352    89474:85072  [4,0,5]p4  [4,0,5]p4  2023-11-15T01:49:06.361454+0000  2023-11-12T12:44:50.143887+0000                    7  periodic scrub scheduled @ 2023-11-16T03:42:06.193542+0000     
4.10         1         0          0        0       2474            0           0   283  active+clean    17h      20555'283    89474:89904  [4,2,0]p4  [4,2,0]p4  2023-11-14T14:57:49.174637+0000  2023-11-09T18:56:58.241925+0000                    1  periodic scrub scheduled @ 2023-11-16T02:56:36.556523+0000     
4.14         0         0          0        0          0            0           0   304  active+clean    33h      20555'304    89474:85037  [5,1,0]p5  [5,1,0]p5  2023-11-13T22:55:24.034723+0000  2023-11-11T09:51:00.248512+0000                    1  periodic scrub scheduled @ 2023-11-15T09:18:55.094605+0000
ceph pg ls-by-pool myfs-replicated|head -10
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES   OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION    REPORTED     UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
4.0         0         0          0        0       0            0           0  294  active+clean    14m  20555'294  89474:89655  [4,3,5]p4  [4,3,5]p4  2023-11-15T08:06:30.504646+0000  2023-11-11T14:10:37.423797+0000                    1  periodic scrub scheduled @ 2023-11-16T20:00:48.189584+0000     
4.1         0         0          0        0       0            0           0  282  active+clean    19h  20555'282  89474:91316  [2,3,4]p2  [2,3,4]p2  2023-11-14T13:11:39.095045+0000  2023-11-08T02:29:45.827302+0000                    1  periodic deep scrub scheduled @ 2023-11-15T23:05:45.143337+0000
4.2         0         0          0        0       0            0           0  228  active+clean    30h  20555'228  89474:84866  [5,3,4]p5  [5,3,4]p5  2023-11-14T01:51:16.091750+0000  2023-11-14T01:51:16.091750+0000                    1  periodic scrub scheduled @ 2023-11-15T13:37:08.420266+0000     
4.3         0         0          0        0       0            0           0  228  active+clean    12h  20555'228  89474:91622  [2,3,1]p2  [2,3,1]p2  2023-11-14T19:23:46.585302+0000  2023-11-07T22:06:51.216573+0000                    1  periodic deep scrub scheduled @ 2023-11-16T02:02:54.588932+0000
4.4         1         0          0        0    2474            0           0  236  active+clean    18h  20555'236  89474:35560  [1,5,3]p1  [1,5,3]p1  2023-11-14T13:42:45.498057+0000  2023-11-10T13:03:03.664431+0000                    1  periodic scrub scheduled @ 2023-11-15T22:08:15.399060+0000     
4.5         0         0          0        0       0            0           0  171  active+clean    23h  20555'171  89474:88153  [3,5,1]p3  [3,5,1]p3  2023-11-14T09:01:04.687468+0000  2023-11-09T23:45:29.913888+0000                    6  periodic scrub scheduled @ 2023-11-15T13:08:21.849161+0000     
4.6         0         0          0        0       0            0           0  380  active+clean     2h  20555'380  89474:84961  [5,1,0]p5  [5,1,0]p5  2023-11-15T05:29:28.833076+0000  2023-11-09T09:41:36.198863+0000                    1  periodic scrub scheduled @ 2023-11-16T11:28:34.901957+0000     
4.7         0         0          0        0       0            0           0  172  active+clean    18h  20555'172  89474:77144  [1,5,3]p1  [1,5,3]p1  2023-11-14T13:52:17.458837+0000  2023-11-09T16:56:57.755836+0000                   17  periodic scrub scheduled @ 2023-11-16T01:10:07.099940+0000     
4.8         0         0          0        0       0            0           0  272  active+clean    15h  20555'272  89474:84994  [5,3,4]p5  [5,3,4]p5  2023-11-14T17:14:47.534009+0000  2023-11-14T17:14:47.534009+0000                    1  periodic scrub scheduled @ 2023-11-15T19:30:59.254042+0000 
ceph pg ls-by-primary 0|head -10
PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES      OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION        REPORTED       UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                               
2.11       172         0          0        0  637251602            0           0  3375  active+clean     7h  89475'4536024  89475:4668155  [0,4,5]p0  [0,4,5]p0  2023-11-15T00:41:28.325584+0000  2023-11-08T22:50:59.120985+0000                    1  periodic scrub scheduled @ 2023-11-16T05:10:15.810837+0000     
2.16       181         0          0        0  689790976           75           8  3380  active+clean    18h  89475'5898101  89475:6498713  [0,2,1]p0  [0,2,1]p0  2023-11-14T14:07:00.475337+0000  2023-11-13T08:59:03.104478+0000                    1  periodic scrub scheduled @ 2023-11-16T01:55:30.581835+0000     
3.3          2         0          0        0         34         4880          10    71  active+clean     6h       71545'71    89474:97438  [0,4,5]p0  [0,4,5]p0  2023-11-15T01:55:57.633258+0000  2023-11-12T07:28:22.391454+0000                    1  periodic scrub scheduled @ 2023-11-16T02:46:05.613867+0000     
3.f          4         0          0        0    4194880            0           0  4498  active+clean    15h    42287'15098   89474:905369  [0,5,4]p0  [0,5,4]p0  2023-11-14T17:15:38.681549+0000  2023-11-10T14:00:49.535978+0000                    1  periodic scrub scheduled @ 2023-11-15T22:26:56.705010+0000     
4.a          0         0          0        0          0            0           0   274  active+clean    16h      20555'274    89474:91274  [0,1,2]p0  [0,1,2]p0  2023-11-14T16:09:50.743410+0000  2023-11-14T16:09:50.743410+0000                    1  periodic scrub scheduled @ 2023-11-15T18:12:35.709178+0000     
4.1b         0         0          0        0          0            0           0   188  active+clean     9h      20572'188    89474:60345  [0,4,5]p0  [0,4,5]p0  2023-11-14T22:45:32.243017+0000  2023-11-09T15:22:58.954604+0000                   15  periodic scrub scheduled @ 2023-11-16T05:26:22.970008+0000     
26.0         4         0          0        0       2055            0           0     4  active+clean    16h       74696'14    89474:22375    [0,5]p0    [0,5]p0  2023-11-14T16:07:57.126669+0000  2023-11-09T12:57:29.272721+0000                    1  periodic scrub scheduled @ 2023-11-15T17:12:43.441862+0000     
26.3         1         0          0        0        104            0           0     1  active+clean    10h        74632'8    89474:22487    [0,4]p0    [0,4]p0  2023-11-14T21:43:19.284917+0000  2023-11-11T13:26:08.679346+0000                    1  periodic scrub scheduled @ 2023-11-16T01:39:45.617371+0000     
27.5         1         0          0        0        154            0           0     2  active+clean    23h        69518'2    89474:22216  [0,4,2]p0  [0,4,2]p0  2023-11-14T08:56:33.324158+0000  2023-11-10T23:46:33.688281+0000                    1  periodic scrub scheduled @ 2023-11-15T20:32:30.759743+0000   
ceph osd perf
osd  commit_latency(ms)  apply_latency(ms)
  5                   2                  2
  4                   2                  2
  3                   2                  2
  2                   0                  0
  0                   0                  0
  1                   1                  1

Common OSD commands

Find which host and which device an OSD belongs to

ceph device ls-by-daemon osd.0
DEVICE                    HOST:DEV      EXPECTED FAILURE
WUS4BB076D7P3E3_A065CEBE  bjm1:nvme3n1

ceph osd tree shows which host an OSD belongs to, but not which device it uses.
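
The reverse view, from a host to its devices, is also available; a sketch using a host name that appears in the outputs above:

ceph device ls                  # all devices the cluster knows about
ceph device ls-by-host node02   # devices on a single host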

More PG commands

List the PGs whose primary is osd.0

ceph pg ls-by-primary osd.0

Output

PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES       OMAP_BYTES*  OMAP_KEYS*  LOG   STATE         SINCE  VERSION     REPORTED    UP       ACTING   SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING
2.d        486         0          0        0  1851801600          327          10  6060  active+clean    16h  634'320860  634:323529  [0,1]p0  [0,1]p0  2024-03-04T18:58:04.731458+0000  2024-02-28T08:28:46.796845+0000                    1  periodic scrub scheduled @ 2024-03-06T03:20:01.981312+0000
2.13       504         0          0        0  1953812480            0           0  6059  active+clean     7h  634'555759  634:558967  [0,5]p0  [0,5]p0  2024-03-05T03:43:18.459383+0000  2024-02-28T22:54:58.015477+0000                    1  periodic scrub scheduled @ 2024-03-06T13:07:06.584886+0000

List the PGs in the pool named replicapool

ceph osd pool ls

Output

.mgr
replicapool
myfs-metadata
myfs-replicated
.rgw.root
mys3.rgw.otp
mys3.rgw.control
mys3.rgw.buckets.index
mys3.rgw.log
mys3.rgw.buckets.non-ec
mys3.rgw.meta
mys3.rgw.buckets.data
ceph pg ls-by-pool replicapool|head -10
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES       OMAP_BYTES*  OMAP_KEYS*  LOG    STATE         SINCE  VERSION     REPORTED     UP       ACTING   SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING
2.0       463         0          0        0  1775915008          111           4   5459  active+clean    31h  634'329959   634:332238  [3,6]p3  [3,6]p3  2024-03-04T04:11:24.664590+0000  2024-03-03T00:45:00.484450+0000                    1  periodic scrub scheduled @ 2024-03-05T12:59:36.963354+0000
2.1       473         0          0        0  1864052736           74           8   7201  active+clean     5h  634'450901   634:682328  [6,5]p6  [6,5]p6  2024-03-05T05:44:07.407309+0000  2024-03-05T05:44:07.407309+0000                    4  periodic scrub scheduled @ 2024-03-06T12:14:38.205134+0000
2.2       485         0          0        0  1877909504            0           0   7318  active+clean    22h  634'480218   634:483142  [5,6]p5  [5,6]p5  2024-03-04T12:58:13.639970+0000  2024-03-02T10:59:17.829348+0000                    1  periodic scrub scheduled @ 2024-03-05T15:46:46.116627+0000
2.3       482         0          0        0  1843380224           74           8   7341  active+clean    12h  634'289141   634:520714  [5,4]p5  [5,4]p5  2024-03-04T22:49:43.804542+0000  2024-03-04T22:49:43.804542+0000                    4  periodic scrub scheduled @ 2024-03-06T02:22:58.428672+0000
2.4       487         0          0        0  1903738880            0           0   7183  active+clean    24h  634'727483   634:730381  [6,1]p6  [6,1]p6  2024-03-04T11:15:03.835882+0000  2024-03-03T02:16:19.520219+0000                    1  periodic scrub scheduled @ 2024-03-05T20:58:01.298985+0000
2.5       479         0          0        0  1867280384           74           8   6990  active+clean    11h  634'513990   634:745823  [7,4]p7  [7,4]p7  2024-03-04T23:20:06.240605+0000  2024-02-29T10:15:43.728365+0000                    1  periodic scrub scheduled @ 2024-03-06T05:07:29.609897+0000
2.6       486         0          0        0  1850253312          175          12   7560  active+clean     3h  634'731960   634:957950  [2,3]p2  [2,3]p2  2024-03-05T08:01:00.169060+0000  2024-02-29T12:27:22.576914+0000                    1  periodic scrub scheduled @ 2024-03-06T16:30:36.301055+0000
2.7       442         0          0        0  1702776848            0           0   5487  active+clean    10h  634'416887   634:419915  [3,6]p3  [3,6]p3  2024-03-05T00:21:31.959246+0000  2024-02-28T02:32:58.771028+0000                    1  periodic scrub scheduled @ 2024-03-06T03:13:59.737946+0000
2.8       492         0          0        0  1918148608            0           0   5531  active+clean     7h  634'466931   634:469953  [3,0]p3  [3,0]p3  2024-03-05T03:42:29.646563+0000  2024-03-03T20:05:00.780896+0000                    1  periodic scrub scheduled @ 2024-03-06T13:30:44.616937+0000

List the PGs on osd.0

ceph pg ls-by-osd osd.0|head -10

Output

PG     OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES       OMAP_BYTES*  OMAP_KEYS*  LOG    STATE         SINCE  VERSION     REPORTED    UP       ACTING   SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING
1.0          2         0          0        0     2032160            0           0   5289  active+clean    10h    633'5289    633:8489  [7,0]p7  [7,0]p7  2024-03-05T00:54:43.164368+0000  2024-03-01T03:23:37.094644+0000                    1  periodic scrub scheduled @ 2024-03-06T02:45:01.484116+0000
2.8        492         0          0        0  1918148608            0           0   5532  active+clean     7h  634'466932  634:469954  [3,0]p3  [3,0]p3  2024-03-05T03:42:29.646563+0000  2024-03-03T20:05:00.780896+0000                    1  periodic scrub scheduled @ 2024-03-06T13:30:44.616937+0000
2.d        486         0          0        0  1851801600          327          10   6069  active+clean    16h  634'320869  634:323538  [0,1]p0  [0,1]p0  2024-03-04T18:58:04.731458+0000  2024-02-28T08:28:46.796845+0000                    1  periodic scrub scheduled @ 2024-03-06T03:20:01.981312+0000