Quickly Deploying a Ceph Cluster on CentOS 7

A Ceph Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and Ceph OSD (Object Storage Daemon).

  • Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
  • Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based modules to manage and expose Ceph cluster information, including a web-based Ceph Dashboard and REST API. At least two managers are normally required for high availability.
  • Ceph OSDs: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.
  • MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like ls, find, etc.) without placing an enormous burden on the Ceph Storage Cluster.

Set hostnames:

# Log in to each node and set its hostname
[root@cs-mgnt ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.56 cs-client.lavenliu.cn cs-client
192.168.2.66 cs-mgnt.lavenliu.cn cs-mgnt
192.168.2.81 cs-node01.lavenliu.cn cs-node01
192.168.2.82 cs-node02.lavenliu.cn cs-node02
192.168.2.83 cs-node03.lavenliu.cn cs-node03

hostnamectl set-hostname cs-mgnt.lavenliu.cn
hostnamectl set-hostname cs-node01.lavenliu.cn
hostnamectl set-hostname cs-node02.lavenliu.cn
hostnamectl set-hostname cs-node03.lavenliu.cn
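
The same /etc/hosts file should exist on every node. A minimal sketch for pushing it from cs-mgnt, assuming root SSH access to the other hosts is already available:

# Copy the shared hosts file from cs-mgnt to the remaining nodes
for h in cs-node01 cs-node02 cs-node03 cs-client; do
    scp /etc/hosts ${h}:/etc/hosts
done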

Configure NTP time synchronization:

# Add a cron job on each machine to sync the clock every 5 minutes
# echo '*/5 * * * * /usr/sbin/ntpdate time6.aliyun.com &> /dev/null' >> /var/spool/cron/root
# systemctl restart crond
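
The cron job relies on the ntpdate client being present; a quick sketch to install it and run a one-off sync on each node, assuming the base repositories provide the package:

# Install ntpdate and do an initial time sync
yum install -y ntpdate
/usr/sbin/ntpdate time6.aliyun.com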

Configure the YUM repositories:

sudo subscription-manager repos --enable=rhel-7-server-extras-rpms  # RHEL only; may not succeed
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
sudo yum update -y
sudo yum install ceph-deploy -y
[root@cs-mgnt ~]# scp /etc/yum.repos.d/ceph.repo cs-node01:/etc/yum.repos.d/
[root@cs-mgnt ~]# scp /etc/yum.repos.d/ceph.repo cs-node02:/etc/yum.repos.d/
[root@cs-mgnt ~]# scp /etc/yum.repos.d/ceph.repo cs-node03:/etc/yum.repos.d/
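
ceph-deploy drives the other nodes over passwordless SSH (the deployment log below notes "making sure passwordless SSH succeeds"), so the admin node's key has to be distributed first. A minimal sketch, assuming everything runs as root:

# On cs-mgnt: generate a key pair once and copy it to every node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in cs-node01 cs-node02 cs-node03 cs-client; do
    ssh-copy-id root@${h}
done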

Environment Preparation

Hostname                IP Address      Role
cs-mgnt.lavenliu.cn     192.168.2.66    management node
cs-node01.lavenliu.cn   192.168.2.81    compute node
cs-node02.lavenliu.cn   192.168.2.82    compute node
cs-node03.lavenliu.cn   192.168.2.83    compute node

Add virtual disks:

[root@office-kvm-003 kvm_vms]# virsh list |grep zstack
 60    zstack_192_168_2_66            running
 66    zstack_compute_192_168_2_81    running
 68    zstack_compute_192_168_2_82    running
 72    zstack_compute_192_168_2_83    running
## Create two additional virtual disks for each VM
# cd /home/kvm_vms
# qemu-img create -f qcow2 ceph-2-66-disk1-100G 100G
# qemu-img create -f qcow2 ceph-2-66-disk2-100G 100G

# qemu-img create -f qcow2 ceph-2-81-disk1-100G 100G
# qemu-img create -f qcow2 ceph-2-81-disk2-100G 100G

# qemu-img create -f qcow2 ceph-2-82-disk1-100G 100G
# qemu-img create -f qcow2 ceph-2-82-disk2-100G 100G

# qemu-img create -f qcow2 ceph-2-83-disk1-100G 100G
# qemu-img create -f qcow2 ceph-2-83-disk2-100G 100G

virsh attach-disk zstack_192_168_2_66 \
--source /home/kvm_vms/ceph-2-66-disk1-100G \
--target vdb \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_192_168_2_66 \
--source /home/kvm_vms/ceph-2-66-disk2-100G \
--target vdc \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_81 \
--source /home/kvm_vms/ceph-2-81-disk1-100G \
--target vdb \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_81 \
--source /home/kvm_vms/ceph-2-81-disk2-100G \
--target vdc \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_82 \
--source /home/kvm_vms/ceph-2-82-disk1-100G \
--target vdb \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_82 \
--source /home/kvm_vms/ceph-2-82-disk2-100G \
--target vdc \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_83 \
--source /home/kvm_vms/ceph-2-83-disk1-100G \
--target vdb \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent

virsh attach-disk zstack_compute_192_168_2_83 \
--source /home/kvm_vms/ceph-2-83-disk2-100G \
--target vdc \
--cache none \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent
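
After attaching, the two new disks should appear inside each guest as /dev/vdb and /dev/vdc; a quick check on any node:

# Confirm the two extra 100G virtio disks are visible inside the VM
lsblk -d -o NAME,SIZE,TYPE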

Start the deployment (ceph-deploy is run from the /root/my-cluster working directory on cs-mgnt):

[root@cs-mgnt my-cluster]# ceph-deploy new cs-node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new cs-node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f41a3ea6320>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f41a360ed88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['cs-node01']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[cs-node01][DEBUG ] connected to host: cs-mgnt.lavenliu.cn 
[cs-node01][INFO  ] Running command: ssh -CT -o BatchMode=yes cs-node01
[cs-node01][DEBUG ] connected to host: cs-node01 
[cs-node01][DEBUG ] detect platform information from remote host
[cs-node01][DEBUG ] detect machine type
[cs-node01][DEBUG ] find the location of an executable
[cs-node01][INFO  ] Running command: /usr/sbin/ip link show
[cs-node01][INFO  ] Running command: /usr/sbin/ip addr show
[cs-node01][DEBUG ] IP addresses found: [u'192.168.2.81', u'169.254.0.1', u'192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host cs-node01
[ceph_deploy.new][DEBUG ] Monitor cs-node01 at 192.168.2.81
[ceph_deploy.new][DEBUG ] Monitor initial members are ['cs-node01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.2.81']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
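
The log above only covers ceph-deploy new. Before the OSDs can be created, the Ceph packages, the initial monitor, the admin keyring, and the first manager also need to be deployed. A sketch of those intermediate steps, following the standard ceph-deploy workflow (package installation may differ slightly depending on the configured repositories):

# Install Ceph packages on the storage nodes
ceph-deploy install cs-node01 cs-node02 cs-node03
# Deploy the initial monitor(s) and gather the keys
ceph-deploy mon create-initial
# Push ceph.conf and the admin keyring to the nodes
ceph-deploy admin cs-node01 cs-node02 cs-node03
# Create the first manager daemon
ceph-deploy mgr create cs-node01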
ceph-deploy osd create --data /dev/vdb cs-node01
ceph-deploy osd create --data /dev/vdc cs-node01

ceph-deploy osd create --data /dev/vdb cs-node02
ceph-deploy osd create --data /dev/vdc cs-node02

ceph-deploy osd create --data /dev/vdb cs-node03
ceph-deploy osd create --data /dev/vdc cs-node03

Health check:

[root@cs-mgnt my-cluster]# ssh cs-node01 ceph health
HEALTH_OK
[root@cs-mgnt my-cluster]# ssh cs-node02 ceph health
HEALTH_OK
[root@cs-mgnt my-cluster]# ssh cs-node03 ceph health
HEALTH_OK

# More detailed output
[root@cs-mgnt my-cluster]# ssh cs-node01 ceph -s
  cluster:
    id:     1556dfbc-bc4f-4974-bc31-cfff1992fe37
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum cs-node01
    mgr: cs-node01(active)
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   6.0 GiB used, 594 GiB / 600 GiB avail
    pgs:

Expand the cluster (add an MDS and two more monitors):

ceph-deploy mds create cs-node01
# ceph-deploy mon add cs-node02 cs-node03  # adding both monitors in one command fails; split it into two commands
ceph-deploy mon add cs-node02 
ceph-deploy mon add cs-node03
[root@cs-node01 ~]# ceph quorum_status --format json-pretty

{
    "election_epoch": 12,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "cs-node01",
        "cs-node02",
        "cs-node03"
    ],
    "quorum_leader_name": "cs-node01",
    "monmap": {
        "epoch": 3,
        "fsid": "1556dfbc-bc4f-4974-bc31-cfff1992fe37",
        "modified": "2019-08-12 15:58:38.917458",
        "created": "2019-08-12 15:24:33.338614",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "cs-node01",
                "addr": "192.168.2.81:6789/0",
                "public_addr": "192.168.2.81:6789/0"
            },
            {
                "rank": 1,
                "name": "cs-node02",
                "addr": "192.168.2.82:6789/0",
                "public_addr": "192.168.2.82:6789/0"
            },
            {
                "rank": 2,
                "name": "cs-node03",
                "addr": "192.168.2.83:6789/0",
                "public_addr": "192.168.2.83:6789/0"
            }
        ]
    }
}

Add additional managers:

ceph-deploy mgr create cs-node02 cs-node03
# Check the cluster status
[root@cs-mgnt my-cluster]# ssh cs-node01 ceph -s
  cluster:
    id:     1556dfbc-bc4f-4974-bc31-cfff1992fe37
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cs-node01,cs-node02,cs-node03
    mgr: cs-node01(active), standbys: cs-node02, cs-node03
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   6.0 GiB used, 594 GiB / 600 GiB avail
    pgs:

The three demos below (object storage, block storage, and CephFS) all assume that:

  1. The cluster is in the active + clean state
  2. At least one Ceph Metadata Server is running (only CephFS actually requires the MDS)

Object Storage

To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance:

ceph-deploy rgw create cs-node01
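
By default, the RGW instance created by ceph-deploy listens on port 7480 of the node it was deployed to; a quick check that it is answering (it should return an empty S3 bucket listing in XML):

curl http://cs-node01:7480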

To store object data in the Ceph Storage Cluster, a Ceph client must:

  1. Set an object name
  2. Specify a pool
# Command template
ceph osd map {poolname} {object-name}
## A concrete example
echo "Ceph is Awesome" > testfile.txt
ceph osd pool create mytest 8
# rados put {object-name} {file-path} --pool=mytest
rados put test-object-1 testfile.txt --pool=mytest
[root@cs-node01 ~]# rados -p mytest ls
test-object-1

# ceph osd map {pool-name} {object-name}
ceph osd map mytest test-object-1
[root@cs-node01 ~]# ceph osd map mytest test-object-1
osdmap e38 pool 'mytest' (5) object 'test-object-1' -> pg 5.74dc35e2 (5.2) -> up ([3,0,5], p3) acting ([3,0,5], p3)
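
To complete the round trip, the object can be read back and the test pool removed afterwards. A sketch using standard rados/ceph commands (deleting a pool also requires mon_allow_pool_delete to be enabled):

# Read the object back and compare it with the original file
rados get test-object-1 testfile.txt.out --pool=mytest
diff testfile.txt testfile.txt.out

# Clean up the test object and the pool
rados rm test-object-1 --pool=mytest
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it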

Block Storage

From the admin node, install Ceph on the client node:

ceph-deploy install cs-client
[root@cs-mgnt my-cluster]# ceph-deploy admin cs-client
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin cs-client
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f78b6bfb998>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['cs-client']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f78b7d336e0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to cs-client
[cs-client][DEBUG ] connected to host: cs-client 
[cs-client][DEBUG ] detect platform information from remote host
[cs-client][DEBUG ] detect machine type
[cs-client][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

# Make sure the keyring is readable
[root@ip-192-168-2-56 ~]# ll /etc/ceph/ceph.client.admin.keyring
-rw------- 1 root root 151 Aug 12 17:32 /etc/ceph/ceph.client.admin.keyring
[root@ip-192-168-2-56 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ip-192-168-2-56 ~]# ll /etc/ceph/ceph.client.admin.keyring
-rw-r--r-- 1 root root 151 Aug 12 17:32 /etc/ceph/ceph.client.admin.keyring

Create a pool for block devices:

## 1. On the admin node, use the ceph tool to create a pool (we recommend the name ‘rbd’).
## 2. On the admin node, use the rbd tool to initialize the pool for use by RBD:
[root@cs-node01 ~]# ceph osd pool create rbd 8 8
pool 'rbd' created
# rbd pool init <pool-name>
[root@cs-node01 ~]# rbd pool init rbd

Configure a block device in the pool:

  1. On the ceph-client node, create a block device image.
# rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
rbd create myblock --size 4096 --image-feature layering -m 192.168.2.81 -k /etc/ceph/ceph.client.admin.keyring -p rbd
# --size is specified in MB; 4096 creates a 4 GiB image
[root@ip-192-168-2-56 ~]# ceph -s
  cluster:
    id:     1556dfbc-bc4f-4974-bc31-cfff1992fe37
    health: HEALTH_WARN
            too few PGs per OSD (24 < min 30)

  services:
    mon: 3 daemons, quorum cs-node01,cs-node02,cs-node03
    mgr: cs-node01(active), standbys: cs-node02, cs-node03
    osd: 6 osds: 6 up, 6 in
    rgw: 1 daemon active

  data:
    pools:   6 pools, 48 pgs
    objects: 191  objects, 1.2 KiB
    usage:   6.0 GiB used, 594 GiB / 600 GiB avail
    pgs:     48 active+clean

  io:
    client:   353 B/s rd, 588 B/s wr, 0 op/s rd, 0 op/s wr
  2. On the ceph-client node, map the image to a block device.
# sudo rbd map myblock --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]
sudo rbd map myblock --name client.admin -m 192.168.2.81 -k /etc/ceph/ceph.client.admin.keyring -p rbd
[root@ip-192-168-2-56 ~]# rbd map myblock --name client.admin -m 192.168.2.81 -k /etc/ceph/ceph.client.admin.keyring -p rbd
/dev/rbd0
  3. Use the block device by creating a file system on the ceph-client node.
sudo mkfs.ext4 -m0 /dev/rbd/rbd/myblock
# This may take a few moments.
[root@ip-192-168-2-56 ~]# mkfs.ext4 -m0 /dev/rbd/rbd/myblock
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done                            
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
  4. Mount the file system on the ceph-client node.
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/myblock /mnt/ceph-block-device
[root@ip-192-168-2-56 ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda3      ext4      189G   69G  111G  39% /
devtmpfs       devtmpfs  3.8G     0  3.8G   0% /dev
tmpfs          tmpfs     3.8G     0  3.8G   0% /dev/shm
tmpfs          tmpfs     3.8G   57M  3.8G   2% /run
tmpfs          tmpfs     3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/vda1      ext4      477M  174M  275M  39% /boot
tmpfs          tmpfs     773M     0  773M   0% /run/user/0
/dev/rbd0      ext4      3.9G   16M  3.8G   1% /mnt/ceph-block-device
cd /mnt/ceph-block-device
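
Once finished with the block device, it can be released again; a minimal sketch (run from outside the mount point):

# Unmount the filesystem and detach the RBD image from the client
umount /mnt/ceph-block-device
rbd unmap /dev/rbd0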

CephFS

[root@cs-node01 ~]# ceph osd pool create cephfs_data 16
pool 'cephfs_data' created
[root@cs-node01 ~]# ceph osd pool create cephfs_metadata 16
pool 'cephfs_metadata' created
[root@cs-node01 ~]# ceph fs new myfs cephfs_metadata cephfs_data
new fs with metadata pool 8 and data pool 7
[root@cs-node01 ~]# ceph -s
  cluster:
    id:     1556dfbc-bc4f-4974-bc31-cfff1992fe37
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cs-node01,cs-node02,cs-node03
    mgr: cs-node01(active), standbys: cs-node02, cs-node03
    mds: myfs-1/1/1 up  {0=cs-node01=up:active}
    osd: 6 osds: 6 up, 6 in
    rgw: 1 daemon active

  data:
    pools:   8 pools, 80 pgs
    objects: 256  objects, 136 MiB
    usage:   6.4 GiB used, 594 GiB / 600 GiB avail
    pgs:     80 active+clean

  io:
    client:   0 B/s wr, 0 op/s rd, 1 op/s wr

[root@cs-node01 ~]# ceph fs ls
name: myfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Make sure the key on the client matches the key in the cluster:

[root@cs-mgnt my-cluster]# pwd
/root/my-cluster
[root@cs-mgnt my-cluster]# cat ceph.client.admin.keyring 
[client.admin]
    key = AQAxFFFdJuv0OBAAH+j8OlTDiuVvDpGBKY84ug==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

[root@ip-192-168-2-56 ceph-block-device]# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
    key = AQAxFFFdJuv0OBAAH+j8OlTDiuVvDpGBKY84ug==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"

[root@ip-192-168-2-56 ceph-block-device]# cd
[root@ip-192-168-2-56 ~]# cat << EOF > admin.secret
AQAxFFFdJuv0OBAAH+j8OlTDiuVvDpGBKY84ug==
EOF

Mount CephFS:

[root@ip-192-168-2-56 ~]# mkdir -pv /mnt/mycephfs
mkdir: created directory ‘/mnt/mycephfs’
[root@ip-192-168-2-56 ~]# mount -t ceph 192.168.2.81:6789:/ /mnt/mycephfs
mount error 22 = Invalid argument
# The error above occurs because cephx authentication is enabled and the mount command supplied no credentials
[root@ip-192-168-2-56 ~]#  mount -t ceph 192.168.2.81:6789:/ /mnt/mycephfs -o name=admin,secretfile=/root/admin.secret 
[root@ip-192-168-2-56 ~]# df -Th
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/vda3           ext4      189G   69G  111G  39% /
devtmpfs            devtmpfs  3.8G     0  3.8G   0% /dev
tmpfs               tmpfs     3.8G     0  3.8G   0% /dev/shm
tmpfs               tmpfs     3.8G   57M  3.8G   2% /run
tmpfs               tmpfs     3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/vda1           ext4      477M  174M  275M  39% /boot
tmpfs               tmpfs     773M     0  773M   0% /run/user/0
/dev/rbd0           ext4      3.9G   16M  3.8G   1% /mnt/ceph-block-device
192.168.2.81:6789:/ ceph      188G     0  188G   0% /mnt/mycephfs
[root@ip-192-168-2-56 ~]# cd /mnt/mycephfs/
[root@ip-192-168-2-56 mycephfs]# ls
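
To make the CephFS mount persistent across reboots, an entry can be added to /etc/fstab on the client; a sketch reusing the admin secret file created above:

# /etc/fstab entry for the kernel CephFS client (_netdev delays mounting until the network is up)
192.168.2.81:6789:/  /mnt/mycephfs  ceph  name=admin,secretfile=/root/admin.secret,noatime,_netdev  0  0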

Ceph Dashboard

## Enable the dashboard module
[root@cs-node01 ~]# ceph mgr module enable dashboard

## HTTPS is used by default; a self-signed certificate can be generated with: ceph dashboard create-self-signed-cert

## SSL can also be disabled entirely:
[root@cs-node01 ~]# ceph config set mgr mgr/dashboard/ssl false
# This might be useful if the dashboard will be running behind a proxy which does not support SSL for its upstream servers or other situations where SSL is not wanted or required.

## Note You need to restart the Ceph manager processes manually after changing the SSL certificate and key. This can be accomplished by either running ceph mgr fail mgr or by disabling and re-enabling the dashboard module (which also triggers the manager to respawn itself):
### After the SSL certificate or key changes, restart the Ceph manager process manually, or simply disable and then re-enable the dashboard module
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard

## Hostname and port configuration
### By default, the ceph-mgr daemon hosting the dashboard (i.e., the currently active manager) will bind to TCP port 8443 or 8080 when SSL is disabled.
### If no specific address has been configured, the web app will bind to ::, which corresponds to all available IPv4 and IPv6 addresses.
### These defaults can be changed via the configuration key facility on a cluster-wide level (so they apply to all manager instances) as follows:
[root@cs-node01 ~]# lsof -i:8080
COMMAND    PID USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
ceph-mgr 27632 ceph   31u  IPv6 14296319      0t0  TCP *:webcache (LISTEN)
$ ceph config set mgr mgr/dashboard/server_addr $IP
$ ceph config set mgr mgr/dashboard/server_port $PORT
$ ceph config set mgr mgr/dashboard/ssl_server_port $PORT

Since each ceph-mgr hosts its own instance of dashboard, it may also be necessary to configure them separately. The IP address and port for a specific manager instance can be changed with the following commands:

$ ceph config set mgr mgr/dashboard/$name/server_addr $IP
$ ceph config set mgr mgr/dashboard/$name/server_port $PORT
$ ceph config set mgr mgr/dashboard/$name/ssl_server_port $PORT

## Username and password
In order to be able to log in, you need to create a user account and associate it with at least one role. We provide a set of predefined system roles that you can use. For more details please refer to the User and Role Management section.

To create a user with the administrator role you can use the following commands:

$ ceph dashboard ac-user-create <username> <password> administrator # this command fails on the Mimic (M) release; use the command below instead
[root@cs-node01 ~]# ceph dashboard set-login-credentials lianzhong cephpass
Username and password updated

## Open the dashboard in a browser and log in
http://192.168.2.81:8080
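
The URL of the currently active dashboard instance can also be queried from the cluster itself:

# Show the service endpoints published by the active manager (includes the dashboard URL)
ceph mgr services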

Zabbix Module

ceph mgr module enable zabbix

Two configuration keys are vital for the module to work:

  • zabbix_host
  • identifier (optional)

Configuration keys can be set on any machine with the proper cephx credentials, these are usually Monitors where the client.admin key is present.

ceph zabbix config-set <key> <value>
ceph zabbix config-set zabbix_host zabbix.lianzhongjr.net
ceph zabbix config-set identifier cs-node01.lavenliu.cn

[root@cs-node01 ~]# ceph zabbix config-show |python -mjson.tool
{
    "identifier": "cs-node01.lavenliu.cn",
    "interval": 60,
    "zabbix_host": "zabbix.lianzhongjr.net",
    "zabbix_port": 10051,
    "zabbix_sender": "/usr/bin/zabbix_sender"
}
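
The module pushes data every interval seconds (60 above); a manual push can be triggered to verify that the Zabbix server accepts the items:

# Force an immediate send of all data points to the configured zabbix_host
ceph zabbix send

If items are rejected, the trapper items on the Zabbix host may need to allow the manager nodes' IP addresses, which appears to be the purpose of the direct database updates below.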
mysql> select hostid from hosts where name='ceph-mgr Zabbix module';
+--------+
| hostid |
+--------+
|  10377 |
+--------+
1 row in set (0.00 sec)

mysql> select itemid, name, key_, type, trapper_hosts  from items where hostid=10377;
+--------+------------------------------------------------------+------------------------------+------+---------------+
| itemid | name                                                 | key_                         | type | trapper_hosts |
+--------+------------------------------------------------------+------------------------------+------+---------------+
|  51556 | Number of Monitors                                   | ceph.num_mon                 |    2 |               |
|  51541 | Number of OSDs                                       | ceph.num_osd                 |    2 |               |
|  51542 | Number of OSDs in state: IN                          | ceph.num_osd_in              |    2 |               |
|  51543 | Number of OSDs in state: UP                          | ceph.num_osd_up              |    2 |               |
|  51544 | Number of Placement Groups                           | ceph.num_pg                  |    2 |               |
|  51539 | Number of Placement Groups in Active state           | ceph.num_pg_active           |    2 |               |
|  51552 | Number of Placement Groups in backfill_toofull state | ceph.num_pg_backfill_toofull |    2 |               |
|  51545 | Number of Placement Groups in Backfilling state      | ceph.num_pg_backfilling      |    2 |               |
|  51535 | Number of Placement Groups in Clean state            | ceph.num_pg_clean            |    2 |               |
|  51546 | Number of Placement Groups in degraded state         | ceph.num_pg_degraded         |    2 |               |
|  51553 | Number of Placement Groups in inconsistent state     | ceph.num_pg_inconsistent     |    2 |               |
|  51536 | Number of Placement Groups in Peering state          | ceph.num_pg_peering          |    2 |               |
|  51555 | Number of Placement Groups in recovering state       | ceph.num_pg_recovering       |    2 |               |
|  51547 | Number of Placement Groups in recovery_wait state    | ceph.num_pg_recovery_wait    |    2 |               |
|  51554 | Number of Placement Groups in remapped state         | ceph.num_pg_remapped         |    2 |               |
|  51537 | Number of Placement Groups in Scrubbing state        | ceph.num_pg_scrubbing        |    2 |               |
|  51540 | Number of Placement Groups in Temporary state        | ceph.num_pg_temp             |    2 |               |
|  51538 | Number of Placement Groups in Undersized state       | ceph.num_pg_undersized       |    2 |               |
|  51551 | Number of Placement Groups in wait_backfill state    | ceph.num_pg_wait_backfill    |    2 |               |
|  51548 | Number of Pools                                      | ceph.num_pools               |    2 |               |
|  51549 | Ceph OSD avg fill                                    | ceph.osd_avg_fill            |    2 |               |
|  51533 | Ceph OSD avg PGs                                     | ceph.osd_avg_pgs             |    2 |               |
|  51518 | Ceph backfill full ratio                             | ceph.osd_backfillfull_ratio  |    2 |               |
|  51519 | Ceph full ratio                                      | ceph.osd_full_ratio          |    2 |               |
|  51520 | Ceph OSD Apply latency Avg                           | ceph.osd_latency_apply_avg   |    2 |               |
|  51521 | Ceph OSD Apply latency Max                           | ceph.osd_latency_apply_max   |    2 |               |
|  51517 | Ceph OSD Apply latency Min                           | ceph.osd_latency_apply_min   |    2 |               |
|  51516 | Ceph OSD Commit latency Avg                          | ceph.osd_latency_commit_avg  |    2 |               |
|  51512 | Ceph OSD Commit latency Max                          | ceph.osd_latency_commit_max  |    2 |               |
|  51513 | Ceph OSD Commit latency Min                          | ceph.osd_latency_commit_min  |    2 |               |
|  51514 | Ceph OSD max fill                                    | ceph.osd_max_fill            |    2 |               |
|  51550 | Ceph OSD max PGs                                     | ceph.osd_max_pgs             |    2 |               |
|  51515 | Ceph OSD min fill                                    | ceph.osd_min_fill            |    2 |               |
|  51534 | Ceph OSD min PGs                                     | ceph.osd_min_pgs             |    2 |               |
|  51522 | Ceph nearfull ratio                                  | ceph.osd_nearfull_ratio      |    2 |               |
|  51523 | Overall Ceph status                                  | ceph.overall_status          |    2 |               |
|  51530 | Overal Ceph status (numeric)                         | ceph.overall_status_int      |    2 |               |
|  51531 | Ceph Read bandwidth                                  | ceph.rd_bytes                |    2 |               |
|  51532 | Ceph Read operations                                 | ceph.rd_ops                  |    2 |               |
|  51529 | Total bytes available                                | ceph.total_avail_bytes       |    2 |               |
|  51528 | Total bytes                                          | ceph.total_bytes             |    2 |               |
|  51524 | Total number of objects                              | ceph.total_objects           |    2 |               |
|  51525 | Total bytes used                                     | ceph.total_used_bytes        |    2 |               |
|  51526 | Ceph Write bandwidth                                 | ceph.wr_bytes                |    2 |               |
|  51527 | Ceph Write operations                                | ceph.wr_ops                  |    2 |               |
+--------+------------------------------------------------------+------------------------------+------+---------------+
45 rows in set (0.01 sec)

mysql> update items set trapper_hosts='192.168.2.81,192.168.2.82,192.168.2.83' where hostid=<mgr-host-hostid>;
Query OK, 45 rows affected (0.01 sec)
Rows matched: 45  Changed: 45  Warnings: 0

mysql> select itemid, name, key_, type, trapper_hosts  from items where hostid=<mgr-host-hostid>;   
+--------+------------------------------------------------------+------------------------------+------+----------------------------------------+
| itemid | name                                                 | key_                         | type | trapper_hosts                          |
+--------+------------------------------------------------------+------------------------------+------+----------------------------------------+
|  51556 | Number of Monitors                                   | ceph.num_mon                 |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51541 | Number of OSDs                                       | ceph.num_osd                 |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51542 | Number of OSDs in state: IN                          | ceph.num_osd_in              |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51543 | Number of OSDs in state: UP                          | ceph.num_osd_up              |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51544 | Number of Placement Groups                           | ceph.num_pg                  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51539 | Number of Placement Groups in Active state           | ceph.num_pg_active           |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51552 | Number of Placement Groups in backfill_toofull state | ceph.num_pg_backfill_toofull |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51545 | Number of Placement Groups in Backfilling state      | ceph.num_pg_backfilling      |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51535 | Number of Placement Groups in Clean state            | ceph.num_pg_clean            |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51546 | Number of Placement Groups in degraded state         | ceph.num_pg_degraded         |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51553 | Number of Placement Groups in inconsistent state     | ceph.num_pg_inconsistent     |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51536 | Number of Placement Groups in Peering state          | ceph.num_pg_peering          |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51555 | Number of Placement Groups in recovering state       | ceph.num_pg_recovering       |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51547 | Number of Placement Groups in recovery_wait state    | ceph.num_pg_recovery_wait    |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51554 | Number of Placement Groups in remapped state         | ceph.num_pg_remapped         |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51537 | Number of Placement Groups in Scrubbing state        | ceph.num_pg_scrubbing        |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51540 | Number of Placement Groups in Temporary state        | ceph.num_pg_temp             |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51538 | Number of Placement Groups in Undersized state       | ceph.num_pg_undersized       |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51551 | Number of Placement Groups in wait_backfill state    | ceph.num_pg_wait_backfill    |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51548 | Number of Pools                                      | ceph.num_pools               |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51549 | Ceph OSD avg fill                                    | ceph.osd_avg_fill            |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51533 | Ceph OSD avg PGs                                     | ceph.osd_avg_pgs             |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51518 | Ceph backfill full ratio                             | ceph.osd_backfillfull_ratio  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51519 | Ceph full ratio                                      | ceph.osd_full_ratio          |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51520 | Ceph OSD Apply latency Avg                           | ceph.osd_latency_apply_avg   |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51521 | Ceph OSD Apply latency Max                           | ceph.osd_latency_apply_max   |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51517 | Ceph OSD Apply latency Min                           | ceph.osd_latency_apply_min   |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51516 | Ceph OSD Commit latency Avg                          | ceph.osd_latency_commit_avg  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51512 | Ceph OSD Commit latency Max                          | ceph.osd_latency_commit_max  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51513 | Ceph OSD Commit latency Min                          | ceph.osd_latency_commit_min  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51514 | Ceph OSD max fill                                    | ceph.osd_max_fill            |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51550 | Ceph OSD max PGs                                     | ceph.osd_max_pgs             |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51515 | Ceph OSD min fill                                    | ceph.osd_min_fill            |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51534 | Ceph OSD min PGs                                     | ceph.osd_min_pgs             |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51522 | Ceph nearfull ratio                                  | ceph.osd_nearfull_ratio      |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51523 | Overall Ceph status                                  | ceph.overall_status          |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51530 | Overal Ceph status (numeric)                         | ceph.overall_status_int      |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51531 | Ceph Read bandwidth                                  | ceph.rd_bytes                |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51532 | Ceph Read operations                                 | ceph.rd_ops                  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51529 | Total bytes available                                | ceph.total_avail_bytes       |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51528 | Total bytes                                          | ceph.total_bytes             |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51524 | Total number of objects                              | ceph.total_objects           |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51525 | Total bytes used                                     | ceph.total_used_bytes        |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51526 | Ceph Write bandwidth                                 | ceph.wr_bytes                |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
|  51527 | Ceph Write operations                                | ceph.wr_ops                  |    2 | 192.168.2.81,192.168.2.82,192.168.2.83 |
+--------+------------------------------------------------------+------------------------------+------+----------------------------------------+
45 rows in set (0.00 sec)
## The following project is used to monitor the Ceph cluster
[root@cs-node01 ~]# git clone https://github.com/thelan/ceph-zabbix.git
Cloning into 'ceph-zabbix'...
remote: Enumerating objects: 118, done.
remote: Total 118 (delta 0), reused 0 (delta 0), pack-reused 118
Receiving objects: 100% (118/118), 24.19 KiB | 0 bytes/s, done.
Resolving deltas: 100% (67/67), done.