Ceph Tips 3

1. Setting cephx keys

If cephx authentication is enabled on the cluster, you can grant different capabilities to different users.

# create a key for the dummy user
$ ceph auth get-or-create client.dummy mon 'allow r' osd 'allow rwx pool=dummy'

[client.dummy]
     key = AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
$ ceph auth list
installed auth entries:
...
client.dummy
     key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
     caps: [mon] allow r
     caps: [osd] allow rwx pool=dummy
...

# reassign the caps of the dummy user
$ ceph auth caps client.dummy mon 'allow rwx' osd 'allow rwx pool=dummy'
updated caps for client.dummy
$ ceph auth list
installed auth entries:
client.dummy
     key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
     caps: [mon] allow rwx
     caps: [osd] allow rwx pool=dummy
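The generated key is typically handed to the client host as a keyring file. A minimal sketch of that step (the real path would be something like /etc/ceph/ceph.client.dummy.keyring; `mktemp` stands in for it here so the snippet runs anywhere, and the `rados` usage line is shown only as a comment):

```shell
# Write the key generated above into a keyring file for the client.
KEYRING=$(mktemp)   # in practice: /etc/ceph/ceph.client.dummy.keyring
cat > "$KEYRING" <<'EOF'
[client.dummy]
    key = AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
EOF
# The client would then authenticate as that user, e.g.:
#   rados --id dummy --keyring /etc/ceph/ceph.client.dummy.keyring -p dummy ls
cat "$KEYRING"
```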

2. Finding where an rbd image is mapped

rbd showmapped only lists the rbd devices mapped on the local host. If you have many machines and have forgotten where you mapped an image, you would otherwise have to check them one by one. The listwatchers command solves this.

For an image with format 1:

$ rbd info boot
rbd image 'boot':
     size 10240 MB in 2560 objects
     order 22 (4096 kB objects)
     block_name_prefix: rb.0.89ee.2ae8944a
     format: 1
$ rados -p rbd listwatchers boot.rbd
watcher=192.168.251.102:0/2550823152 client.35321 cookie=1
For an image with format 2, it is slightly different:
[root@osd2 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
     size 8192 kB in 2 objects
     order 22 (4096 kB objects)
     block_name_prefix: rbd_data.13436b8b4567
     format: 2
     features: layering
[root@osd2 ceph]# rados -p myrbd listwatchers rbd_header.13436b8b4567
watcher=192.168.108.3:0/2292307264 client.5130 cookie=1
Here you append the id reported by rbd info (the suffix of block_name_prefix) to rbd_header.
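The id extraction can be scripted. A sketch, with the sample `rbd info` output inlined so it runs without a cluster (on a real cluster the variable would instead be filled by `info=$(rbd info myrbd/rbd1)`):

```shell
# Build the header object name for a format-2 image from `rbd info` output.
info='rbd image '\''rbd1'\'':
     size 8192 kB in 2 objects
     order 22 (4096 kB objects)
     block_name_prefix: rbd_data.13436b8b4567
     format: 2'
# Strip the "rbd_data." prefix to get the image id.
id=$(echo "$info" | awk '/block_name_prefix/ {sub("rbd_data.", "", $2); print $2}')
header="rbd_header.$id"
echo "$header"   # rbd_header.13436b8b4567
# Then query the watchers of that object:
#   rados -p myrbd listwatchers "$header"
```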

3. How to delete a huge rbd image

Some blog posts mention that deleting a huge rbd image directly with rbd rm is extremely time-consuming (a long, long night). Trying it on ceph 0.87, however, this problem seems to have mostly gone away. The details:

# create a 1 PB image
[root@osd2 ceph]# time rbd create myrbd/huge-image -s 1024000000

real    0m0.353s
user    0m0.016s
sys     0m0.009s
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
     size 976 TB in 256000000 objects
     order 22 (4096 kB objects)
     block_name_prefix: rb.0.1489.6b8b4567
     format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 2% complete...^\Quit (core dumped)

real    10m24.406s
user    18m58.335s
sys     11m39.507s
The 1 PB image created above may simply be too big: a direct rbd rm was still very slow (interrupted after 10 minutes at 2%), so the following method was used instead:
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:44:42.916826 7fdb4fd5a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete... done.

real    0m0.192s
user    0m0.012s
sys     0m0.013s
Now try a 1 TB image:
[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
     size 1000 GB in 256000 objects
     order 22 (4096 kB objects)
     block_name_prefix: rb.0.149c.6b8b4567
     format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 100% complete... done.

real    0m29.418s
user    0m52.467s
sys     0m32.372s
So truly huge images should still be deleted with the following method:

format 1:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
     size 976 TB in 256000000 objects
     order 22 (4096 kB objects)
     block_name_prefix: rb.0.14a5.6b8b4567
     format: 1
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rados -p myrbd ls | grep '^rb.0.14a5.6b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:54:12.718211 7ffae55747e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete... done.

real    0m0.191s
user    0m0.015s
sys     0m0.010s
format 2:
[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000 --image-format=2
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
     size 976 TB in 256000000 objects
     order 22 (4096 kB objects)
     block_name_prefix: rbd_data.14986b8b4567
     format: 2
     features: layering
[root@osd2 ceph]# rados -p myrbd rm rbd_id.huge-image
[root@osd2 ceph]# rados -p myrbd rm rbd_header.14986b8b4567
[root@osd2 ceph]# rados -p myrbd ls | grep '^rbd_data.14986b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:59:26.043671 7f6b6923c7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete... done.

real    0m0.192s
user    0m0.016s
sys     0m0.010s
Note: if the image is empty, the xargs step is not needed; if it contains data, it is.

So for images of 100 TB and above, it is best to delete the id/header objects first and only then run rbd rm.
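The filtering step above is worth a closer look, since a wrong pattern would delete other images' objects too. A sketch of that step against a simulated `rados ls` listing (the object names are made up for illustration; on a real cluster the listing comes from `rados -p myrbd ls`):

```shell
# Select only this image's data objects from a pool listing.
prefix='rbd_data.14986b8b4567'
listing='rbd_id.huge-image
rbd_header.14986b8b4567
rbd_data.14986b8b4567.0000000000000000
rbd_data.14986b8b4567.0000000000000001
rbd_data.deadbeef0000.0000000000000000'
# The ^ anchor keeps objects belonging to other images out of the match.
targets=$(echo "$listing" | grep "^$prefix")
echo "$targets"
# Real deletion would then be:
#   echo "$targets" | xargs -n 200 rados -p myrbd rm
```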

4. Checking whether kvm/qemu supports ceph

$ sudo qemu-system-x86_64 -drive format=?
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
$ qemu-img -h
...
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug

If rbd is not listed, you can download the latest rpm or deb packages from http://ceph.com/packages/.
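The check can be scripted. A sketch, with the formats line inlined as sample data so the snippet runs anywhere (on a real host it would come from `qemu-img -h | grep '^Supported formats'`):

```shell
# Look for "rbd" as a whole word in qemu's supported-formats line.
formats='Supported formats: vvfat vpc vmdk vhdx vdi sheepdog rbd raw qed qcow2 qcow'
result=$(echo "$formats" | grep -qw rbd && echo yes || echo no)
echo "rbd supported: $result"
```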

5. Setting up NFS on top of a ceph rbd

A simple and practical way to provide storage. The steps:

# install the nfs rpms
[root@osd1 current]# yum install nfs-utils rpcbind
Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
Loading mirror speeds from cached hostfile
epel/metalink                                                                                                  | 5.5 kB     00:00
 * base: mirrors.cug.edu.cn
 * epel: mirrors.yun-idc.com
 * extras: mirrors.btte.net
 * rpmforge: ftp.riken.jp
 * updates: mirrors.btte.net
Ceph                                                                                                           |  951 B     00:00
Ceph-noarch                                                                                                    |  951 B     00:00
base                                                                                                           | 3.7 kB     00:00
ceph-source                                                                                                    |  951 B     00:00
epel                                                                                                           | 4.4 kB     00:00
epel/primary_db                                                                                                | 6.3 MB     00:01
extras                                                                                                         | 3.4 kB     00:00
rpmforge                                                                                                       | 1.9 kB     00:00
updates                                                                                                        | 3.4 kB     00:00
updates/primary_db                                                                                             | 188 kB     00:00
69 packages excluded due to repository priority protections
Setting up Install Process
Package rpcbind-0.2.0-11.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.x86_64 1:1.2.3-54.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================
 Package                         Arch                         Version                                Repository                  Size
======================================================================================================================================
Updating:
 nfs-utils                       x86_64                       1:1.2.3-54.el6                         base                       326 k

Transaction Summary
======================================================================================================================================
Upgrade       1 Package(s)

Total download size: 326 k
Is this ok [y/N]: y
Downloading Packages:
nfs-utils-1.2.3-54.el6.x86_64.rpm                                                                              | 326 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : 1:nfs-utils-1.2.3-54.el6.x86_64                                                                                    1/2
  Cleanup    : 1:nfs-utils-1.2.3-39.el6.x86_64                                                                                    2/2
  Verifying  : 1:nfs-utils-1.2.3-54.el6.x86_64                                                                                    1/2
  Verifying  : 1:nfs-utils-1.2.3-39.el6.x86_64                                                                                    2/2

Updated:
  nfs-utils.x86_64 1:1.2.3-54.el6

# create an image, then format and mount it
[root@osd1 current]# rbd create myrbd/nfs_image -s 1024000 --image-format=2
[root@osd1 current]# rbd map myrbd/nfs_image
/dev/rbd0
[root@osd1 current]# mkdir /mnt/nfs
[root@osd1 current]# mkfs.xfs /dev/rbd0
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd0              isize=256    agcount=33, agsize=8190976 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=128000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@osd1 current]# mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier /mnt/nfs

# edit the exports file, adding one line
[root@osd1 current]# vim /etc/exports
/mnt/nfs 192.168.108.0/24(rw,no_root_squash,no_subtree_check,async)
[root@osd1 current]# exportfs -r
# rpcbind must also be started at this point
[root@osd1 current]# service rpcbind start
[root@osd1 current]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
Clients can now mount the export. On a client, check the export list:
showmount -e 192.168.108.2
then mount it:
mount -t nfs 192.168.108.2:/mnt/nfs /mnt/nfs
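To make the client mount survive reboots, an /etc/fstab entry can be added on the client. A sketch following the addresses above (`_netdev` delays the mount until the network is up, which is important for a network filesystem):

```
192.168.108.2:/mnt/nfs  /mnt/nfs  nfs  defaults,_netdev  0  0
```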