Ceph Tips and Tricks (Part 2)

1. The real size of an RBD image

Since Ceph uses thin provisioning, space is only allocated when data is actually written. Even a very large image is therefore created instantly: apart from a little metadata, Ceph has not allocated any of the underlying space yet. So how much space does an RBD image really occupy? Taking my environment as an example:

[root@osd1 /]# rbd ls myrbd
hello.txt
rbd1
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
	size 1024 MB in 256 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.13446b8b4567
	format: 2
	features: layering
[root@osd1 /]# rbd diff myrbd/rbd1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'
14.2812 MB
[root@osd1 /]# rbd diff myrbd/rbd1
Offset     Length  Type 
0          131072  data 
4194304    16384   data 
130023424  16384   data 
260046848  16384   data 
390070272  16384   data 
520093696  4194304 data 
524288000  4194304 data 
528482304  2129920 data 
650117120  16384   data 
780140544  16384   data 
910163968  16384   data 
1040187392 16384   data 
1069547520 4194304 data
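
The same trick can be wrapped in a loop to report the real usage of every image in a pool. A minimal sketch, assuming the pool name myrbd and reusing the awk summation above (rbd diff only lists allocated extents, so the sum is the space actually consumed, ignoring snapshots):

for img in $(rbd ls myrbd); do
    # sum the Length column of the allocated extents listed by rbd diff
    used=$(rbd diff myrbd/$img | awk '{ sum += $2 } END { printf "%.2f", sum/1024/1024 }')
    echo "myrbd/$img: ${used} MB"
done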

2. RBD format 1 vs. RBD format 2

RBD format 1:

[root@osd1 /]# rbd create myrbd/rbd1 -s 8
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
	size 8192 kB in 2 objects
	order 22 (4096 kB objects)
	block_name_prefix: rb.0.13fb.6b8b4567
	format: 1
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool  image snap device    
0  myrbd rbd1  -    /dev/rbd0 
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0 
dd: writing to `/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.25155 s, 3.7 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd1.rbd
rb.0.13fb.6b8b4567.000000000001
rb.0.13fb.6b8b4567.000000000000
  1. $image_name.rbd : contains the id of this image (rb.0.13fb.6b8b4567)
  2. $rbd_id.$fragment : the data objects; $fragment is the object index (see the sketch below)
  3. rbd_directory : the list of RBD images in the current pool
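
Because $fragment is just the object index in hexadecimal, the object holding any byte offset can be computed by hand. A small sketch (the helper name obj_for_offset is mine; it assumes the default order 22, i.e. 4096 kB objects, and the 12-hex-digit suffix that format 1 uses):

obj_for_offset() {
    # object index = offset / object_size; format 1 pads it to 12 hex digits
    local prefix=$1 offset=$2 order=${3:-22}
    printf '%s.%012x\n' "$prefix" $(( offset >> order ))
}
obj_for_offset rb.0.13fb.6b8b4567 4194304   # -> rb.0.13fb.6b8b4567.000000000001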

RBD format 2:

[root@osd1 /]# rbd create myrbd/rbd1 -s 8 --image-format=2
[root@osd1 /]# rbd info myrbd/rbd1
rbd image 'rbd1':
	size 8192 kB in 2 objects
	order 22 (4096 kB objects)
	block_name_prefix: rbd_data.13436b8b4567
	format: 2
	features: layering
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_header.13436b8b4567
rbd_id.rbd1
[root@osd1 /]# rbd map myrbd/rbd1
[root@osd1 /]# rbd showmapped
id pool  image snap device    
0  myrbd rbd1  -    /dev/rbd0 
[root@osd1 /]# dd if=/dev/zero of=/dev/rbd0
dd: writing to `/dev/rbd0': No space left on device
16385+0 records in
16384+0 records out
8388608 bytes (8.4 MB) copied, 2.14407 s, 3.9 MB/s
[root@osd1 /]# rados ls -p myrbd
rbd_directory
rbd_data.13436b8b4567.0000000000000000
rbd_data.13436b8b4567.0000000000000001
rbd_header.13436b8b4567
rbd_id.rbd1
  1. rbd_data.$rbd_id.$fragment : the data objects
  2. rbd_directory : the list of RBD images in the current pool
  3. rbd_header.$rbd_id : the image's metadata (inspected in the example below)
  4. rbd_id.$image_name : contains the id of this image ( 13436b8b4567 )
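
These objects can be read back with rados to confirm the layout. A minimal sketch (the /tmp path is arbitrary; listomapvals dumps the key/value metadata that rbd info reads):

# the id of the image lives in rbd_id.$image_name:
rados -p myrbd get rbd_id.rbd1 /tmp/rbd1.id
strings /tmp/rbd1.id        # should contain 13436b8b4567
# the per-image metadata (size, order, features, ...) is stored as omap
# on the header object:
rados -p myrbd listomapvals rbd_header.13436b8b4567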

3. Ceph Primary Affinity

Primary affinity controls how likely an OSD is to be chosen as the primary of a PG's acting set; lowering it shifts primary (client-facing) load away from that OSD without moving any data. First check whether the feature is enabled:

[root@mon0 yum.repos.d]# ceph --admin-daemon /var/run/ceph/ceph-mon.*.asok config show | grep 'primary_affinity'
  "mon_osd_allow_primary_affinity": "false",

# Add the primary affinity option to ceph.conf:
mon osd allow primary affinity = true
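
To avoid restarting the monitors, the option can reportedly also be injected at runtime; a sketch, assuming the usual injectargs mechanism:

ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity true'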

Now count how many active+clean PGs have osd.0 as their primary ("\[0," matches acting sets that begin with osd.0) and how many have it as the last replica (",0\]"):

[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
dumped all in format plain
109
[root@mon0 yum.repos.d]# ceph pg dump | grep active+clean | egrep ",0\]" | wc -l
dumped all in format plain
123

# ceph osd primary-affinity osd.0 0.5
set osd.0 primary-affinity to 0.5 (8327682)

# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
48
# ceph pg dump | grep active+clean | egrep ",0\]" | wc -l
132

# ceph osd primary-affinity osd.0 0
set osd.0 primary-affinity to 0 (802)

# ceph pg dump | grep active+clean | egrep "\[0," | wc -l
0
# ceph pg dump | grep active+clean | egrep ",0\]" | wc -l
180

With primary affinity 0, osd.0 is no longer primary for any PG, although it still holds the same replicas.
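
The same grep trick generalizes to a quick per-OSD report of primary counts. A minimal sketch reusing the filter above (one pg dump per OSD is wasteful, but fine on a small cluster):

for id in $(ceph osd ls); do
    n=$(ceph pg dump 2>/dev/null | grep active+clean | grep -c "\[$id,")
    echo "osd.$id is primary for $n PGs"
done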

4. Upgrading Ceph

On the 29th, Ceph released 0.87, the Giant version, and we upgraded right away. The upgrade is very simple: change one line in ceph.repo, then run yum update ceph. Once the upgrade finishes, restart all the daemons. The ceph.repo is as follows:

[root@mon0 software]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/$basearch
priority=1
gpgcheck=1
type=rpm-md

[ceph-source]
name=Ceph source packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/SRPMS
priority=1
gpgcheck=1
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
enabled=1
baseurl=http://ceph.com/rpm-giant/el6/noarch
priority=1
gpgcheck=1
type=rpm-md
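
The upgrade itself then boils down to a package update plus daemon restarts. A rough sketch, assuming the EL6 sysvinit script used here (restart monitors before OSDs, one node at a time):

# on every node, after switching ceph.repo to the giant URLs:
yum clean all && yum update ceph
# restart the local daemons, monitors first:
service ceph restart mon
service ceph restart osd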

5. ceph admin socket

The Ceph admin socket exposes a running daemon's live configuration and commands, which is very useful for verification and debugging.

$ ceph --admin-daemon /path/to/your/ceph/socket <command>
[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.4.asok help
{ "config diff": "dump diff of current config and default config",
  "config get": "config get <field>: get the config value",
  "config set": "config set <field> <val> [<val> ...]: set a config variable",
  "config show": "dump current config settings",
  "dump_blacklist": "dump blacklisted clients and times",
  "dump_historic_ops": "show slowest recent ops",
  "dump_op_pq_state": "dump op priority queue state",
  "dump_ops_in_flight": "show the ops currently in flight",
  "dump_reservations": "show recovery reservations",
  "dump_watchers": "show clients which have active watches, and on which objects",
  "flush_journal": "flush the journal to permanent store",
  "get_command_descriptions": "list available commands",
  "getomap": "output entire object map",
  "git_version": "get git sha1",
  "help": "list available commands",
  "injectdataerr": "inject data error into omap",
  "injectmdataerr": "inject metadata error",
  "log dump": "dump recent log entries to log file",
  "log flush": "flush log entries to log file",
  "log reopen": "reopen log file",
  "objecter_requests": "show in-progress osd requests",
  "perf dump": "dump perfcounters value",
  "perf schema": "dump perfcounters schema",
  "rmomapkey": "remove omap key",
  "setomapheader": "set omap header",
  "setomapval": "set omap key",
  "status": "high-level status of OSD",
  "truncobj": "truncate object to length",
  "version": "get ceph version"}

Fetch the journal-related settings:

[root@osd2 ~]# ceph --admin-daemon /var/run/ceph/ceph-mon.osd2.asok config show | grep journal
  "debug_journaler": "0\/5",
  "debug_journal": "1\/3",
  "journaler_allow_split_entries": "true",
  "journaler_write_head_interval": "15",
  "journaler_prefetch_periods": "10",
  "journaler_prezero_periods": "5",
  "journaler_batch_interval": "0.001",
  "journaler_batch_max": "0",
  "mds_kill_journal_at": "0",
  "mds_kill_journal_expire_at": "0",
  "mds_kill_journal_replay_at": "0",
  "mds_journal_format": "1",
  "osd_journal": "\/var\/lib\/ceph\/osd\/ceph-osd2\/journal",
  "osd_journal_size": "5120",
  "filestore_fsync_flushes_journal_data": "false",
  "filestore_journal_parallel": "false",
  "filestore_journal_writeahead": "false",
  "filestore_journal_trailing": "false",
  "journal_dio": "true",
  "journal_aio": "true",
  "journal_force_aio": "false",
  "journal_max_corrupt_search": "10485760",
  "journal_block_align": "true",
  "journal_write_header_frequency": "0",
  "journal_max_write_bytes": "10485760",
  "journal_max_write_entries": "100",
  "journal_queue_max_ops": "300",
  "journal_queue_max_bytes": "33554432",
  "journal_align_min_size": "65536",
  "journal_replay_from": "0",
  "journal_zero_on_create": "false",
  "journal_ignore_corruption": "false",




Reposted from: https://my.oschina.net/renguijiayi/blog/340741
