Ceph configuration ("dumpling" release)
1. Deployment plan:
When installing each machine, partition the first disk manually (/ , /home, swap) and leave the second disk untouched; after the OS install, partition it with fdisk, format it, and mount it.
client: Ubuntu 12.04 Server
mon and mds: CentOS 6.4 (same machine)
osd0: CentOS 6.4
osd1: CentOS 6.4
2. Installation steps:
1. Set the IP address, hostname, and hosts file
Taking the MON node as an example, edit the following files (note: with multiple NICs there can be only one default gateway):
1) # vi /etc/sysconfig/network (set HOSTNAME; the defaults are usually fine otherwise)
2) # vi /etc/sysconfig/network-scripts/ifcfg-eth0 (set the static IP)
3) # vi /etc/hosts
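A hosts file consistent with the node names used in the rest of this document might look like the fragment below. The mon address 10.1.199.41 comes from the ceph.conf later in this document; the two OSD addresses are placeholders, since the original never states them.

```
10.1.199.41  mon
10.1.199.42  osd0    # placeholder address
10.1.199.43  osd1    # placeholder address
```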
2. Set up passwordless SSH on every node except the client.
1) On machine A, install the SSH server and client.
rpm -qa | grep openssh      # check whether SSH is installed
yum install openssh-server  # install the SSH server
/etc/init.d/sshd status     # check the status of the sshd service
/etc/init.d/sshd start      # start the sshd service
/etc/init.d/sshd stop       # stop the sshd service
yum install openssh-clients
2) Install the SSH server and client on machine B in the same way.
3) Configure machine A to log in to machine A and machine B without a password.
First make sure the firewall is off on every host.
On machine A, run:
1. $cd ~/.ssh
2. $ssh-keygen -t rsa -------------------- press Enter at every prompt; the generated key is saved to .ssh/id_rsa by default.
3. $cp id_rsa.pub authorized_keys (note: this file name must not be changed)
After this step you should be able to log in to the local machine without a password: ssh localhost asks for none.
4. $scp authorized_keys summer@10.0.5.198:/home/summer/.ssh ------ copy the freshly created authorized_keys file to machine B.
5. $chmod 600 authorized_keys
Run this inside machine B's .ssh directory to restrict the file's permissions.
(Steps 4 and 5 can be combined into one: $ssh-copy-id -i summer@10.0.5.198 )
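The key-generation steps above can also be replayed non-interactively. This is a minimal sketch that works in a scratch directory so it is safe to run anywhere; the user summer and host 10.0.5.198 in the comments are the same examples used in the steps above.

```shell
# Start clean, then generate an RSA key pair without prompts
# (-N "" = empty passphrase, -q = quiet).
rm -rf /tmp/sshdemo
mkdir -p /tmp/sshdemo
ssh-keygen -t rsa -N "" -q -f /tmp/sshdemo/id_rsa
# authorized_keys is just the public key, readable only by its owner:
cp /tmp/sshdemo/id_rsa.pub /tmp/sshdemo/authorized_keys
chmod 600 /tmp/sshdemo/authorized_keys
# Against a real machine B, ssh-copy-id performs the copy and the chmod in one step:
# ssh-copy-id -i ~/.ssh/id_rsa.pub summer@10.0.5.198
```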
3. Stop the firewall and disable SELinux
# service iptables stop
# chkconfig iptables off
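The heading also says to disable SELinux, but no commands are given. A common approach on CentOS 6 is `setenforce 0` (takes effect immediately, until reboot) plus editing /etc/selinux/config (persists across reboots). The sketch below applies the config edit to a scratch copy so it is safe to run; on a real node, run `setenforce 0` and apply the same sed to /etc/selinux/config as root.

```shell
# Scratch copy standing in for /etc/selinux/config (assumption: CentOS 6 defaults).
demo=/tmp/selinux_config_demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$demo"
# Switch SELINUX=enforcing to SELINUX=disabled, as you would in the real file:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$demo"
grep '^SELINUX=' "$demo"
# On the real node: setenforce 0, then the same sed against /etc/selinux/config.
```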
4. Install Ceph
Following the official guide, install the "dumpling" release.
1) Install ceph-deploy per the official documentation (CentOS)
sudo vim /etc/yum.repos.d/ceph.repo and add the following:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-stable-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
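For this document's setup the placeholders resolve to dumpling and el6, matching the rpm-dumpling/el6 URL used in the install commands below, so the repo file would read:

```
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-dumpling/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
```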
sudo yum update && sudo yum install ceph-deploy
2) Install ceph (on all four machines)
On CentOS:
# sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
# su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm'
# sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
# su -c 'rpm -Uvh http://ceph.com/rpm-dumpling/el6/noarch/ceph-release-1-0.el6.noarch.rpm'
# sudo yum install ceph
On Ubuntu:
sudo apt-get update && sudo apt-get install ceph
5. Configure Ceph
1) On every node except the client, create a directory to hold the Ceph configuration files:
# mkdir /etc/ceph
2) Set up the OSD nodes
On the two OSD servers, partition the new disk (or create a new partition if the existing disk has free space), format it, and mount it:
osd0:
# mkdir /mnt/osd0
# fdisk /dev/sdb
# n --> p --> 1 --> Enter --> Enter --> w
# mkfs.btrfs /dev/sdb1
# mount -t btrfs /dev/sdb1 /mnt/osd0
osd1:
# mkdir /mnt/osd1
# fdisk /dev/sdb
# n --> p --> 1 --> Enter --> Enter --> w
# mkfs.btrfs /dev/sdb1
If the mkfs.btrfs command is missing, install it with yum install btrfs-progs.
# mount -t btrfs /dev/sdb1 /mnt/osd1
If the mounted partition is formatted as ext3 or ext4, be sure to add the mount option -o user_xattr. (Note: use btrfs here; this setup had problems on ext4.)
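The interactive fdisk dialog above can also be scripted. This is a sketch under the assumption that /dev/sdb is the spare disk; each answer line corresponds to one fdisk prompt: n (new partition), p (primary), 1 (partition number), two empty lines (default first/last sector), w (write).

```shell
fdisk_answers='n
p
1


w'
# On the real OSD node, as root:
# printf '%s\n' "$fdisk_answers" | fdisk /dev/sdb
# mkfs.btrfs /dev/sdb1 && mount -t btrfs /dev/sdb1 /mnt/osd0
printf '%s\n' "$fdisk_answers" | wc -l   # 6 answer lines
```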
3) Configure the mon node
# mkdir -p /data/mon0
The Ceph source tarball ships three important files: sample.ceph.conf, a sample configuration file; sample.fetch_config, a script that syncs the configuration file between nodes; and init-ceph, the init script that starts the services on each node. Copy each of them into place (the file names may differ between versions; look under the src/ directory):
# cp ceph-0.67/src/sample.ceph.conf /etc/ceph/ceph.conf
# cp ceph-0.67/src/sample.fetch_config /etc/ceph/fetch_config
# cp ceph-0.67/src/init-ceph /etc/init.d/ceph
Edit the configuration file:
# vi /etc/ceph/ceph.conf
;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.
; If a 'host' is defined for a daemon, the init.d start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).
; The variables $type, $id and $name are available to use in paths
; $type = The type of daemon, possible values: mon, mds and osd
; $id = The ID of the daemon, for mon.alpha, $id will be alpha
; $name = $type.$id
; For example:
; osd.0
; $type = osd
; $id = 0
; $name = osd.0
; mon.beta
; $type = mon
; $id = beta
; $name = mon.beta
; global
[global]
; enable secure authentication
auth supported = cephx
; allow ourselves to open a lot of files
max open files = 131072
; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog
; set up pid files
pid file = /var/run/ceph/$name.pid
; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id
; If you are using for example the RADOS Gateway and want to have your newly created
; pools a higher replication level, you can set a default
;osd pool default size = 3
; You can also specify a CRUSH rule for new pools
; Wiki: http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH
;osd pool default crush rule = 0
; Timing is critical for monitors, but if you want to allow the clocks to drift a
; bit more, you can specify the max drift.
;mon clock drift allowed = 1
; Tell the monitor to backoff from this warning for 30 seconds
;mon clock drift warn backoff = 30
; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
;debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20
[mon.0]
host = mon
mon addr = 10.1.199.41:6789
; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /etc/ceph/keyring.$name
; mds logging to debug issues.
debug ms = 1
debug mds = 20
[mds.0]
host = mon
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.
[osd]
; This is where the osd expects its data
osd data = /mnt/osd$id
; Ideally, make the journal a separate disk or partition.
; 1-10GB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/$name/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /mnt/osd$id/journal
osd journal size = 128 ; journal size, in megabytes
; If you want to run the journal on a tmpfs (don't), disable DirectIO
;journal dio = false
; You can change the number of recovery operations to speed up recovery
; or slow it down if your machines can't handle it
; osd recovery max active = 3
; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20
; ### The below options only apply if you're using mkcephfs
; ### and the devs options
; The filesystem used on the volumes
osd mkfs type = btrfs
; If you want to specify some other mount options, you can do so.
; for other filesystems use 'osd mount options $fstype'
osd mount options btrfs = rw,noatime
; The options used to format the filesystem via mkfs.$fstype
; for other filesystems use 'osd mkfs options $fstype'
; osd mkfs options btrfs =
[osd.0]
host = osd0
; if 'devs' is not specified, you're responsible for
; setting up the 'osd data' dir.
devs = /dev/sdb1
[osd.1]
host = osd1
devs = /dev/sdb1
Edit fetch_config and append a single command at the end:
# vi /etc/ceph/fetch_config
#!/bin/sh
conf="$1"
## fetch ceph.conf from some remote location and save it to $conf.
##
## make sure this script is executable (chmod +x fetch_config)
##
## examples:
##
## from a locally accessible file
# cp /path/to/ceph.conf $conf
## from a URL:
# wget -q -O $conf http://somewhere.com/some/ceph.conf
## via scp
# scp -i /path/to/id_dsa user@host:/path/to/ceph.conf $conf
scp root@mon:/usr/local/etc/ceph/ceph.conf $conf
4) Sync the configuration file into /etc/ceph and /usr/local/etc/ceph on every node (create /etc/ceph by hand first; if /usr/local/etc/ceph does not yet exist on a node, create it by hand as well):
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/etc/ceph/ceph.conf
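The four scp commands above follow one pattern, so they can be generated from a single loop. This sketch only prints the commands (using the same hostnames and paths as above); drop the echo to actually execute them, after the target directories exist on each node.

```shell
for host in ceph_osd0 ceph_osd1; do
  for dir in /usr/local/etc/ceph /etc/ceph; do
    echo "scp /usr/local/etc/ceph/ceph.conf root@$host:$dir/ceph.conf"
  done
done
```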
5) Final configuration step
# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
6) Start the cluster with the init script installed above, then check its health:
# /etc/init.d/ceph -a start
# ceph -s
7. Mount from the client (Ubuntu)
1) Install ceph on the client (sudo apt-get update && sudo apt-get install ceph, as above)
2) Create the mount point: sudo mkdir /mnt/cephfs
3) Mount:
sudo mount -t ceph 10.1.199.41:6789:/ /mnt/cephfs -o name=admin,secret=AQDm9YtSyA7hGhAAlrpbbLhS2XtS+3UnPXsNWA==
(The secret is the admin key from the keyring generated during setup.)
df -h then shows the Ceph filesystem mounted:
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 47G 2.0G 43G 5% /
udev 3.9G 4.0K 3.9G 1% /dev
tmpfs 1.6G 232K 1.6G 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 3.9G 0 3.9G 0% /run/shm
/dev/sda1 118M 76M 36M 69% /boot
10.1.199.41:6789:/ 160G 8.4G 152G 6% /mnt/cephfs