Ceph Installation and Configuration, Part 1: Creating the Cluster (repost)

Ceph installation and configuration:
The environment below is the same three nodes that host an OpenStack deployment, so the hostnames are controller/compute/network.
For the Ceph setup, aliases such as mon, mds, osd, and client can simply be added to /etc/hosts.
Note: using mon0/osd0/osd1 as the actual hostnames caused "does not match remote hostname" complaints later in the deployment. OSD ids start at 0, so OSD hosts could be numbered from osd0.
    We therefore keep the original OpenStack hostnames and map the aliases as mon0=compute, osd0=controller, osd1=network.
(I) Environment preparation
1. Node IPs
192.168.128.100  (hostname controller; partition /dev/sdc1 is reserved for an OSD)
192.168.128.102  (hostname network; partition /dev/sdc1 is reserved for an OSD)
192.168.128.101  (hostname compute; partition /dev/sdc1 is reserved for an OSD)
2. Edit /etc/hosts on all hosts
#controller
192.168.128.100 controller swift1 osd0
#compute
192.168.128.101 compute mon0 osd2 client
#network
192.168.128.102 network swift2 osd1
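To confirm that the aliases resolve on each node, a quick check:
getent hosts mon0 osd0 osd1 client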
3. Create a ceph user on all nodes (note: a ceph user is created on each host, but for easier management this walkthrough actually uses the mengfei user)
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
4. Grant the users passwordless root privileges on every Ceph node
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
echo "mengfei ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/mengfei
sudo chmod 0440 /etc/sudoers.d/mengfei
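A quick check that the sudoers drop-ins took effect (sudo -l -U lists the sudo rules that apply to a user; run as root):
sudo -l -U ceph
sudo -l -U mengfei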
    Fixing SSH "Permission denied (publickey)":
       If root logins are still refused after resetting the root password, edit /etc/ssh/sshd_config as follows (the resulting lines are sketched after this list):
    Change PermitRootLogin no to PermitRootLogin yes.
    Keep PubkeyAuthentication yes  (note: do not set it to no, or key-based logins stop working)
    Comment out the AuthorizedKeysFile .ssh/authorized_keys line with a leading #.
    Comment out PasswordAuthentication no with a #.
    Then restart the SSH daemon: service ssh restart   (note: on Ubuntu the service is named ssh, not sshd)
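    The relevant section of /etc/ssh/sshd_config then ends up as below (a minimal sketch of the edits just described; adapt to your distribution's defaults):
    PermitRootLogin yes
    PubkeyAuthentication yes
    #AuthorizedKeysFile .ssh/authorized_keys
    #PasswordAuthentication no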
5. Configure passwordless SSH logins from the ceph-deploy node to every Ceph node
(1) Install an SSH server on every Ceph node
apt-get install openssh-server -y
(2) Set up passwordless SSH access from the compute admin node to every Ceph node. (When deploying as different users, generate a key under each user.)
     root@compute:~/.ssh# ssh-keygen
     Generating public/private rsa key pair.
     Enter file in which to save the key (/root/.ssh/id_rsa):
     Enter passphrase (empty for no passphrase):
     Enter same passphrase again:
     Your identification has been saved in /root/.ssh/id_rsa.
     Your public key has been saved in /root/.ssh/id_rsa.pub.
     The key fingerprint is:
     25:6e:7c:8f:ea:4b:f1:a5:e6:f5:e7:30:fe:79:bf:08 root@compute
     The key's randomart image is:
     +--[ RSA 2048]----+
     |                 |
     |                 |
     |        . .      |
     |       o o       |
     |        S . .    |
     |       . + =     |
     |        . =Eo o  |
     |       . + ..o.o+|
     |       .+..  .o*B|
     +-----------------+
     root@compute:~/.ssh#
(3) Copy the mon (admin) node's key to every Ceph node (specify the user name to use):
ssh-copy-id mengfei@compute
ssh-copy-id mengfei@controller
ssh-copy-id mengfei@network
ssh-copy-id root@compute
ssh-copy-id root@controller
ssh-copy-id root@network
     root@compute:~/.ssh# ssh-copy-id mengfei@controller
     /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
     /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
     mengfei@controller's password:
     Number of key(s) added: 1
     Now try logging into the machine, with:   "ssh 'mengfei@controller'"
     and check to make sure that only the key(s) you wanted were added.
     root@compute:~/.ssh#
(4) Verify that each Ceph node can be logged into without a password:
ssh mengfei@controller
ssh mengfei@network
ssh root@controller
ssh root@network
     root@compute:~/.ssh# ssh mengfei@controller                     
     Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-39-generic i686)
      * Documentation:  https://help.ubuntu.com/                     
     19 packages can be updated.                                     
     15 updates are security updates.                                
     Last login: Tue Nov 25 14:07:06 2014 from compute               
     mengfei@controller:~$
(5)(Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node
     so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:
     Note: this file was not added in this walkthrough; ~/.ssh/config does not appear to exist by default.
Host controller
   Hostname controller
   User mengfei
Host network
   Hostname network
   User mengfei
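With these entries in place, a bare hostname logs in as the deploy user (a quick check, assuming the config above):
ssh controller     (now equivalent to ssh mengfei@controller)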
(II) Install ceph-deploy
1. Add the release key
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
2. Add the Ceph packages to your repository, replacing {ceph-stable-release} with the name of a stable Ceph release (e.g. cuttlefish, dumpling):
   Template: echo deb http://ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
3. Update the package index and install ceph-deploy
apt-get update
apt-get install ceph-deploy
4. Install NTP (details omitted; a minimal sketch follows)
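   A minimal sketch of the omitted step, assuming Ubuntu 14.04 package and service names (time synchronization across nodes keeps the monitors stable):
apt-get install ntp -y
service ntp restart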
(III) Install and configure the Ceph cluster
1. For best results, create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for the cluster.
mkdir my-cluster
mkdir /etc/ceph
cd my-cluster
2. Create a cluster
(1) To start from a clean slate before generating a new filesystem ID (fsid) and the monitor keys, purge any data and keys left over from earlier attempts:
ceph-deploy purgedata compute controller network
ceph-deploy forgetkeys
     root@compute:/home/mengfei/my-cluster# ceph-deploy purgedata compute controller network
     [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
     [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy purgedata compute controller network
     [ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts compute controller network
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [compute][DEBUG ] find the location of an executable
     [controller][DEBUG ] connected to host: controller
     [controller][DEBUG ] detect platform information from remote host
     [controller][DEBUG ] detect machine type
     [controller][DEBUG ] find the location of an executable
     [network][DEBUG ] connected to host: network
     [network][DEBUG ] detect platform information from remote host
     [network][DEBUG ] detect machine type
     [network][DEBUG ] find the location of an executable
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
     [compute][INFO  ] purging data on compute
     [compute][INFO  ] Running command: rm -rf --one-file-system -- /var/lib/ceph
     [compute][INFO  ] Running command: rm -rf --one-file-system -- /etc/ceph/
     [controller][DEBUG ] connected to host: controller
     [controller][DEBUG ] detect platform information from remote host
     [controller][DEBUG ] detect machine type
     [ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
     [controller][INFO  ] purging data on controller
     [controller][INFO  ] Running command: rm -rf --one-file-system -- /var/lib/ceph
     [controller][INFO  ] Running command: rm -rf --one-file-system -- /etc/ceph/
     [network][DEBUG ] connected to host: network
     [network][DEBUG ] detect platform information from remote host
     [network][DEBUG ] detect machine type
     [ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
     [network][INFO  ] purging data on network
     [network][INFO  ] Running command: rm -rf --one-file-system -- /var/lib/ceph
     [network][INFO  ] Running command: rm -rf --one-file-system -- /etc/ceph/
     root@compute:/home/mengfei/my-cluster#
     root@compute:/home/mengfei/my-cluster# ceph-deploy forgetkeys
     [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
     [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy forgetkeys
     root@compute:/home/mengfei/my-cluster#
(2) From the admin node, create the cluster with ceph-deploy.
    Note: this generates ceph.conf, ceph.mon.keyring, and ceph.log (config file, keyring, log file) in the current directory.
cd /home/mengfei/my-cluster
ceph-deploy new compute  
    (Note: create the cluster on the primary node first, i.e. run new compute before running install.)
       root@compute:/home/mengfei/my-cluster# ceph-deploy new compute
       [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
       [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy new compute
       [ceph_deploy.new][DEBUG ] Creating new cluster named ceph
       [ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
       [compute][DEBUG ] connected to host: compute
       [compute][DEBUG ] detect platform information from remote host
       [compute][DEBUG ] detect machine type
       [compute][DEBUG ] find the location of an executable
       [compute][INFO  ] Running command: /bin/ip link show
       [compute][INFO  ] Running command: /bin/ip addr show
       [compute][DEBUG ] IP addresses found: ['192.168.122.1', '192.168.128.101', '10.10.10.101']
       [ceph_deploy.new][DEBUG ] Resolving host compute
       [ceph_deploy.new][DEBUG ] Monitor compute at 192.168.128.101
       [ceph_deploy.new][DEBUG ] Monitor initial members are ['compute']
       [ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.128.101']
       [ceph_deploy.new][DEBUG ] Creating a random mon key...
       [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
       [ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
       root@compute:/home/mengfei/my-cluster#
(3) Install Ceph
ceph-deploy install compute controller network
ceph-deploy uninstall compute controller network    (note: if you need to reinstall, this command and the next one remove Ceph)
apt-get remove --purge ceph ceph-common ceph-mds
      root@compute:/home/mengfei/my-cluster# ceph-deploy install compute controller network
      [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
      [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy install compute controller network
      [ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts compute controller network
      [ceph_deploy.install][DEBUG ] Detecting platform for host compute ...
      [compute][DEBUG ] connected to host: compute
      [compute][DEBUG ] detect platform information from remote host
      [compute][DEBUG ] detect machine type
      [ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
      [compute][INFO  ] installing ceph on compute
      [compute][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get -q install --assume-yes ca-certificates
      [compute][DEBUG ] Reading package lists...
      [compute][DEBUG ] Building dependency tree...
      [compute][DEBUG ] Reading state information...
      [compute][DEBUG ] ca-certificates is already the newest version.
      [compute][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 48 not upgraded.
      [compute][INFO  ] Running command: wget -O release.asc https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
      [compute][WARNIN] --2014-11-26 17:36:27--  https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
      [compute][WARNIN] Resolving ceph.com (ceph.com)... 208.113.241.137, 2607:f298:4:147::b05:fe2a
      [compute][WARNIN] Connecting to ceph.com (ceph.com)|208.113.241.137|:443... connected.
      [compute][WARNIN] HTTP request sent, awaiting response... 200 OK
      [compute][WARNIN] Length: unspecified [text/plain]
      [compute][WARNIN] Saving to: ‘release.asc’
      [compute][WARNIN]
      [compute][WARNIN]      0K .                                                      19.7M=0s
      [compute][WARNIN]
      [compute][WARNIN] 2014-11-26 17:36:28 (19.7 MB/s) - ‘release.asc’ saved [1752]
      [compute][WARNIN]
      [compute][INFO  ] Running command: apt-key add release.asc
      [compute][DEBUG ] OK
      [compute][DEBUG ] add deb repo to sources.list
      [compute][INFO  ] Running command: apt-get -q update
      [compute][DEBUG ] Ign http://cn.archive.ubuntu.com trusty InRelease
      [compute][DEBUG ] Hit http://ceph.com trusty InRelease
      [compute][DEBUG ] Ign http://cn.archive.ubuntu.com trusty-updates InRelease
      [compute][DEBUG ] Ign http://ubuntu-cloud.archive.canonical.com trusty-updates/juno InRelease
      [compute][DEBUG ] Ign http://cn.archive.ubuntu.com trusty-backports InRelease
      [compute][DEBUG ] Hit http://ceph.com trusty/main i386 Packages
      [compute][DEBUG ] Hit http://ubuntu-cloud.archive.canonical.com trusty-updates/juno Release.gpg
      [compute][DEBUG ] Ign http://cn.archive.ubuntu.com trusty-security InRelease
      [compute][DEBUG ] Hit http://ubuntu-cloud.archive.canonical.com trusty-updates/juno Release
      .................. (a long list of package source URLs, omitted here)
      [network][DEBUG ] Reading package lists...
      [network][WARNIN] W: Duplicate sources.list entry http://cn.archive.ubuntu.com/ubuntu/ trusty/main i386 Packages (/var/lib/apt/lists/cn.archive.ubuntu.com_ubuntu_dists_trusty_main_binary-i386_Packages)
      [network][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
      [network][DEBUG ] Reading package lists...
      [network][DEBUG ] Building dependency tree...
      [network][DEBUG ] Reading state information...
      [network][DEBUG ] gdisk is already the newest version.
      [network][DEBUG ] ceph is already the newest version.
      [network][DEBUG ] ceph-common is already the newest version.
      [network][DEBUG ] ceph-fs-common is already the newest version.
      [network][DEBUG ] ceph-mds is already the newest version.
      [network][DEBUG ] 0 upgraded, 0 newly installed, 0 to remove and 36 not upgraded.
      [network][INFO  ] Running command: ceph --version
      [network][DEBUG ] ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
      root@compute:/home/mengfei/my-cluster#
(4) Add a Ceph cluster monitor
ceph-deploy mon create compute
      root@compute:/home/mengfei/my-cluster# ceph-deploy mon create compute
      [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
      [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy mon create compute
      [ceph_deploy.mon][WARNIN] keyring (ceph.mon.keyring) not found, creating a new one
      [ceph_deploy.new][DEBUG ] Creating a random mon key...
      [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
      [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts compute
      [ceph_deploy.mon][DEBUG ] detecting platform for host compute ...
      [compute][DEBUG ] connected to host: compute
      [compute][DEBUG ] detect platform information from remote host
      [compute][DEBUG ] detect machine type
      [ceph_deploy.mon][INFO  ] distro info: Ubuntu 14.04 trusty
      [compute][DEBUG ] determining if provided host has same hostname in remote
      [compute][DEBUG ] get remote short hostname
      [compute][DEBUG ] deploying mon to compute
      [compute][DEBUG ] get remote short hostname
      [compute][DEBUG ] remote hostname: compute
      [compute][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
      [compute][DEBUG ] create the mon path if it does not exist
      [compute][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-compute/done
      [compute][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-compute/done
      [compute][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-compute.mon.keyring
      [compute][DEBUG ] create the monitor keyring file
      [compute][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i compute --keyring /var/lib/ceph/tmp/ceph-compute.mon.keyring
      [compute][DEBUG ] ceph-mon: mon.noname-a 192.168.128.101:6789/0 is local, renaming to mon.compute
      [compute][DEBUG ] ceph-mon: set fsid to a15c8476-cd50-4609-bfc7-bc49a5d24f8c
      [compute][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-compute for mon.compute
      [compute][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-compute.mon.keyring
      [compute][DEBUG ] create a done file to avoid re-doing the mon deployment
      [compute][DEBUG ] create the init path if it does not exist
      [compute][DEBUG ] locating the `service` executable...
      [compute][INFO  ] Running command: initctl emit ceph-mon cluster=ceph id=compute
      [compute][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute.asok mon_status
      [compute][DEBUG ] ********************************************************************************
      [compute][DEBUG ] status for monitor: mon.compute
      [compute][DEBUG ] {
      [compute][DEBUG ]   "election_epoch": 2,
      [compute][DEBUG ]   "extra_probe_peers": [],
      [compute][DEBUG ]   "monmap": {
      [compute][DEBUG ]     "created": "0.000000",
      [compute][DEBUG ]     "epoch": 1,
      [compute][DEBUG ]     "fsid": "a15c8476-cd50-4609-bfc7-bc49a5d24f8c",
      [compute][DEBUG ]     "modified": "0.000000",
      [compute][DEBUG ]     "mons": [
      [compute][DEBUG ]       {
      [compute][DEBUG ]         "addr": "192.168.128.101:6789/0",
      [compute][DEBUG ]         "name": "compute",
      [compute][DEBUG ]         "rank": 0
      [compute][DEBUG ]       }
      [compute][DEBUG ]     ]
      [compute][DEBUG ]   },
      [compute][DEBUG ]   "name": "compute",
      [compute][DEBUG ]   "outside_quorum": [],
      [compute][DEBUG ]   "quorum": [
      [compute][DEBUG ]     0
      [compute][DEBUG ]   ],
      [compute][DEBUG ]   "rank": 0,
      [compute][DEBUG ]   "state": "leader",
      [compute][DEBUG ]   "sync_provider": []
      [compute][DEBUG ] }
      [compute][DEBUG ] ********************************************************************************
      [compute][INFO  ] monitor: mon.compute is running
      [compute][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute.asok mon_status
      root@compute:/home/mengfei/my-cluster#
(5) Gather the keys
ceph-deploy gatherkeys compute
     Once the keys are gathered, the local directory contains the following keyring files:
     1. {cluster-name}.client.admin.keyring
     2. {cluster-name}.bootstrap-osd.keyring
     3. {cluster-name}.bootstrap-mds.keyring
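     A quick check (run inside my-cluster; with the default cluster name the files are prefixed ceph.):
ls -l *.keyring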
     root@compute:/home/mengfei/my-cluster# ceph-deploy gatherkeys compute
     [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
     [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy gatherkeys compute
     [ceph_deploy.gatherkeys][DEBUG ] Checking compute for /etc/ceph/ceph.client.admin.keyring
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [compute][DEBUG ] fetch remote file
     [ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from compute.
     [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
     [ceph_deploy.gatherkeys][DEBUG ] Checking compute for /var/lib/ceph/bootstrap-osd/ceph.keyring
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [compute][DEBUG ] fetch remote file
     [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from compute.
     [ceph_deploy.gatherkeys][DEBUG ] Checking compute for /var/lib/ceph/bootstrap-mds/ceph.keyring
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [compute][DEBUG ] fetch remote file
     [ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from compute.
     root@compute:/home/mengfei/my-cluster#
(6) Create the OSD directory mount points
    Note: each disk is 5 GB; only 1 GB is partitioned here, and the remaining space is left for other uses.
ssh root@controller   (this node is osd0)
Create the disk partition:
fdisk /dev/sdc     (note: the session is recorded below)
Create the mount point:
mkdir -p /var/lib/ceph/osd/ceph-osd0
Format the partition (an xfs or btrfs filesystem is recommended):
mkfs.xfs -f /dev/sdc1
mount /dev/sdc1 /var/lib/ceph/osd/ceph-osd0                 (note: mounting with -o user_xattr fails with "bad option"; XFS enables user xattrs by default and does not accept that option)
mount -o remount,user_xattr /var/lib/ceph/osd/ceph-osd0     (note: remount avoids fully unmounting the filesystem; a quick xattr check follows the mkfs log below)
vi /etc/fstab
/dev/sdc1 /var/lib/ceph/osd/ceph-osd0 xfs defaults 0 0    (note: added by hand; the official docs omit this step)
#/dev/sdc1 /var/lib/ceph/osd/ceph-osd0 xfs remount,user_xattr 0 0    (note: keep only one active entry per device; remount is not a valid boot-time option)
      root@controller:/home/mengfei#fdisk /dev/sdc
      Command (m for help): n
      Partition type:
         p   primary (0 primary, 0 extended, 4 free)
         e   extended
      Select (default p): p
      Partition number (1-4, default 1): 1
      First sector (2048-10485759, default 2048):
      Using default value 2048
      Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 2097151
      Command (m for help): p
        Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1            2048     2097151     1047552   83  Linux
      Command (m for help): w
      The partition table has been altered!
      Calling ioctl() to re-read partition table.
      Syncing disks.
      root@controller:/home/mengfei#
      root@controller:/home/mengfei# mkfs.xfs -f /dev/sdc1
      meta-data=/dev/sdc1              isize=256    agcount=4, agsize=65472 blks
               =                       sectsz=512   attr=2, projid32bit=0
      data     =                       bsize=4096   blocks=261888, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0
      log      =internal log           bsize=4096   blocks=1200, version=2
               =                       sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      root@controller:/home/mengfei#
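      A quick sanity check of the mount and xattr support (a sketch; setfattr/getfattr come from the attr package, assumed installed):
mount | grep ceph-osd0
setfattr -n user.test -v 1 /var/lib/ceph/osd/ceph-osd0
getfattr -n user.test /var/lib/ceph/osd/ceph-osd0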
ssh root@network      (this node is osd1)
Create the disk partition:
fdisk /dev/sdc     (note: the session is recorded below)
Create the mount point:
mkdir -p /var/lib/ceph/osd/ceph-osd1
Format the partition (an xfs or btrfs filesystem is recommended):
mkfs.xfs -f /dev/sdc1
mount /dev/sdc1 /var/lib/ceph/osd/ceph-osd1                 (note: mounting with -o user_xattr fails with "bad option", as on osd0)
mount -o remount,user_xattr /var/lib/ceph/osd/ceph-osd1     (note: remount avoids fully unmounting the filesystem)
vi /etc/fstab
#/dev/sdc1 /var/lib/ceph/osd/ceph-osd1 xfs user_xattr 0 0    (note: added by hand; the official docs omit this step)
/dev/sdc1 /var/lib/ceph/osd/ceph-osd1 xfs rw 0 0
      root@network:/home/mengfei#fdisk /dev/sdc
      Command (m for help): n
      Partition type:
         p   primary (0 primary, 0 extended, 4 free)
         e   extended
      Select (default p): p
      Partition number (1-4, default 1): 1
      First sector (2048-10485759, default 2048):
      Using default value 2048
      Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 2097151
      Command (m for help): p
      Disk /dev/sdc: 5368 MB, 5368709120 bytes
      255 heads, 63 sectors/track, 652 cylinders, total 10485760 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xfbd1ab98
      Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1            2048     2097151     1047552   83  Linux
      Command (m for help): w
      The partition table has been altered!
      Calling ioctl() to re-read partition table.
      Syncing disks.
      root@network:/home/mengfei#
      root@network:/home/mengfei# mkfs.xfs -f /dev/sdc1
      meta-data=/dev/sdc1              isize=256    agcount=4, agsize=65472 blks
               =                       sectsz=512   attr=2, projid32bit=0
      data     =                       bsize=4096   blocks=261888, imaxpct=25
               =                       sunit=0      swidth=0 blks
      naming   =version 2              bsize=4096   ascii-ci=0
      log      =internal log           bsize=4096   blocks=1200, version=2
               =                       sectsz=512   sunit=0 blks, lazy-count=1
      realtime =none                   extsz=4096   blocks=0, rtextents=0
      root@network:/home/mengfei#
(7) From the admin node, prepare the OSD nodes and activate the OSDs
cd /home/mengfei/my-cluster
    Note: these commands must be run from this directory. ceph.conf was generated here when the cluster was created, and ceph-deploy distributes it automatically; run elsewhere, it fails with "Cannot load config".
    Some settings should be adjusted in my-cluster/ceph.conf first; for example the OSD journal (ceph-osd0/journal) can default to quite a large size. Since my mount points are small, I set the following (a sample ceph.conf follows this list):
    osd journal size = 100         (journal size in MB; if the mount point is large the default is fine for a quick install, mine is small so I chose 100)
    osd pool default size = 3      (total number of replicas per object: the original plus its copies)
    osd pool default min_size = 1  (minimum number of replicas required to serve I/O)
    osd crush chooseleaf type = 1  (bucket type for chooseleaf in CRUSH rules, given as an ordinal rank rather than a name; the default is 1, i.e. host)
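    Put together, my-cluster/ceph.conf then looks roughly like this (a sketch: the fsid placeholder stands for whatever ceph-deploy new generated, and the mon entries match the new compute log above):
    [global]
    fsid = {fsid-generated-by-ceph-deploy-new}
    mon_initial_members = compute
    mon_host = 192.168.128.101
    osd journal size = 100
    osd pool default size = 3
    osd pool default min_size = 1
    osd crush chooseleaf type = 1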
ceph-deploy osd prepare controller:/var/lib/ceph/osd/ceph-osd0
ceph-deploy osd prepare network:/var/lib/ceph/osd/ceph-osd1
ceph-deploy osd activate controller:/var/lib/ceph/osd/ceph-osd0
ceph-deploy osd activate network:/var/lib/ceph/osd/ceph-osd1
        (Note: these commands sometimes ask for --overwrite-conf; if so, rerun with the flag, e.g. ceph-deploy --overwrite-conf osd prepare controller:/var/lib/ceph/osd/ceph-osd0)
(8) Copy the configuration file and admin key to your admin node and your Ceph nodes
   Note: ceph-deploy pushes ceph.conf and the admin keyring to every node listed.
       Afterwards the ceph CLI no longer needs the monitor address, nor ceph.client.admin.keyring on every invocation (a quick check follows the log below).
ceph-deploy admin compute controller network   (note: this sometimes requires --overwrite-conf; this run needed it, as the recorded invocation shows)
     root@compute:/home/mengfei/my-cluster# ceph-deploy admin compute controller network
     [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
     [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy --overwrite-conf admin compute controller network
     [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to compute
     [compute][DEBUG ] connected to host: compute
     [compute][DEBUG ] detect platform information from remote host
     [compute][DEBUG ] detect machine type
     [compute][DEBUG ] get remote short hostname
     [compute][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
     [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to controller
     [controller][DEBUG ] connected to host: controller
     [controller][DEBUG ] detect platform information from remote host
     [controller][DEBUG ] detect machine type
     [controller][DEBUG ] get remote short hostname
     [controller][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
     [ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to network
     [network][DEBUG ] connected to host: network
     [network][DEBUG ] detect platform information from remote host
     [network][DEBUG ] detect machine type
     [network][DEBUG ] get remote short hostname
     [network][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
     root@compute:/home/mengfei/my-cluster#
     root@compute:/home/mengfei/my-cluster#
     root@compute:/home/mengfei/my-cluster#
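     A quick check that the push worked, run from any of the three nodes (no monitor address or keyring flags needed now):
ceph health
ceph -s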
(9) Verify the OSDs
ceph osd tree   show the OSD tree and up/down status
ceph osd dump   show OSD configuration details
ceph osd rm     remove OSD(s): ceph osd rm <id> [<id>...]
ceph osd crush rm osd.0   remove an OSD from the CRUSH map
ceph osd crush rm node1   remove an OSD host bucket from the CRUSH map
     root@compute:/home/mengfei/my-cluster# ceph osd tree   (weight shows 0: the default CRUSH weight is the disk capacity in TB, so these 1 GB partitions round down to 0; see the reweight sketch after the dump below)
     # id    weight  type name       up/down reweight
     -1      0       root default
     -2      0               host controller
     0       0                       osd.0   up      1
     -3      0               host network
     1       0                       osd.1   up      1
     root@compute:/home/mengfei/my-cluster#  
     root@compute:/var/log/ceph# ceph osd dump
     epoch 89
     fsid 8b2af1e6-92eb-4d74-9ca5-057522bb738f
     created 2014-11-27 16:22:54.085639
     modified 2014-11-28 23:39:44.056533
     flags
     pool 0 'data' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 89 flags hashpspool crash_replay_interval 45 stripe_width 0
     pool 1 'metadata' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 88 flags hashpspool stripe_width 0
     pool 2 'rbd' replicated size 3 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 87 flags hashpspool stripe_width 0
     max_osd 2
     osd.0 up   in  weight 0 up_from 32 up_thru 82 down_at 31 last_clean_interval [15,29) 192.168.128.100:6800/2811 192.168.128.100:6801/2811 192.168.128.100:6802/2811 192.168.128.100:6803/2811 exists,up f4707c04-aeca-46fe-bf0e-f7e2d43d0524
     osd.1 up   in  weight 0 up_from 33 up_thru 82 down_at 29 last_clean_interval [14,28) 192.168.128.102:6800/3105 192.168.128.102:6801/3105 192.168.128.102:6802/3105 192.168.128.102:6803/3105 exists,up c8b2811c-fb19-49c3-b630-374a4db7073e
     root@compute:/var/log/ceph#
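     With weight 0, CRUSH will not place data on these OSDs and placement groups stay unclean. One fix is to assign an explicit weight (a sketch; the value 1 is an assumption, normally the weight is proportional to disk capacity in TB):
ceph osd crush reweight osd.0 1
ceph osd crush reweight osd.1 1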
