Ceph 0.94 Installation

Install ceph

Document: http://docs.ceph.com/docs/master/start/quick-start-preflight/#rhel-centos

Configure the system
systemctl stop firewalld.service 
systemctl disable firewalld.service 
hostnamectl set-hostname ceph-osd-node1
timedatectl set-timezone Asia/Shanghai
yum install chrony -y
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources
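Ceph monitors refuse peers whose clocks drift too far apart (mon_clock_drift_allowed defaults to 0.05 s), so it is worth confirming chrony has actually synced before going further. A small sketch: `clock_ok` reads `chronyc tracking` output on stdin and succeeds when the "System time" offset is within 0.05 s. The sample line below is illustrative, not captured from a real host; on a live node pipe the real output in.

```shell
# clock_ok: exit 0 when the "System time" offset reported by
# "chronyc tracking" is at most 0.05 s (Ceph's default allowed drift).
clock_ok() {
    awk '/System time/ { ok = ($4 + 0 <= 0.05) } END { exit ok ? 0 : 1 }'
}

# Demo on a sample line (illustrative, not from a real host).
# On a live node: chronyc tracking | clock_ok && echo "clock in sync"
echo "System time     : 0.000134 seconds fast of NTP time" | clock_ok \
    && echo "clock in sync"
```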
For RHEL/CentOS:
yum install centos-release-openstack-mitaka

yum install -y ftp://ftp.linux.kiev.ua/puias/updates/7.1/en/os/x86_64/python-setuptools-0.9.8-4.el7.noarch.rpm

yum install ceph-deploy -y
Create Ceph Deploy User
# useradd ceph
# passwd ceph
Changing password for user ceph.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.

# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
# chmod 0440 /etc/sudoers.d/ceph
Passwordless SSH login for the ceph user
# su - ceph
# pwd
/home/ceph

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa): 
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
The key fingerprint is:
a5:bd:fe:c7:57:e8:46:2d:71:a0:c7:1f:dc:0a:d1:8e ceph@ceph-osd-node1
The key's randomart image is:
+--[ RSA 2048]----+
|             .   |
|            . o  |
|          .  *...|
|         +  E =oo|
|        S .  o B.|
|           .  = +|
|          .  + ..|
|         .    = .|
|          ...o . |
+-----------------+

# ll .ssh/
total 8
-rw-------. 1 ceph ceph 1675 Mar 16 15:46 id_rsa
-rw-r--r--. 1 ceph ceph  401 Mar 16 15:46 id_rsa.pub
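The interactive ssh-keygen session above can also be scripted for repeatable setup: `-N ""` sets an empty passphrase and `-f` the output path, the same choices made interactively above. This sketch writes to a temporary directory so it cannot clobber an existing `~/.ssh/id_rsa`; point `-f` at `$HOME/.ssh/id_rsa` for real use.

```shell
# Non-interactive key generation, written to a temp dir for safety.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$keydir/id_rsa" -q
ls -l "$keydir"   # expect id_rsa and id_rsa.pub
```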

ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3
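The three ssh-copy-id calls above can be driven by a loop. This sketch only prints the commands (drop the `echo` to execute them); `ceph` is assumed as the remote user, matching the deploy user created earlier, and the node names are the placeholders from the text.

```shell
# Print the ssh-copy-id command for each node; remove "echo" to run them.
user=ceph
for node in node1 node2 node3; do
    echo ssh-copy-id "${user}@${node}"
done
```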
Disable Firewall
# sudo systemctl stop firewalld.service 
# sudo systemctl disable firewalld.service
TTY config

On the Ceph nodes, change Defaults requiretty to Defaults:ceph !requiretty. Use sudo visudo to locate and edit the setting.

# visudo
# Defaults    requiretty
Defaults:ceph !requiretty

Note: CentOS 7 does not ship with a Defaults requiretty line; simply add Defaults:ceph !requiretty.

Create A Cluster

Run the following on the admin node only.
Test environment:

$ cd my-cluster/
$ ceph-deploy purgedata  ceph-osd-node1 ceph-osd-node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy purgedata ceph-osd-node1 ceph-osd-node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f52f20cb440>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-osd-node1', 'ceph-osd-node2']
[ceph_deploy.cli][INFO  ]  func                          : <function purgedata at 0x7f52f2910578>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.install][DEBUG ] Purging data from cluster ceph hosts ceph-osd-node1 ceph-osd-node2
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1 
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph-osd-node1][DEBUG ] find the location of an executable
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2 
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph-osd-node2][DEBUG ] find the location of an executable
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1 
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node1][INFO  ] purging data on ceph-osd-node1
[ceph-osd-node1][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-osd-node1][INFO  ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2 
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node2][INFO  ] purging data on ceph-osd-node2
[ceph-osd-node2][INFO  ] Running command: sudo rm -rf --one-file-system -- /var/lib/ceph
[ceph-osd-node2][INFO  ] Running command: sudo rm -rf --one-file-system -- /etc/ceph/
Remove the old keys

$ ceph-deploy forgetkeys 
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy forgetkeys
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7efc20899ab8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function forgetkeys at 0x7efc210dec08>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
Purge the Ceph packages

$ ceph-deploy purge ceph-osd-node1 ceph-osd-node2
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /bin/ceph-deploy purge ceph-osd-node1 ceph-osd-node2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f9abb797d88>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph-osd-node1', 'ceph-osd-node2']
[ceph_deploy.cli][INFO  ]  func                          : <function purge at 0x7f9abbfde500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.install][INFO  ] note that some dependencies *will not* be removed because they can cause issues with qemu-kvm
[ceph_deploy.install][INFO  ] like: librbd1 and librados2
[ceph_deploy.install][DEBUG ] Purging on cluster ceph hosts ceph-osd-node1 ceph-osd-node2
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-osd-node1 ...
[ceph-osd-node1][DEBUG ] connection detected need for sudo
[ceph-osd-node1][DEBUG ] connected to host: ceph-osd-node1 
[ceph-osd-node1][DEBUG ] detect platform information from remote host
[ceph-osd-node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node1][INFO  ] Purging Ceph on ceph-osd-node1
[ceph-osd-node1][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph-osd-node1][WARNIN] No Match for argument: ceph
[ceph-osd-node1][WARNIN] No Match for argument: ceph-release
[ceph-osd-node1][WARNIN] No Match for argument: ceph-common
[ceph-osd-node1][WARNIN] No Match for argument: ceph-radosgw
[ceph-osd-node1][INFO  ] Running command: sudo yum clean all
[ceph-osd-node1][DEBUG ] Loaded plugins: fastestmirror
[ceph-osd-node1][DEBUG ] Cleaning repos: base centos-ceph-hammer centos-openstack-mitaka centos-qemu-ev
[ceph-osd-node1][DEBUG ]               : extras updates
[ceph-osd-node1][DEBUG ] Cleaning up everything
[ceph-osd-node1][DEBUG ] Cleaning up list of fastest mirrors
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-osd-node2 ...
[ceph-osd-node2][DEBUG ] connection detected need for sudo
[ceph-osd-node2][DEBUG ] connected to host: ceph-osd-node2 
[ceph-osd-node2][DEBUG ] detect platform information from remote host
[ceph-osd-node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS Linux 7.3.1611 Core
[ceph-osd-node2][INFO  ] Purging Ceph on ceph-osd-node2
[ceph-osd-node2][INFO  ] Running command: sudo yum -y -q remove ceph ceph-release ceph-common ceph-radosgw
[ceph-osd-node2][WARNIN] No Match for argument: ceph
[ceph-osd-node2][WARNIN] No Match for argument: ceph-release
[ceph-osd-node2][WARNIN] No Match for argument: ceph-common
[ceph-osd-node2][WARNIN] No Match for argument: ceph-radosgw
[ceph-osd-node2][INFO  ] Running command: sudo yum clean all
[ceph-osd-node2][DEBUG ] Loaded plugins: fastestmirror
[ceph-osd-node2][DEBUG ] Cleaning repos: base centos-ceph-hammer centos-openstack-mitaka centos-qemu-ev
[ceph-osd-node2][DEBUG ]               : extras updates
[ceph-osd-node2][DEBUG ] Cleaning up everything
[ceph-osd-node2][DEBUG ] Cleaning up list of fastest mirrors
Create Cluster
# ceph-deploy new ceph-osd-node1    # initial monitor node(s)

This command initializes node1 as a monitor node and generates the Ceph configuration file (ceph.conf).

Edit ceph.conf and add the following line:
[global]
osd_pool_default_size = 2 
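osd_pool_default_size = 2 keeps two replicas of every object, so usable capacity is raw capacity divided by the replica count. A quick sketch of the arithmetic, assuming four equally sized 1 TB OSDs (illustrative numbers, not from the text):

```shell
# Usable capacity = (number of OSDs x OSD size) / replica count.
osds=4       # four OSDs, as created later in this walkthrough
osd_tb=1     # assumed 1 TB each (illustrative)
size=2       # osd_pool_default_size set above
echo "usable: $(( osds * osd_tb / size )) TB"
```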
Install Ceph
Install the Ceph RPMs, either directly with yum on each node or with ceph-deploy from the admin node:
yum -y install ceph ceph-radosgw
ceph-deploy install ceph-osd-node1 ceph-osd-node2
Add the initial monitor(s) and gather the keys
ceph-deploy mon create-initial
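After `mon create-initial` succeeds, ceph-deploy drops the gathered keyrings into the current working directory. A small check, assuming the file names this ceph-deploy generation (1.5.x) normally gathers; adjust the list if your version differs.

```shell
# Report which of the expected keyrings are present in the working directory.
for f in ceph.client.admin.keyring \
         ceph.bootstrap-osd.keyring \
         ceph.bootstrap-mds.keyring \
         ceph.bootstrap-rgw.keyring; do
    if [ -f "$f" ]; then echo "found $f"; else echo "missing $f"; fi
done
```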
Add 4 OSDs

# ssh ceph-osd-node1
# sudo mkdir /var/local/osd0
# sudo mkdir /var/local/osd1
# sudo chmod 777 /var/local/osd0 /var/local/osd1    # prevent the later "osd activate" step from being rejected with permission denied
# exit

# ssh  ceph-osd-node2 
# sudo mkdir /var/local/osd2
# sudo mkdir /var/local/osd3
# sudo chmod 777 /var/local/osd2 /var/local/osd3
# exit
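The per-node mkdir/chmod steps above follow one pattern, so they can be generated from a node/directory table, which makes typos in the path list harder to commit. This sketch prints the commands rather than running them over ssh (remove the leading `echo` to execute); the pairing matches the layout above.

```shell
# Generate the OSD-directory setup command for each (node, dir) pair.
while read -r node dir; do
    echo ssh "$node" "sudo mkdir -p $dir && sudo chmod 777 $dir"
done <<'EOF'
ceph-osd-node1 /var/local/osd0
ceph-osd-node1 /var/local/osd1
ceph-osd-node2 /var/local/osd2
ceph-osd-node2 /var/local/osd3
EOF
```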



ceph-deploy osd prepare ceph-osd-node1:/var/local/osd0  ceph-osd-node1:/var/local/osd1
ceph-deploy osd prepare ceph-osd-node2:/var/local/osd2  ceph-osd-node2:/var/local/osd3


ceph-deploy osd activate ceph-osd-node1:/var/local/osd0  ceph-osd-node1:/var/local/osd1
ceph-deploy osd activate ceph-osd-node2:/var/local/osd2  ceph-osd-node2:/var/local/osd3


Use ceph-deploy to copy the configuration file and admin key to the admin node and the Ceph nodes:
ceph-deploy admin  ceph-osd-node1 ceph-osd-node2
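With the admin keyring in place, any of these nodes can query the cluster. `ceph health` prints HEALTH_OK when the monitors and OSDs are all up; the sketch below classifies that output, demoed on a sample string since this document is not running against a live cluster.

```shell
# health_ok: succeed when stdin starts with HEALTH_OK.
health_ok() { grep -q '^HEALTH_OK'; }

# Demo on a sample string; on a real node:
#   ceph health | health_ok && echo "cluster healthy"
echo "HEALTH_OK" | health_ok && echo "cluster healthy"
```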
Remove Ceph

yum erase ceph-deploy ceph-mon ceph-osd ceph ceph-radosgw python-cephfs ceph-common ceph-base ceph-mds libcephfs1 ceph-selinux  