Environment:
Controller node: 1 processor, 2.5 GB memory, 20 GB storage, 3 network interfaces
Compute node: 1 processor, 2 GB memory, 20 GB storage, 3 network interfaces
Compute node: 1 processor, 2 GB memory, 20 GB storage, 3 network interfaces
Object node: 1 processor, 512 MB memory, 20 GB storage + 10 GB storage, 2 network interfaces
The controller and compute nodes each have three NICs: one on the management network (10.0.0.0/24), one on the provider network (192.168.128.0/24), and one used to reach the Internet for installing packages.
The object node does not need a provider-network NIC.
Since my laptop's resources are limited, the memory allocations are small. Two compute nodes are prepared here in order to test live migration.
The networking option used in this environment is the self-service network (Linux bridge + VLAN).
I. Environment preparation
1. Configure the controller node
Set the hostname:
[root@controller ~]# cat /etc/hostname
controller
Configure the network:
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.11
NETMASK=255.255.255.0
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth2
DEVICE=eth2
ONBOOT=yes
Configure hosts resolution:
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11   controller
10.0.0.31   compute1
10.0.0.21   compute2
10.0.0.41   block1
After these changes, reboot the server.
2. Configure the compute node
Set the hostname:
[root@compute1 ~]# cat /etc/hostname
compute1
Configure the network:
[root@compute1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.31
NETMASK=255.255.255.0
[root@compute1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
[root@compute1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth2
DEVICE=eth2
ONBOOT=yes
Configure hosts resolution:
[root@compute1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11   controller
10.0.0.31   compute1
10.0.0.21   compute2
10.0.0.41   block1
Reboot the server.
.......
Configuring compute2 (10.0.0.21) and object1 (10.0.0.41, listed as block1 in /etc/hosts) follows the same steps.
After configuring and rebooting the servers, make sure every node can reach the Internet and can ping every other node by hostname.
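The connectivity check above can be scripted as a quick loop. A minimal sketch, assuming the hostnames match the /etc/hosts entries configured earlier:

```shell
# Ping every node once by hostname and report reachability.
# The node names below come from the /etc/hosts file configured above.
nodes="controller compute1 compute2 block1"
checked=0
for host in $nodes; do
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "$host: reachable"
    else
        echo "$host: NOT reachable"
    fi
    checked=$((checked + 1))
done
echo "checked $checked nodes"
```

Run this from each node in turn; every line should report "reachable" before moving on.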
3. Install the time service
On the controller node:
[root@controller ~]# yum install chrony
Edit the /etc/chrony.conf configuration file. The defaults can be left alone; just add allow 10.0.0.0/24 so the other nodes can synchronize from the controller.
[root@controller ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
allow 10.0.0.0/24
Start the service and enable it at boot:
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
On the compute node:
[root@compute1 ~]# yum install chrony
Edit the /etc/chrony.conf configuration file, replacing the default server entries with:
server controller iburst
Start the service and enable it at boot:
[root@compute1 ~]# systemctl enable chronyd.service
[root@compute1 ~]# systemctl start chronyd.service
The other nodes are configured the same way as the compute node.
To verify, run the following command on the controller node:
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 202.118.1.130                 2   8   377   127   -361us[ -647us] +/- 7821us
^- time7.aliyun.com              2   8   177   128   +855us[ +570us] +/-   29ms
^- news.neu.edu.cn               2   8   377     6   -420us[ -420us] +/- 7927us
^- dns1.synet.edu.cn             2   8   200   17m  -3379us[-3251us] +/-   12ms
Run the following command on the other nodes:
[root@compute1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6   377    22    +12us[  -18us] +/- 8716us
The Name/IP address column should show controller.
4. Configure the OpenStack package repository (CentOS 7 is used here)
[root@controller ~]# yum install centos-release-openstack-mitaka
[root@controller ~]# yum upgrade
Run the two steps above on all nodes, then reboot the servers.
Install the OpenStack client:
[root@controller ~]# yum install python-openstackclient
[root@controller ~]# yum install openstack-selinux
5. Install and configure the database service
[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL
Edit /etc/my.cnf and add the configuration below:
[root@controller ~]# cat /etc/my.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
Start the service and enable it at boot:
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
Secure the database: set the database root password and answer yes to all the other prompts.
[root@controller ~]# mysql_secure_installation
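After securing the installation, it is worth confirming that you can still log in with the new root password. A minimal sketch; MYSQL_ROOT_PASS is a placeholder for whatever password you just set:

```shell
# Placeholder: substitute the root password chosen during mysql_secure_installation.
MYSQL_ROOT_PASS="your_root_password"
# A trivial query; failure usually means a wrong password or that
# mariadb.service is not running.
if mysql -u root -p"$MYSQL_ROOT_PASS" -e "SELECT VERSION();" 2>/dev/null; then
    echo "MariaDB login OK"
else
    echo "MariaDB login failed (check the password and that the service is running)"
fi
```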
6. Install the NoSQL database
[root@controller ~]# yum install mongodb-server mongodb
Edit the /etc/mongod.conf configuration file:
bind_ip = 10.0.0.11
smallfiles = true
Start the service and enable it at boot:
[root@controller ~]# systemctl enable mongod.service
[root@controller ~]# systemctl start mongod.service
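A quick way to confirm mongod is reachable on the management address it was bound to above. This is a sketch; the mongo shell comes with the mongodb package installed earlier:

```shell
# Bind address from /etc/mongod.conf above.
mongo_host="10.0.0.11"
if command -v mongo >/dev/null 2>&1; then
    mongo --host "$mongo_host" --eval 'db.runCommand({ ping: 1 })' \
        || echo "mongod not reachable at $mongo_host"
else
    echo "mongo shell not found on this host"
fi
```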
7. Install the message queue service
[root@controller ~]# yum install rabbitmq-server
Start the service and enable it at boot:
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
Add an openstack user to RabbitMQ and grant it permissions (replace RABBIT_PASS with a password of your choice):
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
...done.
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
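To confirm the account was created correctly, you can list RabbitMQ's users and their permissions. A sketch that degrades gracefully when rabbitmqctl is unavailable:

```shell
# The openstack user created above should appear with ".*" for
# configure, write, and read permissions on the default vhost "/".
expected_user="openstack"
if command -v rabbitmqctl >/dev/null 2>&1; then
    rabbitmqctl list_users
    rabbitmqctl list_permissions -p /
else
    echo "rabbitmqctl not found on this host"
fi
```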
8. Install Memcached
[root@controller ~]# yum install memcached python-memcached
Start the service and enable it at boot:
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
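Memcached listens on TCP port 11211 by default. A quick sketch to confirm something is listening there, using only standard tools and falling back if ss is absent:

```shell
# Memcached's default port; /etc/sysconfig/memcached can change it.
port=11211
if command -v ss >/dev/null 2>&1; then
    ss -tln | grep -q ":$port " \
        && echo "memcached listening on $port" \
        || echo "nothing listening on $port"
else
    echo "ss not available; try: echo stats | nc 127.0.0.1 $port"
fi
```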
The base environment installation is complete.
Reprinted from: https://blog.51cto.com/venuxs/1795901