OpenStack Base Environment Configuration (per-component configuration to follow)

Preface

Before configuring the individual OpenStack projects, make sure the base environment on every node is in place, so that the later steps run on a stable foundation. Below is the base environment I used when setting up OpenStack.

1. Resource Planning

Hostname   Memory   Disk    NICs
ct         8G       300G    VM: 172.16.1.20 / NAT: 10.0.0.20
c1         8G       300G    VM: 172.16.1.21 / NAT: 10.0.0.21
c2         8G       300G    VM: 172.16.1.22 / NAT: 10.0.0.22

2. Base Environment Configuration

2.1 All Nodes

# Base dependency packages
[root@localhost ~]# yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre pcre-devel expat-devel cmake bzip2
# Install the Train release repository first, then the OpenStack packages it provides
[root@localhost ~]# yum -y install centos-release-openstack-train
[root@localhost ~]# yum -y install python-openstackclient openstack-selinux openstack-utils
[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su			# open a new shell so the prompt shows the new hostname

# Host name mappings
[root@ct ~]# vi /etc/hosts
172.16.1.20  ct
172.16.1.21  c1
172.16.1.22  c2
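This edit has to be repeated on every node. A small sketch that appends the entries only if they are missing (so it is safe to re-run); it writes to /tmp/hosts.demo for illustration, where a real node would use /etc/hosts:

```shell
# Sketch: add the cluster's host entries idempotently.
# /tmp/hosts.demo stands in for /etc/hosts in this demo.
hosts_file=/tmp/hosts.demo
rm -f "$hosts_file"                     # start clean for the demo
add_hosts() {
  for entry in "172.16.1.20  ct" "172.16.1.21  c1" "172.16.1.22  c2"; do
    # only append an entry if it is not already present
    grep -qF "$entry" "$hosts_file" 2>/dev/null || echo "$entry" >> "$hosts_file"
  done
}
add_hosts
add_hosts                               # second run adds nothing (idempotent)
cat "$hosts_file"                       # three entries, no duplicates
```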

# Disable the firewall and SELinux
[root@ct ~]# systemctl stop firewalld
[root@ct ~]# systemctl disable firewalld
[root@ct ~]# setenforce 0				# takes effect immediately
[root@ct ~]# vim /etc/sysconfig/selinux
SELINUX=disabled						# persists across reboots

# SSH key pair (passwordless login between nodes)
[root@ct ~]#  ssh-keygen -t rsa	
[root@ct ~]#  ssh-copy-id ct
[root@ct ~]#  ssh-copy-id c1
[root@ct ~]#  ssh-copy-id c2
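ssh-keygen above prompts interactively for a file path and passphrase. When scripting the setup, the same key can be generated non-interactively; the /tmp path below is only for illustration (real nodes would use the default ~/.ssh/id_rsa):

```shell
# Non-interactive key generation: -N '' sets an empty passphrase, -q suppresses output.
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub    # avoid the interactive overwrite prompt
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/demo_id_rsa -q
ls -l /tmp/demo_id_rsa /tmp/demo_id_rsa.pub    # private and public key
```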

# Configure DNS (all nodes)
[root@ct ~]# vim /etc/resolv.conf
nameserver 114.114.114.114

# Time synchronization on the control node ct
[root@ct ~]# yum -y install chrony
[root@ct ~]# vim /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst	# comment out these lines
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst  			# add the Aliyun NTP server
allow 172.16.1.0/24    					# allow clients on the internal network to sync from this host
[root@ct ~]# systemctl enable chronyd
[root@ct ~]# systemctl restart chronyd

# Other nodes
[root@c1 ~]# vim /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst	# comment out these lines
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ct iburst   						# sync from host ct
[root@c1 ~]# systemctl enable chronyd
[root@c1 ~]# systemctl restart chronyd

# Scheduled task
[root@ct ~]# crontab -e
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log  		# log source status every 30 minutes (chronyd itself keeps time in sync)

2.2 Control Node

2.2.1 Install and Configure MariaDB

# Install and configure MariaDB
[root@ct ~]# yum -y install mariadb mariadb-server python2-PyMySQL
[root@ct ~]# yum -y install libibverbs	# module needed by the OpenStack control node to connect to MySQL

# Add a MySQL drop-in config file and enable the service at boot
[root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 172.16.1.20				# control node's internal address
default-storage-engine = innodb			# default storage engine
innodb_file_per_table = on				# one tablespace file per table
max_connections = 4096					# maximum number of connections
collation-server = utf8_general_ci		# default collation
character-set-server = utf8				# default character set
[root@ct ~]# systemctl enable mariadb
[root@ct ~]# systemctl start mariadb

# Run the MariaDB security script
[root@ct my.cnf.d]# mysql_secure_installation
Enter current password for root (enter for none): 			# press Enter
OK, successfully used password, moving on...
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] N
 ... skipping.
Remove test database and access to it? [Y/n] Y 
Reload privilege tables now? [Y/n] Y 	
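With the root password set, the later component guides will each create their own database and account. As a preview, a hypothetical example for the Keystone service (the database name, user, and KEYSTONE_DBPASS are placeholders, not values from this guide):

```sql
-- Hypothetical per-service database setup (names and password are placeholders)
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
```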

2.2.2 Install RabbitMQ

Every VM-creation command is sent by the control node to RabbitMQ, and the compute nodes listen on RabbitMQ for those messages.
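Concretely, each service that uses the queue points at this broker in its own config file. A typical fragment looks like the following (RABBIT_PASS is the same placeholder password used for the queue user in this guide):

```ini
[DEFAULT]
# AMQP broker URL: the openstack user on host ct (password is a placeholder)
transport_url = rabbit://openstack:RABBIT_PASS@ct
```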

# Install via yum
[root@ct ~]# yum -y install rabbitmq-server

# Enable RabbitMQ at boot and start the service
[root@ct ~]# systemctl enable rabbitmq-server.service
[root@ct ~]# systemctl start rabbitmq-server.service

# Create the message-queue user, used by the controller and compute nodes to authenticate to RabbitMQ
[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
...done.

# List the RabbitMQ plugins
[root@ct ~]# rabbitmq-plugins list
[ ] amqp_client                       3.3.5
[ ] cowboy                            0.5.0-rmq3.3.5-git4b93c2d
[ ] eldap                             3.3.5-gite309de4
[ ] mochiweb                          2.7.0-rmq3.3.5-git680dba8
[ ] rabbitmq_amqp1_0                  3.3.5
[ ] rabbitmq_auth_backend_ldap        3.3.5
[ ] rabbitmq_auth_mechanism_ssl       3.3.5
[ ] rabbitmq_consistent_hash_exchange 3.3.5
[ ] rabbitmq_federation               3.3.5
[ ] rabbitmq_federation_management    3.3.5
[ ] rabbitmq_management               3.3.5
[ ] rabbitmq_management_agent         3.3.5
[ ] rabbitmq_management_visualiser    3.3.5
[ ] rabbitmq_mqtt                     3.3.5
[ ] rabbitmq_shovel                   3.3.5
[ ] rabbitmq_shovel_management        3.3.5
[ ] rabbitmq_stomp                    3.3.5
[ ] rabbitmq_test                     3.3.5
[ ] rabbitmq_tracing                  3.3.5
[ ] rabbitmq_web_dispatch             3.3.5
[ ] rabbitmq_web_stomp                3.3.5
[ ] rabbitmq_web_stomp_examples       3.3.5
[ ] sockjs                            0.3.4-rmq3.3.5-git3132eb9
[ ] webmachine                        1.10.3-rmq3.3.5-gite9359c7

# Grant the openstack user permissions (regular expressions: configure, write, read)
[root@ct ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
# Ports 5672 and 25672 should now be listening (5672 is RabbitMQ's default port; 25672 is used by RabbitMQ's CLI tooling)

# Enable the RabbitMQ web management plugin (listens on port 15672)
[root@ct ~]# rabbitmq-plugins enable rabbitmq_management


2.2.3 Install memcached

  • Purpose

memcached stores session information, and the identity service's authentication mechanism uses it to cache tokens. When you log in to the OpenStack dashboard, session data is generated and stored in memcached.
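For example, Keystone's token caching is later pointed at this memcached instance with a fragment along these lines (section and option names follow the standard oslo.cache layout; ct:11211 matches the memcached OPTIONS configured below):

```ini
[cache]
# Cache backend and the memcached instance on the control node
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = ct:11211
```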

# Install via yum
[root@ct ~]# yum install -y memcached python-memcached			# python-memcached is the client module OpenStack's Python services use to talk to memcached

# Modify the memcached configuration file (make it listen on ct as well)
[root@ct ~]# cat /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,ct"

[root@ct ~]# systemctl enable memcached
[root@ct ~]# systemctl start memcached
[root@ct ~]# netstat -nautp | grep 11211

2.2.4 Install etcd

# Install via yum
[root@ct ~]# yum -y install etcd

# Edit the etcd config file
[root@ct ~]# vim /etc/etcd/etcd.conf 

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"					# data directory
ETCD_LISTEN_PEER_URLS="http://172.16.1.20:2380"				# URL listening for other etcd members (port 2380, cluster-internal traffic; hostnames are not valid here)
ETCD_LISTEN_CLIENT_URLS="http://172.16.1.20:2379"				# URL serving client requests (port 2379)
ETCD_NAME="ct"												# this member's name in the cluster
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.1.20:2380"	# peer URL this member advertises to the cluster (port 2380, member-to-member traffic)
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.1.20:2379"		# client URL this member advertises
ETCD_INITIAL_CLUSTER="ct=http://172.16.1.20:2380"			# initial cluster membership
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"				# unique token identifying this cluster
ETCD_INITIAL_CLUSTER_STATE="new"							# "new" bootstraps a fresh cluster; "existing" makes this member try to join an already-running cluster

# Enable at boot, start the service, and check the ports
[root@ct ~]# systemctl enable etcd.service
[root@ct ~]# systemctl start etcd.service
[root@ct ~]# netstat -nautp | grep etcd
