OpenStack Queens版搭建
一、基础环境配置
1.节点网络规划
节点 | ip |
---|---|
Controller | 192.168.10.10 |
Computer | 192.168.10.20 |
Cinder | 192.168.10.30 |
2.关闭防火墙
sed -i 's/enforcing/disabled/g' /etc/selinux/config #sed修改selinux状态为disabled
setenforce 0 #关闭selinux
systemctl disable firewalld && systemctl stop firewalld #永久关闭防火墙
3. Set hostnames and configure name resolution
hostnamectl set-hostname <node-hostname>

Hostname |
---|
controller |
compute |

vim /etc/hosts  # required on every node
--------------
192.168.10.10 controller
192.168.10.20 compute
4. Configure the yum repositories
vim /etc/yum.repos.d/openstack.repo  # create the repo file; the mirrors below are Huawei Cloud
[base]
name=base
baseurl=https://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=https://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=https://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0
[queens]
name=queens
baseurl=https://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-queens/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=https://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
Clean the yum cache, rebuild the repo list, and install the OpenStack client
yum clean all
yum repolist
yum install python-openstackclient -y
5. Install and configure time synchronization
Install on both nodes
yum install chrony -y
vim /etc/chrony.conf
--------------------
:set nu  # show line numbers in vim
Controller node:
# Uncomment lines 26 and 29 to enable these parameters
allow 192.168.10.0/24  # the subnet allowed to sync time from this node; adjust to your environment
local stratum 10  # serve time from the local clock
Compute node:
# Comment out the default upstream servers on lines 3-6 with "#" and add:
server controller iburst
Save and exit, then restart the service and enable it at boot
systemctl enable chronyd
systemctl restart chronyd
Check the synchronization status
chronyc sources -v
A line starting with ^* at the bottom indicates successful synchronization with controller
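The `^*` check can be scripted. A minimal sketch, run here against a captured sample of `chronyc sources` output (the values below are illustrative, not from a real node); on a live node you would pipe `chronyc sources` straight into the grep:

```shell
# Sample `chronyc sources` output (made-up values)
cat > /tmp/chrony.sample <<'EOF'
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                   10   6   377    12   +15us[ +20us] +/-  1ms
EOF

# A line starting with "^*" marks the currently selected, synchronized source
if grep -q '^\^\*' /tmp/chrony.sample; then
  echo "synchronized"
else
  echo "not synchronized"
fi
```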
II. Controller Node
1. Install the database (MariaDB)
yum install mariadb mariadb-server python2-PyMySQL -y
Add an OpenStack-specific database configuration file
vim /etc/my.cnf.d/openstack.cnf  # add the following
-------------------------------
[mysqld]
bind-address = 192.168.10.10  # this node's IP
default-storage-engine = innodb  # default storage engine
innodb_file_per_table = on
max_connections = 4096  # maximum number of connections
collation-server = utf8_general_ci  # collation
character-set-server = utf8  # character set
Save and exit, then enable the database at boot and start it
systemctl enable mariadb
systemctl start mariadb
Initialize the database
mysql_secure_installation
# The current root password is empty; set the database root password, remove anonymous users, allow remote root login, remove the test database, and reload the privilege tables
2. Install the message queue service (RabbitMQ)
yum install -y rabbitmq-server
Start the service and enable it at boot
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Create a RabbitMQ user named openstack with password 123456
rabbitmqctl add_user openstack 123456
If the command fails with an error saying the RabbitMQ service cannot connect to the node on the local host, the hostname change was probably not picked up when the node started, so it came up as rabbit@localhost instead of rabbit@controller. In that case run:
echo 'NODENAME=rabbit@controller' | sudo tee -a /etc/rabbitmq/rabbitmq-env.conf
This sets the RabbitMQ node name to rabbit@controller (restart rabbitmq-server afterwards)
Grant the openstack user full permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
This grants the openstack user full configure, write, and read access on the default virtual host: it can create, delete, and bind queues and exchanges, and publish and consume messages.
The command format is
rabbitmqctl set_permissions [-p <vhost>] <user> <conf> <write> <read>
Parameters:
<vhost>: the virtual host, / by default
<user>: the RabbitMQ user to set permissions for
<conf>: regular expression matching the resources the user may configure
<write>: regular expression matching the resources the user may write to
<read>: regular expression matching the resources the user may read from
Verify that RabbitMQ is running and port 5672 is listening
netstat -lantu | grep 5672
-------------------------
# If netstat is not found, install:
yum install -y net-tools
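If installing net-tools is not an option, bash's built-in `/dev/tcp` redirection can stand in for netstat. A sketch with a hypothetical helper function (`check_port` is our own name, not a system command):

```shell
# Hypothetical helper: exits 0 if a TCP port accepts connections
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Usage on the controller (commented out here, as it needs a running broker):
# check_port controller 5672 && echo "rabbitmq reachable"
```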
3. Install the caching service (memcached)
yum install memcached python-memcached -y
Configure memcached
vim /etc/sysconfig/memcached
---------------------------
Append "controller" (the controller node's hostname) to the OPTIONS value, e.g. OPTIONS="-l 127.0.0.1,::1,controller"
Save and exit, then start the service and enable it at boot
systemctl enable memcached
systemctl start memcached
4. Install etcd
yum install etcd -y
Configuration:
vim /etc/etcd/etcd.conf
Set the ETCD_NAME parameter to the hostname, controller
Start the service and enable it at boot
systemctl enable etcd
systemctl start etcd
5. Keystone installation
5.1 Create the Keystone database
mysql -uroot -p123456  # log in to the database
-------------------------------------
CREATE DATABASE keystone;
Grant the keystone database user local and remote access, with password keystone_db
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone_db';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_db';
FLUSH PRIVILEGES;  # reload privileges
5.2 Install the Keystone packages
yum install openstack-keystone httpd mod_wsgi -y
Back up the original configuration file and strip the commented lines
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
# back up the original configuration file
cat /etc/keystone/keystone.conf.bak | grep -v ^# | uniq > /etc/keystone/keystone.conf
# regenerate the file without the "#" comment lines
------------------------
sed -i.bak '/^#/d' /etc/keystone/keystone.conf && sed -i '/^$/d;s/$/\n/' /etc/keystone/keystone.conf
# alternatively, this single command does the same
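The effect of the stripping pipeline can be verified on a throwaway file before touching the real configuration (the sample content below is made up):

```shell
# Made-up sample config with comment lines and duplicate blank lines
cat > /tmp/sample.conf <<'EOF'
# a leading comment
[database]
#connection = <None>
connection = sqlite:///keystone.db


[token]
EOF

# Same filter as above: drop "#" comment lines, collapse adjacent duplicate lines
grep -v '^#' /tmp/sample.conf | uniq > /tmp/sample.stripped
cat /tmp/sample.stripped
```

Note that `uniq` only collapses adjacent duplicates (here, the run of blank lines), which is exactly what is wanted after removing comments.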
Edit the Keystone configuration file
vim /etc/keystone/keystone.conf
-------------------------------
# Add the following under the [database] and [token] sections
[database]
connection = mysql+pymysql://keystone:keystone_db@controller/keystone  # use the password created in the database step
[token]
provider = fernet
Parameters:
[database]: database settings
"mysql+pymysql": database connection driver
"keystone:keystone_db": database user name:password
"@controller/keystone": the keystone database on host controller
[token]: token provider settings
Save and exit
5.3 Populate the Keystone database
su keystone -s /bin/sh -c "keystone-manage db_sync"
# No output means the sync succeeded; the new tables can be seen in the keystone database
5.4 Initialize the Fernet key repositories
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5.5 Bootstrap the identity service and configure its authentication information
(the admin password for the future OpenStack login page is set here)
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
# `\` continues the command on the next line
Parameters:
--bootstrap-password: the keystone admin password
--bootstrap-admin-url: admin authentication URL
--bootstrap-internal-url: internal authentication URL
--bootstrap-public-url: public authentication URL
--bootstrap-region-id: region name
6. Configure the Apache service
vim /etc/httpd/conf/httpd.conf
------------------------------
:96  # jump to line 96 and set ServerName to this host's name by adding 'ServerName controller'
ServerName controller
Link wsgi-keystone.conf into the Apache configuration directory
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Restart httpd and enable it at boot
systemctl enable httpd.service
systemctl start httpd.service
7. Verifying the Keystone deployment
7.1 Create the environment script
vim /root/admin-openrc
----------------------
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Parameters:
export OS_USERNAME=admin: the keystone admin account
export OS_PASSWORD=123456: the admin password set during bootstrap
export OS_PROJECT_NAME=admin: the OpenStack project to use
export OS_USER_DOMAIN_NAME=Default: the domain the user belongs to
export OS_PROJECT_DOMAIN_NAME=Default: the domain the project belongs to
export OS_AUTH_URL=http://controller:5000/v3: the authentication endpoint
export OS_IDENTITY_API_VERSION=3: the identity API version
Run the script
. /root/admin-openrc
Check the current environment
env | grep OS
7.2 Verify
openstack token issue
7.3 Create domains, projects, users, and roles
(Optional) With authentication working, create a domain named "example" with description "Test Example"
openstack domain create --description "Test Example" example
In the default domain, create a project named "service" with description "Service Project"
openstack project create --domain default --description "Service Project" service
List all projects in the current environment
openstack project list
In the default domain, create a project named "demo" with description "Demo Project"
openstack project create --domain default --description "Demo Project" demo
Besides the administrator, we usually also need some unprivileged projects and users
In the default domain, create a user named "zhangsan", setting the password interactively
openstack user create --domain default --password-prompt zhangsan
-----------------------------------------------------------------
User Password:
Repeat User Password:
Create an ordinary-user role named "user"
openstack role create user
List the existing roles
openstack role list
Assign the "user" role to user "zhangsan" in the "demo" project
openstack role add --project demo --user zhangsan user
7.4 Verify logging in
unset OS_AUTH_URL OS_PASSWORD  # clear the auth URL and password from the environment
Log in as admin
openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
--------------------
Password: 123456  # enter the password
Log in as zhangsan
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username zhangsan token issue
---------------
Password: 123456
8. Glance installation
8.1 Create the Glance database
mysql -uroot -p123456  # log in to the database
-------------------------------------
CREATE DATABASE glance;
Grant the glance database user local and remote access, with password "glance_db"
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance_db';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_db';
FLUSH PRIVILEGES;
8.2 Create the Glance service information
Source the environment script first
. /root/admin-openrc
Create the glance user in the default domain (password: glance123)
openstack user create --domain default --password glance123 glance
Grant the glance user the admin role in the service project
openstack role add --project service --user glance admin
# No output means success
Create a service entity of type image, described as "OpenStack Image"
openstack service create --name glance --description "OpenStack Image" image
Create the three endpoint APIs for the image service in the RegionOne region
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
public, internal, and admin correspond to public, internal, and management access respectively
To delete an endpoint, first look up its ID
openstack endpoint list
openstack endpoint delete [endpoint-id]
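When several endpoints have to go, filtering the list by service type saves copying IDs by hand. A sketch against a made-up two-row sample of `openstack endpoint list` table output (the IDs below are invented); in a real deployment, pipe the live command output into the same awk filter and feed the IDs to `openstack endpoint delete`:

```shell
# Two made-up rows in `openstack endpoint list` table format
cat > /tmp/endpoints.sample <<'EOF'
| 1b2c3d4e | RegionOne | glance | image | True | public | http://controller:9292 |
| 5f6a7b8c | RegionOne | glance | image | True | admin  | http://controller:9292 |
EOF

# Print the ID column of every endpoint whose service type is "image"
awk -F'|' '$5 ~ /image/ { gsub(/ /, "", $2); print $2 }' /tmp/endpoints.sample
```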
8.3 Install and configure Glance
yum install openstack-glance -y
8.3.1 Configure the glance-api file
Back up the original configuration file and strip the commented lines
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
cat /etc/glance/glance-api.conf.bak | grep -v ^# | uniq > /etc/glance/glance-api.conf
-----------------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/glance/glance-api.conf && sed -i '/^$/d;s/$/\n/' /etc/glance/glance-api.conf
Edit the configuration file
vim /etc/glance/glance-api.conf
-------------------------------
# Add the following under each section
[database]  # database settings
connection = mysql+pymysql://glance:glance_db@controller/glance
[keystone_authtoken]  # keystone authentication settings
auth_uri = http://controller:5000  # authentication URI
auth_url = http://controller:5000  # authentication URL
memcached_servers = controller:11211  # memcached endpoint
auth_type = password  # authentication type
project_domain_name = Default  # project domain
user_domain_name = Default  # user domain
project_name = service  # project
username = glance  # service user name
password = glance123  # service user password
[paste_deploy]  # deployment flavor
flavor = keystone
[glance_store]  # glance storage settings
stores = file,http  # storage backends
default_store = file  # default storage backend
filesystem_store_datadir = /var/lib/glance/images/  # default storage path
8.3.2 Configure the glance-registry file
Back up the original configuration file and strip the commented lines
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
cat /etc/glance/glance-registry.conf.bak | grep -v ^# | uniq > /etc/glance/glance-registry.conf
-----------------------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/glance/glance-registry.conf && sed -i '/^$/d;s/$/\n/' /etc/glance/glance-registry.conf
Edit the configuration file
vim /etc/glance/glance-registry.conf
------------------------------------
[database]
connection = mysql+pymysql://glance:glance_db@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123
[paste_deploy]
flavor = keystone
Populate the Glance database
su glance -s /bin/sh -c "glance-manage db_sync"
Start the services and enable them at boot
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
8.4 Verifying the Glance deployment
Source the admin environment script and download a test image
. /root/admin-openrc
wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
# downloads into the current directory, /root here
# if wget is not found, install it:
yum install -y wget
8.4.1 Create an OpenStack image
openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
# --file: path to the source image (relative to /root here; an absolute path also works)
# --disk-format / --container-format: disk format and container format
# --public: make the image visible to all projects
8.4.2 List images
openstack image list
9. Nova installation
9.1 Install the nova packages on the controller node
9.1.1 Create the nova databases
mysql -uroot -p123456
---------------------
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
---------------------------
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_db';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova_db';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_db';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova_db';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova_db';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova_db';
FLUSH PRIVILEGES;
9.1.2 Create the nova service information
1) On the controller node, create the nova user
. /root/admin-openrc
--------------------
openstack user create --domain default --password nova123 nova
2) Grant the nova user the admin role in the service project
openstack role add --project service --user nova admin
# No output means success
3) Create a service entity named compute, described as "OpenStack Compute"
openstack service create --name nova --description "OpenStack Compute" compute
4) Create the three endpoint APIs for the compute service in the RegionOne region
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
5) Create the placement user in the default domain
openstack user create --domain default --password placement123 placement
6) Grant the placement user the admin role in the service project
openstack role add --project service --user placement admin
# No output means success
7) Create a service entity named placement, described as "Placement API"
openstack service create --name placement --description "Placement API" placement
8) Create the three endpoint APIs for the placement service in the RegionOne region
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
9.1.3 Install and configure the nova packages
yum install openstack-nova-api \
openstack-nova-conductor \
openstack-nova-console \
openstack-nova-novncproxy \
openstack-nova-scheduler \
openstack-nova-placement-api -y
1) Configure nova.conf
Back up the original configuration file and strip the commented lines
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
cat /etc/nova/nova.conf.bak | grep -v ^# | uniq > /etc/nova/nova.conf
---------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/nova/nova.conf && sed -i '/^$/d;s/$/\n/' /etc/nova/nova.conf
vim /etc/nova/nova.conf
-----------------------
Add the following under each section:
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.10.10  # this node's management interface address
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver  # requires the system firewalld service to be disabled
[api_database]  # database settings
connection=mysql+pymysql://nova:nova_db@controller/nova_api
[database]  # database settings
connection = mysql+pymysql://nova:nova_db@controller/nova
[api]  # API authentication settings
auth_strategy = keystone
[keystone_authtoken]  # keystone authentication settings
auth_url = http://controller:5000/v3  # authentication URL
memcached_servers = controller:11211  # memcached endpoint
auth_type = password  # authentication type
project_domain_name = default  # project domain
user_domain_name = default  # user domain
project_name = service  # project
username = nova  # service user name
password = nova123  # service user password
[vnc]  # VNC settings
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]  # glance settings
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]  # placement settings
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123
2) Due to a packaging bug, the Placement API configuration also needs the following block, added around line 13
vim /etc/httpd/conf.d/00-nova-placement-api.conf
------------------------------------------------
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
Restart the httpd service
systemctl restart httpd
3) Populate the nova-api database
su nova -s /bin/sh -c "nova-manage api_db sync"
# any deprecation messages in the output can be ignored
4) Register the cell0 database (empty for now)
su nova -s /bin/sh -c "nova-manage cell_v2 map_cell0"
# any deprecation messages in the output can be ignored
5) Create the cell1 cell
su nova -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose"
# a UUID printed on success is normal
6) Sync the nova database
su nova -s /bin/sh -c "nova-manage db sync"
7) Verify that cell0 and cell1 are registered correctly
nova-manage cell_v2 list_cells
9.2 Start the services and enable them at boot
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
10. Install the neutron packages on the controller node
10.1 Create the neutron database
mysql -u root -p123456
----------------------
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron_db';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron_db';
FLUSH PRIVILEGES;
10.2 Create the neutron service information
. /root/admin-openrc
10.2.1 Create the neutron user in the default domain
openstack user create --domain default --password neutron123 neutron
10.2.2 Grant the neutron user the admin role in the service project
openstack role add --project service --user neutron admin
10.2.3 Create a service entity named network, described as "OpenStack Networking"
openstack service create --name neutron --description "OpenStack Networking" network
10.2.4 Create the three endpoint APIs for the network service in the RegionOne region
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
11. Configure networking
11.1 Install the packages
yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables -y
11.2 Configure the neutron files
11.2.1 Configure the neutron main configuration file
Back up the original configuration file and strip the commented lines
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cat /etc/neutron/neutron.conf.bak | grep -v ^# | uniq > /etc/neutron/neutron.conf
---------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/neutron.conf && sed -i '/^$/d;s/$/\n/' /etc/neutron/neutron.conf
Edit the configuration file
vim /etc/neutron/neutron.conf
-----------------------------
Add the following under each section:
[DEFAULT]
core_plugin = ml2  # enable the Modular Layer 2 plugin
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]  # database settings
connection = mysql+pymysql://neutron:neutron_db@controller/neutron
[keystone_authtoken]  # keystone authentication settings
auth_uri = http://controller:5000  # authentication URI
auth_url = http://controller:5000  # authentication URL
memcached_servers = controller:11211  # memcached endpoint
auth_type = password  # authentication type
project_domain_name = default  # project domain
user_domain_name = default  # user domain
project_name = service  # project
username = neutron  # service user name
password = neutron123  # service user password
[nova]  # nova-related settings
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
11.2.2 Configure the ML2 plugin
Back up the original configuration file and strip the commented lines
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
cat /etc/neutron/plugins/ml2/ml2_conf.ini.bak | grep -v ^# | uniq > /etc/neutron/plugins/ml2/ml2_conf.ini
---------------------------------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/plugins/ml2/ml2_conf.ini && sed -i '/^$/d;s/$/\n/' /etc/neutron/plugins/ml2/ml2_conf.ini
Edit the configuration file
vim /etc/neutron/plugins/ml2/ml2_conf.ini
-----------------------------------------
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
11.2.3 Configure the Linux bridge agent
Back up the original configuration file and strip the commented lines
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
cd /etc/neutron/plugins/ml2
cat linuxbridge_agent.ini.bak | grep -v ^# | uniq > linuxbridge_agent.ini
-------------------------------------------------------------------------
cd /etc/neutron/plugins/ml2
sed -i.bak '/^#/d' linuxbridge_agent.ini && sed -i '/^$/d;s/$/\n/' linuxbridge_agent.ini
Edit the configuration file
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33
# "ens33" here is the controller node's external/management NIC name
[vxlan]
enable_vxlan = true
local_ip = 172.16.10.10  # the controller node's tunnel interface IP address
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: bridges operate at the data link layer. When bridge-nf is not enabled in iptables, traffic is forwarded directly by the bridge, so FORWARD rules have no effect.
CentOS does not enable the bridge-nf transparent bridging hooks by default. To enable them:
vim /usr/lib/sysctl.d/00-system.conf  # change the 0 values to "1"
or edit /etc/sysctl.conf
vim /etc/sysctl.conf  # add:
----------------------------------
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
Load the br_netfilter bridge module
modprobe br_netfilter
Apply the settings: /sbin/sysctl -p
Load the module automatically at boot
① Create a new rc.sysinit file under /etc/ with the following content
vim /etc/rc.sysinit
-------------------
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
② Create br_netfilter.modules under /etc/sysconfig/modules/
vim /etc/sysconfig/modules/br_netfilter.modules
-----------------------------------------------
modprobe br_netfilter
Make it executable
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
Check the module after a reboot
lsmod | grep br_netfilter
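The rc.sysinit loop above can be dry-run against a scratch directory to confirm it executes every `*.modules` file (the paths below are throwaway test paths, not the real /etc/sysconfig/modules):

```shell
# Scratch directory standing in for /etc/sysconfig/modules
mkdir -p /tmp/modules.d
cat > /tmp/modules.d/demo.modules <<'EOF'
#!/bin/bash
touch /tmp/modules.d/loaded.flag
EOF
chmod 755 /tmp/modules.d/demo.modules

# The same loop as rc.sysinit, pointed at the scratch directory
for file in /tmp/modules.d/*.modules ; do
  [ -x "$file" ] && "$file"
done

ls /tmp/modules.d/loaded.flag
```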
11.2.4 Configure the Layer-3 (L3) agent, which provides routing and NAT for self-service virtual networks
Back up the original configuration file and strip the commented lines
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
cat /etc/neutron/l3_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/l3_agent.ini
---------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/l3_agent.ini && sed -i '/^$/d;s/$/\n/' /etc/neutron/l3_agent.ini
Edit the configuration file
vim /etc/neutron/l3_agent.ini
-----------------------------
[DEFAULT]
interface_driver = linuxbridge
11.2.5 Configure the DHCP agent
Back up the original configuration file and strip the commented lines
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
cat /etc/neutron/dhcp_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/dhcp_agent.ini
-------------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/dhcp_agent.ini && sed -i '/^$/d;s/$/\n/' /etc/neutron/dhcp_agent.ini
Edit the configuration file
vim /etc/neutron/dhcp_agent.ini
-------------------------------
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
11.2.6 Configure the metadata agent
Back up the original configuration file and strip the commented lines
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
cat /etc/neutron/metadata_agent.ini.bak | grep -v ^# | uniq > /etc/neutron/metadata_agent.ini
---------------------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/metadata_agent.ini && sed -i '/^$/d;s/$/\n/' /etc/neutron/metadata_agent.ini
Edit the configuration file
vim /etc/neutron/metadata_agent.ini
-----------------------------------
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = meta123
11.3 Configure the [neutron] section of the controller's nova configuration file
vim /etc/nova/nova.conf
-----------------------
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
service_metadata_proxy = true
metadata_proxy_shared_secret = meta123
11.4 On the controller node, create the ml2 symlink and sync the neutron database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
----------------------------------------------------------------------
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
11.5 On the controller node, restart/start the related services and enable the neutron services at boot
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-l3-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service
systemctl start neutron-linuxbridge-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
systemctl start neutron-l3-agent.service
III. Compute Node
1. Nova-compute installation
1.1 Install the nova-compute packages on the compute node
1.1.1 Configure the compute node's name resolution and yum repositories
Make sure name resolution matches the controller node
Make sure the compute node's clock was successfully synchronized during the basic environment setup
1.2 Install nova-compute and edit its configuration
yum install openstack-nova-compute -y
1.2.1 Edit the nova configuration file
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
cat /etc/nova/nova.conf.bak | grep -v ^# | uniq > /etc/nova/nova.conf
---------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/nova/nova.conf && sed -i '/^$/d;s/$/\n/' /etc/nova/nova.conf
vim /etc/nova/nova.conf
-----------------------
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.10.20  # the compute node's management interface IP
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123
1.3 Check whether the compute node supports hardware virtualization acceleration (a value of 1 or more means it does)
egrep -c '(vmx|svm)' /proc/cpuinfo
Because this lab uses a virtual machine as the compute node, the command may return 0. In that case, add the following to the [libvirt] section of /etc/nova/nova.conf:
vi /etc/nova/nova.conf
----------------------
[libvirt]
virt_type = qemu
(Under VMware Workstation, you also need to enable Intel VT-x for the VM if you want hardware acceleration)
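What `egrep -c` actually counts is matching lines, one per CPU thread. A made-up two-thread /proc/cpuinfo fragment (the flag values are illustrative) shows the behavior:

```shell
# Made-up fragment with two "flags" lines (two CPU threads), both exposing vmx
cat > /tmp/cpuinfo.sample <<'EOF'
flags : fpu vme de pse tsc msr vmx
flags : fpu vme de pse tsc msr vmx
EOF

# -c counts matching lines, so a 2-thread VT-x host reports 2 here
egrep -c '(vmx|svm)' /tmp/cpuinfo.sample
```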
2. Install the neutron packages
2.1 Install the packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
2.2 Configure the neutron main configuration file
Back up the original configuration file and strip the commented lines
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cat /etc/neutron/neutron.conf.bak | grep -v ^# | uniq > /etc/neutron/neutron.conf
---------------------------------------------------------------------------------
sed -i.bak '/^#/d' /etc/neutron/neutron.conf && sed -i '/^$/d;s/$/\n/' /etc/neutron/neutron.conf
Edit the configuration file
vim /etc/neutron/neutron.conf
-----------------------------
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
2.3 Configure the Linux bridge agent
Back up the original configuration file and strip the commented lines
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
cd /etc/neutron/plugins/ml2
cat linuxbridge_agent.ini.bak | grep -v ^# | uniq > linuxbridge_agent.ini
-------------------------------------------------------------------------
cd /etc/neutron/plugins/ml2
sed -i.bak '/^#/d' linuxbridge_agent.ini && sed -i '/^$/d;s/$/\n/' linuxbridge_agent.ini
Edit the configuration file
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
--------------------------------------------------
[vxlan]
enable_vxlan = true
local_ip = 172.16.10.20  # the compute node's tunnel interface IP address
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
2.4 Enable the bridge-nf transparent bridging hooks
vim /usr/lib/sysctl.d/00-system.conf  # change the 0 values to 1
or
vim /etc/sysctl.conf
--------------------  add:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
Load the bridge module
modprobe br_netfilter
Load the module automatically at boot
① Create a new rc.sysinit file under /etc/ with the following content
vim /etc/rc.sysinit
-------------------
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
② Create br_netfilter.modules under /etc/sysconfig/modules/
vim /etc/sysconfig/modules/br_netfilter.modules
-----------------------------------------------
modprobe br_netfilter
③ Make it executable
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
Check the module after a reboot
lsmod | grep br_netfilter
2.5 Edit the compute node's nova configuration file and add the [neutron] section
vim /etc/nova/nova.conf
-----------------------
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
2.6 Restart/start the related services and enable them at boot
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
3. Verifying the deployment
Run on the controller node
. /root/admin-openrc
openstack network agent list
# For a provider (layer-2) setup this lists the metadata, DHCP, and linuxbridge agents; with self-service (layer-3) networking the L3 agent appears as well
IV. Horizon Installation
1. Install the horizon package on the controller node
yum install openstack-dashboard
2. Edit the configuration file
Back up the original configuration file
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
Edit the configuration file
vim /etc/openstack-dashboard/local_settings
Line 38: set ALLOWED_HOSTS to ['*'] to allow access from any host
Lines 188-190: set the host to "controller", use v3 of the identity API, and set the default role to "user"
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Line 164: session settings; add
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
Lines 167-168: add
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
Line 75: enable multi-domain support (uncomment and set the value to True)
Line 64: configure the API versions (uncomment)
Line 97: set the default domain to "Default" (uncomment)
Line 464: set the time zone to "Asia/Shanghai"
Add WSGIApplicationGroup %{GLOBAL} to the following file:
vim /etc/httpd/conf.d/openstack-dashboard.conf
----------------------------------------------
WSGIApplicationGroup %{GLOBAL}
3. Restart the services
systemctl restart httpd.service
systemctl restart memcached.service
4. Log in to Horizon
http://192.168.10.10/dashboard  # 192.168.10.10 is the controller's management IP address
Domain: default
User: admin
Password: 123456
V. Cinder Block Storage Installation
1. Preparation
Prepare a new virtual machine and attach three extra virtual disks
Set the hostname and name resolution, configure the yum repositories, disable the firewall and SELinux, and make sure the clock is synchronized with the controller node
hostnamectl set-hostname cinder
cat >>/etc/hosts<<EOF
192.168.10.10 controller
192.168.10.20 compute
192.168.10.30 cinder
EOF
Configure the yum repositories
cat >/etc/yum.repos.d/openstack.repo<<EOF
[base]
name=base
baseurl=https://repo.huaweicloud.com/centos/7/os/x86_64/
enabled=1
gpgcheck=0
[extras]
name=extras
baseurl=https://repo.huaweicloud.com/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
[updates]
name=updates
baseurl=https://repo.huaweicloud.com/centos/7/updates/x86_64/
enabled=1
gpgcheck=0
[queens]
name=queens
baseurl=https://repo.huaweicloud.com/centos/7/cloud/x86_64/openstack-queens/
enabled=1
gpgcheck=0
[virt]
name=virt
baseurl=https://repo.huaweicloud.com/centos/7/virt/x86_64/kvm-common/
enabled=1
gpgcheck=0
EOF
yum clean all
yum repolist
Disable the firewall and SELinux
sed -i 's/enforcing/disabled/g' /etc/selinux/config
setenforce 0
systemctl disable firewalld && systemctl stop firewalld
Time synchronization
yum install -y chrony
sed -i -e "/^server/s/^/#/" -e "7i\server controller iburst" /etc/chrony.conf
systemctl enable chronyd && systemctl restart chronyd
chronyc sources -v  # check synchronization status
2. Install the LVM packages
yum install lvm2 device-mapper-persistent-data
Start the service and enable it at boot
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
3. Configure the LVM volumes
Use lsblk to check the three newly attached disks
3.1 Create the LVM physical volumes
pvcreate /dev/sd[b-d]
3.2 Create the cinder-volumes volume group
vgcreate cinder-volumes /dev/sd[b-d]  # remember the volume group name cinder-volumes; it is needed later
3.3 Set the LVM filter
By default, the LVM scanning tool scans the /dev directory for block devices that contain volumes. If projects use LVM on their own volumes, the scanning tool will detect those volumes and try to cache them, which can cause a variety of problems for both the host operating system and project volumes.
We therefore have to control which devices the LVM tools include or exclude when scanning.
Add a filter parameter in the devices section of lvm.conf:
vim /etc/lvm/lvm.conf
------------------------
devices {
...
filter = [ "a/sdb/", "a/sdc/", "a/sdd/", "r/.*/" ]
...
}
This can also be done with a single command:
sed -i.bak '/^devices {/a\    filter = [ "a/sdb/", "a/sdc/", "a/sdd/", "r/.*/" ]' /etc/lvm/lvm.conf
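The sed one-liner can be checked against a minimal lvm.conf fragment before touching the real file (the fragment below is a made-up sample):

```shell
# Minimal made-up lvm.conf fragment
cat > /tmp/lvm.conf.sample <<'EOF'
devices {
    dir = "/dev"
}
EOF

# Append the filter line right after the "devices {" opener; -i.bak keeps a backup
sed -i.bak '/^devices {/a\    filter = [ "a/sdb/", "a/sdc/", "a/sdd/", "r/.*/" ]' /tmp/lvm.conf.sample
grep 'filter' /tmp/lvm.conf.sample
```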
4. Prepare the cinder machine and install the cinder packages (on the cinder node)
yum install -y centos-release-openstack-queens
yum install -y openstack-cinder targetcli python-keystone
4.1 Configure cinder
Back up the original configuration file and strip the commented lines
sed -i.bak -e "/^#/d" -e "/^$/d;s/$/\n/" /etc/cinder/cinder.conf
Edit the configuration file
vim /etc/cinder/cinder.conf
---------------------------
Add the following under each section:
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.10.30  # the cinder node's management network IP address
enabled_backends = lvm  # "lvm" is the backend name; it can be anything
glance_api_servers = http://controller:9292
[database]  # database settings
connection = mysql+pymysql://cinder:cinder_123456@controller/cinder
[keystone_authtoken]  # keystone authentication settings
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder_123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[lvm]  # create this [lvm] section if it does not exist
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes  # replace with the volume group name you created earlier
iscsi_protocol = iscsi
iscsi_helper = lioadm
4.2 Start the services and enable them at boot
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
5. Configure cinder on the controller node
5.1 Create the cinder database
mysql -u root -p123456
-----------------------
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder_123456';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_123456';
FLUSH PRIVILEGES;
5.2 Create the cinder service information
Source the OpenStack admin environment script
. /root/admin-openrc
Create a cinder user with password cinder_123456
openstack user create --domain default --password cinder_123456 cinder
Grant the admin role to the cinder user
openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities (note: the block storage service requires two service entities)
openstack service create --name cinderv2 --description "OpenStack Block Storage v2" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage v3" volumev3
Create the block storage API endpoints (note that the volumev3 endpoints use the /v3 path)
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
5.3 Install and configure the cinder packages (on the controller)
yum install openstack-cinder
Back up the original configuration file and strip the commented lines
sed -i.bak -e "/^#/d" -e "/^$/d;s/$/\n/" /etc/cinder/cinder.conf
Edit the configuration file
vim /etc/cinder/cinder.conf
-----------------------------
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.10.10  # the controller node's management interface IP address
[database]
connection = mysql+pymysql://cinder:cinder_123456@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder_123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
5.4 Sync the block storage database
su -s /bin/sh -c "cinder-manage db sync" cinder
5.5 Configure the compute service to use block storage
vim /etc/nova/nova.conf
----------------------------
[cinder]  # add the following parameter here
os_region_name = RegionOne
5.6 Start/restart the following services and enable the cinder services at boot
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service
systemctl enable openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service
systemctl start openstack-cinder-scheduler.service
6. Verify cinder, on the controller node
. /root/admin-openrc
openstack volume service list