Building a Private Cloud with OpenStack (Queens) on Virtual Machines

The OpenStack project mainly provides compute, storage, image, and networking services, all of which rely on the Keystone identity service. Each project can be deployed separately, a single project can be deployed across multiple physical machines, and every service exposes an application programming interface (API), which makes it easy for third parties to integrate with it and consume resources.

Environment Preparation

Hardware requirements for installing OpenStack

  • CPU: a 64-bit x86 processor supporting the Intel 64 or AMD 64 extensions, with Intel VT or AMD-V hardware virtualization enabled
  • Memory: >= 2 GB
  • Disk space: >= 50 GB

Virtual machine allocation

Hostname     Operating system      IP address      Role
controller   CentOS-7.4-x86_64     172.16.10.33    Controller node
compute      CentOS-7.4-x86_64     172.16.10.35    Compute node
cinder       CentOS-7.4-x86_64     172.16.10.36    Block storage node

Disable the firewall and SELinux on the virtual machines

systemctl disable firewalld.service
systemctl stop firewalld.service
vim /etc/sysconfig/selinux
SELINUX=disabled           // change enforcing to disabled to turn SELinux off permanently
setenforce 0
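To confirm the changes took effect (an optional check, not part of the original steps), the following can be run on each node:
systemctl is-enabled firewalld     // should report disabled
getenforce                         // reports Permissive now, Disabled after a reboot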

Building OpenStack

Environment preparation

Unless otherwise noted, the following steps are performed identically on all three hosts.

Configure name resolution

Set the hostname on every host
hostnamectl set-hostname <hostname>      // set the corresponding hostname on each of the three VMs, then reboot the server
Edit the hosts file on every host
vim /etc/hosts
172.16.10.33 controller
172.16.10.35 compute
172.16.10.36 cinder             // the hosts file content is identical on all three servers
Test connectivity between nodes
ping -c 4 openstack.org     // check that the official site is reachable
ping -c 4 compute            // test connectivity between the nodes

Configure the Aliyun yum repository

Back up the default yum repository
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Download the latest repository file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Install and configure the NTP service

Install and configure chrony on the controller node
yum install chrony -y

vim /etc/chrony.conf
server  controller  iburst  // all other nodes sync time from the controller node
allow 172.16.10.0/24      // allow time synchronization from this subnet
systemctl enable chronyd
systemctl restart chronyd
Install and configure chrony on the compute node
yum install chrony -y

vim /etc/chrony.conf
server  controller  iburst
systemctl enable chronyd
systemctl restart chronyd
Install and configure chrony on the cinder node
yum install chrony -y

vim /etc/chrony.conf
server  controller  iburst
systemctl enable chronyd
systemctl restart chronyd
Verify the time synchronization service
chronyc sources

Enable the OpenStack repository

yum install centos-release-openstack-queens -y
yum upgrade -y                    // upgrade packages on the hosts
yum install python-openstackclient -y  // install the OpenStack client
yum install openstack-selinux -y  // install openstack-selinux to automatically manage OpenStack security policies

Deploy the MySQL database (controller)

Install packages
yum install mariadb mariadb-server python2-PyMySQL -y
Edit the configuration file
vim /etc/my.cnf.d/mariadb-server.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid
bind-address = 172.16.10.33   // set to the controller node IP so that other nodes can access the database over the management network
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service and enable it at boot
systemctl enable mariadb.service
systemctl start mariadb.service
Harden the database
mysql_secure_installation
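As an optional sanity check (not part of the original guide), verify that MariaDB is listening on the controller's management address:
netstat -ntlp | grep 3306                          // mysqld should be bound to 172.16.10.33
mysql -u root -p -e "SHOW VARIABLES LIKE 'bind_address';"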


Install and configure the messaging server (RabbitMQ)

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node.

Install RabbitMQ on the controller node
yum install rabbitmq-server -y
Start the service and enable it at boot
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service 
netstat -ntap | grep 5672


Add the openstack user

If creating the user fails, check whether the hostname was changed, or whether the host was not rebooted after the hostname change in an earlier step; rebooting resolves the error.

rabbitmqctl add_user openstack 123456         // create the user openstack with password 123456
rabbitmqctl set_permissions openstack ".*" ".*" ".*"   // grant the new user full permissions
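Optionally, confirm that the user and its permissions were created (not part of the original steps):
rabbitmqctl list_users                      // the openstack user should be listed
rabbitmqctl list_permissions                // openstack should have ".*" ".*" ".*" on the / vhost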

Deploy the memcached service (controller)

Install packages
yum install memcached python-memcached -y
Edit the configuration file
vim /etc/sysconfig/memcached

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 172.16.10.33,::1"
Start the service and enable it at boot
systemctl enable memcached.service
systemctl start memcached.service
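Optionally, confirm that memcached is listening on the controller's management IP and on ::1:
netstat -ntlp | grep 11211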

Deploy the etcd service (controller)

etcd is a distributed, consistent key-value store for shared configuration and service discovery. It is secure (automatic TLS with optional client certificate authentication), fast (benchmarked at 10,000 writes per second), and reliable (it uses the Raft algorithm for correct distribution).

Install packages
yum install etcd -y
Edit the configuration file, setting ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to the controller's management IP as shown:
vim /etc/etcd/etcd.conf

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.10.33:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.10.33:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.10.33:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.10.33:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.10.33:2380"   
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service and enable it at boot
systemctl enable etcd.service
systemctl start etcd.service
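Optionally, verify that etcd is healthy; the etcdctl v2 syntax below assumes the stock CentOS 7 etcd package:
netstat -ntlp | grep -E '2379|2380'                           // client and peer ports are listening
etcdctl --endpoints=http://172.16.10.33:2379 cluster-health   // should report that the cluster is healthy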

Deploy the Keystone identity service

The Identity service provides authentication and authorization for the other OpenStack services and maintains a catalog of their endpoints; the other services use it as a common, unified API. In addition, services that provide user information but are not OpenStack projects (such as an LDAP service) can be integrated into an existing infrastructure.
To benefit from the Identity service, the other OpenStack services must work with it. When an OpenStack service receives a request from a user, it asks the Identity service whether the user is authorized to make that request. The Identity service consists of the following components:

  • Server: a centralized server provides authentication and authorization services through a RESTful interface.
  • Drivers: drivers or service back ends are integrated into the centralized server. They are used to access identity information in repositories external to OpenStack, and they may already exist in the infrastructure where OpenStack is deployed, such as SQL databases.
  • Modules: middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The middleware modules are integrated with the OpenStack components through the Python Web Server Gateway Interface (WSGI).
    When you install an OpenStack service, you must register it with your OpenStack installation so that the Identity service can track which OpenStack services are installed and locate them on the network.

Install and configure the Keystone service

Perform the following on the controller node

Create the MySQL database and grant privileges
mysql -uroot -p         // log in to the database
CREATE DATABASE keystone;   // create the keystone database
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
// grant access from localhost
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
// grant access from any host
FLUSH PRIVILEGES;
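Optionally, verify that the keystone account can reach the database over the network before continuing (a sanity check that is not part of the original guide):
mysql -u keystone -p123456 -h controller -e "SHOW DATABASES;"   // should list the keystone database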
Install packages
yum install openstack-keystone httpd mod_wsgi -y
Edit the configuration file (keystone.conf)
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:123456@controller/keystone
[token]
provider = fernet     // line 2922, token provider (Fernet)
Populate the database
su -s /bin/sh -c "keystone-manage db_sync" keystone


Initialize the Fernet key repositories and bootstrap the Identity service
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
// creates the admin user and registers the admin, internal, and public endpoints


Configure the Apache service
vim /etc/httpd/conf/httpd.conf
ServerName controller   // set ServerName to the controller hostname
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/   // create a symbolic link
systemctl enable httpd.service
systemctl start httpd.service              // start the service and enable it at boot
Set the environment variables
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create domains, projects, users, and roles

Create a domain
openstack domain create --description "Domain" example


Create the service project
openstack project create --domain default   --description "Service Project" service


Create the demo project
openstack project create --domain default --description "Demo Project" demo


Create the demo user
openstack user create --domain default  --password-prompt demo


Create the user role
openstack role create user


Add the user role to the demo project and user
openstack role add --project demo --user demo user  // this command produces no output
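Because the command prints nothing, the assignment can optionally be confirmed with the client (the --names option is assumed to be available in the Queens-era openstackclient; omit it if not):
openstack role assignment list --project demo --user demo --names   // should show the user role assigned to demo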

Verify Keystone

Unset the temporary environment variables
unset OS_AUTH_URL OS_PASSWORD
Request an authentication token as the admin user
openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue


Request an authentication token as the demo user
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue

Create OpenStack client environment scripts

Create the admin-openrc script
vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create the demo-openrc script
vim demo-openrc

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Use the scripts to verify operation

Request a token as the admin user

source ~/admin-openrc   // load the environment variables
openstack token issue   // request a token

Image service (Glance)

Perform the following on the controller node

Install and configure

Create the MySQL database and grant privileges
mysql -u root -p

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'  IDENTIFIED BY '123456';
FLUSH PRIVILEGES;
Load the admin user's environment variables
source ~/admin-openrc
export | grep OS_


Create the glance user
openstack user create --domain default --password-prompt glance
Add the admin role to the glance user and the service project
openstack role add --project service --user glance admin
Create the glance service entity
openstack service create --name glance  --description "OpenStack Image" image
Create the Image service API endpoints

OpenStack uses three API endpoint variants for each service: admin, internal, and public.

openstack endpoint create --region RegionOne  image public http://controller:9292
openstack endpoint create --region RegionOne  image internal http://controller:9292
openstack endpoint create --region RegionOne  image admin http://controller:9292


Install the glance package
yum install openstack-glance -y
Create the images directory and set its ownership
mkdir /var/lib/glance/images
cd /var/lib
chown -hR glance:glance glance
Edit the glance-api.conf configuration file
vim /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000    
auth_url = http://controller:35357  // line 3501; note this is auth_url, not auth_uri
memcached_servers = controller:11211    // line 3552
auth_type = password        //3659
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone   //4508

[glance_store]
stores = file,http    //2066
default_store = file   //2110
filesystem_store_datadir = /var/lib/glance/images  //2429
Edit the glance-registry.conf configuration file
vim /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357    // note this is auth_url, not auth_uri
memcached_servers = controller:11211  //1365
auth_type = password               //1472
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
flavor = keystone           //2294
Populate the Image service database
su -s /bin/sh -c "glance-manage db_sync" glance


Start the services
systemctl enable openstack-glance-api.service
systemctl start openstack-glance-api.service
systemctl enable openstack-glance-registry.service
systemctl start openstack-glance-registry.service

Verify by uploading an image

Load the admin environment variables and download a test image
source ~/admin-openrc
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
// download a small Linux image for testing
Upload the image

Upload the image to the Image service using the QCOW2 disk format, the bare container format, and public visibility so that all projects can access it.

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img  --disk-format qcow2 --container-format bare  --public


List the uploaded images
openstack image list


Deploy the Compute service (Nova)

Perform the following on the controller node

Install and configure

Create the MySQL databases and grant privileges
mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
Create the nova user
source ~/admin-openrc   // load the admin environment variables
openstack user create --domain default --password-prompt nova
Add the admin role to the nova user
openstack role add --project service --user nova admin
Create the nova service entity
openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Create a placement service user
openstack user create --domain default --password-prompt placement
Add the admin role to the placement user in the service project
openstack role add --project service --user placement admin
Create the Placement API service entity in the service catalog
openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install packages
yum install openstack-nova-api openstack-nova-conductor  openstack-nova-console openstack-nova-novncproxy  openstack-nova-scheduler openstack-nova-placement-api -y
Edit the nova.conf configuration file
vim /etc/nova/nova.conf

[DEFAULT]
enabled_apis=osapi_compute,metadata  // line 2756
transport_url=rabbit://openstack:123456@controller  // line 3156
my_ip=172.16.10.33     // line 1291
use_neutron=true    // line 1755
firewall_driver=nova.virt.firewall.NoopFirewallDriver   // line 2417

[api_database]
connection=mysql+pymysql://nova:123456@controller/nova_api  // line 3513

[database]
connection=mysql+pymysql://nova:123456@controller/nova   // line 4588

[api]
auth_strategy=keystone   // line 3221

[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357    // line 6073
memcached_servers=controller:11211   // line 6124
auth_type=password     // line 6231
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled=true    // line 10213
server_listen=$my_ip     // line 10237
server_proxyclient_address=$my_ip    // line 10250

[glance]
api_servers=http://controller:9292   // line 5266

[oslo_concurrency]
lock_path=/var/lib/nova/tmp   // line 7841

[placement]
os_region_name=RegionOne    // line 8740
auth_type=password    // line 8780
auth_url=http://controller:35357/v3   // line 8786
project_name=service   // line 8801
project_domain_name=Default   // line 8807
username=placement     // line 8827
user_domain_name=Default    // line 8833
password=123456    // line 8836
Enable access to the Placement API

Due to a packaging bug, access to the Placement API must be enabled explicitly; append the following to the end of the configuration file.

vim /etc/httpd/conf.d/00-nova-placement-api.conf

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service
systemctl restart httpd.service
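Optionally, check that the Placement API now responds before populating the databases (its root URL returns a JSON list of API versions):
curl http://controller:8778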
Populate the nova-api database
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database
su -s /bin/sh -c "nova-manage db sync" nova
Verify that the cells are registered correctly
nova-manage cell_v2 list_cells


Start the services and enable them at boot
systemctl enable openstack-nova-api.service
systemctl enable openstack-nova-consoleauth.service
systemctl enable openstack-nova-scheduler.service
systemctl enable openstack-nova-conductor.service
systemctl enable openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service
systemctl start openstack-nova-consoleauth.service
systemctl start openstack-nova-scheduler.service
systemctl start openstack-nova-conductor.service
systemctl start openstack-nova-novncproxy.service

Install and configure the compute node

Install packages
yum install openstack-nova-compute -y
Edit the nova.conf configuration file
vim /etc/nova/nova.conf 

[DEFAULT]
my_ip = 172.16.10.35         // line 1291, the compute node IP
use_neutron=true              //1755 
firewall_driver=nova.virt.firewall.NoopFirewallDriver       //2417
enabled_apis = osapi_compute,metadata                 //2756
transport_url = rabbit://openstack:123456@controller  //3156

[api]
auth_strategy=keystone    //3221 

[keystone_authtoken]
auth_uri = http://172.16.10.33:5000       // line 6073, the controller node IP
auth_url = http://controller:35357
memcached_servers=controller:11211      //6124 
auth_type=password                     //6231 
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password=123456

[vnc]
enabled=true        //10213 
server_listen=0.0.0.0       //10237 
server_proxyclient_address=$my_ip      //10250 
novncproxy_base_url=http://controller:6080/vnc_auto.html     //10268 

[glance]
api_servers=http://controller:9292       //5266 

[oslo_concurrency]
lock_path=/var/lib/nova/tmp       //7841 

[placement]
os_region_name=RegionOne         //8740 
auth_type = password                //8780
auth_url=http://controller:35357/v3    //8786
project_name = service        //8801
project_domain_name = Default     //8807
user_domain_name = Default        //8833
username = placement            //8827
password = 123456               //8836
Start the services and enable them at boot
systemctl enable libvirtd.service
systemctl restart libvirtd
systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Add the compute node to the cell database

Perform the following on the controller node

Verify that the compute node exists in the database
source ~/admin-openrc        // reload the environment variables after a VM reboot
openstack compute service list --service nova-compute
Discover the compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Verify the Compute service on the controller node
openstack compute service list


List the API endpoints in the Identity service to verify connectivity to the Identity service
openstack catalog list


Check that the cells and the Placement API are working properly
nova-status upgrade check


Networking service (Neutron)

Install and configure Neutron on the controller node

Create the neutron database and grant privileges
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'   IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%'   IDENTIFIED BY '123456';
Create the neutron user
source ~/admin-openrc
openstack user create --domain default --password-prompt neutron
Create the neutron service entity
openstack service create --name neutron   --description "OpenStack Networking" network
Create the network service API endpoints
openstack endpoint create --region RegionOne  network public http://controller:9696
openstack endpoint create --region RegionOne  network internal http://controller:9696
openstack endpoint create --region RegionOne  network admin http://controller:9696
Install packages
yum install -y openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables
Edit the configuration file
vim  /etc/neutron/neutron.conf

[database]
connection = mysql+pymysql://neutron:123456@controller/neutron   //729

[DEFAULT]
auth_strategy = keystone  //27
core_plugin = ml2   //30
service_plugins =    // line 33; leaving this empty disables additional plug-ins
transport_url = rabbit://openstack:123456@controller   //570
notify_nova_on_port_status_changes = true   //98
notify_nova_on_port_data_changes = true     //102

[keystone_authtoken]
auth_uri = http://controller:5000   //847
auth_url = http://controller:35357
memcached_servers = controller:11211    //898
auth_type = password        //1005
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
auth_url = http://controller:35357   //1085
auth_type = password        //1089
project_domain_name = default   //1127
user_domain_name = default    //1156
region_name = RegionOne      //1069
project_name = service     //1135
username = nova           //1163
password = 123456        //1121

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp       //1179
Configure the Modular Layer 2 (ML2) plug-in
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan       //136
tenant_network_types =           // line 141; leaving this empty disables self-service networks
mechanism_drivers = linuxbridge    //145
extension_drivers = port_security  //150

[ml2_type_flat]
flat_networks = provider   //186

[securitygroup]
enable_ipset = true    //263
Configure the Linux bridge agent
vim  /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens33       //157

[vxlan]
enable_vxlan = false      //208

[securitygroup]
enable_security_group = true       //193
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver   //188
Configure the DHCP agent
vim /etc/neutron/dhcp_agent.ini

interface_driver = linuxbridge          //16
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq          //28
enable_isolated_metadata = true            //37
Configure the metadata agent
vim  /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller   //22
metadata_proxy_shared_secret = 123456       //34
Configure the Compute service to use the Networking service
vim /etc/nova/nova.conf

[neutron]
url = http://controller:9696         //7534
auth_url = http://controller:35357   //7610
auth_type = password                //7604
project_domain_name = default        //7631
user_domain_name = default          //7657
region_name = RegionOne          //7678
project_name = service          //7625 
username = neutron              //7651
password = 123456               //7660
service_metadata_proxy = true     //7573
metadata_proxy_shared_secret = 123456   //7584
Create the plug-in symbolic link
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service
systemctl restart openstack-nova-api.service
Start the Neutron services and enable them at boot
systemctl enable neutron-server.service   
systemctl enable neutron-linuxbridge-agent.service 
systemctl enable neutron-dhcp-agent.service   
systemctl enable neutron-metadata-agent.service
systemctl start neutron-server.service   
systemctl start neutron-linuxbridge-agent.service 
systemctl start neutron-dhcp-agent.service   
systemctl start neutron-metadata-agent.service
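Optionally, verify that the Neutron agents registered successfully on the controller; the compute node's agent will appear after the next section:
source ~/admin-openrc
openstack network agent list        // the metadata, DHCP, and linuxbridge agents should be listed and alive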

Configure the Networking service on the compute node

Install packages
yum install -y openstack-neutron-linuxbridge ebtables ipset
Configure the common components
vim /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy = keystone      //27
transport_url = rabbit://openstack:123456@controller   //570

[keystone_authtoken]
auth_uri = http://controller:5000      //847
auth_url = http://controller:35357
memcached_servers = controller:11211    //898
auth_type = password       //1005
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp    //1180
Configure the Linux bridge agent
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens33   //157

[vxlan]
enable_vxlan = false    //208

[securitygroup]
enable_security_group = true    //193
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver   //188
Configure the compute node to use the Networking service
vim /etc/nova/nova.conf

[neutron]
url = http://controller:9696    //7534
auth_url = http://controller:35357    //7610
auth_type = password      //7640
project_domain_name = default    //7631
user_domain_name = default    //7657
region_name = RegionOne    //7678
project_name = service    //7625
username = neutron    //7651
password = 123456   //7660
Start the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

Deploy the Horizon service

Install Horizon on the controller node

Install packages
yum install openstack-dashboard -y
Edit the configuration file
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"    //189
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"   //191
ALLOWED_HOSTS = ['*']     //38
SESSION_ENGINE = 'django.contrib.sessions.backends.file'  //51
Configure memcached session storage
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'   // line 50, add this line
CACHES = {          // comment out lines 166-170 and uncomment lines 159-164
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST   // line 190, enable Identity API version 3
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True    // line 76, enable multi-domain support

OPENSTACK_API_VERSIONS = {    // line 65, configure the API versions
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"   //98

OPENSTACK_NEUTRON_NETWORK = {    //324

    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
Fix the dashboard page failing to open
vim /etc/httpd/conf.d/openstack-dashboard.conf

WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL}   // add this line
Restart the web server and session storage service
systemctl restart httpd.service 
systemctl restart memcached.service
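Before opening a browser, a quick check that the dashboard responds (the /dashboard path matches the default WEBROOT of the CentOS packages):
curl -I http://172.16.10.33/dashboard/        // should return HTTP 200 or a redirect to the login page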

Log-in test

http://172.16.10.33/dashboard

domain: default
Username: admin
Password: 123456


Reposted from: https://blog.51cto.com/13643643/2171262
