II. Installing OpenStack (Pike)

1 Overview of the distributed OpenStack deployment

An OpenStack deployment usually places the control and network roles on one server, with the compute node and the storage node each on a server of their own, although all roles can also be co-located on a single machine. This document follows the official Pike installation guide to build a distributed Pike environment with two nodes: a controller node (which also hosts the network services) and a compute node. The modules and services to install on each node are shown in Figure 1-1:

                                     Figure 1-1

The controller node is, as the name suggests, the node that manages the virtual machines being created: it stores information about existing virtual machines in the database, handles identity authentication, schedules where instances run, provides network services, and so on.

The compute node is where virtual machines are actually created. Once the data describing the desired instance has been prepared, the compute node is invoked; it calls into the virtualization layer (KVM) through libvirt, and the virtualization layer builds a virtual machine from that data and the available hardware.

Let us look at what each of the modules and packages to be installed does:

chrony: chrony provides time synchronization, i.e. it keeps the clocks of the nodes consistent. In this deployment the controller node acts as the time server, and the other nodes synchronize their clocks against the controller.

mariadb and mysql: MariaDB is a community fork of MySQL, created by MySQL's original developers after MySQL ended up under Oracle's ownership; the two are operated in essentially the same way. OpenStack stores its data in this kind of relational database, here MariaDB accessed over the MySQL protocol.

rabbitmq-server: RabbitMQ is an AMQP (Advanced Message Queuing Protocol) broker. OpenStack consists of many components, and communication between them relies on RabbitMQ.
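The messaging pattern RabbitMQ provides can be sketched with Python's standard-library queue as a stand-in broker. This is only an illustration: real OpenStack services speak AMQP to RabbitMQ through the oslo.messaging library, and the service names in the comments are hypothetical.

```python
import queue
import threading

# A stand-in broker: in a real deployment this role is played by RabbitMQ,
# and OpenStack services talk to it via oslo.messaging, not queue.Queue.
broker = queue.Queue()

def api_service():
    # e.g. nova-api publishes a "boot instance" request onto the queue
    broker.put({"action": "boot", "instance": "vm-1"})

def worker_service(results):
    # e.g. nova-compute consumes the request and acts on it
    msg = broker.get()
    results.append("handled " + msg["action"] + " for " + msg["instance"])
    broker.task_done()

results = []
t = threading.Thread(target=worker_service, args=(results,))
t.start()
api_service()
t.join()
print(results[0])  # handled boot for vm-1
```

The point of the pattern is that the publisher and the consumer never call each other directly; the broker decouples them, which is why OpenStack components on different hosts can cooperate through one RabbitMQ instance.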

python-memcached: python-memcached is the Python client for memcached, which is used as a cache. Caching data such as authentication tokens reduces the load on the backend services and speeds up access. (The dashboard, Horizon, is built on Django and also uses this cache for its sessions.)
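The caching idea can be illustrated with the cache-aside pattern. In this sketch a plain dict stands in for the memcached daemon, and `expensive_lookup` is a hypothetical stand-in for a slow backend query; the real services use the python-memcached client against `controller:11211`.

```python
# Illustrative cache-aside pattern; a dict stands in for memcached here.
cache = {}

def expensive_lookup(key):
    # hypothetical slow backend query (e.g. a database round trip)
    return "value-for-" + key

def cached_get(key):
    if key in cache:                   # cache hit: skip the slow backend
        return cache[key], True
    value = expensive_lookup(key)      # cache miss: query the backend...
    cache[key] = value                 # ...and populate the cache
    return value, False

v1, hit1 = cached_get("token-abc")     # first access: miss
v2, hit2 = cached_get("token-abc")     # second access: hit
print(v1, hit1, hit2)  # value-for-token-abc False True
```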

keystone: keystone is the identity (authentication) service. Every operation on every resource in OpenStack must be authenticated by keystone. For example, when a user wants to access nova, the flow is as shown in Figure 1-2:

                                      Figure 1-2

The user's request first goes to keystone, which checks whether the user exists and the credentials are valid. On success keystone returns a token to the user, who then accesses nova with that token. nova and keystone are separate components, so nova cannot tell on its own whether the token is genuine; to keep things secure, nova sends the user's token back to keystone for validation. Once keystone confirms it, nova carries out the work the user requested and returns the result to the user.
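The round trips above can be sketched as a toy model. The classes and methods here are purely illustrative and do not reflect keystone's real API; real keystone issues cryptographically signed Fernet tokens rather than random hex strings.

```python
import secrets

# Toy model of the flow in Figure 1-2: issue a token, then let nova
# validate it against keystone before doing any work.
class Keystone:
    def __init__(self, users):
        self._users = users        # username -> password
        self._tokens = {}          # token -> username

    def issue_token(self, user, password):
        if self._users.get(user) != password:
            raise PermissionError("authentication failed")
        token = secrets.token_hex(8)
        self._tokens[token] = user
        return token

    def validate_token(self, token):
        return token in self._tokens

class Nova:
    def __init__(self, keystone):
        self._keystone = keystone

    def boot_instance(self, token):
        # nova cannot judge the token by itself, so it asks keystone
        if not self._keystone.validate_token(token):
            raise PermissionError("invalid token")
        return "instance booted"

keystone = Keystone({"demo": "123456"})
nova = Nova(keystone)
token = keystone.issue_token("demo", "123456")
print(nova.boot_instance(token))  # instance booted
```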

glance: glance provides the image service. Creating a virtual machine requires an image file, and that image file is supplied by the glance component.

nova: nova provides the compute (instance creation) service. When it receives a request, it calls into the virtualization layer to perform the corresponding virtualization operations.

neutron: neutron provides the networking service; it manages network resources and supplies networking to the virtual machines.

dashboard: the dashboard (Horizon) provides a web interface for operating OpenStack.

That concludes the brief introduction to the modules and services OpenStack uses. The above reflects my own reading of the official documentation; corrections and suggestions are welcome.

2 Environment overview

This document uses CentOS 7.2 as the operating system and installs the Pike release of OpenStack. Setting up the Pike yum repository is covered in the earlier yum repository document.

This deployment builds two nodes, a controller node and a compute node. The controller node has three NICs, ens33, ens37 and ens38: ens38 provides external (Internet) access, ens33 is the management NIC, and ens37 is the tenant/data NIC. The compute node has two NICs, ens33 and ens37: ens33 is the management NIC and ens37 is the tenant/data NIC.

Rename the server hosting the controller node to controller and the server hosting the compute node to compute1; the change takes effect after a reboot:

# Run the following on the controller node
$vim /etc/hostname

controller

# Run the following on the compute node
$vim /etc/hostname

compute1

3 Configure the network interfaces

# Run the following on the controller node
$vim /etc/sysconfig/network-scripts/ifcfg-ens33 # management NIC

DEVICE=ens33
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="static"

IPADDR=10.0.0.11
NETMASK=255.255.255.0
GATEWAY=10.0.0.1


$vim /etc/sysconfig/network-scripts/ifcfg-ens37 # tenant/data NIC

DEVICE=ens37
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

$vim /etc/sysconfig/network-scripts/ifcfg-ens38  # after configuring this NIC, verify that the node can reach the Internet

TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens38"
UUID="h8064b18-d31d-8734-9db0-ba7376ba3db0"
DEVICE="ens38"
ONBOOT="yes"
IPADDR=192.168.135.144
NETMASK=255.255.255.0
GATEWAY=192.168.135.2  

$vim /etc/hosts
10.0.0.11 controller
10.0.0.31 compute1

$systemctl restart network
 
  
# Run the following on the compute node
$vim /etc/sysconfig/network-scripts/ifcfg-ens33 # management NIC

DEVICE=ens33
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="static"

IPADDR=10.0.0.31
NETMASK=255.255.255.0
GATEWAY=10.0.0.1

$vim /etc/sysconfig/network-scripts/ifcfg-ens37 # tenant/data NIC

DEVICE=ens37
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

$vim /etc/hosts
10.0.0.11 controller
10.0.0.31 compute1

$systemctl restart network

4 Install the time synchronization service

***** Run on the controller node *****
$yum install chrony

$vim /etc/chrony.conf
allow 10.0.0.0/24   # keep the default server entries and allow other nodes on the management network to sync from this node

$systemctl enable chronyd.service
$systemctl restart chronyd.service


***** Run on the compute node *****

$yum install chrony

$vim /etc/chrony.conf
server controller iburst # on the compute node, comment out all other server lines so the controller is the only time source

$systemctl enable chronyd.service
$systemctl restart chronyd.service

5 Install the OpenStack CLI client and openstack-selinux

SELinux is a Linux security mechanism for mandatory access control; installing openstack-selinux manages the SELinux policies for the OpenStack services automatically.

***** Run on the controller node *****

$yum install python-openstackclient
$yum install openstack-selinux

$systemctl stop firewalld.service  # stop the firewall

$setenforce 0   # put SELinux into permissive mode

***** Run on the compute node *****

$yum install python-openstackclient

$yum install openstack-selinux

$systemctl stop firewalld.service  # stop the firewall
$setenforce 0   # put SELinux into permissive mode

6 Install the database (controller node)

The database stores the services' data and is deployed on the controller node:

$yum install mariadb mariadb-server python2-PyMySQL

$vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 10.0.0.11 # management NIC IP of the controller node

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

$systemctl enable mariadb.service
$systemctl start mariadb.service
$mysql_secure_installation  # set the database root password; in this guide it is set to 123456

7 Install RabbitMQ (controller node)

$yum install rabbitmq-server

$systemctl enable rabbitmq-server
$systemctl restart rabbitmq-server

$rabbitmqctl add_user openstack 123456   # create the user and password
$rabbitmqctl set_permissions openstack ".*" ".*" ".*"   # grant permissions to the openstack user

8 Install memcached (controller node)

$yum install memcached python-memcached

$vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
$systemctl enable memcached.service
$systemctl restart memcached.service

9 Install keystone (controller node)

$mysql -u root -p

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '123456';   # grant database access to the keystone user from localhost
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '123456'; # grant database access to the keystone user from any host


$yum install openstack-keystone httpd mod_wsgi

$vim /etc/keystone/keystone.conf

[database]
# ...
connection = mysql+pymysql://keystone:123456@controller/keystone # database connection

[token]
# ...
provider = fernet
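The `connection` option above is a SQLAlchemy-style database URL of the form `dialect+driver://user:password@host/dbname`. A minimal sketch of picking such a URL apart; the regex is a simplified illustration, not what oslo.db itself uses.

```python
import re

# Parse a SQLAlchemy-style database URL like the one configured above.
dsn = "mysql+pymysql://keystone:123456@controller/keystone"
pattern = (r"^(?P<dialect>[^+]+)\+(?P<driver>[^:]+)://"
           r"(?P<user>[^:]+):(?P<password>[^@]+)@(?P<host>[^/]+)/(?P<db>.+)$")
parts = re.match(pattern, dsn).groupdict()
print(parts["driver"], parts["host"], parts["db"])  # pymysql controller keystone
```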

$su -s /bin/sh -c "keystone-manage db_sync" keystone  # populate the Identity service database

# initialize the Fernet key repositories
$keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$keystone-manage credential_setup --keystone-user keystone --keystone-group  keystone

# bootstrap the Identity service

$keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

$vim /etc/httpd/conf/httpd.conf

ServerName controller

# create a symlink to /usr/share/keystone/wsgi-keystone.conf
$ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

$systemctl enable httpd.service
$systemctl restart httpd.service

$export OS_USERNAME=admin
$export OS_PASSWORD=123456
$export OS_PROJECT_NAME=admin
$export OS_USER_DOMAIN_NAME=Default
$export OS_PROJECT_DOMAIN_NAME=Default
$export OS_AUTH_URL=http://controller:35357/v3
$export OS_IDENTITY_API_VERSION=3
# create the service project
$openstack project create --domain default \
  --description "Service Project" service

# create the demo project
$openstack project create --domain default \
--description "Demo Project" demo

# create the demo user; set the password to 123456
$openstack user create --domain default \
--password-prompt demo

# create the user role
$openstack role create user

# associate the role with the project and user
$openstack role add --project demo --user demo user

# unset the temporary environment variables
$unset OS_AUTH_URL OS_PASSWORD

# request a token as the admin user; the password is 123456
$openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

# request a token as the demo user; the password is 123456
$openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue

# create the environment files
$vim ~/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

$vim ~/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

$. admin-openrc       # load the environment variables
$openstack token issue # request a token to verify everything works

10 Install the Image service (controller node)

$mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '123456';

$. admin-openrc
# create the glance user; set the password to 123456
$openstack user create --domain default --password-prompt glance

# add the admin role to the glance user in the service project
$openstack role add --project service --user glance admin

# create the glance service entity
$openstack service create --name glance \
--description "OpenStack Image" image

# create the Image service API endpoints
$openstack endpoint create --region RegionOne \
  image public http://controller:9292

$openstack endpoint create --region RegionOne \
  image internal http://controller:9292

$openstack endpoint create --region RegionOne \
  image admin http://controller:9292


$yum install openstack-glance

$vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:123456@controller/glance


[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
# ...
flavor = keystone


[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

$vim /etc/glance/glance-registry.conf
[database]
# ...
connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

[paste_deploy]
# ...
flavor = keystone

$su -s /bin/sh -c "glance-manage db_sync" glance

$systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service

$systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

# load the environment variables
$. admin-openrc
# download a test image
$wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
# upload the image
$openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
# list images
$openstack image list

11 Install the Nova service

***** Install on the controller node *****
$mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';

$. admin-openrc

# set the password to 123456
$openstack user create --domain default --password-prompt nova

$openstack role add --project service --user nova admin

$openstack service create --name nova \
  --description "OpenStack Compute" compute

$openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1

$openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1

$openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1

# set the password to 123456
$openstack user create --domain default --password-prompt placement

$openstack role add --project service --user placement admin

$openstack service create --name placement --description "Placement API" placement

$openstack endpoint create --region RegionOne placement public \
http://controller:8778

$openstack endpoint create --region RegionOne placement internal \
http://controller:8778

$openstack endpoint create --region RegionOne placement admin \
http://controller:8778


$yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api

$vim /etc/nova/nova.conf

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.0.0.11 # management NIC IP of the controller node
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

$vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

$systemctl restart httpd

$su -s /bin/sh -c "nova-manage api_db sync" nova

$su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

$su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

$su -s /bin/sh -c "nova-manage db sync" nova

$nova-manage cell_v2 list_cells

$systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

$systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service



***** Install on the compute node *****
$yum install openstack-nova-compute
$vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 10.0.0.31 # management NIC IP of the compute node
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456


[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

$egrep -c '(vmx|svm)' /proc/cpuinfo
# if the command above prints 0, the host lacks hardware virtualization support; add the following to /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu

$systemctl enable libvirtd.service openstack-nova-compute.service
$systemctl start libvirtd.service openstack-nova-compute.service

# Run the following on the controller node
$. admin-openrc
$su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

$openstack compute service list

$openstack catalog list

$openstack image list

$nova-status upgrade check

12 Install the Neutron service

***** Install on the controller node *****

$mysql -u root -p

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123456';

$. admin-openrc

# create the neutron user; set the password to 123456
$openstack user create --domain default --password-prompt neutron

$openstack role add --project service --user neutron admin

$openstack service create --name neutron \
  --description "OpenStack Networking" network


$openstack endpoint create --region RegionOne \
  network public http://controller:9696


$openstack endpoint create --region RegionOne \
  network internal http://controller:9696

$openstack endpoint create --region RegionOne \
  network admin http://controller:9696


$yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

$vim  /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp


$vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true

$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens37 # ens37 is the tenant/data NIC

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.11  # management NIC IP
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

$vim /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge

$vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

$vim /etc/neutron/metadata_agent.ini 
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET # replace METADATA_SECRET with a secret of your choosing

$vim /etc/nova/nova.conf
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET # must match the secret configured in metadata_agent.ini

$ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

$su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

$systemctl restart openstack-nova-api.service

$systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service


$systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

$systemctl enable neutron-l3-agent.service
$systemctl start neutron-l3-agent.service





***** Configure the compute node *****
$yum install openstack-neutron-linuxbridge ebtables ipset

$vim /etc/neutron/neutron.conf
[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456


[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

$vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens37  # tenant/data NIC of the compute node

[vxlan]
enable_vxlan = true
local_ip = 10.0.0.31  # management NIC IP of the compute node
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

$vim /etc/nova/nova.conf
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

$systemctl restart openstack-nova-compute.service

$systemctl enable neutron-linuxbridge-agent.service
$systemctl start neutron-linuxbridge-agent.service

$openstack network agent list  # run this on the controller node

13 Install the Dashboard (controller node)

$yum install openstack-dashboard

$vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}


OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

$systemctl restart httpd.service memcached.service

14 Open http://10.0.0.11/dashboard in a browser
