
OpenStack Pike Installation and Deployment Guide

I.  Environment Preparation

All of the installation and deployment in this guide was done on CentOS 7.4; every node has two network interfaces.

 

1.  VM node topology and hostnames

eth0: management network

eth1: data/tunnel network

Controller node: eth0: 10.0.2.15/24, eth1: 192.168.56.101/24

Network node: eth0: 10.0.2.5/24, eth1: 192.168.56.102/24

Compute node: eth0: 10.0.2.4/24, eth1: 192.168.56.103/24

Storage node: eth0: 10.0.2.6/24, eth1: 192.168.56.104/24

 

$ vim /etc/hosts

# controller

192.168.56.101      controller

# compute

192.168.56.103      compute

#network

192.168.56.102     network

#block storage 

192.168.56.104    block 
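The /etc/hosts mapping above can be sanity-checked before moving on. The sketch below (the `check` helper is our own, not part of the guide) verifies each hostname against the plan from this section using a scratch copy of the file:

```shell
# Write a scratch copy of the planned /etc/hosts entries, then confirm
# each hostname resolves to the IP assigned in section I.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.56.101 controller
192.168.56.103 compute
192.168.56.102 network
192.168.56.104 block
EOF
# check <hostname> <expected-ip>: prints "<hostname> ok" on a match
check() { grep -qE "^$2[[:space:]]+$1\$" "$hosts_file" && echo "$1 ok" || echo "$1 MISMATCH"; }
check controller 192.168.56.101
check compute    192.168.56.103
check network    192.168.56.102
check block      192.168.56.104
```

The same idea applies against the real /etc/hosts once it is edited.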

 

2.  VM NIC configuration

Use the traditional NIC naming scheme.

Edit /etc/default/grub and add "net.ifnames=0" to GRUB_CMDLINE_LINUX, then regenerate the grub configuration:

$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg

[NOTE]  For details see: www.linuxprobe.com/eno16777736-eth0/

3.  Disable the firewall and the NetworkManager service on every node

#service NetworkManager stop

#chkconfig NetworkManager off

# systemctl stop firewalld.service

# systemctl disable firewalld.service

# /usr/sbin/setenforce 0

 

##########set SELINUX disabled##############

#vim /etc/sysconfig/selinux

SELINUX=disabled

4.  Install the NTP service

1)  Install chrony on all nodes

$ yum install chrony

 

2)  Configure /etc/chrony.conf on the controller node

Edit the relevant part:

$ vim /etc/chrony.conf

……

allow 10.0.0.0/8

 

Restart the chronyd service on the server:

# systemctl enable chronyd.service

# systemctl start chronyd.service

 

3)  Configure the NTP clients (the network, compute, and block nodes)

Edit the relevant part:

$ vim /etc/chrony.conf

……

server controller iburst

……

 

Start the NTP service:

# systemctl enable chronyd.service

# systemctl start chronyd.service

 

4)  Verify on all nodes

$ chronyc sources

 

5.  Install the OpenStack packages (all nodes)

# yum install centos-release-openstack-pike

# yum upgrade

# yum install python-openstackclient

# yum install openstack-selinux

6.  Install the MariaDB SQL database

1)  On the controller node:

Install mariadb-server:

# yum install mariadb mariadb-server python2-PyMySQL

 

Create the /etc/my.cnf.d/openstack.cnf configuration:

# vi /etc/my.cnf.d/openstack.cnf

[mysqld]

bind-address = 192.168.56.101

default-storage-engine = innodb

innodb_file_per_table = on

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

 

Start the MariaDB service, enable it at boot, then secure the installation:

# systemctl enable mariadb.service

# systemctl start mariadb.service

# mysql_secure_installation

Set the password to 123456; answer yes to the remaining prompts.

      

 

7.  Install the message queue (RabbitMQ) on the controller node

# yum install rabbitmq-server

Start the RabbitMQ service and enable it at boot:

# systemctl enable rabbitmq-server.service

# systemctl start rabbitmq-server.service

 

Add a rabbitmq user and grant permissions (the three ".*" patterns grant configure, write, and read access respectively):

# rabbitmqctl add_user openstack openstack123

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

 

8.  Install Memcached (controller node)

Install the packages:

yum install memcached python-memcached

 

Configure /etc/sysconfig/memcached

 

OPTIONS="-l 127.0.0.1,::1"

Change it to

OPTIONS="-l 127.0.0.1,::1,controller"

Start the service:

systemctl enable memcached.service

systemctl start memcached.service

II.  Installing Keystone

[Note] Keystone only needs to be installed on the controller node.

 

1)  Create the keystone database on the MariaDB node

$ mysql -u root -p

mysql> CREATE DATABASE keystone;

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \

IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \

IDENTIFIED BY '123456';

mysql> exit
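The same CREATE/GRANT pattern recurs for every service database in this guide (glance, nova, neutron, cinder). As a sketch, a small shell helper (our own invention, not part of the guide) can emit the statements for any service:

```shell
# Emit the CREATE DATABASE / GRANT statements used throughout this guide.
# db_grants <dbname> <user> <password>
db_grants() {
  local db=$1 user=$2 pass=$3
  printf "CREATE DATABASE %s;\n" "$db"
  # one grant for local connections, one for remote ('%')
  for host in localhost '%'; do
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s' IDENTIFIED BY '%s';\n" \
      "$db" "$user" "$host" "$pass"
  done
}
db_grants keystone keystone 123456
```

The output can be piped straight into `mysql -u root -p`.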

2)  Install the RPMs with yum

#  yum install openstack-keystone httpd mod_wsgi

 

3)        配置/etc/keystone/keystone.conf

[DEFAULT]

verbose=True

admin_token=15fe8a5fd6f8a6c0cb74

log_dir=/var/log/keystone

[database]

connection = mysql+pymysql://keystone:123456@controller/keystone

[token]

provider = fernet

 

4)  Populate the Keystone database schema

#  su -s /bin/sh -c "keystone-manage db_sync" keystone

5)  Initialize the Fernet key and credential repositories

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

 

6)  Bootstrap the Keystone service

# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \

 --bootstrap-admin-url http://controller:35357/v3/ \

 --bootstrap-internal-url http://controller:5000/v3/ \

 --bootstrap-public-url http://controller:5000/v3/ \

 --bootstrap-region-id RegionOne

7)  Configure the Apache HTTP service

###### Set the ServerName in /etc/httpd/conf/httpd.conf

ServerName controller

#### Create a symlink to /usr/share/keystone/wsgi-keystone.conf:

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

 

##### Start the httpd service

# systemctl enable httpd.service

# systemctl start httpd.service

 

8)  Create the service entity and API endpoints

## Set the authentication environment variables

# export OS_USERNAME=admin

# export OS_PASSWORD=ADMIN_PASS

# export OS_PROJECT_NAME=admin

# export OS_USER_DOMAIN_NAME=Default

# export OS_PROJECT_DOMAIN_NAME=Default

# export OS_AUTH_URL=http://controller:35357/v3

# export OS_IDENTITY_API_VERSION=3

##### Create the demo project, user, and role.

# openstack project create --domain default --description "Service Project" service

# openstack project create --domain default --description "Demo Project" demo

# openstack user create --domain default --password-prompt demo

# openstack role create user
# openstack role add --project demo --user demo user

 

9)  Verify that the installation succeeded

unset OS_AUTH_URL OS_PASSWORD

# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue

# openstack --os-auth-url http://controller:5000/v3 \

  --os-project-domain-name Default --os-user-domain-name Default \

  --os-project-name demo --os-username demo token issue

10)  Use environment scripts

# Create admin-openrc.sh

vim  admin-openrc.sh

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN_PASS

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2


[root@controller ~]# cat demo-openrc

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=demo

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

# Source admin-openrc.sh

source admin-openrc.sh

 

### Verify

# openstack token issue

# openstack service list

 

III.  Installing Glance

1)  Create the glance database on the MariaDB SQL node

$ mysql -u root -p123456

mysql> CREATE DATABASE glance;

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '123456';

mysql> exit

2)  Create the glance user and grant it the admin role

# openstack user create --domain default --password-prompt glance

# openstack role add --project service --user glance admin

 

3)  Create the glance service and endpoints in Keystone

# openstack service create --name glance --description "OpenStack Image" image

# openstack endpoint create --region RegionOne image public http://controller:9292

# openstack endpoint create --region RegionOne image internal http://controller:9292

# openstack endpoint create --region RegionOne image admin http://controller:9292

 

 

4)  Install the RPMs with yum

# yum install openstack-glance

 

5)  Edit the Glance configuration file /etc/glance/glance-api.conf

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = 123456

 

[paste_deploy]

# ...

flavor = keystone

 

[glance_store]

# ...

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

 

 

6)  Edit glance-registry.conf

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = 123456

 

[paste_deploy]

# ...

flavor = keystone

 

 

7)  Populate the database

# su -s /bin/sh -c "glance-manage db_sync" glance

8)  Start the glance services

# systemctl enable openstack-glance-api.service openstack-glance-registry.service

# systemctl start openstack-glance-api.service  openstack-glance-registry.service

9)  Verify that the Glance installation succeeded

#  . admin-openrc

# mkdir /tmp/images

# wget -P /tmp/images http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

# wget -P /tmp/images https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img  (the 0.3.0 image)

# glance image-create --name "cirros-0.3.5-x86_64" --file /tmp/images/cirros-0.3.5-x86_64-disk.img \

--disk-format qcow2 --container-format bare --progress

# glance image-list

# rm -r /tmp/images

IV.  Installing Nova

1.  Install Nova on the controller node

1)  Set up MySQL and create the nova databases

 

mysql -u root -p123456

mysql> CREATE DATABASE nova_api;

mysql> CREATE DATABASE nova;

mysql> CREATE DATABASE nova_cell0;

mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';

mysql> exit

 

2)  Set up Keystone: create the nova and placement services and endpoints

# . admin-openrc
# openstack user create --domain default --password-prompt nova
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# openstack user create --domain default --password-prompt placement
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778

3)  Install the RPMs with yum

#  yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \

 openstack-nova-scheduler openstack-nova-placement-api

 

 

4)  Edit nova.conf

 

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

enabled_apis = osapi_compute,metadata

my_ip = 192.168.56.101

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

# ...

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

 

[database]

# ...

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[api]

# ...

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[vnc]

enabled = true

# ...

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

 

[glance]

# ...

api_servers = http://controller:9292

[oslo_concurrency]

# ...

lock_path = /var/lib/nova/tmp

[placement]

# ...

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

 

5)  Configure /etc/httpd/conf.d/00-nova-placement-api.conf

<Directory /usr/bin>

   <IfVersion >= 2.4>

      Require all granted

   </IfVersion>

   <IfVersion < 2.4>

      Order allow,deny

      Allow from all

   </IfVersion>

</Directory>

 

6)  Restart the httpd service.

# systemctl restart httpd

 

7)  Populate the databases

# su -s /bin/sh -c "nova-manage api_db sync" nova

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

# su -s /bin/sh -c "nova-manage db sync" nova

 

 

8)  Verify that cell0 and cell1 are registered correctly.

# nova-manage cell_v2 list_cells

 

9)  Start the nova services and enable them at boot

# systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service \

openstack-nova-consoleauth.service openstack-nova-scheduler.service \

openstack-nova-conductor.service openstack-nova-novncproxy.service

 

10)  Run the following whenever a new compute node has been added.

# openstack compute service list --service nova-compute

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts--verbose" nova

# openstack compute service list --service nova-compute
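Instead of re-running discover_hosts by hand after every new compute node, nova.conf on the controller can schedule periodic discovery (an option from the upstream Pike docs; the 300-second interval below is just an example):

```
[scheduler]
discover_hosts_in_cells_interval = 300
```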

2.  Install the compute node

1)  Install the RPMs with yum

#  yum install openstack-nova-compute

 

2)  Edit the nova.conf configuration file

 

[DEFAULT]

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

# ...

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = NOVA_PASS

[vnc]

# ...

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]

# ...

api_servers = http://controller:9292

[oslo_concurrency]

# ...

lock_path = /var/lib/nova/tmp

[placement]

# ...

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

 

 

3)  Check whether the compute node's CPU supports hardware virtualization

$ egrep -c '(vmx|svm)' /proc/cpuinfo

##### If the command returns nothing, or returns 0, edit the configuration file:

[libvirt]                                            

virt_type=qemu
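The decision in step 3 can be sketched as a single script (our own sketch, not part of the guide): count the vmx/svm flags and fall back to plain qemu emulation when the CPU, or a nested VM, exposes no hardware virtualization.

```shell
# Pick the libvirt virt_type from /proc/cpuinfo; missing file or no
# vmx/svm flags both fall back to qemu.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${count:-0}" -gt 0 ]; then
  virt_type=kvm
else
  virt_type=qemu
fi
echo "virt_type=${virt_type}"
```

The printed line is exactly what belongs under [libvirt] in nova.conf.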

 

4)  Start the nova-compute services and enable them at boot

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

 

V.  Installing the Dashboard

Installed on the controller node.

1)  Install the RPM with yum

# yum install openstack-dashboard

2)  Edit the Dashboard configuration file

/etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['horizon.example.com', 'localhost','192.168.56.101']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

 

CACHES = {

    'default': {

         'BACKEND':'django.core.cache.backends.memcached.MemcachedCache',

        'LOCATION': 'controller:11211',

    }

}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" %OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {

   "identity": 3,

   "image": 2,

   "volume": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {

    ...

   'enable_router': False,

   'enable_quotas': False,

   'enable_distributed_router': False,

   'enable_ha_router': False,

    'enable_lb':False,

   'enable_firewall': False,

    'enable_vpn':False,

   'enable_fip_topology_check': False,

}

TIME_ZONE = "TIME_ZONE"

 

 

3)  Restart the Dashboard services

# systemctl restart httpd.service memcached.service

4)  Verify that you can log in to the Dashboard

http://192.168.56.101/dashboard  (the controller IP)

 

VI.  Installing Neutron

Install and configure the controller node

1)  Create the neutron database on the MySQL node

 

$ mysql -u root -p123456

mysql> CREATE DATABASE neutron;

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';

mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

mysql> exit

 

2)  In Keystone, create the neutron user, role, service, and endpoints

# openstack user create --domain default --password-prompt neutron

# openstack role add --project service --user neutron admin

# openstack service create --name neutron --description "OpenStack Networking" network

# openstack endpoint create --region RegionOne network public http://controller:9696

# openstack endpoint create --region RegionOne network internal http://controller:9696

# openstack endpoint create --region RegionOne network admin http://controller:9696

 

3)  Install the Neutron packages; ML2 serves as the layer-2 core plugin

 

$ yum install openstack-neutron openstack-neutron-ml2  openstack-neutron-linuxbridge ebtables

 

4)  Edit the Neutron configuration file /etc/neutron/neutron.conf

 

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:openstack123@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = NEUTRON_PASS

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

 

5)  Configure ML2

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

 

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

 

[ml2_type_flat]

flat_networks = provider

 

[ml2_type_vxlan]

vni_ranges = 1:1000

 

[securitygroup]

enable_ipset = True

 

6)  Configure Nova to use Neutron for network services

Edit /etc/nova/nova.conf

 

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 123456

service_metadata_proxy = true

metadata_proxy_shared_secret = 123456

 

7)  Create a symlink from ml2_conf.ini to plugin.ini

#  ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

8)  Populate the database

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \

   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

9)  Restart the compute and neutron services and enable them at boot

# systemctl restart openstack-nova-api.service

# systemctl enable neutron-server.service

# systemctl start neutron-server.service

Configure the network node

1)  Preparation

Edit /etc/sysctl.conf

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

 

Reload the kernel parameters

# sysctl -p

 

2)  Install the OpenStack networking packages

# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

3)  Configure /etc/neutron/neutron.conf

 

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = true

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

 

4)  Configure ML2 on the network node

Edit /etc/neutron/plugins/ml2/ml2_conf.ini

 

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

 

 

[ml2_type_flat]

flat_networks = provider

 

[ml2_type_vxlan]

vni_ranges = 1:1000

 

[securitygroup]

enable_ipset = True

 

 

5)  Configure the Linux bridge agent

Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]

enable_vxlan = true

local_ip = OVERLAY_INTERFACE_IP_ADDRESS

l2_population = true

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: replace PROVIDER_INTERFACE_NAME and OVERLAY_INTERFACE_IP_ADDRESS with the actual NIC name and IP.
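As a sketch of how the two placeholders map onto this guide's topology: eth1 carries the data/tunnel network, so the network node's overlay IP is 192.168.56.102 (section I); the provider interface below is assumed to be eth0, so substitute your own values.

```shell
# Render the [linux_bridge]/[vxlan] settings with real values filled in.
# eth0 / 192.168.56.102 are this guide's network-node values (assumed).
PROVIDER_IF=eth0
OVERLAY_IP=192.168.56.102
config=$(cat <<EOF
[linux_bridge]
physical_interface_mappings = provider:${PROVIDER_IF}

[vxlan]
enable_vxlan = true
local_ip = ${OVERLAY_IP}
l2_population = true
EOF
)
echo "$config"
```

The printed fragment can then be merged into linuxbridge_agent.ini.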

 

6)  Configure l3_agent.ini

 

[DEFAULT]

interface_driver = linuxbridge

 

7)  Configure the DHCP agent; edit dhcp_agent.ini

 

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

 

8)  Configure the metadata agent; edit metadata_agent.ini

 

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = 123456

 

9)  Create the symlink

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

 

10)  Start the services.

 

# systemctl enable neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# systemctl start neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

# systemctl enable neutron-l3-agent.service


 

# systemctl start neutron-l3-agent.service

 

 

 

11)  (Only if the deployment uses Open vSwitch instead of Linux bridge) enable and start the neutron-openvswitch-agent services

# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \

neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

 

 

Configure the compute node

1)  Preparation

## Edit the sysctl configuration, /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

## Reload the configuration

sysctl -p

 

2)  Install the Neutron layer-2 agent

# yum install openstack-neutron-linuxbridge ebtables ipset

 

3)  Configure the compute node's network settings, /etc/neutron/neutron.conf

 

[DEFAULT]

transport_url = rabbit://openstack:openstack123@controller

auth_strategy = keystone

 

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = openstack123

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

 

4)  Configure the Linux bridge agent

# Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

 

[vxlan]

enable_vxlan = true

local_ip = OVERLAY_INTERFACE_IP_ADDRESS

l2_population = true

 

 

[securitygroup]

enable_security_group = true

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: change PROVIDER_INTERFACE_NAME and OVERLAY_INTERFACE_IP_ADDRESS to this node's NIC name and IP.

 

5)  Edit /etc/nova/nova.conf on the compute node so that Nova uses Neutron for network services

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = 123456

 

 

6)  Start the services and enable them at boot

 

# systemctl restart openstack-nova-compute.service

# systemctl enable neutron-linuxbridge-agent.service

# systemctl start neutron-linuxbridge-agent.service

 

Installing the FWaaS service

1)  Install fwaas (controller and network nodes)

# yum install openstack-neutron-fwaas

2)  Edit /etc/neutron/neutron.conf on the controller and network nodes

##### Add firewall to service_plugins

service_plugins = router,firewall

[service_providers]

service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default

 

3)  On the network node, edit the /etc/neutron/fwaas_driver.ini configuration file

[fwaas]

driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver

enabled = True

agent_version = v1

conntrack_driver = conntrack

 

4)  On the network node, edit the /etc/neutron/l3_agent.ini configuration file.

[agent]

extensions = fwaas

5)  Create the DB tables.

# neutron-db-manage --subproject neutron-fwaas upgrade head

 

6)  Restart neutron-server and neutron-l3-agent

 

#### restart neutron server @ controller node

# systemctl restart neutron-server

 

#####restart neutron-l3-agent @network node

# systemctl restart neutron-l3-agent

 

7)  Edit the dashboard configuration to enable FWaaS

## Download and install the plugin code

# git clone https://github.com/openstack/neutron-fwaas-dashboard.git

# cd neutron-fwaas-dashboard

# python setup.py install

# cp neutron_fwaas_dashboard/enabled/_7010_project_firewalls_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

 

### On the node where openstack-dashboard is installed, edit /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {

    'enable_firewall':True,

}

 

8)  Restart the dashboard

# systemctl restart httpd.service

 

Installing the LoadBalancer service

1)  Install lbaas

 

# yum install openstack-neutron-lbaas

2)  Edit /etc/neutron/neutron.conf on the controller and network nodes

 

[DEFAULT]

service_plugins = router,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

3)  Edit /etc/neutron/neutron_lbaas.conf on the controller and network nodes

[service_providers]

service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

 

4)  Edit /etc/neutron/lbaas_agent.ini on the network node

[DEFAULT]

interface_driver = linuxbridge

[haproxy]

user_group = haproxy

 

5)  Update the Neutron DB

neutron-db-manage --subproject neutron-lbaas upgrade head

 

6)  Restart neutron-server on the controller node and neutron-lbaasv2-agent on the network node

@controller node

# systemctl restart neutron-server

@network node

# systemctl enable neutron-lbaasv2-agent

# systemctl restart neutron-lbaasv2-agent

 

7)  Install the dashboard plugin on the node where openstack-dashboard is installed

## Download and install the plugin code

# git clone https://github.com/openstack/neutron-lbaas-dashboard.git

# cd neutron-lbaas-dashboard/

# python setup.py install

# cp neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py /usr/share/openstack-dashboard/openstack_dashboard/enabled/

 

On the node where openstack-dashboard is installed, edit /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {

    'enable_lb':True,

}

 

8)  Restart the dashboard

# systemctl restart httpd.service

 

9)  Notes:

-  After creating a pool with the OpenStack LBaaS service and adding members, a VIP, and a health monitor, if a member shows as INACTIVE, check whether the port configured for that member is actually open on the VM.

-  This installation implements LB with haproxy. The Pike release adds a new implementation, the Load-balancer service (Octavia).

 

Installing the VPN service

1)  Install openstack-neutron-vpnaas on the controller and network nodes

 

# yum install openstack-neutron-vpnaas

 

2)  Install libreswan on the network node

Note: several IPsec implementations are available; libreswan is used here.

 

Notes:

Use libreswan version 3.15 or 3.16.

##### Install libreswan

# rpm -ivh libreswan-3.16-1.el7_2.x86_64

### Run the following command

sysctl -a | egrep "ipv4.*(accept|send)_redirects" | awk -F "=" '{print $1"= 0"}' >> /etc/sysctl.conf

##### /etc/sysctl.conf currently contains

net.ipv4.ip_forward = 0

net.ipv4.conf.default.rp_filter = 1

##### Change it to

net.ipv4.ip_forward = 1

net.ipv4.conf.default.rp_filter = 0

#### Then run

sysctl -p

 

Run the following command to verify that libreswan installed correctly

#ipsec --version

Start ipsec and verify it

# systemctl start ipsec

# ipsec verify

 

3)  Edit /etc/neutron/neutron.conf on the controller and network nodes

 

[DEFAULT]

service_plugins = router,firewall,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin

 

4)  Edit /etc/neutron/neutron_vpnaas.conf on the controller and network nodes

 

[DEFAULT]

service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

 

 

5)  Edit /etc/neutron/vpn_agent.ini on the network node

[DEFAULT]

interface_driver = linuxbridge

[vpnagent]

vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver

6)  Create the DB tables.

# neutron-db-manage --subproject neutron-vpnaas upgrade head

                                                             

7)  Stop neutron-l3-agent on the network node

# systemctl stop neutron-l3-agent

# systemctl disable neutron-l3-agent.service

 

8)  On the network node, enable and start the neutron-vpn-agent service (it takes over from neutron-l3-agent)

# systemctl enable neutron-vpn-agent

# systemctl start neutron-vpn-agent

9)  Install the dashboard plugin on the node where openstack-dashboard is installed

### Download the plugin code

# git clone https://github.com/openstack/neutron-vpnaas-dashboard.git

#cd neutron-vpnaas-dashboard

# python setup.py install

# cp neutron_vpnaas_dashboard/enabled/_7100_project_vpn_panel.py* /usr/share/openstack-dashboard/openstack_dashboard/enabled/

 

### On the node where openstack-dashboard is installed, edit /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {

    'enable_vpn':True,

}

      

 

10)  Restart the dashboard

# systemctl restart httpd

 

11)  Note:

neutron-vpn-agent and neutron-l3-agent cannot be deployed and running at the same time.

VII.  Installing Cinder

Install and configure the storage node

Prerequisites

1)  Install the LVM packages, start the LVM metadata service, and enable it at boot.

# yum install lvm2

# systemctl enable lvm2-lvmetad.service

# systemctl start lvm2-lvmetad.service

2)  Create an LVM physical volume on /dev/sdb

# pvcreate /dev/sdb

Physical volume "/dev/sdb" successfully created

[Note] /dev/sdb is the device name; check it with fdisk -l.

3)  Create the LVM volume group cinder-volumes

# vgcreate cinder-volumes /dev/sdb

Volume group "cinder-volumes" successfullycreated

 

4)  Edit /etc/lvm/lvm.conf so that instances can access volumes

In the devices section, add the following filter.

devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}
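Each filter element is accept ("a") or reject ("r") followed by a regular expression; devices are tested against the list in order and the first match wins. If the storage node's operating system itself sits on an LVM device (say /dev/sda), that device must be accepted as well, for example:

```
devices {
    # first match wins: accept sda and sdb, reject every other block device
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
```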

 

Install and configure the components

5)  Install the RPM packages.

# yum install openstack-cinder targetcli python-keystone

6)  Edit /etc/cinder/cinder.conf and complete the following configuration.

In the [database] section, configure database access.

[database]

# ...

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[Note] Replace CINDER_DBPASS with the chosen password.

 

In the [DEFAULT] section, configure the RabbitMQ access URL.

[DEFAULT]

# ...

transport_url = rabbit://openstack:RABBIT_PASS@controller

[Note] Replace RABBIT_PASS with the chosen RabbitMQ password.

 

In the [DEFAULT] and [keystone_authtoken] sections, configure the following.

[DEFAULT]

# ...

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = CINDER_PASS

[Note] Replace CINDER_PASS with the password you chose.

 

       [DEFAULT] 部分,配置下面内容。

[DEFAULT]

# ...

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

[Note] Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the actual management-network IP of the storage node.

[lvm] 部分,配置下面信息,如果不存在lvm部分,则追加。

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

iscsi_helper = lioadm

[DEFAULT]部分配置下面内容。

[DEFAULT]

# ...

enabled_backends = lvm
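enabled_backends takes a comma-separated list of configuration section names, and each named section defines one storage backend; here the single entry lvm points at the [lvm] section above. As an illustrative sketch only (the section name lvm2, the volume group cinder-volumes-2, and the backend name LVM2 are hypothetical, not part of this deployment), a second LVM backend would look like:

```ini
[DEFAULT]
enabled_backends = lvm,lvm2

[lvm2]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-2
volume_backend_name = LVM2
```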

 

[DEFAULT]部分配置下面内容。

[DEFAULT]

# ...

glance_api_servers = http://controller:9292

[oslo_concurrency] 部分配置下面内容。

[oslo_concurrency]

# ...

lock_path = /var/lib/cinder/tmp

7)        Start the Cinder volume service and enable it at boot.

 

# systemctl enable openstack-cinder-volume.service target.service

# systemctl start openstack-cinder-volume.service target.service

Install and configure the controller node

  Prerequisites

1)        Create the database; complete the following steps.

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';

MariaDB [(none)]> exit

[Note] Replace CINDER_DBPASS with a password of your choice.
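The same CREATE/GRANT pattern recurs for every OpenStack service database (cinder here, heat later), so it can be generated instead of retyped. A small sketch (db_bootstrap_sql is a hypothetical helper name, not part of this guide) that prints the statements so they can be reviewed and then piped into the client:

```shell
#!/bin/sh
# Hypothetical helper: print the CREATE/GRANT statements for one service DB.
# $1 = database/user name, $2 = password to assign.
db_bootstrap_sql() {
  cat <<SQL
CREATE DATABASE $1;
GRANT ALL PRIVILEGES ON $1.* TO '$1'@'localhost' IDENTIFIED BY '$2';
GRANT ALL PRIVILEGES ON $1.* TO '$1'@'%' IDENTIFIED BY '$2';
SQL
}

# Example: generate the cinder statements; pipe into `mysql -u root -p` to apply.
sql=$(db_bootstrap_sql cinder CINDER_DBPASS)
echo "$sql"
```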

2)        Source the admin-openrc credentials.

 

$ . admin-openrc

 

3)        Create the service credentials; complete the following steps.

 

a.      Create the cinder user.

$ openstack user create --domain default --password-prompt cinder

 

User Password:

Repeat User Password:

+---------------------+----------------------------------+

| Field              | Value                           |

+---------------------+----------------------------------+

| domain_id          | default                         |

| enabled            | True                            |

| id                 | 9d7e33de3e1a498390353819bc7d245d |

| name               | cinder                          |

| options            | {}                              |

| password_expires_at | None                             |

+---------------------+----------------------------------+

b.     Grant the admin role to the cinder user.

$ openstack role add --project service --user cinder admin

c.      Create the cinderv2 and cinderv3 service entities.

$ openstack service create --name cinderv2 \

  --description "OpenStackBlock Storage" volumev2

 

+-------------+----------------------------------+

| Field       |Value                            |

+-------------+----------------------------------+

| description | OpenStack Block Storage          |

| enabled     |True                             |

| id          |eb9fd245bdbc414695952e93f29fe3ac |

| name        |cinderv2                         |

| type        |volumev2                         |

+-------------+----------------------------------+

 

$ openstack service create --name cinderv3 \

  --description "OpenStackBlock Storage" volumev3

 

+-------------+----------------------------------+

| Field       |Value                            |

+-------------+----------------------------------+

| description | OpenStack Block Storage          |

| enabled     |True                             |

| id          |ab3bbbef780845a1a283490d281e7fda |

| name        |cinderv3                         |

| type        |volumev3                         |

+-------------+----------------------------------+

 

4)        Create the service API endpoints.

$ openstack endpoint create --region RegionOne \

  volumev2 public http://controller:8776/v2/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                   |

+--------------+------------------------------------------+

| enabled      |True                                    |

| id           |513e73819e14460fb904163f41ef3759        |

| interface    |public                                   |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |eb9fd245bdbc414695952e93f29fe3ac        |

| service_name | cinderv2                                 |

| service_type | volumev2                                 |

| url          |http://controller:8776/v2/%(project_id)s |

+--------------+------------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  volumev2 internal http://controller:8776/v2/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                   |

+--------------+------------------------------------------+

| enabled      |True                                     |

| id           |6436a8a23d014cfdb69c586eff146a32        |

| interface    |internal                                |

| region       |RegionOne                               |

| region_id    |RegionOne                                |

| service_id   |eb9fd245bdbc414695952e93f29fe3ac        |

| service_name | cinderv2                                 |

| service_type | volumev2                                 |

| url          |http://controller:8776/v2/%(project_id)s |

+--------------+------------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  volumev2 admin http://controller:8776/v2/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                   |

+--------------+------------------------------------------+

| enabled      |True                                    |

| id           |e652cf84dd334f359ae9b045a2c91d96        |

| interface    |admin                                    |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |eb9fd245bdbc414695952e93f29fe3ac        |

| service_name | cinderv2                                 |

| service_type | volumev2                                 |

| url          |http://controller:8776/v2/%(project_id)s |

+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \

  volumev3 public http://controller:8776/v3/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                   |

+--------------+------------------------------------------+

| enabled      |True                                     |

| id           |03fa2c90153546c295bf30ca86b1344b        |

| interface    |public                                  |

| region       |RegionOne                               |

| region_id    |RegionOne                                |

| service_id   |ab3bbbef780845a1a283490d281e7fda        |

| service_name | cinderv3                                 |

| service_type | volumev3                                 |

| url          |http://controller:8776/v3/%(project_id)s |

+--------------+------------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  volumev3 internal http://controller:8776/v3/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                    |

+--------------+------------------------------------------+

| enabled      |True                                    |

| id           |94f684395d1b41068c70e4ecb11364b2        |

| interface    |internal                                 |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |ab3bbbef780845a1a283490d281e7fda        |

| service_name | cinderv3                                 |

| service_type | volumev3                                 |

| url          |http://controller:8776/v3/%(project_id)s |

+--------------+------------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  volumev3 admin http://controller:8776/v3/%\(project_id\)s

 

+--------------+------------------------------------------+

| Field        |Value                                   |

+--------------+------------------------------------------+

| enabled      |True                                     |

| id           |4511c28a0f9840c78bacb25f10f62c98        |

| interface    |admin                                   |

| region       |RegionOne                               |

| region_id    |RegionOne                                |

| service_id   |ab3bbbef780845a1a283490d281e7fda        |

| service_name | cinderv3                                 |

| service_type | volumev3                                 |

| url          |http://controller:8776/v3/%(project_id)s |

+--------------+------------------------------------------+
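The six endpoint commands above differ only in the service name, the interface, and the API version, so they can be generated mechanically. A sketch that prints the commands first so typos are easy to spot; drop the echo (and source admin-openrc) to actually run them:

```shell
#!/bin/sh
# Sketch: generate the six cinder endpoint-create commands
# (public/internal/admin for each of volumev2 and volumev3).
cmds=$(
  for svc in volumev2:v2 volumev3:v3; do
    name=${svc%%:*}; ver=${svc##*:}
    for iface in public internal admin; do
      echo "openstack endpoint create --region RegionOne $name $iface http://controller:8776/$ver/%(project_id)s"
    done
  done
)
echo "$cmds"
```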

 

  Install and configure the components

1)        Install the openstack-cinder package.

# yum install openstack-cinder

2)        Edit /etc/cinder/cinder.conf and complete the following configuration.

a.  In the [database] section, configure the following.

[database]

# ...

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[Note] Replace CINDER_DBPASS with the actual password.

b.  In the [DEFAULT] section, configure the following.

[DEFAULT]

# ...

transport_url = rabbit://openstack:RABBIT_PASS@controller

[Note] Replace RABBIT_PASS with the actual password.

c.  In the [DEFAULT] and [keystone_authtoken] sections, configure the following.

[DEFAULT]

# ...

auth_strategy = keystone

 

[keystone_authtoken]

# ...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = CINDER_PASS

[Note] Replace CINDER_PASS with the actual password.

d.  In the [DEFAULT] section, configure the following.

[DEFAULT]

# ...

my_ip = 10.0.0.11

[Note] Replace 10.0.0.11 with the actual management-network IP of this node.

e.   In the [oslo_concurrency] section, configure the following.

[oslo_concurrency]

# ...

lock_path = /var/lib/cinder/tmp

3)        Populate the database. Ignore any deprecation messages in the output.

# su -s /bin/sh -c "cinder-manage db sync" cinder

Install and configure the compute node

1)        Edit /etc/nova/nova.conf and configure the following.

[cinder]

os_region_name = RegionOne

Restart the services on the controller node

1)        Restart openstack-nova-api.

# systemctl restart openstack-nova-api.service

2)        Start the Cinder services and enable them at boot.

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

八、  Install Heat

Install and configure the controller node

1)        Create the Heat database; complete the following steps.

$ mysql -u root -p

CREATE DATABASE heat;

GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \

  IDENTIFIED BY 'HEAT_DBPASS';

GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \

  IDENTIFIED BY 'HEAT_DBPASS';

 

[Note] Replace HEAT_DBPASS with a password of your choice.

2)        Source the admin credentials.

$ . admin-openrc

3)        Create the service credentials; complete the following steps.

a.    Create the heat user.

$ openstack user create --domain default --password-prompt heat

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     |Value                            |

+-----------+----------------------------------+

| domain_id | e0353a670a9e496da891347c589539e9 |

| enabled   |True                             |

| id        |ca2e175b851943349be29a328cc5e360 |

| name      |heat                             |

+-----------+----------------------------------+

b.    Grant the admin role to the heat user.

$ openstack role add --project service --user heat admin

c.    Create the heat and heat-cfn service entities.

$ openstack service create --name heat \

  --description "Orchestration"orchestration

+-------------+----------------------------------+

| Field       |Value                            |

+-------------+----------------------------------+

| description | Orchestration                    |

| enabled     |True                             |

| id          |727841c6f5df4773baa4e8a5ae7d72eb |

| name        |heat                             |

| type        |orchestration                    |

+-------------+----------------------------------+

 

$ openstack service create --name heat-cfn \

  --description "Orchestration"  cloudformation

+-------------+----------------------------------+

| Field       |Value                            |

+-------------+----------------------------------+

| description | Orchestration                    |

| enabled     |True                             |

| id          |c42cede91a4e47c3b10c8aedc8d890c6 |

| name        |heat-cfn                         |

| type        |cloudformation                   |

+-------------+----------------------------------+

4)        Create the Orchestration service API endpoints.

$ openstack endpoint create --region RegionOne \

  orchestration public http://controller:8004/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        |Value                                   |

+--------------+-----------------------------------------+

| enabled      |True                                    |

| id           |3f4dab34624e4be7b000265f25049609        |

| interface    |public                                  |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |727841c6f5df4773baa4e8a5ae7d72eb        |

| service_name | heat                                    |

| service_type | orchestration                           |

| url          |http://controller:8004/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  orchestration internal http://controller:8004/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        |Value                                   |

+--------------+-----------------------------------------+

| enabled      |True                                    |

| id           | 9489f78e958e45cc85570fec7e836d98        |

| interface    |internal                                |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |727841c6f5df4773baa4e8a5ae7d72eb        |

| service_name | heat                                    |

| service_type | orchestration                           |

| url          |http://controller:8004/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

 

$ openstack endpoint create --region RegionOne \

  orchestration admin http://controller:8004/v1/%\(tenant_id\)s

+--------------+-----------------------------------------+

| Field        |Value                                   |

+--------------+-----------------------------------------+

| enabled      |True                                    |

| id           |76091559514b40c6b7b38dde790efe99        |

| interface    |admin                                   |

| region       |RegionOne                               |

| region_id    |RegionOne                               |

| service_id   |727841c6f5df4773baa4e8a5ae7d72eb        |

| service_name | heat                                    |

| service_type | orchestration                           |

| url          |http://controller:8004/v1/%(tenant_id)s |

+--------------+-----------------------------------------+

$ openstack endpoint create --region RegionOne \

  cloudformation public http://controller:8000/v1

+--------------+----------------------------------+

| Field        |Value                            |

+--------------+----------------------------------+

| enabled      |True                             |

| id           |b3ea082e019c4024842bf0a80555052c |

| interface    |public                           |

| region       |RegionOne                        |

| region_id    |RegionOne                        |

| service_id   |c42cede91a4e47c3b10c8aedc8d890c6 |

| service_name | heat-cfn                         |

| service_type | cloudformation                   |

| url          |http://controller:8000/v1        |

+--------------+----------------------------------+

 

$ openstack endpoint create --region RegionOne \

  cloudformation internal http://controller:8000/v1

+--------------+----------------------------------+

| Field        |Value                            |

+--------------+----------------------------------+

| enabled      |True                             |

| id           |169df4368cdc435b8b115a9cb084044e |

| interface    |internal                         |

| region       |RegionOne                        |

| region_id    |RegionOne                        |

| service_id   |c42cede91a4e47c3b10c8aedc8d890c6 |

| service_name | heat-cfn                         |

| service_type | cloudformation                   |

| url          |http://controller:8000/v1        |

+--------------+----------------------------------+

 

$ openstack endpoint create --region RegionOne \

  cloudformation admin http://controller:8000/v1

+--------------+----------------------------------+

| Field        |Value                            |

+--------------+----------------------------------+

| enabled      |True                             |

| id           |3d3edcd61eb343c1bbd629aa041ff88b |

| interface    | admin                            |

| region       |RegionOne                        |

| region_id    |RegionOne                        |

| service_id   |c42cede91a4e47c3b10c8aedc8d890c6 |

| service_name | heat-cfn                         |

| service_type | cloudformation                   |

| url          |http://controller:8000/v1        |

+--------------+----------------------------------+

 

5)        Orchestration requires additional Identity configuration in order to manage stacks. Complete the following steps.

a.     Create the heat domain.

$ openstack domain create --description "Stack projects and users" heat

+-------------+----------------------------------+

| Field       |Value                            |

+-------------+----------------------------------+

| description | Stack projects and users         |

| enabled     |True                             |

| id          |0f4d1bd326f2454dacc72157ba328a47 |

| name        |heat                             |

+-------------+----------------------------------+

b.     Create the heat_domain_admin user.

$ openstack user create --domain heat --password-prompt heat_domain_admin

User Password:

Repeat User Password:

+-----------+----------------------------------+

| Field     |Value                            |

+-----------+----------------------------------+

| domain_id | 0f4d1bd326f2454dacc72157ba328a47 |

| enabled   |True                             |

| id        |b7bd1abfbcf64478b47a0f13cd4d970a |

| name      |heat_domain_admin                |

+-----------+----------------------------------+

c.     Grant the admin role to the heat_domain_admin user.

$ openstack role add --domain heat --user-domain heat --user heat_domain_admin admin

d.     Create the heat_stack_owner role:

$ openstack role create heat_stack_owner

+-----------+----------------------------------+

| Field     |Value                            |

+-----------+----------------------------------+

| domain_id | None                             |

| id        |15e34f0c4fed4e68b3246275883c8630 |

| name      |heat_stack_owner                 |

+-----------+----------------------------------+

e.     Add the heat_stack_owner role to the demo project and user.

$ openstack role add --project demo --user demo heat_stack_owner

f.      Create the heat_stack_user role.

$ openstack role create heat_stack_user

+-----------+----------------------------------+

| Field     |Value                            |

+-----------+----------------------------------+

| domain_id | None                             |

| id        |88849d41a55d4d1d91e4f11bffd8fc5c |

| name      |heat_stack_user                  |

+-----------+----------------------------------+

6)        Install the Heat packages.

# yum install openstack-heat-api openstack-heat-api-cfn \

 openstack-heat-engine

7)        Edit /etc/heat/heat.conf and complete the following configuration.

a. In the [database] section, configure the following.

[database]

...

connection = mysql+pymysql://heat:HEAT_DBPASS@controller/heat

[Note] Replace HEAT_DBPASS with the actual password.

b.  In the [DEFAULT] section, configure the following.

[DEFAULT]

...

transport_url = rabbit://openstack:RABBIT_PASS@controller

[Note] Replace RABBIT_PASS with the actual password.

c. In the [keystone_authtoken], [trustee], [clients_keystone], and [ec2authtoken] sections, configure the following.

[keystone_authtoken]

...

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = heat

password = HEAT_PASS

 

[trustee]

...

auth_type = password

auth_url = http://controller:35357

username = heat

password = HEAT_PASS

user_domain_name = default

[clients_keystone]

...

auth_uri = http://controller:35357

[ec2authtoken]

...

auth_uri = http://controller:5000/v3

[Note] Replace HEAT_PASS with the actual password.

d.   In the [DEFAULT] section, configure the following.

[DEFAULT]

...

heat_metadata_server_url = http://controller:8000

heat_waitcondition_server_url = http://controller:8000/v1/waitcondition

e.  In the [DEFAULT] section, configure the following.

[DEFAULT]

...

stack_domain_admin = heat_domain_admin

stack_domain_admin_password = HEAT_DOMAIN_PASS

stack_user_domain_name = heat

[Note] Replace HEAT_DOMAIN_PASS with the actual password.

8)        Populate the Orchestration database.

# su -s /bin/sh -c "heat-manage db_sync" heat

9)        Start the Heat services and enable them at boot.

# systemctl enable openstack-heat-api.service \

 openstack-heat-api-cfn.service openstack-heat-engine.service

# systemctl start openstack-heat-api.service \

 openstack-heat-api-cfn.service openstack-heat-engine.service

 

Verify operation

. admin-openrc

$ openstack orchestration service list

+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+

| hostname   |binary      | engine_id                            | host       | topic | updated_at                 |status |

+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+

| controller | heat-engine |3e85d1ab-a543-41aa-aa97-378c381fb958 | controller | engine |2015-10-13T14:16:06.000000 | up     |

| controller | heat-engine |45dbdcf6-5660-4d5f-973a-c4fc819da678 | controller | engine |2015-10-13T14:16:06.000000 | up     |

| controller | heat-engine |51162b63-ecb8-4c6c-98c6-993af899c4f7 | controller | engine |2015-10-13T14:16:06.000000 | up     |

| controller | heat-engine |8d7edc6d-77a6-460d-bd2a-984d76954646 | controller | engine |2015-10-13T14:16:06.000000 | up     |

+------------+-------------+--------------------------------------+------------+--------+----------------------------+--------+

 

 

Copyright notice: this is an original article by the author; reproduction without permission is prohibited. Source: https://blog.csdn.net/weixin_39992639/article/details/79033584