OpenStack Queens two-node installation

1. Basic environment configuration

Reference: https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens
Lab environment:
Two virtual machines (CentOS 7.5)
Each VM needs two network interfaces (NICs)

1. Configure IP addresses
On VMware virtual machines it looks like this:
Controller node

[root@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
[root@controller ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
IPADDR=192.168.200.10
NETMASK=255.255.255.0
[root@controller ~]# systemctl restart network   // restart networking

Compute node

[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
[root@compute ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE="eth1"
BOOTPROTO="static"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"
PERSISTENT_DHCLIENT="1"
IPADDR=192.168.200.20
NETMASK=255.255.255.0
[root@compute ~]# systemctl restart network   // restart networking

Here eth0 and eth1 are the NIC names; on some systems they appear as enp8s0 or similar.
IPADDR is the IP address to assign; it must be in the same subnet as the network the NIC is attached to.
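The two ifcfg files differ only in DEVICE, IPADDR, and whether a GATEWAY is set, so a small helper keeps the controller and compute configs consistent. This is a sketch: the `write_ifcfg` function and the local `ifcfg-demo` output directory are our own; on a real node, adapt the values and copy the result into /etc/sysconfig/network-scripts/.

```shell
# Render an ifcfg file from parameters into a local demo directory.
outdir=ifcfg-demo
mkdir -p "$outdir"

write_ifcfg() {
    dev=$1; ip=$2; gw=${3:-}
    {
        printf 'DEVICE="%s"\nBOOTPROTO="static"\nONBOOT="yes"\nTYPE="Ethernet"\n' "$dev"
        printf 'IPADDR=%s\nNETMASK=255.255.255.0\n' "$ip"
        # only the management NIC gets a default gateway
        if [ -n "$gw" ]; then printf 'GATEWAY=%s\n' "$gw"; fi
    } > "$outdir/ifcfg-$dev"
}

# controller values from the text
write_ifcfg eth0 192.168.100.10 192.168.100.1
write_ifcfg eth1 192.168.200.10
```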

2. Configure hostnames
Controller node

[root@controller-1 ~]# hostnamectl set-hostname controller
[root@controller-1 ~]# bash
[root@controller ~]# 

Compute node

[root@controller-1 ~]# hostnamectl set-hostname compute
[root@controller-1 ~]# bash
[root@compute ~]# 

3. Configure hostname mapping

Apply this on both controller and compute:

[root@controller ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    
192.168.100.10 controller
192.168.100.20 compute
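The two mappings can also be appended idempotently, so re-running the setup does not duplicate entries. A sketch, using a scratch copy (`hosts.demo`) and our own `add_host` helper; point HOSTS at /etc/hosts on a real node.

```shell
HOSTS=hosts.demo
printf '127.0.0.1   localhost\n' > "$HOSTS"

# append "ip name" only if the name is not already mapped
add_host() {
    grep -q " $2\$" "$HOSTS" || printf '%s %s\n' "$1" "$2" >> "$HOSTS"
}

add_host 192.168.100.10 controller
add_host 192.168.100.20 compute
add_host 192.168.100.10 controller   # second call is a no-op
cat "$HOSTS"
```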

4. Disable the firewall and configure SELinux
Apply this on both controller and compute:

[root@controller ~]# systemctl stop firewalld
[root@controller ~]# systemctl disable firewalld
[root@controller ~]# vi /etc/selinux/config 
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled     								 // change this line to SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

// run this command only after /etc/selinux/config has been modified
[root@controller ~]#  reboot
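The manual edit above can also be done with a sed one-liner; shown here on a scratch copy of the file (point it at /etc/selinux/config on a real node, and run `setenforce 0` if you want enforcement off immediately without a reboot).

```shell
# Scratch copy standing in for /etc/selinux/config.
cfg=selinux.config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Rewrite only the SELINUX= line; SELINUXTYPE= is untouched
# because the pattern requires '=' right after SELINUX.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
cat "$cfg"
```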

5. Configure the Aliyun yum repository
Apply this on both controller and compute:

[root@controller ~]# rm -rfv /etc/yum.repos.d/*
[root@controller ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@controller ~]# yum clean all
[root@controller ~]# yum list

If the node cannot reach the Internet, download the repo file elsewhere and upload it.
If you have an ISO file you can build a local yum repository;
that will be covered in a separate post and is not shown here.

6. Install the NTP time service (all nodes)

 [root@controller ~]# yum install chrony -y

Controller node
Edit the /etc/chrony.conf file (compute needs its own edits; see below)
Comment out these four lines:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
For example:

# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst   (leave a space after the # sign)

Append at the bottom:
local stratum 10
server controller iburst
allow 192.168.100.0/24
For example:

# Select which information is logged.
#log measurements statistics tracking
local stratum 10
server controller iburst
allow 192.168.100.0/24
[root@controller ~]# systemctl enable chronyd.service
[root@controller ~]# systemctl start chronyd.service
Verify the time synchronization service:
[root@controller ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* controller                   10  10   377  330m   -131ns[-2868ns] +/- 7190ns
Note: in "^* controller" the * means synchronization succeeded; a ? means it failed, so check the config file and restart the service (handle other errors the same way).
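The chrony.conf edits above can be scripted: comment out the pool servers, then append the three lines. A sketch on a scratch copy (`chrony.conf.demo`); run it against /etc/chrony.conf on the controller.

```shell
conf=chrony.conf.demo
# Stand-in for the stock file's pool-server lines.
printf 'server 0.centos.pool.ntp.org iburst\nserver 1.centos.pool.ntp.org iburst\nserver 2.centos.pool.ntp.org iburst\nserver 3.centos.pool.ntp.org iburst\n' > "$conf"

# Comment out every existing "server ..." line.
sed -i 's/^server /# server /' "$conf"

# Append the controller's NTP settings from the text.
cat >> "$conf" <<'EOF'
local stratum 10
server controller iburst
allow 192.168.100.0/24
EOF
```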

Compute node
Edit the /etc/chrony.conf file
Append at the very end:
server controller iburst

 [root@compute ~]# chronyc sources

7. Upgrade the packages (all nodes)

yum upgrade

After upgrading, delete any extra repo files that appear under /etc/yum.repos.d/.
8. Install the OpenStack client (all nodes)

Note: these packages come from the CentOS OpenStack Queens repository; if yum cannot find them, install the release package first with yum install centos-release-openstack-queens -y.

yum install python-openstackclient -y

9. Install openstack-selinux (all nodes)

yum install openstack-selinux -y

2. Install the database (run on the controller node)

yum install mariadb mariadb-server python2-PyMySQL -y

Edit /etc/my.cnf.d/mariadb-server.cnf

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

Add the following lines under the [mysqld] section:
bind-address = 192.168.100.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service

1. Secure the database

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):      // just press Enter here
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]       // type y and press Enter
New password:   					// enter a password
Re-enter new password: 			// enter the password again
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]    // type y and press Enter
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]       // type n and press Enter (this lab keeps remote root access)
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]  // type y and press Enter
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]    // type y and press Enter
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Check that the configuration works:
[root@controller ~]# mysql -uroot -p<password>

[root@controller ~]# mysql -uroot -p000000    // 000000 (six zeros) is the password used in this guide
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

2. Install and configure RabbitMQ on the controller node

[root@controller ~]# yum install rabbitmq-server -y
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service

Create the openstack user

[root@controller ~]# rabbitmqctl add_user openstack 000000
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Verify:
[root@controller ~]# rabbitmqctl list_users
Listing users ...
openstack       []                          // this line means the user was created successfully
guest   [administrator]

3. Install the Memcached caching service on the controller node

yum install memcached python-memcached -y

Edit /etc/sysconfig/memcached
and change the OPTIONS line so the service also listens on the controller's name:

OPTIONS="-l 127.0.0.1,::1,controller"

4. Install the Etcd service on the controller node
Install the package:

 yum install etcd -y

Edit /etc/etcd/etcd.conf

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.100.10:2380"         // controller IP
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.10:2379"       // controller IP
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="controller"		// controller hostname
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.10:2380"   // controller IP
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.10:2379"		// controller IP
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="controller=http://192.168.100.10:2380"	// controller hostname and IP
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"		// set to etcd-cluster-01
ETCD_INITIAL_CLUSTER_STATE="new"		// leave as new
#ETCD_STRICT_RECONFIG_CHECK="true"

Enable and start the service:

[root@controller ~]# systemctl enable etcd
[root@controller ~]# systemctl start etcd

3. Install the Keystone component on the controller node

Install Keystone

1. Create the keystone database and grant privileges

[root@controller ~]# mysql -u root -p000000
MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '000000';
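The same CREATE/GRANT pattern recurs later for glance, nova, and neutron. As a sketch (the `grants_for` helper is our own, not part of the guide), the SQL can be generated once and reviewed or piped into mysql:

```shell
# Print CREATE DATABASE plus the two GRANT statements for a service DB.
grants_for() {
    db=$1; user=$2; pw=$3
    printf 'CREATE DATABASE %s;\n' "$db"
    for host in localhost '%'; do
        printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s' IDENTIFIED BY '%s';\n" \
            "$db" "$user" "$host" "$pw"
    done
}

grants_for keystone keystone 000000 > grants.sql
cat grants.sql
# on the controller: grants_for keystone keystone 000000 | mysql -u root -p000000
```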

2. Install the required packages

yum install openstack-keystone httpd mod_wsgi -y

3. Edit /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:000000@controller/keystone
[token]
provider = fernet

4. Populate the Keystone database

su -s /bin/sh -c "keystone-manage db_sync" keystone

5. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

6. Bootstrap the Identity service

keystone-manage bootstrap --bootstrap-password 000000 --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the Apache HTTP server
1. Edit /etc/httpd/conf/httpd.conf

# This can often be determined automatically, but we recommend you specify
# it explicitly to prevent problems during startup.
#
# If your host doesn't have a registered DNS name, enter its IP address here.
#
#ServerName www.example.com:80
ServerName controller                  // this is the line to add
#
# Deny access to the entirety of your server's filesystem. You must
# explicitly permit access to web content directories in other
# <Directory> blocks below.
#
<Directory />
    AllowOverride none
    Require all denied
</Directory>

2. Create a link to /usr/share/keystone/wsgi-keystone.conf

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Start the service

systemctl enable httpd.service
systemctl start httpd.service

Create the administrative script

[root@controller ~]# vi administrative
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create a domain, projects, users, and roles
1. Create a domain

[root@controller ~]# source administrative
[root@controller ~]# openstack domain create --description "Domain" example

2. Create the service project

[root@controller ~]# openstack project create --domain default   --description "Service Project" service

3. Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

4. Create the demo user

[root@controller ~]# openstack user create --domain default  --password 000000 demo

5. Create the user role

[root@controller ~]# openstack role create user

6. Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Create the OpenStack client environment scripts
1. Create the admin-openrc script

[root@controller ~]# vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

2. Create the demo-openrc script

[root@controller ~]# vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=000000
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
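The two openrc files differ only in the project, user, and password, so a small generator (our own `make_openrc` helper, equivalent to the vi edits above) keeps them in sync. It writes admin-openrc and demo-openrc in the current directory, matching the filenames in the text.

```shell
# Write a <name>-openrc file for the given user/password.
make_openrc() {
    cat > "$1-openrc" <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=$1
export OS_USERNAME=$1
export OS_PASSWORD=$2
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}

make_openrc admin 000000
make_openrc demo 000000
```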

3. Verify

[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2018-04-01T08:17:29+0000                                                                                                                                                                |
| id         | gAAAAABawIeJ0z-3R2ltY6ublCGqZX80AIi4tQUxqEpw0xvPsFP9BLV8ALNsB2B7bsVivGB14KvhUncdoRl_G2ng5BtzVKAfzHyB-OxwiXeqAttkpQsuLCDKRHd3l-K6wRdaDqfNm-D1QjhtFoxHOTotOcjtujBHF12uP49TjJtl1Rrd6uVDk0g |
| project_id | 4205b649750d4ea68ff5bea73de0faae                                                                                                                                                        |
| user_id    | 475b31138acc4cc5bb42ca64af418963                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

4. Install the Glance service on the controller node

1. Create the glance database

[root@controller ~]# mysql -u root -p000000
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '000000';

2. Create the glance user

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --domain default --password 000000 glance

3. Add the admin role to the glance user and the service project

[root@controller ~]# openstack role add --project service --user glance admin

4. Create the glance service entity

[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image

5. Create the Image service API endpoints

[root@controller ~]# openstack endpoint create --region RegionOne  image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292
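Every service registers the same public/internal/admin triple, so a loop can generate the three commands. A sketch: the `make_endpoints` helper is ours, and `echo` makes it a dry run — drop the `echo` on a real controller after sourcing admin-openrc.

```shell
# Print the three endpoint-create commands for a service.
make_endpoints() {
    svc=$1; url=$2
    for iface in public internal admin; do
        echo openstack endpoint create --region RegionOne "$svc" "$iface" "$url"
    done
}

make_endpoints image http://controller:9292 > endpoints.txt
cat endpoints.txt
```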

Install and configure the Glance components
1. Install openstack-glance

[root@controller ~]# yum install openstack-glance -y

2. Edit /etc/glance/glance-api.conf

[database]
connection = mysql+pymysql://glance:000000@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

3. Edit /etc/glance/glance-registry.conf

[database]
connection = mysql+pymysql://glance:000000@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 000000

[paste_deploy]
flavor = keystone

4. Populate the Image service database and start the services

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service  openstack-glance-registry.service

Verify:

[root@controller ~]# glance image-create --name "centos7" --disk-format qcow2 --container-format bare --progress < /opt/CentOS_7.2_x86_64_XD.qcow2

5. Install Nova on the controller node

1. Create the nova_api, nova, and nova_cell0 databases

[root@controller ~]# mysql -u root -p000000
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '000000';

2. Create the nova user

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --domain default --password 000000 nova

3. Add the admin role to the nova user

[root@controller ~]# openstack role add --project service --user nova admin

4. Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

5. Create the Compute API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

6. Create a placement service user

[root@controller ~]# openstack user create --domain default --password 000000 placement

7. Add the admin role to the placement user in the service project

[root@controller ~]# openstack role add --project service --user placement admin

8. Create the Placement API entry in the service catalog

[root@controller ~]# openstack service create --name placement --description "Placement API" placement

9. Create the Placement API service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

Install and configure the Nova components
1. Install nova

 [root@controller ~]# yum install openstack-nova-api openstack-nova-conductor  openstack-nova-console openstack-nova-novncproxy  openstack-nova-scheduler openstack-nova-placement-api -y

2. Edit /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.100.10
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000

3. Edit /etc/httpd/conf.d/00-nova-placement-api.conf
Append at the end:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

4. Restart the httpd service

[root@controller ~]# systemctl restart httpd

5. Populate the nova-api database

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

This warning can be ignored.
6. Continue

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

7. Verify

[root@controller ~]# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 6c689e8c-3e13-4e6d-974c-c2e4e22e510b | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+

Seeing both cell0 and cell1 means it worked.
8. Enable the services at boot and start them

[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service  openstack-nova-conductor.service openstack-nova-novncproxy.service

5.5 Install nova on the compute node

1. Install nova

[root@compute ~]# yum install openstack-nova-compute -y

2. Edit /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.100.20
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 000000

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.100.10:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 000000
[libvirt]
virt_type=qemu
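The official install guide's check for whether `virt_type` can be kvm instead of qemu is to count the CPU's hardware-virtualization flags; a count of 0 (typical inside a VM without nested virtualization, as in this lab) means it must stay qemu as configured above. The `virt_check.txt` output file is our own addition for the dry run.

```shell
# Count vmx (Intel) / svm (AMD) flags in /proc/cpuinfo.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null)
[ -n "$count" ] || count=0

# 0 flags -> no hardware acceleration -> qemu; otherwise kvm is possible.
if [ "$count" -eq 0 ]; then vt=qemu; else vt=kvm; fi
echo "virt_type=$vt" > virt_check.txt
cat virt_check.txt
```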

3. Enable the services at boot and start them

[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service

4. Verify

 [root@controller ~]# . admin-openrc
 [root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary       | Host    | Zone | Status  | State | Updated At                 |
+----+--------------+---------+------+---------+-------+----------------------------+
|  8 | nova-compute | compute | nova | enabled | up    | 2018-04-01T22:24:14.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+

5. Discover the compute node

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Found 1 unmapped computes in cell: 6c689e8c-3e13-4e6d-974c-c2e4e22e510b
Checking host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6bd
Creating host mapping for compute host 'compute': 32861a0d-894e-4af9-a57c-27662d27e6b

Verify Compute service operation on the controller node

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2018-10-10T14:37:10.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2018-10-10T14:37:11.000000 |
|  3 | nova-conductor   | controller | internal | enabled | up    | 2018-10-10T14:37:11.000000 |
|  6 | nova-compute     | compute    | nova     | enabled | up    | 2018-10-10T14:37:05.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]# openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/     |
|           |           | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+
[root@controller ~]# nova-status upgrade check
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Option "os_region_name" from group "placement" is deprecated. Use option "region-name" from group "placement".
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Resource Providers      |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: API Service Version     |
| Result: Success                |
| Details: None                  |
+--------------------------------+

6. Install the Neutron service on the controller node

1.1 Create the neutron database

[root@controller ~]# mysql -u root -p000000
MariaDB [(none)]> CREATE DATABASE neutron; 
MariaDB [(none)]>  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]>  GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '000000';

1.2 Create the neutron user

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack user create --domain default --password 000000 neutron 

1.3 Add the admin role to the neutron user

[root@controller ~]# openstack role add --project service --user neutron admin 

1.4 Create the service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

1.5 Create the service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696 

1.6 Install the Neutron packages

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y 

1.7 Edit the /etc/neutron/neutron.conf file

[DEFAULT]
core_plugin = ml2 
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone 
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database] 
connection = mysql+pymysql://neutron:000000@controller/neutron 
[keystone_authtoken]
auth_uri = http://controller:5000 
auth_url = http://controller:35357 
memcached_servers = controller:11211 
auth_type = password 
project_domain_name = default 
user_domain_name = default 
project_name = service
username = neutron 
password = 000000 
[nova] 
auth_url = http://controller:35357
auth_type = password 
project_domain_name = default 
user_domain_name = default 
region_name = RegionOne 
project_name = service
username = nova
password = 000000 
[oslo_concurrency] 
lock_path = /var/lib/neutron/tmp 

1.8 Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan 
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider 
[ml2_type_vxlan] 
vni_ranges = 1:1000 
[securitygroup]
enable_ipset = true 

1.9 Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge] 
physical_interface_mappings = provider:enp9s0    ## use the actual name of the second NIC here
[vxlan]
enable_vxlan = true 
local_ip = 192.168.200.10  ## the IP address of the second NIC (enp9s0)
l2_population = true
[securitygroup] 
enable_security_group = true 
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.1 Edit /etc/neutron/l3_agent.ini

[DEFAULT] 
interface_driver = linuxbridge

2.2 Edit /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

2.3 Edit /etc/neutron/metadata_agent.ini

[DEFAULT] 
nova_metadata_host = controller 
metadata_proxy_shared_secret = 000000

2.4 Edit /etc/nova/nova.conf

[neutron] 
url = http://controller:9696 
auth_url = http://controller:35357 
auth_type = password 
project_domain_name = default 
user_domain_name = default
region_name = RegionOne 
project_name = service
username = neutron 
password = 000000  ## the neutron user password (000000 throughout this guide)
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000 

2.5 Create a symbolic link

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini 

2.6 Populate the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron 

INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586, Add binding index to RouterL3AgentBinding 
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d, Remove availability ranges.   
OK

2.7 Restart the nova-api service

[root@controller ~]# systemctl restart openstack-nova-api.service 

2.8 Enable and start the Neutron services

[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service 

[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
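After starting the five units above it is worth confirming they actually stayed up. A small hedged helper (the `is_active` callback is injectable so the logic can be exercised without systemd; on a real node the default shells out to `systemctl is-active`):

```python
import subprocess

# The five Neutron units enabled above on the controller node.
NEUTRON_SERVICES = [
    "neutron-server.service",
    "neutron-linuxbridge-agent.service",
    "neutron-dhcp-agent.service",
    "neutron-metadata-agent.service",
    "neutron-l3-agent.service",
]

def inactive_services(services, is_active=None):
    """Return the subset of services that are not reported active."""
    if is_active is None:
        # Default probe: ask systemd directly (requires running on the node).
        def is_active(name):
            return subprocess.run(
                ["systemctl", "is-active", "--quiet", name]
            ).returncode == 0
    return [s for s in services if not is_active(s)]
```

An empty return value means every service is up; anything listed needs a look at `journalctl -u <service>`.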

6.5 Install Neutron on the compute node

1.1 Install the Neutron packages

[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset -y

1.2 Edit /etc/neutron/neutron.conf

[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone

[keystone_authtoken] 
auth_uri = http://controller:5000
auth_url = http://controller:35357 
memcached_servers = controller:11211 
auth_type = password 
project_domain_name = default 
user_domain_name = default 
project_name = service
username = neutron
password = 000000
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp 

1.3 Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp9s0  ## use the actual name of the second NIC

[vxlan] 
enable_vxlan = true 
local_ip = 192.168.200.20  ## the IP address of enp9s0 on the compute node
l2_population = true

[securitygroup] 
enable_security_group = true 
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver 

1.4 Edit /etc/nova/nova.conf

[neutron] 
url = http://controller:9696 
auth_url = http://controller:35357 
auth_type = password
project_domain_name = default 
user_domain_name = default 
region_name = RegionOne 
project_name = service
username = neutron 
password = 000000

1.5 Restart the nova-compute service

[root@compute ~]# systemctl restart openstack-nova-compute.service

1.6 Enable and start the Linux bridge agent

[root@compute ~]# systemctl start neutron-linuxbridge-agent.service 
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

Verification
Run the following command on the controller node:

[root@controller ~]# openstack network agent list 

The output should list five agents, all alive:

Linux bridge agent    controller
DHCP agent            controller
L3 agent              controller
Metadata agent        controller
Linux bridge agent    compute
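For scripted health checks, the agent list can also be parsed from `openstack network agent list -f csv`. A sketch (the column names match the CSV output of the client; both the machine-readable `True` and the table-style `:-)` alive markers are accepted, since the exact form varies between client versions):

```python
import csv
import io

def alive_agents(csv_text):
    """Parse `openstack network agent list -f csv` output and return
    (agent_type, host) pairs for every agent reported alive."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [
        (r["Agent Type"], r["Host"])
        for r in rows
        if r.get("Alive") in ("True", ":-)")
    ]
```

A healthy two-node deployment should yield five pairs: four agents on controller and one Linux bridge agent on compute.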

7. Install the Dashboard (Horizon) service

1. Install the package

[root@controller ~]# yum install openstack-dashboard -y

2. Edit /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']
# add the following two settings
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
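Since the `SESSION_ENGINE`/`CACHES` settings above make Horizon store sessions in memcached, an unreachable `controller:11211` causes hard-to-diagnose login failures. A quick reachability check (sketch; the `connect` callback is injectable so it can be tested without a live memcached):

```python
import socket

def memcached_reachable(host="controller", port=11211,
                        connect=socket.create_connection):
    """Return True if a TCP connection to memcached succeeds."""
    try:
        with connect((host, port), timeout=3):
            return True
    except OSError:
        return False
```

If this returns False on the controller, check that memcached is running and listening on the address Horizon was pointed at.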

3. Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Open http://192.168.100.10/dashboard in a browser to reach the OpenStack web UI.
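Before reaching for a browser, the dashboard can be smoke-tested from the command line. A sketch (the `opener` argument is injectable so the check can be exercised without a live deployment; by default it uses `urllib.request.urlopen`):

```python
from urllib import request

def dashboard_ok(host="192.168.100.10", opener=request.urlopen):
    """Return True if the Horizon dashboard answers with HTTP 200."""
    try:
        with opener("http://%s/dashboard" % host, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A False result usually means httpd is down, the firewall is blocking port 80, or the `ALLOWED_HOSTS` setting rejects the request.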

Corrections are welcome.
