For reference, see https://www.linuxidc.com/Linux/2017-04/142431.htm; what follows is my own configuration.
Logs are under /var/log/; the dashboard logs can be found in /var/log/apache2/.
sudo apt install vim
I. Setting up the base environment
192.168.30.145 controller [2 vCPU, 4 GB RAM, 40 GB disk, dual NICs] (controller node)
192.168.30.146 compute [2 vCPU, 4 GB RAM, 40 GB disk, dual NICs] (compute node)
I only set up the controller node: this is a single-machine (all-in-one) OpenStack Ocata deployment. I don't know yet how to configure the compute node; it can be left unconfigured for now without affecting the build.
1. Install SSH and set a root password
$ sudo apt install ssh
$ sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
2. Generate a temporary authentication token
# openssl rand -hex 10
bdb5cad50653d4e85b7d
3. Add the Aliyun mirror
# cp /etc/apt/sources.list /etc/apt/sources.list.bak
# vim /etc/apt/sources.list
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security multiverse
4. Configure the network interface IP
# ip addr
# vim /etc/network/interfaces
auto ens33
iface ens33 inet static
address 192.168.30.145
netmask 255.255.255.0
gateway 192.168.30.2
dns-nameservers 114.114.114.114
My Ubuntu 16.04 machine is on Wi-Fi with no wired connection. I had configured this file before but ran into errors, so this time I left it untouched and instead added the IP address manually after connecting to Wi-Fi: right-click the Wi-Fi icon in the top-right corner, choose Edit Connections, select the connected Wi-Fi network, click Edit, open the IPv4 tab, set the method to Manual, and enter the following:
IP: 192.168.30.145, netmask: 255.255.255.0, gateway: 192.168.30.2, DNS: 192.168.30.2, then save.
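The same GUI steps can also be done from the command line with NetworkManager's nmcli (a sketch; "MyWifi" is a placeholder for the actual connection name). The commands are echoed here as a dry run so they can be reviewed before applying:

```shell
# Build the nmcli commands equivalent to the GUI steps above; run the
# printed commands (or eval "$CMD") to actually apply them.
# "MyWifi" is a placeholder connection name.
WIFI_CON="MyWifi"
CMD="nmcli con mod $WIFI_CON ipv4.method manual ipv4.addresses 192.168.30.145/24 ipv4.gateway 192.168.30.2 ipv4.dns 192.168.30.2"
echo "$CMD"
echo "nmcli con up $WIFI_CON"
```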
# The provider network interface (configure the second interface as the provider interface)
auto ens34
iface ens34 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
I have not configured this yet.
5. Configure the hosts file
# vim /etc/hosts
192.168.30.145 controller
192.168.30.146 compute (compute node; I don't understand the compute-node side yet, so I haven't configured any of it)
6. Configure NTP time synchronization
# dpkg-reconfigure tzdata ##change the time zone
Current default time zone: 'Asia/Chongqing'
Local time is now: Tue Mar 28 20:54:33 CST 2017.
Universal Time is now: Tue Mar 28 12:54:33 UTC 2017.
# apt -y install chrony ##install the chrony time synchronization package
Controller Node
# vim /etc/chrony/chrony.conf
allow 192.168.30.0/24 ##allow hosts on this subnet to synchronize time with this server
# service chrony restart
Compute Node (I have not configured the compute-node side of this yet)
# vim /etc/chrony/chrony.conf
# pool 2.debian.pool.ntp.org offline iburst
server 192.168.30.145 iburst ##point to the controller as the time source
# service chrony restart
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 6 377 33 -375us[ -422us] +/- 66ms
7. Enable the OpenStack repository and install the OpenStack client on all nodes
# apt -y install software-properties-common
# add-apt-repository cloud-archive:ocata
# apt -y update && apt -y dist-upgrade
# apt -y install python-openstackclient
8. Install and configure the database service (Controller Node)
# apt -y install mariadb-server python-pymysql
# vim /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 192.168.30.145
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# service mysql restart
# mysql_secure_installation
##Run this script to secure the database; set a suitable password for the root account
9. Install and configure the RabbitMQ message queue service (Controller Node)
# apt -y install rabbitmq-server
# rabbitmqctl add_user openstack openstack ##add the openstack user and set its password
Creating user "openstack" ...
##Grant the openstack user configure, write, and read permissions
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
# rabbitmqctl list_users ##list users
Listing users ...
guest [administrator]
openstack []
# rabbitmqctl list_user_permissions openstack ##list this user's permissions
Listing permissions for user "openstack" ...
/ .* .* .*
# rabbitmqctl status ##show RabbitMQ status information
# rabbitmq-plugins list ##list RabbitMQ plugins
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@openstack1
|/
......
# rabbitmq-plugins enable rabbitmq_management ##enable the management plugin
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Applying plugin configuration to rabbit@openstack1... started 6 plugins.
Browse to http://localhost:15672; the default username and password are both guest.
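As a quick check from the controller itself, the management plugin's HTTP API can be probed (a sketch; guarded so it prints a status line instead of failing when the service is not up):

```shell
# Probe the RabbitMQ management API with the default guest credentials.
if curl -fsu guest:guest http://localhost:15672/api/overview >/dev/null 2>&1; then
  echo "management API reachable"
else
  echo "management API not reachable"
fi
```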
10. Install and configure the Memcached caching service (used by the Identity service to cache tokens) (Controller Node)
# apt -y install memcached python-memcache
# vim /etc/memcached.conf
#-l 127.0.0.1
-l 192.168.30.145
# service memcached restart
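To confirm memcached is now bound to the management address rather than 127.0.0.1, a quick socket listing helps (a sketch; prints a notice if nothing is listening so the check never aborts a session):

```shell
# Look for a listener on the memcached port (11211).
ss -ltn 2>/dev/null | grep 11211 || echo "no listener on port 11211"
```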
II. Configuring the Keystone Identity Service (Controller Node)
1. Create the keystone database
# mysql
MariaDB [(none)]> CREATE DATABASE keystone; ##create the keystone database
##Grant privileges on the keystone database [user@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'192.168.30.145' \
IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'keystone';
MariaDB [(none)]> flush privileges;
2. Install and configure Keystone
# apt -y install keystone
# vim /etc/keystone/keystone.conf
[database]---configure database access [user:password@controller]
connection = mysql+pymysql://keystone:keystone@192.168.30.145/keystone
[token]---configure the Fernet token provider
provider = fernet
# grep ^[a-z] /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:keystone@192.168.30.145/keystone
provider = fernet
3. Populate the Identity service database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
4. Initialize the Fernet key repositories
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5. Bootstrap the Identity service
# keystone-manage bootstrap --bootstrap-password qaz123 \
--bootstrap-admin-url http://192.168.30.145:35357/v3/ \
--bootstrap-internal-url http://192.168.30.145:5000/v3/ \
--bootstrap-public-url http://192.168.30.145:5000/v3/ \
--bootstrap-region-id RegionOne
6. Configure the HTTP server
# vim /etc/apache2/apache2.conf
ServerName controller
# service apache2 restart ##restart the Apache service
# service apache2 status
# rm -f /var/lib/keystone/keystone.db ##remove the default SQLite database
7. Configure the admin account
# export OS_USERNAME=admin
# export OS_PASSWORD=qaz123
# export OS_PROJECT_NAME=admin
# export OS_USER_DOMAIN_NAME=default
# export OS_PROJECT_DOMAIN_NAME=default
# export OS_AUTH_URL=http://192.168.30.145:35357/v3
# export OS_IDENTITY_API_VERSION=3
8. Create the service project
# openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 945e37831e74484f8911fb742c925926 |
| is_domain | False |
| name | service |
| parent_id | default |
+-------------+----------------------------------+
9. Create a project and user for regular (non-admin) tasks
a. Create the demo project
# openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 2ef20ce389eb499696f2d7497c6009b0 |
| is_domain | False |
| name | demo |
| parent_id | default |
+-------------+----------------------------------+
b. Create the demo user
# openstack user create --domain default \
--password-prompt demo
User Password: (choose your own password; I used 123456)
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 7cfc508fd5d44b468aac218bd4029bae |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
c. Create the user role
# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 83b6ab2af4414ad387b2fc9daf575b3a |
| name | user |
+-----------+----------------------------------+
d. Add the user role to the demo project and user
# openstack role add --project demo --user demo user
10. Disable the temporary auth-token mechanism
# vim /etc/keystone/keystone-paste.ini
In each of the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections, remove admin_token_auth from the pipeline = line.
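The same edit can be scripted with sed (a sketch shown against a scratch file with an abbreviated pipeline line, so the result can be checked before touching the real keystone-paste.ini):

```shell
# Scratch file standing in for keystone-paste.ini; the pipeline value is
# abbreviated for illustration.
printf '[pipeline:public_api]\npipeline = request_id admin_token_auth token_auth public_service\n' > /tmp/paste.ini
# Strip admin_token_auth wherever it appears in a pipeline line.
sed -i 's/ admin_token_auth//g' /tmp/paste.ini
! grep -q admin_token_auth /tmp/paste.ini && echo "admin_token_auth removed"
```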
11. Unset the OS_AUTH_URL and OS_PASSWORD environment variables
# unset OS_AUTH_URL OS_PASSWORD
12. As the admin user, request an authentication token (use the admin password)
# openstack --os-auth-url http://192.168.30.145:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password: (the admin password set above, qaz123)
+------------+-----------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------+
| expires | 2017-03-28T15:11:50+0000 |
| id | gAAAAABY2m8mE9pMATPuFW9YpgoBMTg9mCI6GcmFeQAudwbhGiVblXZP |
| | kmSmHc5aFwTZSIdjLzPJaMd1k16UZghj59v45Gvzdh5CLhSFGWPsT8rL |
| | fRJD4eE1D_eRz2Jjjk5rDmwAHm5mmffuszJLSe4B2KJyBXkdmmznXL-A |
| project_id | 2461396f6a344c21a2360a612d4f6abe |
| user_id | 63ca263543fb4b02bb34410e3dc8a801 |
+------------+-----------------------------------------------------------+
13. As the demo user, request an authentication token (use the demo password)
# openstack --os-auth-url http://192.168.30.145:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password: (the demo password set above, 123456)
+------------+-----------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------+
| expires | 2017-03-28T15:13:50+0000 |
| id | gAAAAABY2m-eSIWmQg1SyZFaiGcP2kjHf742ktr8YcVH3Q4aHKTflDJ |
| | RLAfgmeoDW2z1sbdHQmKQNSb--F-1Pn_hTFHYqgyMlIxYpEQxGhJ-rg |
| | b0EuxUT9opwl0m5onaA5Cv_MBX6awxeity8Gh1dc50NUeYela5Yl4uSG |
| project_id | 2ef20ce389eb499696f2d7497c6009b0 |
| user_id | 7cfc508fd5d44b468aac218bd4029bae |
+------------+-----------------------------------------------------------+
14. Create client environment scripts
a. Create and edit the file admin-openrc with the following contents:
# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=qaz123
export OS_AUTH_URL=http://192.168.30.145:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
b. Create and edit the file demo-openrc with the following contents:
# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://192.168.30.145:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
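Since the two files differ only in project, user, password, and port, they can also be generated from one small helper (a sketch; the passwords are the ones chosen earlier in this guide):

```shell
# write_openrc FILE PROJECT USER PASSWORD PORT: emit one openrc file.
write_openrc() {
  cat > "$1" <<EOF
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=$2
export OS_USERNAME=$3
export OS_PASSWORD=$4
export OS_AUTH_URL=http://192.168.30.145:$5/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
write_openrc admin-openrc admin admin qaz123 35357
write_openrc demo-openrc demo demo 123456 5000
```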
15. Use the scripts
a. Load the script
# . admin-openrc
b. Request an authentication token
# openstack token issue
+------------+----------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------+
| expires | 2017-03-28T15:22:55+0000 |
| id | gAAAAABY2nG_diuPBMl66vJye3mV3S7CWZKesIiSnbicq5XddujfHhc3x|
| | PHni3iHWPcTQAjHoIEMTvSH6yKOQ6Z74QL6hVbshqP1dJrRJ6xEa9WvIk|
| | F7H5j7lPmM7ncfVvr9k96gLJ6Uhz38R5qRnHBWkxrlNsgw1jdnAjxf5e |
| project_id | 2461396f6a344c21a2360a612d4f6abe |
| user_id | 63ca263543fb4b02bb34410e3dc8a801 |
+------------+----------------------------------------------------------+
III. Configuring the Glance Image Service (Controller Node)
1. Create the glance database
# mysql
MariaDB [(none)]> CREATE DATABASE glance; ##create the glance database
##Grant privileges on the glance database [user@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'192.168.30.145' \
IDENTIFIED BY 'glance';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'glance';
MariaDB [(none)]> flush privileges;
2. Obtain admin access
# . admin-openrc
3. Create the service credentials
a. Create the glance user:
# openstack user create --domain default --password-prompt glance
User Password: (your own choice; I again used 123456)
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 3edeaaae87e14811ac2c6767ab657d6b |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
b. Add the admin role to the glance user and service project:
# openstack role add --project service --user glance admin
c. Create the glance service entity:
# openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 22a0875ba92c4512989666f116ae1585 |
| name | glance |
| type | image |
+-------------+----------------------------------+
d. Create the Image service API endpoints:
# openstack endpoint create --region RegionOne \
image public http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ff6d9ed365cf4e7f8cc53d47e57cd46b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
image internal http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7408dd72bc1745758cdf23e136ef7392 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
image admin http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8ed4e7e1a5834177b4ce1896c21e6cb9 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
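The three endpoint-create calls differ only in the interface name, so a small loop avoids typos (a sketch, shown with echo as a dry run; drop the echo to actually run it after sourcing admin-openrc):

```shell
# Dry run: print the three image endpoint-create commands.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne \
    image "$iface" http://192.168.30.145:9292
done
```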
4. Install and configure the Glance components
a. Configure the image API
# apt -y install glance
# vim /etc/glance/glance-api.conf
[database]---configure database access [user:password@controller]
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
[keystone_authtoken]---configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456 (the glance password set above)
[paste_deploy]
flavor = keystone
[glance_store]---configure the local filesystem store and image file location
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# grep ^[a-z] /etc/glance/glance-api.conf
sqlite_db = /var/lib/glance/glance.sqlite
backend = sqlalchemy
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
flavor = keystone
b. Configure the image registry service
# vim /etc/glance/glance-registry.conf
[database]---configure database access [user:password@controller]
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
[keystone_authtoken]---configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy]
flavor = keystone
# grep ^[a-z] /etc/glance/glance-registry.conf
sqlite_db = /var/lib/glance/glance.sqlite
backend = sqlalchemy
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
flavor = keystone
5. Populate the Image service database
# su -s /bin/sh -c "glance-manage db_sync" glance
6. Restart the services
# service glance-registry restart
# service glance-api restart
# service glance-registry status
# service glance-api status
7. Verify operation
Use CirrOS to verify the Image service.
CirrOS is a minimal Linux image that can be used to test an OpenStack deployment.
a. Obtain admin access
# . admin-openrc
b. Download the source image
# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
c. Upload the image to the Image service using the QCOW2 disk format and the bare container format, and make it publicly visible
# openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2017-03-29T05:57:56Z |
| disk_format | qcow2 |
| file | /v2/images/4b6ebd57-80ab-4b79-8ecc-53a026f3e898/file |
| id | 4b6ebd57-80ab-4b79-8ecc-53a026f3e898 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2461396f6a344c21a2360a612d4f6abe |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2017-03-29T05:57:56Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
d. Confirm the upload and verify the image attributes
# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 4b6ebd57-80ab-4b79-8ecc-53a026f3e898 | cirros | active |
+--------------------------------------+--------+--------+
IV. Configuring Nova (Compute)
The following references were consulted:
https://blog.csdn.net/chenvast/article/details/71036233
https://www.cnblogs.com/yangdonghao/p/6762472.html
https://blog.csdn.net/zhujie_hades/article/details/52181244
1. Prerequisites
Before you install and configure the Compute service, you must create the databases, service credentials, and API endpoints.
① To create the databases, complete these steps:
# mysql
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by 'nova';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by 'nova';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'nova';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'nova';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit
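The nine statements above follow one pattern, so they can also be generated and piped straight into mysql, e.g. `sh make_nova_dbs.sh | mysql` (a sketch; the script name is hypothetical, and the password 'nova' is the one used above):

```shell
# Emit the CREATE/GRANT statements for the three nova databases.
for db in nova_api nova nova_cell0; do
  echo "CREATE DATABASE IF NOT EXISTS $db;"
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON $db.* TO 'nova'@'$host' IDENTIFIED BY 'nova';"
  done
done
echo "FLUSH PRIVILEGES;"
```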
② Source the admin credentials to gain access to admin-only commands:
. admin-openrc
③ Create the Compute service credentials.
Create the nova user:
openstack user create --domain default --password-prompt nova
(You will be prompted for a password here.)
Add the admin role to the nova user:
openstack role add --project service --user nova admin
Create the nova service entity:
openstack service create --name nova --description "OpenStack Compute" compute
④ Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
⑤ Create a placement user for the placement service:
openstack user create --domain default --password-prompt placement
(You will be prompted for a password here.)
⑥ Add the placement user to the service project with the admin role:
openstack role add --project service --user placement admin
⑦ Create the placement service entity:
openstack service create --name placement --description "Placement API" placement
⑧ Create the placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
2. Install and configure the components
① Install the packages:
sudo apt-get install nova-api nova-conductor nova-consoleauth nova-console nova-novncproxy nova-scheduler nova-placement-api
② Edit the /etc/nova/nova.conf file and complete the following steps.
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
enabled_apis = osapi_compute,metadata
In the [api_database] and [database] sections, configure database access:
[api_database]
connection = mysql+pymysql://nova:nova@192.168.30.145/nova_api
[database]
connection = mysql+pymysql://nova:nova@192.168.30.145/nova
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@192.168.30.145
#RABBIT_PASS is the password of the openstack user created when installing RabbitMQ; I set it to openstack above
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS (NOVA_PASS is the password of the nova user created above, i.e. nova)
In the [DEFAULT] section, set my_ip to the IP address of the controller node's management interface:
[DEFAULT]
my_ip = 192.168.30.145
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note:
By default, Compute uses an internal firewall service. Since the Networking service includes its own firewall service, you must disable the Compute firewall by setting the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, configure the VNC proxy to use the controller node's management interface IP address:
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://192.168.30.145:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement API:
[placement]
os_region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://192.168.30.145:35357/v3
username = placement
password = PLACEMENT_PASS (the placement user's password)
Due to a packaging bug, you must remove the logdir option from the [DEFAULT] section. (I could not find it in my file.)
③ Populate the nova-api database (sync the Compute API database):
su -s /bin/sh -c "nova-manage api_db sync" nova
④ Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
⑤ Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
⑥ Populate the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
⑦ Verify that cell0 and cell1 registered correctly:
nova-manage cell_v2 list_cells
3. Finalize the installation
Restart the Compute services (they are also enabled to start on boot):
service nova-api restart
service nova-consoleauth restart
service nova-scheduler restart
service nova-conductor restart
service nova-novncproxy restart
Installing and configuring a compute node
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors for deploying instances. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines; on legacy hardware it uses the generic QEMU hypervisor. You can follow these instructions, with minor modifications, to scale your environment horizontally with additional compute nodes.
This section assumes you have followed this guide step by step to configure the first compute node. To configure additional compute nodes, prepare them the same way as the first one in the example architecture; each additional compute node needs a unique IP address.
The following operations are performed on the compute node.
1. Install and configure the components
① Install the packages:
apt-get install nova-compute
② Edit the /etc/nova/nova.conf file and complete the following steps.
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@192.168.30.145
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
In the [DEFAULT] section, configure the my_ip option (the compute node's management-network IP):
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node. I did not set up a compute node, so I left this unconfigured.
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note:
By default, Compute uses an internal firewall service. Since Networking includes its own firewall service, you must disable the Compute firewall by setting the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, enable and configure remote console access:
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.30.145:6080/vnc_auto.html
In the [glance] section, configure the location of the Image service API:
[glance]
api_servers = http://192.168.30.145:9292
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
In the [placement] section, configure the placement API:
[placement]
os_region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
2. Finalize the installation
① Determine whether your compute node supports hardware acceleration for virtual machines (I did not configure a compute node, so I skipped this):
egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of 1 or greater, the node supports hardware acceleration.
If it returns zero, the node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM:
edit /etc/nova/nova.conf and set virt_type = qemu in the [libvirt] section
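The check above can be folded into a one-liner that prints the value to use for virt_type (a sketch):

```shell
# Prints "kvm" when the CPU exposes VT-x/AMD-V flags, otherwise "qemu";
# the printed word is the value for [libvirt] virt_type in nova.conf.
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -gt 0 ]; then
  echo kvm
else
  echo qemu
fi
```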
② Restart the Compute service (it is also enabled to start on boot):
service nova-compute restart
3. Add the compute node to the cell database (I did not configure this either)
The following operations are performed on the controller node.
1. Source the admin credentials to enable admin-only commands, and confirm the compute node appears in the hypervisor list:
. admin-openrc
openstack hypervisor list
2. Discover compute hosts:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Note:
Whenever you add a new compute node, you must run nova-manage cell_v2 discover_hosts on the controller node to register it.
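Alternatively, Nova can discover new hosts on a schedule instead of requiring a manual run; in /etc/nova/nova.conf on the controller, a discovery interval can be set (a sketch; 300 seconds is an arbitrary example value):

```
[scheduler]
discover_hosts_in_cells_interval = 300
```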
Verifying operation
The following operations are performed on the controller node.
1. Source the admin credentials to enable admin-only commands:
. admin-openrc
2. List the service components as admin: openstack compute service list
The output should show three service components enabled on the controller node and one enabled on the compute node.
3. List the API endpoints in the Identity service to verify connectivity with it:
Note: the endpoint list may differ depending on which OpenStack components are installed. Ignore any warnings in this output.
openstack catalog list
4. Verify the Image service by listing images:
openstack image list
V. Configuring the Neutron Networking Service (configure on every node)
1. Create the neutron database
# mysql
MariaDB [(none)]> CREATE DATABASE neutron; ##create the neutron database
##Grant privileges on the neutron database [user@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'192.168.30.145' \
IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'neutron';
MariaDB [(none)]> flush privileges;
2. Obtain admin access
# . admin-openrc
3. Create the service credentials
a. Create the neutron user
# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password: (the neutron password; I set it to 123456)
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 54cd9e72295c411090ea9f641cb02135 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
b. Add the admin role to the neutron user
# openstack role add --project service --user neutron admin
c. Create the neutron service entity
# openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 720687745d354718862255a56d7aea46 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
d. Create the Networking service API endpoints
# openstack endpoint create --region RegionOne \
network public http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a9b1b5b8fbb842a8b14a9cecca7a58a8 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
network internal http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 61e2c14b0c8f4003a7099012e9a6331f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
network admin http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6719539759c34487bd519c0dffb5509d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
4. Networking option 2: self-service (private) networks
a. Install the components
# apt -y install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent
b. Configure the Neutron server component
# vim /etc/neutron/neutron.conf
[database]----configure database access [user:password@controller]
#connection = sqlite:var/lib/neutron/neutron.sqlite
connection = mysql+pymysql://neutron:neutron@192.168.30.145/neutron
[DEFAULT]----enable the ML2 plugin, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]----configure RabbitMQ message queue access [user:password@controller]
transport_url = rabbit://openstack:openstack@192.168.30.145
[DEFAULT]----configure Identity service access
auth_strategy = keystone
[keystone_authtoken]----configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[DEFAULT]----configure Networking to notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]----configure Networking to notify Compute of network topology changes
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
# grep ^[a-z] /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:openstack@192.168.30.145
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
connection = mysql+pymysql://neutron:neutron@192.168.30.145/neutron
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
region_name = RegionOne
auth_url = http://192.168.30.145:35357
auth_type = password
password = 123456
project_domain_name = default
project_name = service
user_domain_name = default
username = nova
c. Configure the Modular Layer 2 (ML2) plugin
The ML2 plugin uses the Linux bridge mechanism to build the layer-2 virtual networking infrastructure for instances.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]----enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
[ml2]----enable VXLAN self-service (private) networks
tenant_network_types = vxlan
[ml2]----enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
[ml2]----enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]----configure the provider virtual network as a flat network
flat_networks = provider
[ml2_type_vxlan]----configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000
[securitygroup]----enable ipset to make security group rules more efficient
enable_ipset = true
# grep ^[a-z] /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = true
Note: the Linux bridge agent only supports VXLAN overlay networks.
d. Configure the Linux bridge agent
The Linux bridge agent builds the layer-2 virtual networks for instances and handles security group rules.
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]----map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:wlp3s0 (my IP address was configured manually through the desktop GUI, so the interface here is wlp3s0, not enp4s0)
[vxlan]----enable VXLAN overlay networks, set the IP address of the physical interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 192.168.30.145
l2_population = true
[securitygroup]----enable security groups and configure the firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# grep ^[a-z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:wlp3s0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = true
local_ip = 192.168.30.145
l2_population = true
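The `physical_interface_mappings` value is a comma-separated list of `physnet:interface` pairs tying each provider network name to a host NIC. A simplified sketch of that decomposition (not Neutron's actual parser):

```python
def parse_mappings(value):
    """Split a comma-separated list of 'physnet:interface' pairs
    into a dict, mirroring the shape of the option above
    (simplified sketch, not Neutron's real parsing code)."""
    result = {}
    for pair in value.split(","):
        physnet, _, iface = pair.partition(":")
        result[physnet.strip()] = iface.strip()
    return result

print(parse_mappings("provider:wlp3s0"))  # {'provider': 'wlp3s0'}
```

On a host with several provider networks you would list several pairs, e.g. `provider:ens34,provider2:ens35`.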
e. Configure the layer-3 agent
The layer-3 agent provides routing and NAT services for self-service virtual networks.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]----configure the Linuxbridge interface driver and the external network bridge
interface_driver = linuxbridge
# grep ^[a-z] /etc/neutron/l3_agent.ini
interface_driver = linuxbridge
f. Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]----configure the Linuxbridge interface driver, the Dnsmasq DHCP driver, and enable isolated metadata
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# grep ^[a-z] /etc/neutron/dhcp_agent.ini
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
g. Configure the metadata agent----provides configuration information to instances
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]----configure the metadata host and the shared secret
nova_metadata_ip = 192.168.30.145
metadata_proxy_shared_secret = qaz123
# grep ^[a-z] /etc/neutron/metadata_agent.ini
nova_metadata_ip = 192.168.30.145
metadata_proxy_shared_secret = qaz123
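The shared secret is what lets Compute trust metadata requests proxied by Neutron: the metadata agent signs each request's instance ID with an HMAC, and nova recomputes the signature using its own copy of `metadata_proxy_shared_secret`, rejecting mismatches. A minimal sketch of that signing step (the instance ID below is made up; the exact header handling is omitted):

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # HMAC-SHA256 of the instance ID, keyed with the shared secret.
    # Both neutron-metadata-agent and nova must be configured with the
    # same metadata_proxy_shared_secret for the check to pass (sketch).
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

sig = sign_instance_id("qaz123", "11111111-2222-3333-4444-555555555555")
print(len(sig))  # 64 hex characters
```

This is why the same `qaz123` value must appear in both metadata_agent.ini here and the [neutron] section of nova.conf in step 5.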
5. On the controller node, configure Compute to use the Networking service
# vim /etc/nova/nova.conf
[neutron]----configure access parameters, enable the metadata proxy, and set the shared secret
url = http://192.168.30.145:9696
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = qaz123
# grep ^[a-z] /etc/nova/nova.conf
6. Finalize the installation
a. Populate the database
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
......
OK
Note: database population occurs late in the Networking setup because the script requires the complete server and plug-in configuration files.
b. Restart the Compute API service
# service nova-api restart
c. Restart the Networking services
For both networking options:
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
For networking option 2, also restart the layer-3 service:
# service neutron-l3-agent restart
d. Confirm that the services started
# service nova-api status
# service neutron-server status
# service neutron-linuxbridge-agent status
# service neutron-dhcp-agent status
# service neutron-metadata-agent status
# service neutron-l3-agent status
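Besides `service ... status`, a quick cross-check is whether the API endpoints actually accept TCP connections. A small sketch (the host and ports are the ones used in this guide: neutron-server on 9696, nova-api on 8774; run it from a machine that can reach the controller):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Controller address and API ports as configured in this guide.
for name, port in [("neutron-server", 9696), ("nova-api", 8774)]:
    print(name, port_open("192.168.30.145", port))
```

A `False` for neutron-server usually points at a neutron.conf or ML2 configuration error; check /var/log/neutron/ for the traceback.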
7. Configure the Neutron networking service on the Compute Node
# apt -y install neutron-linuxbridge-agent
# vim /etc/neutron/neutron.conf
[database]----the compute node does not access the database directly
#connection = sqlite:var/lib/neutron/neutron.sqlite
[DEFAULT]----configure RabbitMQ message queue access [user:password@controller]
transport_url = rabbit://openstack:openstack@192.168.30.145
[DEFAULT]----configure Identity service access
auth_strategy = keystone
[keystone_authtoken]----configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
# grep ^[a-z] /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
transport_url = rabbit://openstack:openstack@192.168.30.145
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
8. Configure Compute (on the compute node) to use the Networking service
# vim /etc/nova/nova.conf
[neutron]----configure access parameters
url = http://192.168.30.145:9696
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
# grep ^[a-z] /etc/nova/nova.conf
9. Finalize the installation
a. Restart the Compute service:
# service nova-compute restart
# service nova-compute status
b. Restart the Linuxbridge agent:
# service neutron-linuxbridge-agent restart
# service neutron-linuxbridge-agent status
10. Configure networking option 2 on the compute node (I did not configure this part)
Configure the Linuxbridge agent----builds layer-2 virtual networking infrastructure for instances and handles security group rules
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]----map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:wlp3s0
[vxlan]----enable VXLAN overlay networks, set the IP address of the physical interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 192.168.30.146
l2_population = true
[securitygroup]----enable security groups and configure the firewall_driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# grep ^[a-z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:wlp3s0
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = true
local_ip = 192.168.30.146
l2_population = true
11. Verify operation on the controller node
a. Source the admin credentials
# . admin-openrc
b. List loaded extensions to verify that the neutron-server process launched successfully
# openstack extension list --network
+----------------------+----------------------+--------------------------+
| Name | Alias | Description |
+----------------------+----------------------+--------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark |
| | | and use a subnetpool as |
| | | the default |
| Network IP | network-ip- | Provides IP availability |
| Availability | availability | data for each network |
| | | and subnet. |
| Network Availability |network_availability_z| Availability zone |
| Zone | one | support for network. |
| Auto Allocated | auto-allocated- | Auto Allocated Topology |
| Topology Services | topology | Services. |
| Neutron L3 | ext-gw-mode | Extension of the router |
| Configurable external| | abstraction for |
| gateway mode | | specifying whether SNAT |
| | | should occur on the |
| | | external gateway |
| Port Binding | binding | Expose port bindings of |
| | | a virtual port to |
| | | external application |
| agent | agent | The agent management |
| | | extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of |
| | | subnets from a subnet |
| | | pool |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among |
| | | l3 agents |
| Tag support | tag | Enables to set tag on |
| | | resources. |
| Neutron external | external-net | Adds external network |
| network | | attribute to network |
| | | resource. |
| Neutron Service | flavors | Flavor specification for |
| Flavors | | Neutron advanced |
| | | services |
| Network MTU | net-mtu | Provides MTU attribute |
| | | for a network resource. |
| Availability Zone | availability_zone | The availability zone |
| | | extension. |
| Quota management | quotas | Expose functions for |
| support | | quotas management per |
| | | tenant |
| HA Router extension | l3-ha | Add HA capability to |
| | | routers. |
| Provider Network | provider | Expose mapping of |
| | | virtual networks to |
| | | physical networks |
|Multi Provider Network| multi-provider | Expose mapping of |
| | | virtual networks to |
| | | multiple physical |
| | | networks |
| Address scope | address-scope | Address scopes |
| | | extension. |
| Neutron Extra Route | extraroute | Extra routes |
| | | configuration for L3 |
| | | router |
| Subnet service types | subnet-service-types | Provides ability to set |
| | | the subnet service_types |
| | | field |
| Resource timestamps | standard-attr- | Adds created_at and |
| | timestamp | updated_at fields to all |
| | | Neutron resources that |
| | | have Neutron standard |
| | | attributes. |
| Neutron Service Type | service-type | API for retrieving |
| Management | | service providers for |
| | | Neutron advanced |
| | | services |
| Router Flavor | l3-flavors | Flavor support for |
| Extension | | routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP | extra_dhcp_opt | Extra options |
| opts | | configuration for DHCP. |
| | | For example PXE boot |
| | | options to DHCP clients |
| | | can be specified (e.g. |
| | | tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision | standard-attr- | This extension will |
| numbers | revisions | display the revision |
| | | number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates |
| | | that pagination is |
| | | enabled. |
| Sorting support | sorting | Extension that indicates |
| | | that sorting is enabled. |
| security-group | security-group | The security groups |
| | | extension. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among |
| | | dhcp agents |
| Router Availability |router_availability_zo| Availability zone |
| Zone | ne | support for router. |
| RBAC Policies | rbac-policies | Allows creation and |
| | | modification of policies |
| | | that control tenant |
| | | access to resources. |
| Tag support for | tag-ext | Extends tag support to |
| resources: subnet, | | more L2 and L3 |
| subnetpool, port, | | resources. |
| router | | |
| standard-attr- | standard-attr- | Extension to add |
| description | description | descriptions to standard |
| | | attributes |
| Neutron L3 Router | router | Router abstraction for |
| | | basic L3 forwarding |
| | | between L2 Neutron |
| | | networks and access to |
| | | external networks via a |
| | | NAT gateway. |
| Allowed Address Pairs| allowed-address-pairs| Provides allowed address |
| | | pairs |
| project_id field | project-id | Extension that indicates |
| enabled | | that project_id field is |
| | | enabled. |
| Distributed Virtual | dvr | Enables configuration of |
| Router | | Distributed Virtual |
| | | Routers. |
+----------------------+----------------------+--------------------------+
c. List the agents to verify that the neutron agents launched successfully
# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 23601054-312a-497c-b728-4b791ce76e64 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| 9a7546d9-73ec-47e0-ab23-ca2a5366660f | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| acd42d89-1af4-413f-be77-3172d38a805d | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| b438ae93-aaf3-41f0-a7b7-d1502a1986c9 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| e1d32b6b-07c6-468b-965d-ce9dfd09b338 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
VI. Configure the Dashboard Service (Controller Node)
1. Configure the Dashboard
# apt -y install openstack-dashboard
# vim /etc/openstack-dashboard/local_settings.py
OPENSTACK_HOST = "192.168.30.145" ##configure the dashboard to use OpenStack services on the controller
ALLOWED_HOSTS = ['*'] ##allow all hosts to access the dashboard
##configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '192.168.30.145:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST ##enable the Identity API version 3
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True ##enable support for domains
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
} ##configure the API versions
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default" ##default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" ##default role for users created through the dashboard
TIME_ZONE = "Asia/Chongqing" ##configure the time zone
# cat /etc/openstack-dashboard/local_settings.py|grep -v "#"|grep -v ^$
2. Fix the ownership of the dashboard secret-key file
# chown www-data:www-data /var/lib/openstack-dashboard/secret_key
# service apache2 reload ##reload the web server configuration
3. Verify the dashboard service
Browse to http://controller/horizon to access the dashboard.
Authenticate using the admin or demo user and the default domain.
A roundup of problems encountered during OpenStack installation: https://www.jb51.net/article/111508.htm
This article describes a single-node install of Mitaka Horizon on Ubuntu 16.04: http://www.aboutyun.com/home.php?mod=space&uid=1310&do=blog&quickforward=1&id=3126
https://ask.openstack.org/en/question/6483/havana-dashboard-internal-server-error/
https://stackoverflow.com/questions/42632130/cant-launch-openstack-horizon-dashboard-ioerror-errno-13-permission-denied
When visiting http://controller/horizon/, the timeout error no longer appears, but a "cannot access service" error does.
The Apache log (/var/log/apache2/error.log) shows the error:
"Truncated or oversized response headers received from daemon process 'horizon':
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi"
Solution: add one line to /etc/apache2/conf-available/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}
This directive forces Horizon to run in the main Python interpreter instead of a mod_wsgi sub-interpreter, which is what causes the truncated response headers here.