OpenStack (Icehouse) Installation Notes (Unfinished)

Instances fail to obtain an IP address dynamically at creation time, most likely because of an earlier neutron misconfiguration. For lack of time, the OpenStack experiment has to be shelved for now; cinder (persistent block storage) is also still missing.

I. Control Node (Controller)

master1 serves as the control node, master2 as the compute node, and master3 as the network node.

1. Environment preparation

1.0 OpenStack package repositories

It is best to experiment with a newer release and configure it against the official documentation; the configuration steps differ from release to release:
https://repos.fedorapeople.org/repos/openstack/

Older (EOL) releases:
https://repos.fedorapeople.org/repos/openstack/EOL/ 

If you must experiment with an old release, pair it with a matching archived distro (CD) repository:
http://vault.centos.org/

1.1 Node hardware and NICs

Give master1 (control node) at least 2 GB of RAM and two NICs: eth0 for internal traffic, eth1 for external access;


Enable CPU virtualization for master2 (compute node) and give it at least 2 GB of RAM, plus two NICs: eth0 for internal traffic, eth1 for the GRE tunnel to the network node;


Give master3 (network node) three NICs: eth0 for internal traffic, eth1 for the GRE tunnel to the compute node, and eth2 for external access;



Remember to add the following to the NIC configuration files on every node:
NM_CONTROLLED='no'
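
A minimal sketch of what such a NIC configuration file might look like (the device name and addressing below are assumptions for this lab; the essential line is NM_CONTROLLED):

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch; addresses are lab assumptions)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NM_CONTROLLED='no'
```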

Stop and disable NetworkManager on all nodes:
[root@master1 ~]# systemctl stop NetworkManager
[root@master1 ~]# systemctl disable NetworkManager

For convenience in this lab, flush all firewall rules:
# iptables -F

Configure host entries on all nodes, then set each node's hostname:
[root@master1 ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 controller.com controller
192.168.1.2 compute1.com compute1
192.168.1.3 network1.com network1

[root@master1 ~]# scp /etc/hosts root@192.168.1.2:/etc/
[root@master1 ~]# scp /etc/hosts root@192.168.1.3:/etc/

[root@master1 ~]# hostnamectl set-hostname controller.com

[root@master2 ~]# hostnamectl set-hostname compute1.com

[root@master3 ~]# hostnamectl set-hostname network1.com

1.3 Configure NAT forwarding on master1 (so that master2 can reach the Internet)

Disable and mask firewalld:
[root@master1 ~]# systemctl stop firewalld
[root@master1 ~]# systemctl disable firewalld
[root@master1 ~]# systemctl mask firewalld

Install and start iptables:
[root@master1 ~]# yum install iptables iptables-services
[root@master1 ~]# systemctl start iptables.service
Flush the default rules:
[root@master1 ~]# iptables -F

Configure NAT:
[root@master1 ~]# iptables -t nat -A POSTROUTING -s 192.168.1.0/24 ! -d 192.168.1.0/24 -j SNAT --to-source 10.201.106.131

Check the NAT rules:
[root@master1 ~]# iptables -L -n -t nat

Enable IP forwarding on master1:
[root@master1 ~]# vim /etc/sysctl.conf 

net.ipv4.ip_forward = 1

Apply immediately:
[root@master1 ~]# sysctl -p
net.ipv4.ip_forward = 1

After setting its default gateway to master1's eth0 address, master2 can reach the Internet:
[root@master2 ~]# route add default gw 192.168.1.1
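
Note that route add does not survive a reboot; to persist the gateway, it can also be written into the NIC configuration (a sketch, assuming eth0 is master2's internal NIC):

```ini
# Appended to /etc/sysconfig/network-scripts/ifcfg-eth0 on master2 (sketch)
GATEWAY=192.168.1.1
```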

1.4 Node-to-hostname mapping after renaming

After changing a hostname with hostnamectl, log out and back in for it to take effect.
master1=controller
master2=compute1
master3=network1

1.5 Install MariaDB on master1

Install MariaDB on master1:
[root@master1 ~]# yum install mariadb-server

Create the data directory:
[root@controller ~]# mkdir -pv /mydata/data
Set ownership on the directory:
[root@controller ~]# chown -R mysql:mysql /mydata/data/

Configure:
[root@controller ~]# vim /etc/my.cnf

[mysqld]
datadir=/mydata/data
default-storage-engine = innodb
character-set-server = utf8
innodb_file_per_table = on
skip_name_resolve = on

[mysql]
default-character-set=utf8  

Start:
[root@master1 ~]# systemctl start mariadb

Set the root password (replace 'your_password'; the original's second grant repeated '%', which was presumably meant to be 'localhost'):
MariaDB [(none)]> GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY 'your_password' WITH GRANT OPTION;
MariaDB [(none)]> GRANT ALL ON *.* TO 'root'@'localhost' IDENTIFIED BY 'your_password' WITH GRANT OPTION;
Flush the privilege tables:
MariaDB [(none)]> FLUSH PRIVILEGES;

2. Keystone

2.1 Add the package repository

Icehouse repository:
https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/

[root@zz ~]# vim /etc/yum.repos.d/rdo-release.repo 

[openstack-I]
name=OpenStack I Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/
gpgcheck=0
enabled=1

# yum clean all
# yum repolist

Install:
[root@controller ~]# yum install openstack-keystone python-keystoneclient openstack-utils

2.2 Create the keystone database in MySQL and grant access

Create the database:
MariaDB [(none)]> CREATE DATABASE keystone;

Grant privileges (the password must match the connection string configured below; 'keystone' is used here):
MariaDB [(none)]> GRANT ALL ON keystone.* to 'keystone'@'%' IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL ON keystone.* to 'keystone'@'localhost' IDENTIFIED BY 'keystone';

Flush the privilege tables:
MariaDB [(none)]> FLUSH PRIVILEGES;

Initialize (sync) the keystone database:
[root@controller ~]# su -s /bin/sh -c 'keystone-manage db_sync' keystone

Inspect the database:
MariaDB [keystone]> SHOW DATABASES;
MariaDB [keystone]> USE keystone;
MariaDB [keystone]> SHOW tables;

Configure the database connection:
[root@controller ~]# vim /etc/keystone/keystone.conf 
[database]
connection=mysql://keystone:keystone@192.168.1.1/keystone

2.3 Remaining keystone initialization

Define a variable holding the token value:
[root@controller ~]# ADMIN_TOKEN=$(openssl rand -hex 10)
Save the value to a file:
[root@controller ~]# echo $ADMIN_TOKEN
2506715b010f7e9ea0e0
[root@controller ~]# echo $ADMIN_TOKEN > .admin_token.rc
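
Since openssl rand -hex 10 hex-encodes 10 random bytes, the resulting token is always 20 hexadecimal characters; a quick sanity check:

```shell
# 10 random bytes -> 20 hex characters
ADMIN_TOKEN=$(openssl rand -hex 10)
echo "${#ADMIN_TOKEN}"    # prints 20
```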

Configure:
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token=2506715b010f7e9ea0e0

Set up local PKI (certificates):
[root@controller ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

Fix permissions on the PKI directory:
[root@controller ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@controller ~]# chmod -R o-rwx /etc/keystone/ssl/

Enable and start the keystone service:
[root@controller ~]# systemctl enable openstack-keystone
[root@controller ~]# systemctl start openstack-keystone

Export keystone's TOKEN variables for token-based command authentication; with these set, keystone commands no longer need the --os-token parameter:
[root@controller ~]# export OS_SERVICE_TOKEN=$ADMIN_TOKEN
[root@controller ~]# echo $OS_SERVICE_TOKEN
2506715b010f7e9ea0e0

[root@controller ~]# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

List users (works even without the --os-token parameter):
[root@controller ~]# keystone --os-token $ADMIN_TOKEN user-list

[root@controller ~]# keystone user-list

[root@controller ~]# 

2.5 Create the administrator

Keystone command help:
[root@controller ~]# keystone help
[root@controller ~]# keystone help user-create
[root@controller ~]# keystone help role-create
[root@controller ~]# keystone help user-role-add

Create the admin user:
[root@controller ~]# keystone user-create --name=admin --pass=admin --email=admin@qq.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           admin@qq.com           |
| enabled  |               True               |
|    id    | 032b9e8e5722495c9a71c413fcb70e6e |
|   name   |              admin               |
| username |              admin               |
+----------+----------------------------------+

List users:
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+--------------+
|                id                |  name | enabled |    email     |
+----------------------------------+-------+---------+--------------+
| 032b9e8e5722495c9a71c413fcb70e6e | admin |   True  | admin@qq.com |
+----------------------------------+-------+---------+--------------+

Create a role with administrative privileges:
[root@controller ~]# keystone role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 2b54b14daca041c2a3dc66325f5048ce |
|   name   |              admin               |
+----------+----------------------------------+

List roles:
[root@controller ~]# keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 2b54b14daca041c2a3dc66325f5048ce |  admin   |
+----------------------------------+----------+

Create the admin tenant:
[root@controller ~]# keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | abfe5df994e54c6190e98e3f3f3dab38 |
|     name    |              admin               |
+-------------+----------------------------------+

Add the newly created admin user to the admin role within the admin tenant:
[root@controller ~]# keystone user-role-add --user admin --role admin --tenant admin

Add the admin user to the _member_ role as well (needed for web GUI access):
[root@controller ~]# keystone user-role-add --user admin --role _member_ --tenant admin

List the roles the user holds:
[root@controller ~]# keystone user-role-list --user admin --tenant admin
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 032b9e8e5722495c9a71c413fcb70e6e | abfe5df994e54c6190e98e3f3f3dab38 |
| 2b54b14daca041c2a3dc66325f5048ce |  admin   | 032b9e8e5722495c9a71c413fcb70e6e | abfe5df994e54c6190e98e3f3f3dab38 |
+----------------------------------+----------+----------------------------------+----------------------------------+

2.6 Create a regular user

[root@controller ~]# keystone user-create --name=demo --pass=demo --email=demo@qq.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           demo@qq.com            |
| enabled  |               True               |
|    id    | 472d9776f8984bb99a728985760ad5ba |
|   name   |               demo               |
| username |               demo               |
+----------+----------------------------------+

Create a test tenant:
[root@controller ~]# keystone tenant-create --name=demo --description="Demo Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Demo Tenant            |
|   enabled   |               True               |
|      id     | fbff77c905114d50b5be94ffd46203cd |
|     name    |               demo               |
+-------------+----------------------------------+

Put the user in the _member_ role:
[root@controller ~]# keystone user-role-add --user=demo --role=_member_ --tenant=demo

Check which roles the user belongs to:
[root@controller ~]# keystone user-role-list --tenant=demo --user=demo
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | 472d9776f8984bb99a728985760ad5ba | fbff77c905114d50b5be94ffd46203cd |
+----------------------------------+----------+----------------------------------+----------------------------------+

2.7 Create the service tenant (services installed later are added to it, as a management container)

A basic container holding the internal services.
[root@controller ~]# keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | f9f13bac5d6f40449b2e4560ab16536d |
|     name    |             service              |
+-------------+----------------------------------+

2.8 Define the service endpoint

Relevant command help:
[root@controller ~]# keystone help service-create

Add keystone to the service catalog:
[root@controller ~]# keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Identity        |
|   enabled   |               True               |
|      id     | 882370ee97724a5a93dbde574b3f9dd9 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+

List all current services:
[root@controller ~]# keystone service-list
+----------------------------------+----------+----------+--------------------+
|                id                |   name   |   type   |    description     |
+----------------------------------+----------+----------+--------------------+
| 882370ee97724a5a93dbde574b3f9dd9 | keystone | identity | OpenStack Identity |
+----------------------------------+----------+----------+--------------------+

[root@controller ~]# keystone service-list | grep -i keystone | awk '{print $2}'
882370ee97724a5a93dbde574b3f9dd9
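
This pipe works because the table is plain whitespace-separated text: awk's field 1 is the leading border character "|" and field 2 is the id column. A self-contained sketch using a sample row from the output above:

```shell
# Field 1 is the table border "|", so the id lands in field 2
row='| 882370ee97724a5a93dbde574b3f9dd9 | keystone | identity | OpenStack Identity |'
printf '%s\n' "$row" | awk '/identity/ {print $2}'
# → 882370ee97724a5a93dbde574b3f9dd9
```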

Create keystone's access endpoint:
[root@controller ~]# keystone endpoint-create --service-id=$(keystone service-list | awk '/identity/ {print $2}') \
> --publicurl=http://controller:5000/v2.0 \
> --internalurl=http://controller:5000/v2.0 \
> --adminurl=http://controller:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | d85844bf91274283bc15e97a16cde9be |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | 882370ee97724a5a93dbde574b3f9dd9 |
+-------------+----------------------------------+
[root@controller ~]# 

Inspect the endpoint:
[root@controller ~]# keystone endpoint-list
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
|                id                |   region  |          publicurl          |         internalurl         |           adminurl           |            service_id            |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
| d85844bf91274283bc15e97a16cde9be | regionOne | http://controller:5000/v2.0 | http://controller:5000/v2.0 | http://controller:35357/v2.0 | 882370ee97724a5a93dbde574b3f9dd9 |
+----------------------------------+-----------+-----------------------------+-----------------------------+------------------------------+----------------------------------+
[root@controller ~]# 

If a service was defined incorrectly, delete and recreate it:
[root@controller ~]# keystone help | grep delete
    ec2-credentials-delete
    endpoint-delete     Delete a service endpoint.
    role-delete         Delete role.
    service-delete      Delete service from Service Catalog.
    tenant-delete       Delete tenant.
    user-delete         Delete user.

Check the log:
[root@controller ~]# tail -50 /var/log/keystone/keystone.log

2.9 Switch authentication to username and password

Unset the TOKEN variables:
[root@controller ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

Test access with a username and password:
[root@controller ~]# keystone --os-username=admin --os-password=admin --os-auth-url=http://controller:35357/v2.0 token-get

The token information is returned successfully.


Declare environment variables for the username/password method:
[root@controller ~]# vim ~/.admin-openrc.sh

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

[root@controller ~]# source ~/.admin-openrc.sh

Test again; without the username/password parameters the command now succeeds:
[root@controller ~]# keystone user-list
+----------------------------------+-------+---------+--------------+
|                id                |  name | enabled |    email     |
+----------------------------------+-------+---------+--------------+
| 032b9e8e5722495c9a71c413fcb70e6e | admin |   True  | admin@qq.com |
| 472d9776f8984bb99a728985760ad5ba |  demo |   True  | demo@qq.com  |
+----------------------------------+-------+---------+--------------+

3. Glance

#### (Image Service: stores image metadata and is used in OpenStack to register, discover, and retrieve VM image files) ####

3.1 Install the glance packages

[root@controller ~]# yum install openstack-glance python-glanceclient

List its installed files:
[root@controller ~]# rpm -ql openstack-glance

3.2 Database configuration

Configure a local client connection so that no password prompt is needed:
[root@controller ~]# vim .my.cnf 

[mysql]
user=root
password=your_password
host=localhost


Create the database:
MariaDB [(none)]> CREATE DATABASE glance CHARACTER SET utf8;

Grant privileges:
MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

MariaDB [(none)]> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';

MariaDB [(none)]> FLUSH PRIVILEGES;

Initialize (sync) the database:
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Inspect:
MariaDB [(none)]> USE glance;
MariaDB [glance]> SHOW TABLES;
+------------------+
| Tables_in_glance |
+------------------+
| image_locations  |
| image_members    |
| image_properties |
…………

3.3 Edit the configuration files

Make backups first:
[root@controller ~]# cd /etc/glance/
[root@controller glance]# cp glance-api.conf{,.bak}
[root@controller glance]# cp glance-registry.conf{,.bak}

API configuration:
[root@controller ~]# vim /etc/glance/glance-api.conf

[database]
connection=mysql://glance:glance@192.168.1.1/glance

Registry configuration:
[root@controller ~]# vim /etc/glance/glance-registry.conf

[database]
connection=mysql://glance:glance@192.168.1.1/glance

Check the log for errors:
[root@controller ~]# tail -50 /var/log/glance/api.log

3.4 Add the glance user in keystone

[root@controller ~]# keystone user-create --name=glance --pass=glance --email=glance@qq.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          glance@qq.com           |
| enabled  |               True               |
|    id    | 232d3bfe23334050aa87bc4d7c6d491d |
|   name   |              glance              |
| username |              glance              |
+----------+----------------------------------+

Put the glance user in the service tenant with the admin role:
[root@controller ~]# keystone user-role-add --user=glance --tenant=service --role=admin

Verify:
[root@controller ~]# keystone user-role-list --user=glance --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | 232d3bfe23334050aa87bc4d7c6d491d | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

3.5 Continue editing the configuration files

API:
[root@controller ~]# vim /etc/glance/glance-api.conf

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance

[paste_deploy]
# authentication strategy
flavor=keystone

Registry (same settings as above; they can be pasted in directly):
[root@controller ~]# vim /etc/glance/glance-registry.conf
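
Since the registry takes exactly the same auth settings as the API file, here they are spelled out for completeness (same values as in glance-api.conf above):

```ini
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_tenant_name=service
admin_user=glance
admin_password=glance

[paste_deploy]
flavor=keystone
```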


3.6 Add the endpoint

Create the service:
[root@controller ~]# keystone service-create --name=glance --type=image --description="OpenStack Image Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |     OpenStack Image Service      |
|   enabled   |               True               |
|      id     | a3e76b6f69014258be5bca4463de201b |
|     name    |              glance              |
|     type    |              image               |
+-------------+----------------------------------+

Create an endpoint for the service:
# keystone endpoint-create --service-id=$(keystone service-list | awk '/image/{print $2}') \
--publicurl=http://controller:9292 \
--internalurl=http://controller:9292 \
--adminurl=http://controller:9292

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9292      |
|      id     | 947dbd1ba3284e6d81f531b4ec8ecb39 |
| internalurl |      http://controller:9292      |
|  publicurl  |      http://controller:9292      |
|    region   |            regionOne             |
|  service_id | a3e76b6f69014258be5bca4463de201b |
+-------------+----------------------------------+

Start and enable the api and registry services:
[root@controller ~]# for svc in api registry;do systemctl start openstack-glance-$svc;systemctl enable openstack-glance-$svc;done

Both services are running normally.


Check the logs for errors:
[root@controller ~]# tail -50 /var/log/glance/api.log 
[root@controller ~]# tail -50 /var/log/glance/registry.log 

3.7 Upload and store disk image files

If glance commands fail and glance-api reports an authentication error, check the keystone log; it may be full of "could not find user / tenant / role" warnings. Try restarting the database service. During my run keystone logged many missing-user errors, and the database hung and could not be stopped normally; it recovered after being killed forcibly and restarted.

The errors as logged:
[root@controller ~]# glance image-list
Request returned failure status.
Invalid OpenStack Identity credentials.

2018-04-30 00:41:13.258 11637 WARNING keystoneclient.middleware.auth_token [-] Verify error: Command 'openssl' returned non-zero exit status 4
2018-04-30 00:41:13.260 11637 WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
2018-04-30 00:41:13.262 11637 INFO keystoneclient.middleware.auth_token [-] Invalid user token - deferring reject downstream
2018-04-30 00:41:13.264 11637 INFO glance.wsgi.server [-] 10.201.106.131 - - [30/Apr/2018 00:41:13] "GET /v1/images/detail?sort_key=name&sort_dir=asc&limit=20 HTTP/1.

2018-04-30 00:38:19.654 11617 WARNING keystone.common.wsgi [-] Could not find user, glance.
2018-04-30 00:38:19.745 11617 WARNING keystone.common.wsgi [-] Could not find role, admin.
2018-04-30 00:38:19.862 11617 WARNING keystone.common.wsgi [-] Could not find project, service.


Workaround:
[root@controller ~]# killall mariadb
[root@controller ~]# pkill mariadb
[root@controller ~]# systemctl start mariadb

[root@controller ~]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+

Default image directory:
filesystem_store_datadir=/var/lib/glance/images/
[root@controller ~]# ll -d /var/lib/glance/images/
drwxr-xr-x 2 glance glance 6 Apr 29 22:33 /var/lib/glance/images/

If you point it at another path, remember to adjust the owner, group, and permissions.

Creating disk image files; check the help first:
[root@controller ~]# glance help image-create

Create from image files available online:
[root@controller ~]# ls
cirros-no_cloud-0.3.0-i386-disk.img  cirros-no_cloud-0.3.0-x86_64-disk.img

Create and upload the images. Install qemu-img first to inspect the image format:
[root@controller ~]# yum install qemu-img

[root@controller ~]# qemu-img info cirros-no_cloud-0.3.0-i386-disk.img 
image: cirros-no_cloud-0.3.0-i386-disk.img
file format: qcow2

Upload:
[root@controller ~]# glance image-create --name=cirros-0.3.0-i386 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-i386-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ccdb7b71efb7cbae0ea4a437f55a5eb9     |
| container_format | bare                                 |
| created_at       | 2018-04-30T03:11:58                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 18a0019f-48e5-4f78-9f13-1166b4d53a12 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-i386                    |
| owner            | abfe5df994e54c6190e98e3f3f3dab38     |
| protected        | False                                |
| size             | 11010048                             |
| status           | active                               |
| updated_at       | 2018-04-30T03:11:58                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

Upload the 64-bit one too:
[root@controller ~]# glance image-create --name=cirros-0.3.0-x86_64 --disk-format=qcow2 --container-format=bare --is-public=true < /root/cirros-no_cloud-0.3.0-x86_64-disk.img 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 2b35be965df142f00026123a0fae4aa6     |
| container_format | bare                                 |
| created_at       | 2018-04-30T03:16:13                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | ca9993b8-91d5-44d1-889b-5496fd62114c |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros-0.3.0-x86_64                  |
| owner            | abfe5df994e54c6190e98e3f3f3dab38     |
| protected        | False                                |
| size             | 11468800                             |
| status           | active                               |
| updated_at       | 2018-04-30T03:16:14                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

List the image files:
[root@controller ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | qcow2       | bare             | 11010048 | active |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | qcow2       | bare             | 11468800 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+

(The stored file names match the image IDs.)
[root@controller ~]# ls -lht /var/lib/glance/images/
total 22M
-rw-r----- 1 glance glance 11M Apr 30 11:16 ca9993b8-91d5-44d1-889b-5496fd62114c
-rw-r----- 1 glance glance 11M Apr 30 11:11 18a0019f-48e5-4f78-9f13-1166b4d53a12

3.8 Other glance commands

Show detailed image information:
[root@controller ~]# glance image-show cirros-0.3.0-i386

Download a disk image file:
[root@controller ~]# glance image-download --file=/tmp/cirros-0.3.0-i386.img --progress cirros-0.3.0-i386
[=============================>] 100%
[root@controller ~]# ls /tmp/cirros-0.3.0-i386.img 
/tmp/cirros-0.3.0-i386.img

4. Nova (controller node)

4.0 Install Qpid (message queue)

Install:
[root@controller ~]# yum install qpid-cpp-server

Disable authentication:
[root@controller ~]# vim /etc/qpid/qpidd.conf 

auth=no

Start the Qpid service:
[root@controller ~]# systemctl start qpidd
[root@controller ~]# systemctl status qpidd
[root@controller ~]# systemctl enable qpidd

4.1 Install the nova packages

[root@controller ~]# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

List the installed files:
[root@controller ~]# rpm -ql openstack-nova-api
[root@controller ~]# rpm -ql openstack-nova-console

4.2 Database configuration

Create the nova database:
MariaDB [(none)]> CREATE DATABASE nova CHARACTER SET 'utf8';

Grant privileges to the database user:
MariaDB [(none)]> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> FLUSH PRIVILEGES;

Check the grants:
MariaDB [mysql]> USE mysql;
MariaDB [mysql]> SHOW GRANTS FOR 'nova';

Configure the database connection:
[root@controller ~]# cd /etc/nova/
[root@controller nova]# cp nova.conf{,.bak} 

[root@controller ~]# vim /etc/nova/nova.conf

[database]
connection=mysql://nova:nova@192.168.1.1/nova

Other settings:
[root@controller ~]# vim /etc/nova/nova.conf
# RPC backend (message queue)
[DEFAULT]
qpid_hostname=controller    
rpc_backend=qpid

my_ip=192.168.1.1
vncserver_listen=192.168.1.1
vncserver_proxyclient_address=192.168.1.1

Initialize (sync) the database:
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Inspect the tables (the Icehouse release has over 100):
MariaDB [(none)]> USE nova;
MariaDB [nova]> SHOW TABLES;


Check the log for errors:
[root@controller ~]# tail -50 /var/log/nova/nova-manage.log

4.3 Create the nova user in keystone and assign roles

[root@controller ~]# keystone user-create --name=nova --pass=nova --email=nova@qq.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |           nova@qq.com            |
| enabled  |               True               |
|    id    | 772409dae5af4d819bc87e3cc90634c0 |
|   name   |               nova               |
| username |               nova               |
+----------+----------------------------------+

List users:
[root@controller ~]# keystone user-list

Add the nova user to the admin role and the service tenant:
[root@controller ~]# keystone user-role-add --user=nova --role=admin --tenant=service
Verify:
[root@controller ~]# keystone user-role-list --user=nova --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | 772409dae5af4d819bc87e3cc90634c0 | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

4.4 Authentication and endpoint configuration

Nova authentication configuration:
[root@controller ~]# vim /etc/nova/nova.conf

[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
auth_version=v2.0
admin_user=nova
admin_password=nova
admin_tenant_name=service

Create the service in keystone and add its endpoint:
[root@controller ~]# keystone service-create --name=nova --type=compute --description="OpenStack Compute"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |        OpenStack Compute         |
|   enabled   |               True               |
|      id     | a5600093b48145f1a8986481c0dd30ff |
|     name    |               nova               |
|     type    |             compute              |
+-------------+----------------------------------+

Add the endpoint:
keystone endpoint-create \
--service-id=$(keystone service-list | awk '/compute/{print $2}') \
--publicurl=http://controller:8774/v2/%\(tenant_id\)s \
--internalurl=http://controller:8774/v2/%\(tenant_id\)s \
--adminurl=http://controller:8774/v2/%\(tenant_id\)s

+-------------+-----------------------------------------+
|   Property  |                  Value                  |
+-------------+-----------------------------------------+
|   adminurl  | http://controller:8774/v2/%(tenant_id)s |
|      id     |     c4d8cc632b4541288a10fff74c6bc166    |
| internalurl | http://controller:8774/v2/%(tenant_id)s |
|  publicurl  | http://controller:8774/v2/%(tenant_id)s |
|    region   |                regionOne                |
|  service_id |     a5600093b48145f1a8986481c0dd30ff    |
+-------------+-----------------------------------------+
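
The backslashes in the URLs above only protect the parentheses from the shell; what keystone must store is the literal Python format placeholder %(tenant_id)s, which nova substitutes with the caller's tenant ID at request time. The escaping can be checked with echo:

```shell
# \( and \) pass literal parentheses through the shell unchanged
echo http://controller:8774/v2/%\(tenant_id\)s
# → http://controller:8774/v2/%(tenant_id)s
```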

4.5 Start the nova services

[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy;do systemctl start openstack-nova-$svc;systemctl enable openstack-nova-$svc;done

novncproxy fails to start because the installed websockify version is too new; reference: https://www.unixhot.com/article/27

The error log:
Apr 30 22:28:14 controller systemd: Starting OpenStack Nova NoVNC Proxy Server...
Apr 30 22:28:18 controller python: detected unhandled Python exception in '/usr/bin/nova-novncproxy'
Apr 30 22:28:20 controller abrt-server: Package 'openstack-nova-novncproxy' isn't signed with proper key
Apr 30 22:28:20 controller abrt-server: 'post-create' on '/var/spool/abrt/Python-2018-04-30-22:28:19-16407' exited with 1
Apr 30 22:28:20 controller abrt-server: Deleting problem directory '/var/spool/abrt/Python-2018-04-30-22:28:19-16407'
Apr 30 22:28:20 controller nova-novncproxy: WARNING: no 'numpy' module, HyBi protocol will be slower
Apr 30 22:28:20 controller nova-novncproxy: Traceback (most recent call last):
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/bin/nova-novncproxy", line 10, in <module>
Apr 30 22:28:20 controller nova-novncproxy: sys.exit(main())
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", line 87, in main
Apr 30 22:28:20 controller nova-novncproxy: wrap_cmd=None)
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 47, in __init__
Apr 30 22:28:20 controller nova-novncproxy: ssl_target=None, *args, **kwargs)
Apr 30 22:28:20 controller nova-novncproxy: File "/usr/lib/python2.7/site-packages/websockify/websocketproxy.py", line 231, in __init__
Apr 30 22:28:20 controller nova-novncproxy: websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, **kwargs)
Apr 30 22:28:20 controller nova-novncproxy: TypeError: __init__() got an unexpected keyword argument 'no_parent'
Apr 30 22:28:20 controller systemd: openstack-nova-novncproxy.service: main process exited, code=exited, status=1/FAILURE
Apr 30 22:28:20 controller systemd: Unit openstack-nova-novncproxy.service entered failed state.

Fix:
[root@controller ~]# yum install python-pip
[root@controller ~]# /usr/bin/pip2.7 install websockify==0.5.1
[root@controller ~]# systemctl start openstack-nova-novncproxy

Check the processes:
[root@controller ~]# ps aux | grep nova | grep -v grep
nova     16617  2.8  2.2 329508 66236 ?        Ss   22:55   1:30 /usr/bin/python /usr/bin/nova-api
nova     16627  0.3  2.5 425268 72820 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-cert
nova     16634  0.3  2.5 425212 72848 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-consoleauth
nova     16650  0.3  2.5 425772 73432 ?        Ss   22:55   0:10 /usr/bin/python /usr/bin/nova-scheduler
nova     16657  2.8  1.4 302184 42920 ?        Ss   22:55   1:28 /usr/bin/python /usr/bin/nova-conductor
nova     16669  0.1  1.1 369780 34520 ?        Ssl  22:55   0:04 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
nova     16681  0.0  1.4 308920 42972 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16682  0.0  1.4 308920 42972 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16683  0.2  2.4 425308 69912 ?        S    22:55   0:08 /usr/bin/python /usr/bin/nova-conductor
nova     16684  0.2  2.4 425300 69896 ?        S    22:55   0:07 /usr/bin/python /usr/bin/nova-conductor
nova     16696  0.0  2.1 329508 61448 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16697  0.0  2.1 329508 61448 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16709  0.0  2.1 329508 61440 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
nova     16710  0.0  2.1 329508 61440 ?        S    22:55   0:00 /usr/bin/python /usr/bin/nova-api
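
As a quick sanity check, the worker counts can be tallied straight from ps. This is just a convenience sketch of my own; field 12 of `ps aux` is the script path after the interpreter, matching the listing above:

```shell
# Tally how many processes each nova binary is running; nova-api and
# nova-conductor fork extra workers, so counts above 1 there are expected.
# The [n] trick keeps the awk process itself out of the match.
ps aux | awk '/[n]ova-/{print $12}' | sort | uniq -c
```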

4.6 Another error

Log:
2018-05-01 10:15:21.566 17183 ERROR stevedore.extension [-] Could not load 'file': cannot import name util
2018-05-01 10:15:21.567 17183 ERROR stevedore.extension [-] cannot import name util
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension Traceback (most recent call last):
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     verify_requirements,
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 177, in _load_one_plugin
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     plugin = ep.load(require=verify_requirements)
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/image/download/file.py", line 23, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     import nova.virt.libvirt.utils as lv_utils
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py", line 15, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     from nova.virt.libvirt import driver
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 59, in <module>
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension     from eventlet import util as eventlet_util
2018-05-01 10:15:21.567 17183 TRACE stevedore.extension ImportError: cannot import name util

Reference: http://blog.sina.com.cn/s/blog_69a636860102v91c.html

Fix: downgrade eventlet to an older release:
[root@controller ~]# pip install eventlet==0.15.2
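
To avoid downgrading blindly, a small guard can compare the installed version first. A minimal sketch of my own (the 0.15.2 pin comes from the fix above; `sort -V` does the version-aware comparison, and the sketch only prints the command it would run):

```shell
# Downgrade eventlet only when the installed version is newer than the
# last known-good release. If eventlet is absent, do nothing.
want=0.15.2
have=$(pip show eventlet 2>/dev/null | awk '/^Version:/{print $2}')
if [ -n "$have" ] && [ "$(printf '%s\n' "$want" "$have" | sort -V | tail -n1)" != "$want" ]; then
    echo "pip install eventlet==$want"   # run this to downgrade
fi
```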

Restart all the services again:
[root@controller ~]# for svc in api cert consoleauth scheduler conductor novncproxy;do systemctl restart openstack-nova-$svc;done

Check whether the logs still show errors (go through every log file, judging by the most recent timestamps):
[root@controller ~]# date
Tue May  1 10:40:43 CST 2018
[root@controller ~]# tail -50 /var/log/nova/nova-
nova-api.log          nova-cert.log         nova-conductor.log    nova-consoleauth.log  nova-manage.log       nova-scheduler.log 

It's now past 10:40, and the restart produced no new errors. The traceback below is the stale 10:35 entry from before the fix; only deprecation warnings follow it:
2018-05-01 10:35:51.000 17790 ERROR stevedore.extension [-] Could not load 'file': cannot import name util
2018-05-01 10:35:51.000 17790 ERROR stevedore.extension [-] cannot import name util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension Traceback (most recent call last):
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 162, in _load_plugins
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     verify_requirements,
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/stevedore/extension.py", line 177, in _load_one_plugin
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     plugin = ep.load(require=verify_requirements)
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     entry = __import__(self.module_name, globals(),globals(), ['__name__'])
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/image/download/file.py", line 23, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     import nova.virt.libvirt.utils as lv_utils
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/__init__.py", line 15, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     from nova.virt.libvirt import driver
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 59, in <module>
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension     from eventlet import util as eventlet_util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension ImportError: cannot import name util
2018-05-01 10:35:51.000 17790 TRACE stevedore.extension 
2018-05-01 10:35:51.028 17790 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:26.549 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:26.602 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.
2018-05-01 10:40:27.488 17957 WARNING keystoneclient.middleware.auth_token [-] Configuring admin URI using auth fragments. This is deprecated, use 'identity_uri' instead.

4.7 Test with the nova CLI

List the disk images:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | ACTIVE |        |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

II. compute1 node

5. Nova (compute/hypervisor node)

5.1 Check for virtualization support

Check whether the CPU supports hardware virtualization:
[root@compute1 ~]# egrep --color=auto -i "(svm|vmx)" /proc/cpuinfo
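
The same check can drive the virt_type choice for nova.conf (see 5.3) automatically. A minimal sketch; the detect_virt_type helper is my own, not part of any OpenStack tooling, and the file argument just makes it easy to point at something other than /proc/cpuinfo:

```shell
# Print "kvm" when the CPU exposes vmx/svm, otherwise fall back to "qemu"
# (pure emulation), mirroring the note in section 5.3.
detect_virt_type() {
    if egrep -qi '(vmx|svm)' "${1:-/proc/cpuinfo}"; then
        echo kvm
    else
        echo qemu
    fi
}
detect_virt_type
```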

5.2 Install the compute packages

Configure the yum repo:
[root@compute1 ~]# vim /etc/yum.repos.d/C7-local.repo

[openstack-I]
name=OpenStack I Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-7/
gpgcheck=0
enabled=1

Refresh the yum cache:
# yum clean all
# yum repolist

Install:
[root@compute1 ~]# yum install openstack-nova-compute

Error:
Error: Package: python-nova-2014.1.5-1.el7.centos.noarch (Openstack-I)
           Requires: python-greenlet

The python-greenlet dependency is not in EPEL, so download the package manually:
[root@compute1 ~]# ls python-greenlet-0.4.2-4.el7.x86_64.rpm 
python-greenlet-0.4.2-4.el7.x86_64.rpm
[root@compute1 ~]# yum install -y python-greenlet-0.4.2-4.el7.x86_64.rpm

Install again:
[root@compute1 ~]# yum install openstack-nova-compute

5.3 Configuration

Back up the config:
[root@compute1 ~]# cd /etc/nova/
[root@compute1 nova]# cp nova.conf{,.bak}

Edit the config:
[root@compute1 ~]# vim /etc/nova/nova.conf

# Database connection:
connection=mysql://nova:nova@192.168.1.1/nova

# Qpid message broker:
qpid_hostname=192.168.1.1
rpc_backend=qpid

# Authentication
[DEFAULT]
auth_strategy=keystone

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
auth_version=v2.0
admin_user=nova
admin_password=nova
admin_tenant_name=service

# Glance
glance_host=controller

# VNC:
my_ip=192.168.1.2
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.2
vnc_enabled=true
novncproxy_base_url=http://controller:6080/vnc_auto.html

# Hypervisor type (requires hardware KVM support; otherwise set qemu)
virt_type=kvm

# Seconds to wait for VIF plugging before reporting an error
vif_plugging_timeout=10

# Still boot the instance even if VIF plugging fails
vif_plugging_is_fatal=false

5.4 Start the services

Confirm the kvm kernel modules are loaded:
[root@compute1 ~]# lsmod | grep kvm
kvm_intel             162153  0 
kvm                   525259  1 kvm_intel

Start libvirtd first:
[root@compute1 ~]# systemctl start libvirtd.service

Start messagebus (the D-Bus service):
[root@compute1 ~]# systemctl start messagebus

Start openstack-nova-compute:
[root@compute1 ~]# systemctl start openstack-nova-compute

The eventlet import error appears again, and installing the compute packages pulled in updates to other components that broke rpm/yum.

Error output:
[root@compute1 ~]# rpm
error: Failed to initialize NSS library

error: Failed to initialize NSS library
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:

   cannot import name ts

Please install a package which provides this module, or
verify that the module is installed correctly.

It's possible that the above module doesn't match the
current version of Python, which is:
2.7.5 (default, Nov 20 2015, 02:00:19) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-4)]

Fix (reference: https://www.huangzz.xyz/jie-jue-failed-to-initialize-nss-library-de-wen-ti.html):
[root@compute1 ~]# wget https://www.huangzz.xyz/wp-content/uploads/MyUploads/libnspr4.so.tar
[root@compute1 ~]# tar xf libnspr4.so.tar
[root@compute1 ~]# mv libnspr4.so /usr/lib64/
mv: overwrite ‘/usr/lib64/libnspr4.so’? y
[root@compute1 ~]# yum install glibc.i686 nspr

Install the older eventlet:
[root@compute1 ~]# yum install python-pip
[root@compute1 ~]# pip install eventlet==0.15.2

openstack-nova-compute can now be started:
[root@compute1 ~]# systemctl start openstack-nova-compute

5.5 Yet another error, libvirt-related (a huge pitfall; back up nova's config first)

This is a deep pitfall: back up the nova config file, or even snapshot the machine, before proceeding. The installed libvirt seems to be incompatible with this release of nova-compute.
Error logs:
May 01 14:50:47 compute1.com libvirtd[4336]: 2018-05-01 06:50:47.622+0000: 4341: error : virDBusCall:1570 : error from service: CheckAuthorization: Connection is closed
May 01 14:50:47 compute1.com libvirtd[4336]: 2018-05-01 06:50:47.622+0000: 4336: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error
May 01 14:50:55 compute1.com libvirtd[4336]: 2018-05-01 06:50:55.209+0000: 4340: error : virDBusCall:1570 : error from service: CheckAuthorization: Connection is closed
May 01 14:50:55 compute1.com libvirtd[4336]: 2018-05-01 06:50:55.617+0000: 4336: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error

2018-05-01 14:41:43.839 4248 INFO oslo.messaging._drivers.impl_qpid [-] Connected to AMQP server on 192.168.1.1:5672
2018-05-01 14:50:22.359 4248 WARNING nova.virt.libvirt.driver [-] Connection to libvirt lost: 0
2018-05-01 14:50:47.623 4248 ERROR nova.virt.libvirt.driver [-] Connection to libvirt failed: error from service: CheckAuthorization: Connection is closed
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 789, in _connect
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     libvirt.openAuth, uri, auth, flags)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     rv = execute(f, *args, **kwargs)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     six.reraise(c, e, tb)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     rv = meth(*args, **kwargs)
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver     if ret is None:raise libvirtError('virConnectOpenAuth() failed')
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver libvirtError: error from service: CheckAuthorization: Connection is closed
2018-05-01 14:50:47.623 4248 TRACE nova.virt.libvirt.driver 
2018-05-01 14:50:47.766 4248 ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager.update_available_resource: Connection to the hypervisor is broken on host: compute1.com
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py", line 182, in run_periodic_tasks
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     task(self, context)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5529, in update_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     rt.update_available_resource(context)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py", line 249, in inner
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return f(*args, **kwargs)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 293, in update_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     resources = self.driver.get_available_resource(self.nodename)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4204, in get_available_resource
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     stats = self.get_host_stats(refresh=True)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4902, in get_host_stats
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return self.host_state.get_host_stats(refresh=refresh)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5310, in get_host_stats
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     self.update_status()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5344, in update_status
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     data["memory_mb"] = self.driver.get_memory_mb_total()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3839, in get_memory_mb_total
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     return self._conn.getInfo()[1]
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 723, in _get_connection
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     wrapped_conn = self._get_new_connection()
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 676, in _get_new_connection
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     wrapped_conn = self._connect(self.uri(), self.read_only)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 798, in _connect
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task     raise exception.HypervisorUnavailable(host=CONF.host)
2018-05-01 14:50:47.766 4248 TRACE nova.openstack.common.periodic_task HypervisorUnavailable: Connection to the hypervisor is broken on host: compute1.com

Fix (the libvirt originally installed was 3.2):
Point the repo at http://vault.centos.org/7.2.1511/os/x86_64/
or use the CentOS 7.2 CD/ISO as the repo.
[root@compute1 ~]# yum clean all
[root@compute1 ~]# yum repolist

Remove the installed libvirt:
[root@compute1 ~]# yum remove libvirt-daemon libvirt-libs libvirt-python

Installing libvirt 1.2.17 then complains that cyrus-sasl-lib is too new; force-downgrade it:
[root@compute1 ~]# rpm -Uvh cyrus-sasl-lib-2.1.26-19.2.el7.x86_64.rpm --force --nodeps

Reinstall (disable every other repo first, keeping only the CD repo and the OpenStack repo, so the 1.2-series libvirt is pulled in):
[root@compute1 ~]# yum install openstack-nova-compute libvirt libvirt-python

Restart messagebus (the D-Bus service):
[root@compute1 ~]# systemctl restart messagebus

Start libvirtd:
[root@compute1 ~]# systemctl start libvirtd

It still logs errors; leaving them be.

Start openstack-nova-compute:
[root@compute1 ~]# systemctl start openstack-nova-compute.service

Enable the services at boot:
[root@compute1 ~]# systemctl enable libvirtd
[root@compute1 ~]# systemctl enable messagebus
[root@compute1 ~]# systemctl enable openstack-nova-compute

5.6 Verification

Back on the controller, verify that the compute1 node has registered:
[root@controller ~]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | compute1.com        |
+----+---------------------+

View compute1's resource statistics:
[root@controller ~]# nova hypervisor-stats
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| count                | 1     |
| current_workload     | 0     |
| disk_available_least | 77    |
| free_disk_gb         | 78    |
| free_ram_mb          | 2286  |
| local_gb             | 78    |
| local_gb_used        | 0     |
| memory_mb            | 2798  |
| memory_mb_used       | 512   |
| running_vms          | 0     |
| vcpus                | 2     |
| vcpus_used           | 0     |
+----------------------+-------+

Show compute1's details:
[root@controller ~]# nova hypervisor-show compute1.com

III. Network configuration

6. Neutron Server (controller)

The third node, network1, now comes into play.

6.1 Configure the neutron database

Create the database:
MariaDB [(none)]> CREATE DATABASE neutron CHARACTER SET 'utf8';

Grant privileges:
MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> FLUSH PRIVILEGES;

6.2 Create the neutron user in Keystone

[root@controller ~]# keystone user-create --name=neutron --pass=neutron --email=neutron@qq.com
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |          neutron@qq.com          |
| enabled  |               True               |
|    id    | fa48f4bfed2746d2b2711c46da825407 |
|   name   |             neutron              |
| username |             neutron              |
+----------+----------------------------------+

Add the user to the admin role within the service tenant:
[root@controller ~]# keystone user-role-add --user=neutron --tenant=service --role=admin

Verify:
[root@controller ~]# keystone user-role-list --user=neutron --tenant=service
+----------------------------------+-------+----------------------------------+----------------------------------+
|                id                |  name |             user_id              |            tenant_id             |
+----------------------------------+-------+----------------------------------+----------------------------------+
| 2b54b14daca041c2a3dc66325f5048ce | admin | fa48f4bfed2746d2b2711c46da825407 | f9f13bac5d6f40449b2e4560ab16536d |
+----------------------------------+-------+----------------------------------+----------------------------------+

6.3 Create the neutron service and endpoint

Add the service:
[root@controller ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       OpenStack Networking       |
|   enabled   |               True               |
|      id     | da34b8a9c89446c6901888e27db931e3 |
|     name    |             neutron              |
|     type    |             network              |
+-------------+----------------------------------+

Add the endpoint:
keystone endpoint-create \
--service-id $(keystone service-list | awk '/network/{print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |      http://controller:9696      |
|      id     | 6f9f3b37e1d9451896a36bfed1ed1536 |
| internalurl |      http://controller:9696      |
|  publicurl  |      http://controller:9696      |
|    region   |            regionOne             |
|  service_id | da34b8a9c89446c6901888e27db931e3 |
+-------------+----------------------------------+

6.4 Install the neutron packages

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 python-neutronclient

6.5 Configure neutron

Back up the config:
[root@controller neutron]# cd /etc/neutron/
[root@controller neutron]# cp neutron.conf neutron.conf.bak

Look up the ID of the service tenant:
[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| abfe5df994e54c6190e98e3f3f3dab38 |  admin  |   True  |
| fbff77c905114d50b5be94ffd46203cd |   demo  |   True  |
| f9f13bac5d6f40449b2e4560ab16536d | service |   True  |
+----------------------------------+---------+---------+

Edit the config:
connection = mysql://neutron:neutron@192.168.1.1:3306/neutron

[DEFAULT]
# Verbose logging
verbose = True
auth_strategy = keystone

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.1.1

# Notify nova when the network topology changes
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = f9f13bac5d6f40449b2e4560ab16536d
nova_admin_password = nova
nova_admin_auth_url = http://controller:35357/v2.0

# Core networking plugin:
core_plugin = ml2
service_plugins = router
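
The nova_admin_tenant_id above does not have to be copied by hand; it can be parsed out of `keystone tenant-list`. A sketch of the parsing, demonstrated here against the table captured earlier (on a live node you would pipe the real command instead):

```shell
# Sample of the `keystone tenant-list` output shown above.
tenant_table='| abfe5df994e54c6190e98e3f3f3dab38 |  admin  |   True  |
| fbff77c905114d50b5be94ffd46203cd |   demo  |   True  |
| f9f13bac5d6f40449b2e4560ab16536d | service |   True  |'

# Live version: keystone tenant-list | awk '/ service /{print $2}'
echo "$tenant_table" | awk '/ service /{print $2}'
```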

6.6 Configure the ML2 plugin

Back up:
[root@controller ~]# cd /etc/neutron/plugins/ml2/
[root@controller ml2]# cp ml2_conf.ini ml2_conf.ini.bak

Edit:
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

6.9 Configure Nova to use Neutron

Edit nova.conf on the controller:
[root@controller ~]# vim /etc/nova/nova.conf

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Create the symlink:
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@controller ~]# ll /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 May  2 01:36 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini

6.10 Restart the Nova services

[root@controller ~]# systemctl restart openstack-nova-api
[root@controller ~]# systemctl restart openstack-nova-scheduler
[root@controller ~]# systemctl restart openstack-nova-conductor

6.11 Start neutron-server

[root@controller ~]# systemctl start neutron-server
[root@controller ~]# systemctl enable neutron-server

Check the service status with systemctl.

Check the log:
[root@controller ~]# tail -60 /var/log/neutron/server.log

Posts online say it is normal for this log to complain about not finding the plugin, so I'll take their word for it.

[root@controller ~]# grep -i "error" /var/log/neutron/server.log 
[root@controller ~]# grep -i "fa" /var/log/neutron/server.log 

7. Network node (network1)

7.1 Configure the repo for the matching OpenStack release

7.2 Kernel parameters

[root@network1 ~]# vim /etc/sysctl.conf 

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply:
[root@network1 ~]# sysctl -p

7.3 Install the neutron packages (EPEL is needed for dependencies; just make sure the openvswitch package itself comes from the OpenStack repo)

[root@network1 ~]# yum install python-greenlet-0.4.2-4.el7.x86_64.rpm

[root@network1 ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

7.4 Configure neutron

Back up:
[root@network1 ~]# cd /etc/neutron/
[root@network1 neutron]# cp neutron.conf neutron.conf.bak

Edit:
[root@network1 ~]# vim /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

#QPID
rpc_backend=neutron.openstack.common.rpc.impl_qpid

qpid_hostname = 192.168.1.1

core_plugin = ml2

service_plugins = router

7.5 Configure the L3 agent

[root@network1 ~]# cd /etc/neutron/
[root@network1 neutron]# cp l3_agent.ini l3_agent.ini.bak

[root@network1 ~]# vim /etc/neutron/l3_agent.ini 

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True

7.6 Configure the DHCP agent

[root@network1 ~]# vim /etc/neutron/dhcp_agent.ini 

[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Force a smaller frame size for instances:
[root@network1 ~]# vim /etc/neutron/dnsmasq-neutron.conf

# Cap the instance MTU so frames fit inside the GRE tunnel
dhcp-option-force=26,1454
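
Where 1454 comes from is worth spelling out. The arithmetic below is my own back-of-envelope reading, not from any official source: GRE encapsulation adds an outer IPv4 header plus a GRE header to each tunnelled frame, so the guest MTU must stay below the physical MTU minus that overhead, and 1454 simply leaves extra headroom.

```shell
# Back-of-envelope MTU budget for a GRE tenant network (assumed figures):
phys=1500   # physical NIC MTU
ip=20       # outer IPv4 header
gre=4       # basic GRE header
echo $((phys - ip - gre))   # 1476 is the theoretical ceiling; 1454 adds margin
```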

7.7 Configure the metadata agent

[root@network1 ~]# vim /etc/neutron/metadata_agent.ini 

verbose = True
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = neutron
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET

7.8 Back on the controller, configure metadata support

[root@controller ~]# vim /etc/nova/nova.conf

service_neutron_metadata_proxy=true
neutron_metadata_proxy_shared_secret=METADATA_SECRET

Restart nova-api:
[root@controller ~]# systemctl restart openstack-nova-api

7.9 Back on network1, configure the L2 (ML2) plugin

[root@network1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini 

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# Extra section, added by hand:
[ovs]
local_ip = 192.168.2.254
tunnel_type = gre
enable_tunneling = True

7.10 Start and configure Open vSwitch on network1

Start openvswitch:
[root@network1 ~]# systemctl start openvswitch
[root@network1 ~]# systemctl enable openvswitch

Add the integration (internal) bridge:
[root@network1 ~]# ovs-vsctl add-br br-in

Add the external bridge:
[root@network1 ~]# ovs-vsctl add-br br-ex

Strip the address, gateway, and netmask from eth2, move the address onto the br-ex external bridge, and add eth2 as a port of br-ex:
[root@network1 ~]# ifconfig eth2 0; ifconfig br-ex 10.201.106.133/24 up; ovs-vsctl add-port br-ex eth2

Also remove the address, gateway, and netmask from eth2's interface config file.

Set the default route:
[root@network1 ~]# route add default gw 10.201.106.2

Verify:
[root@network1 ~]# ovs-vsctl show
c95ca634-3c90-4aca-ae62-21d4f740e3b5
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"

Set the external bridge ID:
[root@network1 ~]# ovs-vsctl br-set-external-id br-ex bridge-id br-ex

Disable GRO on the external NIC eth2 for better throughput:
[root@network1 ~]# ethtool -K eth2 gro off

7.11 Other configuration

Symlink the L2 config:
[root@network1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Due to a packaging bug, the openvswitch agent's systemd unit file needs editing.
Back it up first:
[root@network1 ~]# rpm -ql openstack-neutron-openvswitch | grep agent.service
/usr/lib/systemd/system/neutron-openvswitch-agent.service

[root@network1 ~]# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service{,.orig}

Substitute (a vim command):
[root@network1 ~]# vim /usr/lib/systemd/system/neutron-openvswitch-agent.service 

:%s@plugins/openvswitch/ovs_neutron_plugin.ini@plugin.ini@ig
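
The same substitution can be scripted with sed instead of vim, which helps when repeating it on the compute node. A sketch demonstrated on a scratch file with a made-up ExecStart line (hypothetical; the real unit file's line is longer). On the node you would target /usr/lib/systemd/system/neutron-openvswitch-agent.service after the backup and then run systemctl daemon-reload:

```shell
# Rewrite the plugin path in a (sample) unit file, non-interactively.
scratch=/tmp/novsa.service
echo 'ExecStart=/usr/bin/neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini' > "$scratch"
sed -i 's@plugins/openvswitch/ovs_neutron_plugin.ini@plugin.ini@g' "$scratch"
cat "$scratch"
```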

Refresh the service links:
[root@network1 ~]# systemctl disable neutron-openvswitch-agent
[root@network1 ~]# systemctl enable neutron-openvswitch-agent

7.12 Start the network1 services

[root@network1 ~]# for svc in openvswitch l3 dhcp metadata;do systemctl start neutron-${svc}-agent;systemctl enable neutron-${svc}-agent;done

Check the status:
[root@network1 ~]# for svc in openvswitch l3 dhcp metadata;do systemctl status neutron-${svc}-agent;done

Check the logs:
[root@network1 ~]# tail -50 /var/log/neutron/
dhcp-agent.log         l3-agent.log           metadata-agent.log     openvswitch-agent.log 

8. Network configuration on the compute1 node

8.1 Kernel network parameters

[root@compute1 ~]# vim /etc/sysctl.conf 

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-arptables=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Apply:
[root@compute1 ~]# sysctl -p

8.2 Install the packages (dependencies need EPEL)

[root@compute1 ~]# yum install openstack-neutron-ml2 openstack-neutron-openvswitch

8.3 Configuration

Back up:
[root@compute1 ~]# cp /etc/neutron/neutron.conf{,.bak}

[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_protocol=http
auth_port=35357
admin_user=neutron
admin_password=neutron
admin_tenant_name=service

# QPID
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname = 192.168.1.1

core_plugin = ml2
service_plugins = router

8.4 ML2 configuration

Make a backup first:
[root@compute1 ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
[root@compute1 ~]# ls /etc/neutron/plugins/ml2/ml2_conf.ini*
/etc/neutron/plugins/ml2/ml2_conf.ini  /etc/neutron/plugins/ml2/ml2_conf.ini.bak

Copy the configuration over from network1:
[root@network1 ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.1.2:/etc/neutron/plugins/ml2/

Fix the ownership:
chown root:neutron /etc/neutron/plugins/ml2/ml2_conf.ini

Then adjust:
[root@compute1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ovs]
#change to this node's own eth1 address
local_ip = 192.168.2.2
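This edit can also be scripted; a sketch that rewrites the local_ip value with sed, demonstrated on a scratch copy of the [ovs] section (on compute1 the target is /etc/neutron/plugins/ml2/ml2_conf.ini):

```shell
# Sketch: set local_ip without opening an editor. Shown on a scratch file
# with a placeholder value; on compute1, point ini at
# /etc/neutron/plugins/ml2/ml2_conf.ini.
ini=$(mktemp)
printf '[ovs]\nlocal_ip = 192.168.2.3\n' > "$ini"
sed -i 's/^local_ip *=.*/local_ip = 192.168.2.2/' "$ini"
grep '^local_ip' "$ini"   # confirm the new value
```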

8.5 Start and configure openvswitch

[root@compute1 ~]# systemctl start openvswitch
[root@compute1 ~]# systemctl enable openvswitch

Add the internal bridge:
[root@compute1 ~]# ovs-vsctl add-br br-in

8.6 Edit the nova configuration file

[root@compute1 ~]# vim /etc/nova/nova.conf

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_auth_url=http://controller:5000/v2.0

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

8.7 Start the services

Create the L2 plugin config symlink:
[root@compute1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Restart nova-compute on compute1:
[root@compute1 ~]# systemctl restart openstack-nova-compute

Due to the same packaging bug as on network1, the neutron-openvswitch-agent unit file needs the plugin-path substitution here as well.
Back it up:
[root@compute1 ~]# rpm -ql openstack-neutron-openvswitch | grep agent.service
/usr/lib/systemd/system/neutron-openvswitch-agent.service

[root@compute1 ~]# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service{,.orig}

Apply the same substitution as before:
[root@compute1 ~]# vim /usr/lib/systemd/system/neutron-openvswitch-agent.service
:%s@plugins/openvswitch/ovs_neutron_plugin.ini@plugin.ini@ig

Start the neutron agent:
[root@compute1 ~]# systemctl start neutron-openvswitch-agent
[root@compute1 ~]# systemctl enable neutron-openvswitch-agent

If it fails to start, check the owner and group of the configuration files. The service runs as the neutron user; if the files were copied over as root and are owned root:root, the agent cannot read them. The expected ownership is root:neutron (group neutron):

[root@network1 ~]# ll /etc/neutron/plugins/ml2/ml2_conf.ini 
-rw-r----- 1 root neutron 2567 May  2 16:21 /etc/neutron/plugins/ml2/ml2_conf.ini
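If the ownership did end up wrong, it can be restored like this; a sketch on a scratch file (on the real node the target is the ml2_conf.ini above, and the chown needs root plus an existing neutron group, so it is left commented):

```shell
# Sketch: restore the expected root:neutron ownership and 640 mode.
# Demonstrated on a scratch file; on compute1/network1 the target is
# /etc/neutron/plugins/ml2/ml2_conf.ini.
f=$(mktemp)
# chown root:neutron "$f"    # requires root and the neutron group
chmod 640 "$f"               # -rw-r----- , matching the listing above
stat -c '%a' "$f"
```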

8.8 Verification

Run a network command on the controller:
[root@controller ~]# neutron net-list

[root@controller ~]# 
No error; the list is empty by default.

If you get an authentication error, change auth_uri to identity_uri in the neutron configuration, then restart all neutron services:

[root@network1 ~]# for i in openvswitch l3 dhcp metadata;do systemctl restart neutron-${i}-agent;done

[root@compute1 ~]# systemctl restart neutron-openvswitch-agent

Verify:
Load the admin environment file on compute1 and network1:
vim ~/.admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0

source ~/.admin-openrc.sh

Then run neutron net-list; if it returns without errors, the setup works.
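Before calling the CLI, it is worth confirming the variables are actually exported; a sketch using the values from the rc file above:

```shell
# Sketch: confirm all four OS_* variables are set after sourcing the rc
# file (the exports stand in for the `source` step above).
export OS_USERNAME=admin OS_PASSWORD=admin OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
for v in OS_USERNAME OS_PASSWORD OS_TENANT_NAME OS_AUTH_URL; do
  if ! printenv "$v" >/dev/null; then
    echo "missing: $v"
  fi
done
```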

9. Configure networks with neutron

9.1 Create the external network

Create the external network:
[root@controller ~]# neutron net-create ext-net --shared --router:external=True 
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 765c7736-23e1-4628-a30f-8c7a6b3fb112 |
| name                      | ext-net                              |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | abfe5df994e54c6190e98e3f3f3dab38     |
+---------------------------+--------------------------------------+

On top of that network, create a subnet with DHCP disabled, specifying the allocation range and gateway (the L3 piece):
[root@controller ~]# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.201.106.150,end=10.201.106.180 --disable-dhcp --gateway 10.201.106.2 10.201.106.0/24
Created a new subnet:
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| allocation_pools | {"start": "10.201.106.150", "end": "10.201.106.180"} |
| cidr             | 10.201.106.0/24                                      |
| dns_nameservers  |                                                      |
| enable_dhcp      | False                                                |
| gateway_ip       | 10.201.106.2                                         |
| host_routes      |                                                      |
| id               | 4a1f4b34-c05d-4e0c-94a3-e793baf77903                 |
| ip_version       | 4                                                    |
| name             | ext-subnet                                           |
| network_id       | 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2                 |
| tenant_id        | abfe5df994e54c6190e98e3f3f3dab38                     |
+------------------+------------------------------------------------------+

    Since it was created as the admin user, the tenant_id is admin's:
    [root@controller ~]# keystone tenant-list
    +----------------------------------+---------+---------+
    |                id                |   name  | enabled |
    +----------------------------------+---------+---------+
    | abfe5df994e54c6190e98e3f3f3dab38 |  admin  |   True  |
    | fbff77c905114d50b5be94ffd46203cd |   demo  |   True  |
    | f9f13bac5d6f40449b2e4560ab16536d | service |   True  |
    +----------------------------------+---------+---------+

9.2 Switch to the regular user demo to manage networks

[root@controller ~]# cp .admin-openrc.sh .demo-os.sh

Edit the demo variables:
[root@controller ~]# vim .demo-os.sh 

export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:35357/v2.0

Load the variables:
[root@controller ~]# source .demo-os.sh

List the networks:
[root@controller ~]# neutron net-list
+--------------------------------------+---------+------------------------------------------------------+
| id                                   | name    | subnets                                              |
+--------------------------------------+---------+------------------------------------------------------+
| 765c7736-23e1-4628-a30f-8c7a6b3fb112 | ext-net | 9750a55a-1993-4e6a-a972-e91ffb700c08 10.201.106.0/24 |
+--------------------------------------+---------+------------------------------------------------------+

9.3 Create the tenant network as the demo user

Create the L2 network:
[root@controller ~]# neutron net-create demo-net
Created a new network:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| id             | 7f967a07-b98c-4684-ba2a-dd2ad4dc7171 |
| name           | demo-net                             |
| shared         | False                                |
| status         | ACTIVE                               |
| subnets        |                                      |
| tenant_id      | fbff77c905114d50b5be94ffd46203cd     |
+----------------+--------------------------------------+

Create the L3 subnet on top of the L2 network (DHCP is enabled by default):
[root@controller ~]# neutron subnet-create demo-net --name demo-subnet --gateway 192.168.3.254 192.168.3.0/24
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.3.1", "end": "192.168.3.253"} |
| cidr             | 192.168.3.0/24                                   |
| dns_nameservers  |                                                  |
| enable_dhcp      | True                                             |
| gateway_ip       | 192.168.3.254                                    |
| host_routes      |                                                  |
| id               | 2e2902f4-a8e7-4468-b025-e3192c107c63             |
| ip_version       | 4                                                |
| name             | demo-subnet                                      |
| network_id       | 7f967a07-b98c-4684-ba2a-dd2ad4dc7171             |
| tenant_id        | fbff77c905114d50b5be94ffd46203cd                 |
+------------------+--------------------------------------------------+

9.3.1 Manually create a router

Check the help:
[root@controller ~]# neutron help router-create

Create the router:
[root@controller ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 2f000bf3-cbc3-49b5-8470-9f5b4db54aa5 |
| name                  | demo-router                          |
| status                | ACTIVE                               |
| tenant_id             | fbff77c905114d50b5be94ffd46203cd     |
+-----------------------+--------------------------------------+

Attach an interface to the router (it becomes the subnet's gateway):
[root@controller ~]# neutron router-interface-add demo-router demo-subnet
Added interface bdefa77a-4a7b-4fdc-8196-cfdd7337a822 to router demo-router.
[root@controller ~]# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| bdefa77a-4a7b-4fdc-8196-cfdd7337a822 |      | fa:16:3e:5a:12:cf | {"subnet_id": "2e2902f4-a8e7-4468-b025-e3192c107c63", "ip_address": "192.168.3.254"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

Associate the router with the external network (mine seems to have a bug: neutron router-port-list showed no allocated external IP or port, yet deleting ext-subnet failed with an error that IPs were still allocated):
[root@controller ~]# neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router


On the network1 node:
[root@network1 ~]# ip netns list
qrouter-2f000bf3-cbc3-49b5-8470-9f5b4db54aa5
qrouter-eb098169-d68a-400e-ac5d-4b95bc6229b1

The allocated IPs can be seen inside the router namespaces.

10. Horizon (the dashboard web UI)

10.1 Install the packages

[root@controller ~]# yum install memcached python-memcached mod_wsgi openstack-dashboard

10.2 Start memcached

[root@controller ~]# systemctl enable memcached
[root@controller ~]# systemctl start memcached

[root@controller ~]# netstat -tanp | grep memcached
tcp        0      0 0.0.0.0:11211           0.0.0.0:*               LISTEN      25002/memcached     
tcp6       0      0 :::11211                :::*                    LISTEN      25002/memcached 

10.3 Configure the dashboard

Back it up:
[root@controller ~]# cp -p /etc/openstack-dashboard/local_settings{,.bak}

Edit:
[root@controller ~]# vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
#allow access from all hosts:
ALLOWED_HOSTS = ['*']

#disable the default local-memory cache:
#CACHES = {
#    'default': {
#        'BACKEND' : 'django.core.cache.backends.locmem.LocMemCache'
#    }
#}

#use memcached instead
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '192.168.1.1:11211',
    }
}

#time zone
TIME_ZONE = "Asia/Chongqing"

Start httpd:
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

10.4 Access test

Browse to: http://10.201.106.131/dashboard


Log in with a keystone user; the admin user is used here.

The demo user can log in the same way.

10.5 Fixing the dashboard's "Unable to connect to Neutron" error

Reference: https://blog.csdn.net/wmj2004/article/details/53216024

Fix:
[root@controller ~]# vim /usr/share/openstack-dashboard/openstack_dashboard/api

    def is_simple_associate_supported(self):
        def is_supported(self):
            network_config = getattr(settings, 'OPENSTACK_NEUTRON_NETWORK', {})
            return network_config.get('enable_router', True)

Restart the web service:
[root@controller ~]# systemctl restart httpd

11. Create and manage instances (VMs)

11.1 Generate a key pair for the demo user

Load the demo user variables:
[root@controller ~]# source .demo-os.sh

Generate a key pair (one already exists, so don't overwrite it):
[root@controller ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? n

Import the public key:
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub demokey

List the key pairs:
[root@controller ~]# nova keypair-list
+---------+-------------------------------------------------+
| Name    | Fingerprint                                     |
+---------+-------------------------------------------------+
| demokey | 35:78:7f:bf:9f:75:d3:ef:7a:b1:ee:a2:7f:2f:e3:27 |
+---------+-------------------------------------------------+

11.2 Preparation before launching an instance

List the built-in flavors:
[root@controller ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Create a custom flavor.
Switch to the admin user:
[root@controller ~]# source .admin-openrc.sh
Check the help:
[root@controller ~]# nova help flavor-create

Create it (ID 6, 128 MB RAM, 1 GB disk, 1 vCPU):
[root@controller ~]# nova flavor-create --is-public true m1.cirros 6 128 1 1
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@controller ~]# nova flavor-list | grep cirros
| 6  | m1.cirros | 128       | 1    | 0         |      | 1     | 1.0         | True      |

Switch back to demo:
[root@controller ~]# source .demo-os.sh 

List the available disk images:
[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| 18a0019f-48e5-4f78-9f13-1166b4d53a12 | cirros-0.3.0-i386   | ACTIVE |        |
| ca9993b8-91d5-44d1-889b-5496fd62114c | cirros-0.3.0-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+

If a regular user cannot see the images, use the admin user to make them public. Annoyingly, uploads must pass --is-public=true; earlier I had mistyped true as ture.

List the networks and subnets available to you:
[root@controller ~]# nova net-list
+--------------------------------------+----------+------+
| ID                                   | Label    | CIDR |
+--------------------------------------+----------+------+
| 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2 | ext-net  | -    |
| 7f967a07-b98c-4684-ba2a-dd2ad4dc7171 | demo-net | -    |
+--------------------------------------+----------+------+
[root@controller ~]# neutron subnet-list 
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| id                                   | name        | cidr            | allocation_pools                                     |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+
| 4a1f4b34-c05d-4e0c-94a3-e793baf77903 | ext-subnet  | 10.201.106.0/24 | {"start": "10.201.106.150", "end": "10.201.106.180"} |
| 2e2902f4-a8e7-4468-b025-e3192c107c63 | demo-subnet | 192.168.3.0/24  | {"start": "192.168.3.1", "end": "192.168.3.253"}     |
+--------------------------------------+-------------+-----------------+------------------------------------------------------+

List the available security groups:
[root@controller ~]# nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 3b28dbde-1640-4043-81c9-d01cb822020b | default | default     |
+--------------------------------------+---------+-------------+

List the rules in the default group:
[root@controller ~]# nova secgroup-list-rules default
+-------------+-----------+---------+----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+----------+--------------+
|             |           |         |          | default      |
|             |           |         |          | default      |
+-------------+-----------+---------+----------+--------------+

11.3 Launch the VM instance (attaching to a network requires the network ID)

[root@controller ~]# nova boot --flavor m1.cirros --image cirros-0.3.0-i386 --key-name demokey --nic net-id=7f967a07-b98c-4684-ba2a-dd2ad4dc7171 --security-group default demo-0001
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-STS:task_state                | scheduling                                               |
| OS-EXT-STS:vm_state                  | building                                                 |
| OS-SRV-USG:launched_at               | -                                                        |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| adminPass                            | vNpTGgs6uzDq                                             |
| config_drive                         |                                                          |
| created                              | 2018-05-03T09:43:51Z                                     |
| flavor                               | m1.cirros (6)                                            |
| hostId                               |                                                          |
| id                                   | f5a4aec6-100e-482c-9944-b319189facba                     |
| image                                | cirros-0.3.0-i386 (18a0019f-48e5-4f78-9f13-1166b4d53a12) |
| key_name                             | demokey                                                  |
| metadata                             | {}                                                       |
| name                                 | demo-0001                                                |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | BUILD                                                    |
| tenant_id                            | fbff77c905114d50b5be94ffd46203cd                         |
| updated                              | 2018-05-03T09:43:52Z                                     |
| user_id                              | 472d9776f8984bb99a728985760ad5ba                         |
+--------------------------------------+----------------------------------------------------------+
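Rather than pasting the UUID by hand, the net-id can be extracted from `neutron net-list` output; a sketch of the awk step, shown here against the listing captured earlier (on the controller you would pipe the live command instead):

```shell
# Sketch: pull the demo-net ID out of `neutron net-list` table output with
# awk. The capture below stands in for the live command.
net_list='| 8b1ce93e-d53f-4db8-9618-ea0e5a44c7e2 | ext-net  | - |
| 7f967a07-b98c-4684-ba2a-dd2ad4dc7171 | demo-net | - |'
net_id=$(printf '%s\n' "$net_list" | awk -F'|' '$3 ~ /demo-net/ {gsub(/ /, "", $2); print $2}')
echo "$net_id"
# then: nova boot ... --nic net-id="$net_id" ...
```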

Check the instance status:
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------+
| ID                                   | Name      | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+----------+
| f5a4aec6-100e-482c-9944-b319189facba | demo-0001 | BUILD  | spawning   | NOSTATE     |          |
+--------------------------------------+-----------+--------+------------+-------------+----------+

After a while, it is finally up:
[root@controller ~]# nova list
+--------------------------------------+-----------+--------+------------+-------------+----------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks             |
+--------------------------------------+-----------+--------+------------+-------------+----------------------+
| f5a4aec6-100e-482c-9944-b319189facba | demo-0001 | ACTIVE | -          | Running     | demo-net=192.168.3.1 |
+--------------------------------------+-----------+--------+------------+-------------+----------------------+

If it fails to start, check that neutron-openvswitch-agent is running on both compute1 and network1.

Get the VNC console URL:
[root@controller ~]# nova get-vnc-console demo-0001 novnc
+-------+---------------------------------------------------------------------------------+
| Type  | Url                                                                             |
+-------+---------------------------------------------------------------------------------+
| novnc | http://controller:6080/vnc_auto.html?token=c72f7bc4-e7f4-47b1-b4e5-362fcb9811c3 |
+-------+---------------------------------------------------------------------------------+

Open it in a browser: http://controller:6080/vnc_auto.html?token=c72f7bc4-e7f4-47b1-b4e5-362fcb9811c3


11.5 Security group settings

Allow ICMP (ping) through the default security group:
[root@controller ~]# source .admin-openrc.sh
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

Reposted from: https://blog.51cto.com/zhongle21/2112475
