Cloud Security: Building an OpenStack Cloud Platform, with Lessons Learned

Contents

 

Preface

1. Comparison with CloudStack

2. Setup Process

3. Problems and Solutions

3.1 Problem 1: error during neutron verification (solved)

3.2 Problem 2: instance fails to boot (solved)

3.3 Problem 3: error while verifying the block storage service (solved)

3.4 Problem 4: attaching a volume to an instance fails (solved)

4. Summary

5. References


Preface

After two days of work, I have an initial OpenStack deployment running, with all the basic features working.

1. Comparison with CloudStack

Compared with CloudStack, OpenStack is noticeably more complex to set up.

CloudStack only requires installing the management server and the client.

OpenStack, by contrast, requires many components: the identity service Keystone, image service Glance, placement service Placement, compute service Nova, networking service Neutron, web UI Horizon, and block storage service Cinder.

 

2. Setup Process

There is plenty of OpenStack documentation; for the detailed installation steps see:

Official guide: https://docs.openstack.org/install-guide/openstack-services.html

A recent tutorial: https://blog.csdn.net/chengyinwu/category_9242444.html

Setup results:

(Screenshot: a running instance)

(Screenshot: logged into the instance)

An interesting observation: although OpenStack is more complex to set up than CloudStack, comparing the two experiences, the OpenStack setup went far more smoothly and far faster.

Some likely reasons:

1. The OpenStack community is simply more active. A quick search turns up plenty of installation tutorials, many of them recent; CloudStack tutorials are scarcer and older.

2. CloudStack's installation is more automated, but when something goes wrong it can be completely unclear which part failed. OpenStack verifies each service right after it is installed, which makes troubleshooting much more targeted.

Another possible reason: I had already worked through CloudStack, and while the platforms differ, the general approach carries over. Also, OpenStack is written in Python, whereas CloudStack is Java; I know Python better, so it was easier to trace errors through the logs.

 

3. Problems and Solutions

Although the OpenStack setup went relatively smoothly, I still ran into some problems; some are solved, others were still in progress at the time of writing.


3.1 Problem 1: error during neutron verification (solved)

This is what a normal verification result looks like:

$ openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
| ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute    | None              | True  | UP    | neutron-linuxbridge-agent |
| fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

 

And this is the failing result:

[root@controller ~]# openstack network agent list
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+
| ID                                   | Agent Type     | Host       | Availability Zone | Alive | State | Binary                 |
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+
| 3a004b63-a091-4b14-acdc-efac793065f2 | DHCP agent     | controller | nova              | :-)   | UP    | neutron-dhcp-agent     |
| 71a74d59-26f7-4a32-8505-4d88a485bcf6 | Metadata agent | controller | None              | :-)   | UP    | neutron-metadata-agent |
+--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+

 

The Linux bridge agents that should be running on controller and computer1 are both missing.

 

Check the neutron-linuxbridge-agent status:

[root@controller ~]# systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Thu 2020-04-02 08:23:53 CST; 151ms ago
  Process: 4685 ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log (code=exited, status=1/FAILURE)
  Process: 4679 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 4685 (code=exited, status=1/FAILURE)

The unit definition also shows the log file location: /var/log/neutron/linuxbridge-agent.log

 

The log file contains this error message:

2020-04-02 08:24:17.914 5032 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent
.linuxbridge_neutron_agent [-] Interface eth0 for physical network provider 
does not exist. Agent terminated!

The error says the physical NIC eth0 does not exist.

Check the network interfaces:

[root@controller ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:60:42:1a brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe60:421a/64 scope link 
       valid_lft forever preferred_lft forever

The controller's NIC is actually ens33, not eth0. Either renaming the interface or fixing the linuxbridge_agent configuration would solve this; I chose to fix the configuration. This is the current (incorrect) mapping:

[root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]


[linux_bridge]
physical_interface_mappings = provider:eth0
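Under [linux_bridge] above, the provider network is still mapped to the non-existent eth0. The fix is a one-line substitution, sketched here on a throwaway copy of the config (ens33 is the real interface name found with `ip a` above):

```shell
# Demonstrate the fix on a temporary copy of the config file
conf=$(mktemp)
cat > "$conf" <<'EOF'
[linux_bridge]
physical_interface_mappings = provider:eth0
EOF
# Replace the non-existent eth0 with the actual NIC name
sed -i 's/provider:eth0/provider:ens33/' "$conf"
mapping=$(grep physical_interface_mappings "$conf")
echo "$mapping"
rm -f "$conf"
```

On the real node the same edit goes into /etc/neutron/plugins/ml2/linuxbridge_agent.ini, followed by `systemctl restart neutron-linuxbridge-agent`; the compute node needs the equivalent change for its own interface name.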

After pointing the mapping at ens33 and restarting, check whether neutron-linuxbridge-agent is running:

[root@controller ~]# systemctl status neutron-linuxbridge-agent
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-04-02 08:37:36 CST; 7s ago
  Process: 14381 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 14387 (/usr/bin/python)
   CGroup: /system.slice/neutron-linuxbridge-agent.service
           └─14387 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file ...

Problem solved:

[root@controller ~]#  openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 3a004b63-a091-4b14-acdc-efac793065f2 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 71a74d59-26f7-4a32-8505-4d88a485bcf6 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| c56d1eff-d6bf-45e6-b06a-33f7c2efaa4f | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| cbd5b8bd-4dfc-4af8-8dab-f551f761f39c | Linux bridge agent | computer1  | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

3.2 Problem 2: instance fails to boot (solved)

After creating an instance and connecting to its console, it shows:

Booting from hard disk
Boot failed. not a bootable disk

The instance was created successfully, but logging in was impossible.

Searching turned up the same problem:

https://ask.openstack.org/en/question/4117/boot-failed-not-a-bootable-disk/

The discussion there suggests the image may be at fault.

So I went back to the CirrOS download site and checked the original file. The official listing shows:

      cirros-0.4.0-x86_64-disk.img                       2017-11-19 19:59   12M

Here is the file I had downloaded earlier: only 273 bytes. (I actually noticed the odd size at download time, but did not think much of it.)

[root@controller download]# ll
total 4
-rw-r--r--. 1 root root 273 Apr  1 13:01 cirros-0.4.0-x86_64-disk.img

The sizes are clearly different. Comparing with the tutorial, its download command was:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

wget was not installed on my VM, and to conserve VM resources I downloaded with curl -O instead:

curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

And that was the problem. I used to treat the two commands as interchangeable, but they do differ.

Note:

curl can download files, but by default it does not follow HTTP redirects: curl -O simply saves whatever the server returns, even if that is a redirect page. (Adding -L makes curl follow redirects, i.e. curl -L -O would have worked.) wget's core purpose is downloading files, and it follows redirects automatically.

Downloading with wget:

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
--2020-04-02 15:28:17--  http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img [following]
--2020-04-02 15:28:18--  https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving github.com (github.com)... 13.250.177.223
Connecting to github.com (github.com)|13.250.177.223|:443... failed: Connection refused.

The link apparently redirects to GitHub in the end. That rang a bell, so I immediately checked the file I had downloaded earlier with curl -O:

[root@controller download]# cat cirros-0.4.0-x86_64-disk.img 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img">here</a>.</p>
</body></html>

Sure enough: what I had downloaded was the redirect page itself.
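This failure mode is easy to catch early: a redirect page saved under an image name is tiny and starts with HTML markup. A quick sanity check, sketched here against a fabricated file:

```shell
# Fabricate the failure: an HTML redirect page saved under an image name
printf '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n<html>302 Found</html>\n' > fake-disk.img
size=$(wc -c < fake-disk.img)
first=$(head -c 1 fake-disk.img)
# A real cirros disk image is ~12 MB and does not start with '<'
if [ "$size" -lt 10000 ] && [ "$first" = "<" ]; then
  verdict="not a disk image (HTML page, ${size} bytes)"
else
  verdict="plausible disk image"
fi
echo "$verdict"
rm -f fake-disk.img
```

Running the same two checks against the 273-byte download above would have flagged it immediately.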

After sorting out network access, re-download with wget:

[root@controller test]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
--2020-04-02 15:30:39--  http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img [following]
--2020-04-02 15:30:39--  https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving github.com (github.com)... 52.74.223.119
Connecting to github.com (github.com)|52.74.223.119|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/219785102/b2074f00-411a-11ea-9620-afb551cf9af3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200402%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200402T072953Z&X-Amz-Expires=300&X-Amz-Signature=723552189308fe93947e74365620857df661dbe4536d96181d2db1ef5aca0f1b&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcirros-0.4.0-x86_64-disk.img&response-content-type=application%2Foctet-stream [following]
--2020-04-02 15:30:40--  https://github-production-release-asset-2e65be.s3.amazonaws.com/219785102/b2074f00-411a-11ea-9620-afb551cf9af3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200402%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200402T072953Z&X-Amz-Expires=300&X-Amz-Signature=723552189308fe93947e74365620857df661dbe4536d96181d2db1ef5aca0f1b&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcirros-0.4.0-x86_64-disk.img&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.143.188
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.143.188|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12716032 (12M) [application/octet-stream]
Saving to: ‘cirros-0.4.0-x86_64-disk.img’

100%[==============================================================================>] 12,716,032  65.2KB/s   in 3m 49s 

2020-04-02 15:34:30 (54.2 KB/s) - ‘cirros-0.4.0-x86_64-disk.img’ saved [12716032/12716032]

Recreating the Glance image from the correct file fixed it; logging in again succeeded.

 

3.3 Problem 3: error while verifying the block storage service (solved)

[root@controller test]# cinder service-list
ERROR: Unable to establish connection to http://controller:8776/: HTTPConnectionPool(host='controller', port=8776): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f134fa0bb50>: Failed to establish a new connection: [Errno 111] Connection refused',))

Check whether the service is listening on its port:

[root@controller test]# ss -nplt | grep 8776
LISTEN     0      128          *:8776                     *:*                   users:(("cinder-api",pid=46305,fd=7),("cinder-api",pid=46304,fd=7),("cinder-api",pid=46303,fd=7),("cinder-api",pid=46302,fd=7),("cinder-api",pid=46233,fd=7))

Query again:

[root@controller test]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Cluster | Disabled Reason | Backend State |
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-scheduler | controller | nova | enabled | up    | 2020-04-02T08:44:20.000000 | -       | -               |               |
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
[root@controller test]# 

Most likely the service had not fully started yet: the controller node is under-provisioned, with memory usage above 90% and 1.8 GB of swap in use. After waiting a while, the service came up on its own.

[root@controller test]# free -h
              total        used        free      shared  buff/cache   available
Mem:           2.7G        2.5G        116M        2.1M         80M         45M
Swap:          3.9G        1.8G        2.1G
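Rather than re-running the verification command by hand while a starved node catches up, a tiny retry wrapper helps. A minimal sketch (retry is my own helper name, not an OpenStack tool):

```shell
# Run a command repeatedly until it succeeds or the attempt budget runs out
retry() {
  attempts=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 2
  done
}

# Trivial example: succeeds immediately. On the controller one would use
# something like: retry 30 cinder service-list
retry 5 true && result="service answered"
echo "$result"
```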

3.4 Problem 4: attaching a volume to an instance fails (solved)

[root@controller test]# openstack server add volume instance volume1
Invalid input received: Invalid volume: Volume attachments can not be created if the volume is in an error state. The Volume 27e02eb4-e33f-49c1-bf69-e4cbfbb6145a currently has a status of: error  (HTTP 400) (Request-ID: req-04b8eaef-0cde-45d7-b022-7b8986b7755c) (HTTP 400) (Request-ID: req-7b09f9a9-4ea1-48cc-82c0-b0ae55b0fee0)

Check the log:

[root@computer1 ~]# cat /var/log/cinder/volume.log 
2020-04-02 17:25:03.628 2330 INFO cinder.rpc [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Automatically selected cinder-scheduler objects version 1.38 as minimum service version.
2020-04-02 17:25:03.637 2330 INFO cinder.rpc [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Automatically selected cinder-scheduler RPC version 3.11 as minimum service version.
2020-04-02 17:25:03.708 2330 INFO cinder.volume.manager [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Determined volume DB was empty at startup.

Check the services:

[root@controller test]# openstack volume service list
+------------------+---------------+------+---------+-------+----------------------------+
| Binary           | Host          | Zone | Status  | State | Updated At                 |
+------------------+---------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller    | nova | enabled | up    | 2020-04-02T09:47:11.000000 |
| cinder-volume    | computer1@lvm | nova | enabled | down  | 2020-04-02T09:25:04.000000 |
+------------------+---------------+------+---------+-------+----------------------------+
[root@controller test]# 

The State of computer1@lvm here is down.

Searching turned up this reference:

https://blog.csdn.net/zhaihaifei/article/details/79636930

which suggests the clocks on controller and computer1 may be out of sync.

After adjustment, the clocks were synchronized; running date on both nodes returned the same time.

But openstack volume service list still showed down.

Restarting the cinder-scheduler and cinder-volume services and running openstack volume service list again shows both services' State as up, with only a tiny gap between their Updated At values.

After a while, however, running openstack volume service list again shows the Updated At gap growing steadily; once it exceeds 60 seconds, cinder-volume's State flips back to down.
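The down/up flip is just cinder's heartbeat check: the scheduler marks a service down once its last report is older than the service_down_time option (60 seconds by default). The same check can be done by hand; a sketch assuming GNU date, using the stale timestamp from the listing above:

```shell
# Age of the last heartbeat, compared against cinder's 60 s default
updated="2020-04-02 09:25:04"
now=$(date -u +%s)
last=$(date -u -d "$updated" +%s)
age=$((now - last))
if [ "$age" -gt 60 ]; then
  state="down (heartbeat ${age}s old)"
else
  state="up"
fi
echo "$state"
```

This is why clock skew between nodes shows up as a down State even when the service itself is healthy.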

My initial guess was that the machine was simply under-provisioned, so I tried increasing computer1's CPU cores and memory, with further investigation pending.



Update, May 3, 2020

Worked around for now, though I do not entirely understand why the fix works.

[root@controller rabbitmq]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:32:06.000000 |
| cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:32:05.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
[root@controller rabbitmq]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:33:16.000000 |
| cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:33:15.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
[root@controller rabbitmq]# openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-05-03T05:37:50.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 629842c9-c107-424f-b24d-344ca0b25710 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | __DEFAULT__                          |
| updated_at          | None                                 |
| user_id             | 2bd00838cd344078982969448d50bd5b     |
+---------------------+--------------------------------------+
[root@controller rabbitmq]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| 629842c9-c107-424f-b24d-344ca0b25710 | volume1 | available |    1 |             |
+--------------------------------------+---------+-----------+------+-------------+
[root@controller rabbitmq]#

Solution:

Set up a dedicated storage node.

The odd part is that the configuration is exactly the same as before, yet it works on a standalone node and fails when co-located.

Notably, the cinder log on the compute node contains this:

2020-05-03 10:06:52.308 3546 ERROR oslo.messaging._drivers.impl_rabbit [req-46ccce17-f2b4-4e8c-920a-71c6f3137f0f - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname

i.e. an error resolving the rabbitmq broker hostname.
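A quick way to confirm whether this is a name-resolution problem is to resolve the broker hostname from the affected node. A sketch, assuming (as in this deployment) that the rabbitmq broker runs on the host named controller:

```shell
# Does the broker hostname resolve from this node?
broker=controller
if getent hosts "$broker" >/dev/null 2>&1; then
  check="$broker resolves"
else
  check="$broker does not resolve: add it to /etc/hosts or fix DNS"
fi
echo "$check"
```

If it does not resolve, an /etc/hosts entry for controller on every node is the usual fix in these test deployments.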

The difference between the two setups:

In the first (co-located) setup, the log kept emitting this error, and on the controller, openstack volume service list showed the compute node's openstack-cinder-volume Updated At field never advancing until its State flipped to down. Restarting openstack-cinder-volume on the compute node brought the State back to up, but on refresh the Updated At field still never changed.

# Note that the second row's Updated At is frozen while the first row's advances; once the gap exceeds 60 seconds the State flips to down
[root@controller ~]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:01.000000 |
| cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
[root@controller ~]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:11.000000 |
| cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
[root@controller ~]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:21.000000 |
| cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
+------------------+--------------+------+---------+-------+----------------------------+

After moving storage onto its own node, the cinder log still reports the same error, but it recovers after three or four occurrences:

2020-05-03 13:35:28.522 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
2020-05-03 13:36:05.305 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
2020-05-03 13:36:41.427 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
2020-05-03 13:37:16.573 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
2020-05-03 13:37:51.318 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-backup objects version 1.38 as minimum service version.
2020-05-03 13:37:51.324 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-backup RPC version 2.1 as minimum service version.
2020-05-03 13:37:51.335 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-volume objects version 1.38 as minimum service version.
2020-05-03 13:37:51.340 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-volume RPC version 3.16 as minimum service version.
2020-05-03 13:37:51.438 14438 INFO cinder.volume.flows.manager.create_volume [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Volume 629842c9-c107-424f-b24d-344ca0b25710: being created as raw with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-629842c9-c107-424f-b24d-344ca0b25710'}
2020-05-03 13:37:51.692 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
2020-05-03 13:37:52.011 14438 INFO cinder.volume.flows.manager.create_volume [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Volume volume-629842c9-c107-424f-b24d-344ca0b25710 (629842c9-c107-424f-b24d-344ca0b25710): created successfully

At this point, checking from the controller shows that the openstack-cinder-volume service's Updated At field updates normally:

[root@controller ~]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:14:01.000000 |
| cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T08:14:01.000000 |
+------------------+--------------+------+---------+-------+----------------------------+

4. Summary

Overall, the OpenStack deployment is essentially complete: instances can be created and connected to. OpenStack involves many services, but the installation steps are largely repetitive, and with fairly detailed documentation available the difficulty is much reduced.

OpenStack does place real demands on hardware: when working through the dashboard, the web server is noticeably slow to respond. I plan to upgrade the machine and keep experimenting.

5. References

Official guide: https://docs.openstack.org/install-guide/openstack-services.html

A recent tutorial: https://blog.csdn.net/chengyinwu/category_9242444.html

 
