Nova Deployment

Table of Contents

OpenStack Placement Component Deployment

1. Create the database and database user

2. Create the Placement service user and API endpoints

Create a placement service with service type placement

Register the API endpoints to the placement service; the registration data is written to MySQL

Install the placement service

Populate the database

Modify the Apache config file 00-placement-api.conf (created automatically when the placement service is installed; virtual host configuration)

Test access with curl

Check port usage (netstat, lsof)

Check placement status

OpenStack Nova Component Deployment

1. Where the Nova components are deployed

Controller node (ct)

Create the Nova databases and grant privileges

Manage the Nova user and service

Create the nova service

Associate endpoints with the Nova service

Install the Nova components (nova-api, nova-conductor, nova-novncproxy, nova-scheduler)

Modify the Nova config file (nova.conf)

Initialize the databases

Start the Nova services

Check the Nova service ports

Configure the Nova service on the compute node (c1)

Install the nova-compute component

Modify the config file

Operations on the controller node

Check whether the compute nodes have registered with the controller (via the message queue); run on the controller node

Verify compute node services


OpenStack Placement Component Deployment

1. Create the database and database user

MariaDB [(none)]> create database placement;
Query OK, 1 row affected (0.003 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)
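
An optional sanity check, not part of the original steps, to confirm the database and grants took effect:

MariaDB [(none)]> SHOW DATABASES LIKE 'placement';
MariaDB [(none)]> SHOW GRANTS FOR 'placement'@'%';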

2. Create the Placement service user and API endpoints

[root@ct ~]# openstack user create --domain default --password PLACEMENT_PASS placement  ## create the placement user
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | c1c5404868a943d08e601bab3ff43b1e |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

[root@ct ~]# openstack role add --project service --user placement admin ## grant the placement user the admin role on the service project
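
If you want to confirm the role assignment, one optional check is:

[root@ct ~]# openstack role assignment list --user placement --project service --names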

Create a placement service with service type placement

[root@ct ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 26cf0ce21b504790b2e4dcb0b359cc6d |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

Register the API endpoints to the placement service; the registration data is written to MySQL

[root@ct ~]# openstack endpoint create --region RegionOne placement public http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a02307fc0edf4ffebcbb69069feccbd8 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 26cf0ce21b504790b2e4dcb0b359cc6d |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne placement internal http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7a84ac31040749d0a0f96670c9642bc7 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 26cf0ce21b504790b2e4dcb0b359cc6d |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne placement admin http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | c2ca4e77e7534984a30729a738c64a20 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 26cf0ce21b504790b2e4dcb0b359cc6d |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
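
As an optional check (assuming the admin credentials are still loaded in the shell), the three placement endpoints just created can be listed:

[root@ct ~]# openstack endpoint list --service placement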

Install the placement service

[root@ct ~]# yum -y install openstack-placement-api

Modify the placement config file

[root@ct ~]# cp -a /etc/placement/placement.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@ct ~]# vi placement-api.sh

openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://ct:5000/v3
openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS

[root@ct ~]# source placement-api.sh
[root@ct ~]# cat /etc/placement/placement.conf

[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://ct:5000/v3                            # keystone address
memcached_servers = ct:11211                            # session data is cached in memcached
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[profiler]

Populate the database

[root@ct ~]# su -s /bin/sh -c "placement-manage db sync" placement
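
To confirm the sync populated the schema, an optional check against the database created earlier (using the password set above):

[root@ct ~]# mysql -u placement -pPLACEMENT_DBPASS -h ct -e "use placement; show tables;"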

Modify the Apache config file 00-placement-api.conf (created automatically when the placement service is installed; virtual host configuration)

[root@ct conf.d]# vi 00-placement-api.conf 		# this file is created automatically when placement is installed
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
# Known issue: the <Directory> block below must be appended to the end of this
# file to allow access to the placement API through Apache; without it,
# requests to the API return 403 because Apache is not permitted to serve
# /usr/bin/placement-api.
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    # For Apache versions older than 2.4, allow access the pre-2.4 way
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

[root@ct ~]# systemctl restart httpd  ### restart Apache

Test access with curl

[root@ct ~]# curl ct:8778

{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

Check port usage (netstat, lsof)

[root@ct ~]# netstat -natp | grep 8778
tcp        0      0 192.168.10.10:38008     192.168.10.10:8778      TIME_WAIT   -                   
tcp6       0      0 :::8778                 :::*                    LISTEN      3043/httpd  

Check placement status

[root@ct ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

OpenStack Nova Component Deployment

1. Where the Nova components are deployed

Controller node (ct)

Create the Nova databases and grant privileges

MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.002 sec)

MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> create database nova_cell0;
Query OK, 1 row affected (0.001 sec)


MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.004 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)
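
An optional check that all three Nova databases and the nova user's grants exist:

MariaDB [(none)]> SHOW DATABASES LIKE 'nova%';
MariaDB [(none)]> SHOW GRANTS FOR 'nova'@'%';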

Manage the Nova user and service

[root@ct ~]# openstack user create --domain default --password NOVA_PASS nova ## create the nova user
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 67cbbea25ec8402ab9040bf378e83657 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

[root@ct ~]# openstack role add --project service --user nova admin   ## add the nova user to the service project with the admin role

Create the nova service

[root@ct ~]# openstack service create --name nova --description "OpenStack Compute" compute  
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | b69c2516e1704526932643cd0d674af0 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Associate endpoints with the Nova service

[root@ct ~]# openstack endpoint create --region RegionOne compute public http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a77aa6e7609849599527e947932a122e |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b69c2516e1704526932643cd0d674af0 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute internal http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 326fbfbeee984b88b16fe267181a5543 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b69c2516e1704526932643cd0d674af0 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute admin http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bb075e7e98044b2e93fc0ddf447174b1 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | b69c2516e1704526932643cd0d674af0 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
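
As with placement, an optional check that all three compute endpoints were registered:

[root@ct ~]# openstack endpoint list --service compute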

Install the Nova components (nova-api, nova-conductor, nova-novncproxy, nova-scheduler)

[root@ct ~]# yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

Modify the Nova config file (nova.conf)


[root@ct ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
# edit nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.10			#### set to ct's IP (internal IP)
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS


# view nova.conf

[root@ct ~]# cat /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata		# API types to enable
my_ip = 192.168.100.10				# this node's local IP
use_neutron = true					# obtain IP addresses through neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:RABBIT_PASS@ct	# RabbitMQ connection string

[api]
auth_strategy = keystone				# use keystone for authentication

[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api

[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]

[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@ct/nova

[devices]
[ephemeral_storage_encryption]
[filter_scheduler]

[glance]
api_servers = http://ct:9292

[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]

[keystone_authtoken]				# keystone authentication settings
auth_url = http://ct:5000/v3				# authenticate against this URL
memcached_servers = ct:11211			# memcached address:port
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS

[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]

[oslo_concurrency]					# lock path
lock_path = /var/lib/nova/tmp			# the lock serializes operations such as instance creation: each step must finish before the next one starts rather than running in parallel

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]

[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = PLACEMENT_PASS

[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]						# if misconfigured, the instance console cannot be reached
enabled = true
server_listen = $my_ip				# VNC listen address
server_proxyclient_address = $my_ip			# proxy client address of this host (management network address)

[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement

Initialize the databases

[root@ct ~]# su -s /bin/sh -c "nova-manage api_db sync" nova  ## initialize the nova_api database
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova  ## register the cell0 database; Nova partitions resources and compute nodes into cells, i.e. cells are how OpenStack logically groups compute nodes
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova   # create the cell1 cell
[root@ct ~]# su -s /bin/sh -c "nova-manage db sync" nova

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova  # verify that cell0 and cell1 registered successfully
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
|  Name |                 UUID                 |       Transport URL        |           Database Connection           | Disabled |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |           none:/           | mysql+pymysql://nova:****@ct/nova_cell0 |  False   |
| cell1 | a04ee6e8-481b-4eef-b2a2-ce8d546e457e | rabbit://openstack:****@ct |    mysql+pymysql://nova:****@ct/nova    |  False   |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+

Start the Nova services

[root@ct ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@ct ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Check the Nova service ports

[root@ct ~]# netstat -tnlup|egrep '8774|8775'
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      8165/python2        
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      8165/python2        
[root@ct ~]# curl http://ct:8774
{"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", "links": [{"href": "http://ct:8774/v2/", "rel": "self"}], "min_version": "", "version": "", "id": "v2.0"}, {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href": "http://ct:8774/v2.1/", "rel": "self"}], "min_version": "2.1", "version": "2.79", "id": "v2.1"}]}[root@ct ~]# 

Configure the Nova service on the compute node (c1)

Install the nova-compute component

[root@c1 ~]# yum -y install openstack-nova-compute

Modify the config file

[root@c1 ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@c1 ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
[root@c1 ~]# vi c.sh

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.100.20				# set to this node's internal IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.100.10:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
[root@c1 ~]# source c.sh 
[root@c1 ~]#  systemctl enable libvirtd.service openstack-nova-compute.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service.
[root@c1 ~]#  systemctl start libvirtd.service openstack-nova-compute.service # start the services
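
If nova-compute fails to come up, an optional way to check its state and recent errors (the log path below is the standard location on this distribution):

[root@c1 ~]# systemctl status openstack-nova-compute.service
[root@c1 ~]# tail -n 50 /var/log/nova/nova-compute.log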

Operations on the controller node

Check whether the compute nodes have registered with the controller (via the message queue); run this on the controller node

[root@ct ~]# openstack compute service list --service nova-compute
+----+--------------+------+------+---------+-------+----------------------------+
| ID | Binary       | Host | Zone | Status  | State | Updated At                 |
+----+--------------+------+------+---------+-------+----------------------------+
| 11 | nova-compute | c1   | nova | enabled | up    | 2021-01-04T08:59:49.000000 |
| 12 | nova-compute | c2   | nova | enabled | up    | 2021-01-04T08:59:50.000000 |
+----+--------------+------+------+---------+-------+----------------------------+

Scan for available compute nodes in the current OpenStack deployment; discovered nodes are added to a cell, and instances can then be created in that cell. In effect, OpenStack groups compute nodes logically by assigning them to different cells.
[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': a04ee6e8-481b-4eef-b2a2-ce8d546e457e
Checking host mapping for compute host 'c1': 0ca0450f-b337-4303-8bce-3a9d5de00287
Creating host mapping for compute host 'c1': 0ca0450f-b337-4303-8bce-3a9d5de00287
Checking host mapping for compute host 'c2': ada45cb1-9d4a-4404-ae6c-a56aa53d5723
Creating host mapping for compute host 'c2': ada45cb1-9d4a-4404-ae6c-a56aa53d5723
Found 2 unmapped computes in cell: a04ee6e8-481b-4eef-b2a2-ce8d546e457e
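
Instead of running discover_hosts manually every time a compute node is added, the scheduler can also discover new hosts periodically. A minimal sketch using the same openstack-config tool as above (interval in seconds; requires restarting nova-scheduler):

openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
[root@ct ~]# systemctl restart openstack-nova-scheduler.service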

Verify compute node services

[root@ct ~]# openstack compute service list  # check that all nova services are healthy and that the compute services registered successfully
+----+----------------+------+----------+---------+-------+----------------------------+
| ID | Binary         | Host | Zone     | Status  | State | Updated At                 |
+----+----------------+------+----------+---------+-------+----------------------------+
|  6 | nova-conductor | ct   | internal | enabled | up    | 2021-01-04T10:53:40.000000 |
|  7 | nova-scheduler | ct   | internal | enabled | up    | 2021-01-04T10:53:33.000000 |
| 11 | nova-compute   | c1   | nova     | enabled | up    | 2021-01-04T10:53:40.000000 |
| 12 | nova-compute   | c2   | nova     | enabled | up    | 2021-01-04T10:53:42.000000 |
+----+----------------+------+----------+---------+-------+----------------------------+
[root@ct ~]# openstack catalog list  # check that each component's API endpoints are registered correctly
+-----------+-----------+---------------------------------+
| Name      | Type      | Endpoints                       |
+-----------+-----------+---------------------------------+
| keystone  | identity  | RegionOne                       |
|           |           |   admin: http://ct:5000/v3/     |
|           |           | RegionOne                       |
|           |           |   public: http://ct:5000/v3/    |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:5000/v3/  |
|           |           |                                 |
| placement | placement | RegionOne                       |
|           |           |   internal: http://ct:8778      |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8778        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8778         |
|           |           |                                 |
| glance    | image     | RegionOne                       |
|           |           |   public: http://ct:9292        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:9292         |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:9292      |
|           |           |                                 |
| nova      | compute   | RegionOne                       |
|           |           |   internal: http://ct:8774/v2.1 |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8774/v2.1   |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8774/v2.1    |
|           |           |                                 |
+-----------+-----------+---------------------------------+
[root@ct ~]# openstack image list # verify that images can be retrieved
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| d165b623-7eb7-479f-baff-31fd0526c535 | cirros | active |
| d39075f2-59d3-4159-b27c-cdf371249191 | cirros | active |
+--------------------------------------+--------+--------+
[root@ct ~]# nova-status upgrade check # check that the Cells v2 API and Placement API are healthy; if either fails, instances cannot be created later
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

 
