OpenStack Deployment Guide (Step by Step)

Lab environment

Node            Hostname  IP addresses
Controller      ct        20.0.0.130 (external, NAT mode) / 20.0.100.10 (internal, host-only VMnet1)
Compute node 1  c1        20.0.0.140 (external, NAT mode) / 20.0.100.20 (internal, host-only VMnet1)
Compute node 2  c2        20.0.0.150 (external, NAT mode) / 20.0.100.30 (internal, host-only VMnet1)

Perform the following setup on all nodes

systemctl stop firewalld
systemctl disable firewalld
setenforce 0

vi /etc/hosts
20.0.100.10   ct
20.0.100.20   c1
20.0.100.30   c2
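Once /etc/hosts is in place on every node, a quick check catches typos before they cause confusing failures later. check_hosts below is a small helper sketch (the function name is mine, not part of the deployment):

```shell
# check_hosts FILE HOST...  -> reports any HOST without an entry in FILE
check_hosts() {
  local file=$1 h missing=0
  shift
  for h in "$@"; do
    # an entry is an IP, whitespace, then the hostname as a whole word
    grep -qE "^[0-9.]+[[:space:]]+${h}(\$|[[:space:]])" "$file" \
      || { echo "missing: $h"; missing=1; }
  done
  return "$missing"
}
# usage on each node:
#   check_hosts /etc/hosts ct c1 c2 && echo "hosts OK"
```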


vi /etc/resolv.conf
nameserver 114.114.114.114

#NIC configuration
1. Configure the external NIC
vi /etc/sysconfig/network-scripts/ifcfg-ens33	
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=c5aebdb3-c1b8-4950-988b-13a60de196b8
DEVICE=ens33
ONBOOT=yes
#Controller: 20.0.0.130; compute node 1: 20.0.0.140; compute node 2: 20.0.0.150
IPADDR=20.0.0.130
NETMASK=255.255.255.0
GATEWAY=20.0.0.2
IPV4_ROUTE_METRIC=90 	#route metric: lower wins, so the NAT NIC takes priority
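With two NICs up, the kernel prefers the default route with the lowest metric, which is why IPV4_ROUTE_METRIC=90 steers outbound traffic through the NAT NIC. The selection logic can be sketched like this (the helper name is mine):

```shell
# lowest_metric_route: reads `ip route` lines on stdin and prints the
# default route with the smallest metric (the one the kernel prefers)
lowest_metric_route() {
  awk '/^default/ {
         m = 9999                                 # routes without a metric sort last
         for (i = 1; i < NF; i++) if ($i == "metric") m = $(i + 1)
         if (best == "" || m + 0 < best + 0) { best = m; line = $0 }
       }
       END { print line }'
}
# usage: ip route | lowest_metric_route
```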

2. Configure the internal NIC
vi /etc/sysconfig/network-scripts/ifcfg-ens37	
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens37
DEVICE=ens37
ONBOOT=yes
#Controller: 20.0.100.10; compute node 1: 20.0.100.20; compute node 2: 20.0.100.30
IPADDR=20.0.100.10		
NETMASK=255.255.255.0

systemctl restart network
cd /etc/yum.repos.d/
cp backup/* ./

ls
backup            CentOS-Debuginfo.repo  CentOS-Sources.repo
CentOS-Base.repo  CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo    CentOS-Media.repo      local.repo

yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre  pcre-devel expat-devel cmake  bzip2 

net-tools: networking utilities (ifconfig, netstat, etc.)
bash-completion: tab-completion support
vim: text editor
gcc gcc-c++ make pcre pcre-devel expat-devel cmake: build dependencies
bzip2: compression/decompression tool
 
yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils

centos-release-openstack-train: installs the OpenStack Train release and automatically sets up the related OpenStack repositories
python-openstackclient: the OpenStack command-line client
openstack-selinux: OpenStack's SELinux policies (core protection)

Configure passwordless SSH between the three hosts
ssh-keygen -t rsa	
ssh-copy-id ct
ssh-copy-id c1
ssh-copy-id c2
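Before continuing, confirm that key-based login actually works to all three hosts; BatchMode makes ssh fail immediately instead of falling back to a password prompt. check_ssh is a helper sketch (the name is mine):

```shell
# check_ssh HOST...  -> tries a non-interactive login to each host
check_ssh() {
  local h failed=0
  for h in "$@"; do
    if ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" true; then
      echo "$h: ok"
    else
      echo "$h: key login FAILED"
      failed=1
    fi
  done
  return "$failed"
}
# usage: check_ssh ct c1 c2
```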

Controller node setup

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname ct
[root@localhost ~]# su

2. Install chrony (time synchronization)

[root@ct ~]# yum install chrony -y
[root@ct ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp6.aliyun.com iburst	#sync against Alibaba Cloud's NTP server
allow 20.0.100.0/24				#allow the 20.0.100.0/24 subnet to sync time from this node
[root@ct ~]# systemctl enable chronyd
[root@ct ~]# systemctl restart chronyd
[root@ct ~]# chronyc sources		#show the current sync sources and their delay
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6     7     2  +1852us[+6245us] +/-   27ms

3. Create a scheduled task

[root@ct ~]# crontab -e		#create the job
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
[root@ct ~]# crontab -l		#list jobs
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log

Compute node 1 setup

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname c1
[root@localhost ~]# su

2. Install chrony (time synchronization)

[root@c1 ~]# yum install chrony -y
[root@c1 ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ct iburst	#sync against the controller node
[root@c1 ~]# systemctl enable chronyd
[root@c1 ~]# systemctl restart chronyd
[root@c1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ct                            3   6    17     8  +7472ns[  +41us] +/-   27ms

3. Create a scheduled task

[root@c1 ~]# crontab -e		#create the job
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
[root@c1 ~]# crontab -l		#list jobs
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log

Compute node 2 setup

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname c2
[root@localhost ~]# su

2. Install chrony (time synchronization)

[root@c2 ~]# yum install chrony -y
[root@c2 ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ct iburst	#sync against the controller node
[root@c2 ~]# systemctl enable chronyd
[root@c2 ~]# systemctl restart chronyd
[root@c2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ct                            3   6    17     8  +7472ns[  +41us] +/-   27ms

3. Create a scheduled task

[root@c2 ~]# crontab -e		#create the job
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
[root@c2 ~]# crontab -l		#list jobs
*/30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log

Install MariaDB on the controller node

[root@ct ~]#  yum -y install mariadb mariadb-server python2-PyMySQL
python2-PyMySQL: the module the OpenStack control plane needs to connect to MySQL; without it the database cannot be reached. It is installed on the controller only.

[root@ct ~]#  yum -y install libibverbs
Without it, the OpenStack controller may print a pile of errors after rebooting and reconnecting; installing libibverbs resolves this.

[root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld] 
bind-address = 20.0.100.10			#controller's internal (LAN) address
default-storage-engine = innodb 		#default storage engine
innodb_file_per_table = on 			#one tablespace file per table
max_connections = 4096 			#maximum connections
collation-server = utf8_general_ci 		#default collation
character-set-server = utf8			#default character set
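The settings above can also be written in one step with a heredoc; this sketch stages the file under /tmp so it can be reviewed before moving it into place (the staging path is arbitrary):

```shell
# stage the MariaDB OpenStack tuning file, then move it into place
cat > /tmp/openstack.cnf << 'EOF'
[mysqld]
bind-address = 20.0.100.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
# mv /tmp/openstack.cnf /etc/my.cnf.d/openstack.cnf
```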

[root@ct ~]# systemctl enable mariadb
[root@ct ~]# systemctl start mariadb
[root@ct ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): (press Enter)
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: (enter the password: 123456)
Re-enter new password: (confirm the password: 123456)
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] N
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

Install RabbitMQ on the controller node

Every VM-creation command is sent by the control plane to RabbitMQ, and the compute nodes listen on RabbitMQ.

[root@ct ~]# yum -y install rabbitmq-server
[root@ct ~]# systemctl enable rabbitmq-server.service
[root@ct ~]# systemctl start rabbitmq-server.service

[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS
If the following error appears:
Error: unable to connect to node rabbit@localhost: nodedown
[root@ct ~]# init 6		#reboot the system, then retry

Create the message-queue user that the controller and compute nodes use to authenticate to RabbitMQ
[root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"

Grant the openstack user permissions (three regexes: configure, write, read)
[root@ct ~]#  rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
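The three ".*" arguments are regular expressions granting configure, write, and read permission over resource names, in that order. To see how a tighter pattern would behave, here is a hypothetical policy limited to a "compute." prefix (the pattern and queue names are made up for illustration):

```shell
# test which resource names a permission regex would allow
perm='^compute\..*'           # hypothetical pattern, not the ".*" used above
for q in compute.node1 network.dhcp; do
  if [[ $q =~ $perm ]]; then
    echo "$q: allowed"
  else
    echo "$q: denied"
  fi
done
```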

List the RabbitMQ plugins
[root@ct ~]#  rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status:   * = running on rabbit@ct
 |/
[  ] amqp_client                       3.6.16
[  ] cowboy                            1.0.4
[  ] cowlib                            1.0.2
[  ] rabbitmq_amqp1_0                  3.6.16
........
[  ] sockjs

Enable the RabbitMQ web management plugin (port 15672)
[root@ct ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  amqp_client
  cowlib
  cowboy
  rabbitmq_web_dispatch
  rabbitmq_management_agent
  rabbitmq_management

Applying plugin configuration to rabbit@ct... started 6 plugins.

#5672 is RabbitMQ's default (AMQP) port, 25672 is used by RabbitMQ's CLI tooling, and 15672 is the web management UI plugin's port
[root@ct ~]# ss -anpt|grep 5672
LISTEN     0      128          *:25672                    *:*                   users:(("beam.smp",pid=9094,fd=46))
LISTEN     0      128          *:15672                    *:*                   users:(("beam.smp",pid=9094,fd=57))
TIME-WAIT  0      0      20.0.100.10:49361              20.0.100.10:25672              
LISTEN     0      128         :::5672                    :::*                   users:(("beam.smp",pid=9094,fd=55))

Browse to http://20.0.0.130:15672 to reach the RabbitMQ management UI.
The default username and password are both guest.


Install memcached on the controller node

[root@ct ~]# yum install -y memcached python-memcached
memcached stores session data; the identity service's authentication mechanism uses it to cache tokens.
Logging in to the OpenStack dashboard generates session data, and that session data is stored in memcached.
python-memcached: the Python client module OpenStack components use to talk to memcached.
[root@ct ~]# vim /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,ct"
[root@ct ~]# systemctl enable memcached.service
[root@ct ~]# systemctl start memcached.service 
[root@ct ~]#  netstat -anpt|grep 11211
tcp        0      0 20.0.100.10:11211       0.0.0.0:*               LISTEN      19650/memcached     
tcp        0      0 127.0.0.1:11211         0.0.0.0:*               LISTEN      19650/memcached     
tcp6       0      0 ::1:11211               :::*                    LISTEN      19650/memcached     

Install etcd on the controller node

[root@ct ~]# yum -y install etcd
[root@ct ~]# vim /etc/etcd/etcd.conf
#line 5: change to the internal IP; URL on which this member listens for other etcd members (port 2380, cluster-to-cluster traffic; a domain name is not valid here)
ETCD_LISTEN_PEER_URLS="http://20.0.100.10:2380"
#line 6: change to the internal IP; URL for client traffic (port 2379, the client-facing port)
ETCD_LISTEN_CLIENT_URLS="http://20.0.100.10:2379"
#line 9: change to ct, this node's name within the cluster
ETCD_NAME="ct"
#line 20: change to the internal IP; the peer URL this member advertises (port 2380, used for cluster-to-cluster traffic)
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://20.0.100.10:2380"
#line 21: the client URL this member advertises
ETCD_ADVERTISE_CLIENT_URLS="http://20.0.100.10:2379"
#line 26: initial cluster membership
ETCD_INITIAL_CLUSTER="ct=http://20.0.100.10:2380"
#line 27: unique token identifying the cluster
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
#line 28: initial cluster state
ETCD_INITIAL_CLUSTER_STATE="new"
"new" bootstraps a fresh (static) cluster; "existing" means this etcd member will try to join an already running cluster;
"DNS" means the cluster membership is discovered via DNS.

[root@ct ~]# systemctl enable etcd.service
[root@ct ~]# systemctl start etcd.service
[root@ct ~]# netstat -anutp |grep 2379
tcp        0      0 20.0.100.10:2379        0.0.0.0:*               LISTEN      20024/etcd          
tcp        0      0 20.0.100.10:60364       20.0.100.10:2379        ESTABLISHED 20024/etcd          
tcp        0      0 20.0.100.10:2379        20.0.100.10:60364       ESTABLISHED 20024/etcd          
[root@ct ~]# netstat -anutp |grep 2380
tcp        0      0 20.0.100.10:2380        0.0.0.0:*               LISTEN      20024/etcd          
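With both ports listening, a set/get round trip against the client URL confirms etcd is actually serving requests. etcd_roundtrip is a sketch, assuming the v2 etcdctl syntax this package defaults to:

```shell
# etcd_roundtrip ENDPOINT  -> writes a probe key and reads it back
etcd_roundtrip() {
  etcdctl --endpoints="$1" set /probe/key ok > /dev/null &&
    [ "$(etcdctl --endpoints="$1" get /probe/key)" = "ok" ] &&
    echo "etcd OK at $1"
}
# usage: etcd_roundtrip http://20.0.100.10:2379
```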

I. Installing the OpenStack Keystone component

1.1 Create the database and database user for keystone

[root@ct ~]# mysql -u root -p
Enter password:123456

MariaDB [(none)]> create database keystone;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> exit

1.2 Install and configure keystone, the database, and Apache

[root@ct ~]# yum -y install openstack-keystone httpd mod_wsgi
#mod_wsgi lets Apache act as a gateway for Python WSGI applications;
#OpenStack's components, APIs included, are written in Python, but clients talk to Apache, which hands requests off to Python. These packages are installed only on the controller node.
[root@ct ~]# cp -a /etc/keystone/keystone.conf{,.bak}

[root@ct ~]#  grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf

[root@ct ~]# openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@ct/keystone
#Access MySQL via the pymysql module, specifying the user, password, database host, and database name

[root@ct ~]# openstack-config --set /etc/keystone/keystone.conf token provider fernet
#Set the token provider; the provider is keystone itself
#Fernet: a secure message format

1.3 Initialize the identity service database

[root@ct ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
#Initialize the fernet key repositories (the two commands below generate key sets under /etc/keystone/, used to encrypt data)

[root@ct ~]# cd /etc/keystone/
[root@ct keystone]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@ct keystone]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

1.4 Configure the bootstrap identity service

[root@ct keystone]# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://ct:5000/v3/ \
--bootstrap-internal-url http://ct:5000/v3/ \
--bootstrap-public-url http://ct:5000/v3/ \
--bootstrap-region-id RegionOne

1. Bootstrapping initializes OpenStack: it writes the admin user into MySQL's user table and the URLs and related data into the other tables.
2. admin-url is the management network (e.g. a public cloud's internal OpenStack management network), used for scaling VMs up or deleting them. If the public and management networks shared one network, heavy user traffic could keep the control plane from scaling VMs, which is why a separate management network exists.
3. internal-url is the internal network that carries data traffic, such as VMs reaching storage, databases, or middleware like ZooKeeper; it must not be reachable from outside and is used only within the organization.
4. public-url is the public network that end users access (as in a public cloud). #This lab has none of these separate networks, so all three URLs share one network.
5. --bootstrap-region-id sets the region name.
#Port 5000 is where keystone serves authentication.
#In a highly available deployment you would add a listen entry on the haproxy server, and the URLs would point at the controller's domain name,
#usually the domain of the haproxy VIP.

1.5 Configure the Apache HTTP server

[root@ct keystone]# echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
[root@ct keystone]#  ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
#Installing the mod_wsgi package provides the wsgi-keystone.conf file
#It defines a virtual host listening on port 5000; mod_wsgi is the Python gateway

[root@ct keystone]# systemctl enable httpd.service
[root@ct keystone]# systemctl start httpd.service

1.6 Configure the administrator account's environment variables

[root@ct keystone]# cat >> ~/.bashrc << EOF
export OS_USERNAME=admin		#console login username
export OS_PASSWORD=ADMIN_PASS	#console login password
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://ct:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
#These environment variables matter because creating roles and projects requires authentication; declaring the username, password, and other credentials this way makes OpenStack treat the session as logged in and authenticated, so projects and roles can be created.
In other words, admin's credentials are passed to OpenStack through environment variables, enabling non-interactive operation of OpenStack.

[root@ct keystone]# source ~/.bashrc
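Rather than appending to ~/.bashrc, these variables are often kept in a standalone credentials file that is sourced on demand, which makes it easy to keep several credential sets side by side. A sketch (the admin-openrc file name is a common convention, not part of the steps above):

```shell
# write the admin credentials to a dedicated openrc file
RC=$HOME/admin-openrc
cat > "$RC" << 'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://ct:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
# usage: source "$RC" && openstack user list
```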

#With the environment variables in place, openstack commands can now be run
[root@ct keystone]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| d864ae7d3cc749508762865a0911dd4f | admin |
+----------------------------------+-------+

1.7 Create OpenStack domains, projects, users, and roles

  1. Create a project
    Created in the specified domain with a description; the project name is service (list domains with openstack domain list)
[root@ct keystone]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 45c894c791994cc08ccc276d48e01340 |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
  2. Create a role (list roles with openstack role list)
[root@ct keystone]# openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | 94b7dbcf17634a04b748945e7f1d099e |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+
  3. List the OpenStack roles
[root@ct keystone]# openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 1a5d16a140c24c0fab7a890cb98201ad | reader |
| 6ec64ef9dc8a41879309e11f8538dcb7 | member |
| 75bab8ab5d114894bb4b81d5700ec5ef | admin  |
| 94b7dbcf17634a04b748945e7f1d099e | user   |
+----------------------------------+--------+
# admin: administrator
# member: tenant
# user: regular user
  4. Verify the identity service: confirm a token can be issued without supplying the password again
[root@ct keystone]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-12-14T14:41:26+0000                                                                                                                                                                |
| id         | gAAAAABf12uGnhn_s0b-fjkQUcJM3HZVP43rnzGUsIIwUMRPf43lC8l0rMCB7eIqrgq3YmRCZGIZ71mVmh9QD7uhV8CEpaewDc7-YKB4fGxARxaUHjpnUjARRTLr3zdcoC1LbMJXEQp7gyJy4Ct4utPbwcJsX0JEGNIdapWE5Rj_E5-gxvA65zY |
| project_id | 2feecdfb9b144880b3db145c48440282                                                                                                                                                        |
| user_id    | d864ae7d3cc749508762865a0911dd4f                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

1.8 Keystone deployment summary

1. Keystone is a global component that controls authentication, authorization, and the other core identity services.
2. Initialize the database and install the keystone packages.
3. Install the services keystone depends on: Apache, memcached, RabbitMQ, MariaDB.
Apache hosts the API endpoints; most OpenStack core services and APIs are written in Python, so mod_wsgi must be installed alongside Apache.
RabbitMQ ties in through fernet: messages in transit need encryption, which is why two key sets are generated.
MariaDB: create the core components' databases and grants (localhost and %).
4. Back up the keystone configuration file, then edit it to add the required settings for each module.
5. Create the project and roles.

II. Installing the OpenStack Glance component

2.1 Create the database and database user for glance

[root@ct ~]# mysql -uroot -p123456

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

MariaDB [(none)]> flush privileges;

MariaDB [(none)]> exit

2.2 Create the OpenStack glance user

#Create the glance user
[root@ct ~]# openstack user create --domain default --password GLANCE_PASS glance
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 4ba9c1e165804745b0ca878e4a3ea26e |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

#Add the glance user to the service project with the admin role; registering glance's API requires admin rights on the service project
[root@ct ~]# openstack role add --project service --user glance admin

#Create a service entry named glance with type image
[root@ct ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | 61044c1e824f45fa8f72527cf6d613eb |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

2.3 Create the image service API endpoints

[root@ct ~]# openstack endpoint create --region RegionOne image public http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 72cf98f760ab4ecb9a4f339b0574713a |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 61044c1e824f45fa8f72527cf6d613eb |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne image internal http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f96a9b67784243d781db6b754c609937 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 61044c1e824f45fa8f72527cf6d613eb |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne image admin http://ct:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 9ab603a161d344a79893aca21407246a |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 61044c1e824f45fa8f72527cf6d613eb |
| service_name | glance                           |
| service_type | image                            |
| url          | http://ct:9292                   |
+--------------+----------------------------------+

2.4 Install and configure glance

[root@ct ~]# yum -y install openstack-glance

[root@ct ~]#  cp -a /etc/glance/glance-api.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://ct:5000
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://ct:5000
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_PASS
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
[root@ct ~]# openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/


[root@ct ~]# cp -a /etc/glance/glance-registry.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf

[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@ct/glance
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://ct:5000
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://ct:5000
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password GLANCE_PASS
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf glance_store stores file,http
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf glance_store default_store file
[root@ct ~]# openstack-config --set /etc/glance/glance-registry.conf glance_store filesystem_store_datadir /var/lib/glance/images/

2.5 Initialize the glance database

[root@ct ~]# su -s /bin/sh -c "glance-manage db_sync" glance

[root@ct ~]# systemctl enable openstack-glance-api.service
[root@ct ~]# systemctl start openstack-glance-api.service 

[root@ct ~]# netstat -anpt|grep 9292
tcp        0      0 0.0.0.0:9292            0.0.0.0:*               LISTEN      20688/python2   

2.6 Give glance-api write access to the image store

[root@ct ~]# chown -hR glance:glance /var/lib/glance/
-h: applies the change to symbolic links themselves rather than the files they point to

2.7 Verify that an image can be uploaded

#Upload a test image
[root@ct ~]# ls -lh
total 13M
-rwxr-xr-x. 1 root root  729 Dec 11 22:19 abc.sh
-rw-------. 1 root root 1.7K Dec 11 21:58 anaconda-ks.cfg
-rw-r--r--  1 root root  13M Dec 18 13:58 cirros-0.4.0-x86_64-disk.img

#Create a public qcow2 image named cirros from that file
[root@ct ~]# openstack image create --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2020-12-18T05:59:58Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/b4276548-d779-406e-9ecb-1068e9e37c20/file                                                                                                                                       |
| id               | b4276548-d779-406e-9ecb-1068e9e37c20                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                     |
| owner            | 2feecdfb9b144880b3db145c48440282                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e2161b5b5186106570c17a9e58b64dd39390617cd5a350f78', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 12716032                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2020-12-18T05:59:58Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
#Method 1: check that the image uploaded successfully
[root@ct ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| b4276548-d779-406e-9ecb-1068e9e37c20 | cirros | active |
+--------------------------------------+--------+--------+

#Method 2: check that the image uploaded successfully
[root@ct ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| b4276548-d779-406e-9ecb-1068e9e37c20 | cirros |
+--------------------------------------+--------+

2.8、Glance deployment summary

1、Glance is a core component that lets users discover, register, and retrieve virtual machine images. It provides a REST API for querying image metadata and retrieving existing images.
2、Create the database and grant privileges
3、Create the OpenStack user, assign roles, and manage it
4、Initialize the database and install the glance packages
5、Edit the configuration files (glance-api.conf, glance-registry.conf)

3、Install the OpenStack-placement component

Perform the following on the controller node

3.1、Create the database and database user

[root@ct ~]# mysql -uroot -p
Enter password: 123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 10.3.20-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]>  flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit
Bye

3.2、Create the Placement service user and API endpoints

Create the placement user

[root@ct ~]# openstack user create --domain default --password PLACEMENT_PASS placement
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 45c42d74c6f24577891de19afa72a883 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Grant the placement user the admin role on the service project

[root@ct ~]# openstack role add --project service --user placement admin

Create a placement service with service type placement

[root@ct ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 62bf8b70d326484ea12d603c00f493c1 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

Register the API endpoints with the placement service; the registration info is written into MySQL

[root@ct ~]# openstack endpoint create --region RegionOne placement public http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 92114df56f6d443e9a41f90b8ba2a0c3 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 62bf8b70d326484ea12d603c00f493c1 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne placement internal http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a38e558ca2a3493790569e62bd851d35 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 62bf8b70d326484ea12d603c00f493c1 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
[root@ct ~]#  openstack endpoint create --region RegionOne placement admin http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2c41166c097849df866d3c99d030fd31 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 62bf8b70d326484ea12d603c00f493c1 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+
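The three endpoint-create commands above differ only in the interface name, so they can also be generated in a loop. A minimal sketch, shown in dry-run form (the commands are only echoed; to actually run them, drop the `echo` and make sure the admin credentials from earlier in this guide are loaded):

```shell
# Dry run: build and echo the three endpoint-create commands.
ifaces=""
for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne placement $iface http://ct:8778"
    ifaces="$ifaces$iface "
done
```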

3.3、Install and configure the placement service

Install the placement packages

[root@ct ~]#  yum -y install openstack-placement-api

Edit the placement configuration file

[root@ct ~]# cp -a /etc/placement/placement.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@ct ~]# openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[root@ct ~]# openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://ct:5000/v3
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
[root@ct ~]# openstack-config --set /etc/placement/placement.conf keystone_authtoken password PLACEMENT_PASS
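The `grep -Ev '^$|#'` step used above strips blank lines and comments from the backed-up file before the `openstack-config` edits. A small standalone demo on a throwaway temp file shows exactly what survives; note the pattern also drops any line that merely contains a `#`, so inline comments disappear too:

```shell
# Demonstrate the comment-stripping pattern on a scratch file.
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
# a comment line
[DEFAULT]

debug = false
EOF
stripped=$(grep -Ev '^$|#' "$tmpconf")
echo "$stripped"
rm -f "$tmpconf"
```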

Edit the Apache configuration file
#00-placement-api.conf (this virtual-host config file is created automatically when the placement packages are installed)

[root@ct ~]# vim /etc/httpd/conf.d/00-placement-api.conf
#Append the following at the end of the file
<Directory /usr/bin>
<IfVersion >= 2.4>
        Require all granted
</IfVersion>
<IfVersion < 2.4>
        Order allow,deny
        Allow from all
</IfVersion>
</Directory>

Sync the database and restart the httpd service

[root@ct ~]# su -s /bin/sh -c "placement-manage db sync" placement
[root@ct ~]# systemctl restart httpd.service
[root@ct ~]# netstat -napt|grep 8778
tcp6       0      0 :::8778                 :::*                    LISTEN      25963/httpd         
[root@ct ~]# curl http://ct:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}
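The JSON returned by the curl check is easier to read after parsing. As a sketch, using a sample payload like the one above (so no live service is needed; on the CentOS 7 nodes themselves substitute `python` for `python3`):

```shell
# Extract max_version from a placement version document.
resp='{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0"}]}'
maxv=$(echo "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["versions"][0]["max_version"])')
echo "max_version: $maxv"
```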

3.4、Check the placement status

[root@ct ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

3.5、Placement deployment summary

Placement tracks resource providers and their inventory and usage. A resource provider can be a compute node, a shared storage pool, or an IP allocation pool. For example, an instance created on a compute node consumes CPU and memory from the compute node's resource provider, disk from an external shared-storage resource provider, and an IP address from an external IP resource provider.
Deployment steps (on the controller node):
1、Create the database and database user
2、Create the placement user
3、Grant the placement user the admin role on the service project
4、Create a placement service with service type placement
5、Register the API endpoints with the placement service; the registration info is written into MySQL
6、Install the placement packages
7、Edit the placement configuration file
8、Edit the Apache configuration file
9、Sync the database
10、Restart Apache and test
11、Check the placement status

4、Deploy the OpenStack-nova component

4.1、Deploy nova on the controller node

4.1.1、Create the nova databases and grant privileges

[root@ct ~]# mysql -uroot -p
Enter password: 123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 27
Server version: 10.3.20-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS'; 
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]>  flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit
Bye
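The statements above follow one pattern per database and host (three CREATEs plus six GRANTs). Generating them in a loop makes the pattern explicit; this sketch only prints the SQL — paste the output into the mysql prompt to apply it:

```shell
# Generate the CREATE/GRANT statements for the three nova databases.
sql=""
for db in nova_api nova nova_cell0; do
    sql="${sql}CREATE DATABASE ${db};"$'\n'
    for host in localhost '%'; do
        sql="${sql}GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'NOVA_DBPASS';"$'\n'
    done
done
printf '%s' "$sql"
```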

4.1.2、Create the nova user

[root@ct ~]#  openstack user create --domain default --password NOVA_PASS nova
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | b21640d09b8a4f2b856b38bfe03e95ca |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

4.1.3、Add the nova user to the service project with the admin role

[root@ct ~]# openstack role add --project service --user nova admin

Create the Nova service and associate its endpoints

[root@ct ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | a822fb9d928649ee9e7893335b97f821 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute public http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 80f4575779c544f19db18ecc800d1809 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a822fb9d928649ee9e7893335b97f821 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute internal http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1e025c19ab5147dda16bafd549a34b75 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a822fb9d928649ee9e7893335b97f821 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

[root@ct ~]# openstack endpoint create --region RegionOne compute admin http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8aaef0721316489ab5f4fe24087c876d |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | a822fb9d928649ee9e7893335b97f821 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+

4.1.4、Install the nova packages

[root@ct ~]#  yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

4.1.5、Edit the nova configuration file

[root@ct ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@ct ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

[root@ct ~]# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
[root@ct ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.100.10 			#set to the controller node ct's internal IP
[root@ct ~]# openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
[root@ct ~]# openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
[root@ct ~]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
[root@ct ~]# openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova_api
[root@ct ~]# openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@ct/nova
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@ct/placement
[root@ct ~]# openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
[root@ct ~]# openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
[root@ct ~]# openstack-config --set /etc/nova/nova.conf vnc enabled true
[root@ct ~]# openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
[root@ct ~]# openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
[root@ct ~]# openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
[root@ct ~]# openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement project_name service
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement auth_type password
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement username placement
[root@ct ~]# openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
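The `[keystone_authtoken]` block above repeats the same eight settings that the glance and placement configs also use, so a data-driven loop keeps them in one place. A sketch in dry-run form (the commands are only echoed; drop the `echo` to apply them on the controller):

```shell
# Dry run: generate the keystone_authtoken openstack-config calls.
lines=$(while read -r key value; do
    echo "openstack-config --set /etc/nova/nova.conf keystone_authtoken $key $value"
done <<'EOF'
auth_url http://ct:5000/v3
memcached_servers ct:11211
auth_type password
project_domain_name Default
user_domain_name Default
project_name service
username nova
password NOVA_PASS
EOF
)
echo "$lines"
```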

4.1.6、Initialize the nova_api database

[root@ct ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

4.1.7、Register the cell0 database

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

4.1.8、Create the cell1 cell

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
35918ff6-fcdf-4b14-9f59-6275f46f75c7

4.1.9、Initialize the nova database

[root@ct ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)

4.1.10、Verify that cell0 and cell1 registered successfully

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
|  Name |                 UUID                 |       Transport URL        |           Database connection            | Disabled |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |           none:/           | mysql+pymysql://nova:****@ct/nova_cell0 |  False   |
| cell1 | 35918ff6-fcdf-4b14-9f59-6275f46f75c7 | rabbit://openstack:****@ct |    mysql+pymysql://nova:****@ct/nova    |  False   |
+-------+--------------------------------------+----------------------------+-----------------------------------------+----------+

4.1.11、Start the Nova services

[root@ct ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@ct ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

[root@ct ~]# netstat -tnlup|egrep '8774|8775'
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      27689/python2       
tcp        0      0 0.0.0.0:8774            0.0.0.0:*               LISTEN      27689/python2       

4.1.12、Test

[root@ct ~]# curl http://ct:8774
{"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", "links": [{"href": "http://ct:8774/v2/", "rel": "self"}], "min_version": "", "version": "", "id": "v2.0"}, {"status": "CURRENT", "updated": "2013-07-23T11:33:21Z", "links": [{"href": "http://ct:8774/v2.1/", "rel": "self"}], "min_version": "2.1", "version": "2.79", "id": "v2.1"}]}

4.2、Deploy nova-compute on the compute nodes

Perform the following on both compute nodes, adjusting the IP addresses accordingly

4.2.1、Install the nova-compute packages

yum -y install openstack-nova-compute

4.2.2、Edit the configuration file

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@ct
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 20.0.100.20		#internal IP: 20 on c1, 30 on c2
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://20.0.100.10:6080/vnc_auto.html	#controller node's internal IP
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
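`virt_type = qemu` forces full software emulation, which is what nested VMware virtual machines usually need. If a compute node's CPU exposes hardware virtualization, `kvm` performs much better; the usual check, per the upstream install guide, is counting the vmx/svm CPU flags:

```shell
# Count hardware-virtualization CPU flags; 0 means stick with qemu.
vcount=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
if [ "$vcount" -eq 0 ]; then
    echo "no vmx/svm flags: keep virt_type = qemu"
else
    echo "vmx/svm present: virt_type = kvm is possible"
fi
```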

4.2.3、Start the services

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

4.3、On the controller node, check whether the compute nodes registered successfully

[root@ct ~]# openstack compute service list --service nova-compute
+----+--------------+------+------+---------+-------+----------------------------+
| ID | Binary       | Host | Zone | Status  | State | Updated At                 |
+----+--------------+------+------+---------+-------+----------------------------+
|  6 | nova-compute | c1   | nova | enabled | up    | 2020-12-24T09:16:48.000000 |
|  7 | nova-compute | c2   | nova | enabled | up    | 2020-12-24T09:16:48.000000 |
+----+--------------+------+------+---------+-------+----------------------------+

4.4、Map the compute nodes into cells

[root@ct ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 35918ff6-fcdf-4b14-9f59-6275f46f75c7
Checking host mapping for compute host 'c1': 04a61f49-a4ed-4e51-956f-4422ffc6174c
Creating host mapping for compute host 'c1': 04a61f49-a4ed-4e51-956f-4422ffc6174c
Checking host mapping for compute host 'c2': ae87afcd-ac0d-44d1-aed2-1390733296bf
Creating host mapping for compute host 'c2': ae87afcd-ac0d-44d1-aed2-1390733296bf
Found 2 unmapped computes in cell: 35918ff6-fcdf-4b14-9f59-6275f46f75c7

4.5、Set the host-discovery interval on the controller node

[root@ct ~]# vim /etc/nova/nova.conf
#Add the following under the [scheduler] section
discover_hosts_in_cells_interval = 300

[root@ct ~]# systemctl restart openstack-nova-api.service
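The same edit can be made non-interactively. The sketch below reproduces it with sed on a scratch copy so the effect is visible anywhere; on the real controller, point it at /etc/nova/nova.conf, or use `openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300` as elsewhere in this guide:

```shell
# Append the option under [scheduler] in a scratch config copy.
tmpconf=$(mktemp)
printf '[DEFAULT]\n\n[scheduler]\n' > "$tmpconf"
sed -i '/^\[scheduler\]/a discover_hosts_in_cells_interval = 300' "$tmpconf"
grep -A1 '^\[scheduler\]' "$tmpconf"
```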

4.6、Check that all nova services are healthy

[root@ct ~]# openstack compute service list
+----+----------------+------+----------+---------+-------+----------------------------+
| ID | Binary         | Host | Zone     | Status  | State | Updated At                 |
+----+----------------+------+----------+---------+-------+----------------------------+
|  3 | nova-conductor | ct   | internal | enabled | up    | 2020-12-24T09:20:43.000000 |
|  4 | nova-scheduler | ct   | internal | enabled | up    | 2020-12-24T09:20:44.000000 |
|  6 | nova-compute   | c1   | nova     | enabled | up    | 2020-12-24T09:20:38.000000 |
|  7 | nova-compute   | c2   | nova     | enabled | up    | 2020-12-24T09:20:38.000000 |
+----+----------------+------+----------+---------+-------+----------------------------+

4.7、Check that each component's API is working

[root@ct ~]# openstack catalog list
+-----------+-----------+---------------------------------+
| Name      | Type      | Endpoints                       |
+-----------+-----------+---------------------------------+
| placement | placement | RegionOne                       |
|           |           |   internal: http://ct:8778      |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8778        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8778         |
|           |           |                                 |
| glance    | image     | RegionOne                       |
|           |           |   public: http://ct:9292        |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:9292         |
|           |           |                                 |
| nova      | compute   | RegionOne                       |
|           |           |   internal: http://ct:8774/v2.1 |
|           |           | RegionOne                       |
|           |           |   public: http://ct:8774/v2.1   |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:8774/v2.1    |
|           |           |                                 |
| keystone  | identity  | RegionOne                       |
|           |           |   public: http://ct:5000/v3/    |
|           |           | RegionOne                       |
|           |           |   admin: http://ct:5000/v3/     |
|           |           | RegionOne                       |
|           |           |   internal: http://ct:5000/v3/  |
|           |           |                                 |
+-----------+-----------+---------------------------------+

4.8、Check that the image list can be retrieved

[root@ct ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 2d95a258-3bd4-4cc7-ade9-64433e2129f0 | cirros | active |
+--------------------------------------+--------+--------+

4.9、Check that the cell API and placement API are working

[root@ct ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Cinder API              |
| Result: Success                |
| Details: None                  |
+--------------------------------+

4.10、Nova deployment summary

The Compute service is one of OpenStack's core services; it maintains and manages the cloud's compute resources, and its codename in the OpenStack project is nova.
Deployment outline
Controller node:
1、Create the nova databases
2、Create the nova user
3、Add the nova user to the service project
4、Install the nova packages
5、Edit the nova configuration file
6、Initialize the nova_api database
7、Register the cell0 database
8、Create the cell1 cell
9、Initialize the nova database
10、Verify that cell0 and cell1 registered successfully
11、Start the Nova services
12、Test
Compute nodes (nova-compute):
1、Install the nova-compute packages
2、Edit the configuration file
3、Start the services
4、On the controller node, check whether the compute nodes registered successfully
5、Map the compute nodes into cells
6、Set the host-discovery interval on the controller node
7、Check that all nova services are healthy
8、Check that each component's API is working
9、Check that the image list can be retrieved
10、Check that the cell API and placement API are working
