一、Introduction to OpenStack
1、Overview
- OpenStack is a free and open-source software project released under the Apache License, jointly initiated by NASA (the US National Aeronautics and Space Administration) and Rackspace.
- OpenStack is an open-source cloud computing management platform project made up of several major components working together. It supports almost all types of cloud environments, and the project aims to deliver a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each exposing an API for integration.
- The OpenStack platform helps service providers and enterprises build cloud infrastructure services (IaaS) similar to Amazon EC2 and S3. OpenStack contains two major modules: Nova, the virtual server deployment and compute module developed by NASA, and Swift, the distributed cloud storage module developed by Rackspace; they can be used together or independently. Beyond the strong backing of Rackspace and NASA, OpenStack enjoys contributions and support from heavyweight companies such as Dell, Citrix, Cisco, and Canonical. It is developing rapidly and looks set to displace Eucalyptus, another leading open-source cloud platform.
2、Architecture Diagram
二、Lab Environment
RHEL 7.3, with SELinux and firewalld disabled
Baidu Netdisk download link: https://pan.baidu.com/s/1T-PYn-nPMCk6UZytZfbXYQ  password: 75xn
See the official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/ (the Chinese-language Mitaka guide)
1、Installation Notes
Note: NetworkManager must be disabled on every machine!!!
Three machines are actually enough; the time-sync server and yum repository can simply live on the controller node!!!
10.10.10.1 (VM) controller, NICs eth0 and eth1
10.10.10.2 (VM) compute, NICs eth0 and eth1
10.10.10.3 (VM) block
10.10.10.250 (physical host) dream (time-sync server and yum repository)
2、Add host entries on every node
[root@controller ~]# vim /etc/hosts
10.10.10.1 controller
10.10.10.2 compute
10.10.10.3 block
10.10.10.250 dream
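A quick sanity check that the entries resolve (run from any node; hostnames taken from the table above):
[root@controller ~]# ping -c 1 compute
[root@controller ~]# ping -c 1 block
[root@controller ~]# ping -c 1 dream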
3、Yum Repository (all 3 nodes)
I built a local yum repository for this install; an online repository works too. If you are on VMware, hosting it directly on the controller node is fine!!!
[root@controller ~]# vim /etc/yum.repos.d/yum.repo
[rhel7.3]
name=my_yum
baseurl=http://10.10.10.250/rhel7.3
gpgcheck=0
[openstack]
name=mitaka
baseurl=http://10.10.10.250/mitaka
gpgcheck=0
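After dropping the repo file onto all three nodes, both repositories should resolve:
[root@controller ~]# yum clean all
[root@controller ~]# yum repolist ### rhel7.3 and openstack should both appear with non-zero package counts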
三、Base Environment Installation
1、Time Synchronization
Here I sync all three VMs against the physical host's clock; syncing against the controller node works just as well!!!
(1) Install and configure chrony on the physical host (10.10.10.250)
[root@dream ~]# yum install -y chrony
[root@dream ~]# vim /etc/chrony.conf
# Allow NTP client access from local network.
allow 10.10.10.0/24
[root@dream ~]# systemctl restart chronyd
[root@dream ~]# systemctl enable chronyd
(2) On the three VMs
# yum install -y chrony
# vim /etc/chrony.conf
server 10.10.10.250 iburst
# systemctl restart chronyd
# systemctl enable chronyd
(3) Test
If the output ends with ^* 10.10.10.250, synchronization is working!!!
[root@controller ~]# chronyc sources -v
210 Number of sources = 1
.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 10.10.10.250 3 6 377 0 +1637us[+5649us] +/- 101ms
2、Install the OpenStack Client
[root@controller ~]# yum upgrade ### upgrade existing packages
[root@controller ~]# yum install -y python-openstackclient
3、SQL Database Installation
Most OpenStack services use a SQL database to store information; the database typically runs on the controller node.
(1) Install the packages
[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL
(2) Create and edit openstack.cnf
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf ### startup options and the UTF-8 character set
[mysqld]
bind-address = 10.10.10.1
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
(3) Finish the installation
[root@controller ~]# systemctl enable mariadb
[root@controller ~]# systemctl start mariadb
[root@controller ~]# mysql_secure_installation ### no root password by default; skipping this initialization causes errors later
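An optional sanity check that the database is up and the root password works (via the local socket):
[root@controller ~]# mysql -uroot -p -e "SHOW DATABASES;"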
4、Message Queue Installation
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ; however, most distributions package OpenStack with support for one particular message queue service.
(1) Install and start
[root@controller ~]# yum install -y rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server.service
[root@controller ~]# systemctl start rabbitmq-server.service
(2) Add the openstack user and grant permissions
### add the openstack user; the trailing "openstack" is its password
[root@controller ~]# rabbitmqctl add_user openstack openstack
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" ### grant configure, write, and read permissions
(3) View the openstack user's permissions
[root@controller ~]# rabbitmqctl list_user_permissions openstack ### list permissions
Listing permissions for user "openstack" ...
/ .* .* .*
[root@controller ~]# rabbitmqctl authenticate_user openstack openstack ### verify the password
Authenticating user "openstack" ...
Success
(4) Enable the web management UI
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Applying plugin configuration to rabbit@controller... started 6 plugins.
(5) Login test
Both the account and the password are guest.
http://10.10.10.1:15672/
5、Memcached Installation
[root@controller ~]# yum install -y memcached python-memcached
[root@controller ~]# systemctl enable memcached.service
[root@controller ~]# systemctl start memcached.service
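Memcached listens on port 11211 by default; a quick check that it is up:
[root@controller ~]# ss -tnlp | grep 11211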
四、Identity Service
1、Create keystone
Install and configure the OpenStack Identity service, code-named keystone, on the controller node.
(1) Create the keystone database
[root@controller ~]# mysql -uroot -p ### enter the password to log in
MariaDB [(none)]> CREATE DATABASE keystone;
(2) Grant access to the keystone database
The trailing "keystone" is the authentication password!!!
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
[root@controller ~]# mysql -ukeystone -p ### verify the keystone user can log in
(3) Generate a random value
[root@controller ~]# openssl rand -hex 10 ### generate a random value to use as the administration token during initial configuration
974cff95323e34a0719e
2、Install and Configure the Components
Note: every OpenStack option must be written under its corresponding section, or it will not take effect.
(1) Install the packages
[root@controller ~]# yum install -y openstack-keystone httpd mod_wsgi
(2) Configure keystone.conf
[root@controller ~]# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = 974cff95323e34a0719e ### the administration token generated above
[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone
[token]
provider = fernet
(3) Initialize
### populate the Identity service database
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
### initialize the Fernet keys
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
(4) Note the new fernet-keys directory
[root@controller ~]# ll /etc/keystone/
total 100
-rw-r----- 1 root keystone 2303 Sep 22 2016 default_catalog.templates
drwx------ 2 keystone keystone 22 Jun 5 04:43 fernet-keys
-rw-r----- 1 root keystone 73221 Jun 5 04:41 keystone.conf
-rw-r----- 1 root keystone 2400 Sep 22 2016 keystone-paste.ini
-rw-r----- 1 root keystone 1046 Sep 22 2016 logging.conf
-rw-r----- 1 keystone keystone 9699 Sep 22 2016 policy.json
-rw-r----- 1 keystone keystone 665 Sep 22 2016 sso_callback_template.html
3、Configure the Apache HTTP Server
(1) Configure httpd.conf
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
# If your host doesn't have a registered DNS name, enter its IP address here.
#
ServerName controller ### change to the controller node's hostname
(2) Configure wsgi-keystone.conf
[root@controller keystone]# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
(3) Start httpd
[root@controller keystone]# systemctl enable httpd.service
[root@controller keystone]# systemctl start httpd.service
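Keystone is served by Apache, so both WSGI ports should now be listening; a quick check:
[root@controller ~]# ss -tnl | grep -E '5000|35357'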
4、Create the Service Entity and API Endpoints
(1) Set environment variables
[root@controller ~]# export OS_TOKEN=974cff95323e34a0719e ### the administration token generated earlier
[root@controller ~]# export OS_URL=http://controller:35357/v3 ### the endpoint URL
[root@controller ~]# export OS_IDENTITY_API_VERSION=3 ### the Identity API version
(2) Create the service entity for the Identity service
Note: OpenStack generates IDs dynamically, so the output you see will differ from the examples here.
[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 0bc1b8357eae4c19b2497fc176996688 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
(3) Create the Identity service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne identity public http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6ae12e2987ff46739a2eb08d8c2ea0a0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0bc1b8357eae4c19b2497fc176996688 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6abf7e043f8d48d888f2be5721c33796 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0bc1b8357eae4c19b2497fc176996688 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:5000/v3 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e450f7be0cb64afdb22152afdc989abb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0bc1b8357eae4c19b2497fc176996688 |
| service_name | keystone |
| service_type | identity |
| url | http://controller:35357/v3 |
+--------------+----------------------------------+
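As a cross-check, listing the endpoints should now show the three identity rows created above:
[root@controller ~]# openstack endpoint list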
5、Create a Domain, Projects, Users, and Roles
(1) Create the default domain
[root@controller ~]# openstack domain create --description "Default Domain" default
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Default Domain |
| enabled | True |
| id | 296463ec2b564fcd85d49d9e89d8cc15 |
| name | default |
+-------------+----------------------------------+
(2) Create the admin project
[root@controller ~]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | d2c589122d9f4273b572b1752551cdb7 |
| is_domain | False |
| name | admin |
| parent_id | 296463ec2b564fcd85d49d9e89d8cc15 |
+-------------+----------------------------------+
(3) Create the admin user
[root@controller ~]# openstack user create --domain default --password-prompt admin
User Password: ### the password is admin
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 7466c9e538a24384ba0ced75472c5b4a |
| name | admin |
+-----------+----------------------------------+
(4) Create the admin role
[root@controller ~]# openstack role create admin
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | a9de7291aed3458cb75a964045345a76 |
| name | admin |
+-----------+----------------------------------+
(5) Add the admin role to the admin project and user
[root@controller ~]# openstack role add --project admin --user admin admin ### this command produces no output
(6) Create the service project
This guide uses a service project that contains a unique user for each service you add to your environment.
[root@controller ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 5af0b96b2e804a40ab0be47e7fbfb67f |
| is_domain | False |
| name | service |
| parent_id | 296463ec2b564fcd85d49d9e89d8cc15 |
+-------------+----------------------------------+
(7) Create the demo project
Regular (non-admin) tasks should use an unprivileged project and user; as an example, we create the demo project and user here.
[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 44a378473a174b01b9f11df1d1bfad2c |
| is_domain | False |
| name | demo |
| parent_id | 296463ec2b564fcd85d49d9e89d8cc15 |
+-------------+----------------------------------+
(8) Create the demo user
[root@controller ~]# openstack user create --domain default --password-prompt demo
User Password: ### the password is demo
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 0f55752937b2424fbaeb04a8d6e147c3 |
| name | demo |
+-----------+----------------------------------+
(9) Create the user role
[root@controller ~]# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 6b628ab3482a46da915c1b56436fbd62 |
| name | user |
+-----------+----------------------------------+
(10) Add the user role to the demo project and user
Note: you can repeat this procedure to create additional projects and users!!!
[root@controller ~]# openstack role add --project demo --user demo user ### no output
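A quick recap of everything created in this section (standard openstack CLI listings):
[root@controller ~]# openstack project list
[root@controller ~]# openstack user list
[root@controller ~]# openstack role list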
6、Verify the Operation
(1) Unset the OS_TOKEN and OS_URL environment variables
[root@controller ~]# unset OS_TOKEN OS_URL
(2) Request an authentication token as the admin user
Use the admin password you set above!!!
[root@controller ~]# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2018-09-19T12:57:50.180103Z |
| id | gAAAAABbojm-HgSY9jG0TZZSSlHTAvw0Q0CgMeSuD8Zw-ICnXUm8vfTd1USaGYq |
| | XcTjPc0_WaGrjIFvQYlXwtXl4QO8DJ4YgeXKOM8edTSZcDpmdSfMsqxXUi1dkXB |
| | DcQ0hFPu233MKmBzhwZrJgUzii3iZWPHUWytA3bNROUGFVbSNiEW4l3P0 |
| project_id | d2c589122d9f4273b572b1752551cdb7 |
| user_id | 7466c9e538a24384ba0ced75472c5b4a |
+------------+-----------------------------------------------------------------+
(3) Request an authentication token as the demo user
[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2018-09-19T12:58:23.623417Z |
| id | gAAAAABbojnf1E- |
| | YRlgexzTl0Ktxxo72UHsL45rnKp_WeS1CXNxPEiLaRD_2nlAzB- |
| | 3VbxfLbujFoY52lBvXEyaNa7CflTZ8TtElaG22-DgnPJwjvQnYU7PzjVHeLapaW |
| | GRzCXFhslEZZLJE2m_ftMSfMDwnPG8wc7h6vjkA_KM-OJ-k7pQ_clM |
| project_id | 44a378473a174b01b9f11df1d1bfad2c |
| user_id | 0f55752937b2424fbaeb04a8d6e147c3 |
+------------+-----------------------------------------------------------------+
7、Create Client Environment Scripts
(1) Create environment-variable scripts (admin and demo)
[root@controller ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
### change to your own admin user's password
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@controller ~]# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
### change to your own demo user's password
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
(2) Source the scripts
### this simply exports the environment variables; each script overrides the previous one, so after sourcing demo-openrc the request below runs as demo
[root@controller ~]# . admin-openrc
[root@controller ~]# . demo-openrc
### request an authentication token
[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2018-09-19T12:59:17.338714Z |
| id | gAAAAABbojoV0P8ZyAs5THwI1YJDuTxi9kj8vuFBxRLxRujosgIvH0cMBdkC0-U |
| | WUs3eahmPtGBYlL130FnuLAOe8X6YV_rP34xy6BjyoXKPk4qMXby5nGLQk_5Lui |
| | xY4KyG3ciL8aigkLZCx15jUkqM6XGzKqp5GOci7oZTv2Sol4jVwqty6ao |
| project_id | 44a378473a174b01b9f11df1d1bfad2c |
| user_id | 0f55752937b2424fbaeb04a8d6e147c3 |
+------------+-----------------------------------------------------------------+
五、Image Service
1、Overview
- The OpenStack Image service is a core IaaS service. It accepts API requests for disk or server images, and metadata definitions, from end users or OpenStack Compute components. It also supports storing disk or server images on various repository types, including OpenStack Object Storage.
- The OpenStack Image service includes the following components:
- glance-api: accepts Image API calls for image discovery, retrieval, and storage.
- glance-registry: stores, processes, and retrieves image metadata such as size and type. It queries the database and then accesses the image file store (where the actual images live).
2、Install and Configure
(1) Create the database
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
(2) Grant access to the glance database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
(3) Source the admin credentials
[root@controller ~]# . admin-openrc
(4) Create the glance user
[root@controller ~]# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 60498a4043e1411da9ec880aa075e452 |
| name | glance |
+-----------+----------------------------------+
(5) Add the admin role to the glance user and service project
Note: this command produces no output.
[root@controller ~]# openstack role add --project service --user glance admin
(6) Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 15aac117b3dd496b9ac5cdedd2907515 |
| name | glance |
| type | image |
+-------------+----------------------------------+
(7) Create the Image service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 56803fcd1a6946378d03a88af396b060 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 15aac117b3dd496b9ac5cdedd2907515 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 966d05db4a204c5e9fb26ed805c04d6d |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 15aac117b3dd496b9ac5cdedd2907515 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 951d542623f645558f3d3f744fe7cb6a |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 15aac117b3dd496b9ac5cdedd2907515 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
3、Install and Configure the Components
(1) Install the package
[root@controller ~]# yum install -y openstack-glance
(2) Configure glance-api.conf
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
### use the glance password
connection = mysql+pymysql://glance:glance@controller/glance
### Identity service access configuration
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
### use the glance password
password = glance
[paste_deploy]
flavor = keystone
### local filesystem store and image file location
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
(3) Configure glance-registry.conf
[root@controller ~]# vim /etc/glance/glance-registry.conf
### database access configuration
[database]
### use the glance password
connection = mysql+pymysql://glance:glance@controller/glance
### Identity service access configuration
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
### use the glance password
password = glance
[paste_deploy]
flavor = keystone
(4) Populate the Image service database
Note: ignore any deprecation messages in the output. If you later change the configuration above, re-run this command, or you will hit errors down the line!!!
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
(5) Finish the installation
Start the Image services and configure them to start at boot!!!
[root@controller ~]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
[root@controller ~]# systemctl start openstack-glance-api.service openstack-glance-registry.service
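If the verification below fails, checking the two service units first usually pinpoints the problem:
[root@controller ~]# systemctl status openstack-glance-api.service openstack-glance-registry.service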
4、Verify the Operation
(1) Source the admin credentials
[root@controller ~]# . admin-openrc
(2) Download the source image
[root@controller ~]# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
(3) Upload the image with public visibility
Upload the image to the Image service using the QCOW2 disk format and bare container format, with public visibility so every project can access it!!!
[root@controller ~]# openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2018-09-19T12:05:40Z |
| disk_format | qcow2 |
| file | /v2/images/f5ec2ae2-7a6d-4eaf-96a0-b72afe3eda46/file |
| id | f5ec2ae2-7a6d-4eaf-96a0-b72afe3eda46 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | d2c589122d9f4273b572b1752551cdb7 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2018-09-19T12:05:40Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
(4) Confirm the upload and verify the attributes
### the stored file name matches the image id above
[root@controller ~]# ls /var/lib/glance/images/
f5ec2ae2-7a6d-4eaf-96a0-b72afe3eda46
### a status of active means success
[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| f5ec2ae2-7a6d-4eaf-96a0-b72afe3eda46 | cirros | active |
+--------------------------------------+--------+--------+
六、Compute Service Configuration (Compute Node)
Next we set up the Compute environment!!!
1、Install and Configure the Controller Node (controller)
(1) Create the nova_api and nova databases
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
(2) Grant access to the databases
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
(3) Test the login
[root@controller ~]# mysql -u nova -pnova nova
(4) Create the nova user
### source the admin credentials
[root@controller ~]# . admin-openrc
### use the nova password just set above
[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 439f3e0ad34c49f581a8041e3e6f40d5 |
| name | nova |
+-----------+----------------------------------+
(5) Add the admin role to the nova user
Note: this command produces no output.
[root@controller ~]# openstack role add --project service --user nova admin
(6) Create the nova service entity
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 8a5b04c26cad4085877dc14205e4b01e |
| name | nova |
| type | compute |
+-------------+----------------------------------+
(7) Create the Compute service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 0967345ed4fa40a3b6d5ed0a0bbfb4e5 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8a5b04c26cad4085877dc14205e4b01e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | d0a700fce7834d1f9254124ef93606ce |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8a5b04c26cad4085877dc14205e4b01e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field | Value |
+--------------+-------------------------------------------+
| enabled | True |
| id | 0b1caf0dfe654f60aaa693b6db06a271 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8a5b04c26cad4085877dc14205e4b01e |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1/%(tenant_id)s |
+--------------+-------------------------------------------+
2、Install and Configure the Components (controller)
(1) Install the packages
[root@controller ~]# yum install -y openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
(2) Configure nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
### the controller node's IP, i.e. this machine
my_ip = 10.10.10.1
### enable support for the Networking service
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
### use the nova password
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
### use the nova password
connection = mysql+pymysql://nova:nova@controller/nova
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
### use the openstack password
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
### location of the Image service API
[glance]
api_servers = http://controller:9292
### lock path
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
(3) Populate the Compute databases
### ignore any deprecation messages in the output
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
3、Finish the Installation
Start the Compute services and configure them to start at boot!!!
[root@controller ~]# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
The Compute base environment on the controller is now ready; next we install the compute node!!!
4、Install and Configure the Components (compute)
(1) Install the package
[root@compute ~]# yum install -y openstack-nova-compute
(2) Configure nova.conf
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
### the compute node's IP, i.e. this machine
my_ip = 10.10.10.2
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
### use the openstack password
rabbit_password = openstack
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
### use the nova password
password = nova
### enable and configure remote console access
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
(3) Check whether the node supports hardware acceleration for VMs
If this command returns one or greater, your compute node supports hardware acceleration and no extra configuration is needed.
If it returns zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
### if your return value is not 0, you can skip the next step
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
0
(4) If hardware acceleration is not supported
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
(5) Finish the installation and enable autostart
If startup fails with: ERROR nova.virt.driver [-] Unable to load the virtualization driver
Fix: yum install -y libvirt
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
5、Verify the Operation
(1) Source the admin credentials
[root@controller ~]# . admin-openrc
(2) Check the Compute services (controller)
List the service components to verify each process started and registered successfully!!!
[root@controller ~]# openstack compute service list
+----+--------------+------------+----------+---------+-------+--------------+
| Id | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+------------+----------+---------+-------+--------------+
| 1 | nova- | controller | internal | enabled | up | 2018-09-19T1 |
| | consoleauth | | | | | 2:21:40.0000 |
| | | | | | | 00 |
| 2 | nova- | controller | internal | enabled | up | 2018-09-19T1 |
| | conductor | | | | | 2:21:40.0000 |
| | | | | | | 00 |
| 3 | nova- | controller | internal | enabled | up | 2018-09-19T1 |
| | scheduler | | | | | 2:21:40.0000 |
| | | | | | | 00 |
| 6 | nova-compute | compute | nova | enabled | up | 2018-09-19T1 |
| | | | | | | 2:21:35.0000 |
| | | | | | | 00 |
+----+--------------+------------+----------+---------+-------+--------------+
七、Networking Service Configuration
1、Install and Configure the Controller Node
Before you configure the OpenStack Networking (neutron) service, you must create its database, service credentials, and API endpoints.
(1) Database operations (controller)
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
(2) Create the neutron user
[root@controller ~]# . admin-openrc ### source the admin credentials
### enter the neutron password just set
[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | a639afe08d244e099eb0d587c27266e3 |
| name | neutron |
+-----------+----------------------------------+
(3) Add the admin role to the neutron user
### this command produces no output
[root@controller ~]# openstack role add --project service --user neutron admin
(4) Create the neutron service entity
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | dc9aa64760e74570b811b1b74ed2a22e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
(5) Create the Networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | b3c05409694741d29ddbbe3427e959fe |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dc9aa64760e74570b811b1b74ed2a22e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 155a3f69ba734764a6000ae5dba6dd99 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dc9aa64760e74570b811b1b74ed2a22e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | e2e3105220b44a13b51d05575b652f68 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | dc9aa64760e74570b811b1b74ed2a22e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
(6) Configure the networking option (provider networks)
Self-service (private) networks are also an option; for this lab we configure provider (public) networks!!!
<1> Install the components
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
<2> Configure the server component
The Networking server component configuration covers the database, authentication mechanism, message queue, topology change notifications, and plug-in.
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
### enable the ML2 plug-in and disable additional plug-ins
core_plugin = ml2
service_plugins =
### RabbitMQ message queue connection
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
### database access configuration
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
### use the openstack password
rabbit_password = openstack
### Identity service access configuration
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
### use the neutron password
password = neutron
### configure Networking to notify Compute of network topology changes
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
### use the nova password
password = nova
### lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
<3> Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
### enable flat and VLAN networks
type_drivers = flat,vlan
### disable self-service (tenant) networks
tenant_network_types =
### enable the Linux bridge mechanism
mechanism_drivers = linuxbridge
### enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]
### configure the provider virtual network as a flat network
flat_networks = provider
[securitygroup]
### enable ipset to make security group rules more efficient
enable_ipset = True
<4> Configure the Linux bridge agent
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
### map the provider virtual network to the provider physical network interface; use this machine's NIC name (here eth0 carries the IP and eth1 is the provider interface)
physical_interface_mappings = provider:eth1
[vxlan]
### disable VXLAN overlay networks
enable_vxlan = False
### enable security groups and configure the Linux bridge iptables firewall driver
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
<5> Configure the DHCP agent
Configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can reach metadata over the network.
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
(7) Configure the metadata agent
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
### configure the metadata host and the shared secret
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = dream
(8) Configure Compute to use Networking
[root@controller ~]# vim /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
### use the neutron password
password = neutron
service_metadata_proxy = True
### use the shared secret set for the metadata agent
metadata_proxy_shared_secret = dream
(9) Finish the installation
### the Networking service initialization scripts expect a symlink /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini; if it does not exist, create it with the following command
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
### populate the database; this step comes late in the Networking setup because the script needs the completed server and plug-in configuration files
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
### restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
### start the Networking services and configure them to start at boot
[root@controller ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
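Once the services are running, the agent list is a useful check; at this stage the Linux bridge, DHCP, and metadata agents on the controller should report alive (the compute node's agent appears after the next section):
[root@controller ~]# neutron agent-list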
2、Install and Configure the Compute Node (compute)
The compute node handles connectivity and security groups for instances.
(1) Install the components
[root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset
(2) Configure the common component
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
### RabbitMQ message queue connection
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
### use the openstack password
rabbit_password = openstack
### Identity service access configuration
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
### use the neutron password
password = neutron
### lock path
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the networking option (provider networks)
Configure the Linux bridge agent:
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
### map the provider virtual network to the provider physical network interface; use this machine's NIC name (here eth0 carries the IP and eth1 is the provider interface)
physical_interface_mappings = provider:eth1
[vxlan]
### disable VXLAN overlay networks
enable_vxlan = False
[securitygroup]
### enable security groups and configure the Linux bridge iptables firewall driver
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Configure Compute to use Networking
[root@compute ~]# vim /etc/nova/nova.conf
### access parameters
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
### use the neutron password
password = neutron
(5) Finish the installation
### restart the Compute service
[root@compute ~]# systemctl restart openstack-nova-compute.service
### start the Linux bridge agent and configure it to start at boot
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
3、Verify the Operation
(1) Source the admin credentials
[root@controller ~]# . admin-openrc
(2) List the loaded extensions
Verify that the neutron-server process started correctly!!!
[root@controller ~]# neutron ext-list
+---------------------------+-----------------------------------------------+
| alias | name |
+---------------------------+-----------------------------------------------+
| default-subnetpools | Default Subnetpools |
| availability_zone | Availability Zone |
| network_availability_zone | Network Availability Zone |
| auto-allocated-topology | Auto Allocated Topology Services |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| tag | Tag support |
| external-net | Neutron external network |
| net-mtu | Network MTU |
| network-ip-availability | Network IP Availability |
| quotas | Quota management support |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| address-scope | Address scope |
| timestamp_core | Time Stamp Fields addition for core resources |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
+---------------------------+-----------------------------------------------+
八、Launch an Instance
1、Create Virtual Networks
(1) Create the provider network
[root@controller ~]# . admin-openrc ### source the admin credentials
[root@controller ~]# neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-09-19T13:09:36 |
| description | |
| id | e72f1fe5-e634-4b58-814a-b856a51cc183 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | d2c589122d9f4273b572b1752551cdb7 |
| updated_at | 2018-09-19T13:09:36 |
+---------------------------+--------------------------------------+
(2) Create a subnet
[root@controller ~]# neutron subnet-create --name provider --allocation-pool start=10.10.10.10,end=10.10.10.20 --dns-nameserver 114.114.114.114 --gateway 10.10.10.100 provider 10.10.10.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------+
| allocation_pools | {"start": "10.10.10.10", "end": "10.10.10.20"} |
| cidr | 10.10.10.0/24 |
| created_at | 2018-09-19T13:09:57 |
| description | |
| dns_nameservers | 114.114.114.114 |
| enable_dhcp | True |
| gateway_ip | 10.10.10.100 |
| host_routes | |
| id | decc1a0c-53cc-4632-98de-2ff64aaa806f |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | provider |
| network_id | e72f1fe5-e634-4b58-814a-b856a51cc183 |
| subnetpool_id | |
| tenant_id | d2c589122d9f4273b572b1752551cdb7 |
| updated_at | 2018-09-19T13:09:57 |
+-------------------+------------------------------------------------+
2、Create the m1.nano Flavor
[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field | Value |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+---------+
3、Generate a Key Pair
Most cloud images support public-key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.
(1) Source the demo credentials
[root@controller ~]# . demo-openrc
(2) Generate and add a key pair
[root@controller ~]# ssh-keygen -q -N "" ### just press Enter at the prompt
[root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | 20:ce:f0:e0:1b:4c:ae:e0:6d:c5:d5:e5:99:e4:3b:12 |
| name | mykey |
| user_id | 0f55752937b2424fbaeb04a8d6e147c3 |
+-------------+-------------------------------------------------+
(3) Verify the key pair was added
[root@controller ~]# openstack keypair list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | 20:ce:f0:e0:1b:4c:ae:e0:6d:c5:d5:e5:99:e4:3b:12 |
+-------+-------------------------------------------------+
4、Add Security Group Rules
(1) Permit ICMP (ping)
[root@controller ~]# openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 76f2467b-b4af-4c62-bf24-14f98ff8d940 |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | 74db9e1b-b34c-4f2a-9e80-a64b82aa5b13 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
(2) Permit secure shell (SSH) access
[root@controller ~]# openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 4442ece2-07fd-41f5-a755-cc0734cc507d |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | 74db9e1b-b34c-4f2a-9e80-a64b82aa5b13 |
| port_range | 22:22 |
| remote_security_group | |
+-----------------------+--------------------------------------+
5、Launch an Instance on the Provider Network
(1) Source the demo credentials
[root@controller ~]# . demo-openrc
(2) List available flavors
[root@controller ~]# openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
(3) List available images
[root@controller ~]# openstack image list ### the cirros image we created earlier
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| f5ec2ae2-7a6d-4eaf-96a0-b72afe3eda46 | cirros | active |
+--------------------------------------+--------+--------+
(4) List available networks
[root@controller ~]# openstack network list ### the provider network we just created
+---------------------------------+----------+---------------------------------+
| ID | Name | Subnets |
+---------------------------------+----------+---------------------------------+
| e72f1fe5-e634-4b58-814a- | provider | decc1a0c-53cc-4632-98de- |
| b856a51cc183 | | 2ff64aaa806f |
+---------------------------------+----------+---------------------------------+
(5) List available security groups
[root@controller ~]# openstack security group list
+----------------------+---------+----------------------+----------------------+
| ID | Name | Description | Project |
+----------------------+---------+----------------------+----------------------+
| 74db9e1b-b34c-4f2a- | default | Default security | 44a378473a174b01b9f1 |
| 9e80-a64b82aa5b13 | | group | 1df1d1bfad2c |
+----------------------+---------+----------------------+----------------------+
(6) Launch the instance
### provider-instance is the VM's name (choose your own); pass your provider network's ID as the net-id value
[root@controller ~]# openstack server create --flavor m1.tiny --image cirros --nic net-id=e72f1fe5-e634-4b58-814a-b856a51cc183 --security-group default --key-name mykey provider-instance
+--------------------------------------+---------------------------------------+
| Field | Value |
+--------------------------------------+---------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | nRjwCSmuWH3i |
| config_drive | |
| created | 2018-09-19T13:13:28Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 8b2f5104-3380-4543-8232-b07bef2dd530 |
| image | cirros (f5ec2ae2-7a6d-4eaf- |
| | 96a0-b72afe3eda46) |
| key_name | mykey |
| name | provider-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | 44a378473a174b01b9f11df1d1bfad2c |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | BUILD |
| updated | 2018-09-19T13:13:29Z |
| user_id | 0f55752937b2424fbaeb04a8d6e147c3 |
+--------------------------------------+---------------------------------------+
(7) Check the instance status
### the assigned address falls within the allocation pool we defined, and the status changes from BUILD to ACTIVE
[root@controller ~]# openstack server list
+----------------------+-------------------+--------+----------------------+
| ID | Name | Status | Networks |
+----------------------+-------------------+--------+----------------------+
| 8b2f5104-3380-4543-8 | provider-instance | ACTIVE | provider=10.10.10.11 |
| 232-b07bef2dd530 | | | |
+----------------------+-------------------+--------+----------------------+
### on the compute node the instance is now visible, with state running
[root@compute ~]# virsh list
Id Name State
----------------------------------------------------
1 instance-00000001 running
(8) Access the instance via the virtual console
### get your instance's URL and open it in a web browser; the instance here is named provider-instance
[root@controller ~]# openstack console url show provider-instance
+-------+----------------------------------------------------------------------+
| Field | Value |
+-------+----------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=c8e02fba-14e6-453b- |
| | b0f5-b0f82b270efc |
+-------+----------------------------------------------------------------------+
(9) Console access error
The physical host runs Linux, so it also needs the host entries!!!
<1> Error:
The console hangs at the iPXE boot screen: iPXE (http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM
<2> Fix
### the cause is the QEMU version, 1.5.3, being too old; upgrading fixes it
[root@compute ~]# virsh version
Compiled against library: libvirt 2.0.0
Using library: libvirt 2.0.0
Using API: QEMU 2.0.0
Running hypervisor: QEMU 1.5.3
[root@compute ~]# wget https://download.qemu.org/qemu-2.6.0.tar.bz2
[root@compute ~]# yum -y install gcc gcc-c++ automake libtool zlib-devel glib2-devel bzip2-devel libuuid-devel spice-protocol spice-server-devel usbredir-devel libaio-devel
[root@compute ~]# tar xf qemu-2.6.0.tar.bz2
[root@compute ~]# cd qemu-2.6.0
[root@compute qemu-2.6.0]# ./configure --prefix=/usr/local/qemu
### note: compiling is slow; alternatively, drop the qemu directory from the Baidu Netdisk share into /usr/local
[root@compute qemu-2.6.0]# make && make install
[root@compute ~]# rm -f /usr/bin/qemu-kvm
[root@compute ~]# rm -f /usr/libexec/qemu-kvm
[root@compute ~]# rm -f /usr/bin/qemu-img
[root@compute ~]# ln -s /usr/local/qemu/bin/qemu-system-x86_64 /usr/bin/qemu-kvm
[root@compute ~]# ln -s /usr/local/qemu/bin/qemu-system-x86_64 /usr/libexec/qemu-kvm
[root@compute ~]# ln -s /usr/local/qemu/bin/qemu-img /usr/bin/qemu-img
[root@compute ~]# virsh version
Compiled against library: libvirt 2.0.0
Using library: libvirt 2.0.0
Using API: QEMU 2.0.0
Running hypervisor: QEMU 2.6.0
(10) Retry the access
Some services presumably need restarting here; I simply rebooted both the controller and compute nodes and then accessed the console again!!!
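If a full reboot feels heavy-handed, restarting the hypervisor stack on the compute node and the console services on the controller is likely sufficient (my assumption; the author only verified the reboot):
[root@compute ~]# systemctl restart libvirtd.service openstack-nova-compute.service
[root@controller ~]# systemctl restart openstack-nova-api.service openstack-nova-novncproxy.service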
九、Dashboard Configuration
1、Install and Configure
(1) Install the package
[root@controller ~]# yum install -y openstack-dashboard
(2) Configure local_settings
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
### configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "controller"
### use the Identity API version 3 endpoint
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
### set the default role for users created via the dashboard to user
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
### allow all hosts to access the dashboard
ALLOWED_HOSTS = ['*', ]
### configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache' ### this line does not exist yet; add it
### the CACHES block below already exists; just edit it
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
### enable multi-domain support
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
### configure the API versions
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
### set the default domain for users created via the dashboard to default
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
### if you chose networking option 1 (provider networks), disable layer-3 networking services
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_ipv6': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
...
}
### configure the time zone
TIME_ZONE = "Asia/Shanghai"
(3) Finish the installation
[root@controller ~]# systemctl restart httpd.service memcached.service
(4) Error
I never hit this error in earlier deployments; it is quite odd!!!
<1> Login error:
http://10.10.10.1/dashboard
[root@controller ~]# cat /var/log/httpd/error_log ### the error in the log
[Thu Sep 20 21:09:58.075790 2018] [:error] [pid 3295] "Unable to create a new session key. "
[Thu Sep 20 21:09:58.075792 2018] [:error] [pid 3295] RuntimeError: Unable to create a new session key. It is likely that the cache is unavailable.
<2> Fix
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
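The exact change is not shown. A commonly reported workaround for "Unable to create a new session key" is to switch Horizon's session storage from the memcached cache backend to file-based sessions (my assumption based on the error message, not confirmed by the author):
SESSION_ENGINE = 'django.contrib.sessions.backends.file'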
[root@controller ~]# systemctl restart httpd.service memcached.service ### restart the services
(5) Switch the dashboard language to Chinese
十、Self-Service (Private) Network Configuration
1、Configure Private Networking on the Controller Node (controller)
(1) Install the components
These components were already installed when we set up the provider network!!!
[root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
(2) Configure the server component
[root@controller ~]# vim /etc/neutron/neutron.conf ### only two settings need changing; see the sketch below
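The author does not list the two settings. Going by the official Mitaka self-service network guide, they are most likely the router service plug-in and overlapping-IP support, both under [DEFAULT] (my reconstruction, not the author's screenshot):
[DEFAULT]
service_plugins = router
allow_overlapping_ips = True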
(3) Configure the Modular Layer 2 (ML2) plug-in (the likely changes are sketched after the command)
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
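The changes are again not shown. Per the official Mitaka guide, ml2_conf.ini would gain VXLAN and layer-2 population support, roughly (my reconstruction):
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
[ml2_type_vxlan]
### VXLAN network identifier range for self-service networks
vni_ranges = 1:1000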
(4) Configure the Linux bridge agent
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
### enable VXLAN overlay networks, set the IP of the physical interface that handles overlay traffic, and enable layer-2 population
[vxlan]
enable_vxlan = True
### local IP address (controller)
local_ip = 10.10.10.1
l2_population = True
(5) Configure the layer-3 agent
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
(6) Start the layer-3 service
[root@controller ~]# systemctl enable neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-l3-agent.service
2、Configure Private Networking on the Compute Node (compute)
(1) Configure the Linux bridge agent
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = True
local_ip = 10.10.10.2
l2_population = True
(2) Finish the installation
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl restart neutron-linuxbridge-agent.service
(3) Edit local_settings to enable layer-3 networking services (likely change sketched below)
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
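The diff is not shown; presumably the OPENSTACK_NEUTRON_NETWORK flags disabled earlier are flipped back on, at minimum router support (my assumption):
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    ...
}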
[root@controller ~]# systemctl restart httpd memcached
十一、Dashboard Operations
1、Configure the Networks (to enable communication)
(1) Delete the existing instance and networks
(2) Create the public network (as admin)
(3) Create the private network
(4) Create a router (to connect the public and private networks)
At this point the public and private networks are linked through the router!!!
2、Create an Instance
3、Associate a Floating IP
4、Test
[root@controller ~]# . admin-openrc
[root@controller ~]# openstack server list
+------------------------------+-------+--------+------------------------------+
| ID | Name | Status | Networks |
+------------------------------+-------+--------+------------------------------+
| 0700ea53-d2ca- | cloud | ACTIVE | provite=10.0.0.3, |
| 4de6-bf40-03f3233ee41b | | | 10.10.10.122 |
+------------------------------+-------+--------+------------------------------+
### the instance is reachable via the console URL, and ip addr inside it shows 10.0.0.3
[root@controller ~]# openstack console url show cloud
+-------+---------------------------------------------------------------------------------+
| Field | Value |
+-------+---------------------------------------------------------------------------------+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=f150201d-ea5f-4556-a0a3-c1d3e0bf6da8 |
+-------+---------------------------------------------------------------------------------+
[root@controller ~]# ping 10.0.0.3
connect: Network is unreachable
[root@controller ~]# ping 10.10.10.122
PING 10.10.10.122(10.10.10.122) 56(84) bytes of data.
64 bytes from 10.10.10.122: icmp_seq=1 ttl=64 time=0.305 ms
64 bytes from 10.10.10.122: icmp_seq=2 ttl=64 time=0.323 ms
十二、Block Storage Service
1、Install and Configure the Controller Node (controller)
(1) Configure the database
[root@controller ~]# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
(2) Create the service credentials
[root@controller ~]# . admin-openrc ### source the admin credentials
[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | 296463ec2b564fcd85d49d9e89d8cc15 |
| enabled | True |
| id | 8e13279473104f1a8354346743c32e17 |
| name | cinder |
+-----------+----------------------------------+
### add the admin role to the cinder user; this command produces no output
[root@controller ~]# openstack role add --project service --user cinder admin
### create the cinder and cinderv2 service entities; the Block Storage service requires two service entities
[root@controller ~]# openstack service create --name cinder --description "OpenStack Block Storage" volume
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 1459c962567d4572a9334d3921792654 |
| name | cinder |
| type | volume |
+-------------+----------------------------------+
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | f92b834b5eaf4816af12020b9921eb6c |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
(3)Create the Block Storage service API endpoints
Each Block Storage service entity requires its own set of endpoints.
[root@controller ~]# openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 37e871970fea4d4c9d692a9b22369f7e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1459c962567d4572a9334d3921792654 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 081594304c504dbda026e7bbb2d1f843 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1459c962567d4572a9334d3921792654 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | e232403f9b4f411ab340193b95353376 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 1459c962567d4572a9334d3921792654 |
| service_name | cinder |
| service_type | volume |
| url | http://controller:8776/v1/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 4d4f4f034442427f8fe9ba5a47e2343c |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f92b834b5eaf4816af12020b9921eb6c |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | ba633f8ec89a4206b9ed71fa18e5e0e7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f92b834b5eaf4816af12020b9921eb6c |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+--------------+-----------------------------------------+
| Field | Value |
+--------------+-----------------------------------------+
| enabled | True |
| id | 52cae10050dc42919344c0d9f385b162 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f92b834b5eaf4816af12020b9921eb6c |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(tenant_id)s |
+--------------+-----------------------------------------+
(4)Install and configure components
[root@controller ~]# yum install -y openstack-cinder
(5)Configure cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
###this node's IP
my_ip = 10.10.10.1
###database access
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
###RabbitMQ message queue access
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
###Identity service access
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
###lock path
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(6)Populate the Block Storage database
###Ignore any deprecation messages in the output
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
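A quick sanity check that the sync created tables, using the cinder DB account from step (1):
[root@controller ~]# mysql -ucinder -pcinder cinder -e 'SHOW TABLES;'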
(7)Configure Compute to use Block Storage
[root@controller ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
(8)Finish the installation
###Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
###Start the Block Storage services and configure them to start at boot
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
2、Configure the storage node (block)
(1)Install supporting utility packages
###Install the LVM packages; some distributions include LVM by default
[root@block ~]# yum install -y lvm2
###Start the LVM metadata service and configure it to start at boot
[root@block ~]# systemctl enable lvm2-lvmetad.service
[root@block ~]# systemctl start lvm2-lvmetad.service
(2)Attach an additional disk (here /dev/vda) and confirm it appears
[root@block ~]# fdisk -l
(3)Create the LVM physical volume and volume group
###Create the /dev/vda physical volume
[root@block ~]# pvcreate /dev/vda
###Create the cinder-volumes volume group
[root@block ~]# vgcreate cinder-volumes /dev/vda
###Verify
[root@block ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 rhel lvm2 a-- 19.00g 0
/dev/vda cinder-volumes lvm2 a-- 2.00g 2.00g
[root@block ~]# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 0 0 wz--n- 2.00g 2.00g
rhel 1 2 0 wz--n- 19.00g 0
(4)Configure lvm.conf
[root@block ~]# vim /etc/lvm/lvm.conf
devices {
...
###accept vda (the cinder disk) and also sda, because the OS disk here uses LVM too (the rhel VG on sda2 shown above); reject all other devices
filter = [ "a/sda/", "a/vda/", "r/.*/" ]
...
}
(5)Install and configure components
[root@block ~]# yum install -y openstack-cinder targetcli python-keystone
(6)Configure cinder.conf
[root@block ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
###this node's IP
my_ip = 10.10.10.3
###enable the LVM back end defined below (required for the block@lvm service entry to register)
enabled_backends = lvm
###location of the Image service API
glance_api_servers = http://controller:9292
###database access
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
###RabbitMQ message queue access
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = openstack
###Identity service access
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
###This section does not exist by default; add it yourself to configure the LVM back end with the LVM driver
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
###lock path
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(7)Finish the installation
[root@block ~]# systemctl enable openstack-cinder-volume.service target.service
[root@block ~]# systemctl start openstack-cinder-volume.service target.service
(8)Attach a volume to the instance (a CLI sketch follows)
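No commands are shown for this step in the original notes; from the CLI it is roughly the following (the volume name disk1 is an assumption; the 1 GB size matches the /dev/vdb seen in the verification below):
[root@controller ~]# . admin-openrc
###create a 1 GB volume and attach it to the cloud instance
[root@controller ~]# openstack volume create --size 1 disk1
[root@controller ~]# openstack server add volume cloud disk1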
3、Verify operation
[root@controller ~]# . admin-openrc ###load admin credentials
###List the service components to verify each process started successfully
[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2018-09-21T12:05:56.000000 | - |
| cinder-volume | block@lvm | nova | enabled | up | 2018-09-21T12:05:49.000000 | - |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
###Log in to the cloud instance; if the new disk is visible, the setup succeeded
[root@controller ~]# ssh cirros@10.10.10.122
$ sudo fdisk -l
Disk /dev/vdb: 1073 MB, 1073741824 bytes
4、Extend a volume (a CLI sketch follows)
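No commands are shown here either; with the era-appropriate clients a volume can be grown roughly like this (a sketch; the volume must be detached and in the available state before extending):
[root@controller ~]# openstack server remove volume cloud disk1
###grow disk1 from 1 GB to 2 GB, then reattach it
[root@controller ~]# cinder extend disk1 2
[root@controller ~]# openstack server add volume cloud disk1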
十三、Creating an Image Template VM
1、Graphical installation
[root@dream ~]# ls /root/
rhel-server-6.5-x86_64-dvd.iso
[root@dream ~]# cd /var/lib/libvirt/images/
[root@dream images]# qemu-img create -f qcow2 test.qcow2 5G
###you can also use "--cdrom=" to point at the ISO instead of --location
[root@dream images]# virt-install --name test --memory 1024 --location /root/rhel-server-6.5-x86_64-dvd.iso --disk test.qcow2
Note: create only a single / partition, formatted as ext4!!!
2、Seal the virtual machine
(1)Stop the firewall
# /etc/init.d/iptables stop
# /etc/init.d/ip6tables stop
# chkconfig iptables off
# chkconfig ip6tables off
(2)Disable SELinux
# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
(3)Disable the boot progress bar
# vim /boot/grub/grub.conf ###edit the end of the kernel line; console=ttyS0 is an assumed value here, which sends boot output to the serial console that OpenStack captures
kernel /boot/vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=5d5513e3-55ac-41a9-b154-c5864cb15391 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM quiet console=ttyS0
(4)Configure the yum repositories
###on the dream host (10.10.10.250), generate the repo metadata for the cloud-init packages
# cd /var/www/html/cloud-init/rhel6
# createrepo .
###inside the VM, point yum at both repositories
# vim /etc/yum.repos.d/yum.repo
[rhel6.5]
name=rhel6.5
baseurl=http://10.10.10.250/rhel6.5
gpgcheck=0
[cloud-init]
name=cloud-init
baseurl=http://10.10.10.250/cloud-init/rhel6
gpgcheck=0
(5)Configure the network interface
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
NOZEROCONF=yes
(6)Install and configure software
# yum install -y acpid
# chkconfig acpid on
# yum install -y cloud-init cloud-utils-growpart dracut-modules-growroot
# dracut -f
# ll /boot/
total 20915
-rw-r--r--. 1 root root 105195 Nov 10 2013 config-2.6.32-431.el6.x86_64
drwxr-xr-x. 3 root root 1024 Sep 21 13:51 efi
drwxr-xr-x. 2 root root 1024 Sep 22 02:03 grub
-rw-------. 1 root root 14445312 Sep 22 02:27 initramfs-2.6.32-431.el6.x86_64.img
drwx------. 2 root root 12288 Sep 21 13:49 lost+found
-rw-r--r--. 1 root root 193758 Nov 10 2013 symvers-2.6.32-431.el6.x86_64.gz
-rw-r--r--. 1 root root 2518236 Nov 10 2013 System.map-2.6.32-431.el6.x86_64
-rwxr-xr-x. 1 root root 4128944 Nov 10 2013 vmlinuz-2.6.32-431.el6.x86_64
# vim /etc/cloud/cloud.cfg
cloud_init_modules:
- bootcmd
- write-files
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- rsyslog
- users-groups
- ssh
- resolv-conf ###add this entry
(7)Shut down and clean the configuration
# poweroff
[root@dream ~]# virt-sysprep -d test ###prompts that show a hostname run on the physical host
(8)Compress the image
[root@dream ~]# cd /var/lib/libvirt/images/
[root@dream images]# virt-sparsify --compress test.qcow2 test.qcow2.new
[root@dream images]# du -sh test.qcow2 test.qcow2.new
1.1G test.qcow2
288M test.qcow2.new
###copy it to the web server's document root
[root@dream images]# cp test.qcow2.new /var/www/html/
Then in the Dashboard go to System --> Images --> Create Image, enter the URL (http://10.10.10.250/test.qcow2.new), and import it. This image is now the template, and you can launch instances from it exactly as described above!!!
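The same import can also be done from the CLI (a sketch; the image name rhel6.5-template is an assumption, and the file is downloaded first rather than relying on URL import):
[root@controller ~]# wget http://10.10.10.250/test.qcow2.new
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --public --file test.qcow2.new rhel6.5-template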