Preface
OpenStack really is a behemoth, and getting a firm grip on it is not easy. Once you have a rough idea of what it does, the next step should be deploying it. There are one-command installers such as RDO or DevStack, but treat those as a quick taste: after some basic hands-on experience you should walk through a full manual installation at least once, otherwise you will never feel calm when errors and failures show up. So, once you have the basics of OpenStack down, let's start installing.
Note: the official OpenStack documentation is genuinely excellent, but reading it in English always feels a bit slow to me, so this note was written on top of the official guide.
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/
First comes a rough plan: how many nodes, which operating system, and how the networks are laid out.
Here is my rough plan:
Number of nodes: 2 (one controller node, one compute node)
Operating system: CentOS Linux release 7.2.1511 (Core)
Network layout:
Controller node: 10.0.0.101  192.168.15.101
Compute node: 10.0.0.102  192.168.15.102
Prerequisites
The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:
Controller Node: 1 processor, 4 GB memory, and 5 GB storage
Compute Node: 1 processor, 2 GB memory, and 10 GB storage
These are the official recommended minimums for a proof-of-concept environment.
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/environment.html
Note: if you create the operating systems and configure the networking by hand, step by step, I will have to look down on you a little. Go study Vagrant instead: with the configuration file below, a single command spins up both virtual machines with their networks already configured. A short Vagrant tutorial: http://youerning.blog.51cto.com/10513771/1745102
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "centos7"
  node_servers = { :control => ['10.0.0.101', '192.168.15.101'],
                   :compute => ['10.0.0.102', '192.168.15.102']
                 }
  node_servers.each do |node_name, node_ip|
    config.vm.define node_name do |node_config|
      node_config.vm.host_name = node_name.to_s
      node_config.vm.network :private_network, ip: node_ip[0]
      node_config.vm.network :private_network, ip: node_ip[1], virtualbox__intnet: true
      config.vm.boot_timeout = 300
      node_config.vm.provider "virtualbox" do |v|
        v.memory = 4096
        v.cpus = 1
      end
    end
  end
end
Run vagrant up, wait a little while, and two freshly baked virtual machines come out of the oven; our environment is ready.
The environment is as follows:
Operating system: CentOS Linux release 7.2.1511 (Core)
Network layout:
Controller node: 10.0.0.101  192.168.15.101
Compute node: 10.0.0.102  192.168.15.102
Note: config.vm.box = "centos7" above assumes you already have a CentOS 7 box registered under the name centos7.
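If you do not have one yet, a box can be registered with the standard vagrant box add command; the file path below is only a placeholder for whichever CentOS 7 box image you actually downloaded:
vagrant box add centos7 /path/to/CentOS-7-x86_64-Vagrant.box
Alternatively, pull the official centos/7 box from the public catalog (vagrant box add centos/7) and change config.vm.box accordingly.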
Before starting the deployment, let's sort out the overall OpenStack installation flow.
First comes the base software environment: some common software and package repositories need to be set up, roughly as follows.
NTP server
on the controller node and the other nodes
OpenStack package repository
Common components:
SQL database ===> MariaDB
NoSQL database ===> MongoDB (not needed for the core services)
Message queue ===> RabbitMQ
Memcached
Then come the individual services of the OpenStack framework. The core services are:
Identity service ===> Keystone
Image service ===> Glance
Compute service ===> Nova
Networking service ===> Neutron
Dashboard ===> Horizon
Block Storage service ===> Cinder
Other storage services:
Shared File Systems service ===> Manila
Object Storage service ===> Swift
Other services:
Orchestration service ===> Heat
Telemetry service ===> Ceilometer
Database service ===> Trove
Environment preparation
Name resolution
Edit the hosts file on every node and add the following entries:
10.0.0.101 controller
10.0.0.102 compute
NTP time service
Controller node
1) Install the chrony package
yum install chrony
2) Edit /etc/chrony.conf and add the following; 202.108.6.95 can be replaced with any NTP server you prefer.
server 202.108.6.95 iburst
allow 10.0.0.0/24
3) Enable at boot and start the service
#systemctl enable chronyd.service
#systemctl start chronyd.service
Other nodes
1) Install the chrony package
yum install chrony
2) Edit /etc/chrony.conf and add the following:
server controller iburst
allow 10.0.0.0/24
3) Enable at boot and start the service
#systemctl enable chronyd.service
#systemctl start chronyd.service
Verification
Controller node
# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                    2   7    12   137  -2814us[-3000us] +/-   43ms
^* 192.0.2.12                    2   6   177    46    +17us[  -23us] +/-   68ms
Other nodes
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   9   377   421    +15us[  -87us] +/-   15ms
OpenStack package repository
Install the yum repository for the corresponding OpenStack release
yum install centos-release-openstack-mitaka
System update
yum upgrade
Note: if the kernel was updated, a reboot is required.
Install python-openstackclient and openstack-selinux
yum install python-openstackclient
yum install openstack-selinux
Note: if you hit an error like "Package does not match intended download", run yum clean all and retry, or simply download the RPM packages and install them directly.
Reference download location: http://ftp.usf.edu/pub/centos/7/cloud/x86_64/openstack-kilo/common/
SQL database
Install
yum install mariadb mariadb-server python2-PyMySQL
Create the configuration file /etc/my.cnf.d/openstack.cnf and add the following:
# bind to the controller's management IP
[mysqld]
bind-address = 10.0.0.101
# storage engine and character set settings
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
Enable at boot and start the service
systemctl enable mariadb.service
systemctl start mariadb.service
Initialize the database and set the root password:
mysql_secure_installation
Enter current password for root (enter for none): [Enter]
Set root password? [Y/n] Y
New password: openstack
Re-enter new password: openstack
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
Message queue: RabbitMQ
Install
yum install rabbitmq-server
Enable at boot and start
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user
rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user configure, write, and read permissions (the three patterns, in that order)
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
NoSQL database: MongoDB
Install
yum install mongodb-server mongodb
Edit the configuration file /etc/mongod.conf
bind_ip = 10.0.0.101
# smallfiles = true is optional
smallfiles = true
Enable at boot and start
# systemctl enable mongod.service
# systemctl start mongod.service
Memcached
Install
# yum install memcached python-memcached
Enable at boot and start
# systemctl enable memcached.service
# systemctl start memcached.service
At this point the base software environment for the OpenStack framework is essentially done; next come the individual services.
Installing the services is quite repetitive: apart from Keystone they all follow more or less the same steps, the only real difference being the names used when creating things. The general procedure is:
1) Configure the database
CREATE DATABASE xxx;
GRANT ALL PRIVILEGES ON xxx.* TO 'xxx'@'localhost' \
IDENTIFIED BY 'XXX_DBPASS';
GRANT ALL PRIVILEGES ON xxx.* TO 'xxx'@'%' \
IDENTIFIED BY 'XXX_DBPASS';
2) Install the packages
yum install xxx
3) Edit the configuration files
connections to other services, such as the database and RabbitMQ
authentication settings
service-specific settings
4) Sync the database
which creates the required tables
5) Enable at boot and start the services
# systemctl enable openstack-xxx.service
# systemctl start openstack-xxxx.service
6) Create the user, service, and endpoints
openstack user create xxx
openstack service create xxx
openstack endpoint create xxx
7) Verify that the service works
Note: back up each configuration file before editing it. To save space, configuration edits are described throughout in the following form:
[DEFAULT]
...
admin_token = ADMIN_TOKEN
The above means: add admin_token = ADMIN_TOKEN to the [DEFAULT] section of the file.
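If you would rather not open an editor for every one of these edits, the openstack-utils package ships an openstack-config helper (a crudini wrapper) that applies exactly this kind of section/key/value change from the command line. A small sketch, assuming that package is available in your repositories; the keystone key here is only an example:
yum install openstack-utils
cp /etc/keystone/keystone.conf{,.bak}    # back up first
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN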
Installing the services
Identity service: Keystone
Configure the database
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
Install
# yum install openstack-keystone httpd mod_wsgi
Configuration file /etc/keystone/keystone.conf
Admin token
[DEFAULT]
...
admin_token = ADMIN_TOKEN
Database
[database]
...
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
Token provider
[token]
...
provider = fernet
Note: ADMIN_TOKEN above can be generated with openssl rand -hex 10, or you can fill in any custom string.
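For example, to generate a token and drop it into the file in one go (again using the optional openstack-config helper from the earlier note; editing the file by hand works just as well):
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN    # keep a copy; the same value is exported later as OS_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN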
Sync the database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys
For background on the token providers, see: http://blog.csdn.net/miss_yang_cloud/article/details/49633719
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Configure Apache
Edit /etc/httpd/conf/httpd.conf
and change the following:
ServerName controller
Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content:
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
Enable at boot and start
# systemctl enable httpd.service
# systemctl start httpd.service
Create the service entity and API endpoints
To save space, put the admin token and the endpoint URL into environment variables:
$ export OS_TOKEN=ADMIN_TOKEN
$ export OS_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
Create the service
$ openstack service create \
--name keystone --description "OpenStack Identity" identity
Create the endpoints; there are three: public, internal, and admin
$ openstack endpoint create --region RegionOne \
identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
identity admin http://controller:35357/v3
Create the domain, projects, users, and roles
Create the domain
openstack domain create --description "Default Domain" default
Create the admin project
openstack project create --domain default --description "Admin Project" admin
Create the admin user
openstack user create --domain default --password-prompt admin
When prompted, enter admin as the password.
Create the admin role
openstack role create admin
Add the admin role to the admin project and user
openstack role add --project admin --user admin admin
Create the service project
openstack project create --domain default \
--description "Service Project" service
Create the demo project
openstack project create --domain default \
--description "Demo Project" demo
Create the demo user
openstack user create --domain default --password-prompt demo
When prompted, enter demo as the password.
Create the user role
openstack role create user
Add the user role to the demo project and user
openstack role add --project demo --user demo user
Note: remember the passwords you set when creating these users.
Verify the admin user
unset OS_TOKEN OS_URL
openstack --os-auth-url http://controller:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:14:07.056119Z |
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws |
| project_id |343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
Verify the demo user
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:15:39.014479Z |
| id |gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
If you get output in this format, the verification passed.
Client environment scripts for the admin and demo users
Normally you would of course put the --os-xxx parameters into environment variables; to switch quickly between the admin and demo users, create the following environment scripts.
Create admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify admin once more
First source the script: . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id |343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
Image service: Glance
Configure the database
$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
Create the service, user, and role
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
Create the service and its endpoints: public, internal, and admin
$ openstack service create --name glance \
--description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
image public http://controller:9292
$ openstack endpoint create --region RegionOne \
image internal http://controller:9292
$ openstack endpoint create --region RegionOne \
image admin http://controller:9292
Install
# yum install openstack-glance
Configuration file /etc/glance/glance-api.conf
Database
[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Glance store
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Configuration file /etc/glance/glance-registry.conf
Database
[database]
...
connection = mysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
...
flavor = keystone
Sync the database
# su -s /bin/sh -c "glance-manage db_sync" glance
Enable at boot and start
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
Verification
$ . admin-openrc
Download the CirrOS image
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Create the image
$ openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
If the following command produces output like this, the image was created successfully:
$ openstack image list
+--------------------------------------+--------+
| ID | Name |
+--------------------------------------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 |cirros |
+--------------------------------------+--------+
Compute service: Nova (controller node)
Database
$ mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Create the service, user, and role
$ . admin-openrc
$ openstack user create --domain default \
--password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
--description "OpenStack Compute" compute
Create the endpoints: public, internal, and admin
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1/%\(tenant_id\)s
Install
# yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
Configuration file /etc/nova/nova.conf
Enabled APIs
[DEFAULT]
...
enabled_apis = osapi_compute,metadata
[api_database]
...
connection = mysql://nova:NOVA_DBPASS@controller/nova_api
Database
[database]
...
connection = mysql://nova:NOVA_DBPASS@controller/nova
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Bind IP
[DEFAULT]
...
my_ip = 10.0.0.101
Enable Neutron support
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC settings
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
Glance settings
[glance]
...
api_servers = http://controller:9292
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Sync the databases
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Enable at boot and start
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute service: Nova (compute node)
Install
# yum install openstack-nova-compute
Configuration file /etc/nova/nova.conf
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Bind IP
[DEFAULT]
...
my_ip = 10.0.0.102
Enable Neutron support
[DEFAULT]
...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC settings
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Glance settings
[glance]
...
api_servers = http://controller:9292
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
Virtualization driver (qemu here, because the compute node is itself a virtual machine; on physical hardware with VT-x/AMD-V you would normally use kvm)
[libvirt]
...
virt_type = qemu
Enable at boot and start
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verification
$ . admin-openrc
$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 2  | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
| 4  | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
Networking service: Neutron (controller node)
Database
$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Create the service, user, and role
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
--description"OpenStack Networking" network
Create the endpoints: public, internal, and admin
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
Here we configure a provider network (networking option 1).
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/neutron-controller-install-option1.html
Install
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configuration file /etc/neutron/neutron.conf
Database
[database]
...
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
Enable the ML2 core plug-in and disable additional service plug-ins
[DEFAULT]
...
core_plugin = ml2
service_plugins =
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
Configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
Type drivers
[ml2]
...
type_drivers = flat,vlan
Disable self-service (tenant) networks
[ml2]
...
tenant_network_types =
Enable the Linux bridge mechanism driver
[ml2]
...
mechanism_drivers = linuxbridge
Enable the port security extension driver
[ml2]
...
extension_drivers = port_security
Flat provider networks
[ml2_type_flat]
...
flat_networks = provider
Enable ipset
[securitygroup]
...
enable_ipset = True
Configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: PROVIDER_INTERFACE_NAME is the name of the underlying provider network interface, e.g. eth1.
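With the Vagrant layout above, the provider network most likely sits on the second private interface of each VM; the exact device name depends on your box (eth1/eth2 or enp0s8-style names), so check it before filling the value in. Assuming the 192.168.15.x addresses are the provider side:
ip addr | grep -B2 192.168.15
If the interface turns out to be, say, eth2, the mapping becomes physical_interface_mappings = provider:eth2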
Configuration file /etc/neutron/dhcp_agent.ini
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configuration file /etc/neutron/metadata_agent.ini
[DEFAULT]
...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Configuration file /etc/nova/nova.conf (still on the controller node)
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Create a symlink to the ML2 plug-in configuration
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart nova-api
systemctl restart openstack-nova-api.service
Enable at boot and start
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Note: the L3 agent below is only required for self-service networking (option 2); with the provider-only setup used here it can safely be skipped.
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Networking service: Neutron (compute node)
Install
yum install openstack-neutron-linuxbridge ebtables
Configuration file /etc/neutron/neutron.conf
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
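Note: the official guide also configures the Linux bridge agent on the compute node at this point; the settings mirror the controller side, with PROVIDER_INTERFACE_NAME again being this node's provider NIC:
Configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver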
Configuration file /etc/nova/nova.conf
[neutron]
...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Restart nova-compute
# systemctl restart openstack-nova-compute.service
Enable at boot and start
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verification
$ . admin-openrc
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
............
Dashboard: Horizon
Note: this must be installed on the controller node.
Install
# yum install openstack-dashboard
Configuration file /etc/openstack-dashboard/local_settings
For the complete file, see: http://download.csdn.net/detail/u013982161/9915265
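If that link is unavailable, these are roughly the values to check, following the official Mitaka guide (a sketch of the relevant settings, not the complete file; pick your own time zone):
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# provider networks only, so the layer-3 features are switched off
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"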
Restart the services
# systemctl restart httpd.service memcached.service
Verification
Browse to http://controller/dashboard
The /dashboard path must be included, otherwise the page will not load. When logging in, specify default as the domain.
Block Storage service: Cinder
Database
$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Create the service, user, and role
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
Note: two services are created here (v1 and v2).
$ openstack service create --name cinder \
--description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
Create the endpoints; each service gets a public, internal, and admin endpoint
$ openstack endpoint create --region RegionOne \
volume public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
volume internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
volume admin http://controller:8776/v1/%\(tenant_id\)s
Note: each service gets its own three endpoints.
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
Install
Controller node
# yum install openstack-cinder
Configuration file /etc/cinder/cinder.conf
Database
[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Bind IP (the controller's management address)
[DEFAULT]
...
my_ip = 10.0.0.101
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Sync the database
# su -s /bin/sh -c "cinder-manage db sync" cinder
Configuration file /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart nova-api
# systemctl restart openstack-nova-api.service
Enable at boot and start
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Storage node: here the extra disk is simply added to the compute node.
Note: an additional disk is required.
Install
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
Create the LVM physical volume and volume group
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Configuration file /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}
Note: a newly added disk usually shows up as sdb; if there are also sdc, sde, and so on, the filter becomes filter = [ "a/sdb/", "a/sdc/", "a/sde/", "r/.*/" ], and so on.
Install
# yum install openstack-cinder targetcli
Configuration file /etc/cinder/cinder.conf
Database
[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue
[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Keystone authentication
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
Bind IP (this node's management address)
[DEFAULT]
...
my_ip = 10.0.0.102
Add an [lvm] section with the following content
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
Enable the LVM back end
[DEFAULT]
...
enabled_backends = lvm
Configure the Glance API servers
[DEFAULT]
...
glance_api_servers = http://controller:9292
Concurrency lock path
[oslo_concurrency]
...
lock_path = /var/lib/cinder/tmp
Enable at boot and start
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
Verification
$ . admin-openrc
$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2014-10-18T01:30:54.000000 | None            |
| cinder-volume    | block1@lvm | nova | enabled | up    | 2014-10-18T01:30:57.000000 | None            |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
That's it; the basic installation is complete. In the dashboard, log in as admin, create a network first, and then launch an instance.
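If you prefer to do that last check from the command line, here is a minimal sketch; it assumes the provider network sits on the 192.168.15.0/24 segment from the Vagrant layout above, and the allocation pool, gateway, DNS server, and flavor are only illustrative, so adjust them to your own LAN:
$ . admin-openrc
$ neutron net-create --shared --provider:physical_network provider \
--provider:network_type flat provider
$ neutron subnet-create --name provider \
--allocation-pool start=192.168.15.200,end=192.168.15.220 \
--dns-nameserver 8.8.8.8 --gateway 192.168.15.1 provider 192.168.15.0/24
$ . demo-openrc
$ openstack flavor list    # pick a small flavor; create one first if the list is empty
$ openstack server create --flavor m1.tiny --image cirros \
--nic net-id=PROVIDER_NET_ID provider-instance
$ openstack server list
PROVIDER_NET_ID is the ID of the provider network, taken from openstack network list.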
Afterword: installing the whole stack by hand like this is admittedly a bit much, and yum is still doing the heavy lifting here, but it is worth going through manually at least once; after that, use scripts or a deployment tool. All this copying and pasting has left my eyes blurry.
The other components will get their own article. One thing worth stressing: the official documentation is still the best documentation.
Source
http://youerning.blog.51cto.com/10513771/1769358
Parts of the installation also referenced
http://www.cnblogs.com/kevingrace/p/5707003.html