I. OpenStack Overview
1. OpenStack is an IaaS platform management solution.
2. OpenStack is an open-source project launched jointly by the hosting provider Rackspace and NASA. Its goal is to establish an open-source software standard so that any company or individual can build their own IaaS cloud environment, breaking the monopoly previously held by Amazon and a few other companies.
3. OpenStack is composed of a series of sub-projects:
Identity (Keystone)
Compute (Nova)
Image (Glance)
Block Storage (Cinder)
Network (Neutron)
Object Storage (Swift)
Dashboard (Horizon)
Metering (Ceilometer)
Orchestration (Heat)
II. Deploying OpenStack
- Manual deployment
- Fuel: an enterprise-grade automated deployment tool from Mirantis
- RDO: Red Hat's OpenStack deployment method
- Devstack: a tool for quickly setting up a development environment
- Openshit: a quick OpenStack deployment tool for Ubuntu 14.04
(1) Download the source code
$:git clone https://github.com/windworst/openshit.git
(2) Install & configure OpenStack
Edit the configuration file:
zc@linux-B7102T76V12HR-2T-N:~/openshit$ cat setting.conf
# This is OpenShit configure file
# All of settings in this file
# Update to Openstack component configure file
# node ip
SET_CONTROLLER_IP=127.0.0.1
SET_COMPUTE_IP=127.0.0.1
SET_INTERFACE_NAME=eth1
#vnc
SET_VNC_IP=$SET_CONTROLLER_IP
SET_VNC_CONNECT_IP=$SET_CONTROLLER_IP
# mysql configure
SET_MYSQL_IP=$SET_CONTROLLER_IP
SET_MYSQL_USER=root
SET_MYSQL_PASS=smartcore
SET_MYSQL_PORT=3306
# rabbit password
SET_RABBITMQ_IP=$SET_CONTROLLER_IP
SET_RABBITMQ_PASS=smartcore
# keystone service configure
SET_KEYSTONE_IP=$SET_COMPUTE_IP
SET_KEYSTONE_AUTH_URL=http://$SET_KEYSTONE_IP:35357/v2.0
SET_KEYSTONE_AUTH_URL_PUBLIC=http://$SET_KEYSTONE_IP:5000/v2.0
SET_OS_SERVICE_TOKEN=admin
SET_KEYSTONE_ADMIN_TENANT=admin
SET_KEYSTONE_ADMIN_ROLE=admin
SET_KEYSTONE_ADMIN=admin
SET_KEYSTONE_DBPASS=smartcore
SET_KEYSTONE_ADMIN_PASS=smartcore
# glance service configure
SET_GLANCE_IP=$SET_CONTROLLER_IP
SET_GLANCE_DBPASS=smartcore
SET_GLANCE_PASS=smartcore
# nova service configure
SET_NOVA_IP=$SET_CONTROLLER_IP
SET_NOVA_DBPASS=smartcore
SET_NOVA_PASS=smartcore
# dashboard service configure
SET_DASH_DBPASS=smartcore
# cinder service configure
SET_CINDER_IP=$SET_CONTROLLER_IP
SET_CINDER_DBPASS=smartcore
SET_CINDER_PASS=smartcore
# neutron service configure
SET_NEUTRON_IP=$SET_CONTROLLER_IP
SET_NEUTRON_DBPASS=smartcore
SET_NEUTRON_PASS=smartcore
SET_NEUTRON_METADATA_SECRET=smartcore
# heat service configure
#SET_HEAT_DBPASS=
#SET_HEAT_PASS=
# ceilometer service configure
#SET_CEILOMETER_DBPASS=
#SET_CEILOMETER_PASS=
# trove service configure
#SET_TROVE_DBPASS=
#SET_TROVE_PASS=
Install:
$:./openshit.sh --all install && ./openshit.sh --all config
Export the environment variables:
$:source admin-env.sh
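The contents of admin-env.sh are not shown in the repository excerpt above, but it presumably exports the standard OpenStack credential variables matching setting.conf, along these lines (a sketch; the exact variable values are assumptions based on the configuration file):

```shell
# Hypothetical admin-env.sh matching the passwords in setting.conf above.
# OS_AUTH_URL points at keystone's admin endpoint on the controller.
export OS_USERNAME=admin
export OS_PASSWORD=smartcore
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0
```

Once sourced, clients such as nova and glance pick these up from the environment and no longer prompt for credentials.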
(3) Clean & uninstall
$:./openshit.sh --all clean && ./openshit.sh --all uninstall
III. Basic OpenStack Usage
1. Managing OpenStack services
Check service status:
$:nova service-list
Manage all services:
$: ./openshit.sh --all stop
$: ./openshit.sh --all start
Manage an individual service:
$:service nova-cert status
$:service nova-cert start
$:service nova-cert stop
2. Publishing an image
$:glance image-create --name=cirros --disk-format=qcow2 --container-format=ovf --is-public=true < /home/cirros-0.3.0-x86_64-disk.img
$:glance image-list
3. Creating a VM instance from the command line
Create a network:
$:nova network-create vmnet --fixed-range-v4=10.0.0.0/24 --bridge-interface=br100
List networks:
$:nova-manage network list
List flavors:
$:nova flavor-list
Create a VM:
$:nova boot --flavor 1 --image cirros vm01
List VMs:
$:nova list
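nova boot returns before the instance is actually running, so a small polling loop is handy when scripting. This is a sketch: status_of wraps nova show, and is factored out as a separate function only so the loop itself can be exercised without a running cloud.

```shell
# Return the current status column (ACTIVE/BUILD/ERROR) of an instance.
status_of() { nova show "$1" 2>/dev/null | awk '$2 == "status" {print $4}'; }

# Poll until the instance is ACTIVE; fail early on ERROR, give up after $2 attempts.
wait_active() {
  vm=$1; tries=${2:-30}; i=0
  while [ "$i" -lt "$tries" ]; do
    s=$(status_of "$vm")
    [ "$s" = ACTIVE ] && return 0
    [ "$s" = ERROR ] && return 1
    i=$((i+1)); sleep 2
  done
  return 1
}
```

Usage: `wait_active vm01 && echo "vm01 is ready"`.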
IV. Integrating Ceph with OpenStack
1. Install the Ceph client
$:apt-get install python-ceph
$:apt-get install ceph-common
2. Create a pool and a Ceph user
Create a pool:
$:ceph osd pool create datastore 512
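The 512 here is the pool's placement-group count. A common rule of thumb (an assumption on my part, not stated in the original) is (number of OSDs x 100) / replica count, rounded up to the next power of two; 512 would correspond to, for example, a 12-OSD cluster with 3 replicas:

```shell
# Rule-of-thumb placement-group count: (osds * 100) / replicas,
# rounded up to the next power of two.
pg_count() {
  osds=$1; replicas=$2
  raw=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}
```

For instance, `pg_count 12 3` prints 512.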
Create a user:
$:ceph auth get-or-create client.icehouse mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=datastore'
$:ceph auth get-or-create client.icehouse | ssh XX.XX.XX.XX sudo tee /etc/ceph/ceph.client.icehouse.keyring
$:ssh XX.XX.XX.XX sudo chmod +r /etc/ceph/ceph.client.icehouse.keyring
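After these two commands, /etc/ceph/ceph.client.icehouse.keyring on the OpenStack node should contain an entry of this shape (the key shown is a placeholder, not a real value):

```ini
[client.icehouse]
    key = <base64 key printed by ceph auth get-or-create>
```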
Copy /etc/ceph/ceph.conf to the OpenStack node:
ssh xx.xx.xx.xx sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Configure glance-api.conf:
[DEFAULT]
default_store = rbd
[glance_store]
store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = icehouse
rbd_store_pool = datastore
show_image_direct_url = True
Restart the glance services:
$:service glance-api restart
$:service glance-registry restart
Upload an image to verify that Ceph works as the glance backend.
Refer to the glance usage above.
List the objects in the datastore pool:
$:rados --pool=datastore ls
Generate a UUID on the OpenStack compute node:
$:uuidgen
Create a temporary file:
$:vim secret.xml
<secret ephemeral='no' private='no'>
<uuid>{the UUID generated by uuidgen above}</uuid>
<usage type='ceph'>
<name>client.icehouse secret</name>
</usage>
</secret>
Create the libvirt secret from the secret.xml file:
$:virsh secret-define --file secret.xml
Set the secret's value so libvirt can authenticate to Ceph (the --secret argument is the same UUID as in secret.xml):
$:virsh secret-set-value --secret {the UUID generated above} --base64 $(cat client.icehouse.key) && rm client.icehouse.key secret.xml
List secrets:
$:virsh secret-list
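The client.icehouse.key file consumed by virsh secret-set-value is not created by any earlier step; it presumably holds the bare base64 key for client.icehouse, which can be fetched from the Ceph cluster along these lines (a sketch; the hostname argument is a placeholder):

```shell
# Extract just the base64 key for client.icehouse on the ceph admin node
# and ship it to the compute node, where virsh secret-set-value reads it.
fetch_client_key() {
  compute_node=$1
  ceph auth get-key client.icehouse | ssh "$compute_node" 'tee client.icehouse.key'
}
```

For example: `fetch_client_key xx.xx.xx.xx` (the same compute node used above).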
Ceph with cinder
Edit cinder.conf:
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = datastore
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_user = icehouse
glance_api_version = 2
rbd_secret_uuid = {the UUID generated above}
Restart the services:
$:service cinder-api restart
$:service cinder-scheduler restart
$:service cinder-volume restart
Verify that cinder uses Ceph.
Create a 1 GB volume named cephVolume:
$:cinder create --display-name cephVolume 1
Use cinder list and rados --pool=datastore ls to verify that cephVolume is actually stored in Ceph.
Ceph with nova
Edit nova.conf on the compute node:
[libvirt]
images_type = rbd
images_rbd_pool = datastore
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = icehouse
rbd_secret_uuid = {the UUID generated above}
inject_password = false
inject_key = false
inject_partition = -2
Restart nova:
$:./openshit.sh --all restart
Verify that nova uses Ceph.
Create a VM using the method above, then check:
nova list
rados --pool=datastore ls
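Besides watching the pool with rados, one can confirm an instance is rbd-backed by inspecting its libvirt definition: when images_type = rbd is in effect, the disk's <source> element uses the rbd protocol instead of a local file path. A sketch (the domain name is whatever virsh list shows, typically of the form instance-00000001):

```shell
# Succeed if the given libvirt domain has at least one rbd-backed disk.
uses_rbd() {
  virsh dumpxml "$1" | grep -q "protocol='rbd'"
}
```

Usage: `uses_rbd instance-00000001 && echo "disk lives in ceph"`.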