OpenStack Grizzly-g3 Single-Node Installation on Ubuntu 12.04

Network Environment
A dedicated network node in GRE mode needs at least three NICs. Since everything here runs on a single node and there is no multi-agent quantum setup, two NICs are enough for OpenStack itself (a third, eth2, is only used to download packages):
1. Management network: eth0 192.16.0.254/16, used for MySQL, AMQP and the APIs
2. External network: eth1 192.168.137.154/24, bridged as br-ex
3. Network used to download packages: eth2 192.168.137.155/24




NIC Configuration
eth1 will carry quantum's external network. Its IP address is not written into the config file yet; a br-ex stanza will be added to the file later when configuring OVS.
Edit the interfaces file as follows:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.16.0.254
netmask 255.255.0.0
#If your network assigns addresses automatically, configure eth0 with DHCP instead:
#auto eth0
#iface eth0 inet dhcp
auto eth1
iface eth1 inet manual
Then restart networking:
# /etc/init.d/networking restart



Add the Grizzly repository and update the packages
# cat > /etc/apt/sources.list.d/grizzly.list << _GEEK_
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-proposed/grizzly main
_GEEK_
# apt-get install ubuntu-cloud-keyring python-software-properties software-properties-common python-keyring
# apt-get update
# apt-get upgrade
# apt-get dist-upgrade


Install MySQL
# apt-get install python-mysqldb mysql-server
Use sed to change the bind address in /etc/mysql/my.cnf from localhost (127.0.0.1) to all interfaces (0.0.0.0), and add skip-name-resolve to stop MySQL from resolving hostnames, which avoids connection errors and slow remote connections.
Then restart the mysql service.
# sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
# sed -i '44 i skip-name-resolve' /etc/mysql/my.cnf
# /etc/init.d/mysql restart
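Optionally confirm the change took effect; netstat should show mysqld bound to 0.0.0.0:3306:
# netstat -lntp | grep 3306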






Install RabbitMQ
# apt-get install rabbitmq-server
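Optionally check that the broker is up; the services configured below use the default guest/guest account:
# rabbitmqctl status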
Install and Configure Keystone
# apt-get install keystone
Remove keystone's default sqlite db file
# rm -f /var/lib/keystone/keystone.db
Create the keystone database
Create the keystone database in mysql and grant the keystone user access:
# mysql -uroot -pmysql
mysql> create database keystone;
mysql> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';
mysql> flush privileges; quit;
Edit /etc/keystone/keystone.conf
# vi /etc/keystone/keystone.conf
admin_token = ADMIN
debug = True
verbose = True
[sql]
connection = mysql://keystone:keystone@192.16.0.254/keystone # this line must be under [sql]
[signing]
token_format = UUID
Start the keystone service:
# /etc/init.d/keystone restart
Sync the keystone tables into the db:
# keystone-manage db_sync
Import data with a script
Create the users, roles, tenants, services and endpoints.
Download the script:
# wget http://download.longgeek.com/openstack/grizzly/keystone.sh
Edit the variables in the script:
ADMIN_PASSWORD=${ADMIN_PASSWORD:-password} # password for the admin tenant
SERVICE_PASSWORD=${SERVICE_PASSWORD:-password} # password for nova, glance, cinder, quantum, swift
export SERVICE_TOKEN="ADMIN" # token
export SERVICE_ENDPOINT="http://192.16.0.254:35357/v2.0"
SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-service} # the service tenant, holding the nova, glance, cinder, quantum, swift service users
KEYSTONE_REGION=RegionOne
KEYSTONE_IP="192.16.0.254"
#KEYSTONE_WLAN_IP="192.16.0.254"
SWIFT_IP="192.16.0.254"
#SWIFT_WLAN_IP="192.16.0.254"
COMPUTE_IP=$KEYSTONE_IP
EC2_IP=$KEYSTONE_IP
GLANCE_IP=$KEYSTONE_IP
VOLUME_IP=$KEYSTONE_IP
QUANTUM_IP=$KEYSTONE_IP
Run the script:
# sh keystone.sh
Set environment variables
These variables correspond to the settings in keystone.sh:
# cat > /root/export.sh << _GEEK_
export OS_TENANT_NAME=admin # if this is set to service, the other services will fail to authenticate
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://192.16.0.254:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://192.16.0.254:35357/v2.0/
_GEEK_
# echo 'source /root/export.sh' >> /root/.bashrc
# source /root/export.sh
Verify keystone
# keystone user-list
# keystone role-list
# keystone tenant-list
# keystone endpoint-list

Troubleshooting Keystone
1. Check that ports 5000 and 35357 are listening (see the example after this list)
2. Check /var/log/keystone/keystone.log for error messages
3. If keystone.sh fails, check the variables in the script, then rebuild the db and rerun it:
# mysql -uroot -pmysql
mysql> drop database keystone;
mysql> create database keystone; quit;
# keystone-manage db_sync
# sh keystone.sh
4. If verification fails, check the log first, then make sure the environment variables are set correctly
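For item 1, one quick way to check both ports and watch the log at the same time:
# netstat -lntp | grep -E '5000|35357'
# tail -f /var/log/keystone/keystone.log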




Install and Configure Glance
Install glance
# apt-get install glance
Remove the glance sqlite file:
# rm -f /var/lib/glance/glance.sqlite
Create the glance database
# mysql -uroot -pmysql
mysql> create database glance;
mysql> grant all on glance.* to 'glance'@'%' identified by 'glance';
mysql> flush privileges;
mysql> quit;
Edit the glance config files
# vi /etc/glance/glance-api.conf
Change the options below and leave the rest at their defaults.
verbose = True
debug = True
sql_connection = mysql://glance:glance@192.16.0.254/glance
workers = 4
registry_host = 192.16.0.254
notifier_strategy = rabbit
rabbit_host = 192.16.0.254
rabbit_userid = guest
rabbit_password = guest
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-api-paste.ini
flavor = keystone

# vi /etc/glance/glance-registry.conf
Change the options below and leave the rest at their defaults.
verbose = True
debug = True
sql_connection = mysql://glance:glance@192.16.0.254/glance
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = password
[paste_deploy]
config_file = /etc/glance/glance-registry-paste.ini
flavor = keystone

Start the glance services:
# /etc/init.d/glance-api restart
# /etc/init.d/glance-registry restart
Sync to the db
# glance-manage version_control 0
# glance-manage db_sync
Check glance; the result should be empty, with nothing listed
# glance image-list

Upload an image
Download the Cirros image for testing; it is only about 10 MB:
# wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
# glance image-create --name='cirros' --public --container-format=ovf --disk-format=qcow2 < ./cirros-0.3.0-x86_64-disk.img
Added new image with ID: xxxxxxxxxxx
The Cirros image allows login with a username and password as well as with an SSH key. user: cirros password: cubswin:)
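Optionally inspect the uploaded image; image-show should display its status, size and checksum:
# glance image-show cirros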
Troubleshooting Glance
1. Make sure the config files are correct and ports 9191 and 9292 are listening
2. Check the two log files under /var/log/glance/
3. Make sure OS_TENANT_NAME=admin is set in the environment, otherwise you will get a 401 error
4. The format of the uploaded image must match the format given on the command line




Install Open vSwitch
# apt-get install openvswitch-datapath-source
# module-assistant auto-install openvswitch-datapath
# apt-get install openvswitch-switch openvswitch-brcompat
Enable ovs-brcompatd at startup:
# sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch
Start openvswitch-switch:
# /etc/init.d/openvswitch-switch restart
* ovs-brcompatd is not running # brcompatd did not start
* ovs-vswitchd is not running
* ovsdb-server is not running
* Inserting openvswitch module
* /etc/openvswitch/conf.db does not exist
* Creating empty database /etc/openvswitch/conf.db
* Starting ovsdb-server
* Configuring Open vSwitch system IDs
* Starting ovs-vswitchd
* Enabling gre with iptables
Restart again, until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running:
# /etc/init.d/openvswitch-switch restart
# lsmod | grep brcompat
brcompat 13512 0 
openvswitch 84038 7 brcompat
If it still will not start, use:
# /etc/init.d/openvswitch-switch force-reload-kmod
Add the bridges
Add the external network bridge br-ex
Use openvswitch to create the br-ex bridge and add eth1 to it:
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1


After the steps above, eth1 no longer carries an address; set the IPs by hand:
# ifconfig eth1 0
# ifconfig br-ex 192.168.137.154/24
# route add default gw 192.168.137.2 dev br-ex
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
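Optionally verify that the host still has outside connectivity through br-ex:
# ping -c 3 192.168.137.2
# ping -c 3 8.8.8.8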
Then persist the settings in the interfaces file:
# vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.16.0.254
netmask 255.255.0.0
auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
down ifconfig $IFACE down
auto br-ex
iface br-ex inet static
address 192.168.137.154
netmask 255.255.255.0
gateway 192.168.137.2
dns-nameservers 8.8.8.8

# /etc/init.d/networking restart
Restarting the network may produce:
Failed to bring up br-ex.
br-ex may end up with an IP address but no gateway or DNS; configure those by hand, or reboot the machine. After a reboot everything comes up normally.
Add the integration bridge br-int:
# ovs-vsctl add-br br-int
Check the network
# ovs-vsctl list-br
br-ex
br-int
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
Bridge br-int
Port br-int
Interface br-int
type: internal
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
ovs_version: "1.4.0+build0"

Install quantum
# apt-get install quantum-server python-cliff python-pyparsing python-quantumclient
Install the openvswitch plugin for OVS support:
# apt-get install quantum-plugin-openvswitch


Create the Quantum DB
# mysql -uroot -pmysql
mysql> create database quantum;
mysql> grant all on quantum.* to 'quantum'@'%' identified by 'quantum';
mysql> flush privileges; quit;
Configure /etc/quantum/quantum.conf
# vi /etc/quantum/quantum.conf
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
debug = True
verbose = True
state_path = /var/lib/quantum
lock_path = $state_path/lock
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2
api_paste_config = /etc/quantum/api-paste.ini
control_exchange = quantum
rabbit_host = 192.16.0.254
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
notification_driver = quantum.openstack.common.notifier.rpc_notifier
default_notification_level = INFO
notification_topics = notifications
[QUOTAS]
[DEFAULT_SERVICETYPE]
[SECURITYGROUP]
[AGENT]
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = password
signing_dir = /var/lib/quantum/keystone-signing
Configure the Open vSwitch plugin
# vi /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini
[DATABASE]
sql_connection = mysql://quantum:quantum@192.16.0.254/quantum
reconnect_interval = 2
[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
local_ip = 10.0.0.1
integration_bridge = br-int
tunnel_bridge = br-tun
[AGENT]
polling_interval = 2
[SECURITYGROUP]
Start the quantum service
# /etc/init.d/quantum-server restart
Install the OVS agent
# apt-get install quantum-plugin-openvswitch-agent
Before starting the ovs agent, make sure local_ip is set in ovs_quantum_plugin.ini and the br-int bridge has been created.
# /etc/init.d/quantum-plugin-openvswitch-agent restart
After the ovs agent starts, it automatically creates the br-tun bridge based on the config file:
# ovs-vsctl list-br
br-ex
br-int
br-tun
# ovs-vsctl show
1a8d2081-4ba4-4cad-8020-ccac5772836a
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port "eth1"
Interface "eth1"
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "1.4.0+build0"


Install quantum-dhcp-agent
# apt-get install quantum-dhcp-agent
Configure quantum-dhcp-agent:
# vi /etc/quantum/dhcp_agent.ini
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://192.16.0.254:35357/v2.0
dhcp_agent_manager = quantum.agent.dhcp_agent.DhcpAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
state_path = /var/lib/quantum
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
Start the service:
# /etc/init.d/quantum-dhcp-agent restart
Install the L3 agent
# apt-get install quantum-l3-agent
Configure the L3 agent:
# vi /etc/quantum/l3_agent.ini 
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
debug = True
verbose = True
use_namespaces = True
external_network_bridge = br-ex
signing_dir = /var/cache/quantum
admin_tenant_name = service
admin_user = quantum
admin_password = password
auth_url = http://192.16.0.254:35357/v2.0
l3_agent_manager = quantum.agent.l3_agent.L3NATAgentWithStateReport
root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
Start the L3 agent:
# /etc/init.d/quantum-l3-agent restart
Configure the metadata agent
# vi /etc/quantum/metadata_agent.ini
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
debug = True
auth_url = http://192.16.0.254:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = password
state_path = /var/lib/quantum
nova_metadata_ip = 192.16.0.254
nova_metadata_port = 8775
Start the metadata agent:
# /etc/init.d/quantum-metadata-agent restart
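Grizzly adds agent management to quantum; if the extension is enabled, you can optionally confirm that the agents on this host have registered and report alive:
# quantum agent-list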
Troubleshooting Quantum
1. All config files are correct and port 9696 is listening
2. Check all the logs under /var/log/quantum/
3. br-ex and br-int must be created in advance




Install Cinder
# apt-get install cinder-api cinder-common cinder-scheduler cinder-volume python-cinderclient
Create the DB
# mysql -uroot -pmysql
mysql> create database cinder;
mysql> grant all on cinder.* to 'cinder'@'%' identified by 'cinder';
mysql> flush privileges; quit;
Create an LVM volume group named cinder-volumes. There are two ways: create a primary partition on a physical disk, or simulate one with a file. Pick either.
Method 1: create a normal partition. Using sdb, create a single primary partition spanning the whole disk:
# fdisk /dev/sdb
n
p
1
Enter
Enter
t
8e
w
# partx -a /dev/sdb
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 0 0 wz--n- 150.00g 150.00g
localhost 1 2 0 wz--n- 279.12g 12.00m

Method 2: file-backed
# apt-get install iscsitarget open-iscsi iscsitarget-dkms 
Configure the iscsi service
# sed -i 's/false/true/g' /etc/default/iscsitarget
Start the services
# service iscsitarget start
# service open-iscsi start
Create the volume group
# dd if=/dev/zero of=cinder-volumes bs=1 count=0 seek=2G
# losetup /dev/loop2 cinder-volumes
# fdisk /dev/loop2
# Type in the following:
n
p
1
ENTER
ENTER
t
8e
w
Create the physical volume and the volume group:
# pvcreate /dev/loop2
# vgcreate cinder-volumes /dev/loop2
# vgs
VG #PV #LV #SN Attr VSize VFree
cinder-volumes 1 0 0 wz--n- 2.00g 2.00g
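Note that the losetup mapping from method 2 does not survive a reboot, so the cinder-volumes group disappears after a restart. A common workaround (a sketch; adjust the path to wherever the cinder-volumes file was actually created) is to re-create the mapping at boot by adding this line to /etc/rc.local before its exit 0:
losetup /dev/loop2 /path/to/cinder-volumes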
Edit cinder.conf
# vi /etc/cinder/cinder.conf
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
# LOG/STATE
verbose = True
debug = True
iscsi_helper = tgtadm
auth_strategy = keystone
volume_group = cinder-volumes
volume_name_template = volume-%s
state_path = /var/lib/cinder
volumes_dir = /var/lib/cinder/volumes
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
# RPC
rabbit_host = 192.16.0.254
rabbit_password = guest
rpc_backend = cinder.openstack.common.rpc.impl_kombu
# DATABASE
sql_connection = mysql://cinder:cinder@192.16.0.254/cinder
# API
osapi_volume_extension = cinder.api.contrib.standard_extensions
Edit api-paste.ini
# vi /etc/cinder/api-paste.ini
Change the [filter:authtoken] section at the end of the file:
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.16.0.254
service_port = 5000
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = password
signing_dir = /var/lib/cinder
Sync and start the services
Sync to the db:
# cinder-manage db sync
2013-03-11 13:41:57.885 30326 DEBUG cinder.utils [-] backend <module 'cinder.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.pyc'> __get_backend /usr/lib/python2.7/dist-packages/cinder/utils.py:561
Start the services:
# for serv in api scheduler volume
do
/etc/init.d/cinder-$serv restart
done
# /etc/init.d/tgt restart
Check
# cinder list
Troubleshooting Cinder
1. The services are running and port 8776 is listening
2. Check the log files under /var/log/cinder
3. The volume group named by volume_group = cinder-volumes must exist
4. The tgt service must be running
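An optional end-to-end smoke test: create a 1 GB volume, confirm it reaches the available status, then delete it (--display-name is the flag the Grizzly-era cinderclient uses):
# cinder create --display-name test01 1
# cinder list
# cinder delete test01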




Check that KVM works and install libvirtd
# apt-get install cpu-checker
# kvm-ok
If KVM acceleration is unavailable, check whether the CPU supports virtualization.
# apt-get install -y kvm libvirt-bin pm-utils
Edit /etc/libvirt/qemu.conf:
# vi /etc/libvirt/qemu.conf
Change the following:
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc", "/dev/hpet","/dev/net/tun"
]

Update /etc/libvirt/libvirtd.conf
# vi /etc/libvirt/libvirtd.conf
Change the following:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"

Edit /etc/init/libvirt-bin.conf
# vi /etc/init/libvirt-bin.conf
Change the following:
env libvirtd_opts="-d -l"
Edit /etc/default/libvirt-bin
# vi /etc/default/libvirt-bin 
Change the following:
libvirtd_opts="-d -l"
Restart the libvirt service
# service libvirt-bin restart
or
# /etc/init.d/libvirt-bin restart
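With the -l option libvirtd also listens on TCP port 16509; optionally verify both the listener and that virsh can connect:
# netstat -lntp | grep 16509
# virsh -c qemu:///system list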






Install the Nova controller
# apt-get install nova-api nova-novncproxy novnc nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler
# apt-get install nova-compute nova-conductor nova-compute-kvm
Create the database
# mysql -uroot -pmysql
mysql> create database nova;
mysql> grant all on nova.* to 'nova'@'%' identified by 'nova';
mysql> flush privileges; quit;
Configure nova
# vi /etc/nova/nova.conf
Change the options below, keep the rest, and add any option that is missing from the file:
[DEFAULT]
# LOGS/STATE
debug = True
verbose = True
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lock/nova
rootwrap_config = /etc/nova/rootwrap.conf
dhcpbridge = /usr/bin/nova-dhcpbridge
# SCHEDULER
compute_scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
## VOLUMES
volume_api_class = nova.volume.cinder.API
# DATABASE
sql_connection = mysql://nova:nova@192.16.0.254/nova
# COMPUTE
libvirt_type = kvm
compute_driver = libvirt.LibvirtDriver
instance_name_template = instance-%08x
api_paste_config = /etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host = True
# APIS
osapi_compute_extension = nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host = 192.16.0.254
s3_host = 192.16.0.254
# RABBITMQ
rabbit_host = 192.16.0.254
rabbit_password = guest
# GLANCE
image_service = nova.image.glance.GlanceImageService
glance_api_servers = 192.16.0.254:9292
# NETWORK
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://192.16.0.254:9696
quantum_auth_strategy = keystone
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = password
quantum_admin_auth_url = http://192.16.0.254:35357/v2.0
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
# NOVNC CONSOLE
novncproxy_base_url = http://192.168.137.154:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address = 192.16.0.254
vncserver_listen = 0.0.0.0
# AUTHENTICATION
auth_strategy = keystone
[keystone_authtoken]
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova
Configure nova-compute for OVS support
# vi /etc/nova/nova-compute.conf
Add:
libvirt_ovs_bridge=br-int
libvirt_vif_type=ethernet
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
libvirt_use_virtio_for_bridges=True
Configure api-paste.ini
Edit [filter:authtoken]:
# vi /etc/nova/api-paste.ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.16.0.254
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = password
signing_dir = /tmp/keystone-signing-nova

Start the services
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
/etc/init.d/nova-$serv restart
done


Sync the database, then rerun the service restart loop (!for repeats the for loop above)
# nova-manage db sync
# !for
Check the services
A smiley :) means the service is healthy; if the state shows XX, check that service's log under /var/log/nova/:
# nova-manage service list 2> /dev/null
Binary Host Zone Status State Updated_At
nova-cert localhost internal enabled :) 2013-03-11 02:56:21
nova-scheduler localhost internal enabled :) 2013-03-11 02:56:22
nova-consoleauth localhost internal enabled :) 2013-03-11 02:56:22
nova-conductor localhost internal enabled :) 2013-03-11 02:56:22
nova-compute localhost nova enabled :) 2013-03-11 02:56:23
Security group rules
Add ping (ICMP) and the ssh port to the default security group:
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
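Optionally list the rules to confirm they were added:
# nova secgroup-list-rules default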
Troubleshooting Nova
1. The parameters in the config files match the actual environment
2. Check the matching service log under /var/log/nova/
3. Check the environment variables, the database connection, and that the ports are listening
4. Check that the hardware supports virtualization




Install Horizon
Install the OpenStack Dashboard, Apache and the WSGI module:
# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
Configure the Dashboard and change Memcache's listen address.
Remove the Ubuntu theme:
# mv /etc/openstack-dashboard/ubuntu_theme.py /etc/openstack-dashboard/ubuntu_theme.py.bak
# The following changes the listen addresses and can be skipped
# vim /etc/openstack-dashboard/local_settings.py
DEBUG = True
CACHE_BACKEND = 'memcached://192.16.0.254:11211/'
OPENSTACK_HOST = "192.16.0.254"
# sed -i 's/127.0.0.1/192.16.0.254/g' /etc/memcached.conf
Start Memcached and Apache:
# /etc/init.d/memcached restart
# /etc/init.d/apache2 restart
Open in a browser:
 http://192.16.0.254/horizon
User: admin
Password: password
Troubleshooting Horizon
1. If you cannot log in, check /var/log/apache2/error.log and /var/log/keystone/keystone.log.
A 401 error is usually a config file problem: the keystone credentials in the quantum, cinder or
nova config files are wrong.
2. A [Errno 111] Connection refused error on login usually means cinder-api or nova-api is not running.




Configure the External Network
Introduction
Once an administrator creates an External network, each tenant creates its own networks from there on.
Quantum terminology:
Network: either External or Internal; essentially a switch.
Subnet: which segment the network lives on, plus its gateway and DNS.
Router: a router, used to isolate the Internal networks the different tenants create.
Interface: the WAN and LAN ports on the router.
Port: a port on the switch; it records who uses it and the IP address information.
Create an External network
Note the router:external=True parameter; it marks this as an External network
# EXTERNAL_NET_ID=$(quantum net-create external_net1 --router:external=True | awk '/ id / {print $4}')
Create a Subnet
Create the subnet the floating IPs come from, with its DHCP service disabled:
# SUBNET_ID=$(quantum subnet-create external_net1 192.168.137.0/24 --name=external_subnet1 --gateway_ip 192.168.137.2 --enable_dhcp=False | awk '/ id / {print $4}')
Create an Internal network
# DEMO_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
To the demo tenant: here is a set of networks planned for your department
# INTERNAL_NET_ID=$(quantum net-create demo_net1 --tenant_id $DEMO_ID | awk '/ id / {print $4}')
Create a Subnet for the demo tenant
To the demo tenant: your segment is 10.1.1.0/24 with gateway 10.1.1.1, and DHCP is enabled by default
# DEMO_SUBNET_ID=$(quantum subnet-create demo_net1 10.1.1.0/24 --name=demo_subnet1 --gateway_ip 10.1.1.1 --tenant_id $DEMO_ID| awk '/ id / {print $4}')
Create a Router for the demo tenant
And hand the demo tenant a router:
# DEMO_ROUTER_ID=$(quantum router-create --tenant_id $DEMO_ID demo_router1 | awk '/ id / {print $4}')
Attach the Router to the Subnet
Apply the plan above to the new router: its LAN port gets 10.1.1.1 on the 10.1.1.0/24 segment:
# quantum router-interface-add $DEMO_ROUTER_ID $DEMO_SUBNET_ID
Give the Router an External IP
Plug the router's WAN port into the external network and assign it an IP address taken from the External network:
# quantum router-gateway-set $DEMO_ROUTER_ID $EXTERNAL_NET_ID
Create a VM for the demo tenant
Create a Port for the VM we are about to boot, specifying which Subnet and Network the VM uses and giving it a fixed IP address:
# quantum net-list
+--------------------------------------+---------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+---------------+-------------------------------------------------------+
| a6a8482f-d189-4ced-8b27-cf59331f6ce7 | external_net1 | 5c6a675e-98e5-435d-8bbc-d262accd2286 192.168.137.0/24 |
| afced220-0e54-46f5-925b-095bc90e6010 | demo_net1 | 7a748eba-f553-4323-ae92-3e453292c38d 10.1.1.0/24 |
+--------------------------------------+---------------+-------------------------------------------------------+
# DEMO_PORT_ID=$(quantum port-create --tenant-id=$DEMO_ID --fixed-ip subnet_id=$DEMO_SUBNET_ID,ip_address=10.1.1.11 demo_net1 | awk '/ id / {print $4}')


Boot the VM as the demo tenant:
# glance image-list
+--------------------------------------+--------+-------------+------------------+---------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------+-------------+------------------+---------+--------+
| 11f91e16-cbed-4f60-bfb4-b9ae96651547 | cirros | qcow2 | ovf | 9761280 | active |
+--------------------------------------+--------+-------------+------------------+---------+--------+
# nova --os-tenant-name demo boot --image cirros --flavor 2 --nic port-id=$DEMO_PORT_ID instance01




Add a floating IP to the demo tenant's VM
Once the VM is up, you will find you cannot ping 10.1.1.11; the router isolates it, so of course you cannot, although the VM itself can still reach the outside. (Because of the quantum version there is no DNS option, so the VM's DNS is wrong; fix the VM's resolv.conf yourself.) To ssh into the VM, add a Floating IP:
Look up the id of the demo tenant's VM
# nova --os_tenant_name=demo list
+--------------------------------------+------------+--------+---------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------+--------+---------------------+
| ea3c298c-9c30-4bf9-b6ca-37a719e01ff6 | instance01 | ACTIVE | demo_net1=10.1.1.11 |
+--------------------------------------+------------+--------+---------------------+

Problem: if the instance goes into the error state and its console log shows:
Unable to get log for instance "e4497913-5ada-4559-8d25-0cabea149acd".
and nova-compute.log contains errors like:
1) ERROR nova.compute.manager Instance failed to spawn
2) libvirtError: internal error Process exited while reading console log output: char device
then the libvirtd service lacks permissions; edit /etc/libvirt/qemu.conf:
# vi /etc/libvirt/qemu.conf
Change the following:
user = "root"
group = "root"
dynamic_ownership = 1
Then restart libvirt-bin and the nova services:
# /etc/init.d/libvirt-bin restart
# for serv in api cert scheduler consoleauth novncproxy conductor compute;
do
/etc/init.d/nova-$serv restart
done
Then boot the instance again.
Get the VM's port id
# quantum port-list -- --device_id ea3c298c-9c30-4bf9-b6ca-37a719e01ff6
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 701aafff-4e9e-48fe-891b-e576da51a2a0 | | fa:16:3e:16:2e:13 | {"subnet_id": "7a748eba-f553-4323-ae92-3e453292c38d", "ip_address": "10.1.1.11"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
Create a floating IP
Note the ids in the output:
# quantum --os_tenant_name=demo floatingip-create external_net1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.137.4 |
| floating_network_id | a6a8482f-d189-4ced-8b27-cf59331f6ce7 |
| id | 3e728bc3-88b9-40b2-a743-e83e84d9aa69 |
| port_id | |
| router_id | |
| tenant_id | 13b1f513484d40afaec3fd0b382cac09 |
+---------------------+--------------------------------------+


Associate the floating IP with the VM
# quantum --os_tenant_name=demo floatingip-associate 3e728bc3-88b9-40b2-a743-e83e84d9aa69 701aafff-4e9e-48fe-891b-e576da51a2a0
Associated floatingip 3e728bc3-88b9-40b2-a743-e83e84d9aa69
Show the floating IP we just associated
# quantum floatingip-show 3e728bc3-88b9-40b2-a743-e83e84d9aa69
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.1.1.11 |
| floating_ip_address | 192.168.137.4 |
| floating_network_id | a6a8482f-d189-4ced-8b27-cf59331f6ce7 |
| id | 3e728bc3-88b9-40b2-a743-e83e84d9aa69 |
| port_id | 701aafff-4e9e-48fe-891b-e576da51a2a0 |
| router_id | b9f116f5-8363-4fdf-b0d4-0ee239275395 |
| tenant_id | 13b1f513484d40afaec3fd0b382cac09 |
+---------------------+--------------------------------------+


Test:
# ping 192.168.137.4
PING 192.168.137.4 (192.168.137.4) 56(84) bytes of data.
64 bytes from 192.168.137.4: icmp_req=1 ttl=63 time=25.0 ms
64 bytes from 192.168.137.4: icmp_req=2 ttl=63 time=0.963 ms
64 bytes from 192.168.137.4: icmp_req=3 ttl=63 time=0.749 ms
64 bytes from 192.168.137.4: icmp_req=4 ttl=63 time=0.628 ms
64 bytes from 192.168.137.4: icmp_req=5 ttl=63 time=0.596 ms
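Since the default security group now allows ssh, you can also log in through the floating IP with the Cirros account mentioned earlier (password cubswin:)):
# ssh cirros@192.168.137.4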



References

http://longgeek.com/2013/03/11/openstack-grizzly-g3-for-ubuntu-12-04-all-in-one-installation/

https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/master/OpenStack_Grizzly_Install_Guide.rst/
