Reposted from: http://blog.lightcloud.cn/?p=91#comment-182
|---------+--------------------------------------|
| Author  | yz                                   |
|---------+--------------------------------------|
| Date    | 2012-4-6                             |
|---------+--------------------------------------|
| Version | v0.1                                 |
|---------+--------------------------------------|
| Website | http://bbs.linuxtone.org             |
|---------+--------------------------------------|
| Contact | Mail: atkisc@gmail.com QQ: 949587200 |
|---------+--------------------------------------|
Table of Contents
- 1 Test Environment
- 2 Architecture and Deployment
- 3 Control Node Installation
- 3.1 Prerequisites
- 3.2 NTP Time Service Installation
- 3.3 MYSQL Database Service Installation
- 3.4 RABBITMQ Message Queue Service Installation
- 3.5 PYTHON-NOVACLIENT Library Installation
- 3.6 KEYSTONE Identity Service Installation
- 3.7 PYTHON-KEYSTONECLIENT Library Installation
- 3.8 SWIFT Object Storage Service Installation
- 3.9 GLANCE Image Service Installation
- 3.10 NOVA Compute Service Installation
- 3.11 HORIZON Dashboard Installation
- 3.12 NOVNC Web Access Installation
- 3.13 KEYSTONE Identity Service Configuration
- 3.14 GLANCE Image Service Configuration
- 3.15 Create the GLANCE Service Database
- 3.16 NOVA Compute Service Configuration
- 3.17 SWIFT Object Storage Service Configuration
- 3.18 HORIZON Dashboard Configuration
- 3.19 NOVNC Web Access Configuration
- 4 Compute Node Installation
1 Test Environment
- Hardware:
DELL R710 (1 unit)
|------+----------------------------------------------------------------|
| CPU  | Intel(R) Xeon(R) CPU E5620 @ 2.40GHz * 2                       |
|------+----------------------------------------------------------------|
| MEM  | 48GB                                                           |
|------+----------------------------------------------------------------|
| DISK | 300GB                                                          |
|------+----------------------------------------------------------------|
| NIC  | Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet * 4 |
|------+----------------------------------------------------------------|
DELL R410 (1 unit)
|------+----------------------------------------------------------------|
| CPU  | Intel(R) Xeon(R) CPU E5606 @ 2.13GHz * 2                       |
|------+----------------------------------------------------------------|
| MEM  | 8GB                                                            |
|------+----------------------------------------------------------------|
| DISK | 1T * 4                                                         |
|------+----------------------------------------------------------------|
| NIC  | Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet * 4 |
|------+----------------------------------------------------------------|
- Operating system:
CentOS 6.2 x64
- OpenStack version:
Essex release (2012.1)
2 Architecture and Deployment
- Configuration information
|-------------------+---------------+-------------+--------------|
| Machine/Hostname  | External IP   | Internal IP | Used for     |
|-------------------+---------------+-------------+--------------|
| DELL R410/Control | 60.12.206.105 | 192.168.1.2 | Control Node |
| DELL R710/Compute | 60.12.206.99  | 192.168.1.3 | Compute Node |
|-------------------+---------------+-------------+--------------|
The instance network is 10.0.0.0/24, the floating IP is 60.12.206.110, the instance network is bridged onto the internal NIC, and the network mode is FlatDHCP.
On the control node, /dev/sda is the system disk, /dev/sdb is the nova-volume disk, and /dev/sdc and /dev/sdd are used for Swift storage.
- Server OS installation
1. CentOS 6.2 x64 is installed with the minimal installation option
2. eth0 is used for the external network
3. eth1 is used for the internal network
4. All services listen
3 Control Node Installation
3.1 Prerequisites
- Import third-party package repositories
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
- Install dependencies
yum -y install swig libvirt-python libvirt qemu-kvm python-pip gcc make gcc-c++ patch m4 python-devel libxml2-devel libxslt-devel libgsasl-devel openldap-devel sqlite-devel openssl-devel wget telnet gpxe-bootimgs gpxe-roms gpxe-roms-qemu dmidecode git scsi-target-utils kpartx socat vconfig aoetools
rpm -Uvh http://veillard.com/libvirt/6.3/x86_64/dnsmasq-utils-2.48-6.el6.x86_64.rpm
ln -sv /usr/bin/pip-python /usr/bin/pip
- Update the kernel. Check the current kernel version with uname -r; it should be:
2.6.32-220.el6.x86_64
yum -y install kernel kernel-devel
init 6
After the reboot, check the updated kernel version with uname -r; it should be:
2.6.32-220.7.1.el6.x86_64
3.2 NTP Time Service Installation
- Install the NTP time synchronization server
yum install -y ntp
- Edit /etc/ntp.conf and replace its contents with the following:
restrict default ignore
restrict 127.0.0.1
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server ntp.api.bz
server 127.127.1.0
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
- Start the ntpd service
/etc/init.d/ntpd start
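As an optional sanity check, you can confirm that ntpd is answering and has picked a time source:
ntpq -p   # lists the configured peers; an asterisk marks the currently selected source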
3.3 MYSQL Database Service Installation
- Install the MySQL database service
yum install -y mysql-server
- Bind the MySQL service to the internal NIC IP
sed -i '/symbolic-links=0/a bind-address = 192.168.1.2' /etc/my.cnf
- Start the MySQL database service
/etc/init.d/mysqld start
- Set the MySQL root password to openstack
mysqladmin -uroot password 'openstack'
history -c
- Verify the service started correctly: use netstat -ltunp to check that TCP port 3306 is listening.
If it did not start correctly, check /var/log/mysqld.log to troubleshoot.
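Optionally, you can also confirm that the new root password works before continuing:
mysql -uroot -popenstack -e 'show databases;'   # should list the default databases without an access error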
3.4 RABBITMQ Message Queue Service Installation
- Install the RabbitMQ message queue service
yum -y install rabbitmq-server
- Start the RabbitMQ message queue service
/etc/init.d/rabbitmq-server start
- Change the default password of the RabbitMQ guest user to openstack
rabbitmqctl change_password guest openstack
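If you want an extra check that the broker is reachable and the guest account is in place, the following rabbitmqctl commands are an optional addition:
rabbitmqctl status       # broker should be running and answering
rabbitmqctl list_users   # the guest user should be listed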
3.5 PYTHON-NOVACLIENT Library Installation
- Download the source package
wget https://launchpad.net/nova/essex/2012.1/+download/python-novaclient-2012.1.tar.gz -P /opt
- Install dependencies
yum -y install python-simplejson python-prettytable python-argparse python-nose1.1 python-httplib2 python-virtualenv MySQL-python
- Extract and install the python-novaclient library
cd /opt
tar xf python-novaclient-2012.1.tar.gz
cd python-novaclient-2012.1
python setup.py install
rm -f ../python-novaclient-2012.1.tar.gz
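A quick, optional way to confirm the client library landed in site-packages (assuming pip was linked as above):
pip freeze | grep -i novaclient   # should show the installed python-novaclient version
python -c 'import novaclient'     # should exit silently if the module imports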
3.6 KEYSTONE Identity Service Installation
- Download the source package
wget https://launchpad.net/keystone/essex/2012.1/+download/keystone-2012.1.tar.gz -P /opt
- Install dependencies
yum install -y python-eventlet python-greenlet python-paste python-passlib
pip install routes==1.12.3 lxml==2.3 pam==0.1.4 passlib sqlalchemy-migrate==0.7.2 PasteDeploy==1.5.0 SQLAlchemy==0.7.3 WebOb==1.0.8
- Extract and install the Keystone identity service
cd /opt
tar xf keystone-2012.1.tar.gz
cd keystone-2012.1
python setup.py install
rm -f ../keystone-2012.1.tar.gz
3.7 PYTHON-KEYSTONECLIENT Library Installation
- Download the source package
wget https://launchpad.net/keystone/essex/2012.1/+download/python-keystoneclient-2012.1.tar.gz -P /opt
- Extract and install the python-keystoneclient library
cd /opt
tar xf python-keystoneclient-2012.1.tar.gz
cd python-keystoneclient-2012.1
python setup.py install
rm -f ../python-keystoneclient-2012.1.tar.gz
3.8 SWIFT Object Storage Service Installation
- Download the source package
wget https://launchpad.net/swift/essex/1.4.8/+download/swift-1.4.8.tar.gz -P /opt
- Install dependencies
pip install configobj==4.7.1 netifaces==0.6
- Extract and install the Swift object storage service
cd /opt
tar xf swift-1.4.8.tar.gz
cd swift-1.4.8
python setup.py install
rm -f ../swift-1.4.8.tar.gz
3.9 GLANCE Image Service Installation
- Download the source package
wget https://launchpad.net/glance/essex/2012.1/+download/glance-2012.1.tar.gz -P /opt
- Install dependencies
yum install -y python-anyjson python-kombu m2crypto
pip install xattr==0.6.0 iso8601==0.1.4 pysendfile==2.0.0 pycrypto==2.3 wsgiref boto==2.1.1
- Extract and install the Glance image service
cd /opt
tar xf glance-2012.1.tar.gz
cd glance-2012.1
python setup.py install
rm -f ../glance-2012.1.tar.gz
3.10 NOVA Compute Service Installation
- Download the source package
wget https://launchpad.net/nova/essex/2012.1/+download/nova-2012.1.tar.gz -P /opt
- Install dependencies
yum install -y python-amqplib python-carrot python-lockfile python-gflags python-netaddr python-suds python-paramiko python-feedparser
pip install Cheetah==2.4.4 python-daemon==1.5.5 Babel==0.9.6
- Extract and install the Nova compute service
cd /opt
tar xf nova-2012.1.tar.gz
cd nova-2012.1
python setup.py install
rm -f ../nova-2012.1.tar.gz
3.11 HORIZON Dashboard Installation
- Download the source package
wget https://launchpad.net/horizon/essex/2012.1/+download/horizon-2012.1.tar.gz -P /opt
- Install dependencies
yum install -y python-django-nose python-dateutil python-cloudfiles python-django python-django-integration-apache httpd
- Extract and install the Horizon dashboard
cd /opt
tar xf horizon-2012.1.tar.gz
cd horizon-2012.1
python setup.py install
rm -f ../horizon-2012.1.tar.gz
3.12 NOVNC Web Access Installation
- Download the source package
git clone https://github.com/cloudbuilders/noVNC.git /opt/noVNC
- Install dependencies
yum install -y python-numdisplay
3.13 KEYSTONE Identity Service Configuration
- Create the Keystone service database
mysql -uroot -popenstack -e 'create database keystone'
- Create the directory that holds the Keystone configuration files
mkdir /etc/keystone
- Create the user that the Keystone service runs as
useradd -s /sbin/nologin -m -d /var/log/keystone keystone
- In /etc/keystone, create default_catalog.templates as the Keystone service endpoint catalog, with the following contents:
catalog.RegionOne.identity.publicURL = http://60.12.206.105:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://60.12.206.105:$(admin_port)s/v2.0
catalog.RegionOne.identity.internalURL = http://60.12.206.105:$(public_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service

catalog.RegionOne.compute.publicURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.adminURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.internalURL = http://60.12.206.105:8774/v2/$(tenant_id)s
catalog.RegionOne.compute.name = Compute Service

catalog.RegionOne.volume.publicURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.adminURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.internalURL = http://60.12.206.105:8776/v1/$(tenant_id)s
catalog.RegionOne.volume.name = Volume Service

catalog.RegionOne.ec2.publicURL = http://60.12.206.105:8773/services/Cloud
catalog.RegionOne.ec2.adminURL = http://60.12.206.105:8773/services/Admin
catalog.RegionOne.ec2.internalURL = http://60.12.206.105:8773/services/Cloud
catalog.RegionOne.ec2.name = EC2 Service

catalog.RegionOne.s3.publicURL = http://60.12.206.105:3333
catalog.RegionOne.s3.adminURL = http://60.12.206.105:3333
catalog.RegionOne.s3.internalURL = http://60.12.206.105:3333
catalog.RegionOne.s3.name = S3 Service

catalog.RegionOne.image.publicURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.adminURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.internalURL = http://60.12.206.105:9292/v1
catalog.RegionOne.image.name = Image Service

catalog.RegionOne.object_store.publicURL = http://60.12.206.105:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.adminURL = http://60.12.206.105:8080/
catalog.RegionOne.object_store.internalURL = http://60.12.206.105:8080/v1/AUTH_$(tenant_id)s
catalog.RegionOne.object_store.name = Swift Service
- In /etc/keystone, create policy.json as the Keystone policy file, with the following contents:
{ "admin_required": [["role:admin"], ["is_admin:1"]] }
- In /etc/keystone, create keystone.conf as the Keystone configuration file, with the following contents:
[DEFAULT] public_port = 5000 admin_port = 35357 admin_token = ADMIN compute_port = 8774 verbose = True debug = True log_file = /var/log/keystone/keystone.log use_syslog = False syslog_log_facility = LOG_LOCAL0 [sql] connection = mysql://root:openstack@localhost/keystone idle_timeout = 30 min_pool_size = 5 max_pool_size = 10 pool_timeout = 200 [identity] driver = keystone.identity.backends.sql.Identity [catalog] driver = keystone.catalog.backends.templated.TemplatedCatalog template_file = /etc/keystone/default_catalog.templates [token] driver = keystone.token.backends.kvs.Token [policy] driver = keystone.policy.backends.simple.SimpleMatch [ec2] driver = keystone.contrib.ec2.backends.sql.Ec2 [filter:debug] paste.filter_factory = keystone.common.wsgi:Debug.factory [filter:token_auth] paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory [filter:admin_token_auth] paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory [filter:xml_body] paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory [filter:json_body] paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory [filter:crud_extension] paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory [filter:ec2_extension] paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory [filter:s3_extension] paste.filter_factory = keystone.contrib.s3:S3Extension.factory [app:public_service] paste.app_factory = keystone.service:public_app_factory [app:admin_service] paste.app_factory = keystone.service:admin_app_factory [pipeline:public_api] pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension s3_extension public_service [pipeline:admin_api] pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension crud_extension admin_service [app:public_version_service] paste.app_factory = keystone.service:public_version_app_factory [app:admin_version_service] paste.app_factory = keystone.service:admin_version_app_factory [pipeline:public_version_api] pipeline = xml_body public_version_service [pipeline:admin_version_api] pipeline = xml_body admin_version_service [composite:main] use = egg:Paste#urlmap /v2.0 = public_api / = public_version_api [composite:admin] use = egg:Paste#urlmap /v2.0 = admin_api / = admin_version_api
- In /etc/init.d/, create an init script named keystone for the Keystone service, with the following contents:
#!/bin/sh # # keystone OpenStack Identity Service # # chkconfig: - 20 80 # description: keystone works provide apis to \ # * Authenticate users and provide a token \ # * Validate tokens ### END INIT INFO . /etc/rc.d/init.d/functions prog=keystone prog_exec=keystone-all exec="/usr/bin/$prog_exec" config="/etc/$prog/$prog.conf" pidfile="/var/run/$prog/$prog.pid" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/subsys/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user keystone --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Configure the init script:
chmod 755 /etc/init.d/keystone
mkdir /var/run/keystone
mkdir /var/lock/keystone
chown keystone:root /var/run/keystone
chown keystone:root /var/lock/keystone
- Start the Keystone service
/etc/init.d/keystone start
- Verify the service started correctly: use netstat -ltunp to check that TCP ports 5000 and 35357 are listening.
If it did not start correctly, check /var/log/keystone/keystone.log to troubleshoot.
- Create the Keystone data initialization script keystone_data.sh, with the following contents:
#!/bin/bash # Variables set before calling this script: # SERVICE_TOKEN - aka admin_token in keystone.conf # SERVICE_ENDPOINT - local Keystone admin endpoint # SERVICE_TENANT_NAME - name of tenant containing service accounts # ENABLED_SERVICES - stack.sh's list of services to start # DEVSTACK_DIR - Top-level DevStack directory ADMIN_PASSWORD=${ADMIN_PASSWORD:-secrete} SERVICE_PASSWORD=${SERVICE_PASSWORD:-service} export SERVICE_TOKEN=ADMIN export SERVICE_ENDPOINT=http://localhost:35357/v2.0 SERVICE_TENANT_NAME=${SERVICE_TENANT_NAME:-tenant} function get_id () { echo `$@ | awk '/ id / { print $4 }'` } # Tenants ADMIN_TENANT=$(get_id keystone tenant-create --name=admin) SERVICE_TENANT=$(get_id keystone tenant-create --name=$SERVICE_TENANT_NAME) DEMO_TENANT=$(get_id keystone tenant-create --name=demo) INVIS_TENANT=$(get_id keystone tenant-create --name=invisible_to_admin) # Users ADMIN_USER=$(get_id keystone user-create --name=admin \ --pass="$ADMIN_PASSWORD" \ --email=admin@example.com) DEMO_USER=$(get_id keystone user-create --name=demo \ --pass="$ADMIN_PASSWORD" \ --email=demo@example.com) # Roles ADMIN_ROLE=$(get_id keystone role-create --name=admin) KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name=KeystoneAdmin) KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name=KeystoneServiceAdmin) ANOTHER_ROLE=$(get_id keystone role-create --name=anotherrole) # Add Roles to Users in Tenants keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $ADMIN_TENANT keystone user-role-add --user $ADMIN_USER --role $ADMIN_ROLE --tenant_id $DEMO_TENANT keystone user-role-add --user $DEMO_USER --role $ANOTHER_ROLE --tenant_id $DEMO_TENANT # TODO(termie): these two might be dubious keystone user-role-add --user $ADMIN_USER --role $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT keystone user-role-add --user $ADMIN_USER --role $KEYSTONESERVICE_ROLE --tenant_id $ADMIN_TENANT # The Member role is used by Horizon and Swift so we need to keep it: MEMBER_ROLE=$(get_id keystone role-create --name=Member) keystone user-role-add --user $DEMO_USER --role $MEMBER_ROLE --tenant_id $DEMO_TENANT keystone user-role-add --user $DEMO_USER --role $MEMBER_ROLE --tenant_id $INVIS_TENANT NOVA_USER=$(get_id keystone user-create --name=nova \ --pass="$SERVICE_PASSWORD" \ --tenant_id $SERVICE_TENANT \ --email=nova@example.com) keystone user-role-add --tenant_id $SERVICE_TENANT \ --user $NOVA_USER \ --role $ADMIN_ROLE GLANCE_USER=$(get_id keystone user-create --name=glance \ --pass="$SERVICE_PASSWORD" \ --tenant_id $SERVICE_TENANT \ --email=glance@example.com) keystone user-role-add --tenant_id $SERVICE_TENANT \ --user $GLANCE_USER \ --role $ADMIN_ROLE SWIFT_USER=$(get_id keystone user-create --name=swift \ --pass="$SERVICE_PASSWORD" \ --tenant_id $SERVICE_TENANT \ --email=swift@example.com) keystone user-role-add --tenant_id $SERVICE_TENANT \ --user $SWIFT_USER \ --role $ADMIN_ROLE RESELLER_ROLE=$(get_id keystone role-create --name=ResellerAdmin) keystone user-role-add --tenant_id $SERVICE_TENANT \ --user $NOVA_USER \ --role $RESELLER_ROLE
- Create the Keystone database schema
keystone-manage db_sync
- Run the data initialization script
bash keystone_data.sh
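As an optional check that the script populated Keystone, you can reuse the same admin token and endpoint that keystone_data.sh exports and list what was created:
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://localhost:35357/v2.0
keystone tenant-list   # should show admin, tenant, demo and invisible_to_admin
keystone user-list     # should show admin, demo, nova, glance and swift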
3.14 GLANCE Image Service Configuration
3.15 Create the GLANCE Service Database
mysql -uroot -popenstack -e 'create database glance'
- Create the directory that holds the Glance configuration files
mkdir /etc/glance
- Create the user that the Glance services run as
useradd -s /sbin/nologin -m -d /var/log/glance glance
- In /etc/glance, create glance-api.conf as the GLANCE-API configuration file, with the following contents:
[DEFAULT] # Show more verbose log output (sets INFO log level output) verbose = True # Show debugging output in logs (sets DEBUG log level output) debug = True # Which backend store should Glance use by default is not specified # in a request to add a new image to Glance? Default: 'file' # Available choices are 'file', 'swift', and 's3' default_store = file # Address to bind the API server bind_host = 0.0.0.0 # Port the bind the API server to bind_port = 9292 # Address to find the registry server registry_host = 0.0.0.0 # Port the registry server is listening on registry_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/api.log # Send logs to syslog (/dev/log) instead of to file specified by `log_file` use_syslog = False # ============ Notification System Options ===================== # Notifications can be sent when images are create, updated or deleted. # There are three methods of sending notifications, logging (via the # log_file directive), rabbit (via a rabbitmq queue) or noop (no # notifications sent, the default) notifier_strategy = noop # Configuration options if sending notifications via rabbitmq (these are # the defaults) rabbit_host = localhost rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = guest rabbit_password = openstack rabbit_virtual_host = / rabbit_notification_topic = glance_notifications # ============ Filesystem Store Options ======================== # Directory that the Filesystem backend store # writes image data to filesystem_store_datadir = /var/lib/glance/images/ # ============ Swift Store Options ============================= # Address where the Swift authentication service lives swift_store_auth_address = 127.0.0.1:8080/v1.0/ # User to authenticate against the Swift authentication service swift_store_user = jdoe # Auth key for the user authenticating against the # Swift authentication service swift_store_key = a86850deb2742ec3cb41518e26aa2d89 # Container within the account that the account should use # for storing images in Swift swift_store_container = glance # Do we create the container if it does not exist? swift_store_create_container_on_put = False # What size, in MB, should Glance start chunking image files # and do a large object manifest in Swift? By default, this is # the maximum object size in Swift, which is 5GB swift_store_large_object_size = 5120 # When doing a large object manifest, what size, in MB, should # Glance write chunks to Swift? This amount of data is written # to a temporary disk buffer during the process of chunking # the image file, and the default is 200MB swift_store_large_object_chunk_size = 200 # Whether to use ServiceNET to communicate with the Swift storage servers. # (If you aren't RACKSPACE, leave this False!) # # To use ServiceNET for authentication, prefix hostname of # `swift_store_auth_address` with 'snet-'. # Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/ swift_enable_snet = False # ============ S3 Store Options ============================= # Address where the S3 authentication service lives s3_store_host = 127.0.0.1:8080/v1.0/ # User to authenticate against the S3 authentication service s3_store_access_key = <20-char AWS access key> # Auth key for the user authenticating against the # S3 authentication service s3_store_secret_key = <40-char AWS secret key> # Container within the account that the account should use # for storing images in S3. 
Note that S3 has a flat namespace, # so you need a unique bucket name for your glance images. An # easy way to do this is append your AWS access key to "glance". # S3 buckets in AWS *must* be lowercased, so remember to lowercase # your AWS access key if you use it in your bucket name below! s3_store_bucket = <lowercased 20-char aws access key>glance # Do we create the bucket if it does not exist? s3_store_create_bucket_on_put = False # ============ Image Cache Options ======================== image_cache_enabled = False # Directory that the Image Cache writes data to # Make sure this is also set in glance-pruner.conf image_cache_datadir = /var/lib/glance/image-cache/ # Number of seconds after which we should consider an incomplete image to be # stalled and eligible for reaping image_cache_stall_timeout = 86400 # ============ Delayed Delete Options ============================= # Turn on/off delayed delete delayed_delete = False # Delayed delete time in seconds scrub_time = 43200 # Directory that the scrubber will use to remind itself of what to delete # Make sure this is also set in glance-scrubber.conf scrubber_datadir = /var/lib/glance/scrubber
- In /etc/glance, create glance-api-paste.ini as the GLANCE-API authentication configuration file, with the following contents:
[pipeline:glance-api]
#pipeline = versionnegotiation context apiv1app
# NOTE: use the following pipeline for keystone
pipeline = versionnegotiation authtoken context apiv1app

# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
# pipeline = versionnegotiation authtoken auth-context imagecache apiv1app

[app:apiv1app]
paste.app_factory = glance.common.wsgi:app_factory
glance.app_factory = glance.api.v1.router:API

[filter:versionnegotiation]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter

[filter:cache]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.cache:CacheFilter

[filter:cachemanage]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter

[filter:context]
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.common.context:ContextMiddleware

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
service_protocol = http
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = glance
admin_password = service
- In /etc/glance, create glance-registry.conf as the GLANCE-REGISTRY configuration file, with the following contents:
[DEFAULT] # Show more verbose log output (sets INFO log level output) verbose = True # Show debugging output in logs (sets DEBUG log level output) debug = True # Address to bind the registry server bind_host = 0.0.0.0 # Port the bind the registry server to bind_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/registry.log # Where to store images filesystem_store_datadir = /var/lib/glance/images # Send logs to syslog (/dev/log) instead of to file specified by `log_file` use_syslog = False # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine sql_connection = mysql://root:openstack@localhost/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Limit the api to return `param_limit_max` items in a call to a container. If # a larger `limit` query param is provided, it will be reduced to this value. api_limit_max = 1000 # If a `limit` query param is not provided in an api request, it will # default to `limit_param_default` limit_param_default = 25
- In /etc/glance, create glance-registry-paste.ini as the GLANCE-REGISTRY authentication configuration file, with the following contents:
[pipeline:glance-registry]
#pipeline = context registryapp
# NOTE: use the following pipeline for keystone
pipeline = authtoken context registryapp

[app:registryapp]
paste.app_factory = glance.common.wsgi:app_factory
glance.app_factory = glance.registry.api.v1:API

[filter:context]
context_class = glance.registry.context.RequestContext
paste.filter_factory = glance.common.wsgi:filter_factory
glance.filter_factory = glance.common.context:ContextMiddleware

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
service_protocol = http
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = glance
admin_password = service
- In /etc/glance, create policy.json as the Glance policy file, with the following contents:
{ "default": [], "manage_image_cache": [["role:admin"]] }
- In /etc/init.d/, create an init script named glance-api for the GLANCE-API service, with the following contents:
#!/bin/sh # # glance-api OpenStack Image Service API server # # chkconfig: - 20 80 # description: OpenStack Image Service (code-named Glance) API server ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: Glance API server # Description: OpenStack Image Service (code-named Glance) API server ### END INIT INFO . /etc/rc.d/init.d/functions suffix=api prog=openstack-glance-$suffix exec="/usr/bin/glance-$suffix" config="/etc/glance/glance-$suffix.conf" pidfile="/var/run/glance/glance-$suffix.pid" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/subsys/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user glance --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- In /etc/init.d/, create an init script named glance-registry for the GLANCE-REGISTRY service, with the following contents:
#!/bin/sh # # glance-registry OpenStack Image Service Registry server # # chkconfig: - 20 80 # description: OpenStack Image Service (code-named Glance) Registry server ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: Glance Registry server # Description: OpenStack Image Service (code-named Glance) Registry server ### END INIT INFO . /etc/rc.d/init.d/functions suffix=registry prog=openstack-glance-$suffix exec="/usr/bin/glance-$suffix" config="/etc/glance/glance-$suffix.conf" pidfile="/var/run/glance/glance-$suffix.pid" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/subsys/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user glance --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Configure the init scripts:
chmod 755 /etc/init.d/glance-api
chmod 755 /etc/init.d/glance-registry
mkdir /var/run/glance
mkdir /var/lock/glance
mkdir /var/lib/glance
chown glance:root /var/run/glance
chown glance:root /var/lock/glance
chown glance:glance /var/lib/glance
- Start the GLANCE-API and GLANCE-REGISTRY services
/etc/init.d/glance-api start
/etc/init.d/glance-registry start
- Verify the services started correctly: use netstat -ltunp to check that TCP ports 9292 and 9191 are listening.
If they did not start correctly, check the files under /var/log/glance to troubleshoot.
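Once both services answer, a test upload is the quickest end-to-end check. This is a hedged example, assuming the Essex-era glance CLI and the admin credentials created by keystone_data.sh (password secrete, tenant admin); the CirrOS filename is only illustrative:
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://60.12.206.105:5000/v2.0
# Upload a test image (filename is hypothetical), then list registered images
glance add name="cirros-0.3.0" is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
glance index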
3.16 NOVA Compute Service Configuration
- Create the Nova service database
mysql -uroot -popenstack -e 'create database nova'
- Create the directory that holds the Nova configuration files
mkdir /etc/nova
- Create the user that the Nova services run as
useradd -s /sbin/nologin -m -d /var/log/nova nova
- In /etc/nova, create nova.conf as the Nova configuration file, with the following contents:
[DEFAULT]
debug=True
log-dir=/var/log/nova
pybasedir=/var/lib/nova
use_syslog=False
verbose=True
api_paste_config=/etc/nova/api-paste.ini
auth_strategy=keystone
bindir=/usr/bin
glance_host=$my_ip
glance_port=9292
glance_api_servers=$glance_host:$glance_port
image_service=nova.image.glance.GlanceImageService
lock_path=/var/lock/nova
my_ip=60.12.206.105
rabbit_host=localhost
rabbit_password=openstack
rabbit_port=5672
rabbit_userid=guest
root_helper=sudo
sql_connection=mysql://root:openstack@localhost/nova
keystone_ec2_url=http://$my_ip:5000/v2.0/ec2tokens
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
vnc_enabled=True
vnc_keymap=en-us
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
dhcpbridge=$bindir/nova-dhcpbridge
dhcpbridge_flagfile=/etc/nova/nova.conf
public_interface=eth0
routing_source_ip=$my_ip
fixed_range=10.0.0.0/24
flat_interface=eth1
flat_network_bridge=br1
floating_range=60.12.206.115
force_dhcp_release=True
target_host=$my_ip
target_port=3260
console_token_ttl=600
iscsi_helper=ietadm
iscsi_ip_address=$my_ip
iscsi_num_targets=100
iscsi_port=3260
volume_group=nova-volumes
ec2_listen=0.0.0.0
ec2_listen_port=8773
metadata_listen=0.0.0.0
metadata_listen_port=8775
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
osapi_volume_listen=0.0.0.0
osapi_volume_listen_port=8776
- In /etc/nova, create api-paste.ini as the Nova authentication configuration file, with the following contents:
############ # Metadata # ############ [composite:metadata] use = egg:Paste#urlmap /: metaversions /latest: meta /1.0: meta /2007-01-19: meta /2007-03-01: meta /2007-08-29: meta /2007-10-10: meta /2007-12-15: meta /2008-02-01: meta /2008-09-01: meta /2009-04-04: meta [pipeline:metaversions] pipeline = ec2faultwrap logrequest metaverapp [pipeline:meta] pipeline = ec2faultwrap logrequest metaapp [app:metaverapp] paste.app_factory = nova.api.metadata.handler:Versions.factory [app:metaapp] paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory ####### # EC2 # ####### [composite:ec2] use = egg:Paste#urlmap /services/Cloud: ec2cloud [composite:ec2cloud] use = call:nova.api.auth:pipeline_factory noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor deprecated = ec2faultwrap logrequest authenticate cloudrequest validator ec2executor keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor [filter:ec2faultwrap] paste.filter_factory = nova.api.ec2:FaultWrapper.factory [filter:logrequest] paste.filter_factory = nova.api.ec2:RequestLogging.factory [filter:ec2lockout] paste.filter_factory = nova.api.ec2:Lockout.factory [filter:totoken] paste.filter_factory = nova.api.ec2:EC2Token.factory [filter:ec2keystoneauth] paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory [filter:ec2noauth] paste.filter_factory = nova.api.ec2:NoAuth.factory [filter:authenticate] paste.filter_factory = nova.api.ec2:Authenticate.factory [filter:cloudrequest] controller = nova.api.ec2.cloud.CloudController paste.filter_factory = nova.api.ec2:Requestify.factory [filter:authorizer] paste.filter_factory = nova.api.ec2:Authorizer.factory [filter:validator] paste.filter_factory = nova.api.ec2:Validator.factory [app:ec2executor] paste.app_factory = nova.api.ec2:Executor.factory ############# # Openstack # ############# [composite:osapi_compute] use = call:nova.api.openstack.urlmap:urlmap_factory /: oscomputeversions /v1.1: openstack_compute_api_v2 /v2: openstack_compute_api_v2 [composite:osapi_volume] use = call:nova.api.openstack.urlmap:urlmap_factory /: osvolumeversions /v1: openstack_volume_api_v1 [composite:openstack_compute_api_v2] use = call:nova.api.auth:pipeline_factory noauth = faultwrap noauth ratelimit osapi_compute_app_v2 deprecated = faultwrap auth ratelimit osapi_compute_app_v2 keystone = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2 keystone_nolimit = faultwrap authtoken keystonecontext osapi_compute_app_v2 [composite:openstack_volume_api_v1] use = call:nova.api.auth:pipeline_factory noauth = faultwrap noauth ratelimit osapi_volume_app_v1 deprecated = faultwrap auth ratelimit osapi_volume_app_v1 keystone = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1 keystone_nolimit = faultwrap authtoken keystonecontext osapi_volume_app_v1 [filter:faultwrap] paste.filter_factory = nova.api.openstack:FaultWrapper.factory [filter:auth] paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory [filter:noauth] paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory [filter:ratelimit] paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory [app:osapi_compute_app_v2] paste.app_factory = nova.api.openstack.compute:APIRouter.factory [pipeline:oscomputeversions] pipeline = faultwrap oscomputeversionapp [app:osapi_volume_app_v1] paste.app_factory = nova.api.openstack.volume:APIRouter.factory [app:oscomputeversionapp] paste.app_factory = 
nova.api.openstack.compute.versions:Versions.factory [pipeline:osvolumeversions] pipeline = faultwrap osvolumeversionapp [app:osvolumeversionapp] paste.app_factory = nova.api.openstack.volume.versions:Versions.factory ########## # Shared # ########## [filter:keystonecontext] paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 60.12.206.105 service_port = 5000 auth_host = 60.12.206.105 auth_port = 35357 auth_protocol = http auth_uri = http://60.12.206.105:5000/ admin_tenant_name = tenant admin_user = nova admin_password = service
- In /etc/nova, create policy.json as the Nova policy file, with the following contents:
{ "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]], "default": [["rule:admin_or_owner"]], "compute:create": [], "compute:create:attach_network": [], "compute:create:attach_volume": [], "compute:get_all": [], "admin_api": [["role:admin"]], "compute_extension:accounts": [["rule:admin_api"]], "compute_extension:admin_actions": [["rule:admin_api"]], "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], "compute_extension:admin_actions:lock": [["rule:admin_api"]], "compute_extension:admin_actions:unlock": [["rule:admin_api"]], "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]], "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], "compute_extension:admin_actions:migrate": [["rule:admin_api"]], "compute_extension:aggregates": [["rule:admin_api"]], "compute_extension:certificates": [], "compute_extension:cloudpipe": [["rule:admin_api"]], "compute_extension:console_output": [], "compute_extension:consoles": [], "compute_extension:createserverext": [], "compute_extension:deferred_delete": [], "compute_extension:disk_config": [], "compute_extension:extended_server_attributes": [["rule:admin_api"]], "compute_extension:extended_status": [], "compute_extension:flavorextradata": [], "compute_extension:flavorextraspecs": [], "compute_extension:flavormanage": [["rule:admin_api"]], "compute_extension:floating_ip_dns": [], "compute_extension:floating_ip_pools": [], "compute_extension:floating_ips": [], "compute_extension:hosts": [["rule:admin_api"]], "compute_extension:keypairs": [], "compute_extension:multinic": [], "compute_extension:networks": [["rule:admin_api"]], "compute_extension:quotas": [], "compute_extension:rescue": [], "compute_extension:security_groups": [], "compute_extension:server_action_list": [["rule:admin_api"]], "compute_extension:server_diagnostics": [["rule:admin_api"]], "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], "compute_extension:users": [["rule:admin_api"]], "compute_extension:virtual_interfaces": [], "compute_extension:virtual_storage_arrays": [], "compute_extension:volumes": [], "compute_extension:volumetypes": [], "volume:create": [], "volume:get_all": [], "volume:get_volume_metadata": [], "volume:get_snapshot": [], "volume:get_all_snapshots": [], "network:get_all_networks": [], "network:get_network": [], "network:delete_network": [], "network:disassociate_network": [], "network:get_vifs_by_instance": [], "network:allocate_for_instance": [], "network:deallocate_for_instance": [], "network:validate_networks": [], "network:get_instance_uuids_by_ip_filter": [], "network:get_floating_ip": [], "network:get_floating_ip_pools": [], "network:get_floating_ip_by_address": [], "network:get_floating_ips_by_project": [], "network:get_floating_ips_by_fixed_address": [], "network:allocate_floating_ip": [], "network:deallocate_floating_ip": [], "network:associate_floating_ip": [], "network:disassociate_floating_ip": [], "network:get_fixed_ip": [], "network:add_fixed_ip_to_instance": [], "network:remove_fixed_ip_from_instance": [], "network:add_network_to_project": [], 
"network:get_instance_nw_info": [], "network:get_dns_domains": [], "network:add_dns_entry": [], "network:modify_dns_entry": [], "network:delete_dns_entry": [], "network:get_dns_entries_by_address": [], "network:get_dns_entries_by_name": [], "network:create_private_dns_domain": [], "network:create_public_dns_domain": [], "network:delete_dns_domain": [] }
- In /etc/init.d/, create an init script named nova-api for the NOVA-API service, with the following contents:
#!/bin/sh # # openstack-nova-api OpenStack Nova API Server # # chkconfig: - 20 80 # description: At the heart of the cloud framework is an API Server. \ # This API Server makes command and control of the \ # hypervisor, storage, and networking programmatically \ # available to users in realization of the definition \ # of cloud computing. ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova API Server # Description: At the heart of the cloud framework is an API Server. # This API Server makes command and control of the # hypervisor, storage, and networking programmatically # available to users in realization of the definition # of cloud computing. ### END INIT INFO . /etc/rc.d/init.d/functions suffix=api prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- In /etc/init.d/, create an init script named nova-network for the NOVA-NETWORK service, with the following contents:
#!/bin/sh # # openstack-nova-network OpenStack Nova Network Controller # # chkconfig: - 20 80 # description: The Network Controller manages the networking resources \ # on host machines. The API server dispatches commands \ # through the message queue, which are subsequently \ # processed by Network Controllers. \ # Specific operations include: \ # * Allocate Fixed IP Addresses \ # * Configuring VLANs for projects \ # * Configuring networks for compute nodes \ ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Network Controller # Description: The Network Controller manages the networking resources # on host machines. The API server dispatches commands # through the message queue, which are subsequently # processed by Network Controllers. # Specific operations include: # * Allocate Fixed IP Addresses # * Configuring VLANs for projects # * Configuring networks for compute nodes ### END INIT INFO . /etc/rc.d/init.d/functions suffix=network prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- In /etc/init.d/, create an init script named nova-objectstore for the NOVA-OBJECTSTORE service, with the following contents:
#!/bin/sh # # openstack-nova-objectstore OpenStack Nova Object Storage # # chkconfig: - 20 80 # description: Implementation of an S3-like storage server based on local files. ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Object Storage # Description: Implementation of an S3-like storage server based on local files. ### END INIT INFO . /etc/rc.d/init.d/functions suffix=objectstore prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- In /etc/init.d/, create an init script named nova-scheduler for the NOVA-SCHEDULER service, with the following contents:
#!/bin/sh # # openstack-nova-scheduler OpenStack Nova Scheduler # # chkconfig: - 20 80 # description: Determines which physical hardware to allocate to a virtual resource ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Scheduler # Description: Determines which physical hardware to allocate to a virtual resource ### END INIT INFO . /etc/rc.d/init.d/functions suffix=scheduler prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- In /etc/init.d/, create an init script named nova-volume for the NOVA-VOLUME service, with the following contents:
#!/bin/sh # # openstack-nova-volume OpenStack Nova Volume Worker # # chkconfig: - 20 80 # description: Volume Workers interact with iSCSI storage to manage \ # LVM-based instance volumes. Specific functions include: \ # * Create Volumes \ # * Delete Volumes \ # * Establish Compute volumes ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Volume Worker # Description: Volume Workers interact with iSCSI storage to manage # LVM-based instance volumes. Specific functions include: # * Create Volumes # * Delete Volumes # * Establish Compute volumes ### END INIT INFO . /etc/rc.d/init.d/functions suffix=volume prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Configure the init scripts:
chmod 755 /etc/init.d/nova-api
chmod 755 /etc/init.d/nova-network
chmod 755 /etc/init.d/nova-objectstore
chmod 755 /etc/init.d/nova-scheduler
chmod 755 /etc/init.d/nova-volume
mkdir /var/run/nova
mkdir -p /var/lib/nova/instances
mkdir /var/lock/nova
chown nova:root /var/run/nova
chown -R nova:nova /var/lib/nova
chown nova:root /var/lock/nova
- Configure sudo
Create a file named nova in /etc/sudoers.d/, with the following contents:
Defaults:nova !requiretty

Cmnd_Alias NOVACMDS = /bin/aoe-stat, \
    /bin/chmod, \
    /bin/chmod /var/lib/nova/tmp/*/root/.ssh, \
    /bin/chown, \
    /bin/chown /var/lib/nova/tmp/*/root/.ssh, \
    /bin/dd, \
    /bin/kill, \
    /bin/mkdir, \
    /bin/mount, \
    /bin/umount, \
    /sbin/aoe-discover, \
    /sbin/ifconfig, \
    /sbin/ip, \
    /sbin/ip6tables-restore, \
    /sbin/ip6tables-save, \
    /sbin/iptables, \
    /sbin/iptables-restore, \
    /sbin/iptables-save, \
    /sbin/iscsiadm, \
    /sbin/kpartx, \
    /sbin/losetup, \
    /sbin/lvcreate, \
    /sbin/lvdisplay, \
    /sbin/lvremove, \
    /sbin/pvcreate, \
    /sbin/route, \
    /sbin/tune2fs, \
    /sbin/vconfig, \
    /sbin/vgcreate, \
    /sbin/vgs, \
    /usr/bin/fusermount, \
    /usr/bin/guestmount, \
    /usr/bin/socat, \
    /bin/cat, \
    /usr/bin/tee, \
    /usr/bin/qemu-nbd, \
    /usr/bin/virsh, \
    /usr/sbin/brctl, \
    /usr/sbin/dnsmasq, \
    /usr/sbin/ietadm, \
    /usr/sbin/radvd, \
    /usr/sbin/tgtadm, \
    /usr/sbin/vblade-persist

nova ALL = (root) NOPASSWD: SETENV: NOVACMDS
chmod 0440 /etc/sudoers.d/nova
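A syntax error in a sudoers fragment can break sudo for every user, so it may be worth validating the new file; this is an optional extra step:
visudo -c -f /etc/sudoers.d/nova   # should report that the file parses cleanly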
- Configure the polkit policy. Create 50-nova.pkla in /etc/polkit-1/localauthority/50-local.d/, with the following contents:
[Allow nova libvirt management permissions]
Identity=unix-user:nova
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
- Create the Nova database schema
nova-manage db sync
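If the instance fixed network has not been registered yet, it can be created with nova-manage. This is only a hedged sketch using the values from nova.conf above (10.0.0.0/24, bridge br1 on eth1); verify the exact Essex flag names with nova-manage network create --help before running it:
nova-manage network create --label=private --fixed_range_v4=10.0.0.0/24 \
  --num_networks=1 --network_size=256 --bridge=br1 --bridge_interface=eth1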
- Install iscsitarget
wget http://sourceforge.net/projects/iscsitarget/files/iscsitarget/1.4.20.2/iscsitarget-1.4.20.2.tar.gz/download -P /opt
cd /opt
tar xf iscsitarget-1.4.20.2.tar.gz
cd iscsitarget-1.4.20.2
make
make install
/etc/init.d/iscsi-target start
Use netstat -ltnp to check that TCP port 3260 is listening.
- Create the nova-volumes volume group
fdisk /dev/sdb
n
p
1
(press Enter twice to accept the defaults)
t
83
w
mkfs.ext4 /dev/sdb1
vgcreate nova-volumes /dev/sdb1
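As an optional check, confirm that the volume group nova-volume expects (volume_group=nova-volumes in nova.conf) now exists:
vgs nova-volumes   # should list the nova-volumes volume group backed by /dev/sdb1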
- Start the Nova services
/etc/init.d/nova-api start
/etc/init.d/nova-network start
/etc/init.d/nova-objectstore start
/etc/init.d/nova-scheduler start
/etc/init.d/nova-volume start
- Verify the services started correctly: use netstat -ltunp to check that TCP ports 3333, 8773, 8774, 8775 and 8776 are listening.
If they did not start correctly, check the files under /var/log/nova to troubleshoot.
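Beyond the port check, nova-manage can report whether each Nova service has checked in; this is an optional extra check:
nova-manage service list   # nova-scheduler, nova-network and nova-volume should show state :-)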
3.17 SWIFT Object Storage Service Configuration
- Create the directory that holds the Swift configuration files
mkdir /etc/swift
- Create the user that the Swift services run as
useradd -s /sbin/nologin -m -d /var/log/swift swift
- Format and mount the disks
yum -y install xfsprogs
mkfs.xfs -f -i size=1024 /dev/sdc
mkfs.xfs -f -i size=1024 /dev/sdd
mkdir -p /swift/drivers/sd{c,d}
mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdc /swift/drivers/sdc
mount -t xfs -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdd /swift/drivers/sdd
echo -e '/dev/sdc\t/swift/drivers/sdc\txfs\tnoatime,nodiratime,nobarrier,logbufs=8\t0 0' >>/etc/fstab
echo -e '/dev/sdd\t/swift/drivers/sdd\txfs\tnoatime,nodiratime,nobarrier,logbufs=8\t0 0' >>/etc/fstab
- Swift rsync replication configuration
mkdir -p /swift/node/sd{c,d}
ln -sv /swift/drivers/sdc /swift/node/sdc
ln -sv /swift/drivers/sdd /swift/node/sdd
Create rsyncd.conf under /etc, with the following contents:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.1.2

[account5000]
max connections = 50
path = /swift/node/sdc
read only = false
lock file = /var/lock/account5000.lock

[account5001]
max connections = 50
path = /swift/node/sdd
read only = false
lock file = /var/lock/account5001.lock

[container4000]
max connections = 50
path = /swift/node/sdc
read only = false
lock file = /var/lock/container4000.lock

[container4001]
max connections = 50
path = /swift/node/sdd
read only = false
lock file = /var/lock/container4001.lock

[object3000]
max connections = 50
path = /swift/node/sdc
read only = false
lock file = /var/lock/object3000.lock

[object3001]
max connections = 50
path = /swift/node/sdd
read only = false
lock file = /var/lock/object3001.lock
yum -y install xinetd
sed -i '/disable/s#yes#no#g' /etc/xinetd.d/rsync
/etc/init.d/xinetd start
mkdir -p /etc/swift/{object,container,account}-server
Create swift.conf under /etc/swift, with the following contents:
[swift-hash]
swift_hash_path_suffix = changeme
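The suffix salts the hashing of object paths, must be identical on every node, and should never change afterwards. If you prefer not to keep the literal changeme, one common way to generate a random value (an optional step) is:
od -t x8 -N 8 -A n < /dev/random   # paste the output in place of "changeme" in swift.conf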
Create proxy-server.conf under /etc/swift, with the following contents:
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
workers = 8
log_name = swift
log_facility = LOG_LOCAL1
log_level = DEBUG

[pipeline:main]
pipeline = healthcheck cache swift3 s3token authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = Member,admin,SwiftOperator

# NOTE(chmou): s3token middleware is not updated yet to use only
# username and password.
[filter:s3token]
paste.filter_factory = keystone.middleware.s3_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_token = ADMIN
admin_token = ADMIN

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http://60.12.206.105:5000/
admin_tenant_name = tenant
admin_user = swift
admin_password = service

[filter:swift3]
use = egg:swift#swift3

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
Create sdc.conf and sdd.conf under /etc/swift/account-server, with the following contents:
--------------------sdc.conf--------------------
[DEFAULT]
devices = /swift/node/sdc
mount_check = false
bind_port = 5000
user = swift
log_facility = LOG_LOCAL0
swift_dir = /etc/swift

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
vm_test_mode = yes

[account-auditor]

[account-reaper]
--------------------sdd.conf--------------------
[DEFAULT]
devices = /swift/node/sdd
mount_check = false
bind_port = 5001
user = swift
log_facility = LOG_LOCAL0
swift_dir = /etc/swift

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]
vm_test_mode = yes

[account-auditor]

[account-reaper]
Create sdc.conf and sdd.conf under /etc/swift/container-server, with the following contents:
--------------------sdc.conf-------------------- [DEFAULT] devices = /swift/node/sdc mount_check = false bind_port = 4000 user = swift log_facility = LOG_LOCAL0 swift_dir = /etc/swift [pipeline:main] pipeline = container-server [app:container-server] use = egg:swift#container [container-replicator] vm_test_mode = yes [container-updater] [container-auditor] [container-sync] --------------------sdd.conf-------------------- [DEFAULT] devices = /swift/node/sdd mount_check = false bind_port = 4001 user = swift log_facility = LOG_LOCAL0 swift_dir = /etc/swift [pipeline:main] pipeline = container-server [app:container-server] use = egg:swift#container [container-replicator] vm_test_mode = yes [container-updater] [container-auditor] [container-sync]
Create sdc.conf and sdd.conf under /etc/swift/object-server, with the following contents:
--------------------sdc.conf-------------------- [DEFAULT] devices = /swift/node/sdc mount_check = false bind_port = 3000 user = swift log_facility = LOG_LOCAL0 swift_dir = /etc/swift [pipeline:main] pipeline = object-server [app:object-server] use = egg:swift#object [object-replicator] vm_test_mode = yes [object-updater] [object-auditor] [object-expirer] --------------------sdd.conf-------------------- [DEFAULT] devices = /swift/node/sdd mount_check = false bind_port = 3001 user = swift log_facility = LOG_LOCAL0 swift_dir = /etc/swift [pipeline:main] pipeline = object-server [app:object-server] use = egg:swift#object [object-replicator] vm_test_mode = yes [object-updater] [object-auditor] [object-expirer]
- Build the rings
cd /etc/swift
swift-ring-builder account.builder create 8 2 1
swift-ring-builder account.builder add z1-60.12.206.105:5000/sdc 1
swift-ring-builder account.builder add z2-60.12.206.105:5001/sdd 1
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder create 8 2 1
swift-ring-builder container.builder add z1-60.12.206.105:4000/sdc 1
swift-ring-builder container.builder add z2-60.12.206.105:4001/sdd 1
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder create 8 2 1
swift-ring-builder object.builder add z1-60.12.206.105:3000/sdc 1
swift-ring-builder object.builder add z2-60.12.206.105:3001/sdd 1
swift-ring-builder object.builder rebalance
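In the create calls above, 8 is the partition power (2^8 = 256 partitions), 2 is the replica count (one copy per device here), and 1 is the minimum number of hours between moves of any partition. To confirm the rings were built as intended, an optional check:
swift-ring-builder account.builder     # prints partition count, replicas and the two devices
swift-ring-builder container.builder
swift-ring-builder object.builder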
- Set permissions
chown -R swift:swift /swift
- Create init scripts for the Swift services. First create a functions file under /etc/swift with the following content:
. /etc/rc.d/init.d/functions swift_action() { retval=0 server="$1" call="swift_$2" if [[ -f "/etc/swift/$server-server.conf" ]]; then $call "$server" \ "/etc/swift/$server-server.conf" \ "/var/run/swift/$server-server.pid" [ $? -ne 0 ] && retval=1 elif [[ -d "/etc/swift/$server-server/" ]]; then declare -i count=0 for name in $( ls "/etc/swift/$server-server/" ); do $call "$server" \ "/etc/swift/$server-server/$name" \ "/var/run/swift/$server-server/$count.pid" [ $? -ne 0 ] && retval=1 count=$count+1 done fi return $retval } swift_start() { name="$1" long_name="$name-server" conf_file="$2" pid_file="$3" ulimit -n ${SWIFT_MAX_FILES-32768} echo -n "Starting swift-$long_name: " daemon --pidfile $pid_file \ "/usr/bin/swift-$long_name $conf_file &>/var/log/swift-startup.log & echo \$! > $pid_file" retval=$? echo return $retval } swift_stop() { name="$1" long_name="$name-server" conf_name="$2" pid_file="$3" echo -n "Stopping swift-$long_name: " killproc -p $pid_file -d ${SWIFT_STOP_DELAY-15} $long_name retval=$? echo return $retval } swift_status() { name="$1" long_name="$name-server" conf_name="$2" pid_file="$3" status -p $pid_file $long_name }
Create a swift-proxy file under /etc/init.d with the following content:
#!/bin/sh ### BEGIN INIT INFO # Provides: openstack-swift-proxy # Required-Start: $remote_fs # Required-Stop: $remote_fs # Default-Stop: 0 1 6 # Short-Description: Swift proxy server # Description: Account server for swift. ### END INIT INFO # openstack-swift-proxy: swift proxy server # # chkconfig: - 20 80 # description: Proxy server for swift. . /etc/rc.d/init.d/functions . /etc/swift/functions name="proxy" [ -e "/etc/sysconfig/openstack-swift-$name" ] && . "/etc/sysconfig/openstack-swift-$name" lockfile="/var/lock/swift/openstack-swift-proxy" start() { swift_action "$name" start retval=$? [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { swift_action "$name" stop retval=$? [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } rh_status() { swift_action "$name" status } rh_status_q() { rh_status &> /dev/null } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart}" exit 2 esac exit $?
Create a swift-account file under /etc/init.d with the following content:
#!/bin/sh ### BEGIN INIT INFO # Provides: openstack-swift-account # Required-Start: $remote_fs # Required-Stop: $remote_fs # Default-Stop: 0 1 6 # Short-Description: Swift account server # Description: Account server for swift. ### END INIT INFO # openstack-swift-account: swift account server # # chkconfig: - 20 80 # description: Account server for swift. . /etc/rc.d/init.d/functions . /etc/swift/functions name="account" [ -e "/etc/sysconfig/openstack-swift-$name" ] && . "/etc/sysconfig/openstack-swift-$name" lockfile="/var/lock/swift/openstack-swift-account" start() { swift_action "$name" start retval=$? [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { swift_action "$name" stop retval=$? [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } rh_status() { swift_action "$name" status } rh_status_q() { rh_status &> /dev/null } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart}" exit 2 esac exit $?
- Create a swift-container file under /etc/init.d with the following content:
#!/bin/sh ### BEGIN INIT INFO # Provides: openstack-swift-container # Required-Start: $remote_fs # Required-Stop: $remote_fs # Default-Stop: 0 1 6 # Short-Description: Swift container server # Description: Container server for swift. ### END INIT INFO # openstack-swift-container: swift container server # # chkconfig: - 20 80 # description: Container server for swift. . /etc/rc.d/init.d/functions . /etc/swift/functions name="container" [ -e "/etc/sysconfig/openstack-swift-$name" ] && . "/etc/sysconfig/openstack-swift-$name" lockfile="/var/lock/swift/openstack-swift-container" start() { swift_action "$name" start retval=$? [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { swift_action "$name" stop retval=$? [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } rh_status() { swift_action "$name" status } rh_status_q() { rh_status &> /dev/null } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart}" exit 2 esac exit $?
- Create a swift-object file under /etc/init.d with the following content:
#!/bin/sh ### BEGIN INIT INFO # Provides: openstack-swift-object # Required-Start: $remote_fs # Required-Stop: $remote_fs # Default-Stop: 0 1 6 # Short-Description: Swift object server # Description: Object server for swift. ### END INIT INFO # openstack-swift-object: swift object server # # chkconfig: - 20 80 # description: Object server for swift. . /etc/rc.d/init.d/functions . /etc/swift/functions name="object" [ -e "/etc/sysconfig/openstack-swift-$name" ] && . "/etc/sysconfig/openstack-swift-$name" lockfile="/var/lock/swift/openstack-swift-object" start() { swift_action "$name" start retval=$? [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { swift_action "$name" stop retval=$? [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } rh_status() { swift_action "$name" status } rh_status_q() { rh_status &> /dev/null } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart}" exit 2 esac exit $?
- Configure the init scripts:
chmod 755 /etc/init.d/swift-proxy
chmod 755 /etc/init.d/swift-account
chmod 755 /etc/init.d/swift-container
chmod 755 /etc/init.d/swift-object
mkdir /var/run/swift
mkdir /var/lock/swift
chown swift:root /var/run/swift
chown swift:root /var/lock/swift
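Since each of the scripts above carries a chkconfig header, they can optionally be registered to start at boot. This step is not in the original procedure; adjust as needed:

for svc in swift-proxy swift-account swift-container swift-object; do
    chkconfig --add $svc      # register the init script
    chkconfig $svc on         # enable it for the default runlevels
done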
- Start the services
/etc/init.d/swift-proxy start
/etc/init.d/swift-account start
/etc/init.d/swift-container start
/etc/init.d/swift-object start
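A quick way to confirm the services came up is to hit the healthcheck middleware enabled in the proxy pipeline above and to check the listening ports configured earlier. Note that sdc.conf binds the account server to port 5000, which is also Keystone's public API port on this host; if both listen on all interfaces, one of them will fail to bind, so it is worth checking which process owns each port:

# The proxy healthcheck should answer "OK"
curl http://60.12.206.105:8080/healthcheck
# Account (5000/5001), container (4000/4001) and object (3000/3001) listeners
netstat -ltunp | egrep ':(5000|5001|4000|4001|3000|3001) '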
3.18 HORIZON Dashboard Configuration
- Create the HORIZON (dashboard) database
mysql -uroot -popenstack -e 'create database dashboard'
- Configure Apache: edit /etc/httpd/conf.d/django.conf and change it to the following:
WSGISocketPrefix /tmp/horizon
<VirtualHost *:80>
    WSGIScriptAlias / /opt/horizon-2012.1/openstack_dashboard/wsgi/django.wsgi
    WSGIDaemonProcess horizon user=apache group=apache processes=3 threads=10
    SetEnv APACHE_RUN_USER apache
    SetEnv APACHE_RUN_GROUP apache
    WSGIProcessGroup horizon
    DocumentRoot /opt/horizon-2012.1/.blackhole/
    Alias /media /opt/horizon-2012.1/openstack_dashboard/static
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /opt/horizon-2012.1/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
    ErrorLog /var/log/httpd/error.log
    LogLevel warn
    CustomLog /var/log/httpd/access.log combined
</VirtualHost>
mkdir /opt/horizon-2012.1/.blackhole
- Configure HORIZON: create a local_settings.py file under /opt/horizon-2012.1/openstack_dashboard/local with the following content:
import os DEBUG = False TEMPLATE_DEBUG = DEBUG PROD = False USE_SSL = False LOCAL_PATH = os.path.dirname(os.path.abspath(__file__)) # FIXME: We need to change this to mysql, instead of sqlite. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'dashboard', 'USER': 'root', 'PASSWORD': 'openstack', 'HOST': 'localhost', 'PORT': '3306', }, } # The default values for these two settings seem to cause issues with apache CACHE_BACKEND = 'dummy://' SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db' # Send email to the console by default EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' # Or send them to /dev/null #EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend' # django-mailer uses a different settings attribute MAILER_EMAIL_BACKEND = EMAIL_BACKEND # Configure these for your outgoing email host # EMAIL_HOST = 'smtp.my-company.com' # EMAIL_PORT = 25 # EMAIL_HOST_USER = 'djangomail' # EMAIL_HOST_PASSWORD = 'top-secret!' HORIZON_CONFIG = { 'dashboards': ('nova', 'syspanel', 'settings',), 'default_dashboard': 'nova', 'user_home': 'openstack_dashboard.views.user_home', } # TODO(tres): Remove these once Keystone has an API to identify auth backend. OPENSTACK_KEYSTONE_BACKEND = { 'name': 'native', 'can_edit_user': True } OPENSTACK_HOST = "60.12.206.105" OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST # FIXME: this is only needed until keystone fixes its GET /tenants call # so that it doesn't return everything for admins OPENSTACK_KEYSTONE_ADMIN_URL = "http://%s:35357/v2.0" % OPENSTACK_HOST OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member" SWIFT_PAGINATE_LIMIT = 100 # If you have external monitoring links, eg: # EXTERNAL_MONITORING = [ # ['Nagios','http://foo.com'], # ['Ganglia','http://bar.com'], # ] #LOGGING = { # 'version': 1, # # When set to True this will disable all logging except # # for loggers specified in this configuration dictionary. Note that # # if nothing is specified here and disable_existing_loggers is True, # # django.db.backends will still log unless it is disabled explicitly. # 'disable_existing_loggers': False, # 'handlers': { # 'null': { # 'level': 'DEBUG', # 'class': 'django.utils.log.NullHandler', # }, # 'console': { # # Set the level to "DEBUG" for verbose output logging. # 'level': 'INFO', # 'class': 'logging.StreamHandler', # }, # }, # 'loggers': { # # Logging from django.db.backends is VERY verbose, send to null # # by default. # 'django.db.backends': { # 'handlers': ['null'], # 'propagate': False, # }, # 'horizon': { # 'handlers': ['console'], # 'propagate': False, # }, # 'novaclient': { # 'handlers': ['console'], # 'propagate': False, # }, # 'keystoneclient': { # 'handlers': ['console'], # 'propagate': False, # }, # 'nose.plugins.manager': { # 'handlers': ['console'], # 'propagate': False, # } # } #}
- Serve Django's static files: edit /opt/horizon-2012.1/openstack_dashboard/urls.py and append the following at the bottom:
if settings.DEBUG is False:
    urlpatterns += patterns('',
        url(r'^static/(?P<path>.*)$', 'django.views.static.serve', {
            'document_root': settings.STATIC_ROOT,
        }),
    )
Run python /opt/horizon-2012.1/manage.py collectstatic and answer yes when prompted.
- Create the HORIZON database schema
python /opt/horizon-2012.1/manage.py syncdb
- Restart the Apache service
chown -R apache:apache /opt/horizon-2012.1
/etc/init.d/httpd restart
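After Apache restarts, the dashboard should answer on port 80; a simple check (it normally replies with a redirect to the login page):

curl -I http://60.12.206.105/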
3.19 NOVNC Web Access Configuration
- Edit /etc/nova/nova.conf and add the following:
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
vnc_enabled=true
vnc_keymap=en-us
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
- Add the noVNC executable to the command search path
ln -sv /opt/noVNC/utils/nova-novncproxy /usr/bin/
- Create a noVNC init script named nova-novncproxy under /etc/init.d/ with the following content:
#!/bin/sh # # openstack-nova-novncproxy OpenStack Nova VNC Web Console # # chkconfig: - 20 80 # description: OpenStack Nova VNC Web Console ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova VNC Web Console # Description: OpenStack Nova VNC Web Console ### END INIT INFO . /etc/rc.d/init.d/functions suffix=novncproxy prog=openstack-nova-$suffix web="/opt/noVNC" exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --web $web --logfile=$logfile --daemon &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Create a console-auth init script named nova-consoleauth under /etc/init.d/ with the following content:
#!/bin/sh # # openstack-nova-novncproxy OpenStack Nova Console Auth # # chkconfig: - 20 80 # description: OpenStack Nova Console Auth ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Console Auth # Description: OpenStack Nova Console Auth ### END INIT INFO . /etc/rc.d/init.d/functions suffix=consoleauth prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Configure the init scripts
chmod 755 /etc/init.d/nova-novncproxy
chmod 755 /etc/init.d/nova-consoleauth
- Start the NOVA-NOVNCPROXY and NOVA-CONSOLEAUTH services
/etc/init.d/nova-novncproxy start
/etc/init.d/nova-consoleauth start
- Verify the services started: use netstat -ltunp to check that TCP port 6080 is listening
If they did not start properly, check the log files under /var/log/nova to troubleshoot
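Besides the netstat check, the proxy can be probed directly; vnc_auto.html is the page referenced by novncproxy_base_url above, so a successful fetch confirms the noVNC web root is being served:

curl -I http://60.12.206.105:6080/vnc_auto.html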
4 Compute Node Installation
4.1 Prerequisites
- Import third-party package repositories
rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
rpm -Uvh http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
- Install dependency packages
yum -y install swig libvirt-python libvirt qemu-kvm gcc make gcc-c++ patch m4 python-devel libxml2-devel libxslt-devel libgsasl-devel openldap-devel sqlite-devel openssl-devel wget telnet gpxe-bootimgs gpxe-roms gpxe-roms-qemu dmidecode git scsi-target-utils kpartx socat vconfig aoetools python-pip
rpm -Uvh http://veillard.com/libvirt/6.3/x86_64/dnsmasq-utils-2.48-6.el6.x86_64.rpm
ln -sv /usr/bin/pip-python /usr/bin/pip
4.2 NTP Time Synchronization Configuration
- Install the NTP client package
yum -y install ntpdate
- Synchronize time with the control node and write it to the hardware clock
ntpdate 192.168.1.2
hwclock -w
- Add the time synchronization to cron
echo '30 8 * * * root /usr/sbin/ntpdate 192.168.1.2;hwclock -w' >>/etc/crontab
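To verify that the compute node can reach the control node's NTP server without touching the clock, ntpdate's query mode can be used:

ntpdate -q 192.168.1.2    # prints the offset and delay only, does not set the clock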
4.3 PYTHON-NOVACLIENT Library Installation
- Download the source tarball
wget https://launchpad.net/nova/essex/2012.1/+download/python-novaclient-2012.1.tar.gz -P /opt
- Install dependency packages
yum -y install python-simplejson python-prettytable python-argparse python-nose1.1 python-httplib2 python-virtualenv MySQL-python
- Extract and install the PYTHON-NOVACLIENT library
cd /opt
tar xf python-novaclient-2012.1.tar.gz
cd python-novaclient-2012.1
python setup.py install
rm -f ../python-novaclient-2012.1.tar.gz
4.4 GLANCE Image Storage Service Installation
- Download the source tarball
wget https://launchpad.net/glance/essex/2012.1/+download/glance-2012.1.tar.gz -P /opt
- Install dependency packages
yum install -y python-anyjson python-kombu
pip install xattr==0.6.0 iso8601==0.1.4 pysendfile==2.0.0 pycrypto==2.3 wsgiref boto==2.1.1
- Extract and install the GLANCE image storage service
cd /opt
tar xf glance-2012.1.tar.gz
cd glance-2012.1
python setup.py install
rm -f ../glance-2012.1.tar.gz
4.5 NOVA Compute Service Installation
- Download the source tarball
wget https://launchpad.net/nova/essex/2012.1/+download/nova-2012.1.tar.gz -P /opt
- Install dependency packages
yum install -y python-amqplib python-carrot python-lockfile python-gflags python-netaddr python-suds python-paramiko python-feedparser python-eventlet python-greenlet python-paste
pip install Cheetah==2.4.4 python-daemon==1.5.5 Babel==0.9.6 routes==1.12.3 lxml==2.3 PasteDeploy==1.5.0 sqlalchemy-migrate==0.7.2 SQLAlchemy==0.7.3 WebOb==1.0.8
- Extract and install the NOVA compute service
cd /opt
tar xf nova-2012.1.tar.gz
cd nova-2012.1
python setup.py install
rm -f ../nova-2012.1.tar.gz
4.6 NOVA Compute Service Configuration
- Create the NOVA configuration directory
mkdir /etc/nova
- Create the user that runs the NOVA services
useradd -s /sbin/nologin -m -d /var/log/nova nova
- Create nova.conf under /etc/nova as the NOVA configuration file, with the following content:
[DEFAULT]
auth_strategy=keystone
bindir=/usr/bin
pybasedir=/var/lib/nova
connection_type=libvirt
debug=True
lock_path=/var/lock/nova
log-dir=/var/log/nova
my_ip=60.12.206.105
ec2_host=$my_ip
ec2_path=/services/Cloud
ec2_port=8773
ec2_scheme=http
glance_host=$my_ip
glance_port=9292
glance_api_servers=$glance_host:$glance_port
image_service=nova.image.glance.GlanceImageService
metadata_host=$my_ip
metadata_port=8775
network_manager=nova.network.manager.FlatDHCPManager
osapi_path=/v1.1/
osapi_scheme=http
rabbit_host=192.168.1.2
rabbit_password=openstack
rabbit_port=5672
rabbit_userid=guest
root_helper=sudo
s3_host=$my_ip
s3_port=3333
sql_connection=mysql://root:openstack@192.168.1.2/nova
state_path=/var/lib/nova
use_ipv6=False
use-syslog=False
verbose=True
ec2_listen=$my_ip
ec2_listen_port=8773
metadata_listen=$my_ip
metadata_listen_port=8775
osapi_compute_listen=$my_ip
osapi_compute_listen_port=8774
osapi_volume_listen=$my_ip
osapi_volume_listen_port=8776
keystone_ec2_url=http://$my_ip:5000/v2.0/ec2tokens
dhcpbridge=$bindir/nova-dhcpbridge
dhcpbridge_flagfile=/etc/nova/nova.conf
public_interface=eth0
routing_source_ip=60.12.206.99
fixed_range=10.0.0.0/24
flat_interface=eth1
flat_network_bridge=b41
force_dhcp_release=True
libvirt_type=kvm
libvirt_use_virtio_for_bridges=True
iscsi_helper=ietadm
iscsi_ip_address=$my_ip
novncproxy_base_url=http://$my_ip:6080/vnc_auto.html
- Create a NOVA-COMPUTE init script named nova-compute under /etc/init.d/ with the following content:
#!/bin/sh # # openstack-nova-compute OpenStack Nova Compute Worker # # chkconfig: - 20 80 # description: Compute workers manage computing instances on host \ # machines. Through the API, commands are dispatched \ # to compute workers to: \ # * Run instances \ # * Terminate instances \ # * Reboot instances \ # * Attach volumes \ # * Detach volumes \ # * Get console output ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Compute Worker # Description: Compute workers manage computing instances on host # machines. Through the API, commands are dispatched # to compute workers to: # * Run instances # * Terminate instances # * Reboot instances # * Attach volumes # * Detach volumes # * Get console output ### END INIT INFO . /etc/rc.d/init.d/functions suffix=compute prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Create a NOVA-NETWORK init script named nova-network under /etc/init.d/ with the following content:
#!/bin/sh # # openstack-nova-network OpenStack Nova Network Controller # # chkconfig: - 20 80 # description: The Network Controller manages the networking resources \ # on host machines. The API server dispatches commands \ # through the message queue, which are subsequently \ # processed by Network Controllers. \ # Specific operations include: \ # * Allocate Fixed IP Addresses \ # * Configuring VLANs for projects \ # * Configuring networks for compute nodes \ ### BEGIN INIT INFO # Provides: # Required-Start: $remote_fs $network $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 6 # Short-Description: OpenStack Nova Network Controller # Description: The Network Controller manages the networking resources # on host machines. The API server dispatches commands # through the message queue, which are subsequently # processed by Network Controllers. # Specific operations include: # * Allocate Fixed IP Addresses # * Configuring VLANs for projects # * Configuring networks for compute nodes ### END INIT INFO . /etc/rc.d/init.d/functions suffix=network prog=openstack-nova-$suffix exec="/usr/bin/nova-$suffix" config="/etc/nova/nova.conf" pidfile="/var/run/nova/nova-$suffix.pid" logfile="/var/log/nova/$suffix.log" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/nova/$prog start() { [ -x $exec ] || exit 5 [ -f $config ] || exit 6 echo -n $"Starting $prog: " daemon --user nova --pidfile $pidfile "$exec --config-file=$config --logfile=$logfile &>/dev/null & echo \$! > $pidfile" retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile $prog retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { status -p $pidfile $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
- Configure sudo: create a nova file under /etc/sudoers.d/ with the following content:
Defaults:nova !requiretty

Cmnd_Alias NOVACMDS = /bin/aoe-stat, \
                      /bin/chmod, \
                      /bin/chmod /var/lib/nova/tmp/*/root/.ssh, \
                      /bin/chown, \
                      /bin/chown /var/lib/nova/tmp/*/root/.ssh, \
                      /bin/dd, \
                      /bin/kill, \
                      /bin/mkdir, \
                      /bin/mount, \
                      /bin/umount, \
                      /sbin/aoe-discover, \
                      /sbin/ifconfig, \
                      /sbin/ip, \
                      /sbin/ip6tables-restore, \
                      /sbin/ip6tables-save, \
                      /sbin/iptables, \
                      /sbin/iptables-restore, \
                      /sbin/iptables-save, \
                      /sbin/iscsiadm, \
                      /sbin/kpartx, \
                      /sbin/losetup, \
                      /sbin/lvcreate, \
                      /sbin/lvdisplay, \
                      /sbin/lvremove, \
                      /sbin/pvcreate, \
                      /sbin/route, \
                      /sbin/tune2fs, \
                      /sbin/vconfig, \
                      /sbin/vgcreate, \
                      /sbin/vgs, \
                      /usr/bin/fusermount, \
                      /usr/bin/guestmount, \
                      /usr/bin/socat, \
                      /bin/cat, \
                      /usr/bin/tee, \
                      /usr/bin/qemu-nbd, \
                      /usr/bin/virsh, \
                      /usr/sbin/brctl, \
                      /usr/sbin/dnsmasq, \
                      /usr/sbin/ietadm, \
                      /usr/sbin/radvd, \
                      /usr/sbin/tgtadm, \
                      /usr/sbin/vblade-persist

nova ALL = (root) NOPASSWD: SETENV: NOVACMDS
chmod 0440 /etc/sudoers.d/nova
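A syntax error in a sudoers fragment can lock sudo out entirely, so it is worth validating the new file with the standard sudo tooling (run as root):

visudo -c -f /etc/sudoers.d/nova   # parse check of just this fragment
sudo -l -U nova                    # list the commands the nova user may run as root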
- Configure the polkit policy: create 50-nova.pkla under /etc/polkit-1/localauthority/50-local.d/ with the following content:
[Allow nova libvirt management permissions]
Identity=unix-user:nova
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
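nova-compute needs libvirtd running before it will start (see the troubleshooting comments at the end of this document), and with the polkit rule above the nova user should be able to open the system libvirt connection. A quick check, assuming libvirt was installed by the dependency step earlier:

service libvirtd start
chkconfig libvirtd on
# nova's shell is /sbin/nologin, so force a shell just for this test
su -s /bin/bash -c 'virsh -c qemu:///system list' nova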
- Configure the init scripts:
chmod 755 /etc/init.d/nova-compute
chmod 755 /etc/init.d/nova-network
mkdir /var/run/nova
mkdir -p /var/lib/nova/instances
mkdir /var/lock/nova
chown nova:root /var/run/nova
chown -R nova:nova /var/lib/nova
chown nova:root /var/lock/nova
- Configure the MYSQL database: run the following statement in mysql on the control node:
grant all on nova.* to root@'192.168.1.%' identified by 'openstack';
- Start the NOVA services
/etc/init.d/nova-compute start
/etc/init.d/nova-network start
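Whether the two services registered successfully is easiest to see from the control node, where nova-manage talks to the nova database; hosts with a recent heartbeat are listed with a :-) state:

# Run on the control node (192.168.1.2)
nova-manage service list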
- Adjust iptables to allow VNC connections
iptables -I INPUT -d 60.12.206.99 -p tcp -m multiport --dports 5900:6000 -j ACCEPT
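On CentOS 6 this rule is lost at reboot unless it is saved; if the stock iptables service is in use, it can be persisted with:

service iptables save    # writes the running rules to /etc/sysconfig/iptables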
Author:趣云团队-yz
Posted by halfss at 5:36 AM
Hello, I came across your article and it is really impressive... but I have one small question:
In the prerequisites for the control-node install, rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-5.noarch.rpm: shouldn't this be x86_64 rather than i386?
Thanks, it has been updated.
Hello, following your article I set this up on CentOS. After finishing the Horizon configuration and restarting with /etc/init.d/httpd restart, I open the browser and log in with the username and password, and get this error: Error: An error occurred authenticating. Please try again later. I have been looking for a long time and cannot find what is wrong. Could you give me some pointers?
Also, running curl -d '{"auth": {"tenantName": "admin", "passwordCredentials":{"username": "admin", "password": "secrete"}}}' -H "Content-type: application/json" http://x.x.x.x:35357/v2.0/tokens | python -mjson.tool confirms that the Keystone configuration is fine.
You can edit the /opt/horizon-2012.1/openstack_dashboard/local/local_settings.py file:
DEBUG = True
..
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'DEBUG',
'class': 'logging.StreamHandler',
},
..
Then run ./manage.py runserver 0.0.0.0:8888 from the /opt/horizon-2012.1 directory and access that; if something fails you will see the error there.
Hello, it is probably the blog editor's doing: in daemon --user keystone --pidfile $pidfile "$exec --config-file=$config &>/dev/null & echo \$! > $pidfile" the double dashes are displayed as a single dash; they should read --user keystone, --pidfile and --config-file ...
It is that dash in -user that causes the problem; it should not read -user-keystone.
Haha, this update is great. Thanks!
Thank you; following the document I finished the installation, but when creating a virtual machine I ran into an error and would appreciate some guidance. OSError: [Errno 2] No such file or directory: '/var/lib/nova/nova/virt/libvirt.xml.template'
Try adding this line to the nova configuration file:
pybasedir=/opt/nova-2012.1
Adjust it to your own layout; you can use find to locate the file:
find / -name libvirt.xml.template
Great, that did it. Many thanks!
I hit a rather odd error: the dashboard displays the uploaded image (stored in Swift) just fine, but when launching an instance, compute.log reports ImageNotFound: Image b80cd903-29e7-4b32-ab07-c3484410d23b could not be found. It is driving me crazy.
Check whether the permissions on the instances directory are wrong.
谢谢Jenny
I wonder whether it is this bug: https://bugs.launchpad.net/nova/essex/+bug/977765
Ubuntu has already fixed this bug, but what about CentOS...
I am not sure whether it is that bug; if it is, you can patch it by hand, or post the error log in the group and ask there.
@wuzhenle
bug https://bugs.launchpad.net/nova/essex/+bug/977765
Regarding that bug: first check whether replication between your Swift nodes is healthy. If it is, look at https://review.openstack.org/#/c/6431/ to see which .py files were changed, apply the changes and re-run python setup.py install, or build a .patch file yourself.
I can only say: you are amazing!!!
Hello, while installing the compute node I ran into a problem: nova-compute will not start, although nova-network starts fine.
The nova-compute error log is:
2012-04-20 16:39:32 TRACE nova self.driver.update_available_resource(context, self.host)
2012-04-20 16:39:32 TRACE nova File "/usr/lib/python2.6/site-packages/nova-2012.1-py2.6.egg/nova/virt/libvirt/connection.py", line 1881, in update_available_resource
2012-04-20 16:39:32 TRACE nova 'vcpus_used': self.get_vcpu_used(),
2012-04-20 16:39:32 TRACE nova File "/usr/lib/python2.6/site-packages/nova-2012.1-py2.6.egg/nova/virt/libvirt/connection.py", line 1705, in get_vcpu_used
2012-04-20 16:39:32 TRACE nova for dom_id in self._conn.listDomainsID():
2012-04-20 16:39:32 TRACE nova File "/usr/lib/python2.6/site-packages/nova-2012.1-py2.6.egg/nova/virt/libvirt/connection.py", line 277, in _get_connection
2012-04-20 16:39:32 TRACE nova self.read_only)
2012-04-20 16:39:32 TRACE nova File "/usr/lib/python2.6/site-packages/nova-2012.1-py2.6.egg/nova/virt/libvirt/connection.py", line 320, in _connect
2012-04-20 16:39:32 TRACE nova return libvirt.openAuth(uri, auth, 0)
2012-04-20 16:39:32 TRACE nova File "/usr/lib64/python2.6/site-packages/libvirt.py", line 102, in openAuth
2012-04-20 16:39:32 TRACE nova if ret is None:raise libvirtError('virConnectOpenAuth() failed')
2012-04-20 16:39:32 TRACE nova libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
2012-04-20 16:39:32 TRACE nova
From that I went to the control node and found that libvirtd had not started successfully; I do not know why.
The libvirtd error log is:
2012-04-20 09:47:48.931+0000: 20096: info : libvirt version: 0.9.4, package: 23.el6_2.7 (CentOS BuildSystem , 2012-03-26-14:12:59, c6b5.bsys.dev.centos.org)
2012-04-20 09:47:48.931+0000: 20096: error : virNetServerMDNSStart:460 : internal error Failed to create mDNS client: Daemon not running
I followed the installation steps in the post exactly.
2012-04-20 16:39:32 TRACE nova libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
libvirt is not running
Does libvirt need any configuration changes after it is installed?
On the control node I changed:
perl -pi -e "s|#mdns_adv|mdns_adv|" /etc/libvirt/libvirtd.conf
perl -pi -e "s|#auth_unix_rw|auth_unix_rw|" /etc/libvirt/libvirtd.conf
and now the control node has the /var/run/libvirt/libvirt-sock file.
It starts now, but the compute node still reports
libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
2012-04-23 17:53:54 CRITICAL nova [-] Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
2012-04-23 17:53:54 TRACE nova Traceback (most recent call last):
Could it be connecting to this socket on the compute node?
Try starting avahi-daemon first, then start libvirtd.
It connects to the local one; start the libvirtd service and then check with ps aux | grep libvirt.
Another question: the post says the Floating IP is 60.12.206.110, but nowhere in the configuration is this IP actually set, so I do not understand how it is used.
The dashboard usage is not covered in detail; just create it directly with nova-manage.
Thanks for the guidance. Also, is the floating_range=60.12.206.115 mentioned in nova.conf just an arbitrary public IP?
No.
One more question:
# glance index
Failed to show index. Got error:
You are not authenticated.
When this error appears, which environment variables are usually missing?
OS_USERNAME
OS_PASSWORD
OS_TENANT_NAME
OS_AUTH_URL
OS_REGION_NAME
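A sketch of exporting these before retrying glance index; the username, password, tenant and region here are placeholders that must match what keystone_data.sh actually created on your installation:

export OS_USERNAME=admin
export OS_PASSWORD=secrete          # placeholder password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://60.12.206.105:5000/v2.0/
export OS_REGION_NAME=RegionOne     # placeholder region name
glance index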
Hello, following the document I installed rabbitmq-server successfully, but when starting the service I got an error:
[root@centos-liws etc]# /etc/init.d/rabbitmq-server start
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
Checking the log:
– core initialized
starting empty DB check …done
starting exchange, queue and binding recovery …done
starting mirror queue slave sup …done
starting adding mirrors to queues …done
– message delivery logic ready
starting error log relay …done
starting networking …BOOT ERROR: FAILED
Reason: {badmatch,
{error,
{shutdown,
{child,undefined,’rabbit_tcp_listener_sup_:::5672′,
{tcp_listener_sup,start_link,
[{0,0,0,0,0,0,0,0},
5672,
[inet6,binary,
{packet,raw},
{reuseaddr,true},
{backlog,128},
{nodelay,true},
{exit_on_close,false}],
{rabbit_networking,tcp_listener_started,[amqp]},
{rabbit_networking,tcp_listener_stopped,[amqp]},
Has the author run into this kind of problem? How should it be resolved? Thanks.
Solved: it was a port conflict; port 5672 was already taken by qpidd. Killing the qpidd process fixed it.
A question: in 3.13 KEYSTONE identity service configuration, under creating the keystone init script in /etc/init.d/, I start the keystone service as documented and it dies immediately; netstat shows nothing listening on TCP port 5000, and no /var/log/keystone/keystone.log is generated. I cannot tell what the error is.
Also, in the /etc/init.d/keystone script there is the line status -p $pidfile $prog; the status tool does not provide a -p option. Is this a mistake?
Please advise. Thanks!
You can add debug=True to the configuration file, or run keystone-all directly in a terminal to see what it reports.
The status there is not an external status command; it is the status shell function provided by /etc/rc.d/init.d/functions, which does accept -p.
Thanks, that problem is solved. One of keystone-all's dependencies was not installed properly; reinstalling with yum install openstack-keystone fixed it.
Hello, when I run keystone_data.sh I get the error Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400). Running a single step such as keystone --token ADMIN --endpoint http://localhost:35357/v2.0 service-create --name=keystone --type=identity --description="Keystone Identity Service" also reports
Unable to communicate with identity service: [Errno 111] Connection refused. (HTTP 400). What could the cause be?
Hello, I would like to ask about a problem. I built my test environment with three machines:
control node:rabbitmq mysql keystone glance nova-api nova-novncproxy
network node:nova-network
compute node:nova-compute
When I try to fetch a VM's console log (nova console-log vm) or its VNC URL (nova get-vnc-console vm novnc), the response is very slow, around ten to twenty seconds, and the delay varies. (date;nova console-log vm;date)
Then, when I check the message queue (sudo rabbitmqctl list_queues), I sometimes see a message sitting on the VM host's queue for several seconds.
Is this a nova-compute problem, with messages not being consumed quickly enough?
Could you give me some pointers?
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_host = 60.12.206.105
service_port = 5000
service_protocol = http
auth_host = 60.12.206.105
auth_port = 35357
auth_protocol = http
auth_uri = http:/60.12.206.105:500/ ---------------------------------- is it really like this, rather than http://60.12.206.105:5000/ ??
A typo.
Hello! I would like to ask: does OpenStack consider how to protect users' confidential information when the service provider itself is not trusted?
And would it be feasible to replace OpenStack's role-based access control with attribute-based access control?
By information you mean the data inside the cloud instances, right? Your servers are physically in the provider's hands, so the provider can always get at the data, let alone a virtual machine.
A role is just one attribute of a user.
Yes, I mean the data inside the cloud instances. If the provider is untrusted, one could consider encrypting the data before storing it.
Are you familiar with attribute-based access control? Why did OpenStack adopt role-based rather than attribute-based access control? Perhaps because there is still no unified standard for attribute-based access control.
How is a host's SSH key imported into a virtual machine to set up authentication?
Mount the virtual machine's image locally, write the public key into the image, unmount it, and the VM will have the public key once it boots.
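A minimal sketch of that procedure using libguestfs, assuming guestmount is installed and using placeholder image and key paths (guestmount and fusermount are among the tools already allowed in the nova sudoers file above):

mkdir -p /mnt/img
guestmount -a /path/to/image.img -i /mnt/img        # -i inspects the guest and mounts its filesystems
mkdir -p /mnt/img/root/.ssh
cat ~/.ssh/id_rsa.pub >> /mnt/img/root/.ssh/authorized_keys
chmod 700 /mnt/img/root/.ssh
chmod 600 /mnt/img/root/.ssh/authorized_keys
fusermount -u /mnt/img                               # unmount before booting the image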
Running keystone_data.sh for data initialization fails; please help me see what the problem is.
No handlers could be found for logger "keystoneclient.client"
Conflict occurred attempting to store role. (IntegrityError) (1062, "Duplicate entry 'admin' for key 'name'") 'INSERT INTO role (id, name) VALUES (%s, %s)' ('bd58030e95074fe9a35267a155d83ce0', 'admin') (HTTP 409)
No handlers could be found for logger "keystoneclient.client"
Conflict occurred attempting to store role. (IntegrityError) (1062, "Duplicate entry 'KeystoneAdmin' for key 'name'") 'INSERT INTO role (id, name) VALUES (%s, %s)' ('bfef08b93e8d471e890753b1cd090f0b', 'KeystoneAdmin') (HTTP 409)
No handlers could be found for logger "keystoneclient.client"
Conflict occurred attempting to store role. (IntegrityError) (1062, "Duplicate entry 'KeystoneServiceAdmin' for key 'name'") 'INSERT INTO role (id, name) VALUES (%s, %s)' ('09b5daccfc65479b81835b101e9a7766', 'KeystoneServiceAdmin') (HTTP 409)
No handlers could be found for logger "keystoneclient.client"
Conflict occurred attempting to store role. (IntegrityError) (1062, "Duplicate entry 'anotherrole' for key 'name'") 'INSERT INTO role (id, name) VALUES (%s, %s)' ('45d55f917b744c68be233f02f8cb2f34', 'anotherrole') (HTTP 409)
usage: keystone [--os_username ]
[--os_password ]
[--os_tenant_name ]
[--os_tenant_id ] [--os_auth_url ]
[--os_region_name ]
[--os_identity_api_version ]
[--token ] [--endpoint ]
[--username ] [--password ]
[--tenant_name ] [--auth_url ]
[--region_name ]
keystone: error: argument --username: expected one argument