openstack

OpenStack Installation (1): Server Allocation


I used Ubuntu 12.04 as the operating system, and every machine has NTP configured.

Installing OpenStack is a fairly tedious job, and this document is a record of my own installation. Many thanks to the author of liangbo.me, whose installation guide helped me a great deal; my setup is based on his document. Blog post: http://liangbo.me/index.php/2012/03/27/11/.

I used six machines to install OpenStack, allocated as follows:

 

keystone

  Host: keystone
  IP: 192.168.0.106
  Components: mysql, keystone

Swift

  Host: swift1
  IP: 192.168.0.110
  Components: proxy-server, account-server, container-server, object-server

  Host: swift2
  IP: 192.168.0.111
  Components: account-server, container-server, object-server

Glance

  Host: glance
  IP: 192.168.0.112
  Components: glance-api, glance-registry

Compute

  Host: compute1
  IP: 192.168.0.114
  Components: rabbitmq-server, bridge-utils, and the compute-control related components

  Host: compute2
  IP: 192.168.0.109
  Components: nova-compute

Dashboard

  Host: compute1
  IP: 192.168.0.114
  Components: openstack-dashboard

 

Original post: http://blog.csdn.net/nocturne1210/article/details/7877188




OpenStack Installation (2): Keystone


Keystone is the OpenStack project responsible for identity and authentication; every service request has to be validated by it in order to obtain the service's endpoint. See the official documentation for the details. Here I use MySQL to store Keystone's data.

keystone

  Host: keystone
  IP: 192.168.0.106
  Components: mysql, keystone

 

1. Installation

1) Install the database

sudo apt-get install mysql-server mysql-client python-mysqldb

In /etc/mysql/my.cnf, change bind-address = 127.0.0.1 to 0.0.0.0 so that remote hosts can connect to this MySQL instance.

Restart the MySQL service: sudo service mysql restart

2) Install Keystone

• Install the packages

sudo apt-get install keystone

Create the keystone database, then create a user and grant it privileges:

create database keystone;

grant all on keystone.* to 'keystone'@'%' identified by 'keystonepwd';
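Before pointing Keystone at this database, it can be worth checking that the grant really allows remote connections (python-mysqldb was installed above). A minimal sketch, run from any other machine; the host, user, password, and database name are the ones created above:

import MySQLdb   # provided by the python-mysqldb package

# Quick sanity check of the remote grant; the table list will be empty
# until keystone-manage db_sync is run later in this article.
conn = MySQLdb.connect(host="192.168.0.106", user="keystone",
                       passwd="keystonepwd", db="keystone")
cur = conn.cursor()
cur.execute("SHOW TABLES")
print(cur.fetchall())
conn.close()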

• Configure Keystone

Edit /etc/keystone/keystone.conf:

[sql]

#connection = sqlite:////var/lib/keystone/keystone.db

connection = mysql://keystone:keystonepwd@192.168.0.106/keystone

Also note this part of the file and remember the admin_token parameter; you will need it later, since it is used to access the Keystone admin API. The default is ADMIN, but you can change it to something else.

[DEFAULT]

public_port = 5000

admin_port = 35357

admin_token = ADMIN

compute_port = 8774

verbose = True

debug = True

log_config = /etc/keystone/logging.conf

Restart the Keystone service:

sudo service keystone restart

Sync the database:

sudo keystone-manage db_sync

Then check the database:

mysql> show tables;

 

+------------------------+
| Tables_in_keystone     |
+------------------------+
| ec2_credential         |
| endpoint               |
| metadata               |
| migrate_version        |
| role                   |
| service                |
| tenant                 |
| token                  |
| user                   |
| user_tenant_membership |
+------------------------+

2. Using Keystone

Export the environment variables (you can also pass them as arguments on every keystone command; see keystone help for the command format):

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT=http://192.168.0.106:35357/v2.0

Add a tenant:

keystone tenant-create --name adminTenant --description "Admin Tenant" --enabled true

keystone@keystone:~$ keystone tenant-list

+----------------------------------+-------------+---------+
|                id                |     name    | enabled |
+----------------------------------+-------------+---------+
| 72a95ab302cc42d59e6f414769dcfec7 | adminTenant | True    |
+----------------------------------+-------------+---------+

Add a user:

keystone user-create --tenant_id 72a95ab302cc42d59e6f414769dcfec7 --name admin --pass openstack --enabled true

keystone@keystone:~$ keystone user-list

+----------------------------------+---------+-------+-------+
|                id                | enabled | email |  name |
+----------------------------------+---------+-------+-------+
| 4fd5ba059a6945c0a43ff63b0140b0a9 | True    | None  | admin |
+----------------------------------+---------+-------+-------+

Add a role:

keystone role-create --name adminRole

keystone@keystone:~$ keystone role-list

+----------------------------------+-----------+
|                id                |    name   |
+----------------------------------+-----------+
| 675b96a12d834021b519ef50502a5e5e | adminRole |
+----------------------------------+-----------+

Associate the three:

keystone user-role-add --user 4fd5ba059a6945c0a43ff63b0140b0a9 --tenant_id 72a95ab302cc42d59e6f414769dcfec7 --role 675b96a12d834021b519ef50502a5e5e

That's it. Let's test it with the curl tool.

sudo apt-get install curl

First, try a wrong password:

curl -d '{"auth": {"tenantName": "adminTenant", "passwordCredentials": {"username": "admin", "password": "wrong"}}}' -H "Content-type: application/json" http://192.168.0.106:35357/v2.0/tokens | python -mjson.tool

The response:

{
    "error": {
        "code": 401,
        "message": "Invalid user / password",
        "title": "Not Authorized"
    }
}

If the username and password are both correct:

curl -d '{"auth": {"tenantName": "adminTenant", "passwordCredentials": {"username": "admin", "password": "openstack"}}}' -H "Content-type: application/json" http://192.168.0.106:35357/v2.0/tokens | python -mjson.tool

the call returns a lot of information (the token, the user, and so on); it is too long to paste here.
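The same check can also be scripted from Python using only the standard library. A minimal sketch, assuming the endpoint and the admin credentials created above:

import json
import urllib2

# Same request as the curl command above, sent to the Keystone v2.0 tokens API.
body = {"auth": {"tenantName": "adminTenant",
                 "passwordCredentials": {"username": "admin",
                                         "password": "openstack"}}}
req = urllib2.Request("http://192.168.0.106:35357/v2.0/tokens",
                      data=json.dumps(body),
                      headers={"Content-type": "application/json"})
resp = json.loads(urllib2.urlopen(req).read())

# The token id and its expiry are the parts the other services will need.
print(resp["access"]["token"]["id"])
print(resp["access"]["token"]["expires"])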

 

Original post: http://blog.csdn.net/nocturne1210/article/details/7877307


OpenStack Unit Testing


This article describes how to do unit testing during OpenStack nova development. Once the nova development environment is set up, you should run the unit tests whenever you modify the source. This article is essentially a translation of the official documentation, with only brief notes on the key steps; related references are listed at the end.

1. Running the tests

The previous article already covered how to run the unit tests; execute the test script:

 

 

./run_tests.sh

This runs the tests for the entire nova project and takes quite a while. The script wraps the usage of the nose test framework; you can search for resources on nose yourself, or read about it here. The script supports a number of options for getting the corresponding information; the full usage is:

 

 

Usage: ./run_tests.sh [OPTION]...
Run Nova's test suite(s)

  -V, --virtual-env        Always use virtualenv.  Install automatically if not present
  -N, --no-virtual-env     Don't use virtualenv.  Run tests in local environment
  -s, --no-site-packages   Isolate the virtualenv from the global Python environment
  -r, --recreate-db        Recreate the test database (deprecated, as this is now the default).
  -n, --no-recreate-db     Don't recreate the test database.
  -x, --stop               Stop running tests after the first error or failure.
  -f, --force              Force a clean re-build of the virtual environment. Useful when dependencies have been added.
  -p, --pep8               Just run pep8
  -P, --no-pep8            Don't run pep8
  -c, --coverage           Generate coverage report
  -h, --help               Print this usage message
  --hide-elapsed           Don't print the elapsed time for each test along with slow test list

For more information about these options, click here.

The above runs the tests for the whole project. If you only want to test a particular module or feature, you can run the corresponding subset of tests:

 

 

./run_tests.sh scheduler
# tests nova/tests/scheduler

The command above tests the nova scheduler module. You can also test a specific class or method within a module, as below:

 

 

./run_tests.sh test_libvirt:HostStateTestCase
# tests the HostStateTestCase class in the libvirt module

./run_tests.sh test_utils:ToPrimitiveTestCase.test_dict
# tests the ToPrimitiveTestCase.test_dict method
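For reference, the test classes named above are ordinary unittest-style classes that nose collects by name. Below is a minimal sketch of what such a test can look like; the module and class names are made up for illustration (real nova tests live under nova/tests/ and usually inherit from nova's own test.TestCase):

# nova/tests/test_example.py -- hypothetical example, not part of nova
import json
import unittest


class ToPrimitiveLikeTestCase(unittest.TestCase):
    """Run only this class with: ./run_tests.sh test_example:ToPrimitiveLikeTestCase"""

    def setUp(self):
        super(ToPrimitiveLikeTestCase, self).setUp()
        self.data = {"name": "instance-1", "vcpus": 1}

    def test_dict_roundtrip(self):
        # A trivial check in the spirit of ToPrimitiveTestCase.test_dict.
        self.assertEqual(self.data, json.loads(json.dumps(self.data)))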

2. Controlling the output

By default, running the tests prints a large amount of log output to the console, and it is hard to pick out the results you want; even redirecting it to a file is not much more convenient. To control the output, add one option when running the tests:

 

 

--nologcapture


Testing OpenStack Swift with HttpClient for Java



Overview

Launching a new instance involves many components inside OpenStack nova:

  • API server: handles client requests and forwards them to the cloud controller
  • Cloud controller: the connection between the compute nodes, the network controller, the API server, and the scheduler
  • Scheduler: picks a host on which to run the command
  • Compute worker: starts and stops instances, attaches and detaches volumes, and so on
  • Network controller: manages network resources, allocates fixed IPs, configures VLANs

 

  1. The API server sends a message to the cloud controller.
  2. Authentication checks that the user has permission, then the cloud controller sends the message to the scheduler.
  3. The scheduler casts a message to a chosen host, asking it to launch an instance.
  4. The compute worker (the chosen host) picks up the message.
  5-8. The compute worker needs a fixed IP to launch the instance, so it sends a message to the network controller.

 

Below I will go through these steps in more detail.

 

API

1. Through the dashboard web pages
2. Through the command line: euca-add-keypair, euca-run-instances

 

The user's request is sent to nova-api in one of two ways.

First: through the OpenStack API (nova/api/servers.py, class Controller(object)), via its create method:

    def create(self, req, body):

        """ Creates a new server for a given user """

        if 'server' in body:

            body['server']['key_name'] = self._get_key_name(req, body)

 

        extra_values = None

        extra_values, instances = self.helper.create_instance(

                req, body, self.compute_api.create)

 

Second: through the EC2 API (nova/api/cloud.py, class CloudController), which calls def run_instances(self, context, **kwargs):

        ...

        (instances, resv_id) = self.compute_api.create(context,

            instance_type=instance_types.get_instance_type_by_name(

                kwargs.get('instance_type', None)),

        ...

 

In the end the Compute API's create() is called, which:

  • checks whether the number of instances of this type has hit the quota limit
  • creates a security group if none exists
  • generates MAC addresses and hostnames
  • sends a message to the scheduler asking it to run the instance

 

CAST

When maxCount is 1 (the default), RPC.cast is called to send the run-instance message to the scheduler. In OpenStack, messages sent with RPC.cast are dispatched through RabbitMQ: the producer (the Compute API) publishes a message to a topic exchange (the scheduler topic), the consumer (the scheduler worker) picks it up from the queue, and a cast does not require a return value.


def _schedule_run_instance(self,
        ...
        return rpc_method(context,
                FLAGS.scheduler_topic,
                {"method": "run_instance",
                 "args": {"topic": FLAGS.compute_topic,
                          "request_spec": request_spec,
                          "admin_password": admin_password,
                          "injected_files": injected_files,
                          "requested_networks": requested_networks,
                          "is_first_time": True,
                          "filter_properties": filter_properties}})

Scheduler

The scheduler receives the message and picks a target host according to the configured scheduling policy; for example, the zone scheduler picks a host in the requested, available zone. Finally it casts a message to that specific host:


def cast_to_compute_host(context, host, method, update_db=True, **kwargs):
    """Cast request to a compute host queue"""

    if update_db:
        # fall back on the id if the uuid is not present
        instance_id = kwargs.get('instance_id', None)
        instance_uuid = kwargs.get('instance_uuid', instance_id)
        if instance_uuid is not None:
            now = utils.utcnow()
            db.instance_update(context, instance_uuid,
                    {'host': host, 'scheduled_at': now})
    rpc.cast(context,
            db.queue_get_for(context, 'compute', host),
            {"method": method, "args": kwargs})
    LOG.debug(_("Casted '%(method)s' to compute '%(host)s'") % locals())

Compute

The compute worker process receives the message and executes the method (nova/compute/manager.py):


def _run_instance(self, context, instance_uuid,
                      requested_networks=None,
                      injected_files=[],
                      admin_password=None,
                      is_first_time=False,
                      **kwargs):
        """Launch a new instance with specified options."""
        context = context.elevated()
        try:
            instance = self.db.instance_get_by_uuid(context, instance_uuid)
            self._check_instance_not_already_created(context, instance)
            image_meta = self._check_image_size(context, instance)
            self._start_building(context, instance)
            self._notify_about_instance_usage(instance, "create.start")
            network_info = self._allocate_network(context, instance,
                                                  requested_networks)
            try:
                block_device_info = self._prep_block_device(context, instance)
                instance = self._spawn(context, instance, image_meta,
                                       network_info, block_device_info,
                                       injected_files, admin_password)
        ...

  • checks whether the instance is already running
  • allocates a fixed IP address
  • sets up a VLAN and a bridge if they are not already set up
  • finally spawns the instance through the virtualization driver

 

network controller

network_info = self._allocate_network(context, instance,
                                      requested_networks)

This calls the network API's allocate_for_instance method:


def allocate_for_instance(self, context, instance, **kwargs):
        """Allocates all network structures for an instance.

        :returns: network info as from get_instance_nw_info() below
        """
        args = kwargs
        args['instance_id'] = instance['id']
        args['instance_uuid'] = instance['uuid']
        args['project_id'] = instance['project_id']
        args['host'] = instance['host']
        args['rxtx_factor'] = instance['instance_type']['rxtx_factor']

        nw_info = rpc.call(context, FLAGS.network_topic,
                           {'method': 'allocate_for_instance',
                            'args': args})

The biggest difference between RPC.call and RPC.cast is that call expects a response.
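To make the difference concrete, here is a toy, self-contained sketch (this is not nova's rpc module, it only mimics the semantics: cast publishes a message and returns immediately, while call blocks until the consumer puts a reply on a response queue):

# Illustrative only: a stand-in for rpc.cast / rpc.call semantics.
import Queue
import threading

requests, replies = Queue.Queue(), Queue.Queue()

def worker():
    # Plays the role of the consumer (e.g. the network manager).
    while True:
        msg = requests.get()
        result = "handled %s" % msg["method"]
        if msg["needs_reply"]:          # call(): the consumer must answer
            replies.put(result)

t = threading.Thread(target=worker)
t.daemon = True
t.start()

def cast(topic, msg):
    """Fire-and-forget, like rpc.cast: no return value is expected."""
    requests.put(dict(msg, needs_reply=False))

def call(topic, msg):
    """Blocks until the consumer responds, like rpc.call."""
    requests.put(dict(msg, needs_reply=True))
    return replies.get()

cast("compute", {"method": "run_instance", "args": {}})
print(call("network", {"method": "allocate_for_instance", "args": {}}))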

 

Spawn instance

Next, the virtualization driver spawns the instance; here we are using libvirt (nova/virt/libvirt/connection.py):


def spawn(self, context, instance, image_meta, network_info,
              block_device_info=None):
        xml = self.to_xml(instance, network_info, image_meta, False,
                          block_device_info=block_device_info)
        self.firewall_driver.setup_basic_filtering(instance, network_info)
        self.firewall_driver.prepare_instance_filter(instance, network_info)
        self._create_image(context, instance, xml, network_info=network_info,
                           block_device_info=block_device_info)

        self._create_new_domain(xml)
        LOG.debug(_("Instance is running"), instance=instance)
        self._enable_hairpin(instance)
        self.firewall_driver.apply_instance_filter(instance, network_info)

        def _wait_for_boot():
            """Called at an interval until the VM is running."""
            try:
                state = self.get_info(instance)['state']
            except exception.NotFound:
                LOG.error(_("During reboot, instance disappeared."),
                          instance=instance)
                raise utils.LoopingCallDone

            if state == power_state.RUNNING:
                LOG.info(_("Instance spawned successfully."),
                         instance=instance)
                raise utils.LoopingCallDone

        timer = utils.LoopingCall(_wait_for_boot)
        return timer.start(interval=0.5, now=True)

  • generates the libvirt XML and builds the instance from it
  • prepares the network filters; the default firewall driver is iptables
  • creates the image (more on this in a later post):

        def _create_image(self, context, instance, libvirt_xml, suffix='',
                          disk_images=None, network_info=None,
                          block_device_info=None):
            ...

  • finally, inside the virtualization driver's spawn() method, the driver's createXML() is called

Original post: http://blog.csdn.net/sj13426074890/article/details/7926048

Openstack Essex Deploy by Puppet on Ubuntu 12.04 HOWTO



This HOWTO will guide you through a multi-node Openstack Essex deployment with Puppet on Ubuntu 12.04.

Before we start I will assume you have a clean Ubuntu Server 12.04 installed with the minimal package set. It is strongly recommended to install Openstack on a fresh host, as it will modify a lot of default settings; put another way, don't install Openstack on top of a production system that is already well configured ;-)

Prerequisites

This environment will include 3 hosts:

  • 1 master/proxy/controller + compute node (named controller in the following)
    • vms1.hkstp.internal (eth0: 172.24.0.11/16, eth1: null)
  • 2 compute only nodes
    • vms2.hkstp.internal (eth0: 172.24.0.12/16, eth1: null)
    • vms3.hkstp.internal (eth0: 172.24.0.13/16, eth1: null)

My overall design (for sure, just for internal development and testing):

  • Management subnet: 172.24.0.0/16 (eth0)
  • Floating range: 172.24.1.0/24 (eth0)
  • Fixed range: 10.1.0.0/16 (eth1)
  • Controller node address: 172.24.0.11
  • Default username: openstack
  • Default password: openstack
  • Default token: bdbb8df712625fa7d1e0ff1e049e8aab

Network setup example for /etc/network/interfaces (update with your dns-* accordingly):

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 172.24.0.11
        netmask 255.255.0.0
        network 172.24.0.0
        broadcast 172.24.255.255
        gateway 172.24.0.1
        dns-nameservers 202.130.97.65 202.130.97.66
        dns-search hkstp.internal

auto eth1
iface eth1 inet manual
        up ifconfig eth1 up

You may also need to map above hostname statically by editing /etc/hosts as below:

127.0.0.1      localhost
172.24.0.11    vms1.hkstp.internal    vms1
172.24.0.12    vms2.hkstp.internal    vms2
172.24.0.13    vms3.hkstp.internal    vms3

Every node that is configured to be a nova volume service must have a volume group called nova-volumes.

NOTE: If you are going to use live migration functionality, pre-create system user/group id so they can map directly in cluster setup:

addgroup --system --gid 999 kvm
addgroup --system --gid 998 libvirtd
addgroup --system --gid 997 nova
adduser --system --home /var/lib/libvirt --shell /bin/false --uid 999 --gid 999 --disabled-password libvirt-qemu
adduser --system --home /var/lib/libvirt/dnsmasq --shell /bin/false --uid 998 --gid 998 --disabled-password libvirt-dnsmasq
adduser --system --home /var/lib/nova --shell /bin/false --uid 997 --gid 997 --disabled-password nova
adduser nova libvirtd

Install Puppet

(All nodes) Install puppet agent:

aptitude -y install puppet augeas-tools

(Controller node only) Install puppetmaster by APT, and also install puppetlabs_spec_helper by Gem:

aptitude -y install puppetmaster sqlite3 libsqlite3-ruby libactiverecord-ruby git rake
gem install puppetlabs_spec_helper

(All nodes) Enable pluginsync and configure the hostname of the puppetmaster:

augtool << EOF
set /files/etc/puppet/puppet.conf/agent/pluginsync true
set /files/etc/puppet/puppet.conf/agent/server vms1.hkstp.internal
save
EOF

(Controller node only) Enable storedconfig and configure database:

augtool << EOF
set /files/etc/puppet/puppet.conf/master/storeconfigs true
set /files/etc/puppet/puppet.conf/master/dbadapter sqlite3
set /files/etc/puppet/puppet.conf/master/dblocation /var/lib/puppet/server_data/storeconfigs.sqlite
save
EOF

(Controller node only) Create a dummy site manifest:

cat > /etc/puppet/manifests/site.pp << EOF
node default {
  notify { "Hey ! It works !": }
}
EOF

(Controller node only) Restart puppetmaster

/etc/init.d/puppetmaster restart

Test the puppet agents

(All nodes) Register each client with the puppetmaster:

puppet agent -vt --waitforcert 60

(Controller node only) While the puppet agent is waiting, sign the client certificates:

puppetca sign -a

There should be no error and you should see similar message as below on client:

info: Caching catalog for vms3.hkstp.internal
info: Applying configuration version '1340077073'
notice: Hey ! It works !

Install the Openstack modules for Puppet

Before keep on going it is strongly recommend to reboot your system:

reboot

(Controller node only) Install the latest revision of the modules from GIT:

cd /etc/puppet/modules
git clone git://github.com/puppetlabs/puppetlabs-openstack openstack
cd openstack
rake modules:clone

Now your /etc/puppet/modules should look like below:

root@vms1:/etc/puppet/modules# ls -la /etc/puppet/modules/
total 80
drwxr-xr-x 20 root root 4096 Jun 19 11:55 .
drwxr-xr-x  6 root root 4096 Jun 19 11:46 ..
drwxr-xr-x  7 root root 4096 Jun 19 11:55 apt
drwxr-xr-x  7 root root 4096 Jun 19 11:54 concat
drwxr-xr-x  5 root root 4096 Jun 19 11:55 git
drwxr-xr-x  9 root root 4096 Jun 19 11:55 glance
drwxr-xr-x  6 root root 4096 Jun 19 11:55 horizon
drwxr-xr-x  9 root root 4096 Jun 19 11:55 keystone
drwxr-xr-x  7 root root 4096 Jun 19 11:54 memcached
drwxr-xr-x  9 root root 4096 Jun 19 11:55 mysql
drwxr-xr-x 11 root root 4096 Jun 19 11:55 nova
drwxr-xr-x  7 root root 4096 Jun 19 11:54 openstack
drwxr-xr-x  9 root root 4096 Jun 19 11:55 rabbitmq
drwxr-xr-x  8 root root 4096 Jun 19 11:55 rsync
drwxr-xr-x  7 root root 4096 Jun 19 11:55 ssh
drwxr-xr-x  7 root root 4096 Jun 19 11:55 stdlib
drwxr-xr-x 10 root root 4096 Jun 19 11:55 swift
drwxr-xr-x  5 root root 4096 Jun 19 11:55 sysctl
drwxr-xr-x  6 root root 4096 Jun 19 11:55 vcsrepo
drwxr-xr-x  8 root root 4096 Jun 19 11:55 xinetd

Deploy Openstack controller node on multi-node environment

(Controller node only) A patch against the latest GIT to suit my use case (you should further override these values for your own setup):

cat > /tmp/puppetlabs-openstack.patch << EOF
diff --git examples/site.pp examples/site.pp
index 879d8fa..fd38d4e 100644
--- examples/site.pp
+++ examples/site.pp
@@ -4,7 +4,9 @@
 #
 
 # deploy a script that can be used to test nova
-class { 'openstack::test_file': }
+class { 'openstack::test_file':
+  image_type => 'ubuntu',
+}
 
 ####### shared variables ##################
 
@@ -21,17 +23,17 @@ \$public_interface        = 'eth0'
 \$private_interface       = 'eth1'
 # credentials
 \$admin_email             = 'root@localhost'
-\$admin_password          = 'keystone_admin'
-\$keystone_db_password    = 'keystone_db_pass'
-\$keystone_admin_token    = 'keystone_admin_token'
-\$nova_db_password        = 'nova_pass'
-\$nova_user_password      = 'nova_pass'
-\$glance_db_password      = 'glance_pass'
-\$glance_user_password    = 'glance_pass'
-\$rabbit_password         = 'openstack_rabbit_password'
-\$rabbit_user             = 'openstack_rabbit_user'
-\$fixed_network_range     = '10.0.0.0/24'
-\$floating_network_range  = '192.168.101.64/28'
+\$admin_password          = 'openstack'
+\$keystone_db_password    = 'openstack'
+\$keystone_admin_token    = 'bdbb8df712625fa7d1e0ff1e049e8aab'
+\$nova_db_password        = 'openstack'
+\$nova_user_password      = 'openstack'
+\$glance_db_password      = 'openstack'
+\$glance_user_password    = 'openstack'
+\$rabbit_password         = 'openstack'
+\$rabbit_user             = 'openstack'
+\$fixed_network_range     = '10.1.0.0/16'
+\$floating_network_range  = '172.24.1.0/24'
 # switch this to true to have all service log at verbose
 \$verbose                 = false
 # by default it does not enable atomatically adding floating IPs
@@ -75,7 +77,7 @@ node /openstack_all/ {
 
 # multi-node specific parameters
 
-\$controller_node_address  = '192.168.101.11'
+\$controller_node_address  = '172.24.0.11'
 
 \$controller_node_public   = \$controller_node_address
 \$controller_node_internal = \$controller_node_address
@@ -83,9 +85,9 @@ \$sql_connection         = "mysql://nova:\${nova_db_password}@\${controller_node_in
 
 node /openstack_controller/ {
 
-#  class { 'nova::volume': enabled => true }
+  class { 'nova::volume': enabled => true }
 
-#  class { 'nova::volume::iscsi': }
+  class { 'nova::volume::iscsi': }
 
   class { 'openstack::controller':
     public_address          => \$controller_node_public,
@@ -142,7 +144,7 @@ node /openstack_compute/ {
     vncproxy_host      => \$controller_node_public,
     vnc_enabled        => true,
     verbose            => \$verbose,
-    manage_volumes     => true,
+    manage_volumes     => false,
     nova_volume        => 'nova-volumes'
   }
EOF
cd /etc/puppet/modules
patch -p0 < /tmp/puppetlabs-openstack.patch

Link the module's example site.pp on the controller for production (I do this so I can keep tracking changes with GIT):

rm -rf /etc/puppet/manifests/site.pp
ln -s /etc/puppet/modules/openstack/examples/site.pp /etc/puppet/manifests/site.pp

Once everything is configured on the controller, you can now configure the controller node by:

puppet agent -vt --waitforcert 60 --certname openstack_controller

While the puppet agent is waiting, sign the client certificates:

puppetca sign -a

Now wait and have a coffee break... Once it is ready, access http://172.24.0.11/ and the Openstack Dashboard login page should appear.

Log in with admin/openstack and you should reach the dashboard overview.

Deploy Openstack compute node on multi-node environment

Once the controller is ready, configure the compute nodes by:

puppet agent -vt --waitforcert 60 --certname openstack_compute_vms1
puppet agent -vt --waitforcert 60 --certname openstack_compute_vms2
puppet agent -vt --waitforcert 60 --certname openstack_compute_vms3

While the puppet agent is waiting, sign the client certificates:

puppetca sign -a

Now wait and have a coffee break...

Verify your Openstack deployment

Once you have installed Openstack with Puppet (and assuming you experience no errors), the next step is to verify the installation.

Ensure that your authentication information is in the user's environment by:

source /root/openrc

For development I would like to relax the firewall rules for all connections:

nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule default udp 1 65535 0.0.0.0/0
nova secgroup-add-rule default icmp -1 255 0.0.0.0/0

Verify that all of the services for nova are operational by (Ctrl + C to terminate):

watch -n1 nova-manage service list

Which should give you a result similar to:

Every 1.0s: nova-manage service list                                               Tue Jun 19 15:52:12 2012

2012-06-19 15:52:12 DEBUG nova.utils [req-7eb90044-238e-4ff5-b60a-cbf7fc243b2e None None] backend <module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc'> from (pid=3498) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:658
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth vms1                                 nova             enabled    :-)   2012-06-19 07:52:05
nova-scheduler   vms1                                 nova             enabled    :-)   2012-06-19 07:52:05
nova-cert        vms1                                 nova             enabled    :-)   2012-06-19 07:52:05
nova-compute     vms1                                 nova             enabled    :-)   2012-06-19 07:52:09
nova-volume      vms1                                 nova             enabled    :-)   2012-06-19 07:52:05
nova-network     vms1                                 nova             enabled    :-)   2012-06-19 07:52:07
nova-network     vms3                                 nova             enabled    :-)   2012-06-19 07:52:05
nova-volume      vms3                                 nova             enabled    :-)   2012-06-19 07:52:03
nova-compute     vms3                                 nova             enabled    :-)   2012-06-19 07:52:11

Run the test script in order to import default images, add key, and start it:

cp /etc/puppet/modules/openstack/files/nova_test.sh /tmp/nova_test.sh
cd /tmp
bash ./nova_test.sh

Now access http://172.24.0.11/ and test as below:

  • Import your keypair
  • Edit default security group to allow all TCP/UDP (i.e. 1 - 65535) to 0.0.0.0/0; all ICMP (i.e. -1 - 255) to 0.0.0.0/0
  • Allocate IP to project
  • Fire up a VM, with your imported keypair
  • Create a volume
  • Attach that volume to the VM
  • Allocate a floating IP to a VM instance
  • Test remote connection with your keypair + SSH

Upgrading

(Controller node only) First of all you should MANUALLY go into every GIT clone under /etc/puppet/modules/* and pull the latest updates... That's too complicated! Let's download my lazy git-pull-all.sh script and get it done within seconds!

wget http://edin.no-ip.com/files/git-pull-all_sh
mv git-pull-all_sh /usr/local/bin/git-pull-all.sh
chmod a+x /usr/local/bin/git-pull-all.sh
git-pull-all.sh /etc/puppet/modules

Go back to the controller and redeploy with the latest setup:

puppet agent -vt --waitforcert 60 --certname openstack_controller

And so for compute nodes too:

puppet agent -vt --waitforcert 60 --certname openstack_compute_vms1
puppet agent -vt --waitforcert 60 --certname openstack_compute_vms2
puppet agent -vt --waitforcert 60 --certname openstack_compute_vms3

Don't forget to reboot all of your systems ;-)


