Using MAAS + Juju to speed up cloud deployment on VMs (by quqi99)

Author: Zhang Hua  Published: 2014-07-20
Copyright: this article may be reproduced freely, but please cite the original source and author with a hyperlink and keep this copyright notice.

(http://blog.csdn.net/quqi99 )

Physical Node Configuration

1, enable kvm nested virtualization feature
   kvm-ok
   cat /sys/module/kvm_intel/parameters/nested
   This should output Y; if it doesn't, do the following:
   sudo modprobe -r kvm_intel
   sudo modprobe kvm_intel nested=1
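   To make nested virtualization survive reboots, a minimal sketch (the file name under /etc/modprobe.d/ is an arbitrary choice):
   echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf   # persists the module option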

2, sudo apt-get update && sudo apt-get dist-upgrade

    sudo apt-get install -y bridge-utils  qemu-kvm qemu-utils libvirt-bin virt-viewer cpu-checker virtinst uvtool

    sudo apt-get install -y virt-manager qemu-system spice-client-gtk spice-vdagent python-spice-client-gtk
3, Ensure $USER is added to libvirtd group.
   groups | grep libvirtd
4, Ensure host machine has SSH keys.
   ssh-keygen -t rsa
5, create a KVM virtual network named cloud via NAT with DHCP, 192.168.100.0/24 (DHCP range 192.168.100.3 - 192.168.100.9)
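   A sketch of defining such a network with virsh, assuming the bridge name virbr-cloud (any unused bridge name works):

cat > cloud.xml <<'EOF'
<network>
  <name>cloud</name>
  <forward mode='nat'/>
  <bridge name='virbr-cloud' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.3' end='192.168.100.9'/>
    </dhcp>
  </ip>
</network>
EOF
virsh -c qemu:///system net-define cloud.xml
virsh -c qemu:///system net-autostart cloud
virsh -c qemu:///system net-start cloud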
6, create a KVM VM named maas (IP: 192.168.100.3, username: ubuntu)

sudo virt-install --connect qemu:///system -n maas -r 2048 \
    --arch=x86_64 -c /images/iso/ubuntu-16.04.2-server-amd64.iso \
    --vnc --accelerate \
    --disk=/images/kvm/maas.img,size=15 \
    --network=network=cloud,model=virtio

If installing MAAS step by step is too much trouble, you can instead choose "multiple server install with MAAS" here to install an Ubuntu server that already includes MAAS; see: https://maas.ubuntu.com/docs/install.html
   vi /etc/network/interfaces
   auto eth0
   iface eth0 inet static
     address 192.168.100.3
     netmask 255.255.255.0
     gateway 192.168.100.1
    # dns-nameservers 8.8.8.8

  $ cat /etc/resolvconf/resolv.conf.d/base

     search localdomain

     nameserver 192.168.100.1

  sudo resolvconf -u
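  A quick sanity check that the static address and resolver work:

  ping -c1 192.168.100.1                      # gateway reachable
  host archive.ubuntu.com 192.168.100.1       # external names resolve via the gateway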


MAAS-Server VM Configuration
http://dinosaursareforever.blogspot.co.uk/2014/06/manually-deploying-openstack-with.html

Above is the MAAS 1.9 architecture diagram; since MAAS 2.0 the architecture is as follows:

  • A region is roughly a data center. A region is subdivided into fabrics, a rack controller is attached to each fabric, and the rack controller presumably caches images for performance.
  • As you can see, the 1.9 "cluster" is the 2.0 "rack", so the maas-cluster-controller package became the maas-rack-controller package.
  • A rack controller can attach to multiple VLANs. The region controller talks to rack controllers over HTTP port 5240 (80 after MAAS HA) and ten RPC ports (5250-5259, used by maas-regiond). By default a rack controller syncs images with regiond every 5 minutes.

[Figure: intro-arch-overview]

[Figure: fabrics and spaces]

1, install postgresql and create the database maasdb owned by the user maas, so that the maas package can create its tables in maasdb

   export LC_ALL="en_US.UTF-8"
   sudo apt-get update
   sudo apt-get install postgresql

 

Pitfall 1: fix the problem "No PostgreSQL clusters exist; see 'man pg_createcluster'"
sudo pg_createcluster 9.3 main --start  # if postgres can start after this, the steps below are unnecessary

 

Pitfall 2: sometimes this error appears: psql: FATAL:  password authentication failed for user "postgres"

That is because postgresql and juju both use md5:

a, temporarily disable md5 in postgresql: search for md5 in /etc/postgresql/9.3/main/pg_hba.conf and change it to trust, e.g.:

    local   all             all                                     trust

b, juju always generates a fixed md5 password in /etc/maas/maas_local_settings.py; use the following command to make the two passwords match:

psql -c "ALTER USER maas WITH PASSWORD '1Vl1WwwB5MNJ'" -d template1



   sudo mkdir -p /usr/local/pgsql/data
   sudo chown -R postgres:postgres /usr/local/pgsql/
   sudo mkdir /var/run/postgresql && sudo chown -R postgres:postgres /var/run/postgresql
   sudo su - postgres
   cd /usr/lib/postgresql/9.3/bin/
   ./initdb -D /usr/local/pgsql/data
   ./postgres -D /usr/local/pgsql/data
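A quick check that a cluster now exists and is online (pg_lsclusters ships with postgresql-common):
   pg_lsclusters    # should show a 9.3/main cluster with status online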

 

ubuntu@maas:~$ sudo su - postgres
[sudo] password for ubuntu: 
postgres@maas:~$ psql -d template1 -U postgres
psql (9.3.7)
Type "help" for help.


template1=# create user maas with password '';            # config file: /etc/maas/maas_local_settings.py
CREATE ROLE
template1=# create database maasdb;
CREATE DATABASE
template1=# grant all privileges on database maasdb to maas;
GRANT
ubuntu@maas:~$ sudo su - maas
ubuntu@maas:~$ psql -d maasdb -U maas
psql (9.3.7)
Type "help" for help.


maasdb=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 maasdb    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres         +


postgres=# \c maasdb
You are now connected to database "maasdb" as user "postgres".
maasdb=# \dt

 

Also edit /etc/maas/maas_local_settings.py so that the database name, user, and password match what was set above.
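As a sketch, the relevant block is Django-style settings and should end up looking roughly like this (key names can differ slightly between MAAS versions; the password is the one set above):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # PostgreSQL backend
        'NAME': 'maasdb',        # database created above
        'USER': 'maas',          # role created above
        'PASSWORD': '1Vl1WwwB5MNJ',
        'HOST': 'localhost',
    }
}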

 

2,  install maas 

sudo apt-get update
sudo apt-get install -y software-properties-common
sudo apt-add-repository ppa:maas-maintainers/stable
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install maas

 

Pitfall 3: if the Images tab is missing after installing maas, use this archive: "sudo apt-add-repository ppa:maas-maintainers/stable"

 

To change the maas IP afterwards, you also need to run:

sudo dpkg-reconfigure maas-region-controller

sudo dpkg-reconfigure maas-cluster-controller

 

maas - seed cloud setup, which includes both the region controller and the cluster controller below.
maas-region-controller - includes the web UI, API and database.
maas-cluster-controller - controls a group (“cluster”) of nodes including DHCP management.
maas-dhcp/maas-dns - required when managing dhcp/dns.


 fix one problem: Ubuntu 14.04 says the password is wrong after locking the screen, although you can log in after switching accounts:
     sudo chown root:shadow /etc/shadow && sudo chmod u=r,g=r /etc/shadow

 

3, create admin user

sudo maas-region-admin createadmin --username=admin --email=admin@163.com
   http://192.168.100.3/MAAS  (/etc/apache2/conf-enabled/maas-http.conf)
   We can use the simple VPN sshuttle when visiting it from outside: sshuttle -D -r <hypervisor IP> 192.168.100.0/24


4, Import boot images
   if you want to limit the types of boot images that can be imported, edit /etc/maas/bootresources.yaml
hua@maas:~$ sudo maas-region-admin apikey --username admin > ~/maas-apikey
hua@maas:~$ maas login myprofile http://192.168.100.3/MAAS `cat ~/maas-apikey`

You are now logged in to the MAAS server at
http://192.168.100.3/MAAS/api/1.0/ with the profile name 'myprofile'.
For help with the available commands, try:
  maas my-maas-session --help

   Import the images at http://192.168.100.3/MAAS/images/,

   or via the API command: maas myprofile node-groups import-boot-images

   A local mirror can also be used to speed up the import: https://maas.ubuntu.com/docs/install.html

   After it succeeds, it looks like the figure below:

 

 5, The public key (~/.ssh/id_rsa.pub) of every machine that will later access the maas nodes should be registered at http://192.168.100.3/MAAS/account/prefs/. When services are deployed through juju, a machine is allocated to a specific user, and only then does Juju inject these public keys into the maas node, so that the user can access the maas nodes that belong to him. (Note: you cannot log in with a username and password; SSH public-key login only works after a maas node has been allocated to a user, i.e. after it has moved from the Ready state to the Deployed state.)
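A sketch of registering a key from the command line instead of the web page, using the sshkeys endpoint of the MAAS 1.x CLI:

maas myprofile sshkeys new key="$(cat ~/.ssh/id_rsa.pub)"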


6, configure dhcp and dns: edit eth0 at http://192.168.100.3/MAAS/clusters/445476e3-fc38-411a-97c6-775c2b6be4ca/interfaces/eth0/edit/
   Interface: eth0
   Management: DHCP and DNS

Note: selecting "DHCP and DNS" immediately reports an "Internal server error", while "DHCP" alone works. The cause is that sudo service bind9 restart fails with "rndc: connect failed: 127.0.0.1#953: connection refused";
the fix is: sudo mkdir -p /etc/named/maas && sudo touch /etc/named/maas/named.conf.maas
See: https://bugs.launchpad.net/ubuntu/+source/maas/+bug/1266840

You must use the maas DNS service, and both the maas server and the maas nodes must use it; only then can the maas nodes' hostnames be resolved, which juju needs to run properly.

So add the following to /etc/resolvconf/resolv.conf.d/head on the maas server and then run "sudo resolvconf -u". On the maas nodes, maas adds it to /etc/resolv.conf automatically.

nameserver 192.168.100.3
search maas
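After restarting bind9 (see below), a quick check that the maas DNS answers for both node hostnames and external names (maas-node-0.maas is the node hostname used later in this post):

dig @192.168.100.3 maas-node-0.maas +short
dig @192.168.100.3 archive.ubuntu.com +short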

Without the maas DNS service, neither the maas nodes' hostnames nor external names such as archive.ubuntu.com can be resolved (maas points the nodes' nameserver at 192.168.100.3 by default), so later steps such as juju bootstrap fail when they cannot reach archive.ubuntu.com, for example:

Command: ['curtin', 'curthooks']
Exit code: 3
Reason: -
Stdout: "Ign http://archive.ubuntu.com trusty InRelease\nIgn http://archive.ubuntu.com trusty-updates InRelease\nIgn http://archive.ubuntu.com trusty-security

....

. No error reported.\numount: /tmp/tmppl9cvQ/target/dev: device is busy.\n        (In some cases useful info about processes that use\n         the device is found by lsof(8) or fuser(1))\nUnexpected error while running command.\nCommand: ['umount', '/tmp/tmppl9cvQ/target/dev']\nExit code: 1\nReason: -\nStdout: ''\nStderr: ''\n"
Stderr: ''

Also set the upstream DNS server maas should use under "Upstream DNS used to resolve domains not managed by this MAAS" at http://192.168.100.3/MAAS/settings/.

You also need to set dnssec-validation no; in /etc/bind/named.conf.options.

Finally, restart the DNS service: sudo service bind9 restart


   IP: 192.168.100.3
   Subnet mask: 255.255.255.0
   Broadcast IP: 192.168.100.255
   Router IP: 192.168.100.1
   IP Range Low: 192.168.100.10                   # the dynamic range, used temporarily while a maas node is Ready and not yet allocated to a user (i.e. not Deployed)
   IP Range High: 192.168.100.99

   IP Static Range Low: 192.168.100.100     # the static range, used once a maas node is Deployed

   IP Static Range High: 192.168.100.199

 

Restart all maas services:

sudo service maas-clusterd restart
sudo service maas-dhcpd6 restart
sudo service maas-dhcpd restart


MAAS Nodes Installation


1, create a dedicated maas user,

    sudo apt-get install libvirt-bin
    sudo chsh maas -s /bin/bash
    sudo su - maas
    ssh-keygen
   Inside the maas VM, run ssh-copy-id -i ~/.ssh/id_rsa hua@192.168.100.1 so that the maas VM can reach libvirt on the physical host without a password

     Test:

ubuntu@maas:~$  sudo -u maas virsh -c qemu+ssh://hua@192.168.100.1/system list --all
 Id    Name                           State
----------------------------------------------------
 3     maas                           running

 

2, on the physical host, run the following to create two VMs via PXE to act as maas nodes: one for the OpenStack control node (4G RAM will do) and one for the compute node (2G RAM).

 

sudo apt-get install virtinst

qemu-img create -f qcow2 /bak/images/maas-controller.qcow2 12G

virt-install --noautoconsole --pxe --boot network,hd,menu=on \
   --graphics spice --video qxl --channel spicevmc \
   --name maas-controller --ram 4096 --vcpus 1 \
   --controller scsi,model=virtio-scsi,index=0 \
   --disk path=/bak/images/maas-controller.qcow2,format=qcow2,size=12,bus=scsi,cache=writeback \
   --network=network=cloud,mac=52:54:00:63:7e:7c,model=virtio \
   --network=network=cloud,mac=52:54:00:63:7e:7d,model=virtio

qemu-img create -f qcow2 /bak/images/maas-compute.qcow2 12G
virt-install --noautoconsole --pxe --boot network,hd,menu=on \
   --graphics spice --video qxl --channel spicevmc \
   --name maas-compute --ram 2048 --vcpus 1 \
   --controller scsi,model=virtio-scsi,index=0 \
   --disk path=/bak/images/maas-compute.qcow2,format=qcow2,size=12,bus=scsi,cache=writeback \
   --network=network=cloud,mac=52:54:00:63:7e:7a,model=virtio \
   --network=network=cloud,mac=52:54:00:63:7e:7b,model=virtio

 

Pitfall 4: sometimes a newly created maas node does not show up under the Nodes menu in the UI; restarting maas sometimes helps.

sudo service maas-clusterd restart
sudo service maas-dhcpd6 restart
sudo service maas-dhcpd restart



3, register the virsh power type for the maas nodes (use IPMI for bare metal), so that maas can power the maas nodes on and off; a CLI sketch follows the settings below.

   Power type: virsh (virtual systems)
   Power Address: qemu+ssh://hua@192.168.100.1/system
   Power ID: the VM's name, i.e. maas-controller or maas-compute
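A sketch of registering the same power settings with the CLI instead of the UI (the parameter names follow the MAAS 1.x node API; verify them against your MAAS version):

SYSTEM_ID=$(maas myprofile nodes list hostname=maas-controller | grep system_id | cut -d \" -f 4)
maas myprofile node update $SYSTEM_ID power_type=virsh \
    power_parameters_power_address=qemu+ssh://hua@192.168.100.1/system \
    power_parameters_power_id=maas-controller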
 

 4, Commission the Virtual Machines

      Note: at this point the maas nodes are in the New state. Click the "Commission Node" button (they move to Commissioning) and they eventually reach the "Ready" state.

      Only nodes in the Ready state can be allocated to Juju. Note that you still cannot log in at this point: the username and password are disabled. Only after a node has been allocated to a user by deploying services with juju is that user's SSH public key injected into the image, and only then can you log in with ssh ubuntu@<ip> (i.e. SSH works in the Deployed state). Also note that maas nodes in the Ready state should be stopped; do not start them manually, let maas control them.

After clicking the "Acquire and start node" button, a maas node in the Deployed state looks like the figure below:

 

5, create two tags: tag the first VM bootstrap and the second VM compute

   maas myprofile tags new name=bootstrap
   maas myprofile tags new name=compute
   maas myprofile tag update-nodes bootstrap add=`maas myprofile nodes list hostname=maas-controller |grep system_id | cut -d \" -f 4`
   maas myprofile tag update-nodes compute add=`maas myprofile nodes list hostname=maas-compute |grep system_id | cut -d \" -f 4`
   maas myprofile tag nodes bootstrap
   maas myprofile tag nodes compute
 


Setup juju for use with MAAS env

Continue on the maas VM by installing juju:

 

sudo apt-get install -y software-properties-common
sudo add-apt-repository ppa:juju/stable
sudo apt-get update && sudo apt-get install -y juju juju-deployer

 juju generate-config  && juju switch maas

 

Then use the following content to update the maas section of the ~/.juju/environments.yaml file.

maas-oauth comes from the http://192.168.100.3/MAAS/account/prefs/ page, or from the command "sudo maas-region-admin apikey --username=admin".

maas:
        type: maas
        maas-server: 'http://192.168.100.3/MAAS/'
        maas-oauth: 'XFRrvYsxvag3DkbsFD:KBTHjWW4uEPKCYwBh3:5LNnXaHGPdXGhUppPK2Z4mPbGNaW5nVh'
        authorized-keys-path: ~/.ssh/id_rsa.pub
        admin-secret: password
        default-series: trusty

 

Run juju bootstrap to install and start the juju agent on the VM tagged bootstrap. If it fails, it is most likely a DNS problem; check maas-dns. juju --debug status gives more information.

juju bootstrap --upload-tools --show-log --debug --constraints tags=bootstrap

juju set-constraints tags=    # run this right after the previous command so that subsequent machines do not inherit the tag constraint

 

Note: the node's IP now comes from the static range (MAAS_STATIC_RANGE_START), i.e. 192.168.100.100.

 

 

hua@hua-ThinkPad-T440p:/bak/images$ ssh ubuntu@192.168.100.100

ubuntu@maas-node-1:~$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 192.168.100.3
search maas

 

The maas server itself should also use the DNS service it provides; only then can juju --debug status resolve the juju services installed on the maas nodes.

On a maas node you can run "sudo tailf /var/log/cloud-init-output.log" to watch what the maas server does over SSH while installing the juju agent charm. In the end, the following processes run on the juju bootstrap node:

ubuntu@maas-node-0:~$ ps -ef|grep juju
root      2098     1  0 03:38 ?        00:00:00 dhclient -1 -v -pf /run/dhclient.juju-br0.pid -lf /var/lib/dhcp/dhclient.juju-br0.leases juju-br0
root     11792     1  0 03:45 ?        00:00:00 /usr/lib/juju/bin/mongod --auth --dbpath /var/lib/juju/db --sslOnNormalPorts --sslPEMKeyFile /var/lib/juju/server.pem --sslPEMKeyPassword xxxxxxx --port 37017 --noprealloc --syslog --smallfiles --journal --keyFile /var/lib/juju/shared-secret --replSet juju --ipv6 --oplogSize 512
root     11850     1  2 03:46 ?        00:00:01 /var/lib/juju/tools/machine-0/jujud machine --data-dir /var/lib/juju --machine-id 0 --debug

Check the state with juju status (use juju --debug status when debugging):

ubuntu@maas:~$ juju status
environment: maas
machines:
  "0":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: maas-node-0.maas
    instance-id: /MAAS/api/1.0/nodes/node-99a83fce-0550-11e5-aeec-525400a843f2/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M tags=bootstrap
    state-server-member-status: has-vote
services: {}

 

 

juju bootstrap --to <hostname>  # bootstrap the state-server on a specific machine 
juju add-machine <hostname>     # add existing MAAS-controlled machines to the juju environment

 

There is a tool that does the same thing, see: https://github.com/niedbalski/maasive

 

 

--------------------------------------------------------------------------------

Download the juju charms onto the maas server:
sudo apt-get install -y charm-tools
mkdir -pv ~/charms/trusty
cd ~/charms/trusty/
for CHARM in \
ceph \
ceph-osd \
cinder \
glance \
juju-gui \
keystone \
mediawiki \
mysql \
nova-cloud-controller \
nova-compute \
ntpmaster \
ntp \
openstack-dashboard \
quantum-gateway \
rabbitmq-server \
swift-proxy \
swift-storage
do
charm-get trusty/$CHARM
done

Deploy juju-gui; --to=0 refers to the first node already started, i.e. the maas node running the juju agent.
cd ~/charms
juju deploy --to=0 local:trusty/juju-gui
juju expose juju-gui
ubuntu@maas:~/charms$ juju status juju-gui
environment: maas
machines:
  "0":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: maas-node-0.maas
    instance-id: /MAAS/api/1.0/nodes/node-99a83fce-0550-11e5-aeec-525400a843f2/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M tags=bootstrap
    state-server-member-status: has-vote
services:
  juju-gui:
    charm: local:trusty/juju-gui-0
    exposed: true
    units:
      juju-gui/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "0"
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: maas-node-0.maas
Then access the juju-gui UI at http://maas-node-0.maas (the hostname can be replaced with the IP).
ubuntu@maas:~/charms$ juju ssh juju-gui/0

Deploy the ntp charm

ubuntu@maas:~/charms$ cat ~/openstack-ntpmaster.yaml
ntpmaster:
  source: LOCAL
How to write the yaml file above can be checked with the charm-info command:
ubuntu@maas:~/charms$ charm-info ntpmaster
Then deploy:
cd ~/charms/
# clear any Juju constraints to make sure it deploys on the first node
juju set-constraints tags=
# deploy the NTP server:
juju deploy --to 0 --config ~/openstack-ntpmaster.yaml local:trusty/ntpmaster
# watch the progress of the charm deployment
juju debug-log

Deploy neutron the same way; since both VMs have two NICs, we set ext-port: 'eth1'.
juju set-constraints tags=
juju deploy --to 0 --config ~/openstack-neutron.yaml local:trusty/quantum-gateway
ubuntu@maas:~/charms$ cat ~/openstack-neutron.yaml
quantum-gateway:
  openstack-origin: distro
  ext-port: 'eth1'
  instance-mtu: 1400

Deploy the compute node the same way. Note that --to 0 is omitted this time, so juju asks maas for a new node (the second VM is started at this point)
juju set-constraints tags=
juju deploy --constraints tags=compute --config ~/openstack-compute.yaml local:trusty/nova-compute -n 1
ubuntu@maas:~/charms$ cat ~/openstack-compute.yaml
nova-compute:
  openstack-origin: distro
After the new node deploys, juju status shows it as:
  "2":
    agent-state: pending
    dns-name: maas-node-1.maas
    instance-id: /MAAS/api/1.0/nodes/node-c0908402-0550-11e5-aeec-525400a843f2/
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M tags=compute

Deploy mysql the same way, placing it on the first node with --to=0 (check whether it is 0 or something else with juju status; for a container you can use --to=lxc:2),
juju deploy --to=0 local:trusty/mysql

Deploy keystone to the first node the same way
juju deploy --to=0 --config ~/openstack-keystone.yaml local:trusty/keystone
ubuntu@maas:~/charms$ cat ~/openstack-keystone.yaml
keystone:
  openstack-origin: distro
  admin-password: password

Deploy rabbitmq to the first node the same way
juju deploy --to=0 local:trusty/rabbitmq-server

Deploy the controller to the first node
juju deploy --to=0 --config ~/openstack-controller.yaml local:trusty/nova-cloud-controller
ubuntu@maas:~/charms$ cat ~/openstack-controller.yaml
nova-cloud-controller:
  openstack-origin: distro
  network-manager: 'Neutron'
  quantum-security-groups: "yes"
  console-access-protocol: spice

Deploy the glance service to the first node
juju deploy --to=0 --config ~/openstack-glance.yaml local:trusty/glance
ubuntu@maas:~/charms$ cat ~/openstack-glance.yaml
glance:
  openstack-origin: distro

Deploy the ntp client; the ntp server is installed on maas-node-0.maas,
juju deploy --config ~/openstack-ntp.yaml local:trusty/ntp
ubuntu@maas:~/charms$ cat ~/openstack-ntp.yaml
ntp:
  source: maas-node-0.maas
The ntp charm is a subordinate charm; it attaches to maas-node-0.maas and maas-node-1.maas
juju add-relation quantum-gateway ntp
juju add-relation nova-compute ntp

Add the remaining relations. With only two VMs, one runs the juju agent plus every OpenStack service except nova-compute and the other runs nova-compute; machine resources are limited, so horizon, cinder and swift are skipped and this is a minimal install.
Before adding relations, use "juju status |grep agent-state" to make sure every charm is in the installed state.
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller keystone
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller quantum-gateway
juju add-relation nova-compute:shared-db mysql
juju add-relation nova-compute:amqp rabbitmq-server
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation quantum-gateway mysql
juju add-relation quantum-gateway:amqp rabbitmq-server:amqp
Then check the state with juju status; if something failed, you can retry with juju resolved -r keystone/0.

Deploy a bundle
juju-deployer -c OPENSTACK_BUNDLE
watch juju status --format=tabular

Using OpenStack
a, the novarc file is as follows:
#!/bin/bash
set -e
KEYSTONE_IP=`juju status keystone/0 | grep public-address | awk '{ print $2 }' | tail -n 1 | xargs host | grep -v alias | awk '{ print $5 }'`
KEYSTONE_ADMIN_TOKEN=`juju ssh keystone/0 sudo grep admin_token /etc/keystone/keystone.conf | tail -n 1 | awk '{ print $3 }'`
echo
echo "Keystone IP: [${KEYSTONE_IP}]"
echo "Keystone Admin Token: [${KEYSTONE_ADMIN_TOKEN}]"
echo
cat << EOF > ~/nova.rc
export SERVICE_ENDPOINT=http://${KEYSTONE_IP}:35357/v2.0/
export SERVICE_TOKEN=${KEYSTONE_ADMIN_TOKEN}
export OS_AUTH_URL=http://${KEYSTONE_IP}:5000/v2.0/
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
EOF

b, create the networks
source ~/nova.rc
neutron net-create Public_Network -- --router:external=True
neutron subnet-create --name Public_Subnet --allocation-pool \
start=192.168.100.150,end=192.168.100.199 \
--gateway=192.168.100.1 --enable_dhcp=False \
--dns-nameserver 192.168.100.3 --dns-nameserver 8.8.8.8 \
Public_Network 192.168.100.0/24    # the positional argument must be the subnet CIDR
neutron net-list

c, upload an image
mkdir -p ~/images && wget -O ~/images/cirros-0.3.3-x86_64-disk.img http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name="cirros" --is-public=true  --progress \
    --container-format=bare --disk-format=qcow2 < ~/images/cirros-0.3.3-x86_64-disk.img

d, define a tenant
keystone tenant-create --name=Project01
keystone user-create --name=proj01user --pass=openstack --email=proj01user@example.com
keystone user-role-add --user proj01user --role Member --tenant Project01
To operate as the new tenant, use its environment variables:
export OS_AUTH_URL=http://`juju-deployer -f keystone`:5000/v2.0/
export OS_USERNAME=proj01user
export OS_PASSWORD=openstack
export OS_TENANT_NAME=Project01

e, define a key pair
nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey

f, allow ssh and icmp in the default security group
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

g, configure the tenant router, i.e. the network used by the VMs
neutron net-create Project01_Network
neutron subnet-create --name Project01_Subnet --allocation-pool \
start=10.0.0.10,end=10.0.0.199 \
--gateway=10.0.0.1 --enable_dhcp=True \
--dns-nameserver 192.168.100.3 --dns-nameserver 8.8.8.8 \
Project01_Network 10.0.0.0/24    # the positional argument must be the subnet CIDR
neutron router-create Project01_Public
neutron router-gateway-set Project01_Public Public_Network
neutron router-interface-add Project01_Public Project01_Subnet

h, create a VM
nova flavor-create --is-public True m1.smaller auto 512 5 1
# the host-aggregate lines below are illustrative: they assume hosts named compute01/compute03 and a flavor named ssd.smaller
nova aggregate-create SSD nova
nova aggregate-add-host SSD compute01
nova aggregate-add-host SSD compute03
nova aggregate-set-metadata SSD ssd=true
nova flavor-key ssd.smaller set ssd=true
nova flavor-show ssd.smaller
nova boot demo1 --flavor m1.smaller --image 1 --key-name mykey

i, use a floating IP
nova floating-ip-create Public_Network
nova floating-ip-associate VM_NAME FLOATING_IP_ADDR

 

 

Juju Bundle

Multiple charms on one machine may conflict, so even on a single machine it is best to isolate them in containers. The two services that cannot run in containers are neutron-gateway (netlink/iscsi does not work in a container) and nova-compute (ovs does not work in a container), so each of those needs its own VM; three VMs in total. The containers, however, use lxcbr0, and eth0 must be attached to that bridge for the containers to reach the other two VMs. The maas provider and the local provider handle this differently:
1, maas provider: sudo cat /var/lib/juju/containers/juju-trusty-lxc-template/lxc.conf shows that it configures lxc.network.link = juju-br0 rather than lxcbr0 (a check for this follows the deploy commands below). Example:
juju deploy --to 0 local:trusty/juju-gui
juju deploy --to lxc:0 local:trusty/mysql
juju deploy --to lxc:0 local:trusty/keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 local:trusty/glance
juju deploy --to lxc:0 local:trusty/rabbitmq-server
juju deploy --to lxc:0 local:trusty/openstack-dashboard
juju deploy --to lxc:0 local:trusty/cinder

juju ssh 192.168.100.100 lxc-ls --fancy
juju deploy  --constraints tags=compute local:trusty/nova-compute
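Before going further, a quick check that the containers really attach to juju-br0 under the maas provider (path from the text above):

juju ssh 0 sudo grep network.link /var/lib/juju/containers/juju-trusty-lxc-template/lxc.conf
# expected: lxc.network.link = juju-br0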

Then wait until every agent-state shown by juju status |grep agent-state is installed, and then run the following commands:


juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone
juju set keystone admin-password="password"


2, local provider: lp:charms/ubuntu deploys a VM that is already configured with eth0 attached to lxcbr0, and new-lxc-network: False makes the containers use that existing lxcbr0. Example:
# juju-deployer -c ../min.yaml openstack-services -L
openstack-services:
  series: trusty
  openstack-origin: cloud:trusty-juno
  source: cloud:trusty-updates/juno
  services:
    ubuntu:
      branch: lp:charms/ubuntu
      series: trusty
      constraints: "mem=2G root-disk=10G"
      num_units: 1
      new-lxc-network: False
    mysql:
      branch: lp:charms/mysql
      constraints: mem=1G
      to: lxc:ubuntu=0
      options:
    rabbitmq-server:
      branch: lp:charms/rabbitmq-server
      to: lxc:ubuntu=0
      options:
    keystone:
      branch: lp:charms/keystone
      options:
        admin-password: openstack
        admin-token: ubuntutesting
      to: lxc:ubuntu=0
    nova-compute:
      branch: lp:charms/nova-compute
      num_units: 1
      constraints: mem=2G
      options:
        config-flags: "auto_assign_floating_ip=False"
        enable-live-migration: False
        enable-resize: False
    nova-cloud-controller:
      branch: lp:charms/nova-cloud-controller
      options:
        network-manager: Quantum
        quantum-security-groups: "yes"
      to: lxc:ubuntu=0
    neutron-gateway:
      branch: lp:charms/quantum-gateway
      options:
        instance-mtu: 1350
    glance:
      branch: lp:charms/glance
      to: lxc:ubuntu=0
      options:
  relations:
    - [ keystone, mysql ]
    - [ nova-cloud-controller, mysql ]
    - [ nova-cloud-controller, rabbitmq-server ]
    - [ nova-cloud-controller, glance ]
    - [ nova-cloud-controller, keystone ]
    - [ nova-compute, nova-cloud-controller ]
    - [ nova-compute, mysql ]
    - - nova-compute
      - rabbitmq-server:amqp
    - [ nova-compute, glance ]
    - [ glance, mysql ]
    - [ glance, keystone ]
    - [ glance, rabbitmq-server ]
    - [ neutron-gateway, mysql ]
    - - neutron-gateway:amqp
      - rabbitmq-server:amqp
    - [ neutron-gateway, nova-cloud-controller ]

 

Or break it down into the following steps; going step by step makes debugging easier:

# deploy a VM with a bit more memory; changing the following in ~/.juju/environments.yaml makes the local provider deploy KVM VMs instead of containers
    local:
        type: local
        lxc-clone: true
        container: kvm

cat ubuntu-charm.yaml
 ubuntu:
      branch: cs:trusty/ubuntu-3    # if deploying the ubuntu charm fails, check whether this branch is right; if not, just run "juju deploy --constraints "mem=5G root-disk=12G" ubuntu" and fix the network manually afterwards
      series: trusty
      constraints: "mem=2G root-disk=10G"
      num_units: 1
      new-lxc-network: False

juju deploy --constraints "mem=5G root-disk=12G" local:trusty/ubuntu --config ubuntu-charm.yaml
# juju set ubuntu new-lxc-network=False
# use juju status to confirm the VM's ID=1, then deploy the other services into containers on that VM; the containers' lxcbr0 bridge carries eth0 and thus reaches the other VMs.
The ubuntu charm uses new-lxc-network=False to make the containers use the existing lxcbr0. Manually set /etc/network/interfaces.d/lxcbr0.cfg to the content below,
delete the old config with sudo rm -rf /etc/network/interfaces.d/eth0.cfg, and finally restart the network to apply it (preferably without rebooting, via "sudo ifup lxcbr0").

There is a bug when restarting networking on ubuntu server; you can only ifdown/ifup individual interfaces:

sudo ifup lxcbr0

https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/1301015

 

auto eth0
iface eth0 inet manual
auto lxcbr0
iface lxcbr0 inet dhcp
   bridge_ports eth0

juju deploy --to lxc:1 --constraints mem=1G local:trusty/mysql

 

For a container, the script in /var/lib/juju/containers/[container-name]/cloud-init runs first; the log file /var/log/juju/[container-name].log is only created after the container exists. The containers can also be listed with:
juju ssh ubuntu/0 sudo lxc-ls --fancy

The whole process is quite slow; check the log above to see whether anything went wrong.
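A sketch of following one container's provisioning (the log file name follows the container name, as described above):

juju ssh ubuntu/0 sudo tail -f /var/log/juju/<container-name>.log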


juju deploy --to lxc:1 local:trusty/rabbitmq-server
juju deploy --to lxc:1 local:trusty/keystone
juju deploy --to lxc:1 nova-cloud-controller
juju deploy --to lxc:1 local:trusty/glance
# deploy the other two VMs for nova-compute and neutron-gateway
juju deploy --constraints "mem=1G root-disk=9G" ubuntu ubuntu2    # with maas you can use juju add-machine instead
juju deploy --to 2 local:trusty/nova-compute
juju deploy --constraints "mem=1G root-disk=9G" ubuntu ubuntu3
juju deploy --to 3 local:trusty/neutron-gateway
# set up the relations
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance rabbitmq-server
juju add-relation neutron-gateway mysql
juju add-relation neutron-gateway:amqp rabbitmq-server:amqp
juju add-relation neutron-gateway nova-cloud-controller
juju set keystone admin-password="openstack"

 

After the above succeeds, juju status looks like this:

$ juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "2":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: 192.168.122.221
    instance-id: hua-local-machine-2
    series: trusty
    containers:
      2/lxc/1:
        agent-state: started
        agent-version: 1.23.3.1
        dns-name: 192.168.122.234
        instance-id: hua-local-machine-2-lxc-1
        series: trusty
        hardware: arch=amd64
      2/lxc/2:
        agent-state: started
        agent-version: 1.23.3.1
        dns-name: 192.168.122.59
        instance-id: hua-local-machine-2-lxc-2
        series: trusty
        hardware: arch=amd64
      2/lxc/3:
        agent-state: started
        agent-version: 1.23.3.1
        dns-name: 192.168.122.217
        instance-id: hua-local-machine-2-lxc-3
        series: trusty
        hardware: arch=amd64
      2/lxc/4:
        agent-state: started
        agent-version: 1.23.3.1
        dns-name: 192.168.122.136
        instance-id: hua-local-machine-2-lxc-4
        series: trusty
        hardware: arch=amd64
      2/lxc/5:
        agent-state: started
        agent-version: 1.23.3.1
        dns-name: 192.168.122.94
        instance-id: hua-local-machine-2-lxc-5
        series: trusty
        hardware: arch=amd64
    hardware: arch=amd64 cpu-cores=1 mem=5120M root-disk=12288M
  "3":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: 192.168.122.190
    instance-id: hua-local-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=1024M root-disk=9216M
  "4":
    agent-state: started
    agent-version: 1.23.3.1
    dns-name: 192.168.122.166
    instance-id: hua-local-machine-4
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=1024M root-disk=9216M
services:
  glance:
    charm: local:trusty/glance-150
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - glance
      identity-service:
      - keystone
      image-service:
      - nova-cloud-controller
      - nova-compute
      shared-db:
      - mysql
    units:
      glance/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: 2/lxc/5
        open-ports:
        - 9292/tcp
        public-address: 192.168.122.94
  keystone:
    charm: local:trusty/keystone-0
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: 2/lxc/3
        public-address: 192.168.122.217
  mysql:
    charm: local:trusty/mysql-327
    exposed: false
    relations:
      cluster:
      - mysql
      shared-db:
      - glance
      - keystone
      - neutron-gateway
      - nova-cloud-controller
      - nova-compute
    units:
      mysql/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: 2/lxc/1
        public-address: 192.168.122.234
  neutron-gateway:
    charm: cs:trusty/quantum-gateway-16
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - neutron-gateway
      quantum-network-service:
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      neutron-gateway/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "4"
        public-address: 192.168.122.166
  nova-cloud-controller:
    charm: cs:trusty/nova-cloud-controller-56
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-compute
      cluster:
      - nova-cloud-controller
      identity-service:
      - keystone
      image-service:
      - glance
      quantum-network-service:
      - neutron-gateway
      shared-db:
      - mysql
    units:
      nova-cloud-controller/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: 2/lxc/4
        open-ports:
        - 3333/tcp
        - 8773/tcp
        - 8774/tcp
        public-address: 192.168.122.136
  nova-compute:
    charm: local:trusty/nova-compute-134
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cloud-compute:
      - nova-cloud-controller
      compute-peer:
      - nova-compute
      image-service:
      - glance
      shared-db:
      - mysql
    units:
      nova-compute/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "3"
        public-address: 192.168.122.190
  rabbitmq-server:
    charm: local:trusty/rabbitmq-server-151
    exposed: false
    relations:
      amqp:
      - glance
      - neutron-gateway
      - nova-cloud-controller
      - nova-compute
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: 2/lxc/2
        open-ports:
        - 5672/tcp
        public-address: 192.168.122.59
  ubuntu:
    charm: cs:trusty/ubuntu-3
    exposed: false
    units:
      ubuntu/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "2"
        public-address: 192.168.122.221
  ubuntu2:
    charm: cs:trusty/ubuntu-3
    exposed: false
    units:
      ubuntu2/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "3"
        public-address: 192.168.122.190
  ubuntu3:
    charm: cs:trusty/ubuntu-3
    exposed: false
    units:
      ubuntu3/0:
        agent-state: started
        agent-version: 1.23.3.1
        machine: "4"
        public-address: 192.168.122.166

 

 

 

A summary of the pitfalls hit while installing maas and juju:
1, the maas package depends on postgresql but neither installs it automatically nor creates the database, so it errors out, and the reported error looks nothing like the real cause
2, setting a NIC to "DHCP and DNS" in the UI gave an internal server error with no further information; at first I assumed DNS was optional, but juju kept failing, so DNS turned out to be mandatory, and after digging into bind9 I found it was a bug fixed by touching an empty config file
3, sometimes a newly created maas node does not show in the UI; restarting the three maas services fixes it, or it may just be that my slow machine needs more time
4, then the maas nodes could not be logged into, which turned out to be by design: a maas node cannot be logged into before juju deploys a service onto it
5, then came the networking between containers inside VMs and the VMs themselves, which differs between providers: the maas provider uses the juju-br0 bridge
6, ERROR charm upload failed: 400 ({"Error":"error processing file upload: write /tmp/charm248900750: no space left on device"})

 

MAAS can also be installed more quickly with maas-deployer, which creates the maas VM on the physical host automatically:

export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales

sudo apt-get install maas-cli maas-deployer
cp /usr/share/maas-deployer/examples/deployment.yaml ~
maas-deployer -c deployment.yaml --debug
maas-deployer -c deployment.yaml --debug --use-existing
sudo maas maas nodes list
#sudo maas maas vlans create 0 name="API Network" vid=318
#sudo maas maas subnets create name="API Network" fabric=0 vlan=318 space=0 cidr="<cidr>" gateway_ip='<gw>'
http://192.168.122.2/MAAS  ubuntu/ubuntu

 

How to add physical nodes to MAAS

maas maas nodes new \
  autodetect_nodegroup=yes \
  hostname=<NEW_HOSTNAME> \
  architecture=amd64/generic \
  tags=<TAGNAME:compute> \
  mac_addresses=<MAC_ADDRESS_FOR_PXE> \
  power_type=ipmi \
  power_parameters_power_driver=LAN_2_0 \
  power_parameters_power_address=<IPMI_IP_ADDRESS> \
  power_parameters_power_user=<IPMI_USERNAME> \
  power_parameters_power_pass=<IPMI_PASSWORD>
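Newly enlisted nodes land in the New/Declared state; accept them so commissioning can start (nodes accept-all is part of the MAAS 1.x CLI):

maas maas nodes accept-all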


How to get a console on IPMI-capable physical nodes
maas maas maas set-config name=kernel_opts value='console=tty0 console=ttyS1,115200'
sudo apt-get install freeipmi-tools
ipmiconsole -h<IPMI_HOST> -u<IPMI_USER> -P



How to use the added nodes in Juju
maas maas node claim-sticky-ip-address \
    <SYSTEM_ID> \
    requested_address=<IP_ADDRESS_TO_CLAIM> \
    mac_address=<MAC_ADDRESS_FOR_THE_IP>
juju add-unit nodes-compute
juju add-unit --to <MACHINE_NUMBER> ceph-osd

 



Reference:
1, http://dinosaursareforever.blogspot.co.uk/2014/06/

2, https://insights.ubuntu.com/2014/05/21/ubuntu-cloud-documentation-14-04lts/

3, http://dinosaursareforever.blogspot.co.uk/2014/06/manually-deploying-openstack-with_16.html

4, Source code / installing MAAS from source: https://maas.ubuntu.com/docs/hacking.html

5, http://niusmallnan.github.io/_build/html/_templates/openstack/maas_juju.html

6, http://linux.dell.com/files/whitepapers/Deploying_Workloads_With_Juju_And_MAAS.pdf

7, https://github.com/yoshikado/maas-on-openstack/
