Install OpenStack on rhel6.2 ( by quqi99 )


Author: Zhang Hua   Published: 2013-04-17
Copyright notice: This article may be reproduced freely, provided the reproduction clearly states the original source, the author, and this copyright notice in the form of a hyperlink.
( http://blog.csdn.net/quqi99 )


This guide describes how to use devstack to install OpenStack on rhel6.2 with nova-network in flat mode.
More info can be found at http://devstack.org/
NOTE: Please make sure you run all of the following commands as a common (non-root) user rather than as root. A common user named hua is assumed.
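If that user does not exist yet, it can be created first (a minimal sketch; the user name hua is simply the example used throughout):
    su
    useradd -m hua
    passwd hua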

1, System precondition
   1.1 enable sudo for the common user.
       su
       vi /etc/sudoers , append the line below, then press :wq! to save and exit.
          hua ALL=(ALL) ALL
   1.2 disable selinux.
       vi /etc/selinux/config
         SELINUX=disabled
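       NOTE: a change in /etc/selinux/config only takes effect after a reboot; to switch selinux off immediately for the current session you can also run:
       sudo setenforce 0
       getenforce          # prints Permissive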
   1.3 [optional] first manually create the linux bridge br100 if you only have one NIC, to avoid losing your ssh connection.
       add the content below, then restart the network service with the command: sudo service network restart

    $ cat /etc/sysconfig/network-scripts/ifcfg-br100
    DEVICE=br100
    TYPE=Bridge
    ONBOOT=yes
    NM_CONTROLLED=no
    BOOTPROTO=static
    IPADDR=192.168.1.122
    GATEWAY=192.168.1.1
    NETMASK=255.255.255.0
    DNS1=8.8.8.8

    $ cat /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:0C:29:43:D4:7E
    NM_CONTROLLED=no
    ONBOOT=yes
    TYPE=Ethernet
    BRIDGE=br100
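
    After restarting the network service, you can verify that the bridge is up and eth0 is attached to it (assuming the bridge-utils package is installed):
    $ brctl show           # br100 should list eth0 under interfaces
    $ ip addr show br100   # should carry 192.168.1.122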
 
   1.4 start ssh service
       sudo service sshd start   
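       to also start sshd automatically at boot:
       sudo chkconfig sshd on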

   1.5 prepare an appropriate yum repository, then run "sudo yum clean all" and "sudo yum update" to refresh it.
       first mount the iso into the /mnt/iso directory.
        sudo mkdir /mnt/iso
        sudo mount -t iso9660 -o loop /dev/cdrom /mnt/iso

cat /etc/yum.repos.d/rhel.repo
[centos6]
name=centos6
baseurl=http://mirror.centos.org/centos/6/os/x86_64
enabled=1
gpgcheck=0

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/enterprise/$releasever/en/os/SRPMS/
enabled=0
gpgcheck=0
   
   1.6 install the ntp server on the control node, and the ntp client on all compute nodes
       sudo yum install ntp
       configure the ntp server on the control node:
          vi /etc/ntp.conf
             server 192.168.1.122
             fudge 192.168.1.122 stratum 10
             restrict 192.168.1.122 mask 255.255.255.0 nomodify notrap

       On each compute node, configure NTP to follow the controller node:
        #Comment out the centos NTP servers
    sudo sed -i 's/server 0.centos.pool.ntp.org/#server 0.centos.pool.ntp.org/g' /etc/ntp.conf
    sudo sed -i 's/server 1.centos.pool.ntp.org/#server 1.centos.pool.ntp.org/g' /etc/ntp.conf
    sudo sed -i 's/server 2.centos.pool.ntp.org/#server 2.centos.pool.ntp.org/g' /etc/ntp.conf
    sudo sed -i 's/server 3.centos.pool.ntp.org/#server 3.centos.pool.ntp.org/g' /etc/ntp.conf

	#Set the node to follow your controller node
    sudo sed -i 's/server ntp.centos.com/server 192.168.1.122/g' /etc/ntp.conf

    sudo chkconfig  ntpd on  
    sudo service ntpd restart
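
	after the restart, you can verify on a compute node that it is syncing from the controller:
	ntpq -p                # 192.168.1.122 should appear in the peer list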

    1.7 since everything runs as a common user, give that user ownership of the python library directories:
        sudo chown -R hua:root /usr/lib/python2.6/
        sudo chown -R hua:root /usr/lib64/python2.6/
 
    1.8 enable the hardware virtualization function in the BIOS
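        you can verify from linux that the extensions are enabled (a non-zero count means VT-x/AMD-V is exposed):
        egrep -c '(vmx|svm)' /proc/cpuinfo
        lsmod | grep kvm    # kvm_intel or kvm_amd appears once the module is loaded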

    1.9 upgrade qemu to version >= 0.15, because a libvirt issue causes the error "libvirtError: internal error unable to send file handle 'migrate': No file descriptor supplied via SCM_RIGHTS" with older versions
       sudo yum -y install http://mirror.centos.org/centos/6/os/x86_64/Packages/audiofile-0.2.6-11.1.el6.x86_64.rpm
       sudo yum -y install http://mirror.centos.org/centos/6/os/x86_64/Packages/esound-libs-0.2.41-3.1.el6.x86_64.rpm
       sudo yum -y install http://pkgs.repoforge.org/qemu/qemu-0.15.0-1.el6.rfx.x86_64.rpm
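
       afterwards you can confirm the upgraded version meets the requirement (assuming the repoforge package name is qemu):
       rpm -q qemu          # should report 0.15.0-1.el6.rfx or newer
       qemu-img --version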
   
       and if libvirt permission issues occur on any host, give the common user the necessary ownership:
    sudo chown -R hua:root ~/data/nova/instances
    sudo chown -R hua:root /var/cache/libvirt
    sudo chown -R hua:root /var/cache/glance
    sudo chown -R hua:root /var/lib/libvirt
    sudo chown -R hua:root /var/run/libvirt
    sudo chown -R hua:root /etc/libvirt
    sudo chown -R hua:root /etc/sysconfig/libvirt*

    sudo chown -R hua:root /usr/bin/qemu*


    Modify /etc/libvirt/qemu.conf
    user = "hua"
    group = "root"

    sudo groupadd libvirtd

    sudo usermod -a -G libvirtd hua
    sudo usermod -a -G root hua


    finally restart libvirtd with the command: sudo service libvirtd restart
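
    then check that the common user can reach libvirt without sudo:
    virsh -c qemu:///system list --all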

   1.10 fix some known python module issues.

        1.10.1, install paste from source instead of from the binary packages
                to avoid the error raised by the line "deploy.loadapp("config:%s" % '/etc/nova/api-paste.ini', 'osapi_compute')" when running nova-api or glance-api.
        rpm -qa|grep paste
        sudo rpm -e python-paste-script-1.7.3-5.el6_3.noarch
        sudo rpm -e python-paste-deploy-1.3.3-2.1.el6.noarch
        sudo rpm -e python-paste-1.7.4-2.el6.noarch
        sudo easy_install --upgrade paste
        sudo easy_install --upgrade PasteDeploy
                then patch the devstack script as below so it does not reinstall the PasteDeploy and lxml modules already installed in the step above.
        [hua@openstack devstack]$ git branch
          master
        * stable/grizzly
        [hua@openstack devstack]$ git diff
        diff --git a/functions b/functions
        index 88e4a62..8c67382 100644
        --- a/functions
        +++ b/functions
        @@ -279,6 +279,10 @@ function get_packages() {
                 continue
                 fi
        
        +             if [[ $line =~ 'paste' ||  $line =~ 'lxml' ]]; then
        +                continue
        +             fi
        +
                 if [[ $line =~ (.*)#.*dist:([^ ]*) ]]; then
                 # We are using BASH regexp matching feature.
                 package=${BASH_REMATCH[1]}


         1.10.2,  Note you need the gcc build tools and the libxml headers installed for lxml to install successfully via pip,
                 therefore install the libxml2-devel and libxslt-devel packages.
                sudo yum install libxml2-devel
                sudo yum install libxslt-devel

                sudo yum install python-devel
        sudo easy_install --upgrade lxml  ( or run sudo python setup.py install from the lxml source )

         1.10.3,  Download error: [Errno 110] Connection timed out -- Some packages may not be found! Reading http://pytz.sourceforge.net
                $ find . -name "pip*" -exec grep --color -H "pytz" {} \;
                  ./horizon/tools/pip-requires:pytz
                $ sudo yum install pytz
                then comment out the line below in the file ./horizon/tools/pip-requires
                [hua@openstack horizon]$ pwd
        /home/hua/horizon
        [hua@openstack horizon]$ git diff
        diff --git a/tools/pip-requires b/tools/pip-requires
        index 29e2cdd..e484d6c 100644
        --- a/tools/pip-requires
        +++ b/tools/pip-requires
        @@ -9,7 +9,7 @@ python-keystoneclient
         python-novaclient>=2.12.0,<3
         python-quantumclient>=2.2.0,<3.0.0
         python-swiftclient>1.1,<2
        -pytz
        +#pytz
        
         # Horizon Utility Requirements
         lockfile # for SECURE_KEY generation

         1.10.4, for qpid, update the file $devstack/lib/rpc_backend as below:
                [hua@openstack devstack]$ git diff lib/rpc_backend
        diff --git a/lib/rpc_backend b/lib/rpc_backend
        index 7d165a4..e62302b 100644
        --- a/lib/rpc_backend
        +++ b/lib/rpc_backend
        @@ -57,7 +57,7 @@ function cleanup_rpc_backend {
             fi
             elif is_service_enabled qpid; then
             if is_fedora; then
        -            uninstall_package qpid-cpp-server-daemon
        +            uninstall_package qpid-cpp-server
             elif is_ubuntu; then
                 uninstall_package qpidd
             else
        @@ -87,7 +87,7 @@ function install_rpc_backend() {
             rm -f "$tfile"
             elif is_service_enabled qpid; then
             if is_fedora; then
        -            install_package qpid-cpp-server-daemon
        +            install_package qpid-cpp-server
             elif is_ubuntu; then
                 install_package qpidd
                 sudo sed -i '/PLAIN/!s/mech_list: /mech_list: PLAIN /' /etc/sasl2/qpidd.conf

         1.10.5, Install and configure mysql so that it can be accessed remotely, e.g. mysql -uroot -ppassword -h192.168.1.122
           yum install mysql mysql-server MySQL-python
           chkconfig --level 2345 mysqld on
           service mysqld start
           mysqladmin -u root -p password password   # set the root password to 'password'
           sudo mysql -uroot -ppassword -h127.0.0.1 -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' identified by 'password';"

           # if you need to reset a previous mysql root password
           sudo service mysqld stop && sudo mysqld_safe --skip-grant-tables &
           use mysql
           UPDATE user SET password=PASSWORD('password') WHERE user='root';
           flush privileges;

           then kill the mysqld_safe process and start the mysqld service
           sudo mysql -uroot -ppassword -h127.0.0.1 -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' identified by 'password';"
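
           then verify that remote access works, e.g. from another host:
           mysql -uroot -ppassword -h192.168.1.122 -e "SHOW DATABASES;"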


           some basic operations when using postgresql are below:

sudo service postgresql stop
sudo vi /var/lib/pgsql/data/pg_hba.conf
change
local   all             all                                     password
to
local   all             all                                     trust

sudo -u postgres psql
sudo -u postgres psql template1
sudo service postgresql start
[hua@laptop devstack]$ sudo psql -U postgres -h 172.16.1.122 -W
psql (9.1.9)
Type "help" for help.

postgres=# \l
postgres=# \c nova
postgres=# \d

psql -h172.16.1.122 -Upostgres -dtemplate1 -c 'DROP DATABASE IF EXISTS quqi'



         1.10.6, install nodejs 0.4 if there is a 403 error when running the dashboard (Horizon).
        sudo yum remove nodejs
        git clone http://github.com/joyent/node.git
        cd node
        git checkout v0.4
                sudo yum install gcc-c++
        ./configure --without-ssl
        make
        sudo make install
     
      1.11 configure /etc/hosts file, eg:
           192.168.1.122 openstack

      1.12 we don't create the user rc files; use the env variables below instead:
                #export SERVICE_TOKEN=ADMIN
        #export SERVICE_ENDPOINT=http://192.168.1.122:35357/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.122:5000/v2.0
export OS_AUTH_STRATEGY=keystone

            [hua@openstack devstack]$ git diff stack.sh
        diff --git a/stack.sh b/stack.sh
        index 8c92ea6..1b89edf 100755
        --- a/stack.sh
        +++ b/stack.sh
        @@ -1042,9 +1042,6 @@ fi
         # This step also creates certificates for tenants and users,
         # which is helpful in image bundle steps.
        
        -if is_service_enabled nova && is_service_enabled key; then
        -    $TOP_DIR/tools/create_userrc.sh -PA --target-dir $TOP_DIR/accrc
        -fi
        
        
         # Install Images
   
    1.13 AuthenticationFailure: Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Credentials cache file '/tmp/krb5cc_500' not found)
           to avoid a glance hang, see https://bugs.launchpad.net/glance/+bug/1100317
           [hua@openstack devstack]$ git diff lib/glance
        diff --git a/lib/glance b/lib/glance
        index 583f879..a15122a 100644
        --- a/lib/glance
        +++ b/lib/glance
        @@ -102,7 +102,7 @@ function configure_glance() {
             iniset $GLANCE_API_CONF keystone_authtoken admin_user glance
             iniset $GLANCE_API_CONF keystone_authtoken admin_password $SERVICE_PASSWORD
             if is_service_enabled qpid; then
        -        iniset $GLANCE_API_CONF DEFAULT notifier_strategy qpid
        +        iniset $GLANCE_API_CONF DEFAULT notifier_strategy noop
             elif [ -n "$RABBIT_HOST" ] &&  [ -n "$RABBIT_PASSWORD" ]; then
             iniset $GLANCE_API_CONF DEFAULT notifier_strategy rabbit
             fi




2, use devstack to install openstack
   2.1 install git and fetch the devstack source code:
       cd ~
       sudo yum install git
       git clone git://github.com/openstack-dev/devstack.git
       cd ~/devstack
       git branch -r
       git checkout stable/grizzly
 

   2.2 edit the localrc file (~/devstack/localrc) of devstack based on your own situation.
   NOTE: for a compute node in multi-node mode, you just need the line "ENABLED_SERVICES=n-cpu,n-api" and to update the CONTROL_HOST
      and SERVICE_HOST variables.
   
DEST=~
#OFFLINE=true
FLAT_INTERFACE=eth1
HOST_IP=192.168.1.122
CONTROL_HOST=$HOST_IP
SERVICE_HOST=$HOST_IP

FIXED_RANGE=10.0.1.0/24
FIXED_NETWORK_SIZE=256
NETWORK_GATEWAY=10.0.1.1
FLOATING_RANGE=192.168.1.128/25

MYSQL_USER=root
MYSQL_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_TOKEN=ADMIN
SERVICE_PASSWORD=password
ADMIN_PASSWORD=password
SYSLOG=false
LOGFILE=$DEST/logs/stack.log
SCREEN_LOGDIR=$DEST/logs

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-net,n-cond,cinder,c-sch,c-api,c-vol,n-sch,n-cauth,horizon,rabbit,mysql
#ENABLED_SERVICES=n-cpu,n-api
MULTI_HOST=0
MYSQL_HOST=$CONTROL_HOST
RABBIT_HOST=$CONTROL_HOST
GLANCE_HOSTPORT=$CONTROL_HOST:9292
KEYSTONE_TOKEN_FORMAT=UUID

HORIZON_BRANCH=stable/grizzly
CINDER_BRANCH=stable/grizzly
CINDERCLIENT_BRANCH=stable/grizzly
NOVA_BRANCH=stable/grizzly
NOVACLIENT_BRANCH=stable/grizzly
GLANCE_BRANCH=stable/grizzly
GLANCECLIENT_BRANCH=stable/grizzly
KEYSTONE_BRANCH=stable/grizzly
KEYSTONECLIENT_BRANCH=stable/grizzly

   2.3 run the devstack script
       FORCE=yes ./stack.sh
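
       if stack.sh fails partway, a common pattern is to clean up and rerun it, watching the log configured in localrc:
       ./unstack.sh
       FORCE=yes ./stack.sh
       tail -f ~/logs/stack.log    # in another terminal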
   
3, now you can use the command-line tools to verify that openstack has been installed correctly.
   3.1, first add the env variables:
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://192.168.1.122:35357/v2.0

export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://192.168.1.122:5000/v2.0
export OS_AUTH_STRATEGY=keystone

   3.2, the following are some basic commands to deploy a VM, for example.
   
$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1  | m1.tiny   | 512       | 0    | 0         |      | 1     | 1.0         | True      | {}          |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      | {}          |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      | {}          |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
$ keystone user-list
+----------------------------------+--------+---------+--------------------+
|                id                |  name  | enabled |       email        |
+----------------------------------+--------+---------+--------------------+
| 59618a5aa3d04385ad3119ca9b46a879 | cinder |   True  | cinder@example.com |
| a44adbec00eb43c4b65b696b3879695f | glance |   True  | glance@example.com |
| ae7a1d5ed6714e018b6227bde50c65c9 |  demo  |   True  |  demo@example.com  |
| b3111f672ee04fb59d158e9ea60bfa6c |  nova  |   True  |  nova@example.com  |
| db904e06dbc1482191b022dc6f509da7 | admin  |   True  | admin@example.com  |
+----------------------------------+--------+---------+--------------------+
$ glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
7865c840-af32-4a9c-bc51-cc00542beb45 cirros-0.3.0-x86_64-uec-ramdis ari                  ari                         2254249
5f0a6d03-1f72-4d19-ab91-61daf8996c83 cirros-0.3.0-x86_64-uec        ami                  ami                        25165824
43c55766-11ee-43e0-b77a-49defad1d6c1 cirros-0.3.0-x86_64-uec-kernel aki                  aki                         4731440

$ nova --debug boot --flavor 1 --image 5f0a6d03-1f72-4d19-ab91-61daf8996c83 i1

$ nova list
+--------------------------------------+------+--------+------------------+
| ID                                   | Name | Status | Networks         |
+--------------------------------------+------+--------+------------------+
| 17c7b307-5a66-4fa0-a197-734a2007afbf | i1   | ACTIVE | private=10.0.1.2 |
+--------------------------------------+------+--------+------------------+


   
Some known issues on rhel6.2:

1, if you encounter the error "No handlers could be found for logger "keystoneclient.v2_0.client"" when running the command "keystone user-list",
   you need to specify the endpoint explicitly, like "keystone --token ADMIN --endpoint http://192.168.1.122:35357/v2.0 user-list", or add "SERVICE_ENDPOINT" to your env variables.
   
2, if you encounter the error "ImproperlyConfigured: Error importing middleware horizon.middleware: "cannot import name SafeExceptionReporterFilter"",
    you can run the command "cd $horizon && sudo pip-python install -r tools/pip-requires" and then
    sudo easy_install --upgrade django

3, if there is an issue with apache when running horizon, run the development server directly instead:
    ./manage.py runserver 192.168.1.122:8000
    sudo iptables -I INPUT -p tcp --dport 8000 -j ACCEPT


4, when mysql runs on a remote host, the mysql commands in lib/databases/mysql need -h$MYSQL_HOST:
/gpfssan3/shendian/grizzly/devstack
-bash-4.1$ git diff lib/databases/mysql
diff --git a/lib/databases/mysql b/lib/databases/mysql
index 056aec4..211d797 100644
--- a/lib/databases/mysql
+++ b/lib/databases/mysql
@@ -36,8 +36,8 @@ function cleanup_database_mysql {
 function recreate_database_mysql {
     local db=$1
     local charset=$2
-    mysql -u$DATABASE_USER -p$DATABASE_PASSWORD -e "DROP DATABASE IF EXISTS $db;"
-    mysql -u$DATABASE_USER -p$DATABASE_PASSWORD -e "CREATE DATABASE $db CHARACTER SET $charset;"
+    mysql -u$DATABASE_USER -p$DATABASE_PASSWORD -h$MYSQL_HOST -e "DROP DATABASE IF EXISTS $db;"
+    mysql -u$DATABASE_USER -p$DATABASE_PASSWORD -h$MYSQL_HOST -e "CREATE DATABASE $db CHARACTER SET $charset;"
 }
 
 function configure_database_mysql {


5, Starting OpenStack Nova Compute Worker, TRACE: TypeError: wait() got an unexpected keyword argument 'timeout'

Fix: edit /usr/lib/python2.6/site-packages/eventlet/green/subprocess.py around line 35:
  -def wait(self, check_interval=0.01):
  +def wait(self, check_interval=0.01, timeout=None):



Added 2013.05.30

Today, when starting the nova-network process from eclipse on redhat6.4, eclipse hung. ps showed the process in state T (stopped), and sending it a signal with "kill -SIGCONT" reactivated the stopped process. That small detail turned out to be the cause of such a serious problem. (There is another case: when running nova-compute with shared storage mounted over nfs, if the nfs server goes down, the client does not reconnect automatically, which causes the same problem and is hard to diagnose.)

Also,

1, when running devstack on redhat6.4 the user gets logged out automatically, because devstack restarts dbus; comment that line out: #sudo service messagebus restart

2, if br-ex was configured manually, external network access may be interrupted while devstack runs, because of: # sudo ip addr flush dev $PUBLIC_BRIDGE

3, redhat6.4 uses pip, not pip-python:

   diff --git a/functions b/functions
index 669fa69..aa02f88 100644
--- a/functions
+++ b/functions
@@ -1409,7 +1409,7 @@ function get_rootwrap_location() {
 # get_pip_command
 function get_pip_command() {
     if is_fedora; then
-        which pip-python
+        which pip
     else
         which pip
     fi


Migration

If you want libvirt to support live-migration, the following steps are needed:
1, configure nfs shared storage for instances
   1.1, select the control node as the nfs server,
      vi /etc/exports
       /home/olympics/source/share/instances *(rw,sync,fsid=0,no_root_squash)
    
   1.2, restart the nfs service
      chkconfig nfs on
      service nfs restart
      
   1.3, on all hosts (including the ControlNode, since it is also a compute node), mount it:
       #showmount -e 9.123.106.28
       Export list for 9.123.106.28:
       /home/olympics/source/data/nova/instances *

       chmod o+x /home/olympics/source/data/nova/instances
       
       mount -t nfs -o vers=3 9.123.106.28:/home/olympics/source/share/instances /home/olympics/source/data/nova/instances
       NOTE: add "-t nfs -o vers=3" to avoid a flaw in nfsv4 ("chown: changing ownership of myfile.ext : Invalid argument"); see: http://www.goldfisch.at/knowledge/460
       
       or you can add it to /etc/fstab:
       9.123.106.28:/home/olympics/source/share/instances /home/olympics/source/data/nova/instances nfs vers=3 0 0
       mount -a -v
       
       finally, you can use the "df -k" or "mount" command to check:
       # df -k
       Filesystem           1K-blocks      Used Available Use% Mounted on
       9.123.106.28:/        16070144  12312576   2941184  81% /home/olympics/source/data/nova/instances
       # mount
       9.123.106.28:/ on /home/olympics/source/data/nova/instances type nfs3 (rw,addr=9.123.106.28,clientaddr=9.123.106.30)

Images are cached under the _base directory on each compute node. To reduce the load on glance, the _base directory can be shared, but nfsv3's file-locking implementation is poor (https://www.mail-archive.com/openstack@lists.openstack.org/msg06131.html), so it is better to use nfsv4. One way is as follows:

## setup nfs server 
sudo apt-get install -y nfs-kernel-server 
#set gid/uid same as nova node 
sudo addgroup --system --gid 112 nova 
sudo adduser --system --uid 107 --ingroup nova --home /var/lib/nova nova 
# change /srv/nova/instances to your shared folder 
sudo mkdir -p /srv/nova/instances 
sudo chown nova:nova /srv/nova/instances 
echo '/srv/nova/instances *(rw,sync,fsid=0,no_root_squash)' | sudo tee -a /etc/exports 
sudo service nfs-kernel-server restart 

## setup nfs mount on nova-compute 
sudo apt-get install -y nfs-common 

if ! grep -q nfs4 /etc/fstab; then 
echo 'juju-machine-12.openstacklocal:/ /var/lib/nova/instances nfs4 defaults 0 0' | sudo tee -a /etc/fstab 
fi 
sudo mount -a 
mount | grep nfs4 

       
2, on all hosts, update the libvirt configurations.
Modify /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "none"

Modify /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
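
After restarting libvirtd on each host, you can verify the TCP listener is reachable from the other hosts (the IP is the example control node):
sudo service libvirtd restart
virsh -c qemu+tcp://9.123.106.28/system list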


3, double check there exists following options in nova.conf
state_path=/home/olympics/source/data/nova
lock_path=/home/olympics/source/data/nova
instances_path=/home/olympics/source/data/nova/instances


4, you can use the command below to do a live-migration.
   nova live-migration 25fc1b2e-fb86-4ca7-8cb7-773e316c502a Openstack_x86_ControlNode
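
   afterwards you can confirm where the instance ended up (as admin; the instance id is the example above):
   nova show 25fc1b2e-fb86-4ca7-8cb7-773e316c502a | grep -i host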





Reference:

1, https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst
