Installing OpenStack Icehouse from Local Repositories on CentOS 6.5 (Neutron flat networking)

This post is a quick-install walkthrough for OpenStack, written entirely by me; please do not repost or plagiarize.

Note: I have completed this installation successfully myself, and the yum output shown after each install command below is the normal, expected output. This document summarizes the steps; the installation and configuration of every service is wrapped in fixed scripts, available for download here:

Link: http://pan.baidu.com/s/1eSDS1kY  Password: ocuj

Lab environment:

1 CentOS 6.5 x86_64, English, minimal installation

2 Local yum repositories for OpenStack (download links: CentOS 6.5: http://pan.baidu.com/s/1sl3wDzv password n2ze; OpenStack Icehouse: http://pan.baidu.com/s/1c2JwlK8 password fw6y)

3 Two nodes (controller, compute)

4 Two NICs per node:

eth0  management / public

eth1  private; provides the VM IP addresses

5 VirtualBox as the virtualization software

Hardware requirements:

controller: 2 GB RAM, 100 GB disk, two NICs

compute: 4 GB RAM, 120 GB disk, two NICs

The VirtualBox network settings are shown in the screenshot below:




The detailed steps follow.

1 Install the operating system

Create two virtual machines in VirtualBox: one controller node and one compute node.

The VM configuration is shown in the screenshot below:


This is the main configuration of the controller node; the compute node is similar, just with more memory.

Note the following points during installation:

(1) An all-English installation is recommended; it goes noticeably faster.

(2) For the time zone, choose Asia/Shanghai and untick the automatic time-zone update option.

(3) Use custom partitioning. My layout: 50 GB for the root filesystem, 200 MB for /boot, swap at twice the RAM size, plus one 10 GB partition with no mount point (reserved for cinder and swift later; on the compute node, create two unmounted partitions).

(4) Choose the minimal installation type on both nodes; it saves a lot of time.


2 Basic setup

After the system is installed, configure the following.

controller node:

(1) Stop the iptables service and disable it at boot:

[root@controller ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter mangle na[  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@controller ~]# chkconfig iptables off
[root@controller ~]#

(2) Set SELinux to disabled in /etc/sysconfig/selinux:

[root@controller ~]# cat /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
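The same change can be scripted; a small sketch, wrapped in a function so it can be tried on a copy of the file first. Note that on CentOS 6, /etc/sysconfig/selinux is a symlink to /etc/selinux/config:

```shell
# Set SELINUX=disabled in an SELinux config file.
# Defaults to the system file; pass another path to test on a copy.
disable_selinux() {
    cfg="${1:-/etc/selinux/config}"
    cp "$cfg" "$cfg.bak"                             # keep a backup
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```

On the real system, run `disable_selinux` and then `setenforce 0` (or reboot) so the change takes effect immediately.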
(3) Add both hosts to /etc/hosts for name resolution:

[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.20.0.200 controller
10.20.0.201 compute


[root@controller ~]#

(4) Switch yum to the local repositories. Back up the files under /etc/yum.repos.d/, then create local.repo:

[root@controller ~]# cat /etc/yum.repos.d/local.repo
[centos]
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack-icehouse]
baseurl=file:///opt/iaas-repo
gpgcheck=0
enabled=1
[root@controller ~]#
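The backup-and-create step can be scripted as well; a sketch, parameterized on the target directory so it can be tried somewhere harmless first:

```shell
# Back up existing .repo files and write the local.repo shown above.
# Defaults to /etc/yum.repos.d; pass another directory to test.
setup_local_repo() {
    dir="${1:-/etc/yum.repos.d}"
    mkdir -p "$dir/backup"
    mv "$dir"/*.repo "$dir/backup/" 2>/dev/null || true
    cat > "$dir/local.repo" <<'EOF'
[centos]
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack-icehouse]
baseurl=file:///opt/iaas-repo
gpgcheck=0
enabled=1
EOF
}
```

After running it for real, `yum clean all && yum repolist` should show both repositories.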

That completes the preparation on the controller node.

compute node:

(1) Configure the NIC:

[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=08:00:27:C1:53:F2
TYPE=Ethernet
UUID=30923f68-51f3-41e5-bb79-6061b213378a
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=10.20.0.201
NETMASK=255.255.255.0
GATEWAY=10.20.0.1
[root@compute ~]#
(2) Add both hosts to /etc/hosts:

[root@compute ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.20.0.200 controller
10.20.0.201 compute
[root@compute ~]#

(3) Switch yum to the controller's repositories. We will run an FTP server on the controller node, and the compute node pulls all of its packages from it. Back up the files under /etc/yum.repos.d/, then create local.repo:

[root@compute ~]# cat /etc/yum.repos.d/local.repo
[centos]
baseurl=ftp://10.20.0.200/centos
gpgcheck=0
enabled=1
[openstack-icehouse]
baseurl=ftp://10.20.0.200/iaas-repo/
gpgcheck=0
enabled=1
[root@compute ~]#

That completes the preparation on the compute node.

Test:

Ping the compute node from the controller:

[root@controller ~]# ping -c 4 compute
PING compute (10.20.0.201) 56(84) bytes of data.
64 bytes from compute (10.20.0.201): icmp_seq=1 ttl=64 time=0.258 ms
64 bytes from compute (10.20.0.201): icmp_seq=2 ttl=64 time=0.388 ms
64 bytes from compute (10.20.0.201): icmp_seq=3 ttl=64 time=0.396 ms
64 bytes from compute (10.20.0.201): icmp_seq=4 ttl=64 time=0.444 ms

--- compute ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.258/0.371/0.444/0.071 ms
[root@controller ~]#
Ping the controller from the compute node:

[root@compute ~]# ping -c 4 controller
PING controller (10.20.0.200) 56(84) bytes of data.
64 bytes from controller (10.20.0.200): icmp_seq=1 ttl=64 time=0.176 ms
64 bytes from controller (10.20.0.200): icmp_seq=2 ttl=64 time=0.218 ms
64 bytes from controller (10.20.0.200): icmp_seq=3 ttl=64 time=0.220 ms
64 bytes from controller (10.20.0.200): icmp_seq=4 ttl=64 time=0.220 ms

--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 0.176/0.208/0.220/0.023 ms
[root@compute ~]#

Once everything above is configured, reboot both machines.

After the reboot, upload the two repository ISOs to the /home directory of the controller node with an FTP client, then:

[root@controller home]# mount -o loop CentOS6.5.iso /mnt/
[root@controller home]# cd /mnt/
[root@controller mnt]# ll
total 682
-r--r--r-- 2 root root     14 Nov 29  2013 CentOS_BuildTag
dr-xr-xr-x 3 root root   2048 Nov 29  2013 EFI
-r--r--r-- 2 root root    212 Nov 28  2013 EULA
-r--r--r-- 2 root root  18009 Nov 28  2013 GPL
dr-xr-xr-x 3 root root   2048 Nov 29  2013 images
dr-xr-xr-x 2 root root   2048 Nov 29  2013 isolinux
dr-xr-xr-x 2 root root 655360 Nov 29  2013 Packages
-r--r--r-- 2 root root   1354 Nov 28  2013 RELEASE-NOTES-en-US.html
dr-xr-xr-x 2 root root   4096 Nov 29  2013 repodata
-r--r--r-- 2 root root   1706 Nov 28  2013 RPM-GPG-KEY-CentOS-6
-r--r--r-- 2 root root   1730 Nov 28  2013 RPM-GPG-KEY-CentOS-Debug-6
-r--r--r-- 2 root root   1730 Nov 28  2013 RPM-GPG-KEY-CentOS-Security-6
-r--r--r-- 2 root root   1734 Nov 28  2013 RPM-GPG-KEY-CentOS-Testing-6
-r--r--r-- 1 root root   3380 Nov 29  2013 TRANS.TBL
[root@controller mnt]# cp -rfv * /opt/centos

After the copy finishes, unmount the CentOS 6.5 ISO and mount the OpenStack ISO instead:

[root@controller ~]# umount /mnt/
[root@controller ~]# mount -o loop /home/openstack.iso /mnt/
[root@controller ~]# cd /mnt/
[root@controller mnt]# ll
total 4
drwxrwxr-x 6 nobody nobody 2048 Nov 13  2015 iaas-repo
drwxrwxr-x 2 nobody nobody 2048 Mar  6  2015 images
[root@controller mnt]# cp -rfv * /opt/

With both copies done, install the FTP service on the controller node:

[root@controller ~]# yum install vsftpd -y
Loaded plugins: fastestmirror, priorities
Repository 'centos' is missing name in configuration, using id
Repository 'openstack-icehouse' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Package vsftpd-2.2.2-11.el6_4.1.x86_64 already installed and latest version
Nothing to do
[root@controller ~]#
Next, point the FTP root at the repositories and allow anonymous access: add the single line anon_root=/opt/ to /etc/vsftpd/vsftpd.conf.
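That one-line edit can be made idempotently from the shell; a small sketch, wrapped in a function so it can be tried on a copy of the file first:

```shell
# Add anon_root=/opt/ to vsftpd.conf if it is not already there.
# Defaults to the real config; pass another path to test on a copy.
set_anon_root() {
    cfg="${1:-/etc/vsftpd/vsftpd.conf}"
    grep -q '^anon_root=' "$cfg" || echo 'anon_root=/opt/' >> "$cfg"
}
```

Running it more than once does not duplicate the line.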

Then restart the FTP service and enable it at boot:

[root@controller ~]# service vsftpd restart
Shutting down vsftpd:                                      [  OK  ]
Starting vsftpd for vsftpd:                                [  OK  ]
[root@controller ~]# chkconfig vsftpd on
[root@controller ~]#
Finally, verify from the compute node:

[root@compute ~]# yum update
Loaded plugins: fastestmirror, priorities
Repository 'centos' is missing name in configuration, using id
Repository 'openstack-icehouse' is missing name in configuration, using id
Determining fastest mirrors
centos                                                                                                                                                   | 4.0 kB     00:00     
centos/primary_db                                                                                                                                        | 4.4 MB     00:00     
openstack-icehouse                                                                                                                                       | 2.9 kB     00:00     
openstack-icehouse/primary_db                                                                                                                            | 3.0 MB     00:00     
Setting up Update Process
No Packages marked for Update
[root@compute ~]#

With yum set up, install and configure the NTP service on the controller node:

[root@controller ~]# yum install -y ntp
Loaded plugins: fastestmirror, priorities
Repository 'centos' is missing name in configuration, using id
Repository 'openstack-icehouse' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Package ntp-4.2.6p5-1.el6.centos.x86_64 already installed and latest version
Nothing to do
[root@controller ~]#
Edit the NTP configuration file /etc/ntp.conf:

[root@controller ~]# cat /etc/ntp.conf  
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict -6 ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 127.127.1.0
fudge 127.127.1.0 stratum 10

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography.
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats
[root@controller ~]#
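Relative to the stock CentOS ntp.conf, the edit above comments out the public pool servers and uses the local clock as the time source. A scripted sketch of the same change, parameterized so it can be tried on a copy first:

```shell
# Comment out the centos pool servers and add the local clock source.
# Defaults to /etc/ntp.conf; pass another path to test on a copy.
use_local_clock() {
    cfg="${1:-/etc/ntp.conf}"
    # turn "server N.centos.pool.ntp.org ..." lines into comments
    sed -i 's/^server \([0-9]\.centos\.pool\.ntp\.org.*\)/#server \1/' "$cfg"
    # append the local clock lines once
    grep -q '^server 127\.127\.1\.0' "$cfg" || cat >> "$cfg" <<'EOF'
server 127.127.1.0
fudge 127.127.1.0 stratum 10
EOF
}
```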
Start ntpd and enable it at boot:

[root@controller ~]# service ntpd restart
Shutting down ntpd:                                        [  OK  ]
Starting ntpd:                                             [  OK  ]
[root@controller ~]# chkconfig ntpd on
[root@controller ~]#

Then synchronize the time on the compute node:

[root@compute ~]# yum install ntp
Loaded plugins: fastestmirror, priorities
Repository 'centos' is missing name in configuration, using id
Repository 'openstack-icehouse' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Package ntp-4.2.6p5-1.el6.centos.x86_64 already installed and latest version
Nothing to do
[root@compute ~]#
[root@compute ~]# ntpdate controller
14 Dec 02:56:14 ntpdate[10847]: adjust time server 10.20.0.200 offset 0.250648 sec
[root@compute ~]#


Next, install the qpid service, which provides messaging between the OpenStack components:

[root@controller ~]# yum install qpid-cpp-server -y
Loaded plugins: fastestmirror, priorities
Repository 'centos' is missing name in configuration, using id
Repository 'openstack-icehouse' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
Setting up Install Process
Package qpid-cpp-server-0.18-18.el6.x86_64 already installed and latest version
Nothing to do

Disable qpid authentication:

[root@controller ~]# cat /etc/qpidd.conf
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Configuration file for qpidd. Entries are of the form:
#   name=value
#
# (Note: no spaces on either side of '='). Using default settings:
# "qpidd --help" or "man qpidd" for more details.
#
# If you are using DIGEST-MD5 for client connections to
# brokers, add to this file the following line:
#
#   auth=yes
#
# If you are using GSSAPI for client connections to
# brokers, add to this file the following two lines:
#
#   auth=yes
#   realm=QPID
#
cluster-mechanism=DIGEST-MD5 ANONYMOUS
acl-file=/etc/qpid/qpidd.acl
auth=no
[root@controller ~]#
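The auth=no setting can be enforced with a small script as well; a sketch that replaces an existing auth= line or appends one, parameterized for testing on a copy:

```shell
# Force auth=no in qpidd.conf.
# Defaults to the real config; pass another path to test on a copy.
qpid_disable_auth() {
    cfg="${1:-/etc/qpidd.conf}"
    if grep -q '^auth=' "$cfg"; then
        sed -i 's/^auth=.*/auth=no/' "$cfg"
    else
        echo 'auth=no' >> "$cfg"
    fi
}
```

Restart the broker afterwards (`service qpidd restart`) and enable it at boot (`chkconfig qpidd on`) so it is up before the OpenStack services start.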

With all of the above in place, the basic system setup is done. Next, install the base services that OpenStack depends on.


3 Install the base OpenStack services (each of the following can be installed by running its script)

Controller node:

(1) Install the MySQL database by running install-mysql.sh

(2) Install the keystone service by running install-keystone.sh

Verify:

[root@controller ~]# keystone user-list
+----------------------------------+---------+---------+-------+
|                id                |   name  | enabled | email |
+----------------------------------+---------+---------+-------+
| 03bc017104c048fe9ccb3b68a73e8afb |  admin  |   True  |       |
| c89c26a82e944083a0a165c16231a42e |   demo  |   True  |       |
+----------------------------------+---------+---------+-------+
[root@controller ~]#
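Commands like `keystone user-list` only work once admin credentials are exported into the shell. A typical credentials file is sketched below; the username and the 000000 password match the dashboard login used at the end of this post, but the exact values are assumptions that must match whatever install-keystone.sh configured:

```shell
# Write an admin credentials file (the values are assumptions --
# adjust them to match your install-keystone.sh settings).
write_admin_rc() {
    rc="${1:-$HOME/keystonerc_admin}"
    cat > "$rc" <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
EOF
}
```

Run `source ~/keystonerc_admin` before using the keystone/glance/nova/neutron CLIs.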
(3) Install the glance service by running install-glance.sh

Verify:

[root@controller ~]# glance image-list
+----+------+-------------+------------------+------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+----+------+-------------+------------------+------+--------+
+----+------+-------------+------------------+------+--------+
[root@controller ~]#

(4) Install the nova service by running install-nova-controller.sh

After it completes, install nova on the compute node as well by running install-nova-compute.sh

Once both nodes are done, verify on the controller node:

[root@controller ~]# nova service-list
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| nova-cert        | controller | internal | enabled | up    | 2016-12-13T19:05:22.000000 | -               |
| nova-consoleauth | controller | internal | enabled | up    | 2016-12-13T19:05:20.000000 | -               |
| nova-conductor   | controller | internal | enabled | up    | 2016-12-13T19:05:19.000000 | -               |
| nova-scheduler   | controller | internal | enabled | up    | 2016-12-13T19:05:20.000000 | -               |
| nova-compute     | compute    | nova     | enabled | up    | 2016-12-13T19:05:18.000000 | -               |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
[root@controller ~]#
(5) Install the neutron service. Before installing it, configure the second NIC on both nodes:

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=08:00:27:B5:CB:CA
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no

IPADDR=172.16.100.10
NETMASK=255.255.255.0
GATEWAY=172.16.100.1

[root@controller ~]#

[root@compute ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=08:00:27:DE:5A:BD
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no

IPADDR=172.16.100.20
NETMASK=255.255.255.0
GATEWAY=172.16.100.1

[root@compute ~]#
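With flat networking, eth1 on each node must be attached to an Open vSwitch bridge that the plugin maps to physnet1. The install scripts may already handle this; if not, the manual commands would look roughly like the following (the bridge name br-eth1 is an assumption and must match the bridge_mappings entry, e.g. physnet1:br-eth1, in your OVS plugin configuration):

```shell
# Attach the physical NIC to an OVS bridge on both nodes.
# br-eth1 is an assumed name; it must match bridge_mappings in the
# neutron OVS plugin config (e.g. physnet1:br-eth1).
ovs-vsctl add-br br-eth1
ovs-vsctl add-port br-eth1 eth1
ovs-vsctl show        # confirm the bridge and port exist
```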

With both NICs configured, run install-neutron-controller.sh on the controller node first.

Verify:

[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+
| id                                   | agent_type         | host       | alive | admin_state_up |
+--------------------------------------+--------------------+------------+-------+----------------+
| 32ba7f2f-ff57-4774-b92f-467e5d7df045 | DHCP agent         | controller | :-)   | True           |
| 98ecd824-cc30-4d8b-b88f-9e31e1da89c1 | L3 agent           | controller | :-)   | True           |
| d7bc4acd-955e-4e82-bf18-49c832d7de3a | Open vSwitch agent | controller | :-)   | True           |
+--------------------------------------+--------------------+------------+-------+----------------+
[root@controller ~]#

Then run install-neutron-compute.sh on the compute node.

After it completes, verify again on the controller node:

[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+
| id                                   | agent_type         | host       | alive | admin_state_up |
+--------------------------------------+--------------------+------------+-------+----------------+
| 077df758-8178-4558-997c-6c37210ece00 | Metadata agent     | compute    | :-)   | True           |
| 32ba7f2f-ff57-4774-b92f-467e5d7df045 | DHCP agent         | controller | :-)   | True           |
| 78f791d7-8801-4740-82be-462b9360480b | DHCP agent         | compute    | :-)   | True           |
| 97e56832-b467-442c-a3ac-edb2d31146a9 | Open vSwitch agent | compute    | :-)   | True           |
| 98ecd824-cc30-4d8b-b88f-9e31e1da89c1 | L3 agent           | controller | :-)   | True           |
| d7bc4acd-955e-4e82-bf18-49c832d7de3a | Open vSwitch agent | controller | :-)   | True           |
+--------------------------------------+--------------------+------------+-------+----------------+
[root@controller ~]#
If you see output like the above, neutron was installed successfully. If the output is wrong or missing, check the corresponding log files under /var/log/neutron.


4 With the base services installed, install the openstack-dashboard service on the controller node; it provides the web UI:

[root@controller ~]# yum install -y memcached python-memcached mod_wsgi openstack-dashboard-2014.1.3-1.el6

(openstack-dashboard must be given with its version number here; without it, yum reports an error)

After installation, configure openstack-dashboard by editing /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['10.20.0.200', 'localhost']

OPENSTACK_HOST = "controller"

and uncomment the following block:

CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}

Start the httpd and memcached services and enable them at boot:

[root@controller ~]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 10.20.0.200 for ServerName
                                                           [  OK  ]
[root@controller ~]# service memcached restart
Stopping memcached:                                        [  OK  ]
Starting memcached:                                        [  OK  ]
[root@controller ~]# chkconfig httpd on
[root@controller ~]# chkconfig memcached on
[root@controller ~]#

Once this is done, open 10.20.0.200/dashboard in a browser.


Username: admin  Password: 000000


At this point the platform is basically up. Next, let's try launching a virtual machine.

1 Create a network

[root@controller ~]# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 0bab81db02c9447eb9a51986938a648d |  admin  |   True  |
| 74f8293475a84959a86bb149a8a6e017 |   demo  |   True  |
| 01c8831964944ff78033c0d736de7488 | service |   True  |
+----------------------------------+---------+---------+

[root@controller ~]# neutron net-create --tenant-id 01c8831964944ff78033c0d736de7488 network --shared --provider:network_type flat --provider:physical_network physnet1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 086cbaaa-0414-42ec-b9f4-36fc4d8682b7 |
| name                      | network                              |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 01c8831964944ff78033c0d736de7488     |
+---------------------------+--------------------------------------+
[root@controller ~]#
2 Create a subnet; this step can simply be done in the web UI:




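Alternatively, the subnet can be created from the CLI; a sketch, assuming the 172.16.100.0/24 range used on eth1 above (the name, gateway, allocation pool, and DNS server are illustrative values, not taken from the original setup):

```shell
# CLI alternative to creating the subnet in the dashboard.
# CIDR follows the eth1 addressing; the other values are assumptions.
neutron subnet-create network 172.16.100.0/24 \
    --name subnet1 \
    --gateway 172.16.100.1 \
    --allocation-pool start=172.16.100.100,end=172.16.100.200 \
    --dns-nameserver 8.8.8.8
```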
3 Upload an image (image download link: )

[root@controller ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --progress < cirros-0.3.0-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 50bdc35edb03a38d91b1b071afb20a3c     |
| container_format | bare                                 |
| created_at       | 2016-12-13T19:29:48                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | f22f30ff-4d50-411a-92c1-3dbd64d0bfd0 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0bab81db02c9447eb9a51986938a648d     |
| protected        | False                                |
| size             | 9761280                              |
| status           | active                               |
| updated_at       | 2016-12-13T19:29:48                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller ~]#

4 Create a virtual machine:



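The VM can also be booted from the CLI instead of the dashboard; a sketch (the flavor name m1.tiny and the NET_ID placeholder are assumptions -- list what actually exists first):

```shell
# CLI alternative to launching an instance from the dashboard.
nova flavor-list          # pick a flavor; m1.tiny below is an assumption
neutron net-list          # note the id of the "network" created earlier

# Replace NET_ID with the id reported by neutron net-list
nova boot --flavor m1.tiny --image cirros \
    --nic net-id=NET_ID test-vm
nova list                 # watch the instance go ACTIVE
```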
5 Verify the virtual machine




That completes the deployment of the whole platform.

If you run into problems, leave me a comment or join QQ group 599576282.
