OpenStack Cloud Computing

1. Introduction to Cloud Computing
Put simply, cloud computing means computers running on the Internet;
more precisely, cloud computing is a pay-per-use service model.

Without virtualization there is no cloud computing: virtualization is a technology, while cloud computing is a delivery model.
The earliest cloud computing service was the Elastic Compute Cloud (EC2), launched by Amazon in March 2006.
OpenStack is a cloud computing platform that helps service providers and enterprises deliver Amazon EC2-style cloud infrastructure services (Infrastructure as a Service, IaaS) in-house.

2. Cloud Computing Service Models
Software as a Service (SaaS):
i.e. Software-as-a-Service;
With SaaS, the vendor builds the functionality you want into a finished application and simply sells you an account. You pay and use it, with no need to worry about servers, bandwidth, or application development.
Example: the QQ enterprise mailbox; we just use it, without caring about the hardware servers, operating system, runtime environment, or high availability.

Platform as a Service (PaaS):
i.e. Platform-as-a-Service;
The cloud provider offers a cloud-hosted development platform and a cloud runtime environment (including middleware, databases, web services, message buses, and other common base services) as a service to end users. In other words, PaaS vendors supply the basic development platform, so you can concentrate on application-level development instead of that repetitive groundwork, and you don't have to worry about deployment once development is done.
Example: Docker officially provides runtime-environment images for a range of programming languages.

Infrastructure as a Service (IaaS):
i.e. Infrastructure-as-a-Service;
Simply put, this is virtual machine rental: users freely run their own services on the VMs without worrying about the underlying hardware.
Example: Alibaba Cloud ECS and OpenStack are both implementations at the IaaS layer.

3. Types of Cloud
Public Cloud
In short, public cloud services are made available to customers over the network by third-party providers. "Public" does not necessarily mean "free", though it can mean free or fairly cheap; nor does it mean that user data is visible to everyone, since public cloud providers normally enforce access-control mechanisms for their users. As a solution, the public cloud is both elastic and cost-effective.
Private Cloud
A private cloud offers many of the advantages of a public cloud environment, such as elasticity and suitability for delivering services. The difference is that in a private cloud the data and applications are managed inside the organization and, unlike with public cloud services, are not subject to network bandwidth limits, security concerns, or regulatory constraints. In addition, a private cloud gives both the provider and the user more control over the cloud infrastructure and improves security and resilience, because access by users and networks is specially restricted.
Hybrid Cloud
A hybrid cloud combines public and private clouds. In this model users typically outsource non-business-critical workloads to the public cloud while keeping control of business-critical services and data.

4. Why Use Cloud Computing
Small companies: lower costs and flexible scaling.
Large companies: profit from otherwise idle compute resources.

 

五、实战环境的安装和基本系统优化
 配置:内存最低1G,推荐2G,磁盘50G,系统镜像centos7.2  1511,cpu开启虚拟化
安装系统规范一:调整内核参数net.ifnames=0 biosdevname=0,使网卡名称固定为eth0
安装系统规范二:时区上海,语言支持增加简体中文
安装系统规范三:最小化安装,选择Debugging Tools、Compatibility Libraries、Development Tools三个包组
安装系统规范四:swap分区2G,剩下全部给根分区,不使用LVM,全部使用标准分区 
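If convention 1 was not set at install time, the kernel parameters can also be added afterwards through GRUB. A minimal sketch, assuming the stock grub2 setup on CentOS 7:

# append the parameters to the kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="/&net.ifnames=0 biosdevname=0 /' /etc/default/grub
# regenerate the grub configuration; takes effect after a reboot
grub2-mkconfig -o /boot/grub2/grub.cfg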

 

6. OpenStack Template System Tuning:

6.1) Edit the NIC configuration file
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.0.0.11
NETMASK=255.255.255.0
GATEWAY=10.0.0.254
DNS1=223.5.5.5

systemctl restart network

ping 10.0.0.254 -c2


6.2) Firewall tuning
systemctl disable firewalld.service
systemctl stop firewalld

6.3) SELinux tuning
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
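The sed edit only takes effect after a reboot; to drop SELinux to permissive for the current session as well (a common companion step):

setenforce 0
getenforce    # should now report Permissive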

6.4) SSH tuning
vi /etc/ssh/sshd_config
line 93: GSSAPIAuthentication no
line 129: UseDNS no
or do both edits with sed:
sed  -i  '93s@GSSAPIAuthentication yes@GSSAPIAuthentication no@;129s@#UseDNS yes@UseDNS no@' /etc/ssh/sshd_config
# restart sshd #
systemctl restart sshd

6.5) hosts tuning
vi /etc/hosts
# add the following 3 lines
10.0.0.11   controller
10.0.0.31   compute1
10.0.0.32   compute2

6.6) Set the hostname
 hostnamectl set-hostname controller

6.7) yum repo tuning
# build a local yum repo from the installation DVD
umount /mnt
cd /etc/yum.repos.d/
mkdir test -p
\mv *.repo test
echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0' >local.repo
mount /dev/cdrom /mnt
echo 'mount /dev/cdrom /mnt' >>/etc/rc.d/rc.local
chmod +x  /etc/rc.d/rc.local 
yum makecache
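A quick check that the local repo is usable:

yum repolist    # the [local] repo should show a non-zero package count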

6.8) Other tuning
# disable NetworkManager (graphical NIC management) #
systemctl stop NetworkManager.service 
systemctl disable NetworkManager.service 
# disable the mail service
systemctl stop postfix.service 
systemctl disable postfix.service
# install bash tab completion #
yum install -y bash-completion.noarch
# install common tools #
yum install -y net-tools vim lrzsz wget tree screen lsof tcpdump
# template tuning is done; shut down and start cloning #
shutdown -h now


6.9) Clone two new virtual machines: one controller node and one compute node
controller node notes:
the controller node's memory is best raised to 4 GB or more;
compute nodes get 2 GB or more. Remember to change the hostname and IP on each clone:
10.0.0.11   controller
10.0.0.31   compute1
10.0.0.32   compute2

6.10) OpenStack service overview
Nova: manages the VM life cycle; the most central service in OpenStack.
Neutron: provides network connectivity for OpenStack; creates and manages L2/L3 networks and gives VMs virtual and physical network connections.
Glance: manages the VM boot images; Nova uses images served by Glance when it creates a VM.
Cinder: provides block storage for VMs. Each Volume that Cinder serves looks like a virtual disk to the VM, generally used as a data disk.
Swift: provides object storage; VMs can store object data through a RESTful API. Optionally, Glance can keep its images in Swift, and Cinder can back its Volumes up to Swift.
Keystone: provides authentication and authorization for all the OpenStack services; simply put, every operation in OpenStack must pass Keystone's checks.
Ceilometer: provides OpenStack monitoring and metering, supplying the data for alarms, statistics, and billing.
Horizon: the web self-service portal for OpenStack users.

 

7. OpenStack Base Services
7.1) yum repos
Upload the offline package archive; download link: https://pan.baidu.com/s/1mjoGYvA password: d4rb
Upload openstack_rpm.tar.gz to the /opt directory and unpack it.
Create the repo file
cat >/etc/yum.repos.d/local.repo<<-'EOF'
[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack-mitaka
baseurl=file:///opt/repo
gpgcheck=0
EOF
Build the yum cache
[root@controller repo]# yum makecache

7.2) NTP time synchronization
Controller node (provides the time service that the other machines sync from)
Install the package on all nodes
yum install chrony -y 
Configure the controller node: edit line 22
[root@controller ~]# vim /etc/chrony.conf 

# Allow NTP client access from local network.
allow 10/8
Start the service and enable it at boot
systemctl enable chronyd.service
systemctl restart chronyd.service

Compute nodes (configure the chrony client)
Install the package
yum install chrony -y 
Edit the configuration with sed
sed -ri.bak '/server/s/^/#/g;2a server 10.0.0.11 iburst' /etc/chrony.conf
Start the service and enable it at boot
systemctl enable chronyd.service
systemctl start chronyd.service
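To confirm that a compute node is actually syncing from the controller (a quick check with the chrony client tool):

chronyc sources    # the 10.0.0.11 entry should be listed; '^*' marks the selected source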

7.3) Install the OpenStack client and openstack-selinux
Install on all nodes.
Install the OpenStack client:
yum -y install python-openstackclient 

RHEL and CentOS enable SELinux by default.
# install the openstack-selinux package so the security policies for OpenStack services are managed automatically
yum -y install openstack-selinux

7.4) SQL database installation
[root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL
Create the configuration file
cat > /etc/my.cnf.d/openstack.cnf <<-'EOF'
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
Start mariadb
systemctl enable mariadb.service
systemctl start mariadb.service

Run the mariadb secure initialization
To keep the database service secure, run the ``mysql_secure_installation`` script; in particular, set an appropriate password for the database root user (the transcript below answers 'n' and keeps root passwordless for this lab).
[root@controller ~]# mysql_secure_installation

Enter current password for root (enter for none): 
OK, successfully used password, moving on...
Set root password? [Y/n] n
 ... skipping.
Remove anonymous users? [Y/n] Y
 ... Success!
Disallow root login remotely? [Y/n] Y
 ... Success!
Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!
Reload privilege tables now? [Y/n] Y
 ... Success!

Thanks for using MariaDB!

 

7.5) Message queue
Install the message-queue package
[root@controller ~]# yum -y install rabbitmq-server
Start the message queue service and configure it to start at boot:
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user:
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
Replace RABBIT_PASS with a suitable password.
Give the ``openstack`` user read and write permissions:
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
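Optionally, confirm the user and its permissions (standard rabbitmqctl queries):

rabbitmqctl list_users          # openstack should be listed
rabbitmqctl list_permissions    # openstack should show ".*" ".*" ".*" on vhost /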

7.6) Memcached deployment
Install the memcached packages
[root@controller ~]# yum -y install memcached python-memcached
Edit the memcached configuration file
[root@controller ~]# cat  /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 10.0.0.11"  <--修改位置,配置为memcached主机地址或网段信息
启动Memcached服务,并且配置它随机启动。
systemctl enable memcached.service
systemctl start memcached.service

7.7) Verify
netstat -lntup
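The three base services configured above should all be listening; roughly (assuming the default ports: mariadb 3306, rabbitmq 5672, memcached 11211):

netstat -lntup | egrep '3306|5672|11211'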

 

8. Workflow for Installing an OpenStack Service
The installation flow for every OpenStack service (except keystone):
8.1) create the database and grant privileges;
8.2) create the user in keystone and grant the role;
8.3) create the service entity in keystone and register the API endpoints;
8.4) install the packages;
8.5) edit the configuration file (database details);
8.6) sync the database;
8.7) start the services.

 

9. Keystone Identity Service Configuration
9.1) Create the database and grant privileges;
mysql -e "create database keystone;"
mysql -e "grant all on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';"
mysql -e "grant all on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';"

9.2) Install the keystone packages
yum install openstack-keystone httpd mod_wsgi -y

9.3) Edit the configuration file
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token  ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection  mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider  fernet

9.4) Sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

9.5) Initialize the Fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

9.6) Configure httpd
sed -i "95a ServerName controller" /etc/httpd/conf/httpd.conf
#\mv wsgi-keystone.conf /etc/httpd/conf.d/
echo 'Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>' >/etc/httpd/conf.d/wsgi-keystone.conf

9.7) Start httpd
systemctl start httpd.service
systemctl enable httpd.service

 

 

10. Create the Service Entity and Register the API Endpoints
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne  identity public http://controller:5000/v3
openstack endpoint create --region RegionOne  identity internal http://controller:5000/v3
openstack endpoint create --region RegionOne  identity admin http://controller:35357/v3

 

10.1) Create the domain, project, user, and role
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default   --password ADMIN_PASS admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default  --description "Service Project" service

 

10.2) Create the environment-variable script
echo 'export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2' >/root/admin-openrc

 

10.3) Verify
unset OS_TOKEN OS_URL
source  /root/admin-openrc
openstack token issue

 

11. Glance Image Service
11.1) Create the database and grant privileges;
mysql -e "create database glance;"
mysql -e "grant all on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS';"
mysql -e "grant all on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS';"

11.2) Create the user in keystone and grant the role;
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin

11.3) Create the service entity in keystone and register the API endpoints;
openstack service create --name glance   --description "OpenStack Image" image
openstack endpoint create --region RegionOne   image public http://controller:9292
openstack endpoint create --region RegionOne   image internal http://controller:9292
openstack endpoint create --region RegionOne   image admin http://controller:9292

11.4) Install the packages;
yum install openstack-glance -y

11.5) Edit the configuration files (database details);
#cat glance-api.conf >/etc/glance/glance-api.conf 
openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf  glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf  glance_store filesystem_store_datadir  /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor  keystone
#cat glance-registry.conf >/etc/glance/glance-registry.conf 
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone

11.6) Sync the database;
su -s /bin/sh -c "glance-manage db_sync" glance

11.7) Start the services.
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service

11.8) Verify
#wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 \
--container-format bare --public
openstack image list

 

12. Nova Compute Service
On the controller node

12.1) Create the databases and grant privileges;
mysql -e "create database nova;"
mysql -e "grant all on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -e "grant all on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS';"
mysql -e "create database nova_api;"
mysql -e "grant all on nova_api.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -e "grant all on nova_api.* to 'nova'@'%' identified by 'NOVA_DBPASS';"

12.2) Create the user in keystone and grant the role;
openstack user create --domain default --password NOVA_PASS nova
openstack role add --project service --user nova admin

12.3) Create the service entity in keystone and register the API endpoints;
openstack service create --name nova   --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne   compute public http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   compute internal http://controller:8774/v2.1/%\(tenant_id\)s
openstack endpoint create --region RegionOne   compute admin http://controller:8774/v2.1/%\(tenant_id\)s

12.4) Install the packages;
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

12.5) Edit the configuration file (database details);
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.11
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  api_database connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf  database  connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'

12.6) Sync the databases;
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova

12.7) Start the services.
  systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

  systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

 

On the compute node
12.8) Install
yum install openstack-nova-compute -y
12.9) Configure
yum install openstack-utils -y
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.31
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html

12.10) Start
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service

12.11) Verify
nova service-list 

 

13. Neutron Network Service
On the controller node
13.1) Create the database and grant privileges;
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';

13.2) Create the user in keystone and grant the role;
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin

13.3) Create the service entity in keystone and register the API endpoints;
openstack service create --name neutron \
  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696

13.4) Install the packages;
yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y

13.5) Edit the configuration files (database details);
#cat neutron.conf >/etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT core_plugin  ml2
openstack-config --set /etc/neutron/neutron.conf  DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_status_changes  True
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_data_changes  True
openstack-config --set /etc/neutron/neutron.conf  database connection  mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  nova auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  nova auth_type  password 
openstack-config --set /etc/neutron/neutron.conf  nova project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova region_name  RegionOne
openstack-config --set /etc/neutron/neutron.conf  nova project_name  service
openstack-config --set /etc/neutron/neutron.conf  nova username  nova
openstack-config --set /etc/neutron/neutron.conf  nova password  NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

# configure ml2_conf.ini
#cat ml2_conf.ini >/etc/neutron/plugins/ml2/ml2_conf.ini 
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 type_drivers  flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 tenant_network_types 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 mechanism_drivers  linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  securitygroup enable_ipset  True

# configure linuxbridge_agent.ini
#cat linuxbridge_agent.ini >/etc/neutron/plugins/ml2/linuxbridge_agent.ini 
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

# configure dhcp_agent.ini
#cat dhcp_agent.ini >/etc/neutron/dhcp_agent.ini 
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini  DEFAULT enable_isolated_metadata true

# configure metadata_agent.ini
#cat metadata_agent.ini >/etc/neutron/metadata_agent.ini 
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip  controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret  METADATA_SECRET

#nova.conf
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf  neutron service_metadata_proxy  True
openstack-config --set /etc/nova/nova.conf  neutron metadata_proxy_shared_secret  METADATA_SECRET

13.6) Sync the database;
   ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
   su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

13.7) Start the services.
  systemctl restart openstack-nova-api.service

  systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

 

On the compute node
13.8) Install
yum install openstack-neutron-linuxbridge ebtables ipset -y

13.9) Configure
# configure neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

# configure linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

# configure nova.conf
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

13.10) Start
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

13.11) Verify
neutron agent-list

 

14. Install the Dashboard (Horizon)

14.1) Install
yum install openstack-dashboard  -y

 

14.2) Configure
Edit /etc/openstack-dashboard/local_settings
following the official documentation.
The main changes, for reference:
[root@controller ~]# grep -n -Ev '^$|#' /etc/openstack-dashboard/local_settings 
30:ALLOWED_HOSTS = ['*', ]
55:OPENSTACK_API_VERSIONS = {
57:    "identity": 3,
58:    "volume": 2,
59:    "compute": 2,
60:    "image": 2,
61:}
65:OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
73:OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'default'
129:SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
130:CACHES = {
131:    'default': {
132:        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
133:        'LOCATION': '10.0.0.11:11211',
134:    },
135:}
160:OPENSTACK_HOST = "controller"
161:OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
162:OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
262:OPENSTACK_NEUTRON_NETWORK = {
263:    'enable_router': False,
264:    'enable_quotas': False,
265:    'enable_ipv6': False,
266:    'enable_distributed_router': False,
267:    'enable_ha_router': False,
268:    'enable_lb': False,
269:    'enable_firewall': False,
270:    'enable_vpn': False,
372:TIME_ZONE = "Asia/Shanghai"

 

14.3) Start
systemctl restart httpd.service memcached.service

 

14.4) Access
Edit the Windows file C:\Windows\System32\drivers\etc\hosts
and add an entry for the controller node
10.0.0.11  controller

Then open in a browser
http://10.0.0.11/dashboard

 

15. Launching the First Instance
15.1) Create the virtual network
Create the network
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider    
Create the subnet
neutron subnet-create --name provider --allocation-pool start=10.0.0.101,end=10.0.0.250 --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 provider 10.0.0.0/24

15.2) Create the m1.nano flavor
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
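Confirm the flavor:

openstack flavor list    # m1.nano should appear with 64 MB RAM and a 1 GB disk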

15.3) Create a key pair
ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

15.4) Add security-group rules:
# allow ICMP (ping)
openstack security group rule create --proto icmp default
# allow secure shell (SSH) access
openstack security group rule create --proto tcp --dst-port 22 default

 

15.5) Launch the instance
[root@controller ~]# openstack network list 
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 54f942f7-cc28-4292-a4d6-e37b8833e35f | provider | d507bf57-28e6-4af5-b54b-d969e76f4fd6 |
+--------------------------------------+----------+--------------------------------------+
Launch the instance; note that net-id must be the ID of the network created above
openstack server create --flavor m1.nano  --image cirros \
  --nic net-id=54f942f7-cc28-4292-a4d6-e37b8833e35f  --security-group default \
  --key-name mykey www.oldboy.com
Check the instance status
[root@controller ~]# nova list 
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks            |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+
| aa5bcbb8-64a7-44c8-b302-6e1ccd1af6ef |www.oldboy.com | ACTIVE | -          | Running     | provider=10.0.0.102 |
+--------------------------------------+---------------+--------+------------+-------------+---------------------+

 

15.6) Verify
Browse to: http://10.0.0.11/dashboard/


1) Adding a compute node (set up hosts resolution in advance)
1. time synchronization
2. install the OpenStack base packages
3. install nova-compute
4. install neutron-linuxbridge-agent

 

Before adding the compute node, check every service on the controller node
Check the neutron service
[root@controller ~]# neutron agent-list

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 56711cbc-767b-4b13-ab28-64c00cb7d07a | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 76daf1d9-8817-435e-9f93-5e3b629012f9 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| c412ae63-8fad-42f6-86ab-46bdeb2c9e7f | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| d16d8f9e-889d-4c95-bbf5-44a24a5a852d | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| eec438bf-2d6c-4576-b81e-2d1569536a19 | Linux bridge agent | compute2   |                   | xxx   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
 

Check nova
[root@controller ~]# openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-conductor   | controller | internal | enabled | up    | 2018-03-07T20:19:04.000000 |
|  2 | nova-consoleauth | controller | internal | enabled | up    | 2018-03-07T20:19:00.000000 |
|  3 | nova-scheduler   | controller | internal | enabled | up    | 2018-03-07T20:18:57.000000 |
|  7 | nova-compute     | compute1   | nova     | enabled | up    | 2018-03-07T20:19:02.000000 |
|  8 | nova-compute     | compute2   | nova     | enabled | down  | 2018-03-07T19:57:57.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

16. Adding a Compute Node
16.1) Configure the local yum repos (to speed up installation)
Upload openstack_rpm.tar.gz to the /opt directory
tar xf openstack_rpm.tar.gz
echo  'mount /dev/cdrom /mnt'  >>/etc/rc.d/rc.local
mount /dev/cdrom /mnt
chmod +x /etc/rc.d/rc.local
cat >/etc/yum.repos.d/local.repo<<-'EOF'
[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack-mitaka
baseurl=file:///opt/repo
gpgcheck=0
EOF

16.2) Configure the NTP time service
# install the package
yum install chrony -y 
# edit the configuration to sync from the chrony server on the controller
sed -ri.bak '/server/s/^/#/g;2a server 10.0.0.11 iburst' /etc/chrony.conf
# start the service and enable it at boot
systemctl enable chronyd.service
systemctl restart chronyd.service

16.3) Install the OpenStack packages
# install the OpenStack client:
yum -y install python-openstackclient
# install the openstack-selinux package
yum -y install openstack-selinux

 

16.4) Install and configure the compute service
Install the nova package
yum -y install openstack-nova-compute    
Apply the configuration with this command set
yum install openstack-utils -y
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.32
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html

 

16.5) Configure neutron networking
Install the neutron components
yum -y install openstack-neutron-linuxbridge ebtables ipset
Edit the neutron configuration
cp /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

Configure the Linuxbridge agent
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

Configure the nova service again
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

 

16.6) Start the compute node
# start the nova services and enable them at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
# start the Linuxbridge agent and enable it at boot
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
# check status
systemctl status libvirtd.service openstack-nova-compute.service
systemctl status neutron-linuxbridge-agent.service

 

16.7) Verify on the controller node
[root@controller ~]# neutron agent-list 
Check that the services are healthy:

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 3ab2f17f-737e-4c3f-86f0-2289c56a541b | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 4f64caf6-a9b0-4742-b0d1-0d961063200a | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 630540de-d0a0-473b-96b5-757afc1057de | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 9989ddcb-6aba-4b7f-9bd7-7d61f774f2bb | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| af40d1db-ff24-4201-b0f2-175fc1542f26 | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

 

Check whether neutron can see compute2:
[root@controller ~]# neutron agent-list

+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 56711cbc-767b-4b13-ab28-64c00cb7d07a | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 76daf1d9-8817-435e-9f93-5e3b629012f9 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| c412ae63-8fad-42f6-86ab-46bdeb2c9e7f | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| d16d8f9e-889d-4c95-bbf5-44a24a5a852d | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| eec438bf-2d6c-4576-b81e-2d1569536a19 | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
 


Verify the compute nodes from the controller node:
[root@controller ~]# openstack compute service list 

+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-scheduler   | controller | internal | enabled | up    | 2018-01-24T06:06:02.000000 |
|  2 | nova-conductor   | controller | internal | enabled | up    | 2018-01-24T06:06:04.000000 |
|  3 | nova-consoleauth | controller | internal | enabled | up    | 2018-01-24T06:06:03.000000 |
|  6 | nova-compute     | compute1   | nova     | enabled | up    | 2018-01-24T06:06:05.000000 |
|  7 | nova-compute     | compute2   | nova     | enabled | up    | 2018-01-24T06:06:00.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

 

################################  Block Storage Service  ###############################

17. Cinder Block Storage Service
On the controller node:

17.1) Create the database and grant privileges
Create the cinder database (log in to mysql first)
CREATE DATABASE cinder;
Grant the cinder database suitable access privileges
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

 

17.2) Create the user in keystone and grant the role
Create a cinder user
openstack user create --domain default --password  CINDER_PASS cinder

Add the admin role to the cinder user.
openstack role add --project service --user cinder admin

 

17.3) Create the service entities and register the API endpoints
Create the cinder and cinderv2 service entities
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

Create the Block Storage service API endpoints. Note: both versions must be registered
# v1 registration
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s

# v2 registration
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

 

17.4) Install the packages
yum -y install openstack-cinder

 

17.5) Edit the configuration file
cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
vi  /etc/cinder/cinder.conf
with the following contents:
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.11
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]

Edit the /etc/nova/nova.conf file again
and add one line under [cinder]
os_region_name = RegionOne
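Equivalently, in the openstack-config style used elsewhere in this document:

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne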


17.6) Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder

# ignore any deprecation messages in the output.

 

17.7) Start the services
Restart the Compute API service
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
Start the Block Storage services and configure them to start at boot
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

Verify:
[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  |     -      |        -        |
+------------------+------------+------+---------+-------+------------+-----------------+


################################################################
On the compute node (the compute node doubles as the storage node)
Install and configure the storage node
1) Install the LVM packages
Install the supporting utility packages
yum -y install lvm2

Start the LVM metadata service and enable it at boot
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

2) Create the physical volumes
Create physical volumes on the two newly added disks
pvcreate /dev/sdb
pvcreate /dev/sdc

3) Create the LVM volume groups
vgcreate cinder-volumes-sata /dev/sdb 
vgcreate cinder-volumes-ssd /dev/sdc

Inspect the newly created volume groups
[root@compute1 ~]# vgs
  VG                  #PV #LV #SN Attr   VSize  VFree 
  cinder-volumes-sata   1   0   0 wz--n- 30.00g 30.00g
  cinder-volumes-ssd    1   0   0 wz--n- 20.00g 20.00g

To remove a volume group:
# vgremove vg-name

4) Edit the /etc/lvm/lvm.conf configuration file
Only instances may access the Block Storage volume groups, but the underlying operating system also manages these devices and associates them with the volumes.
By default, the LVM volume-scanning tool scans the /dev directory for block devices that contain volumes.
If projects use LVM on their volumes, the scanner detects those volumes and may try to cache them, which can cause a variety of problems on both the underlying OS and the project volumes.
Edit the /etc/lvm/lvm.conf file (vim /etc/lvm/lvm.conf) and complete the following
devices {
...
# add the following line below line 130
filter = [ "a/sdb/", "a/sdc/", "r/.*/"]

5) Install and configure the components
yum -y install openstack-cinder targetcli python-keystone

6) Edit the configuration file
vi /etc/cinder/cinder.conf
The final configuration contents
[root@compute1 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
glance_api_servers = http://controller:9292
enabled_backends = lvm,ssd
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd

7) Start the services
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
8) Verify the status
[root@controller ~]#  cinder service-list
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |     Host     | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |  controller  | nova | enabled |   up  | 2018-01-25T11:45:42.000000 |        -        |
|  cinder-volume   | compute1@lvm | nova | enabled |   up  | 2018-01-25T11:45:21.000000 |        -        |
|  cinder-volume   | compute1@ssd | nova | enabled |   up  | 2018-01-25T11:45:42.000000 |        -        |
+------------------+--------------+------+---------+-------+----------------------------+-----------------+

 

 

18. Adding a New Network Segment
18.1) Add a new NIC to every machine serving OpenStack (all machines).
For the NIC choose a LAN segment, and make sure all the machines are in the same LAN segment.
  

18.2) Configure the newly added eth1 NIC (all nodes)
[root@compute1 ~]# cp /etc/sysconfig/network-scripts/ifcfg-eth{0,1}
Edit the NIC configuration
[root@compute1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=172.16.1.31
NETMASK=255.255.255.0
Bring up the NIC
[root@compute1 ~]# ifup eth1
Test connectivity: the hosts should be able to ping each other

 

18.3) Configure the neutron service
Add another flat network, named net172 here
[root@controller ~]# vim /etc/neutron/plugin.ini 
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider,net172
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
Edit the bridge configuration, adding the eth1 information
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eth0,net172:eth1
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False

Push the bridge configuration to each compute node
[root@controller ~]# rsync -avz /etc/neutron/plugins/ml2/linuxbridge_agent.ini 10.0.0.31:/etc/neutron/plugins/ml2/linuxbridge_agent.ini

 

18.4) Restart the services
Restart the network services on the controller node
[root@controller ~]# systemctl restart  neutron-server.service  neutron-linuxbridge-agent.service
Restart the network service on the other compute nodes
[root@compute1 ~]# systemctl restart neutron-linuxbridge-agent.service  
Check the current network state
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 3ab2f17f-737e-4c3f-86f0-2289c56a541b | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 4f64caf6-a9b0-4742-b0d1-0d961063200a | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 630540de-d0a0-473b-96b5-757afc1057de | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 9989ddcb-6aba-4b7f-9bd7-7d61f774f2bb | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| af40d1db-ff24-4201-b0f2-175fc1542f26 | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

 

18.5) Configure an iptables server as the subnet gateway
Host information
[root@route ~]# uname -r 
3.10.0-327.el7.x86_64
[root@route ~]# hostname -I 
10.0.0.2 172.16.1.2
Enable kernel IP forwarding
[root@route ~]# echo 'net.ipv4.ip_forward=1' >>/etc/sysctl.conf
[root@route ~]# sysctl -p 
net.ipv4.ip_forward = 1
Add the iptables NAT (masquerade) rule
iptables -t nat -A POSTROUTING -s 172.16.1.0/24 -o eth0 -j MASQUERADE
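To confirm the rule took effect, list the nat table. Note the rule is not persistent across reboots; saving it with service iptables save assumes the iptables-services package is installed, which the steps above do not cover:
iptables -t nat -L POSTROUTING -n    # the MASQUERADE rule for 172.16.1.0/24 should be listed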


18.6) Create the subnet in the web UI
1) Create the network
2) Configure the subnet
3) Launch a new instance to test the subnet
Note: when launching, select the newly created net172 network.
4) Test network connectivity

 

19. VXLAN Networks in OpenStack
Before starting: it is recommended to delete all previously created instances to free up resources.
19.1) Add and configure a new eth2 NIC (all nodes), on the 172.16.0.x network segment.
cp /etc/sysconfig/network-scripts/ifcfg-eth{1,2}
vim  /etc/sysconfig/network-scripts/ifcfg-eth2
TYPE=Ethernet
BOOTPROTO=none
NAME=eth2
DEVICE=eth2
ONBOOT=yes
IPADDR=172.16.0.X
NETMASK=255.255.255.0

Bring up the NIC
ifup eth2
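To confirm the overlay NIC on each node, check the address and ping a peer; 172.16.0.11 for the controller is an assumption following the same last-octet scheme as eth0:
ip addr show eth2          # the node's 172.16.0.x address should be listed
ping -c2 172.16.0.11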

 

19.2) Modify the controller node configuration files
Edit /etc/neutron/neutron.conf
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
Configure the Modular Layer 2 (ML2) plug-in by editing /etc/neutron/plugins/ml2/ml2_conf.ini
In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks
[ml2]
...
type_drivers = flat,vlan,vxlan                        
In the ``[ml2]`` section, enable VXLAN self-service (tenant) networks
[ml2]
...
tenant_network_types = vxlan
In the ``[ml2]`` section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
...
mechanism_drivers = linuxbridge,l2population
In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier (VNI) range for self-service networks
[ml2_type_vxlan]
...
vni_ranges = 1:1000
Configure the Linux bridge agent by editing /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = True
local_ip = 172.16.0.11
l2_population = True
Configure the layer-3 agent: edit the ``/etc/neutron/l3_agent.ini`` file and complete the following.
In the ``[DEFAULT]`` section, configure the Linux bridge interface driver and the external network bridge (leaving ``external_network_bridge`` empty allows multiple external networks on a single agent):
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =

19.3) Upgrade the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
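As a sanity check, neutron-db-manage can also report the current schema revision with the same config files; after the upgrade above it should be at the head revision:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron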

 

19.4) Restart the services
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# Enable and start the L3 agent
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service

 

19.5) Check the networking service status
[root@controller ~]# neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
| b08be87c-4abe-48ce-  | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent        |
| 983f-0bb08208f6de    |                    |            |                   |       |                |                         |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

 

19.6) Modify the compute node configuration file
Configure the Linux bridge agent by adding:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = True
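OVERLAY_INTERFACE_IP_ADDRESS is a placeholder: replace it with each compute node's own 172.16.0.x address. A minimal sketch for compute1, assuming 172.16.0.31 per the addressing scheme above:
sed -i 's/OVERLAY_INTERFACE_IP_ADDRESS/172.16.0.31/' /etc/neutron/plugins/ml2/linuxbridge_agent.ini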
Restart the service
systemctl restart neutron-linuxbridge-agent.service

19.7) Check the network status again
[root@controller ~]# neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
| 3ab2f17f-737e-4c3f-  | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
| 86f0-2289c56a541b    |                    |            |                   |       |                |                         |
| 4f64caf6-a9b0-4742-b | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
| 0d1-0d961063200a     |                    |            |                   |       |                | agent                   |
| 630540de-d0a0-473b-  | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
| 96b5-757afc1057de    |                    |            |                   |       |                | agent                   |
| 9989ddcb-6aba-4b7f-  | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
| 9bd7-7d61f774f2bb    |                    |            |                   |       |                |                         |
| af40d1db-ff24-4201-b | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-    |
| 0f2-175fc1542f26     |                    |            |                   |       |                | agent                   |
| b08be87c-4abe-48ce-  | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent        |
| 983f-0bb08208f6de    |                    |            |                   |       |                |                         |
+----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+

 

19.8) Modify the dashboard to show the router panel
This enables the router function in the web UI.
vim /etc/openstack-dashboard/local_settings
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,

Restart the dashboard service
systemctl restart httpd.service
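A quick check that the dashboard is back up (the /dashboard path is assumed from this install; expect 200, or a 302 redirect to the login page):
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.11/dashboard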

 

19.9) Configure the VXLAN network
1) View the current network topology
2) Edit the provider network settings and enable it as an external network
3) Create the network
4) Configure the subnet
5) Create a router
Note: when creating the router, attach it to the external network.
6) Add an interface to the router to connect the subnet
7) Launch an instance using the configured VXLAN network
Note: be sure to select the VXLAN-backed network.
8) Associate a floating IP with the instance
9) Test connectivity through the floating IP
Connect to the host with ssh; since the image's root password was changed during the earlier image customization, you can log in directly as root.
[root@compute2 ~]# ssh root@10.0.0.115
root@10.0.0.115's password: 
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:fc:70:31 brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.101/24 brd 1.1.1.255 scope global eth0
    inet6 fe80::f816:3eff:fefc:7031/64 scope link 
       valid_lft forever preferred_lft forever
# ping baidu.com -c1
PING baidu.com (111.13.101.208): 56 data bytes
64 bytes from 111.13.101.208: seq=0 ttl=127 time=5.687 ms

--- baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 5.687/5.687/5.687 ms

 

20. Simple Uses of the OpenStack API
20.1) Obtaining a token
Get a token
[root@controller ~]# openstack token issue |awk '/ id /{print $4}' 
gAAAAABaa0MpXNGCHgaytnvyPMbIF3IecIu9jA4WeMaL1kLWueNYs_Q1APXwdXDU7K34wdLg0I1spUIzDhAkst-Qdrizn_L3N5YBlApUrkY7gSw96MkKpTTDjUhIgm0eAD85Ayi6TL_1HmJJQIhm5ERY91zcKi9dvl73jj0dFNDWRqD9Cc9_oPA
Assign the obtained token to a shell variable
token=`openstack token issue |awk '/ id /{print $4}'`
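To confirm the variable holds a valid token, Keystone v3 can validate it directly: passing the token as both X-Auth-Token and X-Subject-Token asks Keystone to validate that token (admin endpoint 35357 as used below):
curl -s -H "X-Auth-Token: $token" -H "X-Subject-Token: $token" http://10.0.0.11:35357/v3/auth/tokens | head -c 200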

 

20.2) Common retrieval commands
Reference: http://www.qstack.com.cn/archives/168.html
List images via the Glance API port (9292)
curl -H "X-Auth-Token:$token"  -H "Content-Type: application/json"  http://10.0.0.32:9292/v2/images
List roles
curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:35357/v3/roles
List servers
curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:8774/v2.1/servers
List networks
curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/networks
List subnets
curl -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9696/v2.0/subnets
Download an image
curl -o oldboy.qcow2 -H "X-Auth-Token:$token" -H "Content-Type: application/json" http://10.0.0.11:9292/v2/images/eb9e7015-d5ef-48c7-bd65-88a144c59115/file
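To verify the download succeeded, inspect the file with qemu-img (assuming qemu-img is available, as it is on the compute nodes):
qemu-img info oldboy.qcow2    # should report 'file format: qcow2'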
