Building an OpenStack base VM image, full installation walkthrough, and adding a compute node

OpenStack is an open-source cloud computing platform on which users can create and manage virtual machines and other cloud services. Images are a central part of working with OpenStack: an image is the basis of a virtual machine and contains the operating system and applications. This document walks through building the base image, the full installation, and adding compute nodes, with the corresponding commands and comments.

 

# Building the CentOS 7.4-1708 image
# See the video in this directory

# I. Install the OS
# 1
At the install screen, press Tab on the install entry and append: net.ifnames=0 biosdevname=0 (keeps the traditional ethX NIC names)

# 2
Set the network and IP
Language support: select English plus Chinese
Software selection: Minimal Install plus the first three add-ons
In the Date & Time screen, switch Network Time (top right) to ON so chrony/NTP gets installed
For a virtual machine, LVM partitioning is not recommended
Next to the partitioning screen there is a kernel crash dump (kdump) option; disable it to save memory

# II. System tuning
# Edit the sshd config file
UseDNS no 
GSSAPIAuthentication yes
UsePAM yes
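A minimal sketch of applying these settings non-interactively (assumes the stock CentOS 7 sshd_config, where UseDNS is commented out and the other two values are already the defaults):
sed -i 's/^#UseDNS yes/UseDNS no/' /etc/ssh/sshd_config
grep -E '^(UseDNS|GSSAPIAuthentication|UsePAM)' /etc/ssh/sshd_config # confirm the three values
systemctl restart sshd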

# Stop firewalld, SELinux, NetworkManager, and the postfix mail service
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop postfix
systemctl disable postfix

# Install common utilities
yum -y install bash-completion.noarch # tab completion
yum -y install net-tools lrzsz wget tree screen lsof tcpdump

# --------------- run on both machines
# Set the hostname and edit /etc/hosts
echo 10.0.0.11 controller >> /etc/hosts
echo 10.0.0.31 compute1 >> /etc/hosts
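For completeness, setting the hostnames themselves could look like this (run the matching line on the corresponding node; hostnamectl is standard on CentOS 7):
hostnamectl set-hostname controller # on 10.0.0.11
hostnamectl set-hostname compute1   # on 10.0.0.31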

# Mount the local yum repo (DVD)
# mount -o loop /root/code/CentOS-7-x86_64-DVD-1810.iso /mnt
mount /dev/cdrom /mnt
cd /opt/
tar -zxvf openstack_rpm.tar.gz 
echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0' >/etc/yum.repos.d/local.repo

Download link for the materials package:
Link: https://pan.baidu.com/s/1tQzbz_qeGF0tht3vXh8RTg  extraction code: artp

1: What is cloud computing?
Cloud computing is implemented with virtualization technology; it is a pay-as-you-go model.

2: Why use cloud computing?
Small company: 1 year in, 20+ staff, roughly 5,000,000 RMB; hire one ops engineer at 15k/month; self-hosting means 10 servers at 15k each, IDC colocation at 8k/year per server, 100M bandwidth with 5 public IPs at 10k/month; compare renting 10 cloud hosts at 600 x 10 = 6,000/month.

Large company: scale the cluster up for big events, rent out the idle capacity afterwards, and oversell (KVM + KSM)
A 16G KVM host can be oversold to 64G with KSM; gold-tier customers spend 2,000,000+/month


3: Types of cloud services
IDC (traditional data-center hosting)

IaaS   Infrastructure as a Service: ECS cloud hosts; you deploy the environment and manage your own code and data
PaaS   Platform as a Service: a managed runtime for php, java, python, go, c#, nodejs; you manage your own code and data
SaaS   Software as a Service: enterprise email, CDN, RDS

4: What functions does cloud IaaS provide? A management platform for KVM virtualization (with billing)

kvm: 1000 hypervisors (each running an agent) virtualizing 20k VMs
Per-VM details: hardware resources, IP allocation statistics
VM management platform: every VM's state is tracked in a database



5: OpenStack implements cloud IaaS: an open-source cloud platform under the Apache 2.0 license; commercial counterparts include Alibaba Cloud (the Feitian/Apsara platform) and QingCloud

6: OpenStack (SOA architecture)
Cloud platform services: keystone (identity), glance (image), nova (compute), neutron (networking), cinder (block storage), horizon (web dashboard)

Shared by every service: database, message queue, memcached cache, time synchronization

MVC
Home page   www.jd.com/index.html
Flash sale  www.jd.com/miaosha/index.html
Coupons     www.jd.com/juan/index.html
Membership  www.jd.com/plus/index.html
Login       www.jd.com/login/index.html



SOA (split by business domain): tens of millions of concurrent users
Home page   www.jd.com/index.html (5 servers) + cache + web + file storage
Flash sale  miaosha.jd.com/index.html (15 servers)
Coupons     juan.jd.com/index.html (15 servers)
Membership  plus.jd.com/index.html (15 servers)
Login       login.jd.com/index.html (15 servers)
200 separate services

Microservices: hundreds of millions of users
Alibaba's open-source dubbo
Spring Boot

Automated code deployment: Jenkins, GitLab CI
Automated code quality checks: SonarQube




7: VM planning
controller: 3G RAM, CPU virtualization enabled,             ip: 10.0.0.11
compute1:   1G RAM, CPU virtualization enabled (mandatory), ip: 10.0.0.31

Set the hostname, IP address, and hosts entries, then test connectivity by pinging baidu

8: Configure the yum repos
# mount /dev/cdrom /mnt
# rz: upload openstack_rpm.tar.gz to /opt and extract it

# Generate the repo config file
mount /dev/cdrom /mnt
echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0' >/etc/yum.repos.d/local.repo



echo 'mount /dev/cdrom /mnt' >>/etc/rc.local
chmod +x /etc/rc.d/rc.local
yum makecache
yum repolist

9: Install the base services
Run on all nodes:
yum -y install chrony

a: time synchronization
Controller node:
vim /etc/chrony.conf
change line 26 to
allow 10/8
systemctl restart chronyd

Compute node:
vim /etc/chrony.conf
change line 3 to
server 10.0.0.11 iburst

systemctl restart chronyd
systemctl status chronyd
netstat -lntup
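A quick sanity check that the compute node is really syncing from the controller (chronyc ships with the chrony package):
chronyc sources # the controller (10.0.0.11) should show up, marked '^*' once it is the selected source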


b: install the openstack client and openstack-selinux
yum install python-openstackclient openstack-selinux -y

# ---------------------------------------------- end of the base environment ---------------
# -------------------------------------------------------------------------------------------
# ---------------------------------------------- controller node starts below ---------------
# ---------------------------------------------- run on the controller node only ------------
# c: install and configure mariadb
yum install mariadb mariadb-server python2-PyMySQL -y

echo '[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8'  >/etc/my.cnf.d/openstack.cnf

systemctl start mariadb
systemctl enable mariadb

mysql_secure_installation
Enter  # current root password is empty
n      # do not set a root password
y      # remove anonymous users
y      # disallow remote root login
y      # remove the test database
y      # reload privilege tables

d: install rabbitmq and create the openstack user
yum install rabbitmq-server -y
systemctl start rabbitmq-server.service 
systemctl enable rabbitmq-server.service

rabbitmqctl add_user openstack RABBIT_PASS # create the openstack user with password RABBIT_PASS -> Creating user "openstack" ...
rabbitmqctl set_permissions openstack ".*" ".*" ".*" # grant openstack configure/write/read permissions -> Setting permissions for user "openstack" in vhost "/" ...


rabbitmq-plugins enable rabbitmq_management # enable the web management UI -> Applying plugin configuration to rabbit@oldboy... started 6 plugins.
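Optionally confirm the broker and the management plugin are listening (standard rabbitmqctl/netstat usage; AMQP on 5672, web UI on 15672):
rabbitmqctl list_users               # the openstack user should be listed
netstat -lntup | grep -E '5672|15672'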

# e: memcached for caching tokens
yum install memcached python-memcached -y
sed -i 's#127.0.0.1#10.0.0.11#g' /etc/sysconfig/memcached
systemctl restart memcached.service
systemctl enable memcached.service


# 10: keystone identity service
# a: create the database and grant privileges
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
# b: install the keystone packages
yum install openstack-keystone httpd mod_wsgi -y
# c: edit the config file
\cp /etc/keystone/keystone.conf{,.bak} # keep a backup copy
grep -Ev '^$|#' /etc/keystone/keystone.conf.bak >/etc/keystone/keystone.conf # strip comments and blank lines
yum install openstack-utils -y
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token  ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf database connection  mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider  fernet
# verify
md5sum /etc/keystone/keystone.conf
# d5acb3db852fe3f247f4f872b051b7a9  /etc/keystone/keystone.conf


# d: sync the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
mysql keystone -e 'show tables;' # check that tables were created
# e: initialize the fernet keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
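fernet_setup should leave a key repository behind; a quick check (the path is the package default, keys are typically named 0 and 1):
ls -l /etc/keystone/fernet-keys/ # should be owned by keystone:keystone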
# f: configure httpd
echo "ServerName controller" >>/etc/httpd/conf/httpd.conf
# create /etc/httpd/conf.d/wsgi-keystone.conf with the following content
echo 'Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>' > /etc/httpd/conf.d/wsgi-keystone.conf


# verify the config file MD5
md5sum /etc/httpd/conf.d/wsgi-keystone.conf
# 8f051eb53577f67356ed03e4550315c2  /etc/httpd/conf.d/wsgi-keystone.conf


# g: start httpd
systemctl enable httpd.service
systemctl start httpd.service

# h: create the service and register the API endpoints:
export OS_TOKEN=ADMIN_TOKEN
export OS_URL=http://controller:35357/v3  
export OS_IDENTITY_API_VERSION=3

# check the environment variables
env | grep OS


openstack service create \
  --name keystone --description "OpenStack Identity" identity
  
openstack endpoint create --region RegionOne \
  identity public http://controller:5000/v3 
  
openstack endpoint create --region RegionOne \
  identity internal http://controller:5000/v3 
  
openstack endpoint create --region RegionOne \
  identity admin http://controller:35357/v3 

# verify
openstack service list
openstack endpoint list

i: create the domain, project, user, and role
openstack domain create --description "Default Domain" default

openstack project create --domain default \
  --description "Admin Project" admin
  
openstack user create --domain default \
  --password ADMIN_PASS admin # use a real password here (ADMIN_PASS is a placeholder)
  
openstack role create admin

# Associate project, user, and role (no demo project or demo user is created in this setup)
openstack role add --project admin --user admin admin
# grants the admin user the admin role on the admin project

openstack project create --domain default \
  --description "Service Project" service

# Do not unset the token environment variables yet
# timedatectl shows both UTC and local (CST) time

j: create the environment variable script
# unset the two bootstrap variables used above
unset OS_TOKEN OS_URL

cd ~ # create the admin-openrc script in root's home directory
echo "export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2" > admin-openrc

source admin-openrc
# verify
env | grep OS
openstack user list # list users; 'user' can be swapped for project, role, etc.
openstack token issue # issue a token; a 401 means a wrong password; 'NoneType' object has no attribute 'service_catalog' means OS_TOKEN/OS_URL are still set (run unset OS_TOKEN OS_URL)
# | Field      | Value                                                                                   |
# +------------+-----------------------------------------------------------------------------------------+
# | expires    | 2020-12-31T10:18:15.000000Z                                                             |
# | id         | gAAAAABf7ZdXbrrIlT4Bpiw72fWHZ__HymegN8WLR52GCBgv5zyGBdwS-                               |
# |            | H9c_vGi_3FdIbN7ZCGWjiFMDvNNOLE8GtZULTpTNw2Zk-                                           |
# |            | p96LEPYCYKicbBzCim_M9YGHR9ijIdJWMnSDrZG__kclxYDkYpbeqGHrNrurVhd1T57zKWvCjJvkbdjy8       |
# | project_id | afde967f63aa44c0b7d9bbe98b3ed967                                                        |
# | user_id    | 15015bb37e414f34aa9227cc380f0301       


11: install the glance image service (7 steps)
# a: create the database and grant privileges
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
  
# b: create the glance user in keystone and assign the admin role
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin

# c: create the service and register the API endpoints in keystone
openstack service create --name glance \
  --description "OpenStack Image" image
openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292

# d: install the packages
yum install openstack-glance -y

# e: edit the service config files
cp /etc/glance/glance-api.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-api.conf.bak >/etc/glance/glance-api.conf
openstack-config --set /etc/glance/glance-api.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf  glance_store stores  file,http
openstack-config --set /etc/glance/glance-api.conf  glance_store default_store  file
openstack-config --set /etc/glance/glance-api.conf  glance_store filesystem_store_datadir  /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-api.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-api.conf  paste_deploy flavor  keystone
md5sum /etc/glance/glance-api.conf
# 3e1a4234c133eda11b413788e001cba3  /etc/glance/glance-api.conf

#####
cp /etc/glance/glance-registry.conf{,.bak}
grep '^[a-Z\[]' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-registry.conf  database  connection  mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf  keystone_authtoken password  GLANCE_PASS
openstack-config --set /etc/glance/glance-registry.conf  paste_deploy flavor  keystone
md5sum /etc/glance/glance-registry.conf
# 46acabd81a65b924256f56fe34d90b8f  /etc/glance/glance-registry.conf

f: sync the database
su -s /bin/sh -c "glance-manage db_sync" glance # this step prints warnings
# Option "verbose" from group "DEFAULT" is deprecated for removal.  Its value may be silently ignored in the future.
# /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1056: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
#   expire_on_commit=expire_on_commit, _conf=conf)
# /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `ix_image_properties_image_id_name`. This is deprecated and will be disallowed in a future release.')
#   result = self._query(query)

mysql glance -e "show tables;" # having tables here is enough

# g: start the services (listening on ports 9191 and 9292)
systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

# h: verify; first upload cirros-0.3.4-x86_64-disk.img to the current directory
openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public


# +------------------+------------------------------------------------------+
# | Field            | Value                                                |
# +------------------+------------------------------------------------------+
# | checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
# | container_format | bare                                                 |
# | created_at       | 2020-12-31T09:54:28Z                                 |
# | disk_format      | qcow2                                                |
# | file             | /v2/images/ed812f6b-a831-4b00-aa13-94893351d52d/file |
# | id               | ed812f6b-a831-4b00-aa13-94893351d52d                 |
# | min_disk         | 0                                                    |
# | min_ram          | 0                                                    |
# | name             | cirros                                               |
# | owner            | afde967f63aa44c0b7d9bbe98b3ed967                     |
# | protected        | False                                                |
# | schema           | /v2/schemas/image                                    |
# | size             | 13287936                                             |
# | status           | active                                               |
# | tags             |                                                      |
# | updated_at       | 2020-12-31T09:54:29Z                                 |
# | virtual_size     | None                                                 |
# | visibility       | public                                               |

# Confirm the glance service by comparing the two MD5 checksums
openstack image list # note the image id
md5sum cirros-0.3.4-x86_64-disk.img # should match the stored copy below
md5sum /var/lib/glance/images/ed812f6b-a831-4b00-aa13-94893351d52d # replace the id with the one from openstack image list
  
12: nova compute service
nova-api:            accepts and responds to all compute service requests; manages the VM (cloud host) lifecycle
nova-compute (many): actually manages the virtual machines
nova-scheduler:      the nova scheduler (picks the most suitable nova-compute to create a VM)
nova-conductor:      proxies nova-compute's updates of VM state in the database
nova-network:        managed VM networking in early OpenStack releases (deprecated, replaced by neutron)
nova-consoleauth and nova-novncproxy: web-based VNC access to cloud hosts
novncproxy:          the web VNC client
nova-api-metadata:   accepts metadata requests sent from virtual machines

# On the controller node:
# 1: create the databases and grant privileges
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
  
# 2: create the nova system user in keystone (as with glance and neutron) and assign the role
openstack user create --domain default \
  --password NOVA_PASS nova
openstack role add --project service --user nova admin


# 3: create the service and register the API endpoints in keystone
openstack service create --name nova \
  --description "OpenStack Compute" compute 
openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1/%\(tenant_id\)s 
openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s  
openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s

# 4: install the packages
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler -y
  
5: edit the service config files
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.11
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  api_database connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf  database  connection  mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
# verify
md5sum /etc/nova/nova.conf
# 47ded61fdd1a79ab91bdb37ce59ef192  /etc/nova/nova.conf

6: sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova # this step prints warnings
# /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.')
#   result = self._query(query)
# /usr/lib/python2.7/site-packages/pymysql/cursors.py:166: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.')
#   result = self._query(query)
# verify
mysql nova -e "show tables";


# 7: start the services
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

openstack compute service list # three services appear; after compute1 is installed there will be four
# If you get 'Missing parameter(s)': check the environment variables with env | grep OS and re-run source admin-openrc
# Set a username with --os-username, OS_USERNAME, or auth.username
# Set an authentication URL, with --os-auth-url, OS_AUTH_URL or auth.auth_url
# Set a scope, such as a project or domain, set a project scope with --os-project-name, OS_PROJECT_NAME or auth.project_name, set a domain scope with --os-domain-name, OS_DOMAIN_NAME or auth.domain_name

# +----+------------------+------------+----------+---------+-------+----------------------------+
# | Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
# +----+------------------+------------+----------+---------+-------+----------------------------+
# |  1 | nova-scheduler   | controller | internal | enabled | up    | 2020-12-31T11:01:20.000000 |
# |  2 | nova-consoleauth | controller | internal | enabled | up    | 2020-12-31T11:01:20.000000 |
# |  3 | nova-conductor   | controller | internal | enabled | up    | 2020-12-31T11:01:20.000000 |
# |  6 | nova-compute     | compute1   | nova     | enabled | up    | 2020-12-31T11:01:21.000000 |
# +----+------------------+------------+----------+---------+-------+----------------------------+


  
# --------------------------------------- on compute node compute1 ---------------------------------------
# nova-compute calls libvirt to create the virtual machines
yum -y install libvirt
systemctl  start  libvirtd
systemctl  enable  libvirtd
# verify
virsh list # nova-compute drives libvirtd to create VMs
# install
yum install openstack-nova-compute -y
yum install openstack-utils.noarch -y
# configure
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
# openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.31
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html
# verify
md5sum /etc/nova/nova.conf
# 2f53f4e0848bc5927493925a4ea61f63  /etc/nova/nova.conf
# start
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
# logs
tail -f /var/log/nova/nova-compute.log
# on the controller, check that one more service appears
openstack compute service list

# 13: neutron networking service
# neutron-server (port 9696)  api: accepts and responds to external network management requests
# neutron-linuxbridge-agent:  creates the bridge interfaces
# neutron-dhcp-agent:         hands out IP addresses
# neutron-metadata-agent:     works with the nova metadata api to customize instances
# L3-agent:                   implements layer-3 networking (vxlan)

# ---------------------------------------------- on the controller node:
# 1: create the database and grant privileges
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';

# 2: create the neutron system user in keystone (as with glance and nova) and assign the role
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin

# 3: create the service and register the API endpoints in keystone
openstack service create --name neutron \
  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
  network public http://controller:9696
openstack endpoint create --region RegionOne \
  network internal http://controller:9696
openstack endpoint create --region RegionOne \
  network admin http://controller:9696
  
# 4: install the packages
yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables -y
  
# 5: edit the service config files
#### a:/etc/neutron/neutron.conf
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT core_plugin  ml2
openstack-config --set /etc/neutron/neutron.conf  DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_status_changes  True
openstack-config --set /etc/neutron/neutron.conf  DEFAULT notify_nova_on_port_data_changes  True
openstack-config --set /etc/neutron/neutron.conf  database connection  mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  nova auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  nova auth_type  password 
openstack-config --set /etc/neutron/neutron.conf  nova project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  nova region_name  RegionOne
openstack-config --set /etc/neutron/neutron.conf  nova project_name  service
openstack-config --set /etc/neutron/neutron.conf  nova username  nova
openstack-config --set /etc/neutron/neutron.conf  nova password  NOVA_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
# verify
md5sum /etc/neutron/neutron.conf
#e399b7958cd22f47becc6d8fd6d3521a  /etc/neutron/neutron.conf

#### b:/etc/neutron/plugins/ml2/ml2_conf.ini
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/ml2_conf.ini.bak >/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 type_drivers  flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 tenant_network_types 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 mechanism_drivers  linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini  securitygroup enable_ipset  True
# verify
md5sum /etc/neutron/plugins/ml2/ml2_conf.ini
#2640b5de519fafcd675b30e1bcd3c7d5  /etc/neutron/plugins/ml2/ml2_conf.ini


#### c:/etc/neutron/plugins/ml2/linuxbridge_agent.ini
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False
# verify
md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# 3f474907a7f438b34563e4d3f3c29538  /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#### d:/etc/neutron/dhcp_agent.ini
cp /etc/neutron/dhcp_agent.ini{,.bak} 
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak >/etc/neutron/dhcp_agent.ini
openstack-config --set  /etc/neutron/dhcp_agent.ini  DEFAULT  interface_driver  neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set  /etc/neutron/dhcp_agent.ini  DEFAULT dhcp_driver  neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set  /etc/neutron/dhcp_agent.ini  DEFAULT enable_isolated_metadata  True
# verify
md5sum /etc/neutron/dhcp_agent.ini 
#d39579607b2f7d92e88f8910f9213520  /etc/neutron/dhcp_agent.ini

#### e:/etc/neutron/metadata_agent.ini
cp /etc/neutron/metadata_agent.ini{,.bak} 
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak >/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT  nova_metadata_ip  controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT  metadata_proxy_shared_secret  METADATA_SECRET
# verify
md5sum /etc/neutron/metadata_agent.ini
# e1166b0dfcbcf4507d50860d124335d6  /etc/neutron/metadata_agent.ini

#### f: edit /etc/nova/nova.conf again
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf  neutron service_metadata_proxy  True
openstack-config --set /etc/nova/nova.conf  neutron metadata_proxy_shared_secret  METADATA_SECRET
# verify
md5sum /etc/nova/nova.conf
# 6334f359655efdbcf083b812ab94efc1  /etc/nova/nova.conf


6: sync the database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# No handlers could be found for logger "oslo_config.cfg"
# INFO  [alembic.runtime.migration] Context impl MySQLImpl.
# INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
#   Running upgrade for neutron ...
# INFO  [alembic.runtime.migration] Context impl MySQLImpl.
# INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
# INFO  [alembic.runtime.migration] Running upgrade  -> kilo, kilo_initial
# INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225, nsxv_vdr_metadata.py
# INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151, neutrodb_ipam
# INFO  [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf, Initial operations in support of address scopes
# INFO  [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee, Flavor framework
# INFO  [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f, network_rbac
# INFO  [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773, quota_usage
# INFO  [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592, subnetpool hash
# INFO  [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7, add order to dnsnameservers
# INFO  [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79, address scope support in subnetpool
# INFO  [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051, qos db changes
# INFO  [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136, quota_reservations
# INFO  [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59, Add dns_name to Port
# INFO  [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d, Add availability zone
# INFO  [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a, add is_default to subnetpool
# INFO  [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25, Add standard attribute table
# INFO  [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee, Add network availability zone
# INFO  [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9, Add router availability zone
# INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4, Add ip_version to AddressScope
# INFO  [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664, Add tables and attributes to support external DNS integration
# INFO  [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5, add_unique_ha_router_agent_port_bindings
# INFO  [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f, Auto Allocated Topology - aka Get-Me-A-Network
# INFO  [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821, add dynamic routing model data
# INFO  [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4, add_bgp_dragent_model_data
# INFO  [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81, rbac_qos_policy
# INFO  [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6, Add resource_versions row to agent table
# INFO  [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532, tag support
# INFO  [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f, add_timestamp_to_base_resources
# INFO  [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a, Add desc to standard attr table
# INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99, Initial no-op Liberty contract rule.
# INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada, network_rbac
# INFO  [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016, Drop legacy OVS and LB plugin tables
# INFO  [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3, Metaplugin removal
# INFO  [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d, Add missing foreign keys
# INFO  [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d, add geneve ml2 type driver
# INFO  [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297, Drop cisco monolithic tables
# INFO  [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c, Drop embrane plugin table
# INFO  [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39, standardattributes migration
# INFO  [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b, DVR sheduling refactoring
# INFO  [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050, Drop NEC plugin tables
# INFO  [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9, rbac_qos_policy
# INFO  [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada, network_rbac_external
# INFO  [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc, standard_desc
#   OK

7: start the services
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl restart neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
systemctl status openstack-nova-api.service neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

# verify
neutron agent-list
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
# | id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
# | 0c6e064b-d616-47c0   | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
# | -9fcd-b00e0a044e02   |                    |            |                   |       |                |                         |
# | 2600607d-84c7-41f1-8 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
# | bc0-247245715fc6     |                    |            |                   |       |                | agent                   |
# | 9c971298-aa84-47b6-b | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
# | 091-cfeb6ecb6c30     |                    |            |                   |       |                |                         |
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+



## networking service, on compute node compute1:
# install
yum install openstack-neutron-linuxbridge ebtables ipset -y

# configure
####
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
# verify
md5sum /etc/neutron/neutron.conf
# 77ffab503797be5063c06e8b956d6ed0  /etc/neutron/neutron.conf

####
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False
# verify
md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# 3f474907a7f438b34563e4d3f3c29538  /etc/neutron/plugins/ml2/linuxbridge_agent.ini

####
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
# verify
md5sum /etc/nova/nova.conf
# 328cd5f0745e26a420e828b0dfc2934e  /etc/nova/nova.conf

# start
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service openstack-nova-compute.service

# verify on the controller node
neutron agent-list
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
# | id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                  |
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+
# | 0c6e064b-d616-47c0   | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent      |
# | -9fcd-b00e0a044e02   |                    |            |                   |       |                |                         |
# | 2600607d-84c7-41f1-8 | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-    |
# | bc0-247245715fc6     |                    |            |                   |       |                | agent                   |
# | 48d5d244-81f2-46c5-8 | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-    |
# | 679-75650e752688     |                    |            |                   |       |                | agent                   |
# | 9c971298-aa84-47b6-b | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent  |
# | 091-cfeb6ecb6c30     |                    |            |                   |       |                |                         |
# +----------------------+--------------------+------------+-------------------+-------+----------------+-------------------------+


14: install the horizon web dashboard

# 1: install
yum install openstack-dashboard -y

# 2: configure
cd /etc/openstack-dashboard/ && cp -a local_settings{,.bak} # back up the original
# copy the prepared local_settings over /etc/openstack-dashboard/local_settings
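If you edit /etc/openstack-dashboard/local_settings by hand instead of copying a prepared file, the values that usually need changing (following the upstream Mitaka install guide; adjust hostnames and time zone to your environment) are roughly:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "Asia/Shanghai"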

3: start
systemctl start httpd
# Error shown in the browser:
# Internal Server Error
# The server encountered an internal error or misconfiguration and was unable to complete your request.
# Please contact the server administrator at root@localhost to inform them of the time this error occurred, and the actions you performed just before this error.

# Check the log: tail -f /var/log/httpd/error_log
# More information about this error may be available in the server error log.
# [Thu Dec 31 20:21:35.835883 2020] [suexec:notice] [pid 17491] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
# AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using fe80::20c:29ff:feec:8766. Set the 'ServerName' directive globally to suppress this message
# [Thu Dec 31 20:21:35.868596 2020] [auth_digest:notice] [pid 17491] AH01757: generating secret for digest authentication ...
# [Thu Dec 31 20:21:35.869068 2020] [lbmethod_heartbeat:notice] [pid 17491] AH02282: No slotmem from mod_heartmonitor
# [Thu Dec 31 20:21:35.871873 2020] [mpm_prefork:notice] [pid 17491] AH00163: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 configured -- resuming normal operations
# [Thu Dec 31 20:21:35.871903 2020] [core:notice] [pid 17491] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
# [Thu Dec 31 20:29:05.144892 2020] [core:error] [pid 17495] [client 10.0.0.1:64310] End of script output before headers: django.wsgi

# Fix
# On CentOS the dashboard vhost file is /etc/httpd/conf.d/openstack-dashboard.conf (the /etc/apache2/conf-available path is the Debian/Ubuntu equivalent); add the following line to it, then restart httpd:
# WSGIApplicationGroup %{GLOBAL}

15: launch an instance
1: create a network (network plus subnet)
neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat oldboy
  
neutron subnet-create --name oldgirl \
  --allocation-pool start=10.0.0.101,end=10.0.0.250 \
  --dns-nameserver 223.5.5.5 --gateway 10.0.0.254 \
  oldboy 10.0.0.0/24
  
# 2: create a flavor (the instance hardware profile)
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack flavor list

# 3: create a key pair
ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
# verify
openstack keypair list


# 4: create the security group rules
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack security group  list

5: launch the instance:
# openstack image list
openstack network list # get your net-id and substitute it below
# +--------------------------------------+--------+--------------------------------------+
# | ID                                   | Name   | Subnets                              |
# +--------------------------------------+--------+--------------------------------------+
# | ae14cf7a-55c1-49f7-aa0c-86423b7402cb | oldboy | 8460ec8b-0f26-4234-94b1-62d133bb8992 |
# +--------------------------------------+--------+--------------------------------------+

# Substitute the net-id and set the parameters, then launch
openstack server create --flavor m1.nano --image cirros \
  --nic net-id=ae14cf7a-55c1-49f7-aa0c-86423b7402cb --security-group default \
  --key-name mykey oldboy3
  
# verify
openstack server list
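To open the instance's console in a browser (uses the novncproxy_base_url configured earlier; 'oldboy3' is the server created above):
nova get-vnc-console oldboy3 novnc # prints the http://controller:6080/vnc_auto.html?token=... URL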

# After a reboot, re-run the list commands above to re-check, plus:
openstack token issue  # keystone check


16: add another compute node
1: configure the yum repos
mount /dev/cdrom /mnt
# mount: /dev/sr0 is write-protected, mounting read-only

rz: upload openstack_rpm.tar.gz to /opt and extract it
cd /opt
# rz openstack_rpm.tar.gz
tar -zxvf openstack_rpm.tar.gz
Generate the repo config file
# edit /etc/yum.repos.d/local.repo
echo '[local]
name=local
baseurl=file:///mnt
gpgcheck=0

[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0' >/etc/yum.repos.d/local.repo

# Mount the DVD automatically at boot
echo 'mount /dev/cdrom /mnt' >>/etc/rc.local
chmod +x /etc/rc.d/rc.local

# verify
yum makecache
yum repolist
# Loaded plugins: fastestmirror
# Loading mirror speeds from cached hostfile
# repo id                                                         repo name                                                       status
# local                                                           local                                                           3,894
# openstack                                                       openstack                                                         598
# repolist: 4,492


2: time synchronization
yum -y install chrony
# Controller node (already configured):
# vim /etc/chrony.conf
# change line 26 to
# allow 10/8
# systemctl restart chronyd

Compute node:
vim /etc/chrony.conf
change line 3 to
server 10.0.0.11 iburst

systemctl restart chronyd
systemctl status chronyd
netstat -lntup

3: install the openstack client and openstack-selinux
yum install python-openstackclient.noarch  openstack-selinux.noarch -y

4: install nova-compute
yum install openstack-nova-compute -y
yum install openstack-utils.noarch -y

# configure nova
\cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.32 # this node's IP
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS

# 5: install neutron-linuxbridge-agent
yum install openstack-neutron-linuxbridge ebtables ipset -y
\cp /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS

\cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False

6: start the services
systemctl restart  libvirtd openstack-nova-compute neutron-linuxbridge-agent
systemctl enable  libvirtd openstack-nova-compute neutron-linuxbridge-agent
systemctl status  libvirtd openstack-nova-compute neutron-linuxbridge-agent

7: create an instance to check that the new compute node works
# check on the controller
nova service-list
neutron agent-list
# Pitfall 1: if instances fail because the node lacks hardware virtualization, fall back to qemu
vim /etc/nova/nova.conf # add the following section
[libvirt]
cpu_mode = none
virt_type = qemu
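The qemu fallback above is only needed when the node has no hardware-assisted virtualization (for example, a compute node that is itself a VM without nested virt). A quick way to check, and to apply the change:
egrep -c '(vmx|svm)' /proc/cpuinfo # 0 means no hardware virt, so keep virt_type = qemu
systemctl restart openstack-nova-compute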

17: relationships among domains, users, projects, and roles in openstack
A domain contains users and projects; users, projects, and roles form many-to-many relationships. Out of the box there are only two permission levels, admin and user; finer-grained permissions require additional development.
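As a concrete illustration of the many-to-many mapping (a hypothetical demo project and user, not part of this install; standard openstackclient syntax):
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user # the demo user gets the user role on the demo project
openstack role assignment list --user demo         # the same user may hold other roles on other projects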

18: migrating the glance image service

Prerequisite: on the controller node, stop glance first
systemctl stop openstack-glance-api.service openstack-glance-registry.service

1) migrate the glance database (on the new glance node, here 10.0.0.32)
yum install mariadb-server.x86_64 python2-PyMySQL -y
systemctl start mariadb
systemctl enable mariadb
mysql_secure_installation
Import the glance database dumped from the controller (on the controller: mysqldump -B glance > glance.sql, then copy glance.sql to this node)
mysql < glance.sql

mysql>
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';

2) install the glance service
yum install openstack-glance -y
vim /etc/glance/glance-api.conf      # copy the controller's configs and point the database connection at the local mariadb
vim /etc/glance/glance-registry.conf # same change here
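A sketch of the main change with openstack-config, assuming the configs were copied over from the controller and the new glance node's local mariadb holds the database (keystone_authtoken keeps pointing at the controller):
yum install openstack-utils -y
openstack-config --set /etc/glance/glance-api.conf      database connection mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.32/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@10.0.0.32/glance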
systemctl start openstack-glance-api.service openstack-glance-registry.service 
systemctl enable openstack-glance-api.service openstack-glance-registry.service 

3) migrate the existing image files
scp -rp 10.0.0.11:/var/lib/glance/images/* /var/lib/glance/images
chown -R glance:glance /var/lib/glance/images/

4) update glance's API endpoint address in keystone
On the controller node
mysqldump keystone endpoint >endpoint.sql
cp endpoint.sql /opt/
sed -i 's#http://controller:9292#http://10.0.0.32:9292#g' endpoint.sql
mysql keystone < endpoint.sql 
openstack endpoint list|grep image

5) update the config on all nova nodes
sed -i 's#http://controller:9292#http://10.0.0.32:9292#g' /etc/nova/nova.conf
grep '9292'  /etc/nova/nova.conf
systemctl restart openstack-nova-api.service openstack-nova-compute.service 

6) Test: upload an image and create an instance

19: cinder block storage service
cinder-api:       receives and responds to external block-storage requests
cinder-volume:    provides the actual storage space
cinder-scheduler: the scheduler; decides which cinder-volume will supply the space to be allocated
cinder-backup:    backs up volumes

1: Create the database and grant privileges
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';

2: Create the service user in keystone (glance, nova, neutron and cinder all follow this pattern) and attach the admin role
openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin

3: Create the service and register the API endpoints in keystone
openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s

4: Install the service packages
yum install openstack-cinder -y

5: Edit the service configuration file
cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '^$|#' /etc/cinder/cinder.conf.bak >/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf   DEFAULT  rpc_backend  rabbit
openstack-config --set /etc/cinder/cinder.conf   DEFAULT  auth_strategy  keystone
openstack-config --set /etc/cinder/cinder.conf   DEFAULT  my_ip  10.0.0.11
openstack-config --set /etc/cinder/cinder.conf   database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_uri  http://controller:5000
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_url  http://controller:35357
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   memcached_servers  controller:11211
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   auth_type  password
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   project_domain_name  default
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   user_domain_name  default
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   project_name  service
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   username  cinder
openstack-config --set /etc/cinder/cinder.conf   keystone_authtoken   password  CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf   oslo_concurrency  lock_path  /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_host  controller
openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_userid  openstack
openstack-config --set /etc/cinder/cinder.conf   oslo_messaging_rabbit  rabbit_password  RABBIT_PASS

6: Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder

7: Start the services
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

On the compute node:
Prerequisites
yum install lvm2 -y
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
### Add two extra disks
echo '- - -' >/sys/class/scsi_host/host0/scan 
fdisk -l
pvcreate /dev/sdb
pvcreate /dev/sdc
vgcreate cinder-ssd /dev/sdb
vgcreate cinder-sata /dev/sdc
### Edit /etc/lvm/lvm.conf
Insert a line below line 130:
filter = [ "a/sdb/", "a/sdc/","r/.*/"]
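A quick check that the physical volumes and volume groups were created as expected (not part of the original notes):

pvs   # should show /dev/sdb and /dev/sdc
vgs   # should show cinder-ssd and cinder-sata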

Install
yum install openstack-cinder targetcli python-keystone -y

Configure
[root@compute1 ~]# cat /etc/cinder/cinder.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.31
glance_api_servers = http://10.0.0.32:9292
enabled_backends = ssd,sata
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[ssd]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd
[sata]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata


Start
systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
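With two backends enabled, volume types can be mapped to them so a user can choose ssd or sata at creation time. A hedged sketch, run on the controller (the type names are my own choice):

openstack volume type create ssd
openstack volume type set --property volume_backend_name=ssd ssd
openstack volume type create sata
openstack volume type set --property volume_backend_name=sata sata
# create a 1 GB test volume on the ssd backend
openstack volume create --type ssd --size 1 test_ssd_volume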

20: Add another flat network segment
1: Controller node
a:
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = provider,net172_16

b:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1

c: Restart
systemctl restart neutron-server.service neutron-linuxbridge-agent.service

2: Compute node
a:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0,net172_16:eth1

b: Restart
systemctl restart neutron-linuxbridge-agent.service





21: Using an NFS backend with cinder
Edit /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = sata,ssd,nfs
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs

vi  /etc/cinder/nfs_shares
10.0.0.11:/data
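The share has to exist on 10.0.0.11 first; a minimal sketch of exporting it there (the export options are my assumption):

# On 10.0.0.11: install nfs-utils and export /data to the 10.0.0.0/24 network
yum install nfs-utils -y
mkdir -p /data
echo '/data 10.0.0.0/24(rw,sync,no_root_squash)' >>/etc/exports
systemctl enable nfs-server
systemctl start nfs-server
exportfs -r
# Back on the cinder-volume node: restart the service so it picks up the new backend
systemctl restart openstack-cinder-volume.service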

nova: does not provide virtualization itself; it supports multiple hypervisors, e.g. nova-compute can drive VMware ESXi
cinder: does not provide storage itself; it supports multiple storage backends: lvm, nfs, glusterFS, ceph

22: Cold migration of openstack instances


yum install openstack-nova-compute -y

vi /etc/nova/nova.conf
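The notes stop here; a hedged sketch of the usual cold-migration (resize) prerequisites: allow resizing onto the same host and give the nova user passwordless SSH between compute nodes (the option is a real nova setting, but the exact steps below are my assumption):

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host True

# Give the nova user a login shell and an SSH key pair on every compute node
usermod -s /bin/bash nova
su - nova -c "mkdir -p ~/.ssh && ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
# then append each node's ~nova/.ssh/id_rsa.pub to ~nova/.ssh/authorized_keys on the other compute nodes

systemctl restart openstack-nova-compute.service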

23: The flow of creating a VM in openstack

24: Customizing instances in openstack

25: Layer-3 networking with vxlan
# Configure the NIC IPs without a reboot
ifconfig eth2 172.16.0.11/24 up
ifconfig eth3 172.16.1.31/24 up
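ifconfig changes do not survive a reboot; a minimal sketch of making them persistent with ifcfg files (the file contents are my assumption, adjust the address per node):

cat >/etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.0.11
NETMASK=255.255.255.0
EOF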


26: Using the openstack API
token
gAAAAABbuyHgQy1qGFzhV67YJXUfizesN53ejsxoxXAe-kfVGuldrWbA2SD34XB2OGGJTFCj_Y2lmxXV31ttDTPE3_ryMM9EKmHTT6cIHfKwZyPhGtFizodHnckS2ToBEHEfKs-B9uD3fArbbWRtfyP2renXRC6dQKaxpqnZvv5RORaYARRR3mGQaE054FWzFBXmFg41pLlx

token='gAAAAABbuyNaVnWNVnnvRbFX1ekAxpCk4u366gukhE0miOxeuel_GFvebZeHo4VVs58LtSqYbCroQ5t_4EM2vhYHrXmfDFrozHHZY_5BVydZ8qg38vAgrvTzOyEFVo-ErsB0l5TSsNrQQ9nYY6-u7tIsYpsUwnZVJVM4n__2c3juWJ8EVscNlQ4'
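A token like the one above can be requested from keystone with the v3 password-auth API; a hedged sketch (it assumes the admin password is ADMIN_PASS; the token is returned in the X-Subject-Token response header):

curl -i -H "Content-Type:application/json" -d '
{
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"id": "default"},
                    "password": "ADMIN_PASS"
                }
            }
        },
        "scope": {
            "project": {
                "name": "admin",
                "domain": {"id": "default"}
            }
        }
    }
}' http://10.0.0.11:5000/v3/auth/tokens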

List the glance images
curl -H "X-Auth-Token:$token"    "http://10.0.0.11:9292/v1/images/detail?sort_key=name&sort_dir=asc&limit=21"
List the instances
curl -H "Accept:application/json" -H "X-Auth-Token:$token" "http://10.0.0.11:8774/v2.1/9aa6d546892746a1a2763b1928d073c3/servers?limit=21&project_id=9aa6d546892746a1a2763b1928d073c3"

Boot an instance:
curl -H "Content-Type:application/json" -H "X-Auth-Token:$token" -d '
{
    "server": {
        "name": "aaaaa",
        "imageRef": "074b295c-2ede-44a1-996b-d0c903c32968",
        "availability_zone": "oldboy",
        "key_name": "mykey",
        "flavorRef": "0",
        "OS-DCF:diskConfig": "AUTO",
        "max_count": 1,
        "min_count": 1,
        "networks": [{
            "uuid": "be41fef0-8183-42a6-87d7-3f1e16206c8a"
        }],
        "security_groups": [{
            "name": "691af537-9e6b-4bd0-af3f-be222c6f87bc"
        }]
    }
}'  http://10.0.0.11:8774/v2.1/9aa6d546892746a1a2763b1928d073c3/servers

Delete an instance:

curl -X DELETE -H "Content-Type:application/json" -H "X-Auth-Token:$token"       http://10.0.0.11:8774/v2.1/9aa6d546892746a1a2763b1928d073c3/servers/ae6aa841-f5e8-4205-8590-96257fa39958



{
    "flavor": {
        "name": "test_flavor",
        "ram": 1024,
        "vcpus": 2,
        "disk": 10,
        "id": "10",
        "rxtx_factor": 2.0,
        "description": "test description"
    }
}
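The JSON above is a flavor-creation request body; a hedged sketch of posting it to the compute API (the description field is only accepted on newer compute API microversions, so it is dropped here):

curl -H "Content-Type:application/json" -H "X-Auth-Token:$token" -d '
{
    "flavor": {
        "name": "test_flavor",
        "ram": 1024,
        "vcpus": 2,
        "disk": 10,
        "id": "10",
        "rxtx_factor": 2.0
    }
}' http://10.0.0.11:8774/v2.1/9aa6d546892746a1a2763b1928d073c3/flavors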

Knowing the API makes it possible to build your own tooling on top of openstack.

# Adding a new compute node

# Check the yum repos and listening ports
yum repolist
netstat -lntup

# Run on all nodes:
yum -y install chrony

# a: Time synchronization
# Controller node:
# echo 'allow 10/8' >> /etc/chrony.conf
# systemctl restart chronyd
# cat /etc/chrony.conf | grep -v ^# | grep -v ^$

# Compute node:
#  vim /etc/chrony.conf, change line 3 to
# server 10.0.0.11 iburst

sed -i 's/server 0.centos.pool.ntp.org/#&/' /etc/chrony.conf
sed -i 's/server 1.centos.pool.ntp.org/#&/' /etc/chrony.conf
sed -i 's/server 2.centos.pool.ntp.org/#&/' /etc/chrony.conf
sed -i 's/server 3.centos.pool.ntp.org/server 10.0.0.11 iburst/' /etc/chrony.conf

systemctl restart chronyd
systemctl status chronyd
netstat -lntup


# b: Install the openstack client and openstack-selinux
yum install python-openstackclient openstack-selinux -y

On the compute node compute1:
# nova-compute calls libvirt to create VMs
yum -y install libvirt
systemctl  start  libvirtd
systemctl  enable  libvirtd
# Verify
virsh list

# Install
yum install openstack-nova-compute -y
yum install openstack-utils.noarch -y

# Configure. NOTE: make sure to change the IP below!!!
cp /etc/nova/nova.conf{,.bak}
grep '^[a-Z\[]' /etc/nova/nova.conf.bak >/etc/nova/nova.conf
# openstack-config --set /etc/nova/nova.conf  DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/nova/nova.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/nova/nova.conf  DEFAULT my_ip  10.0.0.32 # putting another compute node's IP here by mistake leads to very confusing behaviour
openstack-config --set /etc/nova/nova.conf  DEFAULT use_neutron  True
openstack-config --set /etc/nova/nova.conf  DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf  glance api_servers  http://controller:9292
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_uri  http://controller:5000
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  memcached_servers  controller:11211
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  auth_type  password
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  user_domain_name  default
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  project_name  service
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  username  nova
openstack-config --set /etc/nova/nova.conf  keystone_authtoken  password  NOVA_PASS
openstack-config --set /etc/nova/nova.conf  libvirt cpu_mode none # add these two options when the compute node is itself a VM, otherwise instances cannot find their disk
openstack-config --set /etc/nova/nova.conf  libvirt virt_type qemu # add these two options when the compute node is itself a VM, otherwise instances cannot find their disk


openstack-config --set /etc/nova/nova.conf  oslo_concurrency lock_path  /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_host  controller
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_userid  openstack
openstack-config --set /etc/nova/nova.conf  oslo_messaging_rabbit   rabbit_password  RABBIT_PASS
openstack-config --set /etc/nova/nova.conf  vnc enabled  True
openstack-config --set /etc/nova/nova.conf  vnc vncserver_listen  0.0.0.0
openstack-config --set /etc/nova/nova.conf  vnc vncserver_proxyclient_address  '$my_ip'
openstack-config --set /etc/nova/nova.conf  vnc novncproxy_base_url  http://10.0.0.11:6080/vnc_auto.html

# Verify the checksum
md5sum /etc/nova/nova.conf
# aeab09476018af03959ea9d6a6c92f67  /etc/nova/nova.conf

# Start
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
# Watch the log
tail -f /var/log/nova/nova-compute.log
# Verify on the controller: one more compute service should appear
openstack compute service list






## Part 2: network service on compute node computer1:
# Install
yum install openstack-neutron-linuxbridge ebtables ipset -y

# Configure
####
cp /etc/neutron/neutron.conf{,.bak}
grep '^[a-Z\[]' /etc/neutron/neutron.conf.bak >/etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf  DEFAULT rpc_backend  rabbit
openstack-config --set /etc/neutron/neutron.conf  DEFAULT auth_strategy  keystone
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_url  http://controller:35357
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken memcached_servers  controller:11211
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken auth_type  password
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken user_domain_name  default
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken project_name  service
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken username  neutron
openstack-config --set /etc/neutron/neutron.conf  keystone_authtoken password  NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf  oslo_concurrency lock_path  /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_host  controller
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_userid  openstack
openstack-config --set /etc/neutron/neutron.conf  oslo_messaging_rabbit rabbit_password  RABBIT_PASS
# Verify the checksum
md5sum /etc/neutron/neutron.conf
# 77ffab503797be5063c06e8b956d6ed0  /etc/neutron/neutron.conf

####
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep '^[a-Z\[]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak >/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  linux_bridge physical_interface_mappings  provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup enable_security_group  True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini  vxlan enable_vxlan  False
# Verify the checksum
md5sum /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# 3f474907a7f438b34563e4d3f3c29538  /etc/neutron/plugins/ml2/linuxbridge_agent.ini # md5 when the NIC is eth0
# 2a969958faaf393ebdc774504df21c83  /etc/neutron/plugins/ml2/linuxbridge_agent.ini # md5 when the NIC is eno5
####
openstack-config --set /etc/nova/nova.conf  neutron url  http://controller:9696
openstack-config --set /etc/nova/nova.conf  neutron auth_url  http://controller:35357
openstack-config --set /etc/nova/nova.conf  neutron auth_type  password
openstack-config --set /etc/nova/nova.conf  neutron project_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron user_domain_name  default
openstack-config --set /etc/nova/nova.conf  neutron region_name  RegionOne
openstack-config --set /etc/nova/nova.conf  neutron project_name  service
openstack-config --set /etc/nova/nova.conf  neutron username  neutron
openstack-config --set /etc/nova/nova.conf  neutron password  NEUTRON_PASS
# Verify the checksum
md5sum /etc/nova/nova.conf
# 9f029a46b8fa6da2d2ea9ab2e6c17ab4  /etc/nova/nova.conf

# Start
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service openstack-nova-compute.service

# Verify on the controller: with one compute node there should be four smiley-face agents
neutron agent-list
