Deploying OpenStack on openEuler 22.03-LTS-SP1

OpenStack Train Deployment Guide

Introduction to OpenStack

OpenStack is both a community and a project. It provides an operating platform and a toolset for deploying clouds, giving organizations scalable, flexible cloud computing.

As an open-source cloud computing management platform, OpenStack combines several core components, including Nova, Cinder, Neutron, Glance, Keystone, and Horizon, to get its work done. OpenStack supports almost every type of cloud environment; the project's goal is to provide a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized. OpenStack delivers an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which exposes an API for integration.

The official openEuler 22.03-LTS-SP1 repositories already provide OpenStack Train. After configuring the yum repositories, you can deploy OpenStack by following this guide.

Deployment Notes

1. Reference:

https://openeuler.gitee.io/openstack/install/openEuler-22.03-LTS-SP1/OpenStack-train/#openstack

2. Deployment plan

Node         IP addresses                  Services
controller   192.168.57.30, 10.0.10.30     nova, neutron, cinder; keystone, glance, placement, horizon; mysql, rabbitmq, memcached, etcd
compute1     192.168.57.31, 10.0.10.31     nova, neutron, cinder
compute2     192.168.57.32, 10.0.10.32     nova, neutron, cinder

OS version used for this deployment: openEuler-22.03-LTS-SP1

3. Node environment configuration

SELinux must be disabled on every node:

vi /etc/sysconfig/selinux

Change SELINUX=enforcing to SELINUX=disabled.
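
If you prefer to make this change non-interactively, something like the following should work; this is a sketch, not part of the original guide (setenforce 0 disables SELinux only until the next reboot, while the sed edit makes the change persistent):

# disable SELinux for the running system
setenforce 0
# persist the change across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux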

Conventions

OpenStack supports multiple deployment topologies. This guide covers both the All-in-One and the Distributed modes, with the following conventions:

All-in-One mode:

Ignore all of the suffixes described below.

Distributed mode:

A `(CTL)` suffix means the configuration or command applies only to the `controller node`.
A `(CPT)` suffix means the configuration or command applies only to the `compute nodes`.
A `(STG)` suffix means the configuration or command applies only to the `storage node`.
Everything else applies to both the `controller node` and the `compute nodes`.

Note

The services covered by these conventions are:

  • Cinder
  • Nova
  • Neutron

Preparing the Environment

Environment configuration

  1. Enable the OpenStack Train yum repository

    yum update
    yum install openstack-release-train
    yum clean all && yum makecache
    

    Note: if EPOL is not enabled in your yum configuration, you must configure it as well. Make sure EPOL is configured as shown below. (Quoting EOF keeps $basearch from being expanded by the shell, so yum receives it literally.)

    cat << 'EOF' >> /etc/yum.repos.d/openEuler.repo
    [EPOL]
    name=EPOL
    baseurl=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/EPOL/main/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://repo.openeuler.org/openEuler-22.03-LTS-SP1/OS/$basearch/RPM-GPG-KEY-openEuler
    EOF
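
    To confirm that the repository is visible to yum afterwards, a quick check such as the following can help (a sketch; repository names may differ in your environment):

    yum repolist | grep -i epol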
    
  2. Set the host names and name resolution

    Set the host name of each node:

    hostnamectl set-hostname controller       (CTL)
    hostnamectl set-hostname compute1         (CPT)
    hostnamectl set-hostname compute2         (CPT)
    

    Assuming the controller node's IP is 10.0.10.30 and the compute nodes' IPs are 10.0.10.31 and 10.0.10.32 (if present), add the following to /etc/hosts on every node:

    10.0.10.30   controller
    10.0.10.31   compute1
    10.0.10.32   compute2
    

Installing the SQL Database

  1. Install the packages:

    yum install mariadb mariadb-server python3-PyMySQL
    
  2. Create and edit the /etc/my.cnf.d/openstack.cnf file:

    vim /etc/my.cnf.d/openstack.cnf
    
    [mysqld]
    bind-address = 10.0.10.30
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    

    Note: set bind-address to the management IP address of the controller node.

  3. Start the database service and enable it at boot:

    systemctl enable mariadb.service
    systemctl start mariadb.service
    
  4. Set the database root password (optional):

    mysql_secure_installation
    

    Note

    Follow the interactive prompts.

Installing RabbitMQ

  1. Install the package:

    yum install rabbitmq-server
    
  2. Start the RabbitMQ service and enable it at boot:

    systemctl enable rabbitmq-server.service
    systemctl start rabbitmq-server.service
    
  3. Add the openstack user:

    rabbitmqctl add_user openstack RABBIT_PASS
    

    Note: replace RABBIT_PASS with a password for the openstack user.

  4. Grant the openstack user permission to configure, write, and read:

    rabbitmqctl set_permissions openstack ".*" ".*" ".*"
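
    To double-check the account and its permissions, the standard read-only rabbitmqctl subcommands can be used (a sketch, not part of the original guide):

    rabbitmqctl list_users
    rabbitmqctl list_permissions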
    

Installing Memcached

  1. Install the packages:

    yum install memcached python3-memcached
    
  2. Edit the /etc/sysconfig/memcached file:

    vim /etc/sysconfig/memcached
    
    OPTIONS="-l 127.0.0.1,::1,controller"
    
  3. Start the Memcached service and enable it at boot:

    systemctl enable memcached.service
    systemctl start memcached.service
    

    Note: after the service starts, you can run memcached-tool controller stats to confirm that it started correctly and is usable; controller can be replaced with the controller node's management IP address.

Installing OpenStack

Installing Keystone

  1. Create the keystone database and grant privileges:

    mysql -u root -p
    
    MariaDB [(none)]> CREATE DATABASE keystone;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
    IDENTIFIED BY 'KEYSTONE_DBPASS';
    MariaDB [(none)]> exit
    

    Note: replace KEYSTONE_DBPASS with a password for the keystone database.

  2. Install the packages:

    yum install openstack-keystone httpd mod_wsgi
    
  3. Configure keystone:

    vim /etc/keystone/keystone.conf
    
    [database]
    connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
    
    [token]
    provider = fernet
    

    Explanation

    The [database] section configures the database connection.

    The [token] section configures the token provider.

    Note: replace KEYSTONE_DBPASS with the keystone database password.

  4. Populate the database:

    su -s /bin/sh -c "keystone-manage db_sync" keystone
    
  5. Initialize the Fernet key repositories:

    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    
  6. Bootstrap the Identity service:

    keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
    

    Note: replace ADMIN_PASS with a password for the admin user.

  7. Configure the Apache HTTP server:

    vim /etc/httpd/conf/httpd.conf
    
    ServerName controller
    
    ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
    

    Explanation: set the ServerName option to reference the controller node.

    Note: if the ServerName entry does not exist, create it.

  8. Start the Apache HTTP service:

    systemctl enable httpd.service
    systemctl start httpd.service
    
  9. Create an environment variable script:

    cat << EOF >> ~/.admin-openrc
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=ADMIN_PASS
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    EOF
    

    Note: replace ADMIN_PASS with the admin user's password.

  10. Create the domain, projects, users, and roles in turn. python3-openstackclient must be installed first:

    yum install python3-openstackclient
    

    Load the environment variables:

    source ~/.admin-openrc
    

    Create the project service; the domain default was already created by keystone-manage bootstrap:

    openstack domain create --description "An Example Domain" example
    
    openstack project create --domain default --description "Service Project" service
    

    Create the (non-admin) project myproject, the user myuser, and the role myrole, then add the role myrole to the myproject project and the myuser user:

    openstack project create --domain default --description "Demo Project" myproject
    openstack user create --domain default --password-prompt myuser
    openstack role create myrole
    openstack role add --project myproject --user myuser myrole
    
  11. Verify the installation

    Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

    source ~/.admin-openrc
    unset OS_AUTH_URL OS_PASSWORD
    

    Request a token for the admin user:

    openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name admin --os-username admin token issue
    
    

    Request a token for the myuser user:

    openstack --os-auth-url http://controller:5000/v3 \
    --os-project-domain-name Default --os-user-domain-name Default \
    --os-project-name myproject --os-username myuser token issue
    
    

Installing Glance

  1. Create the database, service credentials, and API endpoints

    Create the database:

    mysql -u root -p
    
    MariaDB [(none)]> CREATE DATABASE glance;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    IDENTIFIED BY 'GLANCE_DBPASS';
    MariaDB [(none)]> exit
    
    

    Note: replace GLANCE_DBPASS with a password for the glance database.

    Create the service credentials:

    source ~/.admin-openrc
    
    openstack user create --domain default --password-prompt glance
    openstack role add --project service --user glance admin
    openstack service create --name glance --description "OpenStack Image" image
    
    

    Create the Image service API endpoints:

    openstack endpoint create --region RegionOne image public http://controller:9292
    openstack endpoint create --region RegionOne image internal http://controller:9292
    openstack endpoint create --region RegionOne image admin http://controller:9292
    
    
  2. Install the package:

    yum install openstack-glance
    
    
  3. Configure glance:

    vim /etc/glance/glance-api.conf
    
    [database]
    connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
    
    [keystone_authtoken]
    www_authenticate_uri  = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = glance
    password = GLANCE_PASS
    
    [paste_deploy]
    flavor = keystone
    
    [glance_store]
    stores = file,http
    default_store = file
    filesystem_store_datadir = /var/lib/glance/images/
    

    Explanation:

    The [database] section configures the database connection.

    The [keystone_authtoken] and [paste_deploy] sections configure the Identity service access.

    The [glance_store] section configures the local file system store and the location of the image files.

    Note

    Replace GLANCE_DBPASS with the glance database password.

    Replace GLANCE_PASS with the glance user's password.

  4. Populate the database:

    su -s /bin/sh -c "glance-manage db_sync" glance
    
  5. Start the service:

    systemctl enable openstack-glance-api.service
    systemctl start openstack-glance-api.service
    
  6. Verify the installation

    Download an image:

    source ~/.admin-openrc
    
    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    

    Note: on Kunpeng (aarch64) environments, download the aarch64 version of the image; the image cirros-0.5.2-aarch64-disk.img has been tested.

    Upload the image to the Image service:

    openstack image create --disk-format qcow2 --container-format bare \
                           --file cirros-0.4.0-x86_64-disk.img --public cirros
    
    

    Confirm the upload and verify the image attributes:

    openstack image list
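
    For a more detailed check, the attributes of the image can also be inspected directly (a sketch; cirros is the image name used above):

    openstack image show cirros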
    
    

Installing Placement

  1. Create the database, service credentials, and API endpoints

    Create the database:

    Connect to the database as the root user, create the placement database, and grant privileges:

    mysql -u root -p
    MariaDB [(none)]> CREATE DATABASE placement;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
    IDENTIFIED BY 'PLACEMENT_DBPASS';
    MariaDB [(none)]> exit
    
    

    Note: replace PLACEMENT_DBPASS with a password for the placement database.

    source ~/.admin-openrc
    
    

    Create the placement service credentials: create the placement user and add the admin role to it.

    Create the Placement API service:

    openstack user create --domain default --password-prompt placement
    openstack role add --project service --user placement admin
    openstack service create --name placement --description "Placement API" placement
    
    

    Create the Placement API service endpoints:

    openstack endpoint create --region RegionOne placement public http://controller:8778
    openstack endpoint create --region RegionOne placement internal http://controller:8778
    openstack endpoint create --region RegionOne placement admin http://controller:8778
    
    
  2. Install and configure

    Install the package:

    yum install openstack-placement-api
    
    

    Configure placement:

    Edit the /etc/placement/placement.conf file:

    In the [placement_database] section, configure the database connection.

    In the [api] and [keystone_authtoken] sections, configure the Identity service access.

    # vim /etc/placement/placement.conf
    [placement_database]
    # ...
    connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
    [api]
    # ...
    auth_strategy = keystone
    [keystone_authtoken]
    # ...
    auth_url = http://controller:5000/v3
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = placement
    password = PLACEMENT_PASS
    
    

    Replace PLACEMENT_DBPASS with the placement database password and PLACEMENT_PASS with the placement user's password.

    Populate the database:

    su -s /bin/sh -c "placement-manage db sync" placement
    

    Restart the httpd service:

    systemctl restart httpd
    
  3. Verify the installation

    Run a status check:

    . ~/.admin-openrc
    placement-status upgrade check
    

    Install osc-placement, then list the available resource classes and traits:

    yum install python3-osc-placement
    openstack --os-placement-api-version 1.2 resource class list --sort-column name
    openstack --os-placement-api-version 1.6 trait list --sort-column name
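
    As an extra sanity check, the Placement API root can be queried over HTTP; it should return a JSON document listing the available API versions (a sketch, assuming the endpoint created above):

    curl http://controller:8778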
    

Installing Nova

  1. Create the databases, service credentials, and API endpoints

    Create the databases:

    mysql -u root -p   (CTL)
    
    MariaDB [(none)]> CREATE DATABASE nova_api;
    MariaDB [(none)]> CREATE DATABASE nova;
    MariaDB [(none)]> CREATE DATABASE nova_cell0;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
    IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> exit
    

    Note: replace NOVA_DBPASS with a password for the nova databases.

    source ~/.admin-openrc      (CTL)
    

    Create the nova service credentials:

    openstack user create --domain default --password-prompt nova          (CTL)
    openstack role add --project service --user nova admin                 (CTL)
    openstack service create --name nova --description "OpenStack Compute" compute   (CTL)
    
    

    Create the nova API endpoints:

    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1        (CTL)
    openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1      (CTL)
    openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1         (CTL)
    
    
  2. Install the packages:

    yum install openstack-nova-api openstack-nova-conductor  (CTL)
    yum install openstack-nova-novncproxy openstack-nova-scheduler  (CTL)
    yum install openstack-nova-compute                   (CPT)
    
    

    Note: on arm64 architectures, the following command is also required:

    yum install edk2-aarch64                   (CPT)
    
    
  3. Configure nova:

    vim /etc/nova/nova.conf
    
    [DEFAULT]
    enabled_apis = osapi_compute,metadata
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
    my_ip = 10.0.10.30
    use_neutron = true
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    compute_driver=libvirt.LibvirtDriver                                                           (CPT)
    instances_path = /var/lib/nova/instances/                                                      (CPT)
    lock_path = /var/lib/nova/tmp                                                                  (CPT)
    
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api                              (CTL)
    
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova                                  (CTL)
    
    [api]
    auth_strategy = keystone
    
    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000/
    auth_url = http://controller:5000/
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = NOVA_PASS
    
    [vnc]
    enabled = true
    server_listen = $my_ip
    server_proxyclient_address = $my_ip
    novncproxy_base_url = http://controller:6080/vnc_auto.html                                     (CPT)
    
    [glance]
    api_servers = http://controller:9292
    
    [oslo_concurrency]
    lock_path = /var/lib/nova/tmp                                                                  (CTL)
    
    [placement]
    region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = PLACEMENT_PASS
    
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    service_metadata_proxy = true                                                                  (CTL)
    metadata_proxy_shared_secret = METADATA_SECRET                                                 (CTL)
    
    

    Explanation

    The [DEFAULT] section enables the compute and metadata APIs, configures the RabbitMQ message queue access, sets my_ip, and enables the Networking service (neutron).

    The [api_database] and [database] sections configure the database connections.

    The [api] and [keystone_authtoken] sections configure the Identity service access.

    The [vnc] section enables and configures the remote console access.

    The [glance] section configures the address of the Image service API.

    The [oslo_concurrency] section configures the lock path.

    The [placement] section configures access to the Placement service.

    Note

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Set my_ip to the management IP address of the node being configured (for example, 10.0.10.30 on the controller).

    Replace NOVA_DBPASS with the nova database password.

    Replace NOVA_PASS with the nova user's password.

    Replace PLACEMENT_PASS with the placement user's password.

    Replace NEUTRON_PASS with the neutron user's password.

    Replace METADATA_SECRET with a suitable metadata proxy secret.

    Additional steps

    Determine whether the host supports hardware acceleration for virtual machines (x86 architectures):

    egrep -c '(vmx|svm)' /proc/cpuinfo                                                             (CPT)
    
    

    If the command returns 0, hardware acceleration is not supported, and libvirt must be configured to use QEMU instead of KVM:

    vim /etc/nova/nova.conf                                                                        (CPT)
    
    [libvirt]
    virt_type = qemu
    
    

    If the command returns 1 or more, hardware acceleration is supported and virt_type can be set to kvm, as sketched below.
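
    A minimal sketch of the corresponding setting (same file and section as above):

    vim /etc/nova/nova.conf                                                                        (CPT)

    [libvirt]
    virt_type = kvm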

    Note: on arm64 architectures, the following commands are also required on the compute nodes:

    mkdir -p /usr/share/AAVMF
    chown nova:nova /usr/share/AAVMF
    
    ln -s /usr/share/edk2/aarch64/QEMU_EFI-pflash.raw \
          /usr/share/AAVMF/AAVMF_CODE.fd
    ln -s /usr/share/edk2/aarch64/vars-template-pflash.raw \
          /usr/share/AAVMF/AAVMF_VARS.fd
    
    vim /etc/libvirt/qemu.conf
    
    nvram = ["/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd",
             "/usr/share/edk2/aarch64/QEMU_EFI-pflash.raw:/usr/share/edk2/aarch64/vars-template-pflash.raw"]
    
    

    Additionally, when the ARM deployment environment uses nested virtualization, configure libvirt as follows:

    [libvirt]
    virt_type = qemu
    cpu_mode = custom
    cpu_model = cortex-a72
    
    
  4. Populate the databases

    Populate the nova-api database:

    su -s /bin/sh -c "nova-manage api_db sync" nova                                       (CTL)
    
    

    Register the cell0 database:

    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova                                 (CTL)
    
    

    Create the cell1 cell:

    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova        (CTL)
    
    

    Populate the nova database:

    su -s /bin/sh -c "nova-manage db sync" nova             (CTL)
    

    Verify that cell0 and cell1 are registered correctly:

    su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova          (CTL)
    

    Add the compute nodes to the OpenStack cluster:

    su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova     (CTL)
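
    New compute nodes are only usable after they have been discovered. Instead of rerunning the command above by hand each time a node is added, a discovery interval can be set in nova.conf on the controller (a sketch; 300 seconds is an example value):

    [scheduler]
    discover_hosts_in_cells_interval = 300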
    
  5. Start the services

    systemctl enable \                   (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    
    systemctl start \                     (CTL)
    openstack-nova-api.service \
    openstack-nova-scheduler.service \
    openstack-nova-conductor.service \
    openstack-nova-novncproxy.service
    
    systemctl enable libvirtd.service openstack-nova-compute.service       (CPT)
    systemctl start libvirtd.service openstack-nova-compute.service        (CPT)
    
  6. Verify the installation

    source ~/.admin-openrc                       (CTL)
    

    List the service components to verify that each process started and registered successfully:

    openstack compute service list               (CTL)
    

    List the API endpoints in the Identity service to verify connectivity to it:

    openstack catalog list                        (CTL)
    

    List the images in the Image service to verify connectivity to it:

    openstack image list                          (CTL)
    
    

    Check that the cells are working and that the other prerequisites are in place:

    nova-status upgrade check                      (CTL)
    
    

Installing Neutron

  1. Create the database, service credentials, and API endpoints

    Create the database:

    mysql -u root -p       (CTL)
    
    MariaDB [(none)]> CREATE DATABASE neutron;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
    IDENTIFIED BY 'NEUTRON_DBPASS';
    MariaDB [(none)]> exit
    
    

    Note: replace NEUTRON_DBPASS with a password for the neutron database.

    source ~/.admin-openrc                                   (CTL)
    
    

    Create the neutron service credentials:

    openstack user create --domain default --password-prompt neutron                               (CTL)
    openstack role add --project service --user neutron admin                                      (CTL)
    openstack service create --name neutron --description "OpenStack Networking" network           (CTL)
    
    

    Create the Networking service API endpoints:

    openstack endpoint create --region RegionOne network public http://controller:9696             (CTL)
    openstack endpoint create --region RegionOne network internal http://controller:9696           (CTL)
    openstack endpoint create --region RegionOne network admin http://controller:9696              (CTL)
    
    
  2. Install the packages:

    yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset openstack-neutron-ml2                (CTL)
    yum install openstack-neutron-linuxbridge ebtables ipset                                                        (CPT)

  3. Configure neutron:

    Configure the main settings:

    vim /etc/neutron/neutron.conf
    
    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron                         (CTL)
    
    [DEFAULT]
    core_plugin = ml2                                                                              (CTL)
    service_plugins = router                                                                       (CTL)
    allow_overlapping_ips = true                                                                   (CTL)
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    notify_nova_on_port_status_changes = true                                                      (CTL)
    notify_nova_on_port_data_changes = true                                                        (CTL)
    api_workers = 3                                                                                (CTL)
    
    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    
    [nova]
    auth_url = http://controller:5000                                                              (CTL)
    auth_type = password                                                                           (CTL)
    project_domain_name = Default                                                                  (CTL)
    user_domain_name = Default                                                                     (CTL)
    region_name = RegionOne                                                                        (CTL)
    project_name = service                                                                         (CTL)
    username = nova                                                                                (CTL)
    password = NOVA_PASS                                                                           (CTL)
    
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    
    

    Explanation

    The [database] section configures the database connection.

    The [DEFAULT] section enables the ML2 plug-in and the router plug-in, allows overlapping IP addresses, and configures the RabbitMQ message queue access.

    The [DEFAULT] and [keystone_authtoken] sections configure the Identity service access.

    The [DEFAULT] and [nova] sections configure Networking to notify Compute of network topology changes.

    The [oslo_concurrency] section configures the lock path.

    Note

    Replace NEUTRON_DBPASS with the neutron database password.

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Replace NEUTRON_PASS with the neutron user's password.

    Replace NOVA_PASS with the nova user's password.

    Configure the ML2 plug-in:

    vim /etc/neutron/plugins/ml2/ml2_conf.ini
    
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population
    extension_drivers = port_security
    
    [ml2_type_flat]
    flat_networks = provider
    
    [ml2_type_vxlan]
    vni_ranges = 1:1000
    
    [securitygroup]
    enable_ipset = true
    

    Create a symbolic link at /etc/neutron/plugin.ini:

    ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    

    Note

    The [ml2] section enables flat, VLAN, and VXLAN networks, enables the linuxbridge and l2population mechanisms, and enables the port security extension driver.

    The [ml2_type_flat] section configures the flat network as a provider virtual network.

    The [ml2_type_vxlan] section configures the VXLAN network identifier range.

    The [securitygroup] section enables ipset.

    Additional notes

    The L2 configuration can be adjusted to your needs; this guide uses a provider network with linuxbridge.

    Configure the Linux bridge agent:

    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    
    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
    
    [vxlan]
    enable_vxlan = true
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    l2_population = true
    
    [securitygroup]
    enable_security_group = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
    

    Explanation

    The [linux_bridge] section maps the provider virtual network to the physical network interface.

    The [vxlan] section enables VXLAN overlay networks, configures the IP address of the physical network interface that handles the overlay traffic, and enables layer-2 population.

    The [securitygroup] section enables security groups and configures the Linux bridge iptables firewall driver.

    Note

    Replace PROVIDER_INTERFACE_NAME with the name of the physical network interface.

    Replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the node running the agent.
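
    Before starting the Linux bridge agent, the upstream Neutron installation guides also recommend making sure the br_netfilter kernel module is loaded so that bridged traffic traverses iptables. A sketch, not part of the original guide:

    # load the module now and on every boot
    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

    # enable netfilter on bridges
    cat << 'EOF' > /etc/sysctl.d/99-neutron.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sysctl --system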

    Configure the layer-3 agent:

    vim /etc/neutron/l3_agent.ini                (CTL)
    
    [DEFAULT]
    interface_driver = linuxbridge
    
    

    Explanation

    In the [DEFAULT] section, set the interface driver to linuxbridge.

    Configure the DHCP agent:

    vim /etc/neutron/dhcp_agent.ini               (CTL)
    
    [DEFAULT]
    interface_driver = linuxbridge
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = true
    
    

    Explanation

    The [DEFAULT] section configures the linuxbridge interface driver and the Dnsmasq DHCP driver, and enables isolated metadata.

    Configure the metadata agent:

    vim /etc/neutron/metadata_agent.ini    (CTL)
    
    [DEFAULT]
    nova_metadata_host = controller
    metadata_proxy_shared_secret = METADATA_SECRET
    
    

    Explanation

    The [DEFAULT] section configures the metadata host and the shared secret.

    Note: replace METADATA_SECRET with a suitable metadata proxy secret.

  4. Configure nova:

    vim /etc/nova/nova.conf
    
    [neutron]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    region_name = RegionOne
    project_name = service
    username = neutron
    password = NEUTRON_PASS
    service_metadata_proxy = true      (CTL)
    metadata_proxy_shared_secret = METADATA_SECRET    (CTL)
    
    

    Explanation

    The [neutron] section configures the access parameters, enables the metadata proxy, and configures the secret.

    Note

    Replace NEUTRON_PASS with the neutron user's password.

    Replace METADATA_SECRET with a suitable metadata proxy secret.

  5. Populate the database:

    su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
    
    
  6. Restart the Compute API service:

    systemctl restart openstack-nova-api.service
    
    
  7. Start the Networking services

    systemctl enable neutron-server.service neutron-linuxbridge-agent.service \                    (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
    
    systemctl restart neutron-server.service neutron-linuxbridge-agent.service \                   (CTL)
    neutron-dhcp-agent.service neutron-metadata-agent.service \
    neutron-l3-agent.service
    
    systemctl enable neutron-linuxbridge-agent.service                                             (CPT)
    systemctl restart neutron-linuxbridge-agent.service openstack-nova-compute.service             (CPT)
    
    
  8. Verify the installation

    Verify that the neutron agents started successfully:

    openstack network agent list
    
    

Installing Cinder

  1. Create the database, service credentials, and API endpoints

    Create the database:

    mysql -u root -p
    
    MariaDB [(none)]> CREATE DATABASE cinder;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
    IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
    IDENTIFIED BY 'CINDER_DBPASS';
    MariaDB [(none)]> exit
    
    

    Note

    Replace CINDER_DBPASS with a password for the cinder database.

    source ~/.admin-openrc
    
    

    Create the cinder service credentials:

    openstack user create --domain default --password-prompt cinder
    openstack role add --project service --user cinder admin
    openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
    openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
    

    Create the Block Storage service API endpoints:

    openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
    openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
    
    
  2. Install the packages:

    yum install openstack-cinder-api openstack-cinder-scheduler                (CTL)
    
    yum install lvm2 device-mapper-persistent-data scsi-target-utils rpcbind nfs-utils openstack-cinder-volume openstack-cinder-backup   (STG)
    
  3. Prepare the storage devices. The following is only an example:

    pvcreate /dev/vdb
    vgcreate cinder-volumes /dev/vdb
    
    vim /etc/lvm/lvm.conf
    
    
    devices {
        ...
        filter = [ "a/vdb/", "r/.*/" ]
    }
    
    

    Explanation

    In the devices section, add a filter that accepts the /dev/vdb device and rejects all other devices.
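
    To confirm that the physical volume and the volume group were created, the standard LVM listing commands can be used (a sketch):

    pvs    # should list /dev/vdb
    vgs    # should list cinder-volumes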

  4. Prepare NFS

    mkdir -p /root/cinder/backup
    
    cat << EOF >> /etc/exports
    /root/cinder/backup 192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)
    EOF
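
    After editing the exports file, reload the export table so the share becomes visible (a sketch; the 192.168.1.0/24 subnet above should match your environment):

    exportfs -r
    showmount -e localhost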
    
    
  5. Configure cinder:

    vim /etc/cinder/cinder.conf
    
    [DEFAULT]
    transport_url = rabbit://openstack:RABBIT_PASS@controller
    auth_strategy = keystone
    my_ip = 10.0.10.30
    enabled_backends = lvm                                                                         (STG)
    backup_driver=cinder.backup.drivers.nfs.NFSBackupDriver                                        (STG)
    backup_share=HOST:PATH                                                                         (STG)
    
    [database]
    connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
    
    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = cinder
    password = CINDER_PASS
    
    [oslo_concurrency]
    lock_path = /var/lib/cinder/tmp
    
    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver                                      (STG)
    volume_group = cinder-volumes                                                                  (STG)
    iscsi_protocol = iscsi                                                                         (STG)
    iscsi_helper = tgtadm                                                                          (STG)
    

    Explanation

    The [database] section configures the database connection.

    The [DEFAULT] section configures the RabbitMQ message queue access and sets my_ip.

    The [DEFAULT] and [keystone_authtoken] sections configure the Identity service access.

    The [oslo_concurrency] section configures the lock path.

    Note

    Replace CINDER_DBPASS with the cinder database password.

    Replace RABBIT_PASS with the password of the openstack account in RabbitMQ.

    Set my_ip to the management IP address of the controller node.

    Replace CINDER_PASS with the cinder user's password.

    Replace HOST:PATH with the NFS host IP and the shared path.

  6. Populate the database:

    su -s /bin/sh -c "cinder-manage db sync" cinder                                                (CTL)
    
  7. Configure nova:

    vim /etc/nova/nova.conf                                                                        (CTL)
    
    [cinder]
    os_region_name = RegionOne
    
  8. Restart the Compute API service:

    systemctl restart openstack-nova-api.service
    
  9. Start the cinder services:

    systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service               (CTL)
    systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service                (CTL)
    
    systemctl enable rpcbind.service nfs-server.service tgtd.service iscsid.service \              (STG)
                     openstack-cinder-volume.service \
                     openstack-cinder-backup.service
    systemctl start rpcbind.service nfs-server.service tgtd.service iscsid.service \               (STG)
                    openstack-cinder-volume.service \
                    openstack-cinder-backup.service
    

    Note

    When cinder attaches volumes via tgtadm, modify /etc/tgt/tgtd.conf as follows so that tgtd can discover the cinder-volume iSCSI targets:

    include /var/lib/cinder/volumes/*
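
    After changing the file, restarting the target daemon should pick up the new include (a sketch):

    systemctl restart tgtd.service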
    
  10. Verify the installation:

    source ~/.admin-openrc
    openstack volume service list
    

Installing Horizon

  1. Install the package:

    yum install openstack-dashboard
    
  2. Edit the configuration file

    Modify the following variables:

    vim /etc/openstack-dashboard/local_settings
    
    OPENSTACK_HOST = "controller"
    ALLOWED_HOSTS = ['*', ]
    
    SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
    
    CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
        }
    }
    
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "member"
    WEBROOT = '/dashboard'
    POLICY_FILES_PATH = "/etc/openstack-dashboard"
    
    OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 3,
    }
    
  3. Restart the httpd and memcached services:

    systemctl restart httpd.service memcached.service
    
  4. Verify: open a browser, navigate to http://HOSTIP/dashboard/, and log in to horizon.

    Note: replace HOSTIP with the management-plane IP address of the controller node.
