OpenStack Train Deployment on openEuler

openEuler image download address:

I. Environment Preparation

1. Rename the network interfaces

Configure on both nodes.

Append the following kernel boot parameters so the NICs use legacy eth0-style names:

net.ifnames=0 biosdevname=0
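One way to make the change persistent (a minimal sketch assuming GRUB2 with the grubby tool, which openEuler ships) is to append the parameters to every installed kernel entry and reboot:

# append the legacy-naming parameters to all kernel entries (grubby assumed available)
grubby --update-kernel=ALL --args="net.ifnames=0 biosdevname=0"
reboot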

2. During installation, set the time zone, choose a minimal install, partition the disks, and set the passwords (Admin@3.21 / 123456)

3. Configure the network

Configure on both nodes. Edit the interface configuration file (/etc/sysconfig/network-scripts/ifcfg-eth0); the example below shows the compute node (172.18.9.5), while the controller uses 172.18.9.2:

TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=172.18.9.5
NETMASK=255.255.255.0
GATEWAY=172.18.9.1
DNS1=114.114.114.114
Reload and bring up the connection:
nmcli c reload
nmcli c up eth0
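Two quick checks (illustrative, not part of the original steps) confirm the address and gateway took effect:

ip addr show eth0      # the configured 172.18.9.x address should appear
ping -c 3 172.18.9.1   # the gateway should answer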

4. Set the hostnames

Controller node:
hostnamectl set-hostname controller
Compute (storage) node:
hostnamectl set-hostname compute

Start a new shell so the prompt shows the new hostname:
bash

5. Configure hosts resolution

vi /etc/hosts
Configure on both nodes:

172.18.9.2 controller
172.18.9.5 compute

6. Disable the firewall and SELinux

Configure on both nodes:

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config; setenforce 0; systemctl stop firewalld; systemctl disable firewalld

7. Install common tools

Configure on both nodes:

yum install -y chrony vim bash-completion net-tools

8. Configure the time source and synchronize clocks

controller node:

vim /etc/chrony.conf
Modify/uncomment these directives:
server ntp6.aliyun.com iburst

allow all

local stratum 10
Restart the time service:
systemctl restart chronyd
Check the sources:
chronyc sources -v
date
Write the system time to the hardware clock:
clock -w

compute node:

vim /etc/chrony.conf
server controller iburst

Restart the time service:
systemctl restart chronyd
chronyc sources -v
date
Write the system time to the hardware clock:
clock -w

9. Check and deploy the OpenStack release

dnf list | grep openstack
Both nodes; the release installed here is:
openstack-release-train.noarch
Install it:
dnf install -y openstack-release-train.noarch
Check the yum repositories it added:
ll /etc/yum.repos.d/
Controller node only: install the client:
yum install python3-openstackclient -y
Use the openstack command to confirm the client installed successfully, as shown below.
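A simple check is printing the client version; any version string means the client is installed:

openstack --version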

10. Install and configure the database

Controller node only.

Install:
yum install mariadb mariadb-server python3-PyMySQL -y

Configure:
vim /etc/my.cnf.d/openstack.cnf

[mysqld]
bind-address = 0.0.0.0

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Enable and start the database:
systemctl enable --now mariadb.service
Check the listening port:
ss -tnl | grep 3306
Initialize:
mysql_secure_installation


Answers during initialization (prompt order may vary slightly by MariaDB version):

  • Enter (current root password: none)
  • n (do not switch to unix_socket authentication)
  • n (do not change the root password)
  • y (remove anonymous users)
  • n (keep remote root login allowed)
  • y (remove the test database)
  • y (reload privilege tables)

11. Install the message queue

Controller node only.

Install:
yum install rabbitmq-server -y
Enable and start the message queue:
systemctl enable --now rabbitmq-server.service
Check the listening port:
ss -tnl | grep 5672
Create a user and password:
rabbitmqctl add_user openstack 123456
Grant it permissions for OpenStack services:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
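Two verification commands (not in the original steps) confirm the account and its permissions:

rabbitmqctl list_users              # the openstack user should be listed
rabbitmqctl list_permissions -p /   # configure/write/read should all show .*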

12. Install the caching service

Controller node only:
yum install memcached python3-memcached -y
Edit the configuration file to listen on 0.0.0.0:
vim /etc/sysconfig/memcached

OPTIONS="-l 0.0.0.0,::1"
Enable and start the service:
systemctl enable --now memcached.service
Check the listening port:
ss -tnl | grep 11211

II. Keystone Deployment (Identity Service)

1. Create the database

Controller node only:
mysql
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';
Exit when done:
quit

2. Install the packages

yum install openstack-keystone httpd python3-mod_wsgi -y

3. Edit the configuration file

Back up the file:
cp /etc/keystone/keystone.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
Edit:
vim /etc/keystone/keystone.conf

[database]
connection = mysql+pymysql://keystone:keystone123@controller/keystone
[token]
provider = fernet

Populate the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Check the resulting tables (inside the mysql client):
mysql
use keystone;
show tables;
quit

4. Initialize the Fernet key repositories

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
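The repositories can be spot-checked at their default locations (paths assumed from the defaults):

ls /etc/keystone/fernet-keys/       # should contain keys 0 and 1
ls /etc/keystone/credential-keys/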

5. Bootstrap the identity service

keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

6. Configure httpd

vim /etc/httpd/conf/httpd.conf

ServerName controller
Create a symlink for the keystone WSGI config:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Enable and start:
systemctl enable --now httpd.service
Check the listening port:
ss -tnl | grep 5000
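A quick liveness probe (illustrative): keystone should return its version document on port 5000:

curl http://controller:5000/v3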

7. Configure the environment variables

vim /etc/keystone/admin-openrc.sh

export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Load the variables:
source /etc/keystone/admin-openrc.sh
Confirm they took effect:
export

8. Create a domain, project, user, and role

openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
A demo user and role can be added the same way, as shown below.
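For completeness, a regular project, user, and role can be created following the upstream install guide; the names myproject/myuser/myrole are illustrative only:

openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole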

9. Update the environment variables

The same file as step 7, now with a shebang and the image API version added:
vim /etc/keystone/admin-openrc.sh

#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Reload the variables:
source /etc/keystone/admin-openrc.sh
Confirm they took effect:
export

10. Verify the configuration

openstack token issue

III. Glance Deployment (Image Storage)

1. Create the database

mysql
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance123';

2. Create the service

openstack user create --domain default --password glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
Install the package:
yum install openstack-glance -y

3. Edit the configuration file

Back up the file:
cp /etc/glance/glance-api.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
Edit:
vim /etc/glance/glance-api.conf


[DEFAULT]
log_file = /var/log/glance/glance-api.log
[database]
connection = mysql+pymysql://glance:glance123@controller/glance
[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

4. Populate the database

su -s /bin/sh -c "glance-manage db_sync" glance

5. Start the service

systemctl enable --now openstack-glance-api.service
Check the listening port:
ss -tnl | grep 9292

6. Verify the service

Upload the cirros-0.4.0-x86_64-disk.img file to the controller node.
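If the node has Internet access, the image can be fetched directly (assuming the standard cirros download location):

wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img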
Create the image:
openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
List images:
openstack image list

IV. Placement Deployment (Resource Provider)

1. Create the database

mysql
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement123';
quit

2. Create the service

openstack user create --domain default --password placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Install the package:
yum install openstack-placement-api -y

3. Edit the configuration file

Back up the file:
cp /etc/placement/placement.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/placement/placement.conf.bak > /etc/placement/placement.conf
Edit:
vim /etc/placement/placement.conf

[placement_database]
connection = mysql+pymysql://placement:placement123@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

4. Populate the database

su -s /bin/sh -c "placement-manage db sync" placement

5. Configure the httpd service

vim /etc/httpd/conf.d/00-placement-api.conf

Inside the VirtualHost block, grant Apache access to the placement WSGI scripts (the change commonly required on Apache 2.4):

<Directory /usr/bin>
  Require all granted
</Directory>

Restart httpd:
systemctl restart httpd
Check the listening port:
ss -tnl | grep 8778
Verify with the status command:
placement-status upgrade check

V. Nova Deployment (Compute Service)

1. Create the databases

mysql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123';
quit

2. Create the nova user and service

openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

3. Install the services

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

4. Edit the configuration file

Back up the file:
cp /etc/nova/nova.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
Edit:
vim /etc/nova/nova.conf



[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller:5672/
my_ip = 172.18.9.2
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
log_file = /var/log/nova/nova.log
[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova123@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

Populate the databases:

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

5. Service restart script and autostart

vim nova-restart.sh

#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

bash nova-restart.sh

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Check the listening ports:
ss -tnl | grep -E '8774|6080'

6. Compute node deployment (compute)

yum install openstack-nova-compute -y

Configuration:
Back up the file:
cp /etc/nova/nova.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
Edit:
vim /etc/nova/nova.conf



[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
compute_driver=libvirt.LibvirtDriver
my_ip = 172.18.9.5
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
log_file = /var/log/nova/nova-compute.log
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://172.18.9.2:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

7. Start the services

Check whether hardware virtualization is enabled (a value greater than 0 means it is supported; see the note below if it is 0):
egrep -c '(vmx|svm)' /proc/cpuinfo
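If the count is 0, the upstream guide falls back to software emulation; in that case set the following in /etc/nova/nova.conf on the compute node:

[libvirt]
virt_type = qemu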

Enable the services:
systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd
Check that it is running:
systemctl status libvirtd

Create the required instances directory and hand it to nova:
mkdir /usr/lib/python3.9/site-packages/instances
chown root.nova /usr/lib/python3.9/site-packages/instances

chown -R nova.nova /lib/python3.9/site-packages/instances/

systemctl start openstack-nova-compute.service

8. Register the compute node

openstack compute service list --service nova-compute

Run host discovery (controller node):
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Restart the service on the compute node:
systemctl restart openstack-nova-compute.service

Verify on the controller node (all openstack commands must be run on the controller):
openstack compute service list
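The upstream guide also offers a final consistency check, run on the controller:

nova-status upgrade check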

VI. Neutron Deployment (Network Node)

1. Create the database

mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
quit

Create the user, service, and endpoints:
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron  --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

2. Configure layer-3 networking (controller node)

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

Back up the file:
cp /etc/neutron/neutron.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
Edit:
vim /etc/neutron/neutron.conf



[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp


Back up the file:
cp /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
Edit:
vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_vxlan]
vni_ranges = 1:10000
[securitygroup]
enable_ipset = true

Back up the file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Edit:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini


[linux_bridge]
physical_interface_mappings = external:eth0
[vxlan]
enable_vxlan = true
local_ip = 172.18.9.2
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver



vim /etc/sysctl.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Load the kernel module:
modprobe br_netfilter
Check that the settings apply:
sysctl -p
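The modprobe does not survive a reboot; one standard way to make it permanent (a sketch using the systemd modules-load mechanism, not in the original steps) on both nodes:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf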


Edit the layer-3 agent:
vim /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

Edit the DHCP agent:
vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Edit the metadata agent:
vim /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = zjc

Configure nova to use neutron (still on the controller):
vim /etc/nova/nova.conf

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = zjc

Create the plugin symlink:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

3. Populate the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Restart the nova API service:
systemctl restart openstack-nova-api.service
Enable the services at boot:
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
Create a restart script:
vim neutron-restart.sh

#!/bin/bash
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
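Run the script once now, and again after any later configuration change:

bash neutron-restart.sh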

4. Configure networking (compute node)

yum install openstack-neutron-linuxbridge ebtables ipset -y

Back up the file:
cp /etc/neutron/neutron.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
Edit:
vim /etc/neutron/neutron.conf


[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp




Back up the file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Edit:
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini


[linux_bridge]
physical_interface_mappings = external:eth0
[vxlan]
enable_vxlan = true
local_ip = 172.18.9.5
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Load the kernel module:
modprobe br_netfilter
Check that the settings apply:
sysctl -p


Configure nova to use neutron (compute node):
vim /etc/nova/nova.conf


[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron


Restart and check:
systemctl restart openstack-nova-compute.service
Watch the log:
tail -f /var/log/nova/nova-compute.log

systemctl enable --now neutron-linuxbridge-agent.service
Watch the log:
tail -f /var/log/neutron/linuxbridge-agent.log
Verify (on the controller node; expect the server-side agents plus one linuxbridge agent per node):
openstack network agent list

VII. Dashboard Deployment

1. Controller node deployment

yum install openstack-dashboard -y

Edit the configuration file:
vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ["*"]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
WEBROOT = "/dashboard"
TIME_ZONE = "Asia/Shanghai"
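The upstream install guide also points the dashboard at keystone explicitly; if logins fail, adding this line (assumed from the upstream guide, not in the original text) may help:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST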




Ensure /usr/bin/python exists (the dashboard scripts expect it):
cd /usr/bin
ln -sv python3.9 python

systemctl restart httpd.service memcached.service

Check the port:
ss -tnl | grep 80

The frontend page and log paths are defined in /etc/httpd/conf.d/openstack-dashboard.conf.

Access the dashboard at: http://172.18.9.2/dashboard

If the page fails to load, review /etc/httpd/conf.d/openstack-dashboard.conf (adding WSGIApplicationGroup %{GLOBAL} is a common fix) and restart httpd.

VIII. Cinder Deployment (Storage Node)

1. Create the database (controller node)

mysql
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder123';
quit

2. Create the user and services

openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

3. Install the service and edit the configuration

yum install openstack-cinder -y

Back up the file:
cp /etc/cinder/cinder.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
Edit:
vim /etc/cinder/cinder.conf


[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 172.18.9.2
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

Populate the database:
su -s /bin/sh -c "cinder-manage db sync" cinder

vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne
Restart the nova API:
systemctl restart openstack-nova-api.service
Enable and start the services:
systemctl enable --now openstack-cinder-api.service openstack-cinder-scheduler.service
Check the listening port:
ss -tnl | grep 8776

4. Storage node configuration

Create the LVM physical volume and volume group (adjust /dev/sdb to the actual data disk):
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
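Optionally confirm the physical volume and volume group (simple verification, not in the original steps):

pvs
vgs   # cinder-volumes should be listed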

Restrict LVM scanning so only this disk is used:
Back up the file:
cp /etc/lvm/lvm.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf
Edit:
vim /etc/lvm/lvm.conf


devices {
...
filter = [ "a/sdb/", "r/.*/"]
}

Install:
yum install openstack-cinder targetcli python-keystone -y

Back up the file:
cp /etc/cinder/cinder.conf{,.bak}
Keep only the effective lines:
grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
Edit:
vim /etc/cinder/cinder.conf

[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 172.18.9.5
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp


Start the services:
systemctl enable --now openstack-cinder-volume.service target.service

Verify on the controller node (if a service shows as down, check that the node clocks are synchronized):
openstack volume service list
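As an end-to-end check, a small test volume can be created on the controller (testvol is an arbitrary example name):

openstack volume create --size 1 testvol
openstack volume list   # the volume should become available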
