OpenStack Train Deployment (1)


**Create the nova user**



openstack user create --domain default --password NOVA_PASS nova


**Add the admin role to the nova user**



openstack role add --project service --user nova admin


**Create the nova service entity**



openstack service create --name nova --description "OpenStack Compute" compute


**Create the Compute API service endpoints**



openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
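The three calls above differ only in the interface name, so when scripting the deployment they can be generated by a loop. A minimal sketch that only prints the commands (drop the leading `echo` to actually create the endpoints):

```shell
# Dry run: print one endpoint-create command per interface.
# Remove the leading "echo" to execute them for real.
for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne \
        compute "$iface" http://controller:8774/v2.1
done
```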


**Install the nova packages**



yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y


**Edit the nova configuration file /etc/nova/nova.conf**



cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
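This back-up-then-strip pattern recurs for every configuration file in this guide, so it can be wrapped in a small helper function (`strip_conf` is a hypothetical name, not part of openstack-utils):

```shell
# strip_conf FILE: keep FILE.bak as a backup, then rewrite FILE
# without blank lines and without lines containing comments
strip_conf() {
    cp -a "$1" "$1.bak"
    grep -Ev '^$|#' "$1.bak" > "$1"
}

# Example: strip_conf /etc/nova/nova.conf
```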

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.10
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS

openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'

openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS
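For orientation, each `openstack-config --set` call writes one key into an INI section of the target file. The `[vnc]` and `[placement]` commands above, for example, produce a nova.conf fragment equivalent to this:

```ini
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```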


**Populate the nova databases**



su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova


**Verify that cell0 and cell1 are registered correctly**



su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova


**Start the nova compute services and enable them at boot**



systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service


**Check that the nova services are running**



netstat -tnlup|egrep '8774|8775'
curl http://controller:8774


### 7.2 Install the nova compute service (computel01 compute node, 192.168.0.20)


**Install packages**



yum install centos-release-openstack-train -y
yum install openstack-nova-compute -y
yum install openstack-utils -y


**Edit the nova configuration file /etc/nova/nova.conf on the compute node**



cp /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.20
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_PASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_PASS


**Determine whether the compute node supports hardware acceleration for virtual machines**



egrep -c '(vmx|svm)' /proc/cpuinfo

# If this command returns a value other than 0, the compute node supports hardware acceleration and the setting below is not needed.
# If it returns 0, the compute node does not support hardware acceleration, and libvirt must be configured to use QEMU instead of KVM by editing the [libvirt] section of /etc/nova/nova.conf:
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
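The check and the conditional setting can be folded into one small function. A sketch (`pick_virt_type` is a hypothetical helper, not part of the official guide) that takes the cpuinfo path as a parameter so it can be tested against any file:

```shell
# pick_virt_type FILE: print "kvm" if FILE contains vmx/svm CPU flags, else "qemu"
pick_virt_type() {
    if [ "$(grep -Ec '(vmx|svm)' "$1")" -gt 0 ]; then
        echo kvm
    else
        echo qemu
    fi
}

# On a real compute node:
# openstack-config --set /etc/nova/nova.conf libvirt virt_type "$(pick_virt_type /proc/cpuinfo)"
```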


**Start the nova service and related services on the compute node, and enable them at boot**



# If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error may indicate that the firewall on the controller node is blocking access to port 5672. Open port 5672 on the controller node and restart the services on the compute node.
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service


**Verify the compute node from the control node (controller)**



[root@controller ~]# openstack compute service list --service nova-compute


**Discover compute hosts on the control node**



# Each time a new compute node is added, you must run "nova-manage cell_v2 discover_hosts" on the controller node to register it:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

# Alternatively, set an appropriate discovery interval so new compute nodes are registered automatically:
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 600

systemctl restart openstack-nova-api.service


### 7.3 Verify the nova service on the control node


**controller control node, 192.168.0.10**


**List the service components to verify the successful launch and registration of each process**



openstack compute service list


**List API endpoints in the Identity service to verify connectivity with the Identity service**



openstack catalog list


**List images in the Image service to verify connectivity with the Image service**



openstack image list


**Check that cells and the placement API are functioning properly**



nova-status upgrade check


==================================================


## 8. neutron


https://docs.openstack.org/neutron/train/install/  
 ![Neutron architecture and components, https://blog.51cto.com/11555417/2438097](https://upload-images.jianshu.io/upload_images/16952149-36c9e5390c2a1290.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)


### 8.1 Install the neutron network service (controller control node, 192.168.0.10)


**Create the neutron database**



mysql -uroot
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
flush privileges;
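The interactive session above can also be scripted non-interactively with a here-document. A sketch, shown with `cat` so it only prints the SQL (replace `cat` with `mysql -uroot` to execute it; `IF NOT EXISTS` is added here so re-runs are harmless):

```shell
# Dry run: print the SQL that creates the neutron database and grants.
# Replace "cat" with "mysql -uroot" to run it for real.
cat <<'EOF'
CREATE DATABASE IF NOT EXISTS neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
EOF
```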


**Create the neutron user**



openstack user create --domain default --password NEUTRON_PASS neutron


**Add the admin role to the neutron user**



openstack role add --project service --user neutron admin


**Create the neutron service entity**



openstack service create --name neutron --description "OpenStack Networking" network


**Create the neutron service endpoints**



openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696


**Install the neutron packages**



yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

* openstack-neutron: the neutron-server package
* openstack-neutron-ml2: the ML2 plugin package
* openstack-neutron-linuxbridge: packages for the Linux bridge network provider
* ebtables: firewall-related package

**Edit the neutron configuration file /etc/neutron/neutron.conf**



# Configure layer-2 networking
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS


**ML2 plugin configuration file ml2\_conf.ini**



cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true


**Configure the Linux bridge agent**



> The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.
> Edit the configuration file `/etc/neutron/plugins/ml2/linuxbridge_agent.ini`.



# In the official configuration documentation:
# PROVIDER_INTERFACE_NAME refers to the eth0 NIC, i.e. the interface connected to the external (provider) network
# OVERLAY_INTERFACE_IP_ADDRESS refers to the IP address on this node that carries overlay (tunnel) traffic

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.0.10
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# Set the Linux kernel bridge parameters to 1
echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf
# Enable bridge support by loading the br_netfilter kernel module
modprobe br_netfilter
sysctl -p
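One caveat with the `>>` redirections above: re-running them appends duplicate lines to /etc/sysctl.conf. A guarded append (`append_once` is a hypothetical helper) only writes a line when it is not already present verbatim:

```shell
# append_once LINE FILE: append LINE to FILE only if FILE does not already contain it
append_once() {
    grep -qxF "$1" "$2" || echo "$1" >> "$2"
}

# append_once 'net.bridge.bridge-nf-call-iptables=1' /etc/sysctl.conf
# append_once 'net.bridge.bridge-nf-call-ip6tables=1' /etc/sysctl.conf
```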


**Configure the layer-3 (L3) agent, which provides routing and NAT services for self-service virtual networks**



# Configure layer-3 networking
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge


**Configure the DHCP agent, which provides DHCP services for virtual networks**



# Edit the configuration file /etc/neutron/dhcp_agent.ini
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true


**Configure the metadata agent**



# The metadata agent provides configuration information, such as credentials, to instances
# Edit /etc/neutron/metadata_agent.ini and set METADATA_SECRET as the metadata shared secret
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET


**Configure the nova service on the control node to interact with the network service**



# Edit the configuration file /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET


**Create a symbolic link pointing to the ML2 plugin configuration file**



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
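`ln -s` fails with "File exists" if this step is repeated. A guarded variant (a sketch; `link_once` is a hypothetical helper) creates the link only when the destination path is absent:

```shell
# link_once SRC DST: create symlink DST -> SRC only if DST does not exist yet
link_once() {
    [ -e "$2" ] || ln -s "$1" "$2"
}

# link_once /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```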


**Populate the database**



su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron


**Restart the nova API service**



systemctl restart openstack-nova-api.service


**Start the neutron services and enable them at boot**



systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service


**Because the layer-3 (L3) service was configured, the L3 agent must also be started**



systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service


### 8.2 Install the neutron network service on the compute node (computel01 compute node, 192.168.0.20)


**Install components**



yum install openstack-neutron-linuxbridge ebtables ipset -y


**Edit the main neutron configuration file /etc/neutron/neutron.conf**



cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp


**Configure the Linux bridge agent**



cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.0.20
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver


**Set the Linux kernel bridge parameters to 1**



echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf
modprobe br_netfilter
sysctl -p


**Configure the nova service on the compute node to use the network service**



# Edit the nova configuration file /etc/nova/nova.conf and add the neutron section

openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS


**Restart the nova service on the compute node**



systemctl restart openstack-nova-compute.service


**Start the neutron Linux bridge agent and enable it at boot**



systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service


**Back on the control node, verify the Neutron network service (controller control node, 192.168.0.10)**



# List loaded extensions to verify that the neutron-server process started successfully
[root@controller ~]# openstack extension list --network

# List agents to verify they registered successfully
[root@controller ~]# openstack network agent list




---


### 8.3 Optional: install a dedicated neutron network node (neutron01 network node, 192.168.0.30)



> The network layout follows the self-service (tenant) network option in the official documentation.


**Configure system parameters**



echo 'net.ipv4.ip_forward = 1' >>/etc/sysctl.conf
sysctl -p


**Install the Train release yum repository**



yum install centos-release-openstack-train -y


**Install the client**



yum install python-openstackclient -y


**Install components**



yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables openstack-utils -y


**Edit the neutron configuration file /etc/neutron/neutron.conf**



# Configure layer-2 networking
cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password NOVA_PASS


**ML2 plugin configuration file ml2\_conf.ini**



cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true


**Configure the Linux bridge agent**



# The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups
# Edit the configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini

# In the official configuration documentation:
# PROVIDER_INTERFACE_NAME refers to the eth0 NIC, i.e. the interface connected to the external (provider) network
# OVERLAY_INTERFACE_IP_ADDRESS refers to the IP address on this node that carries overlay (tunnel) traffic

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.0.30
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

# Set the Linux kernel bridge parameters to 1
echo 'net.bridge.bridge-nf-call-iptables=1' >>/etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >>/etc/sysctl.conf

# Enable bridge support by loading the br_netfilter kernel module
modprobe br_netfilter
sysctl -p


**Configure the layer-3 (L3) agent, which provides routing and NAT services for self-service virtual networks**



# Configure layer-3 networking
cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge


**Configure the DHCP agent, which provides DHCP services for virtual networks**



# Edit the configuration file /etc/neutron/dhcp_agent.ini
cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true


**Configure the metadata agent**



# The metadata agent provides configuration information, such as credentials, to instances
# Edit /etc/neutron/metadata_agent.ini and set METADATA_SECRET as the metadata shared secret
cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET


**Create a symbolic link pointing to the ML2 plugin configuration file**



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini


**Populate the database**



su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron




---


**On the controller node, configure the nova service to interact with the network node**  
 **If the network node is installed as a separate node, add the settings below. If the neutron section was already added to /etc/nova/nova.conf on the control node while configuring the compute node's network service, it does not need to be added again.**



openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696 # this line is not in the official documentation
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_PASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

# Restart the nova API service on the controller node
systemctl restart openstack-nova-api.service




---


**Back on the network node, start the neutron services and enable them at boot**



systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service


**Because the layer-3 (L3) service was configured, the L3 agent must also be started**



systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service


**Verify the Neutron network service again from the control node (controller control node, 192.168.0.10)**



# List loaded extensions to verify that the neutron-server process started successfully
[root@controller ~]# openstack extension list --network

# List agents to verify they registered successfully
[root@controller ~]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 44624896-15d1-4029-8ac1-e2ba3f850ca6 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 50b90b02-b6bf-4164-ae29-a20592d6a093 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 52761bf6-164e-4d91-bcbe-01a3862b0a4e | DHCP agent         | neutron01  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 82780de2-9ace-4e24-a150-f6b6563d7fc8 | Linux bridge agent | computel01 | None              | :-)   | UP    | neutron-linuxbridge-agent |
| b22dfdda-fcc7-418e-bdaf-6b89e454ee83 | Linux bridge agent | neutron01  | None              | :-)   | UP    | neutron-linuxbridge-agent |
| bae84064-8cf1-436a-9cb2-bf9f906a9357 | Metadata agent     | neutron01  | None              | :-)   | UP    | neutron-metadata-agent    |
| cbd972ef-59f2-4fba-b3b3-2e12c49c5b03 | L3 agent           | neutron01  | nova              | :-)   | UP    | neutron-l3-agent          |
| dda8af2f-6c0b-427a-97f7-75fd1912c60d | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| f2193732-9f88-4e87-a82c-a81e1d66c2e0 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+


=====================================================


## 9. Horizon


https://docs.openstack.org/horizon/train/install/



> The OpenStack dashboard service's project name is Horizon. The only service it requires is the Identity service (keystone), and it is written in Python on the Django web framework.


**Installing the Train release of Horizon has the following requirements**


* Python 2.7, 3.6, or 3.7
* Django 1.11, 2.0, or 2.2
* Django 2.0 and 2.2 support is experimental in the Train release
* The Ussuri release (the next release after Train) will use Django 2.2 as the primary Django version, and Django 2.0 support will be removed.


**Install the dashboard service horizon on the compute node (computel01, 192.168.0.20)**



> **Horizon requires Apache; to avoid interfering with the Apache instance used by keystone and other services on the control node, it is installed on the compute node here. Before installing, confirm that the previously installed services start normally. (It can also be deployed on the control node by following the official documentation.)**



# Install the packages
yum install openstack-dashboard memcached python-memcached -y


**Edit the memcached configuration file**



sed -i '/OPTIONS/c\OPTIONS="-l 0.0.0.0,::1"' /etc/sysconfig/memcached
systemctl restart memcached.service
systemctl enable memcached.service


**Edit the configuration file /etc/openstack-dashboard/local\_settings**



cp -a /etc/openstack-dashboard/local_settings{,.bak}
grep -Ev '^$|#' /etc/openstack-dashboard/local_settings.bak >/etc/openstack-dashboard/local_settings


**Do not copy the comments below into the configuration file; they only explain each setting. The complete modified file content follows further down.**



[root@computel01 ~]# vim /etc/openstack-dashboard/local_settings
# Configure the dashboard to use OpenStack services on the controller node
OPENSTACK_HOST = "controller"

# Hosts allowed to access the dashboard; accepting all hosts is insecure and should not be used in production
ALLOWED_HOSTS = ['*']
#ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

# Configure the memcached session storage service
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}

# Enable Identity API version 3
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

# Enable support for domains
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

# Configure API versions
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

# Configure Default as the default domain for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

# Configure user as the default role for users created through the dashboard
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

# If you chose networking option 1, disable support for layer-3 network services; with option 2, they can be enabled
OPENSTACK_NEUTRON_NETWORK = {
    # Auto-allocated networks
    'enable_auto_allocated_network': False,
    # Neutron distributed virtual router (DVR)
    'enable_distributed_router': False,
    # Floating IP topology check
    'enable_fip_topology_check': False,
    # Highly available router mode
    'enable_ha_router': False,
    # The following three are deprecated; the official configuration leaves them disabled
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    # IPv6 networking
    'enable_ipv6': True,
    # Neutron quota support
    'enable_quotas': True,
    # RBAC policy support
    'enable_rbac_policy': True,
    # Router menu and floating IP features; enable if the Neutron deployment supports layer-3
    'enable_router': True,
    # Default DNS name servers
    'default_dns_nameservers': [],
    # Provider network types offered when creating networks
    'supported_provider_types': ['*'],
    # Segmentation ID ranges for provider networks; applies only to the VLAN, GRE, and VXLAN network types
    'segmentation_id_range': {},
    # Additional provider network types
    'extra_provider_types': {},
    # Supported VNIC types, used with the port binding extension
    #'supported_vnic_types': ['*'],
    # Physical networks
    #'physical_networks': [],
}

# Set the time zone to Asia/Shanghai
TIME_ZONE = "Asia/Shanghai"


**The complete modified configuration file**



[root@computel01 ~]# cat /etc/openstack-dashboard/local_settings|head -45
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
ALLOWED_HOSTS = ['*']
LOCAL_PATH = '/tmp'
SECRET_KEY='f8ac039815265a99b64f'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
'enable_auto_allocated_network': False,
'enable_distributed_router': False,
'enable_fip_topology_check': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_ipv6': True,
'enable_quotas': True,
'enable_rbac_policy': True,
'enable_router': True,
'default_dns_nameservers': [],
'supported_provider_types': ['*'],
'segmentation_id_range': {},
'extra_provider_types': {},
'supported_vnic_types': ['*'],
'physical_networks': [],
}
TIME_ZONE = "Asia/Shanghai"
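Since local_settings is plain Python, a quick syntax check after trimming it catches mistakes before Apache does. A sketch (`check_settings` is a hypothetical helper; `python3` is used here so the check is portable, on CentOS 7 substitute the system python):

```shell
# Byte-compile the settings file without executing it; a syntax error here
# would otherwise surface only as an Apache startup failure.
check_settings() {
    python3 -m py_compile "$1" && echo "OK: $1"
}
```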


**Regenerate Apache's dashboard configuration file**



cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf


**If the dashboard cannot be accessed normally, perform the following steps**



#Create a symlink to the policy files (policy.json); otherwise logging in to the dashboard produces permission errors and a garbled display
ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

#If the following line is not already in /etc/httpd/conf.d/openstack-dashboard.conf, add it
WSGIApplicationGroup %{GLOBAL}


**Restart the Apache and memcached services on the compute01 compute node**



systemctl restart httpd.service memcached.service
systemctl enable httpd.service memcached.service


**Verify access**



> 
> Browse to the dashboard at http://192.168.0.20 (note: unlike earlier releases, no /dashboard suffix is appended)  
>  Authenticate with the admin or myuser user and default-domain credentials.
> 
> 
> 



Domain: default
Username: admin
Password: ADMIN_PASS




---


**Login page**



> 
> ![Login page](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1mZWZjNTllMzAyNjA0YjQ0LnBuZw?x-oss-process=image/format,png)
> 
> 
> 




---


**Page after a successful login**



> 
> ![Page after a successful login](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1iZTc3ZTIyYjA5ZjAzZTIwLnBuZw?x-oss-process=image/format,png)
> 
> 
> 




---


==================================================


## 10. Creating virtual networks and launching instances


* <https://docs.openstack.org/install-guide/launch-instance.html#block-storage>
* [OpenStack networking operations, a 51CTO blog post]( )
* [A recommended blog post on launching instances]( )
* [Two ways to create virtual networks]( )



> 
> Creating networks inside VMware virtual machines can fail in unpredictable ways; as a workaround, the administrator can create the admin user's network environment through the dashboard
> 
> 
> 


### 10.1 Option 1: Create a public provider network



> 
> Create it as the admin user
> 
> 
> 



source ~/admin-openrc

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider



#Parameter explanations:
--share allows all projects to use the virtual network
--external defines the virtual network as external; to create an internal network, use --internal instead. The default is internal
--provider-physical-network provider
#names the provider of the physical network; "provider" corresponds to the neutron configuration files below. It is a label and can be changed, but the two places must match
#parameters in the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = provider
[linux_bridge]
physical_interface_mappings = provider:eth0
--provider-network-type flat declares the network type as flat, i.e. instances attached to this network are on the same segment as the physical network, with no VLAN or similar features.
The final provider argument specifies the network name


**Create a subnet 192.168.0.0/24 on the network; the subnet maps to the real physical network**



openstack subnet create --network provider \
--allocation-pool start=192.168.0.195,end=192.168.0.210 \
--dns-nameserver 223.5.5.5 --gateway 192.168.0.254 \
--subnet-range 192.168.0.0/24 provider

#Parameter explanations:
--network provider specifies the parent network
--allocation-pool start=192.168.0.195,end=192.168.0.210 specifies the subnet's first and last allocatable addresses
--dns-nameserver 223.5.5.5 specifies the DNS server address
--gateway 192.168.0.254 specifies the gateway address
--subnet-range 192.168.0.0/24 specifies the subnet's CIDR
The final provider argument specifies the subnet name
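A quick arithmetic check of how many addresses an `--allocation-pool` spans can catch off-by-one planning mistakes. A sketch (`ip2int` and `pool_size` are hypothetical helpers, not OpenStack commands):

```shell
# Convert a dotted-quad IPv4 address to an integer, then count the
# inclusive range between pool start and end.
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
pool_size() { echo $(( $(ip2int "$2") - $(ip2int "$1") + 1 )); }

pool_size 192.168.0.195 192.168.0.210   # prints 16
```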


**List the created networks**



openstack network list


**List the created subnets**



openstack subnet list


### 10.2 Option 2: Create a private self-service network for a regular tenant



> 
> A self-service network, also called a tenant or project network, is created by an OpenStack tenant. It is fully virtual and private to that tenant: it is reachable only inside the network itself and cannot be shared between tenants
> 
> 
> 


**Create the network as a regular tenant**



source ~/myuser-openrc
openstack network create selfservice



> 
> Non-privileged users generally cannot pass additional parameters to this command. The service selects parameters automatically using information from the following configuration file
> 
> 
> 
> ```
> cat /etc/neutron/plugins/ml2/ml2_conf.ini
> 	[ml2]
> 	type_drivers = flat,vlan,vxlan
> 	tenant_network_types = vxlan
> 	[ml2_type_vxlan]
> 	vni_ranges = 1:1000
> 
> ```
> 
> 


**Create a subnet 172.18.1.0/24**



openstack subnet create --network selfservice \
--dns-nameserver 223.5.5.5 --gateway 172.18.1.1 \
--subnet-range 172.18.1.0/24 selfservice

#Parameter explanations:
--network selfservice specifies the parent network
--allocation-pool start=172.18.1.2,end=172.18.1.200
can specify the subnet's first and last allocatable addresses; without this parameter, addresses from 172.18.1.2 through 172.18.1.254 are allocated
--dns-nameserver 223.5.5.5 specifies the DNS server address
--gateway 172.18.1.1 specifies the gateway address
--subnet-range 172.18.1.0/24 specifies the subnet's CIDR
The final selfservice argument specifies the subnet name


**List the created networks**



openstack network list


**List the created subnets**



openstack subnet list


**Create a router as the regular tenant myuser**



source ~/myuser-openrc
openstack router create router01


**View the created router**



openstack router list


**Add the tenant self-service subnet as an interface on the router**



openstack router add subnet router01 selfservice


**Set a gateway on the public provider network for the router**



openstack router set router01 --external-gateway provider


**View the network namespaces: one qrouter namespace and two qdhcp namespaces**


[A blog post explaining ip netns]( )



[root@controller ~]# ip netns
qrouter-919685b9-24c7-4859-b793-48a2add1fd30 (id: 2)
qdhcp-a7acab4d-3d4b-41f8-8d2c-854fb1ff6d4f (id: 0)
qdhcp-926859eb-1e48-44ed-9634-bcabba5eb8b8 (id: 1)

#After locating the virtual router with the ip netns command, ping the real physical network's gateway from inside it
#A successful ping proves that the OpenStack internal virtual network is correctly connected to the real physical network
[root@controller ~]# ip netns exec qrouter-919685b9-24c7-4859-b793-48a2add1fd30 ping 192.168.0.254
PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=1 ttl=128 time=0.570 ms
64 bytes from 192.168.0.254: icmp_seq=2 ttl=128 time=0.276 ms
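The gateway test above can be repeated from every qrouter namespace at once. A sketch (192.168.0.254 is this deployment's physical gateway; requires root on the controller node for `ip netns exec`):

```shell
# Extract every qrouter-* namespace name from `ip netns` and ping the
# physical gateway from inside each one.
for ns in $(ip netns | awk '/^qrouter-/{print $1}'); do
    ip netns exec "$ns" ping -c 1 -W 1 192.168.0.254 >/dev/null \
        && echo "$ns: gateway reachable" \
        || echo "$ns: gateway NOT reachable"
done
```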


**Verify the created network and the subnet's IP address ranges; switch back to the admin user**



source ~/admin-openrc


**List the ports on the router to determine the gateway IP address on the provider network**



openstack port list --router router01

…|ip_address='172.18.1.1', |…| ACTIVE
…|ip_address='192.168.0.209', |…| ACTIVE


**Verify by pinging this IP address from the controller node or any host on the physical provider network**



[root@controller ~]# ping 192.168.0.209
PING 192.168.0.209 (192.168.0.209) 56(84) bytes of data.
64 bytes from 192.168.0.209: icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from 192.168.0.209: icmp_seq=2 ttl=64 time=0.066 ms


**Create an m1.nano flavor**



#Flavor: an instance hardware template, defining RAM and disk sizes, number of vCPUs, etc.
#Create a flavor with 1 vCPU, 128 MB of RAM, and a 1 GB disk to use for testing with the CirrOS image

openstack flavor create --id 0 --vcpus 1 --ram 128 --disk 1 m1.nano


**View the created flavor**



openstack flavor list


**Create the tenant's key pair (optional)**



#In production, avoid regular password logins; add the public key before launching instances
#The key was already generated while configuring the base environment at the start of this document, so it can be added directly
source ~/myuser-openrc
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

#View the created key pair
openstack keypair list


**Configure security group rules**



#By default, the default security group applies to all instances and includes firewall rules that deny remote access. For Linux images such as CirrOS, allowing at least ICMP (ping) and SSH is recommended.
#Permit the ICMP protocol (the ping command)
openstack security group rule create --proto icmp default

#Permit SSH access (port 22)
openstack security group rule create --proto tcp --dst-port 22 default

#List security groups
openstack security group list

#List security group rules
openstack security group rule list
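The two rule commands above follow one pattern and can be generated from a list. A dry-run sketch (the commands are only echoed; remove the `echo` to actually apply them):

```shell
# Print the security-group-rule commands that would be run, one per rule.
for rule in "--proto icmp" "--proto tcp --dst-port 22"; do
    echo openstack security group rule create $rule default
done
```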


### 10.3 Launch an instance



#To launch an instance, you must specify at least the flavor, image name, network, security group, key, and instance name
#With the network environment deployed here, instances can be launched on both the provider network and the self-service network

#List the available flavors
openstack flavor list

#List the available images
openstack image list

#List the available networks
openstack network list

#List the security groups
openstack security group list


#### 10.3.1 Launch an instance on the public provider network


**Create an instance on the public provider network (this can also be done in the dashboard; mastering the command line is recommended)**



#net-id: the ID of an available network; here we use the public provider network's ID. Instance name: provider-vm1
source ~/myuser-openrc
openstack server create --flavor m1.nano --image cirros \
--nic net-id=926859eb-1e48-44ed-9634-bcabba5eb8b8 --security-group default \
--key-name mykey provider-vm1


**View the created instance**



[root@controller ~]# openstack server list
+--------------------------------------+--------------+--------+------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------+--------+------------------------+--------+---------+
| 9c2c558f-0573-4483-8031-ec3ba9c41f57 | provider-vm1 | ACTIVE | provider=192.168.0.199 | cirros | m1.nano |
+--------------------------------------+--------------+--------+------------------------+--------+---------+


**Access the instance through the virtual console**



openstack console url show provider-vm1


**Log in to the cirros instance and verify access to the public provider network gateway**



$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1): 56 data bytes
64 bytes from 192.168.0.1: seq=0 ttl=64 time=5.128 ms


**Verify access to the internet**



$ ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=128 time=17.904 ms


**Access the instance over SSH from the controller node or any other host on the provider network**



[root@controller ~]# ssh cirros@192.168.0.199
$ hostname
provider-vm1
$ pwd
/home/cirros




---


#### 10.3.2 Launch an instance on the tenant self-service network


**Create an instance on the tenant self-service network (this can also be done in the dashboard)**



#net-id: the ID of an available network; here we use the tenant self-service network's ID. Instance name: selfservice-vm1
source ~/myuser-openrc
openstack server create --flavor m1.nano --image cirros \
--nic net-id=0e3e56b8-67be-4a83-89c4-b23880d7e688 --security-group default \
--key-name mykey selfservice-vm1


**View the created instances**



[root@controller ~]# openstack server list
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+
| a9397f81-9f4d-4130-b72c-d607060c2856 | selfservice-vm1 | ACTIVE | selfservice=172.18.1.22 | cirros | m1.nano |
| 9c2c558f-0573-4483-8031-ec3ba9c41f57 | provider-vm1 | ACTIVE | provider=192.168.0.199 | cirros | m1.nano |
+--------------------------------------+-----------------+--------+-------------------------+--------+---------+


**Access the instance through the virtual console**



openstack console url show selfservice-vm1


**Open the instance's console, log in to the cirros instance, and verify access to the self-service network gateway**



$ ping 172.18.1.1
PING 172.18.1.1 (172.18.1.1): 56 data bytes
64 bytes from 172.18.1.1: seq=0 ttl=64 time=25.527 ms


**Verify access to the internet**



$ ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=127 time=20.649 ms




---


**※ Access the tenant instance remotely over SSH from the controller node**


**Create a floating IP address on the public provider network**



openstack floating ip create provider



> 
> **Creating it in the dashboard**
> 
> 
> ![Creating a floating IP in the dashboard](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1lZThlYWU3NDdmNjNiNDc2LnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**View the created floating IP**



[root@controller ~]# openstack floating ip list
+--------------------------------------+---------------------+------------------+------+--------------------------------------+-------
| ID | Floating IP Address | Fixed IP Address | Port | Floating Network | Projec
+--------------------------------------+---------------------+------------------+------+--------------------------------------+-------
| f31e429a-4ebd-407a-ae78-220311008f4f | 192.168.0.198 | None | None | 926859eb-1e48-44ed-9634-bcabba5eb8b8 | 6535a5
+--------------------------------------+---------------------+------------------+------+--------------------------------------+-------


**Associate the floating IP address with the instance**



openstack server add floating ip selfservice-vm1 192.168.0.198



> 
> **Associating it in the dashboard**
> 
> 
> ![Associating a floating IP in the dashboard](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1mNTIwYTRkNGNkMDdkNjRkLnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**Check the floating IP binding status**



[root@controller ~]# openstack server list
+--------------------------------------+-----------------+--------+-----------------------------------------+--------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+-----------------------------------------+--------+---------+
| a9397f81-9f4d-4130-b72c-d607060c2856 | selfservice-vm1 | ACTIVE | selfservice=172.18.1.22, 192.168.0.198 | cirros | m1.nano |
| 9c2c558f-0573-4483-8031-ec3ba9c41f57 | provider-vm1 | ACTIVE | provider=192.168.0.199 | cirros | m1.nano |
+--------------------------------------+-----------------+--------+-----------------------------------------+--------+---------+


**Verify connectivity to the instance through its floating IP address from the controller node or any host on the public provider network**



[root@controller ~]# ping 192.168.0.198
PING 192.168.0.198 (192.168.0.198) 56(84) bytes of data.
64 bytes from 192.168.0.198: icmp_seq=1 ttl=63 time=22.0 ms


**Access the instance over SSH from the controller node or any other host on the provider network**



[root@controller ~]# ssh cirros@192.168.0.198
$ hostname
selfservice-vm1
$ pwd
/home/cirros




---


**The network topology created by this installation guide**



> 
> ![Network topology](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS04M2I4MWQwZGNmMTgyMWVmLnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**A new network topology: routers created for two independent tenant networks, with an external-network gateway set on each router**



> 
> [Reference: two ways to create virtual networks]( )
> 
> 
> ![Network topology 2](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS01YzIyN2U2MjYwM2QzNjYzLnBuZw?x-oss-process=image/format,png)
> 
> 
> 


#### 10.3.3 Troubleshooting notes


**Install the bridge management tool brctl to inspect the network**



yum install bridge-utils -y
brctl show


**How to reboot an instance: from the command line on the controller node, or from the dashboard**



source ~/myuser-openrc
openstack server list
nova reboot 1d2ad9d2-2af3-4bc5-8eae-a5a4721c6512


**If a soft reboot fails, use a hard reboot**



nova reboot --hard provider-vm1

nova reboot performs a soft reboot of the instance
nova reboot --hard performs a hard reboot of the instance
nova reset-state resets the instance's state
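The soft-then-hard fallback described above can be wrapped in one function. A sketch (`reboot_instance` is a hypothetical wrapper; assumes the nova CLI and credentials are available):

```shell
# Try a soft reboot first; if nova reports a failure, fall back to --hard.
reboot_instance() {
    nova reboot "$1" || nova reboot --hard "$1"
}
```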


**The web browser runs on a host that cannot resolve the controller hostname**



> 
> In the [vnc] section of nova.conf, replace controller with the controller node's IP address
> 
> 
> 



openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://192.168.0.10:6080/vnc_auto.html


**Error: the instance cannot boot because it cannot find a disk**  
 [Booting from Hard Disk… GRUB]( )



> 
> The fix: modify nova.conf on the compute node
> 
> 
> 



[root@computel ~]# vim /etc/nova/nova.conf
[libvirt]
cpu_mode = none
virt_type = qemu



> 
> Restart the nova compute service on the compute node
> 
> 
> 



[root@computel ~]# systemctl restart openstack-nova-compute.service


**No connectivity on the external network**



> 
> When testing with VMware virtual machines, add an extra NIC for the instances through the Virtual Network Editor; otherwise the external network you create will have no connectivity  
>  Most OpenStack deployments here run on virtual machines; note that the external NIC on the network node does not need an IP address configured.  
>  Since every virtual machine also needs internet access, add one more internet-facing NIC on top of the original network plan.
> 
> 
> https://www.aboutyun.com/forum.php?mod=viewthread&tid=13508  
>  https://www.aboutyun.com/forum.php?mod=viewthread&tid=13489&page=1&authorid=61  
>  https://www.aboutyun.com//forum.php/?mod=viewthread&tid=11722&extra=page%3D1&page=1&
> 
> 
> 




---


## 11. Cinder



> 
> ![](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS05N2Y0ZmM5ZTMyMTM4OTNmLnBuZw?x-oss-process=image/format,png)
> 
> 
> 



> 
> Cinder's core function is volume management: handling volumes, volume types, volume snapshots, and volume backups. It provides a unified interface over different backend storage devices; block storage vendors implement their drivers in Cinder so that OpenStack can integrate and manage their devices. nova and cinder work on similar principles.
> 
> 
> 


**Install the cinder block storage service**  
 https://docs.openstack.org/cinder/train/install/


[A detailed introduction to how cinder works]( )  
 [Storage management operations]( )  
 [The world of block storage from OpenStack's perspective]( )  
 [Distributed storage: an introduction to Ceph and its architecture (part 1)]( )  
 [Distributed storage: an introduction to Ceph and its architecture (part 2)]( )  
 [Three storage options (DAS, NAS, SAN) applied to database storage]( )  
 [Concepts and applications of the DAS, SAN, and NAS storage approaches]( )


The OpenStack Block Storage service provides block storage devices to instances through different backends. The Block Storage API and scheduler services run on the controller node. The volume service runs on one or more storage nodes. cinder serves instances with local storage or with SAN/NAS backends through the appropriate drivers.


### 11.1 Install the cinder block storage service (controller node, 192.168.0.10)


**Create the cinder database and grant privileges**



mysql -u root

create database cinder;

grant all privileges on cinder.* to 'cinder'@'%' identified by 'CINDER_DBPASS';
grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'CINDER_DBPASS';
flush privileges;


**Create the cinder user with the password CINDER\_PASS**



source ~/admin-openrc
openstack user create --domain default --password CINDER_PASS cinder


**As admin, add the admin role to the cinder user**



openstack role add --project service --user cinder admin


**Create the cinderv2 and cinderv3 service entities**



openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3


**Create the block storage service API endpoints**



> 
> The block storage service requires an endpoint for each service entity
> 
> 
> 



openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%(project_id)s
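The six endpoint commands above follow one pattern, so a nested loop can generate them. A dry-run sketch (the commands are only echoed; remove the `echo` to run them for real):

```shell
# For each service entity and interface, print the endpoint-create command;
# ${svc#volume} strips the "volume" prefix to recover the API version (v2/v3).
for svc in volumev2 volumev3; do
  for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne "$svc" "$iface" \
         "http://controller:8776/${svc#volume}/%(project_id)s"
  done
done
```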


**Install the cinder packages and modify the configuration file**



yum install openstack-cinder -y


**Edit the configuration file /etc/cinder/cinder.conf**



cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.0.10
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp


**Populate the block storage database**



su -s /bin/sh -c "cinder-manage db sync" cinder


**Configure the compute service to use block storage**



> 
> Edit the configuration file /etc/nova/nova.conf
> 
> 
> 



openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne


**Restart the nova compute API and cinder block storage services and enable them at boot**



systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service


**Verify on the controller node**



[root@controller ~]# cinder service-list
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| Binary | Host | Zone | Status | State | Updated_at | Cluster | Disabled Reason | Backend State |
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
| cinder-scheduler | controller | nova | enabled | up | 2020-04-26T09:58:18.000000 | - | - | |
+------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+


### 11.2 Install the cinder block storage volume service (storage node, 192.168.0.40)


**Use the default LVM volume backend for now; it will be switched to Ceph storage later**


**Install the LVM packages**



[root@cinder01 ~]# yum install lvm2 device-mapper-persistent-data -y


**Start the LVM metadata service and enable it at boot**



systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service


**Add a 100 GB disk; after rebooting the node, create the LVM physical volume /dev/sdb**



[root@cinder01 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.


**Create the LVM volume group**



[root@cinder01 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created


**Edit the configuration file /etc/lvm/lvm.conf**



> 
> In the devices section, add a filter that accepts only the /dev/sdb device and rejects all other devices
> 
> 
> 



[root@cinder01 ~]# vim /etc/lvm/lvm.conf
devices {
filter = [ "a/sdb/", "r/.*/" ]


**Install the Train yum repository and the cinder packages**



yum install centos-release-openstack-train -y
yum install openstack-cinder targetcli python-keystone openstack-utils -y


**Edit the configuration file /etc/cinder/cinder.conf**



cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.0.40
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_PASS
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm target_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm target_helper lioadm
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp


**Start the block storage volume services and enable them at boot**



systemctl restart openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service


**Verify on the controller node**



[root@controller ~]# source ~/admin-openrc
[root@controller ~]# openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2020-04-27T02:54:41.000000 |
| cinder-volume | cinder01@lvm | nova | enabled | up | 2020-04-27T02:54:01.000000 |
+------------------+--------------+------+---------+-------+----------------------------+


**These operations can also be performed in the dashboard**



> 
> ![](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1lNjA5ZDM0YzVkYTRhMTU3LnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**Create a 1 GB volume**



source ~/demo-openrc
openstack volume create --size 1 volume1



> 
> ![](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1lMzAzNzI4OTcwOTVmMWJhLnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**After a short while, the volume status should change from creating to available**



[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| 5e89f544-e204-436c-8d9c-25a77039796f | volume1 | available | 1 | |
+--------------------------------------+---------+-----------+------+-------------+



> 
> ![](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS05YTg5ZjQwNWUxZDcyNTNjLnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**Attach the volume to the provider-vm1 instance (this can also be done in the dashboard)**



openstack server add volume provider-vm1 volume1



> 
> ![](https://imgconvert.csdnimg.cn/aHR0cHM6Ly91cGxvYWQtaW1hZ2VzLmppYW5zaHUuaW8vdXBsb2FkX2ltYWdlcy8xNjk1MjE0OS1mNDA3OWI3YTUwM2IxNmI3LnBuZw?x-oss-process=image/format,png)
> 
> 
> 


**View the volume list**



[root@controller ~]# openstack volume list

+--------------------------------------+---------+--------+------+------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+---------+--------+------+------------------------------------------+
| 75011e60-33fc-4061-98dc-7028e477efc9 | volume1 | in-use | 1 | Attached to selfservice-vm1 on /dev/vdb |
+--------------------------------------+---------+--------+------+------------------------------------------+


**Access the instance over SSH**



> 
> Use the fdisk command to verify that the volume is present as the /dev/vdb block device
> 
> 
> 



[root@controller ~]# ssh cirros@192.168.0.198
$ sudo fdisk -l


**Partition and format the newly added /dev/vdb**



$ sudo fdisk /dev/vdb
Command (m for help): n #create a new partition
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p #create a primary partition
Partition number (1-4, default 1): #partition number, default 1
First sector (2048-2097151, default 2048): #first sector of the partition (where it starts); accept the default
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2097151, default 2097151): #last sector of the partition; the default uses the rest of the disk
Command (m for help): w #write the changes
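The interactive answers above (n, p, 1, default, default, w) can be scripted by piping them to fdisk. A sketch; because partitioning is destructive, the fdisk line is left commented until you are certain /dev/vdb is the right target:

```shell
# One answer per fdisk prompt: new, primary, partition 1, default start,
# default end, write. Blank lines accept the defaults.
answers='n\np\n1\n\n\nw\n'
# printf "$answers" | sudo fdisk /dev/vdb   # uncomment to apply (destructive!)
printf "$answers" | wc -l                   # sanity check: six answer lines
```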


**View the created primary partition**



$ ls /dev/vdb*
/dev/vdb /dev/vdb1



