Building and Using OpenStack

  • Contents and principles

1.0. System environment

1) For production use, physical servers are preferred; a virtual machine is sufficient to build and evaluate a test deployment

2) OS version: CentOS 7

3) Controller node: 192.168.48.165    Compute node (Nova): 192.168.48.164

1.1. Configure name resolution

1) Set the hostname
hostname openstack01.zuiyoujie.com
hostname
echo "openstack01.zuiyoujie.com"> /etc/hostname
cat /etc/hostname
2) Configure hostname resolution
vim /etc/hosts
-----------------------------------
192.168.48.165    openstack01 controller
192.168.48.164    openstack02 compute02 block02 object02
-----------------------------------
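The hosts entries above can be applied with an idempotent helper so that rerunning the setup on either node never duplicates lines. `add_host_entry` is a hypothetical sketch, exercised against a scratch file rather than /etc/hosts:

```shell
# add_host_entry FILE IP NAME... appends a hosts entry only when the IP is
# not already present, so reruns stay idempotent. FILE is a scratch copy
# here; point it at /etc/hosts for the real change.
add_host_entry() {
  local file=$1 ip=$2
  shift 2
  grep -q "^$ip[[:space:]]" "$file" 2>/dev/null \
    || printf '%s    %s\n' "$ip" "$*" >> "$file"
}

hosts_file=$(mktemp)
add_host_entry "$hosts_file" 192.168.48.165 openstack01 controller
add_host_entry "$hosts_file" 192.168.48.165 openstack01 controller   # no-op on rerun
add_host_entry "$hosts_file" 192.168.48.164 openstack02 compute02 block02 object02
cat "$hosts_file"
```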
1.2. Disable the firewall and SELinux
1) Stop and disable firewalld (iptables)
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
2) Disable SELinux
setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux
1.3. Configure time synchronization
1) Install the time service on the controller node
yum install chrony -y
2) Edit the configuration file and confirm it contains the following
vim /etc/chrony.conf
--------------------------------
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
allow 192.168.48.0/24
--------------------------------
3) Restart the chronyd service and enable it at boot
systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service
4) Set the time zone and sync the time
timedatectl set-timezone Asia/Shanghai
chronyc sources
timedatectl status
1.4. Configure yum repositories
1) Manually create the Aliyun OpenStack yum repo file
vim /etc/yum.repos.d/CentOS-OpenStack-Rocky.repo
----------------------------------
[centos-openstack-rocky]
name=CentOS-7 - OpenStack rocky
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[centos-openstack-rocky-test]
name=CentOS-7 - OpenStack rocky Testing
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=0
enabled=0
[centos-openstack-rocky-debuginfo]
name=CentOS-7 - OpenStack rocky - Debug
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[centos-openstack-rocky-source]
name=CentOS-7 - OpenStack rocky - Source
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[rdo-trunk-rocky-tested]
name=OpenStack rocky Trunk Tested
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/rdo-trunk-rocky-tested/
gpgcheck=0
enabled=0
-----------------------------------
2) Install the OpenStack-Rocky repository
yum install centos-release-openstack-rocky -y
 
yum clean all
yum makecache
 
3) Update packages
yum update -y
 
4) Install the OpenStack client packages
yum install python-openstackclient openstack-selinux -y
 
 

1.5. Install the database on the controller node

1) Install MariaDB
yum install mariadb mariadb-server MySQL-python python2-PyMySQL -y
2) Create the OpenStack database configuration file
vim /etc/my.cnf.d/mariadb_openstack.cnf
# add the following under [mysqld]
 
-----------------------------------
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
init-connect = 'SET NAMES utf8'
-----------------------------------
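If you prefer to create the drop-in non-interactively, the same settings can be written with a here-doc; the quoted 'EOF' keeps the single quotes in init-connect literal. This sketch targets a temp file so it can be dry-run first; the real path is /etc/my.cnf.d/mariadb_openstack.cnf:

```shell
# Write the MariaDB OpenStack drop-in non-interactively. A temp file is
# used for a dry run; redirect to the real path once the content checks out.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
init-connect = 'SET NAMES utf8'
EOF
grep -c '^init-connect' "$cnf"
```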
 
3) Start the database and enable it at boot
systemctl restart mariadb.service
systemctl status mariadb.service 
systemctl enable mariadb.service 
systemctl list-unit-files |grep mariadb.service
4) Initialize (secure) the database, then restart
/usr/bin/mysql_secure_installation
systemctl restart mariadb.service
5) Create the OpenStack databases and grant privileges
mysql -p123456
-----------------------------------
flush privileges;
show databases;
select user,host from mysql.user;
exit
-----------------------------------

 


1.6. Install the RabbitMQ message queue on the controller node

1) Install rabbitmq-server
yum install rabbitmq-server -y
2) Start rabbitmq and enable it at boot
systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service
systemctl enable rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service
3) Create the openstack message-queue account and set its password
rabbitmqctl add_user openstack openstack
# grant configure/write/read permissions on the default vhost "/"
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
4) Enable the rabbitmq_management plugin for web management
rabbitmq-plugins list
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service
rabbitmq-plugins list
lsof -i:15672
5) Browse to RabbitMQ to test it
URL: http://192.168.48.165:15672
# the default username and password are both guest

# the web UI can create users and manage permissions


 

1.7. Install Memcached on the controller node

1) Install Memcached for caching tokens
yum install memcached python-memcached -y
 
2) Edit the memcached configuration file
vim /etc/sysconfig/memcached
----------------------------------
OPTIONS="-l 127.0.0.1,::1,controller"
-----------------------------------
3) Start memcached and enable it at boot
systemctl start memcached.service
systemctl status memcached.service
netstat -anptl|grep memcached
systemctl enable memcached.service
systemctl list-unit-files |grep memcached.service

1.8. Install the etcd service on the controller node

1) Install etcd

yum install etcd -y

2) Edit the etcd configuration file
vim /etc/etcd/etcd.conf
-----------------------------------
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.48.165:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.48.165:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.48.165:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.48.165:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.48.165:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
----------------------------------
3) Start etcd and enable it at boot
systemctl start etcd.service
systemctl status etcd.service
netstat -anptl|grep etcd
systemctl enable etcd.service
systemctl list-unit-files |grep etcd.service
 

 

2.1. Create the keystone database on the controller node

1) Create the keystone database and grant privileges
mysql -p123456
--------------------------------
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
flush privileges;
show databases;
select user,host from mysql.user;
exit
--------------------------------
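The same CREATE DATABASE / GRANT boilerplate recurs below for glance, nova, and neutron. A small generator keeps the statements consistent; `make_db_sql` is a hypothetical helper whose output you review and then pipe into mysql:

```shell
# make_db_sql DB USER PASSWORD prints the CREATE DATABASE and GRANT
# statements used throughout this guide, once for localhost and once for %.
make_db_sql() {
  local db=$1 user=$2 pass=$3
  printf 'CREATE DATABASE %s;\n' "$db"
  for host in localhost '%'; do
    printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%s' IDENTIFIED BY '%s';\n" \
      "$db" "$user" "$host" "$pass"
  done
}

make_db_sql keystone keystone keystone
```

For example, `make_db_sql glance glance glance | mysql -p123456` would reproduce the glance block in section 3.1.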


2.2. Install the keystone packages on the controller node

1) Install the keystone packages
yum install openstack-keystone httpd mod_wsgi -y
yum install python-keystoneclient openstack-utils -y
# check the effective configuration (two equivalent ways)
egrep -v "^#|^$" /etc/keystone/keystone.conf
grep '^[a-z]' /etc/keystone/keystone.conf

 

2.3. Initialize and sync the keystone database

1) Sync the keystone database (44 tables)
su -s /bin/sh -c "keystone-manage db_sync" keystone
2) After syncing, test the connection
mysql -h192.168.48.165 -ukeystone -pkeystone -e "use keystone;show tables;"
mysql -h192.168.48.165 -ukeystone -pkeystone -e "use keystone;show tables;"|wc -l

 

2.5. Configure and start Apache (httpd)

1) Edit the main httpd configuration file

vim /etc/httpd/conf/httpd.conf +95
----------------------------------
ServerName controller
----------------------------------
2) Configure the keystone virtual host
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
3) Start httpd and enable it at boot
systemctl start httpd.service
systemctl status httpd.service
netstat -anptl|grep httpd
systemctl enable httpd.service
systemctl list-unit-files |grep httpd.service

2.6. Initialize the keystone identity service

1) Bootstrap the keystone admin user, service entity, and API endpoints
# the official guide uses the ADMIN_PASS placeholder; this deployment uses 123456
keystone-manage bootstrap --bootstrap-password 123456 \
    --bootstrap-admin-url http://controller:5000/v3/ \
    --bootstrap-internal-url http://controller:5000/v3/ \
    --bootstrap-public-url http://controller:5000/v3/ \
    --bootstrap-region-id RegionOne
# check the declared environment variables
env |grep OS_

 

openstack endpoint list
openstack project list
openstack user list

 

2.7. Create the standard keystone entities

1) Create a keystone domain named example

openstack domain create --description "An Example Domain" example

2) Create the service project that the OpenStack services will use

openstack project create --domain default --description "Service Project" service

3) Create the myproject project and its user and role

openstack project create --domain default --description "Demo Project" myproject

 

4) Create the myuser user in the default domain

openstack user create --domain default  --password-prompt myuser # prompt for the password interactively
openstack user create --domain default  --password=myuser myuser # or create the user with the password inline

 

5) Create the myrole role

openstack role create myrole

 

6) Add the myrole role to the myuser user in the myproject project

openstack role add --project myproject --user myuser myrole

2.8. Verify the keystone installation

1) Unset the bootstrap environment variables

unset OS_AUTH_URL OS_PASSWORD
env |grep OS_
2) Request an authentication token as the admin user

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

3) Request a token as a different user (myuser)

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
 

2.9. Create the OpenStack client environment scripts

1) Create the admin environment script

# conventionally named admin-openrc
cd /server/tools
vim keystone-admin-pass.sh
----------------------------------
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
----------------------------------
env |grep OS_
2) Create the client environment script for the ordinary user myuser
vim keystone-myuser-pass.sh
-------------------------------
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
-------------------------------
3) Test the environment script
source keystone-admin-pass.sh
4) Request an authentication token
openstack token issue

 

 

3.1. Install the glance image service on the controller node

1) Create the glance database

mysql -p123456
----------------------------------
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
flush privileges;
exit
----------------------------------

3.2. Register glance with keystone

1) Create the glance user in keystone

cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=glance glance
openstack user list

 

2) In keystone, grant the glance user the admin role on the service project

openstack role add --project service --user glance admin
3) Create the glance image service entity
openstack service create --name glance --description "OpenStack Image" image
openstack service list

 

4) Create the image service API endpoints

openstack endpoint create --region RegionOne image public http://192.168.48.165:9292
openstack endpoint create --region RegionOne image internal http://192.168.48.165:9292
openstack endpoint create --region RegionOne image admin http://192.168.48.165:9292
openstack endpoint list
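Each service needs the same endpoint created for the public, internal, and admin interfaces, so a small loop avoids copy-paste slips. `endpoint_cmds` is a hypothetical helper that only prints the commands for review; pipe the output to sh (with admin credentials sourced) to run them:

```shell
# endpoint_cmds SERVICE URL prints one endpoint-create command per
# interface; review the output, then pipe it to sh to execute.
endpoint_cmds() {
  local service=$1 url=$2
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $service $iface $url"
  done
}

endpoint_cmds image http://192.168.48.165:9292
```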

 

 

 

 

3.3. Install the glance packages

1) Check the Python version

python --version
# e.g. Python 2.7.5 on this deployment
2) Install the glance packages
yum install openstack-glance python-glance python-glanceclient -y
3) Quickly configure glance-api.conf with the following commands
openstack-config --set  /etc/glance/glance-api.conf database connection  mysql+pymysql://glance:glance@controller/glance
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken project_name service 
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set  /etc/glance/glance-api.conf keystone_authtoken password glance
openstack-config --set  /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set  /etc/glance/glance-api.conf glance_store stores  file,http
openstack-config --set  /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set  /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
4) Quickly configure glance-registry.conf with the following commands
openstack-config --set  /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set  /etc/glance/glance-registry.conf keystone_authtoken password glance
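The openstack-config calls above are equivalent to editing the ini file directly; for reference, the keystone_authtoken stanza they produce in glance-registry.conf looks like this (values from this deployment):

```ini
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
```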
# check the effective configuration
egrep -v "^#|^$" /etc/glance/glance-registry.conf

3.4. Sync the glance database

1) Initialize and sync the database for the glance image service

su -s /bin/sh -c "glance-manage db_sync" glance

2) After syncing, test the connection

mysql -h192.168.48.165 -uglance -pglance -e "use glance;show tables;"

 

3.5. Start the glance image service

1) Start the glance image service and enable it at boot

systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl list-unit-files |grep openstack-glance*
2) Other commands: restart, stop
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl stop openstack-glance-api.service openstack-glance-registry.service

 

3.6. Verify the glance installation

1) Download a test image

cd /server/tools
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

2) Get admin credentials

source keystone-admin-pass.sh
3) Upload the image to glance
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
4) List the images
openstack image list

 

 

4.1. Install the nova compute service on the controller node

1) Create the nova databases

mysql -u root -p123456
-----------------------------------
CREATE DATABASE nova_api; 
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
flush privileges;
show databases;
select user,host from mysql.user;
exit
---------------------------------------------------

 

4.2. Register the nova service with keystone

1) Create the nova user in keystone

cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=nova nova
openstack user list

 

2) In keystone, grant the nova user the admin role on the service project

openstack role add --project service --user nova admin
3) Create the nova compute service entity
openstack service create --name nova --description "OpenStack Compute" compute
openstack service list

 

4) Create the compute service API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
openstack endpoint list

 

 

 

5) This nova release adds the placement service

# likewise, create its user and register the service

openstack user create --domain default --password=placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
# create the placement API endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
openstack endpoint list

 

 

 

4.3. Install the nova services on the controller node

1) Install the nova packages

yum install openstack-nova-api openstack-nova-conductor \
       openstack-nova-console openstack-nova-novncproxy \
       openstack-nova-scheduler openstack-nova-placement-api -y
2) Quickly modify the configuration
openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 192.168.48.165
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron  true 
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver  nova.virt.firewall.NoopFirewallDriver
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set  /etc/nova/nova.conf api_database connection  mysql+pymysql://nova:nova@controller/nova_api
openstack-config --set  /etc/nova/nova.conf database connection  mysql+pymysql://nova:nova@controller/nova
openstack-config --set  /etc/nova/nova.conf placement_database connection  mysql+pymysql://placement:placement@controller/placement
openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url  http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name  default 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username  nova 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password  nova
openstack-config --set  /etc/nova/nova.conf vnc enabled true
openstack-config --set  /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set  /etc/nova/nova.conf glance api_servers  http://controller:9292
openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path  /var/lib/nova/tmp 
openstack-config --set  /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement project_name service
openstack-config --set  /etc/nova/nova.conf placement auth_type password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username placement
openstack-config --set  /etc/nova/nova.conf placement password placement
openstack-config --set  /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
# check the effective nova configuration
egrep -v "^#|^$" /etc/nova/nova.conf
3) Edit the placement API virtual-host configuration file
vim /etc/httpd/conf.d/00-nova-placement-api.conf
-----------------------------------
Listen 8778
<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>
Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

# made by zhaoshuai
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
-------------------------------------
# after editing, restart the httpd service
systemctl restart httpd
systemctl status httpd 
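Before moving on, it is worth checking that the placement vhost answers at all; an unauthenticated GET on the root normally returns a small JSON version document. This probe is a sketch that assumes curl is installed and that the controller hostname resolves:

```shell
# Probe the placement API root: a working deployment answers HTTP 200 with
# a JSON version document; otherwise print a diagnostic and move on.
probe_placement() {
  curl -fsS --max-time 3 "http://$1:8778/" >/dev/null 2>&1 \
    && echo "placement API reachable" \
    || echo "placement API not reachable"
}
probe_placement controller
```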
4.4. Sync the nova databases (mind the order)
1) Initialize the nova-api and placement databases
su -s /bin/sh -c "nova-manage api_db sync" nova
# verify the databases
mysql -h192.168.48.165 -unova -pnova -e "use nova_api;show tables;"
mysql -h192.168.48.165 -uplacement -pplacement -e "use placement;show tables;"

 

2) Initialize the nova_cell0 and nova databases

# register the cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# create the cell1 cell
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# initialize the nova database
su -s /bin/sh -c "nova-manage db sync" nova
# confirm cell0 and cell1 are registered
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
# verify the databases
mysql -h192.168.48.165 -unova -pnova -e "use nova_cell0;show tables;"
mysql -h192.168.48.165 -unova -pnova -e "use nova;show tables;"

 

4.5. Start the nova services

1) Start the nova services and enable them at boot

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
systemctl list-unit-files |grep openstack-nova* |grep enabled
 
 
 
 

5.1. Configure name resolution (compute node)

1) Set the hostname

hostname openstack02
hostname
echo "openstack02"> /etc/hostname
cat /etc/hostname

2) Configure hostname resolution

vim /etc/hosts
-----------------------------------
192.168.48.165    openstack01  controller
192.168.48.164    openstack02  compute02 block02 object02
-----------------------------------

5.2. Disable the firewall and SELinux

1) Stop and disable firewalld (iptables)

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

2) Disable SELinux

setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux

5.3. Configure time synchronization

1) Install the time service on the compute node

yum install chrony -y

2) Edit the configuration file and confirm it contains the following

vim /etc/chrony.conf
-------------------------------------
# point to the controller node openstack01's IP
server 192.168.48.165 iburst
-------------------------------------

3) Restart the chronyd service and enable it at boot

systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service

4) Set the time zone and do the first time sync

timedatectl set-timezone Asia/Shanghai
chronyc sources
timedatectl status

5.4. Configure yum repositories

1) Manually configure the yum repo file

vim /etc/yum.repos.d/CentOS-OpenStack-Rocky.repo
----------------------------------------
[centos-openstack-rocky]
name=CentOS-7 - OpenStack rocky
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[centos-openstack-rocky-test]
name=CentOS-7 - OpenStack rocky Testing
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=0
enabled=0
[centos-openstack-rocky-debuginfo]
name=CentOS-7 - OpenStack rocky - Debug
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[centos-openstack-rocky-source]
name=CentOS-7 - OpenStack rocky - Source
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[rdo-trunk-rocky-tested]
name=OpenStack rocky Trunk Tested
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/rdo-trunk-rocky-tested/
gpgcheck=0
enabled=0
----------------------------------------

2) Install the openstack-rocky repository

yum install centos-release-openstack-rocky -y
yum clean all
yum makecache

3) Update packages

yum update -y

4) Install the OpenStack client packages

yum install python-openstackclient openstack-selinux -y

5.5. Install the nova compute packages

1) Install the nova packages on the compute node

cd /server/tools
yum install openstack-nova-compute python-openstackclient openstack-utils -y

2) Quickly modify the configuration file (/etc/nova/nova.conf)

openstack-config --set  /etc/nova/nova.conf DEFAULT my_ip 192.168.48.164
openstack-config --set  /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set  /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set  /etc/nova/nova.conf DEFAULT enabled_apis  osapi_compute,metadata
openstack-config --set  /etc/nova/nova.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set  /etc/nova/nova.conf api auth_strategy  keystone 
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set  /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set  /etc/nova/nova.conf keystone_authtoken project_name  service
openstack-config --set  /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set  /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set  /etc/nova/nova.conf vnc enabled True
openstack-config --set  /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set  /etc/nova/nova.conf vnc server_proxyclient_address  '$my_ip'
openstack-config --set  /etc/nova/nova.conf vnc novncproxy_base_url  http://controller:6080/vnc_auto.html
openstack-config --set  /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set  /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set  /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set  /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement project_name service
openstack-config --set  /etc/nova/nova.conf placement auth_type password
openstack-config --set  /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set  /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set  /etc/nova/nova.conf placement username placement
openstack-config --set  /etc/nova/nova.conf placement password placement
# check the effective configuration:
egrep -v "^#|^$" /etc/nova/nova.conf

3) Configure hardware acceleration for virtual machines

# first, check whether the compute node supports hardware acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
# if it returns 0, the node does not support hardware acceleration; configure libvirt to manage VMs with QEMU:
openstack-config --set  /etc/nova/nova.conf libvirt virt_type  qemu
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
# if it returns anything else, the node supports hardware acceleration and needs no extra configuration; use KVM:
openstack-config --set  /etc/nova/nova.conf libvirt virt_type  kvm
egrep -v "^#|^$" /etc/nova/nova.conf|grep 'virt_type'
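The two branches above can be collapsed into a single shell test. This sketch only prints the value it would choose, so it is safe to run before touching nova.conf:

```shell
# Decide virt_type from the CPU flags: zero vmx/svm flags means no hardware
# virtualization, so fall back to qemu; otherwise use kvm.
flags=$(egrep -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${flags:-0}" -eq 0 ]; then
  virt_type=qemu
else
  virt_type=kvm
fi
echo "virt_type=$virt_type"
```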

4) Start the nova services and enable them at boot

systemctl start libvirtd.service openstack-nova-compute.service 
systemctl status libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl list-unit-files |grep libvirtd.service
systemctl list-unit-files |grep openstack-nova-compute.service

5) Add the compute node to the cell database

# run the following on the controller node:
cd /server/tools
source keystone-admin-pass.sh
# confirm the database sees the new compute node
openstack compute service list --service nova-compute

Manually add the new compute node to the OpenStack cluster:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
5.6. Verify on the controller node
1) Source the admin environment script
cd /server/tools
source keystone-admin-pass.sh
2) List the installed nova service components
# verify that each process registered and started successfully
openstack compute service list
openstack compute service list

 

3) List the API endpoints in the identity service to verify connectivity

openstack catalog list

 

4) List the existing images in the image service to verify its connectivity

openstack image list

 

 

6.1. Check connectivity from each node to the controller and the Internet

# on the controller node
ping -c 4 www.baidu.com
ping -c 4 compute02
ping -c 4 block02
# on the compute node
ping -c 4 www.baidu.com
ping -c 4 controller

6.2. Register the neutron services in the keystone database

1) Create the neutron database and grant appropriate access

mysql -p123456
-----------------------------------
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
exit
-----------------------------------

2) Create the neutron user in keystone

cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=neutron neutron
openstack user list

 

3) Add neutron to the service project and grant the admin role

openstack role add --project service --user neutron admin

4) Create the neutron service entity

openstack service create --name neutron --description "OpenStack Networking" network
openstack service list

 

5) Create the neutron networking API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
openstack endpoint list

6.3. Install the neutron networking components on the controller node

1) Install the neutron packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

2) Quickly configure /etc/neutron/neutron.conf

openstack-config --set  /etc/neutron/neutron.conf database connection  mysql+pymysql://neutron:neutron@controller/neutron 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin  ml2  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name default  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron  
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  neutron  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  True  
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  True  
openstack-config --set  /etc/neutron/neutron.conf nova auth_url  http://controller:5000
openstack-config --set  /etc/neutron/neutron.conf nova auth_type  password 
openstack-config --set  /etc/neutron/neutron.conf nova project_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf nova user_domain_name  default  
openstack-config --set  /etc/neutron/neutron.conf nova region_name  RegionOne  
openstack-config --set  /etc/neutron/neutron.conf nova project_name  service  
openstack-config --set  /etc/neutron/neutron.conf nova username  nova  
openstack-config --set  /etc/neutron/neutron.conf nova password  nova  
openstack-config --set  /etc/neutron/neutron.conf oslo_concurrency lock_path  /var/lib/neutron/tmp
# check the effective configuration
egrep -v '(^$|^#)' /etc/neutron/neutron.conf 

3) Quickly configure /etc/neutron/plugins/ml2/ml2_conf.ini

openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types 
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider 
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  True 
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
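The `openstack-config --set` calls above are a thin wrapper around crudini-style INI editing: add the section if it is missing, then set the key under it. A rough sketch of that behavior in plain shell (`ini_set` is a hypothetical helper, not an OpenStack tool, and unlike the real command it only appends):

```shell
# Minimal sketch of what `openstack-config --set FILE SECTION KEY VALUE` does.
# Real deployments should keep using openstack-config, which also updates
# existing keys; this version only appends.
ini_set() {
  local file=$1 section=$2 key=$3 value=$4
  # Append the section header if it is not present yet.
  grep -q "^\[$section\]" "$file" || printf '[%s]\n' "$section" >> "$file"
  # Insert "key = value" right after the section header.
  awk -v s="[$section]" -v k="$key" -v v="$value" '
    $0 == s { print; print k " = " v; next } { print }
  ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

conf=$(mktemp)
ini_set "$conf" ml2 type_drivers "flat,vlan"
ini_set "$conf" ml2_type_flat flat_networks provider
cat "$conf"
```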

4) Quick-configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:eno16777736
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  enable_vxlan  False
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  enable_security_group  True 
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# The following kernel parameters are set to 1 automatically when neutron-linuxbridge-agent.service starts
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables

5) Quick-configure /etc/neutron/dhcp_agent.ini

openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  interface_driver  linuxbridge
openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  dhcp_driver  neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set   /etc/neutron/dhcp_agent.ini DEFAULT  enable_isolated_metadata  True 
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/dhcp_agent.ini

6) Quick-configure /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/metadata_agent.ini

7) Configure the compute service to use the network service

openstack-config --set  /etc/nova/nova.conf  neutron url http://controller:9696
openstack-config --set  /etc/nova/nova.conf  neutron auth_url http://controller:5000
openstack-config --set  /etc/nova/nova.conf  neutron auth_type password
openstack-config --set  /etc/nova/nova.conf  neutron project_domain_name default
openstack-config --set  /etc/nova/nova.conf  neutron user_domain_name default
openstack-config --set  /etc/nova/nova.conf  neutron region_name RegionOne
openstack-config --set  /etc/nova/nova.conf  neutron project_name service
openstack-config --set  /etc/nova/nova.conf  neutron username neutron
openstack-config --set  /etc/nova/nova.conf  neutron password neutron
openstack-config --set  /etc/nova/nova.conf  neutron service_metadata_proxy true
openstack-config --set  /etc/nova/nova.conf  neutron metadata_proxy_shared_secret neutron
# Review the effective configuration
egrep -v '(^$|^#)' /etc/nova/nova.conf
8) Initialize the network plugin
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
9) Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
10) Restart the nova-api service
systemctl restart openstack-nova-api.service
11) Start the neutron services and enable them at boot
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl list-unit-files |grep neutron* |grep enabled
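Rather than scanning the long `list-unit-files` output by eye, the enable state of each neutron unit can be checked in one loop. The `systemctl()` stub below makes this a dry run; remove it to query the real units on the controller:

```shell
# Sketch: report the enable state of every neutron unit in one pass.
# The systemctl() stub fakes the answer so the loop runs anywhere;
# delete it on a real controller node.
systemctl() { echo "enabled"; }

for unit in neutron-server neutron-linuxbridge-agent \
            neutron-dhcp-agent neutron-metadata-agent; do
  echo "$unit: $(systemctl is-enabled "$unit.service")"
done
```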
6.4. Install the neutron network components on the compute node
1) Install the neutron packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
2) Quick-configure /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/neutron.conf
3) Quick-configure /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# Review the effective configuration
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini

4) Configure the nova compute service to work with the neutron network service

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service 
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron
# Review the effective configuration
egrep -v '(^$|^#)' /etc/nova/nova.conf

5) Restart the compute service

systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

6) Start the neutron agent and enable it at boot

systemctl restart neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
systemctl list-unit-files |grep neutron* |grep enabled

6.5. Verify on the controller node that the neutron service was installed successfully

# Run the following commands on the controller node

1) Load the admin credentials

cd /server/tools
source keystone-admin-pass.sh

2) List the loaded network extensions

openstack extension list --network

3) List the network agents

openstack network agent list
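A healthy deployment shows `:-)` in the Alive column for every agent. As a sketch, the table can be checked mechanically with awk (`check_agents` is a hypothetical helper; the here-doc stands in for real `openstack network agent list` output):

```shell
# Sketch: count how many agents in the table are alive. In the |-separated
# output, the 6th field is the "Alive" column, which shows ":-)" when healthy.
check_agents() {
  awk -F'|' '/^\|/ && $2 !~ /ID/ {
    gsub(/ /, "", $6)                 # strip padding from the Alive column
    total++
    if ($6 != ":-)") dead++
  } END { printf "%d/%d agents alive\n", total - dead, total }'
}

# Sample data standing in for real command output:
check_agents <<'EOF'
| ID | Agent Type     | Host       | Availability Zone | Alive | State | Binary                 |
| a1 | Metadata agent | controller | None              | :-)   | UP    | neutron-metadata-agent |
| a2 | DHCP agent     | controller | nova              | :-)   | UP    | neutron-dhcp-agent     |
EOF
# → 2/2 agents alive
```

On a real controller, pipe the command into the helper: `openstack network agent list | check_agents`.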

7.1. Install the dashboard web console

1) Install the dashboard package

yum install openstack-dashboard -y

2) Edit the configuration file /etc/openstack-dashboard/local_settings

# Check and confirm the following settings
vim /etc/openstack-dashboard/local_settings
-------------------------------------
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
}
TIME_ZONE = "Asia/Shanghai"
--------------------------------------

3) Edit /etc/httpd/conf.d/openstack-dashboard.conf

vim /etc/httpd/conf.d/openstack-dashboard.conf
-------------------------------------
WSGIApplicationGroup %{GLOBAL}
-------------------------------------

4) Restart the web server and the session storage service

systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service

5) Check that the dashboard is reachable

http://controller:80/dashboard
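Instead of refreshing the browser until httpd is up, the URL can be polled from the shell. A sketch assuming `curl` is installed and `controller` resolves as configured in section 1.1 (`wait_for_http` is a hypothetical helper):

```shell
# Sketch: poll a URL until the web server answers with a 2xx/3xx status.
wait_for_http() {
  local url=$1 tries=${2:-10} i code
  for i in $(seq "$tries"); do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url") || code=000
    case $code in
      2??|3??) echo "up ($code)"; return 0 ;;
    esac
    sleep 3
  done
  echo "down (last code: $code)"
  return 1
}

# Usage on the controller:
# wait_for_http http://controller/dashboard
```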

8.1. Create the provider network

1) On the controller node, create the provider network

cd /server/tools/
source keystone-admin-pass.sh
openstack network create --share --external --provider-physical-network provider  --provider-network-type flat provider

openstack network list

2) Verify the network configuration

# Confirm the following options in ml2_conf.ini;
# the --provider-network-type flat option and the network name "provider" above must match these settings
vim /etc/neutron/plugins/ml2/ml2_conf.ini
-----------------------------
[ml2_type_flat]
flat_networks = provider
-----------------------------
# Confirm the following options in linuxbridge_agent.ini;
# --provider-physical-network provider above must match this mapping, and the interface must be the controller node's NIC name
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-----------------------------
[linux_bridge]
physical_interface_mappings = provider:eno16777736
-----------------------------

3) Create the provider subnets

openstack subnet create --network provider --no-dhcp --allocation-pool start=192.168.1.210,end=192.168.1.220 --dns-nameserver 4.4.4.4 --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 provider-subnet01
openstack subnet create --network provider --dhcp --subnet-range 192.168.2.0/24 provider-subnet02
openstack subnet list
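The allocation pool of provider-subnet01 (192.168.1.210 through 192.168.1.220) hands out 11 addresses. A quick sketch of that arithmetic, assuming both ends sit in the same /24 as above (`pool_size` is a hypothetical helper):

```shell
# Sketch: number of addresses in an allocation pool whose endpoints differ
# only in the last octet (true for the /24 pool above).
pool_size() {
  local start=${1##*.} end=${2##*.}   # keep only the last octet of each address
  echo $(( end - start + 1 ))
}

pool_size 192.168.1.210 192.168.1.220   # → 11
```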

8.2. Create a key pair on the controller node as the regular user myuser

1) Load the regular user myuser's credentials

cd /server/tools/
source keystone-demo-pass.sh

2) Generate a key pair

ssh-keygen -q -N ""

3) Add the public key to the OpenStack key pair store

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

4) List the available key pairs (verify the key was added)

openstack keypair list

8.3. Add security group rules for the example project myproject on the controller node

1) Load the regular user myuser's credentials

cd /server/tools/
source keystone-demo-pass.sh

2) Allow ICMP (ping)

openstack security group rule create --proto icmp default

3) Allow secure shell (SSH) access

openstack security group rule create --proto tcp --dst-port 22 default

4) List the security groups and their rules

openstack security group list
openstack security group rule list

8.4. Launch an instance on the provider network as a regular user (on the controller node)

1) Controller: create flavors as the admin user

# Note: flavors can only be created and managed by the admin user; the regular user myuser can only use existing flavors

# List the available flavors
cd /server/tools/
source keystone-admin-pass.sh
openstack flavor list
# Create custom flavors as the admin user
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack flavor create --id 1 --vcpus 1 --ram 1024 --disk 50 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 2048 --disk 500 m1.small
openstack flavor create --id 3 --vcpus 2 --ram 4096 --disk 500 m1.medium
openstack flavor create --id 4 --vcpus 4 --ram 8192 --disk 500 m1.large
openstack flavor create --id 5 --vcpus 8 --ram 16384 --disk 500 m1.xlarge
openstack flavor list
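The six `openstack flavor create` calls above differ only in their numbers and names, so they can be driven from a table. In this sketch the `openstack()` stub just echoes each command (a dry run); remove it to create the flavors for real:

```shell
# Sketch: generate the flavor-create commands from a table instead of typing
# each one. The stub makes this a dry run that prints the commands.
openstack() { echo "openstack $*"; }

while read -r id vcpus ram disk name; do
  openstack flavor create --id "$id" --vcpus "$vcpus" \
    --ram "$ram" --disk "$disk" "$name"
done <<'EOF'
0 1 64 1 m1.nano
1 1 1024 50 m1.tiny
2 1 2048 500 m1.small
3 2 4096 500 m1.medium
EOF
```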

# Frequently used commands, listed here for reference:
## List the available flavors
openstack flavor list
## List the available images
openstack image list
## List the available networks
openstack network list
openstack subnet list
## List the available key pairs (verify the key was added)
openstack keypair list
## List the security groups and their rules
openstack security group list
openstack security group rule list

2) Controller: launch an instance as a regular user
# In the Rocky release an instance can be created with either the network name or its ID; if there is only one network, the --nic option can be omitted
cd /server/tools/
source keystone-demo-pass.sh
openstack server create --flavor m1.nano --image cirros --nic net-id=provider --security-group default --key-name mykey cirros-01
openstack server create --flavor m1.nano --image cirros --nic net-id=25346d04-0f1f-4277-b896-ba3f01425d86 --security-group default --key-name mykey cirros-02
openstack server create --flavor m1.nano --image cirros --security-group default --key-name mykey cirros-03
# Check the instance status
openstack server list

3) Show the instance's noVNC URL (VNC console)

openstack console url show cirros-01
# The resulting URL can be opened directly in a browser to manage the instance
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.

9.0. Install the cinder storage service on the controller node

1) Create the cinder database

# Create the database and grant access to the cinder user
mysql -u root -p123456
----------------------------------------
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
flush privileges;
show databases;
select user,host from mysql.user;
exit
----------------------------------------

2) Register the cinder service in keystone (create the service credentials)

# Create the cinder user in keystone
cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=cinder cinder
openstack user list
# Add the admin role to the cinder user in the service project; this command produces no output
openstack role add --project service --user cinder admin
# Create the cinder service entities
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack service list
# Create the cinder service API endpoints
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
openstack endpoint list
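The six endpoint-create calls above differ only in interface and API version, so a nested loop removes the repetition. The `openstack()` stub again makes this a dry run that prints the commands; remove it to register the endpoints for real:

```shell
# Sketch: generate the cinder endpoint-create commands for every combination
# of API version and interface. The stub echoes instead of executing.
openstack() { echo "openstack $*"; }

for ver in v2 v3; do
  for iface in public internal admin; do
    openstack endpoint create --region RegionOne "volume$ver" "$iface" \
      "http://controller:8776/$ver/%(project_id)s"
  done
done
```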

3) Install the cinder packages

yum install openstack-cinder -y

4) Quick-configure cinder

openstack-config --set  /etc/cinder/cinder.conf database connection  mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy  keystone 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_domain_name  default 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_name  service 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken username  cinder
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken password  cinder
openstack-config --set  /etc/cinder/cinder.conf DEFAULT my_ip 192.168.48.165
openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path  /var/lib/nova/tmp 
# Review the effective cinder configuration
egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf

5) Sync the cinder database

# Creates 35 tables
su -s /bin/sh -c "cinder-manage db sync" cinder
# Verify the database
mysql -h192.168.48.165 -ucinder -pcinder -e "use cinder;show tables;"
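The table count can be checked mechanically as well: `show tables` prints a header row first, so skip line 1 and count the rest. The here-doc below simulates the mysql output (`count_tables` is a hypothetical helper); on a real node, pipe the mysql command above into it instead:

```shell
# Sketch: count the tables listed by "show tables" (skip the header row).
count_tables() { awk 'NR > 1' | wc -l; }

# Simulated output; replace the here-doc with:
#   mysql -h192.168.48.165 -ucinder -pcinder -e "use cinder;show tables;" | count_tables
count_tables <<'EOF'
Tables_in_cinder
backups
snapshots
volumes
EOF
```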

6) Edit the nova configuration

# Configure nova to use the cinder service
openstack-config --set  /etc/nova/nova.conf cinder os_region_name  RegionOne
# Review the effective nova configuration
grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name

7) Restart the nova-api service

systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service

8) Start the cinder storage services

# Two services must be started
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl list-unit-files |grep openstack-cinder |grep enabled

9.2. Install and configure cinder on the storage node

1) Install the LVM packages

yum install lvm2 device-mapper-persistent-data -y

2) Start the LVM metadata service and enable it at boot

systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.service
systemctl list-unit-files |grep lvm2-lvmetad |grep enabled
3) Create the LVM volume group
# Check the disk status
fdisk -l
# Create the LVM physical volume on /dev/sdb
pvcreate /dev/sdb
# Create the LVM volume group cinder-volumes; the block storage service creates logical volumes in this group
vgcreate cinder-volumes /dev/sdb
4) Configure the device filter so LVM only scans the cinder disk
vim /etc/lvm/lvm.conf
-----------------------------
devices {
filter = [ "a/sdb/", "r/.*/"]
}
-----------------------------
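A misplaced filter line is easy to miss, so it can be sanity-checked with grep. The here-doc mirrors the devices{} block above (`check_filter` is a hypothetical helper); point it at /etc/lvm/lvm.conf on a real storage node:

```shell
# Sketch: confirm the LVM filter accepts sdb and rejects everything else,
# i.e. that the line `filter = [ "a/sdb/", "r/.*/"]` is present.
check_filter() {
  grep -Eo 'filter = \[.*\]' | grep -q '"a/sdb/".*"r/\.\*/"' \
    && echo "filter OK" || echo "filter missing"
}

# Simulated config; replace with: check_filter < /etc/lvm/lvm.conf
check_filter <<'EOF'
devices {
filter = [ "a/sdb/", "r/.*/"]
}
EOF
```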
5) Install the cinder components on the storage node
yum install openstack-cinder targetcli python-keystone -y
6) Quick-configure cinder on the storage node
openstack-config --set  /etc/cinder/cinder.conf database connection  mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set  /etc/cinder/cinder.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set  /etc/cinder/cinder.conf DEFAULT auth_strategy  keystone 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_url  http://controller:5000
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken memcached_servers  controller:11211
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken auth_type  password
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_domain_name  default 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken user_domain_name  default
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken project_name  service 
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken username  cinder
openstack-config --set  /etc/cinder/cinder.conf keystone_authtoken password  cinder
# my_ip must be the storage node's own IP (192.168.48.164 in the environment from section 1.0)
openstack-config --set  /etc/cinder/cinder.conf DEFAULT my_ip 192.168.48.164
openstack-config --set  /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set  /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set  /etc/cinder/cinder.conf lvm iscsi_protocol  iscsi
openstack-config --set  /etc/cinder/cinder.conf lvm iscsi_helper  lioadm
openstack-config --set  /etc/cinder/cinder.conf DEFAULT enabled_backends  lvm
openstack-config --set  /etc/cinder/cinder.conf DEFAULT glance_api_servers  http://controller:9292
openstack-config --set  /etc/cinder/cinder.conf oslo_concurrency lock_path  /var/lib/cinder/tmp
# If the storage node has two NICs, my_ip must be the storage node's management IP; otherwise use the node's own IP

# Review the effective cinder configuration
egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf

7) Start the cinder service on the storage node and enable it at boot
# Two services must be started
systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service
systemctl list-unit-files |grep openstack-cinder |grep enabled
systemctl list-unit-files |grep target.service |grep enabled

9.3. Verify on the controller node

1) Load the admin credentials

cd /server/tools/
source keystone-admin-pass.sh 

2) List the volume services

openstack volume service list
