OpenStack Ocata Installation and Deployment

1. Basic Environment Configuration

1.1 System Environment

The operating system is CentOS 7.6. My lab environment consists of four servers: one control node and three compute nodes, with all four also serving as Ceph storage nodes.

Hostname     IP Address     CPU   Memory (GB)   Disk (GB)   Role
controller   192.16.1.45    4     8             20          Control node, Ceph storage node
computer1    192.16.1.32    4     8             20          Compute node, Ceph storage node
computer2    192.16.1.34    4     8             20          Compute node, Ceph storage node
computer3    192.16.1.43    4     8             20          Compute node, Ceph storage node
1.2 Set Hostnames

Run the matching command on each server; the new hostname takes effect after logging out of the current shell and logging back in.

hostnamectl --static set-hostname controller
hostnamectl --static set-hostname computer1
hostnamectl --static set-hostname computer2
hostnamectl --static set-hostname computer3

Configure host trust: run ssh-keygen and press Enter through all prompts, then copy the key to every node.

ssh-keygen
ssh-copy-id 192.16.1.45
ssh-copy-id 192.16.1.32
ssh-copy-id 192.16.1.34
ssh-copy-id 192.16.1.43
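
The four ssh-copy-id calls can also be wrapped in a loop; a minimal sketch, assuming the node list below matches your environment and root logins are permitted:

for ip in 192.16.1.45 192.16.1.32 192.16.1.34 192.16.1.43; do
    # Push the root public key generated by ssh-keygen above.
    ssh-copy-id root@"$ip"
done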


1.3 Configure /etc/hosts

Run the following commands on the control node and on every compute node.

echo "192.16.1.31 controller" >> /etc/hosts
echo "192.16.1.32 computer1" >> /etc/hosts
echo "192.16.1.34 computer2" >> /etc/hosts
echo "192.16.1.43 computer3" >> /etc/hosts

Verify:

ping -c 1 controller
ping -c 1 computer1
ping -c 1 computer2
ping -c 1 computer3
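
The same check in loop form; a small sketch that reports each host explicitly:

for host in controller computer1 computer2 computer3; do
    # -W 2 caps the wait at two seconds per host.
    ping -c 1 -W 2 "$host" > /dev/null && echo "$host OK" || echo "$host UNREACHABLE"
done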
1.4 System Tuning
ulimit -SHn 65535
setenforce 0
echo "*  soft  nofile  65535" >> /etc/security/limits.conf
echo "*  hard  nofile  65535" >> /etc/security/limits.conf
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_syn_retries = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_fin_timeout = 1" >> /etc/sysctl.conf
echo "net.ipv4.tcp_keepalive_time = 1200" >> /etc/sysctl.conf
echo "net.ipv4.tcp_max_syn_backlog = 16384" >> /etc/sysctl.conf
echo "net.ipv4.tcp_max_tw_buckets = 36000" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 =1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 =1" >> /etc/sysctl.conf
sysctl -p
sed -i 's@SELINUX=enforcing@SELINUX=disabled@' /etc/sysconfig/selinux
sed -i 's@#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
echo "# history" >> /etc/profile
echo "export HISTSIZE=100000" >> /etc/profile
echo "export HISTTIMEFORMAT='[%Y-%m-%d %H:%M:%S]'" >> /etc/profile
sed -i 's/#set bell-style none/set bell-style none/' /etc/inputrc
echo "set vb" >> /etc/inputrc
source /etc/profile
systemctl stop firewalld
systemctl disable firewalld
1.5 Configure the Yum Repository
sys_time=`date +%Y%m%d`
cd /etc/yum.repos.d/
mv CentOS-Base.repo CentOS-Base.repo."$sys_time"bak
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache 
yum -y install epel-release vim sysstat net-tools wget telnet chrony
1.6 Time Synchronization

On the control node:

echo "server ntp.api.bz iburst" >> /etc/chrony.conf
echo "allow 192.16.1.0/24" >> /etc/chrony.conf
systemctl enable chronyd.service
systemctl start chronyd.service

On all other nodes:

echo "server controller iburst" >> /etc/chrony.conf
systemctl enable chronyd.service
systemctl start chronyd.service

Verify:

chronyc sources
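
Beyond the source list, chronyc can also report the measured offset and stratum of the local clock:

# Show the selected reference source, stratum, and current offset.
chronyc tracking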
1.7 Install OpenStack Packages

These repositories must be added on every node by hand: the current mirrors no longer carry the OpenStack Ocata packages in the default repos, so the vault yum sources below are required.

cat > /etc/yum.repos.d/OpenStack-Ocata.repo << EOF
[OpenStack-Ocata]
name=OpenStack-Ocata
baseurl=https://mirrors.aliyun.com/centos-vault/7.6.1810/cloud/x86_64/openstack-ocata/
gpgcheck=0
enabled=1
EOF
cat > /etc/yum.repos.d/OpenStack-Qemu-Ev.repo << EOF
[OpenStack-Qemu-Ev]
name=OpenStack-Qemu-Ev
baseurl=https://mirrors.aliyun.com/centos-vault/7.6.1810/virt/x86_64/kvm-common/
gpgcheck=0
enabled=1
EOF
yum upgrade -y
yum install python-openstackclient openstack-selinux -y
1.8 Install the Database

The database is installed on the control node.

yum -y install mariadb mariadb-server python2-PyMySQL
cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 192.16.1.45
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
systemctl enable mariadb.service
systemctl start mariadb.service

Initialize the database: set the database root password and accept the defaults for the remaining prompts.

mysql_secure_installation
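
To confirm the new root credentials work, a quick check (you will be prompted for the password just set):

# Should list the default databases if the login succeeds.
mysql -u root -p -e "SHOW DATABASES;"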
1.9 Install the Message Queue

The message queue is installed on the control node. Note: replace the password 5W2B+I=xhnEu with your own.

yum install rabbitmq-server -y
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
rabbitmqctl add_user openstack 5W2B+I=xhnEu
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmq-plugins enable rabbitmq_management

Verify via the management UI:
192.16.1.45:15672
username:guest
password:guest
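
The management API can also be probed from the shell; a sketch using curl, run on the controller itself since the default guest account is restricted to localhost (/api/whoami is a standard endpoint of the rabbitmq_management plugin):

# Returns a small JSON document describing the authenticated user.
curl -s -u guest:guest http://localhost:15672/api/whoami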

1.10 Install the Memcached Caching Service

The caching service is installed on the control node.

yum install memcached python-memcached -y
cp /etc/sysconfig/memcached /etc/sysconfig/memcached.bak
sed -i 's/OPTIONS/#OPTIONS/g' /etc/sysconfig/memcached
echo 'OPTIONS="-l controller"' >> /etc/sysconfig/memcached

systemctl enable memcached.service
systemctl start memcached.service
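
A quick liveness check over TCP; this sketch assumes the nc binary (nmap-ncat on CentOS 7) is installed:

# Ask the daemon for its stats counters and show the first few lines.
echo stats | nc -w 1 controller 11211 | head -5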

2. Identity Service

The Identity service (Keystone) is installed on the control node; it can be placed on a different node to suit your needs.

2.1 Create the Database

Replace the database password as needed.

mysql -u root -p5W2B+I=xhnEu
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
FLUSH PRIVILEGES;
2.2 Install and Configure Keystone

Install the Keystone packages and back up the configuration file.

yum -y install openstack-keystone httpd mod_wsgi openstack-utils
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
egrep -v "^$|^#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf

Edit the configuration file; remember to change the password.

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:5W2B+I=xhnEu@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

Sync the database and initialize the Fernet keys.

su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service; remember to change the password.

keystone-manage bootstrap --bootstrap-password 5W2B+I=xhnEu --bootstrap-admin-url http://controller:35357/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the httpd service.

cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service

Configure the admin environment variables; remember to change the password.

export OS_USERNAME=admin
export OS_PASSWORD=5W2B+I=xhnEu
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
2.3 Create Domains, Projects, Users, and Roles

Create the service project:

openstack project create --domain default --description "Service Project" service

Create the demo project:

openstack project create --domain default --description "Demo Project" demo

Create the demo user; you will be prompted for a password (change it to your own):

openstack user create --domain default --password-prompt demo
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Create the user role:

openstack role create user

Add the user role to the demo user in the demo project:

openstack role add --project demo --user demo user
2.4 Verify Operation

Disable the temporary admin token mechanism.

cp /etc/keystone/keystone-paste.ini /etc/keystone/keystone-paste.ini.bak

Remove the admin_token_auth entry from each of the following sections (a sed sketch follows the list):

[pipeline:public_api]
[pipeline:admin_api]
[pipeline:api_v3]
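
A minimal sed sketch for that edit, assuming admin_token_auth appears only in those pipeline definitions:

# Strip the admin_token_auth filter from every pipeline in one pass.
sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini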

Unset the environment variables and request tokens:

unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Password:5W2B+I=xhnEu

openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Password:5W2B+I=xhnEu
2.5 Create Environment Variable Scripts
mkdir /openstack-ocata/;cd /openstack-ocata/
cat > admin-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=5W2B+I=xhnEu
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

cat > demo-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=5W2B+I=xhnEu
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

. admin-openrc
openstack token issue

3. Image Service

The Image service is installed on the control node; it can be placed on a different node to suit your needs.

3.1 Create the Database

Remember to change the database password.

mysql -u root -p5W2B+I=xhnEu

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
FLUSH PRIVILEGES;
3.2 Create Service Credentials

Create the glance user; remember to change the password.

. admin-openrc
openstack user create --domain default --password-prompt glance
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Add the admin role to the glance user on the service project:

openstack role add --project service --user glance admin

Create the glance service entity:

openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
3.3 Install the Image Service

Install the Image service packages and edit the glance-api configuration file.

yum -y install openstack-glance
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
egrep -v "^$|^#" /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

Remember to change the password.

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:5W2B+I=xhnEu@controller/glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone 
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

Edit the glance-registry configuration file; remember to change the password.

cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
egrep -v "^$|^#" /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:5W2B+I=xhnEu@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Sync the database and start the services.

su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
3.4 Verify Operation
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list

4. Compute Service

The Compute service must be installed on both the control node and the compute nodes.

4.1 Install and Configure the Control Node
4.1.1 Create the Databases

Remember to change the database password.

mysql -u root -p5W2B+I=xhnEu

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
FLUSH PRIVILEGES;
4.1.2 Create the Nova Services on the Control Node

Create the nova user; remember to change the password.

. admin-openrc
openstack user create --domain default --password-prompt nova
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Add the admin role to the nova user:

openstack role add --project service --user nova admin

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the Compute service API endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create the placement service user; remember to change the password.

openstack user create --domain default --password-prompt placement
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Add the placement user to the service project with the admin role:

openstack role add --project service --user placement admin

Create the placement service entity:

openstack service create --name placement --description "Placement API" placement

Create the Placement API endpoints:

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
4.1.3 Install and Configure the Control Node Packages

Remember to change the passwords and the IP address.

yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api

cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
egrep -v "^$|^#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:5W2B+I=xhnEu@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.16.1.45
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:5W2B+I=xhnEu@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:5W2B+I=xhnEu@controller/nova
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen \$my_ip
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address \$my_ip
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password 5W2B+I=xhnEu
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

Configure Placement API access in httpd:

cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
echo "
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>" >> /etc/httpd/conf.d/00-nova-placement-api.conf

Sync the databases and start the services.

systemctl restart httpd
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova 
nova-manage cell_v2 list_cells
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
4.2 Install and Configure the Compute Nodes
4.2.1 Install the Compute Service

Configure every compute node the same way; remember to change the passwords and the per-node IP address (my_ip).

yum -y install openstack-nova-compute openstack-utils
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
egrep -v "^$|^#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:5W2B+I=xhnEu@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.16.1.32
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT cpu_allocation_ratio 4.0
openstack-config --set /etc/nova/nova.conf DEFAULT block_device_allocate_retries 600
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address \$my_ip
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement os_region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:35357/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password 5W2B+I=xhnEu

Check whether the compute node supports hardware virtualization: a result of 0 means unsupported; any non-zero count means supported.

egrep -c '(vmx|svm)' /proc/cpuinfo

If unsupported, configure nova to use plain QEMU:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

Restart the services.

systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
4.3 Configure Nova SSH Trust

If nova SSH trust is not configured, instances cannot be resized or migrated between hosts.
The nova user has no login shell by default, so enable one first.
On all nodes:

usermod  -s /bin/bash nova
su - nova
cp /etc/skel/.bash* .
exit

On the control node:

su - nova
ssh-keygen -t rsa -q -N ''
cd .ssh/
cp -fa id_rsa.pub authorized_keys
cd ..
scp -rp .ssh root@192.16.1.32:`pwd`
scp -rp .ssh root@192.16.1.34:`pwd`
scp -rp .ssh root@192.16.1.43:`pwd`
exit

On all nova compute nodes (run the chown as root, since the .ssh files were copied in by root):

chown -R nova.nova /var/lib/nova/.ssh

On all nodes, verify that the nova user can reach every host without a password:

su - nova
ssh 192.16.1.45
ssh 192.16.1.32
ssh 192.16.1.34
ssh 192.16.1.43
exit
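
The interactive logins above seed each node's known_hosts file. Alternatively, host-key prompts can be suppressed for the nova user with a per-user SSH config; a sketch (disabling strict host-key checking trades security for convenience):

# Run as the nova user on every node.
cat > ~/.ssh/config << EOF
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
chmod 600 ~/.ssh/config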
4.4 CPU Overcommit Configuration

Changing the CPU overcommit ratio in the configuration file alone does not take effect; it must also be configured manually through a host aggregate.
On the nova control node:
Load the environment variables:

. admin-openrc

Create the aggregate (named nova):

nova aggregate-create nova

Add the hosts:

nova aggregate-add-host nova computer1 
nova aggregate-add-host nova computer2
nova aggregate-add-host nova computer3 

Update the aggregate's availability zone (2 is the aggregate ID):

nova aggregate-update 2 nova nova

Add the CPU overcommit metadata:

nova aggregate-set-metadata nova cpu_allocation_ratio=4.0
nova aggregate-set-metadata nova availability_zone=nova
openstack-config --set /etc/nova/nova.conf DEFAULT scheduler_default_filters RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AggregateCoreFilter
systemctl restart openstack-nova-scheduler.service
4.5 Verify Operation

Run the following on the control node:

. admin-openrc
openstack hypervisor list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check

5. Networking Service

The Networking service must be installed on both the control node and the compute nodes.

5.1 Install and Configure the Control Node
5.1.1 Create the Database

Remember to change the database password.

mysql -u root -p5W2B+I=xhnEu

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
FLUSH PRIVILEGES;
5.1.2 Create the Neutron Service on the Control Node

Create the neutron user:

. admin-openrc
openstack user create --domain default --password-prompt neutron
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Add the admin role to the neutron user:

openstack role add --project service --user neutron admin

Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

Create the Networking service API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

Configure the metadata agent; remember to change the shared secret.

cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
egrep -v "^$|^#" /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret 5W2B+I=xhnEu

Configure the Compute service to use the Networking service; remember to change the passwords.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 5W2B+I=xhnEu
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret 5W2B+I=xhnEu
5.1.3 Configure the External (Provider) Network

Install and configure the Networking service; remember to change the IP addresses and passwords.
Configure the neutron.conf file:

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
egrep -v "^$|^#" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:5W2B+I=xhnEu@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins 
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:5W2B+I=xhnEu@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password 5W2B+I=xhnEu
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Configure the ml2_conf.ini file:

cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
egrep -v "^$|^#" /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

Configure the linuxbridge_agent.ini file; remember to change the NIC name.

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
egrep -v "^$|^#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the dhcp_agent.ini file:

cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
egrep -v "^$|^#" /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
5.1.4 Configure the Internal (Self-Service) Network

The internal network is layered on top of the external network configuration above.
Configure the neutron.conf file:

openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true

Configure the ml2_conf.ini file:

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000

Configure the linuxbridge_agent.ini file; note the IP address.

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.16.1.45
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true

Configure the l3_agent.ini file:

cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
egrep -v "^$|^#" /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge
5.1.5 Start the Networking Service

Sync the database.

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the networking services on the control node.

systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
5.2 Install and Configure the Compute Nodes

Install the Networking service packages.

yum -y install openstack-neutron-linuxbridge ebtables ipset
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
egrep -v "^$|^#" /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:5W2B+I=xhnEu@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Configure the Compute service to use the Networking service:

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 5W2B+I=xhnEu
5.2.1 Configure the External (Provider) Network

Configure the linuxbridge_agent.ini file; note the NIC name.

cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
egrep -v "^$|^#" /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno2
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
5.2.2 Configure the Internal (Self-Service) Network

The internal network settings are added on top of the external network configuration.
Remember to change the IP address (per node).

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.16.1.32
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
5.2.3 Restart the Networking Services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
5.3 Verify Operation

Run on the control node:

openstack extension list --network
openstack network agent list
5.4 Configure Multi-NIC Nodes
5.4.1 Control Node Configuration

Note the NIC names.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider,net192_16_0
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth0,net192_16_0:eth1
systemctl restart neutron-linuxbridge-agent.service neutron-server.service
5.4.2 Compute Node Configuration

Note the NIC names.

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eth2,net192_16_0:eth1
systemctl restart neutron-linuxbridge-agent.service

With this service installed, you can launch an instance to test whether each functional module works properly.

6. Install the Dashboard

yum -y install openstack-dashboard

cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
vim /etc/openstack-dashboard/local_settings
...
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*',]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "TIME_ZONE"  # replace with your time zone identifier, e.g. "Asia/Shanghai"
...

systemctl restart httpd.service memcached.service

Verify:
Log in as admin with the password set during the Keystone bootstrap.

http://controller/dashboard

7. Block Storage Service

The Block Storage service is installed on the control node and is backed by the Ceph cluster.

7.1 Install the Cinder Service on the Control Node
7.1.1 Create the Database

Remember to change the database password.

mysql -u root -p5W2B+I=xhnEu
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '5W2B+I=xhnEu';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '5W2B+I=xhnEu';
FLUSH PRIVILEGES;
7.1.2 Create the Cinder Services on the Control Node

Create a cinder user:

. admin-openrc

openstack user create --domain default --password-prompt cinder
User Password:5W2B+I=xhnEu
Repeat User Password:5W2B+I=xhnEu

Add the admin role to the cinder user:

openstack role add --project service --user cinder admin

Create the cinderv2 and cinderv3 service entities:

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the Block Storage service API endpoints:

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
7.1.3 Install and Configure the Control Node

Remember to change the IP address.

yum -y install openstack-cinder 

cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
egrep -v "^$|^#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:5W2B+I=xhnEu@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:5W2B+I=xhnEu@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.16.1.45
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_clear none
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password 5W2B+I=xhnEu
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

Sync the database and start the services.

su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
7.2 Install the Ceph Cluster
7.2.1 Configure the Ceph Yum Repository

Note: the heredoc delimiter below is unquoted, so $basearch on the baseurl line must be escaped as \$basearch (as shown) or the shell will expand it to an empty string. Verify afterwards that ceph.repo was written correctly.

cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=Ceph packages for
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-noarch]
name=Ceph noarch packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages 
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
7.2.2 Install and Configure the Ceph Cluster

Run on the Ceph admin node (in my lab, 192.16.1.45 is both the OpenStack control node and the Ceph admin node).
On the Ceph admin node:

yum install ceph-deploy ceph python-setuptools -y

On all other Ceph nodes:

yum install ceph python-setuptools -y

On all nodes:

mkdir /etc/ceph
cd /etc/ceph

Deploy the cluster from the Ceph admin node. When finished, the Ceph dashboard can be opened in a browser; remember to change the Ceph dashboard password.

ceph-deploy new controller computer1 computer2 computer3
ceph-deploy mon create-initial

ceph-deploy osd create --data /dev/sdb controller
ceph-deploy osd create --data /dev/sdb computer1
ceph-deploy osd create --data /dev/sdb computer2
ceph-deploy osd create --data /dev/sdb computer3
ceph-deploy admin controller computer1 computer2 computer3

ceph-deploy mgr create controller computer1 computer2 computer3
ceph mgr module enable dashboard 
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin test@123
ceph mgr services

Create the Ceph storage pools (run on the Ceph admin node):

ceph osd pool create volumes 64
ceph osd pool create vms 64
ceph osd pool create images 64

ceph osd pool application enable images rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable volumes rbd
7.2.3 Verify the Ceph Cluster

Check the Ceph cluster status; the dashboard password is test@123.

ceph osd lspools
ceph osd status
ceph osd tree
ceph -s
ceph df
https://192.16.1.45:8443
7.3 Configure Ceph as the Cloud Backend Storage

Generate the cinder and glance client keyrings (run on the Ceph admin node):

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
chown glance:glance /etc/ceph/ceph.client.glance.keyring
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

Copy the keyring files to each cluster node (a loop sketch follows these commands):

scp ceph.client.cinder.keyring 192.16.1.32:/etc/ceph/
scp ceph.client.glance.keyring 192.16.1.32:/etc/ceph/
scp ceph.client.cinder.keyring 192.16.1.34:/etc/ceph/
scp ceph.client.glance.keyring 192.16.1.34:/etc/ceph/
scp ceph.client.cinder.keyring 192.16.1.43:/etc/ceph/
scp ceph.client.glance.keyring 192.16.1.43:/etc/ceph/
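
The same distribution in loop form; a sketch assuming the three compute-node IPs below:

for ip in 192.16.1.32 192.16.1.34 192.16.1.43; do
    # Both keyrings go to the same path on every node.
    scp /etc/ceph/ceph.client.cinder.keyring /etc/ceph/ceph.client.glance.keyring "$ip":/etc/ceph/
done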

Generate the libvirt secret on all nodes. The UUID in secret.xml comes from uuidgen; generate it once and reuse the same UUID on every node.

ceph auth get-key client.cinder | tee client.cinder.key
uuidgen
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>01f0c4e9-63e6-4970-935e-adc1d9f7bd79</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret 01f0c4e9-63e6-4970-935e-adc1d9f7bd79 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml

Verify:

virsh secret-list
virsh secret-undefine <secret-uuid>   # only needed when removing a secret
7.4 Integrate Ceph with Glance

The Image service runs on the control node, so execute the following there (controller):

openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url false
openstack-config --set /etc/glance/glance-api.conf glance_store stores glance.store.filesystem.Store,glance.store.http.Store,glance.store.rbd.Store
openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

Restart the glance services on the control node:

systemctl restart openstack-glance-registry openstack-glance-api

Verify the integration: if the Ceph pools show space in use, the integration succeeded.

glance image-create --name cirrosceph --disk-format qcow2 --container-format bare < cirros-0.3.5-x86_64-disk.img
ceph df
7.5 Integrate Ceph with Nova

This must be configured on the nova compute nodes (computer1, computer2, computer3).
Check whether libvirt is installed; if not, install it:

yum -y install libvirt

Configure the nova configuration file; remember to use your own UUID.

openstack-config --set /etc/nova/nova.conf libvirt inject_password False
openstack-config --set /etc/nova/nova.conf libvirt inject_key False
openstack-config --set /etc/nova/nova.conf libvirt inject_partition -2
openstack-config --set /etc/nova/nova.conf libvirt disk_cachemodes \"network=writeback\"
openstack-config --set /etc/nova/nova.conf libvirt images_type rbd
openstack-config --set /etc/nova/nova.conf libvirt images_rbd_pool vms
openstack-config --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/nova/nova.conf libvirt hw_disk_discard unmap
openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid 01f0c4e9-63e6-4970-935e-adc1d9f7bd79
openstack-config --set /etc/nova/nova.conf libvirt live_migration_flag \"VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED\"

Append the client settings to the Ceph configuration file:

echo "
[client]
rbd cache = true
rbd cache writethrough until flush = true
log file = /var/log/ceph/qemu-guest.\$pid.log
admin socket=/var/log/ceph/rbd-\$pid.asok
rbd concurrent management ops = 20
" >> /etc/ceph/ceph.conf

mkdir -p /var/log/qemu/
chmod -R 777 /var/log/qemu/

Restart the services:

systemctl restart libvirtd
systemctl enable libvirtd
systemctl restart openstack-nova-compute

Verify: launch an instance and confirm the integration by checking that the Ceph pool usage grows.

ceph df
7.6 Integrate Ceph with Cinder

Configure the cinder file on the control node (controller); remember to change the UUID.

openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf DEFAULT default_volume_type ceph
openstack-config --set /etc/cinder/cinder.conf ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph volume_backend_name ceph
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid 01f0c4e9-63e6-4970-935e-adc1d9f7bd79

Create the backend volume type:

cinder type-create ceph
cinder type-key ceph set volume_backend_name=ceph

Restart the service:

systemctl restart openstack-cinder-volume

Verify the integration: if the Ceph pools show space in use, the integration succeeded.

openstack volume service list
openstack volume create --size 1 volume1
ceph df

8. Launch an Instance

On the control node:
Create the external network. When using multiple NICs, the name must match the flat network names defined earlier.

. admin-openrc
openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider
openstack subnet create --network provider --allocation-pool start=192.16.1.100,end=192.16.1.150 --dns-nameserver 61.139.2.69 --gateway 192.16.1.1 --subnet-range 192.16.1.0/24 provider

Create the internal network:

openstack network create selfservice
openstack subnet create --network selfservice --dns-nameserver 8.8.4.4 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
openstack router create router
neutron router-interface-add router selfservice
neutron router-gateway-set router provider
ip netns
neutron router-port-list router

Create an instance flavor:

openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

Create a keypair:

openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

Open the security group:

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

Create the instance; note that net-id must be the network ID returned by your own query.

openstack flavor list
openstack image list
openstack network list
openstack security group list
openstack server create --flavor m1.nano --image cirros --nic net-id=00f2dc8e-afca-4445-9ed4-aeff45d5765e --security-group default --key-name mykey provider-instance
openstack server list
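
Once the instance reaches ACTIVE, its console can be opened through the noVNC proxy configured earlier:

# Prints a URL of the form http://controller:6080/vnc_auto.html?token=...
openstack console url show provider-instance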

9. Common Commands and Services

9.1 Component Services
systemctl enable chronyd.service
systemctl enable mariadb.service
systemctl enable rabbitmq-server.service
systemctl enable memcached.service
systemctl enable httpd.service
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-linuxbridge-agent.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart chronyd.service
systemctl restart mariadb.service
systemctl restart rabbitmq-server.service
systemctl restart memcached.service
systemctl restart httpd.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl restart libvirtd.service openstack-nova-compute.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-l3-agent.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart ceph-mgr* ceph-crash* ceph-mon* ceph-osd*
9.2 Commands

Verification commands:

openstack token issue
openstack image list
openstack hypervisor list
openstack compute service list
openstack catalog list
openstack image list
nova-status upgrade check
openstack extension list --network
openstack network agent list
openstack volume service list

Ceph commands:

ceph osd lspools
ceph osd status
ceph osd tree
ceph -s
ceph df

Remove a Ceph OSD:

systemctl stop ceph-osd@7
ceph osd out osd.7
ceph osd crush remove osd.7
ceph osd rm osd.7
ceph auth del osd.7

Remove a virsh secret:

virsh secret-list
virsh secret-undefine <id>

Aggregate commands:

nova aggregate-list
# Print a list of all aggregates.

nova aggregate-create <name> <availability-zone>
# Create a new aggregate named <name> in availability zone <availability-zone>. Returns the ID of the newly created aggregate.

nova aggregate-delete <id>
# Delete an aggregate with id <id>.

nova aggregate-details <id>
# Show details of the aggregate with id <id>.

nova aggregate-add-host <id> <host>
# Add host with name <host> to aggregate with id <id>.

nova aggregate-remove-host <id> <host>
# Remove the host with name <host> from the aggregate with id <id>.

nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
# Add or update metadata (key-value pairs) associated with the aggregate with id <id>.

nova aggregate-update <id> <name> [<availability_zone>]
# Update the aggregate's name and optionally availability zone.

nova host-list
# List all hosts by service.

nova host-update --maintenance [enable | disable] <hostname>
# Put the host into or out of maintenance mode.

10. Troubleshooting Installation Issues

10.1 OpenStack Installation Errors

If dependency errors occur while installing the OpenStack packages, search the Alibaba Cloud package repository (https://developer.aliyun.com/packageSearch), then download and install python2-qpid-proton-0.26.0-2.el7.x86_64 and qpid-proton-c-0.26.0-1.el7.x86_64 manually.

10.2 Database Connection Errors

If database connections fail because the connection limit is reached, raise max_connections:

mysql -uroot -p
show variables like 'max_connections';
set global max_connections=1500;
exit
vim /etc/my.cnf
[mysqld]
max_connections=1000