OpenStack Pike Stable Cluster Distributed Deployment - Manual

On a whim, here is the most complete set of deployment steps you will find anywhere online. Working it out on your own takes at least a week; following this guide should take 1-2 days.

The pitfalls have already been cleared out. Look up any unfamiliar concepts yourself; if you follow the deployment steps exactly, they will work. This is entirely original, and writing it nearly killed me.

Related files: https://pan.baidu.com/s/1LhY74nyRtATVhkDS84OKhQ  Extraction code: f56t

1. Static IPs (the NetworkManager service can be disabled)

2. Hostnames and bindings:  vi /etc/hosts
192.168.1.11 controller  // controller node
192.168.1.12 compute   // compute node
192.168.1.13 cinder    // storage node
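
Optional quick check: make sure the bindings resolve from every node before moving on.
ping -c 2 controller
ping -c 2 compute
ping -c 2 cinder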

3. Disable the firewall and SELinux
# systemctl stop firewalld 
# systemctl disable firewalld 
# yum install iptables-services -y 
# systemctl restart iptables 
# systemctl enable iptables 
# iptables -F 
# iptables -F -t nat 
# iptables -F -t mangle 
# iptables -F -t raw
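
The heading mentions SELinux, but no command for it appears above; the standard way to disable it on CentOS 7 (run on every node) is:
setenforce 0   // turn it off for the current boot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   // keep it off after reboots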

4. Time synchronization
1. yum install ntp
2. ntpdate <ntp-server>   // ntpdate needs a server argument; see the sketch below
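
A minimal sketch, assuming the controller can reach a public pool (pool.ntp.org here is just an example, substitute your own server) and the other nodes then sync against the controller:
// on the controller
ntpdate pool.ntp.org
systemctl start ntpd ; systemctl enable ntpd   // keep time synced continuously
// on compute and cinder
ntpdate controller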

5. Prepare the yum repos on all nodes (add the following two repos on top of the default CentOS repos)
a. yum install yum-plugin-priorities -y
b. cd /home/openstack
// This directory needs a few files downloaded from the Baidu pan share above: centos-release-openstack-pike-1-1.el7.x86_64.rpm, cirros-0.3.5-x86_64-disk.img, and openstack-newton.tar
c. rpm -ivh centos-release-openstack-pike-1-1.el7.x86_64.rpm  --nodeps --force
d. vi /etc/yum.repos.d/CentOS-OpenStack-pike.repo and replace
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/ with
baseurl=https://mirror.tuna.tsinghua.edu.cn/cc/7/cloud/x86_64/openstack-pike/
Only replace the first two baseurl entries; the others are unused and can stay.
e. yum repolist

6. Install the base OpenStack tools on all nodes
a. yum install python-openstackclient openstack-selinux openstack-utils -y

7. Install the base packages on the compute node. Unless explicitly stated otherwise, every operation that follows runs on the controller node; anything that runs on the compute or cinder node is flagged as such. Keep this in mind.
yum install qemu-kvm libvirt bridge-utils -y
ln -sv /usr/libexec/qemu-kvm /usr/bin/

8. Install the supporting services, on the controller node
yum install mariadb mariadb-server python2-PyMySQL -y
vim /etc/my.cnf.d/openstack.cnf  // create this config file
[mysqld]
bind-address =    # set this to the controller node's management-network IP
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service: systemctl restart mariadb
systemctl enable mariadb

Password initialization: mysql_secure_installation   // remember whatever you set; I'm using 123456 here
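
Optional quick check that MariaDB is up and accepts the root password set above:
mysql -u root -p123456 -e 'select version();'   // a version string back means the database is ready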

9. Deploy rabbitmq, on the controller node
yum install erlang socat rabbitmq-server -y
systemctl restart rabbitmq-server
systemctl enable rabbitmq-server
netstat -ntlup |grep 5672   // the AMQP port

10. Create a rabbitmq user for openstack
rabbitmqctl list_users  // list current users
rabbitmqctl add_user openstack 123456  // username openstack, password 123456
rabbitmqctl set_user_tags openstack administrator  // give the openstack user the administrator tag
rabbitmqctl set_permissions openstack ".*" ".*" ".*"  // set configure/write/read permissions
rabbitmq-plugins enable rabbitmq_management   // enable the rabbitmq_management plugin
netstat -ntlup |grep 15672 // check it is listening
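
With the management plugin enabled, you can also verify the openstack user end to end through rabbitmq's HTTP API (optional check):
curl -u openstack:123456 http://controller:15672/api/whoami   // should return JSON naming the openstack user with its administrator tag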

11. Install memcached
yum install memcached python-memcached -y
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.122.11,::1"  // change this IP to the controller node's internal IP

systemctl restart memcached
systemctl enable memcached
netstat -ntlup |grep :11211

12. Install the identity service keystone; keystone can also handle auth for other projects later
a. Create the database and keystone's dedicated user
mysql -p123456
create database keystone;
grant all on keystone.* to 'keystone'@'localhost' identified by '123456';
grant all on keystone.* to 'keystone'@'%' identified by '123456';
flush privileges;
mysql -h controller -u keystone -p123456 -e 'show databases'   // verify; any output means OK
yum install openstack-keystone httpd mod_wsgi -y   // keystone runs under httpd, and httpd needs mod_wsgi to run Python applications
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak  // back up keystone's config before editing
vim /etc/keystone/keystone.conf
line 405: transport_url = rabbit://openstack:123456@controller:5672   // connect to rabbitmq
line 661: connection = mysql+pymysql://keystone:123456@controller/keystone  // connect to the MariaDB database
line 2774: uncomment provider = fernet
grep -n '^[a-zA-Z]' /etc/keystone/keystone.conf   // list the settings you just changed for a final check; be careful here
b. Initialize the keystone database
mysql -h controller -u keystone -p123456 -e 'use keystone;show tables;'   // no output is correct at this point
su -s /bin/sh -c "keystone-manage db_sync" keystone   // import keystone's schema; this takes a while, don't interrupt it, just wait
// su -s provides a shell, because keystone's default login shell is not /bin/bash
// su -c runs the command as the keystone user
mysql -h controller -u keystone -p123456 -e 'use keystone;show tables;' |wc -l   // roughly 39 tables is correct
c. Initialize keystone's credential keys
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
// initialization succeeded if the credential-keys and fernet-keys directories appear under /etc/keystone/
d. Bootstrap the openstack admin account's API data
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
e. vi /etc/httpd/conf/httpd.conf, line 95: change to ServerName controller:80
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl restart httpd
systemctl enable httpd
netstat -ntlup |grep http  // confirm it started; output on ports 5000, 80 and 35357 means OK
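
Optional check: hit the identity endpoints directly and confirm keystone answers.
curl http://controller:5000/v3    // should return a small JSON version document
curl http://controller:35357/v3   // same, on the admin port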

13. Create domains, projects, users and roles
a. vim admin-openstack.sh // a script exporting the admin user's environment variables
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Run: source admin-openstack.sh

b. Verify the admin project exists
openstack project list
Output like this means it is working:
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| e07563a130a24dad9ed862cb06857f26 | admin |
+----------------------------------+-------+

c. Create the service project
openstack project create --domain default --description "Service Project" service

d. Create the demo project
openstack project create --domain default --description "Demo Project" demo

e. Create the demo user
openstack user create --domain default --password 123456 demo
openstack user list

f. Create a role
openstack role list
openstack role create user
openstack role list
// add the demo user to the user role
openstack role add --project demo --user demo user

g. Verify that the previous steps took effect
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
// if prompted for a password, enter 123456; getting output means it works
// verify with the demo user
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
// if prompted for a password, enter 123456; getting output means it works
// opening http://<controller internal IP>:35357 in a browser should respond, which means everything configured so far is correct

14. Create the demo user's environment variable script
vim demo-openstack.sh
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Run: source demo-openstack.sh
openstack token issue

15. The image service glance. This one matters: the OS images you pick when creating a VM come from here.
a. Create the database
mysql -p123456
create database glance;
grant all on glance.* to 'glance'@'localhost' identified by '123456';
grant all on glance.* to 'glance'@'%' identified by '123456';
flush privileges;
mysql -h controller -u glance -p123456 -e 'show databases'

b. Permissions
source admin-openstack.sh
openstack user create --domain default --password 123456 glance
openstack user list
openstack role add --project service --user glance admin // add the glance user to the service project with the admin role
openstack service create --name glance --description "OpenStack Image" image  // create the glance service
openstack service list
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
openstack endpoint list

c. Install glance
yum install openstack-glance -y
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak

d. Edit the config files
vim /etc/glance/glance-api.conf
line 1823: connection = mysql+pymysql://glance:123456@controller/glance
line 1943: stores = file,http  // uncomment
line 1975: default_store = file // uncomment
line 2294: filesystem_store_datadir = /var/lib/glance/images // uncomment
line 3283: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

line 4235: flavor = keystone  // uncomment
Save and exit when done.
Run grep -Ev '#|^$' /etc/glance/glance-api.conf  // review what you just changed and compare carefully

vim /etc/glance/glance-registry.conf
line 1141: connection = mysql+pymysql://glance:123456@controller/glance
line 1234: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456

around line 2160: flavor = keystone // uncomment

grep -Ev '#|^$' /etc/glance/glance-registry.conf  // review your changes

e. Initialize and import data into the glance database
su -s /bin/sh -c "glance-manage db_sync" glance
mysql -h controller -u glance -p123456 -e 'use glance; show tables' // roughly 15 tables is correct

f. Start the services
systemctl restart openstack-glance-api
systemctl enable openstack-glance-api
systemctl restart openstack-glance-registry
systemctl enable openstack-glance-registry
netstat -ntlup |grep -E '9191|9292'  // ports 9191 and 9292 listening means this stage is healthy

g. Upload an image (iso or img file) to glance
source admin-openstack.sh
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
// this command uploads an OS image into glance; --public at the end makes it available to all projects
openstack image list // verify the upload; seeing it listed means OK
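
Optional: inspect the image record in more detail than the list shows.
openstack image show cirros   // status should be active and visibility public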

16. The compute component nova; deploy nova on the controller node
a. Create the databases
mysql -p123456
create database nova_api;
create database nova;
create database nova_cell0;
grant all on nova_api.* to 'nova'@'localhost' identified by '123456';
grant all on nova_api.* to 'nova'@'%' identified by '123456';
grant all on nova.* to 'nova'@'localhost' identified by '123456';
grant all on nova.* to 'nova'@'%' identified by '123456';
grant all on nova_cell0.* to 'nova'@'localhost' identified by '123456';
grant all on nova_cell0.* to 'nova'@'%' identified by '123456';
flush privileges;
quit

mysql -h controller -u nova -p123456 -e 'show databases'  // verify

b. Permissions
openstack user create --domain default --password 123456 nova
openstack user list
openstack role add --project service --user nova admin // add the nova user to the service project with the admin role
openstack service create --name nova --description "OpenStack Compute" compute   // create the nova service
openstack service list

c. Configure nova's API endpoint records; configure these carefully, it matters
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
openstack endpoint list

d. Create the placement user, used for resource tracking
openstack user create --domain default --password 123456 placement
openstack user list
openstack role add --project service --user placement admin  // add the placement user to the service project with the admin role
openstack service create --name placement --description "Placement API" placement  // create the placement service
openstack service list

e. Create the placement service's API endpoint records
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
openstack endpoint list

f. Install the nova packages on the controller node
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak

g. Edit the config files
vim /etc/nova/nova.conf
line 2753: enabled_apis=osapi_compute,metadata  // uncomment
line 3479: connection=mysql+pymysql://nova:123456@controller/nova_api
line 4453: connection=mysql+pymysql://nova:123456@controller/nova
line 3130: transport_url=rabbit://openstack:123456@controller
line 3193: auth_strategy=keystone // uncomment
line 5771: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456

line 1817: use_neutron=true // uncomment
line 2479: firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver // uncomment

around line 9897, find [vnc] and under it:
enabled=true  // uncomment
vncserver_listen=<controller internal IP>
vncserver_proxyclient_address=<controller internal IP>
line 5067: api_servers=http://controller:9292
line 7489: lock_path=/var/lib/nova/tmp
around line 8304, find [placement] and add this block under it:
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456

Save and exit, then run grep -Ev '^#|^$' /etc/nova/nova.conf

h. Edit the 00-nova-placement-api.conf config file
vi /etc/httpd/conf.d/00-nova-placement-api.conf
Add the following block just above </VirtualHost> (line 16):
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>

systemctl restart httpd  // restart the httpd service
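
Optional check that httpd is now serving the placement API:
curl http://controller:8778   // should return a small JSON version document, not a 404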

i. Import the data
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova  // this command prints some warnings; ignore them, I tried resolving them and it is not worth the hassle
nova-manage cell_v2 list_cells  // verify
mysql -h controller -u nova -p123456 -e 'use nova;show tables;' |wc -l   // around 111 tables is correct
mysql -h controller -u nova -p123456 -e 'use nova_api;show tables;' |wc -l  // around 33 tables is correct
mysql -h controller -u nova -p123456 -e 'use nova_cell0;show tables;' |wc -l   // around 111 tables is correct

j. Start the services
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
openstack catalog list
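
Optional check: before the compute node joins, the controller-side services should already be registered.
openstack compute service list   // expect nova-consoleauth, nova-conductor and nova-scheduler in state up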


17. Deploy on the compute node; unless stated otherwise, everything below runs on the compute node
vi /etc/yum.repos.d/CentOS-Base.repo and append this block at the end:
[Virt]
name=CentOS-$releasever - Virt
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://mirrors.sohu.com/centos/7.5.1804/virt/x86_64/kvm-common/
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

Run yum install openstack-nova-compute sysfsutils -y
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
Copy /etc/nova/nova.conf from the controller node to /etc/nova/nova.conf on the compute node,
then on the compute node vi /etc/nova/nova.conf and change a few things:
a. a few parameters under [vnc] differ
set vncserver_proxyclient_address to the compute node's management-network IP
enabled = True
vncserver_listen = 0.0.0.0
novncproxy_base_url = http://192.168.122.11:6080/vnc_auto.html   // points at the noVNC proxy on the controller

b. under the [libvirt] section, find virt_type and change it to:
virt_type=qemu
kvm cannot be used here, because this cloud platform is itself built inside KVM guests, so cat /proc/cpuinfo |egrep 'vmx|svm' finds nothing.
In production on physical servers, however, it should be virt_type=kvm; watch out for this.

c. Start the services
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service

d. On the controller node:
openstack compute service list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova  // register the new compute node in the nova database
nova-status upgrade check     // verify the API is healthy


18. Install the networking component neutron on the controller node
a. Create the database and permissions
mysql -p123456
create database neutron;
grant all on neutron.* to 'neutron'@'localhost' identified by '123456';
grant all on neutron.* to 'neutron'@'%' identified by '123456';
flush privileges;
quit
mysql -h controller -u neutron -p123456 -e 'show databases';
source admin-openstack.sh
openstack user create --domain default --password 123456 neutron
openstack user list
openstack role add --project service --user neutron admin  // add the neutron user to the service project with the admin role
openstack service create --name neutron --description "OpenStack Networking" network  // create the neutron service
openstack service list

b. Create neutron's API endpoint records
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
openstack endpoint list

c. Install neutron on the controller node
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vi /etc/neutron/neutron.conf
line 27: auth_strategy = keystone // uncomment
line 30: core_plugin = ml2
line 33: service_plugins = router
line 85: allow_overlapping_ips = true // uncomment
line 98: notify_nova_on_port_status_changes = true  // uncomment
line 102: notify_nova_on_port_data_changes = true  // uncomment
line 553: transport_url = rabbit://openstack:123456@controller
line 560: rpc_backend = rabbit   // uncomment
line 710: connection = mysql+pymysql://neutron:123456@controller/neutron
line 794: leave the [keystone_authtoken] header alone and add this block below it:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

line 1022: leave the [nova] header alone and add this block below it:
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

around line 1141: lock_path = /var/lib/neutron/tmp
Save and exit.
grep -Ev '#|^$' /etc/neutron/neutron.conf


vi /etc/neutron/plugins/ml2/ml2_conf.ini
line 132: type_drivers = flat,vlan,vxlan
line 137: tenant_network_types = vxlan
line 141: mechanism_drivers = linuxbridge,l2population
line 146: extension_drivers = port_security
line 182: flat_networks = provider
line 235: vni_ranges = 1:1000
line 259: enable_ipset = true
Save and exit.
grep -Ev '#|^$' /etc/neutron/plugins/ml2/ml2_conf.ini

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
line 142: physical_interface_mappings = provider:eth1   // note: use eth1 or eth0, whichever NIC reaches the external network
line 175: enable_vxlan = true
line 196: local_ip = 192.168.122.11  // the controller node's internal IP
line 220: l2_population = true
line 155: firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
line 160: enable_security_group = true
Save and exit.
grep -Ev '#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini

vi /etc/neutron/l3_agent.ini
line 16: interface_driver = linuxbridge
Save and exit.

vi /etc/neutron/dhcp_agent.ini
line 16: interface_driver = linuxbridge
line 37: dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq  // uncomment
line 46: enable_isolated_metadata = true
Save and exit.

vi /etc/neutron/metadata_agent.ini
line 23: nova_metadata_host = controller
line 35: metadata_proxy_shared_secret = metadata_daniel
Save and exit.

vi /etc/nova/nova.conf
Add this block under the [neutron] section header:
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_daniel
Save and exit.

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron   // this step takes a while
systemctl restart openstack-nova-api.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
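
Optional quick check that neutron-server and its agents came up on the controller:
netstat -ntlup |grep 9696    // neutron-server's API port
openstack network agent list   // the dhcp, l3, metadata and linuxbridge agents should show alive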


19. Deploy the neutron services on the compute node
yum install openstack-neutron-linuxbridge ebtables ipset -y
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vi /etc/neutron/neutron.conf
line 27: auth_strategy = keystone
line 553: transport_url = rabbit://openstack:123456@controller

line 794: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

around line 1135: lock_path = /var/lib/neutron/tmp
Save and exit.
grep -Ev '#|^$' /etc/neutron/neutron.conf

vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
line 142: physical_interface_mappings = provider:eth0  // note: eth0 or eth1, whichever NIC reaches the external network
line 175: enable_vxlan = true
line 196: local_ip = 192.168.122.12  // this node's management-network IP (important!)
line 220: l2_population = true
line 155: firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
line 160: enable_security_group = true
Save and exit.
grep -Ev '#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini

vi /etc/nova/nova.conf
Add this block under the [neutron] section header:
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Save and exit.
grep -Ev '#|^$' /etc/nova/nova.conf

systemctl restart openstack-nova-compute.service
systemctl start neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service

Verify on the controller node:
openstack network agent list


20. Install the dashboard (horizon) on the controller node
yum install openstack-dashboard -y
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak

vi /etc/openstack-dashboard/local_settings
line 38: ALLOWED_HOSTS = ['*',]   // allow everything for easy testing; in production restrict to specific IPs
line 64: OPENSTACK_API_VERSIONS = {
     "identity": 3,
     "image": 2,
     "volume": 2,
     "compute": 2,
 }
 
line 75: OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True  // True enables multi-domain support, like Alibaba Cloud's North China 1 and North China 2 regions
line 97: OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'  // name of the default domain
line 153: SESSION_ENGINE = 'django.contrib.sessions.backends.cache'  // add this line

line 154:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',  // sessions are handed to memcached on the controller
    },
}

line 161: with the block above in place, comment this one out:
#CACHES = {
#   'default': {
#       'BACKEND':'django.core.cache.backends.locmem.LocMemCache',
#   },
#}

line 183: OPENSTACK_HOST = "controller"
line 184: OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST   // use the v3 API, not v2.0
line 185: OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"   // default role

line 313:
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_fip_topology_check': True,
}
// enable everything; we are using network option 2 (self-service networks)

line 453: TIME_ZONE = "Asia/Shanghai"
Save and exit.


vi /etc/httpd/conf.d/openstack-dashboard.conf
line 4: add WSGIApplicationGroup %{GLOBAL}
// without this line the dashboard cannot be accessed later
Save and exit.

systemctl restart httpd memcached

Browse to: http://<controller IP>/dashboard/auth/login/?next=/dashboard/
Domain: default, account: admin, password: 123456


At this point a basic clustered OpenStack is up and running; next comes the OpenStack distributed storage, cinder.

==================================================================================================================================================

Next, install cinder storage

1. On the controller node
a. Create the database and the cinder user
mysql -p123456
create database cinder;
grant all on cinder.* to 'cinder'@'localhost' identified by '123456';
grant all on cinder.* to 'cinder'@'%' identified by '123456';
flush privileges;
quit
mysql -h controller -u cinder -p123456 -e 'show databases';
source admin-openstack.sh
openstack user create --domain default --password 123456 cinder
openstack user list
openstack role add --project service --user cinder admin     // add the cinder user to the service project with the admin role
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2   // create the cinderv2 service
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3   // create the cinderv3 service
openstack service list

b. Create cinder's endpoint records
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
openstack endpoint list  // verify

c. Install the openstack-cinder package on the controller node
yum install openstack-cinder -y
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
vi /etc/cinder/cinder.conf
line 283: my_ip = 192.168.122.11  // the controller node's internal IP
line 288: glance_api_servers = http://controller:9292
line 400: auth_strategy = keystone   // add this around line 400; it is not there by default
line 1212: transport_url = rabbit://openstack:123456@controller
line 1219: rpc_backend = rabbit  // uncomment
line 3782: connection = mysql+pymysql://cinder:123456@controller/cinder

line 4009: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

line 4298: lock_path = /var/lib/cinder/tmp
Save and exit.
grep -Ev '#|^$' /etc/cinder/cinder.conf

vi /etc/nova/nova.conf
Find [cinder] and add this setting under it:
os_region_name = RegionOne
Save and exit.

systemctl restart openstack-nova-api.service   // restart the service

d. Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder
mysql -h controller -u cinder -p123456 -e 'use cinder;show tables' |wc -l   // about 36 tables is correct

e. Start the services
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
netstat -ntlup |grep :8776  // verify the port came up
openstack volume service list

2. Deploy on the cinder node. The cinder node needs an extra disk attached; about 50G is plenty for testing.
a. Prepare the LVM volume group
lsblk  // confirm the vdb disk is present
yum install lvm2 device-mapper-persistent-data -y
systemctl start lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.service
pvcreate /dev/vdb  // create the LVM physical volume
vgcreate cinder_lvm /dev/vdb
// check the pv and vg (note: if the cinder node's OS was itself installed on LVM, several entries will show up here; keep them straight)
pvs
vgs
vi /etc/lvm/lvm.conf
line 142: filter = [ "a/vdb/", "r/.*/" ]   // add this line; a means accept, r means reject
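
Note: if the OS on this node is itself installed on LVM (the warning above), the root disk must also be accepted by the filter, or the system's own volumes become invisible to LVM. A sketch assuming the root disk is vda:
filter = [ "a/vda/", "a/vdb/", "r/.*/" ]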

b. Install and configure the cinder services
yum install openstack-cinder targetcli python-keystone -y
cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
vi /etc/cinder/cinder.conf
line 283: my_ip = 192.168.122.13  // the cinder node's management-network IP
line 288: glance_api_servers = http://controller:9292
line 400: auth_strategy = keystone
line 404: enabled_backends = lvm
line 1212: transport_url = rabbit://openstack:123456@controller
line 1219: rpc_backend = rabbit
line 3782: connection = mysql+pymysql://cinder:123456@controller/cinder
line 4009: under the [keystone_authtoken] header, add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

line 4298: lock_path = /var/lib/cinder/tmp

// append the [lvm] section at the very end of the file; it does not exist yet, so add these 5 lines by hand
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder_lvm   // must match the vg name created earlier
iscsi_protocol = iscsi
iscsi_helper = lioadm
Save and exit.
grep -Ev '#|^$' /etc/cinder/cinder.conf  // review what you just changed

c. Start the services
systemctl start openstack-cinder-volume.service target.service
systemctl enable openstack-cinder-volume.service target.service

d. Verify on the controller node
openstack volume service list

e. Log out of the openstack dashboard and log back in

With that, the storage node is complete.

3. Basic use of the cloud platform
// run on the controller node
openstack network list
// create a network
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
openstack network list // verify
// add a subnet to the network; there is some chance the server drops offline after this command, because the bridge built earlier carries a different MAC than your eth0/eth1 NIC; a bit of configuration fixes it
openstack subnet create --network provider --allocation-pool start=192.168.0.100,end=192.168.0.250 --dns-nameserver 114.114.114.114 --gateway 192.168.0.1 --subnet-range 192.168.0.0/24 provider
openstack network list  // verify
openstack subnet list   // verify

4. Create VM flavors, e.g. 4 cores / 8G, 8 cores / 16G
openstack flavor list
openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.nano
openstack flavor list    // verify again

5. Create a VM instance. Normally you would not manage VMs as the admin user; this is just a quick test.
openstack image list
openstack network list    // copy the string from the ID column of the output
openstack server create --flavor m1.nano --image cirros --nic net-id=<ID string from openstack network list> admin_instance1
openstack console url show admin_instance1
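
Optional last check that the instance actually boots (it may sit in BUILD briefly first):
openstack server list   // status should reach ACTIVE; if it lands in ERROR, check the nova logs under /var/log/nova/ on the controller and compute nodes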
