OpenStack Learning Notes

Notes

Study notes for the OpenStack Train release. The material below was gathered from the web and is for learning purposes only.

Basic architecture diagram (image from the web; not reproduced here)


Overview of the Core Service Components

Compute service (compute)

Component: nova
Manages the instance lifecycle and serves as the unit of compute resources; supports multiple virtualization technologies.

Networking service (networking)

Component: neutron
Manages virtual networks and creates network topologies for instances; networks can be customized.

Image service (image service)

Component: glance
Registers and manages virtual machine image templates: a prepared operating system is captured as an image template and used directly when creating instances; multiple image formats are supported.

Block storage service (block storage)

Component: cinder
Provides persistent block storage devices for running instances; easy to extend and supports multiple storage backends.

Identity service (identity service)

Component: keystone
Authenticates and authorizes users, projects (tenants), roles, and services; supports multiple authentication mechanisms.

Communication flow between the core components (image from the web; not reproduced here)


Deployment Notes (two-node setup)

Controller node: controller, 192.168.5.102
Compute node: compute, 192.168.5.103

Install the database (MySQL 5.7)

vi /etc/my.cnf
[mysqld]
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
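
Restart the database so the settings take effect. A minimal step, assuming the MySQL 5.7 community RPMs, whose unit is named mysqld (a MariaDB install would use the mariadb unit instead):

systemctl restart mysqld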

Create all databases and grants in one pass

CREATE DATABASE keystone;
CREATE DATABASE glance;
CREATE DATABASE placement;
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE neutron;
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
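
As a quick sanity check that the grants work, log in as one of the service accounts (shown for keystone; the others are analogous, and this assumes the controller hostname resolves):

mysql -h controller -u keystone -pkeystone -e 'SHOW DATABASES;'
# the keystone database should appear in the output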

Base environment

yum -y install centos-release-openstack-train
yum -y install python-openstackclient openstack-selinux python2-PyMySQL rabbitmq-server memcached python-memcached etcd

mv /etc/etcd/etcd.conf{,.bak}

cat > /etc/etcd/etcd.conf << "EOF"
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.5.102:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.5.102:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.5.102:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.5.102:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.5.102:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

sed -i 's/::1/::1,controller/' /etc/sysconfig/memcached

systemctl enable --now etcd memcached rabbitmq-server

rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmq-plugins enable rabbitmq_management
rabbitmqctl change_password guest 'openstack'
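
To confirm the broker is ready, list the users and their permissions (both are standard rabbitmqctl subcommands); with the management plugin enabled, the web UI also listens on port 15672:

rabbitmqctl list_users
rabbitmqctl list_permissions
# web UI: http://controller:15672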

Keystone

yum -y install openstack-keystone httpd mod_wsgi

cp -a /etc/keystone/keystone.conf{,.bak}

cat > /etc/keystone/keystone.conf << "EOF"
[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone
[token]
provider = fernet
EOF

su -s /bin/sh -c "keystone-manage db_sync" keystone

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

keystone-manage bootstrap --bootstrap-password adminpass \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

sed -i '/#ServerName/a\ServerName controller' /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable --now httpd

export OS_USERNAME=admin
export OS_PASSWORD=adminpass
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

# openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
# openstack user create --domain default --password-prompt myuser
openstack user create --domain default --password mypass myuser
openstack role create myrole
openstack role add --project myproject --user myuser myrole

# unset OS_AUTH_URL OS_PASSWORD
# 
# openstack --os-auth-url http://controller:5000/v3 \
#   --os-project-domain-name default --os-user-domain-name default \
#   --os-project-name admin --os-username admin token issue
# Password: adminpass
# 
# openstack --os-auth-url http://controller:5000/v3 \
#   --os-project-domain-name default --os-user-domain-name default \
#   --os-project-name myproject --os-username myuser token issue
# Password: mypass

cat > ~/admin-openrc << "EOF"
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminpass
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

cat > ~/demo-openrc << "EOF"
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=mypass
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

. ~/admin-openrc
openstack token issue

Create all service users, role assignments, services, and endpoints in one pass

openstack user create --domain default --password glance glance
# grant the glance user the admin role on the service project
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

openstack user create --domain default --password placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
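
As a quick check that everything above registered correctly (both are standard openstackclient commands):

openstack service list
openstack endpoint list
# expect one row per service and three endpoints (public/internal/admin) for each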

Glance

yum -y install openstack-glance

cp -a /etc/glance/glance-api.conf{,.bak}

cat > /etc/glance/glance-api.conf << "EOF"
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
EOF

su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable --now openstack-glance-api

# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-2009.qcow2
# wget http://download.cirros-cloud.net/0.3.6/cirros-0.3.6-x86_64-disk.img
# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

glance image-create --name "cirros3" \
  --file cirros-0.3.6-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public

glance image-create --name "cirros4" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public

openstack image create "centos7" \
  --file CentOS-7-x86_64-GenericCloud-2009.qcow2 \
  --disk-format qcow2 --container-format bare \
  --public

openstack image list
The glance image-create command accepts the following options (not all of them are required; when uploading an image, pick whichever match your needs):

--id <IMAGE_ID>                          # ID of the image
--name <NAME>                            # name of the image
--store <STORE>                          # backend store to upload the image to
--disk-format <DISK_FORMAT>              # disk format; accepted values: ami, ari, aki, vhd, vmdk, raw, qcow2, vdi, and iso
--container-format <CONTAINER_FORMAT>    # container format; accepted values: ami, ari, aki, bare, and ovf
--owner <TENANT_ID>                      # tenant that owns the image
--size <SIZE>                            # image size in bytes; normally used only together with --location or --copy-from
--min-disk <DISK_GB>                     # minimum disk space (in gigabytes) needed to boot the image
--min-ram <DISK_RAM>                     # minimum RAM (in megabytes) needed to boot the image
--location <IMAGE_URL>                   # URL where the image resides, e.g. when the image is stored in Swift
--file <FILE>                            # local file (such as a disk image) uploaded during creation; an image can also be passed to the client via stdin
--checksum <CHECKSUM>                    # hash of the image data that Glance uses for verification; provide an md5 checksum
--copy-from <IMAGE_URL>                  # like --location, but tells the Glance server to immediately copy the data from that location and store it
--is-public [True|False]                 # whether the image is publicly accessible
--is-protected [True|False]              # protects the image from deletion
--property <key=value>                   # arbitrary property attached to the image; may be used multiple times
--human-readable                         # print image sizes in a human-friendly format
--progress                               # show an upload progress bar
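
For illustration, a hypothetical upload combining several of these options (the file reuses the CentOS image downloaded above; name and property values are examples):

glance image-create --name "centos7-custom" \
  --file CentOS-7-x86_64-GenericCloud-2009.qcow2 \
  --disk-format qcow2 --container-format bare \
  --min-disk 10 --min-ram 512 \
  --property os_distro=centos \
  --visibility public --progress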

Placement

yum -y install openstack-placement-api

cp -a /etc/placement/placement.conf{,.bak}

cat > /etc/placement/placement.conf << "EOF"
[placement_database]
connection = mysql+pymysql://placement:placement@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = placement
EOF

su -s /bin/sh -c "placement-manage db sync" placement

cp -a /etc/httpd/conf.d/00-placement-api.conf{,.bak}

cat > /etc/httpd/conf.d/00-placement-api.conf << "EOF"
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess placement-api processes=3 threads=1 user=placement group=placement
  WSGIScriptAlias / /usr/bin/placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/placement/placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
</VirtualHost>

Alias /placement-api /usr/bin/placement-api
<Location /placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
EOF

systemctl restart httpd

placement-status upgrade check
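
The Placement API can also be probed directly; its root URL returns the version document without authentication:

curl http://controller:8778
# {"versions": [{"id": "v1.0", ...}]}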

Nova (controller node)

yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

cp -a /etc/nova/nova.conf{,.bak}

cat > /etc/nova/nova.conf << "EOF"
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller:5672/
my_ip = 192.168.5.102
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = mysecret
[vnc]
enabled = true
server_listen = 192.168.5.102
server_proxyclient_address = 192.168.5.102
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://controller:5000/v3
username = placement
password = placement
[scheduler]
discover_hosts_in_cells_interval = 300
[cinder]
os_region_name = RegionOne
EOF

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

systemctl enable --now openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler
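
Verify that the control-plane services registered (nova-conductor and nova-scheduler should report state up):

. ~/admin-openrc
openstack compute service list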

Nova (compute node)

yum -y install centos-release-openstack-train
yum -y install python-openstackclient openstack-selinux openstack-nova-compute

cp -a /etc/nova/nova.conf{,.bak}

cat > /etc/nova/nova.conf << "EOF"
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.5.103
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.5.103
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
# QEMU instead of KVM is needed when hardware acceleration is not supported (e.g. nested virtualization), and avoids cirros boot problems on newer libvirtd versions
[libvirt]
virt_type = qemu
inject_password = true
EOF

# configure libvirt to use QEMU instead of KVM if hardware acceleration not supported
# 
# if [ $(egrep -c '(vmx|svm)' /proc/cpuinfo) -eq 0 ]; then
#   echo -e "[libvirt]\nvirt_type = qemu" >> /etc/nova/nova.conf
# fi

systemctl enable --now libvirtd openstack-nova-compute

# Run the following commands on the controller node.
# . ~/admin-openrc
# openstack compute service list --service nova-compute
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# 
# vi /etc/nova/nova.conf:
# [scheduler]
# discover_hosts_in_cells_interval = 300
# 
# openstack compute service list
# openstack catalog list
# openstack image list
# openstack host list
# nova service-list
# nova-status upgrade check

Neutron (controller node)

# Networking Option 1: Provider networks

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

cp -a /etc/neutron/neutron.conf{,.bak}
cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
cp -a /etc/neutron/dhcp_agent.ini{,.bak}

cat > /etc/neutron/neutron.conf << "EOF"
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

cat > /etc/neutron/plugins/ml2/ml2_conf.ini << "EOF"
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
EOF

cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << "EOF"
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

cat > /etc/neutron/dhcp_agent.ini << "EOF"
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
EOF

cp -a /etc/neutron/metadata_agent.ini{,.bak}

cat > /etc/neutron/metadata_agent.ini << "EOF"
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = mysecret
EOF

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

systemctl enable --now neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

# Networking Option 2: Self-service networks
# 
# vi /etc/neutron/neutron.conf
# [DEFAULT]
# service_plugins = router
# allow_overlapping_ips = true
# 
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
# [ml2]
# type_drivers = flat,vlan,vxlan
# tenant_network_types = vxlan
# mechanism_drivers = linuxbridge,l2population
# [ml2_type_vxlan]
# vni_ranges = 1:1000
# 
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# [vxlan]
# enable_vxlan = true
# local_ip = 192.168.5.102
# l2_population = true
# 
# vi /etc/neutron/l3_agent.ini
# [DEFAULT]
# interface_driver = linuxbridge
# 
# systemctl restart neutron-server neutron-linuxbridge-agent && systemctl enable --now neutron-l3-agent

. ~/admin-openrc
neutron agent-list
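
neutron agent-list comes from the legacy python-neutronclient; the same check through the unified client looks like this, and every agent should report an alive state of :-) :

openstack network agent list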

Neutron (compute node)

yum -y install openstack-neutron-linuxbridge ebtables ipset

cp -a /etc/neutron/neutron.conf{,.bak}

cat > /etc/neutron/neutron.conf << "EOF"
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
EOF

# Networking Option 1: Provider networks

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}

cat > /etc/neutron/plugins/ml2/linuxbridge_agent.ini << "EOF"
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
EOF

systemctl enable --now neutron-linuxbridge-agent
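
The [neutron] section was already written into this node's nova.conf earlier, so restarting nova-compute is enough for it to pick up networking (the upstream install guide performs this restart at this point):

systemctl restart openstack-nova-compute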

# Networking Option 2: Self-service networks
# 
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# [vxlan]
# enable_vxlan = true
# local_ip = 192.168.5.152
# l2_population = true
# 
# systemctl restart neutron-linuxbridge-agent

Cinder (controller node)

# once Cinder is installed, instances created from the dashboard boot from volumes rather than local disk, which makes it possible to resize the OS disk
yum -y install openstack-cinder

cp -a /etc/cinder/cinder.conf{,.bak}

cat > /etc/cinder/cinder.conf << "EOF"
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.5.102
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

su -s /bin/sh -c "cinder-manage db sync" cinder

systemctl enable --now openstack-cinder-api openstack-cinder-scheduler

. ~/admin-openrc

cinder service-list
openstack volume service list

# define separate cinder volume types (optional)
# cinder type-create lvm
# cinder type-create nfs
# cinder type-key lvm set volume_backend_name=iSCSI-Storage
# cinder type-key nfs set volume_backend_name=NFS-Storage

Cinder storage (compute node)

yum -y install lvm2 device-mapper-persistent-data
systemctl enable --now lvm2-lvmetad

# need another disk /dev/sdb
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb

sed -i '/^devices/a\\n\tfilter = [ "a/sda/", "a/sdb/", "r/.*/"]' /etc/lvm/lvm.conf
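
Verify that the volume group was created (vgs ships with lvm2):

vgs cinder-volumes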

yum -y install openstack-cinder targetcli python-keystone

cp -a /etc/cinder/cinder.conf{,.bak}

cat > /etc/cinder/cinder.conf << "EOF"
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.5.103
enabled_backends = lvm
glance_api_servers = http://controller:9292
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
volume_backend_name = iSCSI-Storage
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

systemctl enable --now openstack-cinder-volume target
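
Back on the controller, the new backend should appear as a cinder-volume service whose host carries an @lvm suffix derived from enabled_backends (the hostname part is whatever this node is called):

. ~/admin-openrc
openstack volume service list
# e.g. cinder-volume | compute@lvm | nova | enabled | up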

Dashboard

yum -y install openstack-dashboard

cp -a /etc/openstack-dashboard/local_settings{,.bak}

sed -i \
  -e 's@TIME_ZONE =.*@TIME_ZONE = "Asia/Shanghai"@' \
  -e 's@OPENSTACK_HOST =.*@OPENSTACK_HOST = "controller"@' \
  -e "s@ALLOWED_HOSTS =.*@ALLOWED_HOSTS = ['*']@" \
  /etc/openstack-dashboard/local_settings

# networking option 1: disable support for layer-3 networking services
sed -i \
  -e "s@'enable_distributed_router':.*@'enable_distributed_router': False,@" \
  -e "s@'enable_fip_topology_check':.*@'enable_fip_topology_check': False,@" \
  -e "s@'enable_ha_router':.*@'enable_ha_router': False,@" \
  -e "s@'enable_quotas':.*@'enable_quotas': False,@" \
  -e "s@'enable_router':.*@'enable_router': False,\n    'enable_lb': False,\n    'enable_firewall': False,\n    'enable_vpn': False,@" \
  /etc/openstack-dashboard/local_settings

# networking option 2: use default
# vi /etc/openstack-dashboard/local_settings
# OPENSTACK_NEUTRON_NETWORK = {
#     'enable_auto_allocated_network': False,
#     'enable_distributed_router': False,
#     'enable_fip_topology_check': True,
#     'enable_ha_router': False,
#     'enable_ipv6': True,
#     # TODO(amotoki): Drop OPENSTACK_NEUTRON_NETWORK completely from here.
#     # enable_quotas has the different default value here.
#     'enable_quotas': True,
#     'enable_rbac_policy': True,
#     'enable_router': True,
# }
# 
# systemctl restart httpd

cat >> /etc/openstack-dashboard/local_settings << "EOF"
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
WEBROOT = '/dashboard'
EOF

cp -a /etc/httpd/conf.d/openstack-dashboard.conf{,.bak}
sed -i '1i\WSGIApplicationGroup %{GLOBAL}' /etc/httpd/conf.d/openstack-dashboard.conf

systemctl restart httpd memcached

# http://192.168.5.102/dashboard
# default/admin/adminpass
# default/myuser/mypass

Launch an Instance

. ~/admin-openrc

openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

openstack subnet create --network provider \
  --allocation-pool start=192.168.5.2,end=192.168.5.254 \
  --dns-nameserver 114.114.114.114 --gateway 192.168.5.1 \
  --subnet-range 192.168.5.0/24 provider

neutron net-list
neutron subnet-list

openstack flavor create --id 0 --vcpus 1 --ram 128 --disk 1 m1.nano
openstack flavor create --id 1 --vcpus 1 --ram 1024 --disk 10 m1.tiny
openstack flavor create --id 2 --vcpus 1 --ram 2048 --disk 20 m1.small
openstack flavor create --id 3 --vcpus 2 --ram 4096 --disk 40 m1.medium
openstack flavor create --id 4 --vcpus 4 --ram 8192 --disk 80 m1.large
openstack flavor create --id 5 --vcpus 8 --ram 16384 --disk 160 m1.xlarge

openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

. ~/demo-openrc

ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
openstack keypair list

openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default

# Launch an instance on the provider network

. ~/demo-openrc

openstack flavor list
openstack image list
openstack network list
openstack security group list

openstack server create --flavor m1.nano --image cirros3 \
  --nic net-id=$(openstack network list | grep provider | awk '{print $2}') --security-group default \
  --key-name mykey cirros3

openstack server create --flavor m1.nano --image cirros4 \
  --nic net-id=$(openstack network list | grep provider | awk '{print $2}') --security-group default \
  --key-name mykey cirros4

openstack server create --flavor m1.tiny --image centos7 \
  --nic net-id=$(openstack network list | grep provider | awk '{print $2}') --security-group default \
  --key-name mykey centos7

openstack server list
openstack console url show cirros3
openstack console url show centos7
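
Once a server reaches ACTIVE, verify connectivity over the provider network. The address below is illustrative; use whatever DHCP assigned as shown in openstack server list:

ping -c 3 192.168.5.50
ssh cirros@192.168.5.50   # cirros images accept the injected key for user 'cirros'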

# Launch an instance on the self-service network

# . ~/demo-openrc
# 
# openstack network create selfservice
# 
# openstack subnet create --network selfservice \
#   --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 \
#   --subnet-range 172.16.1.0/24 selfservice
# 
# openstack router create router
# openstack router add subnet router selfservice
# openstack router set router --external-gateway provider
# 