OpenStack dual-NIC, multi-node deployment

1. Create three virtual machines (controller, compute, object)

I. Configure three yum repositories:
    1. Base repo (CentOS-Base.repo)
        Aliyun mirror:
        wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
        Netease (163) mirror:
        wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS6-Base-163.repo
    2. EPEL repo (epel.repo)
    3. OpenStack repo (should have the highest priority, priority=1): rdo-release.repo
        yum install http://repos.fedorapeople.org/repos/openstack/icehouse/rdo-release-icehouse-3.noarch.rpm
II. Install the yum priorities plugin
    yum install yum-plugin-priorities -y
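The priority is set inside the repo file itself. A minimal sketch, assuming the RDO repo file contains a section named [openstack-icehouse] (check the actual section name in your rdo-release.repo; it may differ):
    # /etc/yum.repos.d/rdo-release.repo
    [openstack-icehouse]
    name=OpenStack Icehouse Repository
    baseurl=...            # keep the baseurl installed by the rdo-release RPM
    enabled=1
    gpgcheck=1
    priority=1             # highest priority, so RDO packages win over Base/EPEL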
III. Disable the firewall
    service iptables stop
    chkconfig iptables off
    Disable SELinux:
    vim /etc/selinux/config
    setenforce 0
    sestatus
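In /etc/selinux/config, the line to change is SELINUX, for example:
    SELINUX=permissive    # or disabled; takes full effect after a reboot
setenforce 0 only switches SELinux to permissive mode for the current session; sestatus shows the current state.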

CentOS 7 minimal installation



The three nodes (controller, compute, object) are attached to the NAT network (gateway 192.168.1.2). Configure the first NIC of each node in /etc/sysconfig/network-scripts/ifcfg-ens33.

controller:
BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.3
PREFIX=24
GATEWAY=192.168.1.2

Apply the configuration on each node after editing its ifcfg file:
nmcli connection up ens33

compute:
BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.10
PREFIX=24
GATEWAY=192.168.1.2

object:
BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.1.20
PREFIX=24
GATEWAY=192.168.1.2
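The title mentions dual NICs: the second NIC is used later as the provider (external) network interface for Neutron and is configured without an IP address. A sketch, assuming the second NIC is named ens34 (an assumption; adjust to the actual device name on your VMs):
/etc/sysconfig/network-scripts/ifcfg-ens34
BOOTPROTO=none
DEVICE=ens34
ONBOOT=yes
Bring it up with nmcli connection up ens34 (or restart the network service).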

Install basic tools from the repositories:
 yum install wget -y
 yum install vim -y
 yum install net-tools -y
Netease (163) mirror for the Base repo:
 wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS6-Base-163.repo
OpenStack repo (should have the highest priority, priority=1): rdo-release.repo
/etc/hosts
# controller
192.168.1.3 controller
# compute
192.168.1.10 compute
# object
192.168.1.20 object

Ping test:
ping qq.com
Also ping each node from the other nodes.
Install the time service (chrony) to synchronize time between the nodes:
yum install chrony -y
Configure the time service:
You should install Chrony, an implementation of NTP, to properly synchronize services among nodes. We recommend that you configure the controller node to reference more accurate (lower stratum) NTP servers and the other nodes to reference the controller node.
vim  /etc/chrony.conf
Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP
server. The configuration supports multiple server keys.
server   NTP_SERVER   iburst
To enable other nodes to connect to the chrony daemon on the controller node, add this key to the /etc/chrony.conf file (192.168.1.0/24 is the management network of this deployment; the official guide uses 10.0.0.0/24):
allow 192.168.1.0/24
systemctl enable chronyd.service
systemctl start chronyd.service
On the other nodes (compute and object), edit /etc/chrony.conf, comment out or remove all but one server key, and change it to reference the controller node:
server controller iburst


systemctl enable chronyd.service
systemctl start chronyd.service
Verify the time service:
chronyc sources
Enable the OpenStack repository:
yum install centos-release-openstack-ocata
yum upgrade
yum install python-openstackclient
yum install openstack-selinux
Install the SQL database:
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. The procedures in this guide use MariaDB or MySQL depending on the distribution. OpenStack services also support other SQL databases including PostgreSQL.
yum install mariadb mariadb-server python2-PyMySQL
Create and edit the   /etc/my.cnf.d/openstack.cnf  file and complete the following actions:
Create a [mysqld] section, and set the bind-address key to the management IP address of the
controller node to enable access by other nodes via the management network. Set additional keys
to enable useful options and the UTF-8 character set:
[mysqld]
bind-address =   192.168.1.3
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl enable mariadb.service
systemctl start mariadb.service
2. Secure the database service by running the mysql_secure_installation script. In particular, choose a suitable password for the database root account. The initial root password is empty, so just press Enter when asked for the current password:
# mysql_secure_installation
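The script asks a series of yes/no questions; a typical run looks roughly like this (prompts may vary slightly between MariaDB versions):
Enter current password for root (enter for none):   <press Enter, the password is empty>
Set root password? [Y/n] Y
New password: / Re-enter new password:
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y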


Message queue
OpenStack uses a message queue to coordinate operations and status information among services.   The message
queue service typically runs on the controller node.  OpenStack supports several message queue services includ-
ing RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue service because most distri-
butions support it. If you prefer to implement a different message queue service, consult the documentation
associated with it.
The message queue runs on the controller node.
1、yum install rabbitmq-server
2、systemctl enable rabbitmq-server.service
      systemctl start rabbitmq-server.service

3. Add the openstack user:
Replace RABBIT_PASS with a suitable password.
rabbitmqctl add_user openstack   RABBIT_PASS
Creating user "openstack" ...

4. Permit configuration, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
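To double-check that the user and its permissions were created, rabbitmqctl can list them:
rabbitmqctl list_users
rabbitmqctl list_permissions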


Memcached
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached
service typically runs on the controller node. For production deployments, we recommend enabling a combi-
nation of firewalling, authentication, and encryption to secure it.
yum install memcached python-memcached
Edit the   /etc/sysconfig/memcached  file and complete the following actions:
1、Configure the service to use the management IP address of the controller node. This is to enable
access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"
systemctl enable memcached.service
systemctl start memcached.service
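After memcached is started with the new OPTIONS, it should listen on the controller's management address as well as on localhost. A quick check with net-tools (installed earlier):
netstat -tlnp | grep 11211
Expect listeners on 127.0.0.1:11211, ::1:11211 and 192.168.1.3:11211 (the address that "controller" resolves to in /etc/hosts).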



Identity service:

Prerequisites


This section describes how to install and configure   the OpenStack Identity service, code-named keystone, on
the controller  node. For scalability purposes, this configuration deploys Fernet tokens and the Apache HTTP
server to handle requests.
Prerequisites: Before you install and configure the Identity service, you must create a database.

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';




Identity service:

Install and configure components


Note: Default configuration files vary by distribution. You might need to add these sections and options rather
than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates
potential default configuration options that you should retain.
Note: This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on ports
5000 and 35357. By default, the keystone service still listens on these ports. Therefore, this guide manually
disables the keystone service.
1、yum install openstack-keystone httpd mod_wsgi
2、Edit the   /etc/keystone/keystone.conf  file and complete the following actions:

Note: Comment out or remove any other connection options in the [database] section.
[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
# ...
provider = fernet

3、Populate the Identity service database:
#   su -s /bin/sh -c "keystone-manage db_sync" keystone
4. Initialize Fernet key repositories:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5. Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password   ADMIN_PASS   \
--bootstrap-admin-url   http://controller:35357/v3/   \
--bootstrap-internal-url   http://controller:5000/v3/   \
--bootstrap-public-url   http://controller:5000/v3/   \
--bootstrap-region-id RegionOne



Identity service:

Configure the Apache HTTP server


 
1. Edit the   /etc/httpd/conf/httpd.conf  file and configure the ServerName option to reference the
controller node:
ServerName controller
Create a link to the   /usr/share/keystone/wsgi-keystone.conf  file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/


Finalize the installation:

# systemctl enable httpd.service
# systemctl start httpd.service

Configure the administrative account
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3


Identity service:

Create a domain, projects, users, and roles:

The Identity service provides authentication services for each OpenStack service. The authentication service
uses a combination of domains, projects, users, and roles.

Note: You can repeat this procedure to create additional projects and users.
1. This guide uses a service project that contains a unique user for each service that you add to your envi-
ronment.   Create the service   project:
openstack project create --domain default \ 
--description "Service Project" service
2. Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates
the demo project and user.
openstack project create --domain default \
--description "Demo Project" demo
Note: Do not repeat this step when   creating additional users  for this project.
openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
3、Create the user role:
openstack role create user
4、Add the user role to the demo user of the demo project:
openstack role add --project demo --user demo user
Note: This command provides no output.



Identity service:

Verify operation
Perform these commands on the controller node.
1. For security reasons, disable the temporary authentication token mechanism:
Edit the   /etc/keystone/keystone-paste.ini   file and   remove admin_token_auth   from the
[pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3]   sections.
2. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variable:
$ unset OS_AUTH_URL OS_PASSWORD
3. As the admin user, request an authentication token:
openstack --os-auth-url   http://controller:35357/v3   \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
Note: This command uses the password for the admin user.
4. As the demo user, request an authentication token:
openstack --os-auth-url   http://controller:5000/v3   \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
Enter the password of the demo (non-admin) user here.


Identity service:

Create OpenStack client environment scripts
1. Create and edit the admin-openrc file and add the following content:
Note: The OpenStack client also supports using a clouds.yaml file. For more information, see the
os-client-config.
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create and edit the demo-openrc file and add the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

1. Load the admin-openrc file to populate environment variables with the location of the Identity service
and the admin project and user credentials:
$ . admin-openrc
2. Request an authentication token:
$ openstack token issue


Image service:

Important: For simplicity, this guide describes configuring the Image service to use the file back end, which
uploads and stores in a directory   on the controller node  hosting the Image service. By default, this directory is
/var/lib/glance/images/.
Before you proceed, ensure that the controller node has at least several gigabytes of space available in this
directory. Keep in mind that since the file back end is often local to a controller node, it is not typically
suitable for a multi-node glance deployment.
For information on requirements for other back ends, see Configuration Reference.

Prerequisites:  




Before you install and configure the Image service, you must create a database, service credentials, and API
endpoints.
1. To create the database, complete these steps:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. To create the service credentials, complete these steps:
    • Create the glance user:
       $ openstack user create --domain default --password-prompt glance
     User Password:
     Repeat User Password:
   • Add the admin role to the glance user and service project:
       $ openstack role add --project service --user glance admin
        Note: This command provides no output.
   • Create the glance service entity:
       $ openstack service create --name glance \
     --description "OpenStack Image" image
4、Create the Image service API endpoints:
openstack endpoint create --region RegionOne \
image public   http://controller:9292

openstack endpoint create --region RegionOne \
image internal   http://controller:9292

openstack endpoint create --region RegionOne \
image admin   http://controller:9292


Image service:

The OpenStack Image service includes the following components:
glance-api  Accepts Image API calls for image discovery, retrieval, and storage.
glance-registry  Stores, processes, and retrieves metadata about images. Metadata includes items such as size
and type.
Database  Stores image metadata and you can choose your database depending on your preference. Most
deployments use MySQL or SQLite.
Storage repository for image files  Various repository types are supported including normal file systems (or
any filesystem mounted on the glance-api controller node), Object Storage, RADOS block devices,
VMware datastore, and HTTP. Note that some repositories will only support read-only usage.
Metadata definition service  A common API for vendors, admins, services, and users to meaningfully define
their own custom metadata. This metadata can be used on different types of resources like images,
artifacts, volumes, flavors, and aggregates. A definition includes the new property’s key, description,
constraints, and the resource types which it can be associated with.
1、yum install openstack-glance

2、 Edit the   /etc/glance/glance-api.conf  file and complete the following actions:
• In the [database] section, configure database access :
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
•In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password =   GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
• In the [glance_store] section, configure the local file system store and location of image files:
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3、 Edit the   /etc/glance/glance-registry.conf   file and complete the following actions:
• In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
• In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
4、 Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
• Start the Image services and configure them to start when the system boots:
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service


Verify operation (upload a CirrOS test image)
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack
deployment.
Note: Perform these commands   on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. Download the source image:
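The Ocata install guide downloads the CirrOS 0.3.5 image with wget, for example:
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img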
3. Upload the image to the Image service using the QCOW2 disk format, bare container format, and public
visibility so all projects can access it:
openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
4. Confirm upload of the image and validate attributes:
openstack image list


Compute service
Install and configure controller node
This section describes how to install and configure the Compute service, code-named nova, on the controller
node.

(In this deployment the nova database password NOVA_DBPASS was set to 123456; the commands below use the placeholder.)
1、mysql -u root -p
     MariaDB [(none)]> CREATE DATABASE nova_api;
     MariaDB [(none)]> CREATE DATABASE nova;
     MariaDB [(none)]> CREATE DATABASE nova_cell0;

     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
2、Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3、 Create the Compute service credentials:
    • Create the nova user:
   $ openstack user create --domain default --password-prompt nova
   User Password:
   Repeat User Password:
    • Add the admin role to the nova user:
    Note: This command provides no output.
   $ openstack role add --project service --user nova admin
    • Create the nova service entity:
  $ openstack service create --name nova \
  --description "OpenStack Compute" compute
4、Create the Compute API service endpoints:
$ openstack endpoint create --region RegionOne \
compute public   http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute internal   http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute admin   http://controller:8774/v2.1
5、 Create a Placement service user using your chosen PLACEMENT_PASS:
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
6、 Add the Placement user to the service project with the admin role:
Note: This command provides no output.
$ openstack role add --project service --user placement admin
7、Create the Placement API entry in the service catalog:
openstack service create --name placement --description "Placement API" placement
8、Create the Placement API service endpoints:
$ openstack endpoint create --region RegionOne placement public http://controller:8778
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
$ openstack endpoint create --region RegionOne placement admin http://controller:8778


Compute service:

OpenStack Compute consists of the following areas and their components:
nova-api service  Accepts and responds to end user compute API calls. The service supports the OpenStack
Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform admin-
istrative actions. It enforces some policies and initiates most orchestration activities, such as running an
instance.
nova-api-metadata service  Accepts metadata requests from instances. The nova-api-metadata service
is generally used when you run in multi-host mode with nova-network installations. For details, see
Metadata service in the OpenStack Administrator Guide.
nova-compute service  A worker daemon that creates and terminates virtual machine instances through hy-
pervisor APIs. For example:
• XenAPI for XenServer/XCP
• libvirt for KVM or QEMU
• VMwareAPI for VMware
Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series
of system commands such as launching a KVM instance and updating its state in the database.
nova-placement-api service  Tracks the inventory and usage of each provider. For details, see Placement
API.
nova-scheduler service  Takes a virtual machine instance request from the queue and determines on which
compute server host it runs.
nova-conductor module  Mediates interactions between the nova-compute service and the database. It
eliminates direct accesses to the cloud database made by the nova-compute service. The nova-
conductor module scales horizontally. However, do not deploy it on nodes where the nova-compute
service runs. For more information, see Configuration Reference Guide.
nova-cert module  A server daemon that serves the Nova Cert service for X509 certificates. Used to generate
certificates for euca-bundle-image. Only needed for the EC2 API.
nova-consoleauth daemon  Authorizes tokens for users that console proxies provide. See nova-
novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. You
can run proxies of either type against a single nova-consoleauth service in a cluster configuration. For
information, see About nova-consoleauth.
nova-novncproxy daemon  Provides a proxy for accessing running instances through a VNC connection.
Supports browser-based novnc clients.
nova-spicehtml5proxy daemon  Provides a proxy for accessing running instances through a SPICE connec-
tion. Supports browser-based HTML5 client.
nova-xvpvncproxy daemon  Provides a proxy for accessing running instances through a VNC connection.
Supports an OpenStack-specific Java client.
The queue  A central hub for passing messages between daemons. Usually implemented with RabbitMQ, also
can be implemented with another AMQP message queue, such as ZeroMQ.
SQL database  Stores most build-time and run-time states for a cloud infrastructure, including:
• Available instance types
• Instances in use
• Available networks
• Projects
Theoretically, OpenStack Compute can support any database that SQLAlchemy supports. Common
databases are SQLite3 for test and development work, MySQL, MariaDB, and PostgreSQL.
1、yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api

2. Edit the   /etc/nova/nova.conf  file and complete the following actions:
• In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
• In the [api_database] and [database] sections, configure database access:
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
• In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
• In the [api] and [keystone_authtoken] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password =   NOVA_PASS
• In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node (192.168.1.3 in this deployment; the official example uses 10.0.0.11):
[DEFAULT]
# ...
my_ip = 192.168.1.3
• In the [DEFAULT] section, enable support for the Networking service:
Note: By default, Compute uses an internal firewall driver. Since the Networking service includes
a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
• In the [vnc] section, configure the VNC proxy to use the management interface IP address of the con-
troller node:
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
• In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers =   http://controller:9292
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
• In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password =   PLACEMENT_PASS

• Due to a packaging bug, you must enable access to the Placement API by adding the following configu-
ration to   /etc/httpd/conf.d/00-nova-placement-api.conf:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
• Restart the httpd service:
systemctl restart httpd
3、 Populate the nova-api database:
Note: Ignore any deprecation messages in this output.
# su -s /bin/sh -c "nova-manage api_db sync" nova
4、Register the cell0 database:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
5、Create the cell1 cell:
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
6、Populate the nova database:
# su -s /bin/sh -c "nova-manage db sync" nova
7、Verify nova cell0 and cell1 are registered correctly:
# nova-manage cell_v2 list_cells
Finalize installation
• Start the Compute services and configure them to start when the system boots:
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute service:

Install and configure a compute node (perform these steps on the compute node)

1、# yum install openstack-nova-compute
2、Edit the   /etc/nova/nova.conf  file and complete the following actions:
• In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
• In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
• In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password =   NOVA_PASS
• In the [DEFAULT] section, configure the my_ip option:
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 in the official example architecture (192.168.1.10 for the compute node in this deployment).
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
• In the [DEFAULT] section, enable support for the Networking service:
Note: By default, Compute uses an internal firewall service. Since Networking includes a fire-
wall service, you must disable the Compute firewall service by using the nova.virt.firewall.
NoopFirewallDriver firewall driver.
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
• In the [vnc] section, enable and configure remote console access:
The server component listens on all IP addresses and the proxy component only listens on the
management interface IP address of the compute node. The base URL indicates the location where
you can use a web browser to access remote consoles of instances on this compute node.
Note: If the web browser to access remote consoles resides on a host that cannot resolve the
controller hostname, you must replace controller with the management interface IP address
of the controller node.
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url =   http://controller:6080/vnc_auto.html
• In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers =   http://controller:9292
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...

lock_path = /var/lib/nova/tmp
• In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = PLACEMENT_PASS

Finalize installation

1. Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration
which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM:
• Edit the [libvirt] section in the   /etc/nova/nova.conf  file as follows:
[libvirt]
# ...
virt_type = qemu
2. Start the Compute service including its dependencies and configure them to start automatically when the
system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

Note: If the nova-compute service fails to start, check   /var/log/nova/nova-compute.log. The error
message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the con-
troller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node:
sudo firewall-cmd --zone=public --add-port=5672/tcp --permanent
sudo firewall-cmd --reload
and restart nova-compute service on the compute node.

Add the compute node to the cell database
Important: Run   the following commands on the controller node.
1. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts
in the database:
$ . admin-openrc
$ openstack hypervisor list
2. Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Note: When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts
on the controller node to register those new compute nodes. Alternatively, you can set an appropriate
interval in   /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300

Verify operation
Note: Perform these commands on the controller node.

1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. List service components to verify successful launch and registration of each process:
Note: This output should indicate three service components enabled on the controller node and one
service component enabled on the compute node.
$ openstack compute service list
3. List API endpoints in the Identity service to verify connectivity with the Identity service:
Note: Below endpoints list may differ depending on the installation of OpenStack components.
Note: Ignore any warnings in this output.
$ openstack catalog list
4. List images in the Image service to verify connectivity with the Image service:
$ openstack image list
5. Check the cells and placement API are working successfully:
# nova-status upgrade check


Networking service

(概念可略)
Networking (neutron) concepts
OpenStack Networking (neutron) manages all networking facets for the Virtual Networking Infrastructure
(VNI) and the access layer aspects of the Physical Networking Infrastructure (PNI) in your OpenStack en-
vironment. OpenStack Networking enables projects to create advanced virtual network topologies which may
include services such as a firewall, a load balancer, and a virtual private network (VPN).
Networking provides networks, subnets, and routers as object abstractions. Each abstraction has functional-
ity that mimics its physical counterpart: networks contain subnets, and routers route traffic between different
subnets and networks.
Any given Networking set up has at least one external network. Unlike the other networks, the external network
is not merely a virtually defined network. Instead, it represents a view into a slice of the physical, external
network accessible outside the OpenStack installation. IP addresses on the external network are accessible by
anybody physically on the outside network.
In addition to external networks, any Networking set up has one or more internal networks. These software-
defined networks connect directly to the VMs. Only the VMs on any given internal network, or those on subnets
connected through interfaces to a similar router, can access VMs connected to that network directly.
For the outside network to access VMs, and vice versa, routers between the networks are needed. Each router
has one gateway that is connected to an external network and one or more interfaces connected to internal
networks. Like a physical router, subnets can access machines on other subnets that are connected to the same
router, and machines can access the outside network through the gateway for the router.
Additionally, you can allocate IP addresses on external networks to ports on the internal network. Whenever
something is connected to a subnet, that connection is called a port. You can associate external network IP
addresses with ports to VMs. This way, entities on the outside network can access VMs.
Networking also supports security groups. Security groups enable administrators to define firewall rules in
groups. A VM can belong to one or more security groups, and Networking applies the rules in those security
groups to block or unblock ports, port ranges, or traffic types for that VM.
Each plug-in that Networking uses has its own concepts. While not vital to operating the VNI and OpenStack
environment, understanding these concepts can help you set up Networking. All Networking installations use
a core plug-in and a security group plug-in (or just the No-Op security group plug-in). Additionally, Firewall-
as-a-Service (FWaaS) and Load-Balancer-as-a-Service (LBaaS) plug-ins are available.
Prerequisites:
1、
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
2. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3. To create the service credentials, complete these steps:
• Create the neutron user:
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
• Add the admin role to the neutron user:
Note: This command provides no output.
$ openstack role add --project service --user neutron admin
• Create the neutron service entity:
$ openstack service create --name neutron \
--description "OpenStack Networking" network
4. Create the Networking service API endpoints:
$ openstack endpoint create --region RegionOne \
network public   http://controller:9696
$ openstack endpoint create --region RegionOne \
network internal   http://controller:9696
$ openstack endpoint create --region RegionOne \
network admin   http://controller:9696


Networking service

It includes the following components:
neutron-server  Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents  Plug and unplug ports, create networks or subnets, and provide
IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the
particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical
switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.
The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.
Messaging queue  Used by most OpenStack Networking installations to route information between the
neutron-server and various agents. Also acts as a database to store networking state for particular plug-
ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its
instances.
Networking Option 1: Provider networks
Install and configure the Networking components   on the controller node.
Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
• Edit the   /etc/neutron/neutron.conf  file and complete the following actions:
– In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-
ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topol-
ogy changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
– In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking
infrastructure for instances.
• Edit the   /etc/neutron/plugins/ml2/ml2_conf.ini  file and complete the following actions:
– In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan
– In the [ml2] section, disable self-service networks:
[ml2]
# ...
tenant_network_types =
– In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
# ...
mechanism_drivers = linuxbridge
Warning: After you configure the ML2 plug-in, removing values in the type_drivers
option can lead to database inconsistency.
– In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [securitygroup] section, enable ipset to increase efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances
and handles security groups.
• Edit the   /etc/neutron/plugins/ml2/linuxbridge_agent.ini  file and complete the following ac-
tions:
– In the [linux_bridge] section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface (in this dual-NIC deployment, the second NIC; see the sketch after this configuration block). See Host networking for more information.
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge iptables
firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
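A minimal example of the mapping, assuming the second NIC configured earlier is named ens34 (an assumption; use your actual provider interface name). The same mapping is used later in the Linux bridge agent configuration on the compute node:
[linux_bridge]
# "provider" is the physical network label referenced by flat_networks / network_vlan_ranges;
# ens34 is the provider (external) NIC of this dual-NIC deployment (assumed device name)
physical_interface_mappings = provider:ens34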
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
• Edit the   /etc/neutron/dhcp_agent.ini  file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and
enable isolated metadata so instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Return to Networking controller node configuration.

Configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.
• Edit the   /etc/neutron/metadata_agent.ini  file and complete the following actions:
– In the [DEFAULT] section, configure the metadata host and shared secret:
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret for the metadata proxy.
Configure the Compute service to use the Networking service
• Edit the   /etc/nova/nova.conf  file and perform the following actions:
– In the [neutron] section, configure access parameters, enable the metadata proxy, and configure
the secret:
[neutron]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password =   NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret =   METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
Finalize installation
1. The Networking service initialization scripts expect a symbolic link   /etc/neutron/plugin.ini  point-
ing to the ML2 plug-in configuration file,   /etc/neutron/plugins/ml2/ml2_conf.ini.  If this sym-
bolic link does not exist, create it using the following command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note: Database population occurs later for Networking because the script requires complete server and
plug-in configuration files.
If you receive the following Python exception, Could not parse rfc1738 URL from string, move
the connection option from the [default] section to the [database] section. Then, remove the single
quotes from the value in the neutron.conf file.
3. Restart the Compute API service:
# systemctl restart openstack-nova-api.service
4. Start the Networking services and configure them to start when the system boots.
For both networking options:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Networking service:

Install and configure the compute node

Install the components:
# yum install openstack-neutron-linuxbridge ebtables ipset

Configure the common component
• Edit the   /etc/neutron/neutron.conf  file and complete the following actions:
– In the [database] section, comment out any connection options because compute nodes do not  directly access the database.
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

– In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Networking Option 1: Provider networks
Configure the Networking components   on a compute node.
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances
and handles security groups.
• Edit the   /etc/neutron/plugins/ml2/linuxbridge_agent.ini  file and complete the following ac-
tions:
– In the [linux_bridge] section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings =provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network
interface. See Host networking for more information.
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge iptables
firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Return to Networking compute node configuration.

Configure the Compute service to use the Networking service
• Edit the   /etc/nova/nova.conf  file and complete the following actions:
– In the [neutron] section, configure access parameters:
[neutron]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password =   NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Finalize installation
1. Restart the Compute service:
# systemctl restart openstack-nova-compute.service
2. Start the Linux bridge agent and configure it to start when the system boots:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

Verify operation
Note: Perform these commands on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. List loaded extensions to verify successful launch of the neutron-server process:
Note: Actual output may differ slightly from this example.
$ openstack extension list --network
Networking Option 1: Provider networks
• List agents to verify successful launch of the neutron agents:
$ openstack network agent list
The output should indicate three agents on the controller node and one agent on each compute node.


Dashboard

Install and configure
This section describes how to install and configure   the dashboard on the controller node.
The only core service required by the dashboard is the Identity service. You can use the dashboard in combi-
nation with other services, such as Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.
Note: This section assumes proper installation, configuration, and operation of the Identity service using the
Apache HTTP server and Memcached service as described in the Install and configure the Identity service
section.
1. Install the packages:
# yum install openstack-dashboard
2. Edit the   /etc/openstack-dashboard/local_settings  file and complete the following actions:
• Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
• Allow your hosts to access the dashboard:
Note: ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production.
ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
Or, to allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*', ]
• Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

• Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
• Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
• Configure API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
• Configure Default as the default domain for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
• Configure user as the default role for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
• If you chose networking option 1, disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
• Optionally, configure the time zone:
TIME_ZONE = "Asia/Shanghai"
Replace the value with an appropriate time zone identifier for your deployment ("Asia/Shanghai" is used here). For more information, see the list of time zones.
Finalize installation
• Restart the web server and session storage service:
# systemctl restart httpd.service memcached.service


Verify operation
Verify operation of the dashboard.
Access the dashboard using a web browser at   http://controller/dashboard .
Authenticate using admin or demo user and default domain credentials.








Problems you may run into:
1. The python-oslo-privsep-lang-1.16 package FAILED to install.
   Set DNS for the network interface:
   DNS1=8.8.8.8
   DNS2=114.114.114.114
   Restart the network: service network restart
2. The httpd service fails to start.
   Check the httpd configuration file: make sure ServerName and the listen port are configured correctly.
3. The dashboard page cannot be reached.
   Stop the firewall on the controller: systemctl stop firewalld
4. Networks cannot be created.
   Edit /etc/neutron/plugins/ml2/ml2_conf.ini and set:
   tenant_network_types = vlan
   Then specify the VLAN range in the [ml2_type_vlan] section, for example:
   network_vlan_ranges = default:3001:4000
   A line like the one above defines a VLAN network labelled "default" with VLAN IDs 3001-4000. The range applies to networks that ordinary (non-admin) users create inside their own tenants: they cannot choose a VLAN ID, so Neutron assigns IDs from the range in order. The admin user is not limited by the range and can create VLAN networks with any ID from 1 to 4094.
   The VLAN label also has to be mapped to a physical network interface (the physical_interface_mappings option of the Linux bridge agent shown earlier).
   Restart the neutron service:
   systemctl restart neutron-server
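To verify the fix, an admin can try creating a VLAN network explicitly. A sketch, assuming the physical network label configured above is "default" (adjust the label and segment ID to your ml2_conf.ini):
. admin-openrc
openstack network create --provider-network-type vlan \
  --provider-physical-network default --provider-segment 3001 test-vlan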

Reference: /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan        (or vxlan / flat)
[ml2_type_vxlan]
vni_ranges = 1:1000
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999,physnet2






Appendix: creating the three virtual machines by cloning
Instead of installing all three virtual machines (controller, compute, object) separately, you can install the system on one VM and clone it for the other two. On each clone:
1. Edit the network configuration: vim /etc/sysconfig/network-scripts/ifcfg-eth0
2. Remove the persistent udev rule: rm -rf /etc/udev/rules.d/70-persistent-net.rules
3. Change the hostname: vim /etc/sysconfig/network
4. Edit /etc/resolv.conf
5. Reboot: init 6

一、配置三个源:
    1、Base源  CentOS-Base.repo
        阿里源
        wget -O /etc/yum.repos.d/CentOS-Base.repo   http://mirrors.aliyun.com/repo/Centos-6.repo
        网易源
        wget -O /etc/yum.repos.d/CentOS-Base.repo   http://mirrors.163.com/.help/CentOS6-Base-163.repo
    2、epel源(企业源) epel.repo
    3、OpenStack源(优先级应为最高priority=1) rdo-release.repo
           yum install   http://repos.fedorapeople.org/repos/openstack/

icehouse/rdo-release-icehouse-3.noarch.rpm
二、安装yum的优先级插件
    yum install yum-plugin-priorities -y
三、关闭防火墙
    service iptables stop
    chkconfig iptables off
    vim /etc/selinux/config
    setenforce 0
         sestatus  

CentOs7最小化安装


命令  红色
配置 绿色

controller
computer
object
NAT网络
BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR= 192.168.1.3
PREFIX=24
GATEWAY= 192.168.1.2

nmcli connection up ens33

BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR= 192.168.1.10
PREFIX=24
GATEWAY= 192.168.1.2

BOOTPROTO=none
DEVICE=ens33
ONBOOT=yes
IPADDR= 192.168.1.20
PREFIX=24
GATEWAY= 192.168.1.2

源安装
 yum install wget -y
 yum install vim -y
 yum install net-tools -y
网易源
         wget -O /etc/yum.repos.d/CentOS-Base.repo   http://mirrors.163.com/.help/CentOS6-Base-163.repo
OpenStack源(优先级应为最高priority=1) rdo-release.repo
/etc/hosts
# controller
192.168.1.3 controller
# compute
192.168.1.10 compute
# object
192.168.1.20 object

ping测试
ping   qq.com
相互ping
安装时间服务器(同步时间)
yum install chrony -y
配置时间服务器
You should install Chrony, an implementation of NTP, to properly
recommend that you configure the controller node to reference more accurate
nodes to reference the controller node.
vim  /etc/chrony.conf
Replace NTP_SERVER with the hostname or IP address of a suitable more accurate (lower stratum) NTP
server. The configuration supports multiple server keys.
server   NTP_SERVER   iburst
To enable other nodes to connect to the chrony daemon on the controller node, add this key to the   /etc/
chrony.conf  file :
allow 10.0.0.0/24
systemctl enable chronyd.service
systemctl start chronyd.service
server controller iburst


systemctl enable chronyd.service
systemctl start chronyd.service
测试时间服务器
chronyc sources
开启OpenStack repository
yum install centos-release-openstack-ocata
yum upgrade
yum install python-openstackclient
yum install openstack-selinux
安装数据库
Most OpenStack services use an SQL database to store information.   The
troller node.  The procedures in this guide use MariaDB or MySQL depending
services also support other SQL databases including PostgreSQL.
yum install mariadb mariadb-server python2-PyMySQL
Create and edit the   /etc/my.cnf.d/openstack.cnf  file and complete the following actions:
Create a [mysqld] section, and set the bind-address key to the management IP address of the
controller node to enable access by other nodes via the management network. Set additional keys
to enable useful options and the UTF-8 character set:
[mysqld]
bind-address =   192.168.1.3
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl enable mariadb.service
systemctl start mariadb.service
2. Secure the database service by running the mysql_secure_installation script. In particular, choose
a suitable password for the database root account:(执行脚本,进行安全设置,和重设密码)
初始密码为空格,什么都没有
# mysql_secure_installation  ???????


Message queue(消息队列)
OpenStack uses a message queue to coordinate operations and status information among services.   The message
queue service typically runs on the controller node.  OpenStack supports several message queue services includ-
ing RabbitMQ, Qpid, and ZeroMQ. However, most distributions that package OpenStack support a particular
message queue service. This guide implements the RabbitMQ message queue service because most distri-
butions support it. If you prefer to implement a different message queue service, consult the documentation
associated with it.
The message queue runs on the controller node.
1、yum install rabbitmq-server
2、systemctl enable rabbitmq-server.service
      systemctl start rabbitmq-server.service

3. Add the openstack user:
Replace RABBIT_PASS with a suitable password.
rabbitmqctl add_user openstack   RABBIT_PASS
Creating user "openstack" ...

4. Permit configuration, write, and read access for the openstack user:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...


Memcached(缓存)
The Identity service authentication mechanism for services uses Memcached to cache tokens. The memcached
service typically runs on the controller node. For production deployments, we recommend enabling a combi-
nation of firewalling, authentication, and encryption to secure it.
yum install memcached python-memcached
Edit the   /etc/sysconfig/memcached  file and complete the following actions:
1、Configure the service to use the management IP address of the controller node. This is to enable
access by other nodes via the management network:
OPTIONS="-l 127.0.0.1,::1,controller"
systemctl enable memcached.service
systemctl start memcached.service
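Optional check (an addition to the guide): confirm that memcached is running and listening on port 11211:
# systemctl status memcached.service
# ss -tnl | grep 11211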



Identity service:

Prerequisites


This section describes how to install and configure   the OpenStack Identity service, code-named keystone, on
the controller  node. For scalability purposes, this configuration deploys Fernet tokens and the Apache HTTP
server to handle requests.
Prerequisites:Before you install and configure the Identity service, you must create a database.

mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
Grant proper access to the keystone database:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
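Optional check (an addition to the guide): confirm that the keystone account can reach its database with the
password you chose (KEYSTONE_DBPASS). The keystone database should exist but contain no tables yet, since
db_sync has not been run:
mysql -u keystone -p -e "USE keystone; SHOW TABLES;"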




Identity service:

Install and configure components


Note: Default configuration files vary by distribution. You might need to add these sections and options rather
than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates
potential default configuration options that you should retain.
Note: This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on ports
5000 and 35357. By default, the keystone service still listens on these ports. Therefore, this guide manually
disables the keystone service.
1、yum install openstack-keystone httpd mod_wsgi
2、Edit the   /etc/keystone/keystone.conf  file and complete the following actions:

Note: Comment out or remove any other connection options in the [database] section.
[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[token]
# ...
provider = fernet

3、Populate the Identity service database:
#   su -s /bin/sh -c "keystone-manage db_sync" keystone
4. Initialize Fernet key repositories:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5. Bootstrap the Identity service:
# keystone-manage bootstrap --bootstrap-password   ADMIN_PASS   \
--bootstrap-admin-url   http://controller:35357/v3/   \
--bootstrap-internal-url   http://controller:5000/v3/   \
--bootstrap-public-url   http://controller:5000/v3/   \
--bootstrap-region-id RegionOne



Identity service:

Configure the Apache HTTP server


 
1. Edit the   /etc/httpd/conf/httpd.conf  file and configure the ServerName option to reference the
controller node:
ServerName controller
2. Create a link to the   /usr/share/keystone/wsgi-keystone.conf  file:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/


Finalize the installation:

# systemctl enable httpd.service
# systemctl start httpd.service

Configure the administrative account
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
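With these variables exported, a quick optional check (not part of the guide) should list at least the admin
project created by the bootstrap step:
$ openstack project list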


Identity service:

Create a domain, projects, users, and roles:

The Identity service provides authentication services for each OpenStack service. The authentication service
uses a combination of domains, projects, users, and roles.

Note: You can repeat this procedure to create additional projects and users.
1. This guide uses a service project that contains a unique user for each service that you add to your envi-
ronment.   Create the service   project:
openstack project create --domain default \ 
--description "Service Project" service
2. Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates
the demo project and user.
openstack project create --domain default \
--description "Demo Project" demo
Note: Do not repeat this step when   creating additional users  for this project.
openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
3、Create the user role:
openstack role create user
4、Add the user role to the demo user of the demo project:
openstack role add --project demo --user demo user
Note: This command provides no output.
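Optional check (an addition to the guide): confirm the role assignment; the --names option prints names
instead of IDs:
$ openstack role assignment list --user demo --project demo --names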



Identity service:

Verify operation
Perform these commands on the controller node.
1. For security reasons, disable the temporary authentication token mechanism:
Edit the   /etc/keystone/keystone-paste.ini   file and   remove admin_token_auth   from the
[pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3]   sections.
2. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variable:
$ unset OS_AUTH_URL OS_PASSWORD
3. As the admin user, request an authentication token:
openstack --os-auth-url   http://controller:35357/v3   \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
Note: This command uses the password for the admin user.
4. As the demo user, request an authentication token:
openstack --os-auth-url   http://controller:5000/v3   \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
This is the password of the demo (regular) user.


Identity service:

Create OpenStack client environment scripts
1. Create and edit the admin-openrc file and add the following content:
Note: The OpenStack client also supports using a clouds.yaml file. For more information, see the
os-client-config.
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
2. Create and edit the demo-openrc file and add the following content:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

1. Load the admin-openrc file to populate environment variables with the location of the Identity service
and the admin project and user credentials:
$ . admin-openrc
2. Request an authentication token:
$ openstack token issue
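The demo-openrc file can be checked the same way (optional):
$ . demo-openrc
$ openstack token issue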


Image service:

Important: For simplicity, this guide describes configuring the Image service to use the file back end, which
uploads and stores in a directory   on the controller node  hosting the Image service. By default, this directory is
/var/lib/glance/images/.
Before you proceed, ensure that the controller node has at least several gigabytes of space available in this
directory. Keep in mind that since the file back end is often local to a controller node, it is not typically
suitable for a multi-node glance deployment.
For information on requirements for other back ends, see Configuration Reference.

Prerequisites:  




Before you install and configure the Image service, you must create a database, service credentials, and API
endpoints.
1. To create the database, complete these steps:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
2. Source the admin credentials to gain access to admin-only CLI commands:
. admin-openrc
3. To create the service credentials, complete these steps:
    • Create the glance user:
       $ openstack user create --domain default --password-prompt glance
     User Password:
     Repeat User Password:
   • Add the admin role to the glance user and service project:
       $ openstack role add --project service --user glance admin
        Note: This command provides no output.
   • Create the glance service entity:
       $ openstack service create --name glance \
     --description "OpenStack Image" image
4、Create the Image service API endpoints:
openstack endpoint create --region RegionOne \
image public   http://controller:9292

openstack endpoint create --region RegionOne \
image internal   http://controller:9292

openstack endpoint create --region RegionOne \
image admin   http://controller:9292
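Optional check (an addition to the guide): list the service catalog and the image endpoints to confirm that the
three endpoints were created:
$ openstack service list
$ openstack endpoint list --service image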


Image service:

The OpenStack Image service includes the following components:
glance-api  Accepts Image API calls for image discovery, retrieval, and storage.
glance-registry  Stores, processes, and retrieves metadata about images. Metadata includes items such as size
and type.
Database  Stores image metadata and you can choose your database depending on your preference. Most
deployments use MySQL or SQLite.
Storage repository for image files  Various repository types are supported including normal file systems (or
any filesystem mounted on the glance-api controller node), Object Storage, RADOS block devices,
VMware datastore, and HTTP. Note that some repositories will only support read-only usage.
Metadata definition service  A common API for vendors, admins, services, and users to meaningfully define
their own custom metadata. This metadata can be used on different types of resources like images,
artifacts, volumes, flavors, and aggregates. A definition includes the new property’s key, description,
constraints, and the resource types which it can be associated with.
1、yum install openstack-glance

2、 Edit the   /etc/glance/glance-api.conf  file and complete the following actions:
• In the [database] section, configure database access :
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
•In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password =   GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
• In the [glance_store] section, configure the local file system store and location of image files:
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
3、 Edit the   /etc/glance/glance-registry.conf   file and complete the following actions:
• In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
• In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
4、 Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
• Start the Image services and configure them to start when the system boots:
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
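Optional check (an addition to the guide): confirm that both glance services started cleanly before uploading
an image:
# systemctl status openstack-glance-api.service openstack-glance-registry.service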


Verify operation (upload the CirrOS image)
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack
deployment.
Note: Perform these commands   on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. Download the source image (CirrOS 0.3.5, the file referenced in the next step):
$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
3. Upload the image to the Image service using the QCOW2 disk format, bare container format, and public
visibility so all projects can access it:
openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
4. Confirm upload of the image and validate attributes:
openstack image list


Compute service
Install and configure controller node
This section describes how to install and configure the Compute service, code-named nova, on the controller
node.

(The GRANT statements below are the ones actually used in this deployment, with the nova database password set
to 123456; the generic version with the NOVA_DBPASS placeholder follows in step 1.)
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY '123456';
1、mysql -u root -p
     MariaDB [(none)]> CREATE DATABASE nova_api;
     MariaDB [(none)]> CREATE DATABASE nova;
     MariaDB [(none)]> CREATE DATABASE nova_cell0;

     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
     MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
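Optional check (an addition to the guide): verify that the nova account can see its three databases with the
chosen password:
# mysql -u nova -p -e "SHOW DATABASES;"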
2、Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3、 Create the Compute service credentials:
    • Create the nova user:
   $ openstack user create --domain default --password-prompt nova
   User Password:
   Repeat User Password:
    • Add the admin role to the nova user:
    Note: This command provides no output.
   $ openstack role add --project service --user nova admin
    • Create the nova service entity:
  $ openstack service create --name nova \
  --description "OpenStack Compute" compute
4.、Create the Compute API service endpoints:
$ openstack endpoint create --region RegionOne \
compute public   http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute internal   http://controller:8774/v2.1
$ openstack endpoint create --region RegionOne \
compute admin   http://controller:8774/v2.1
5、 Create a Placement service user using your chosen PLACEMENT_PASS:
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
6、 Add the Placement user to the service project with the admin role:
Note: This command provides no output.
$ openstack role add --project service --user placement admin
7、Create the Placement API entry in the service catalog:
openstack service create --name placement --description "Placement API" placement
8.、Create the Placement API service endpoints:
$ openstack endpoint create --region RegionOne placement public   http://controller:8778
$ openstack endpoint create --region RegionOne placement internal   http://controller:8778
$ openstack endpoint create --region RegionOne placement admin   http://controller:8778
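Optional check (an addition to the guide): confirm that the placement service and its three endpoints are
registered in the catalog:
$ openstack endpoint list --service placement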


Compute service:

OpenStack Compute consists of the following areas and their components:
nova-api service  Accepts and responds to end user compute API calls. The service supports the OpenStack
Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform admin-
istrative actions. It enforces some policies and initiates most orchestration activities, such as running an
instance.
nova-api-metadata service  Accepts metadata requests from instances. The nova-api-metadata service
is generally used when you run in multi-host mode with nova-network installations. For details, see
Metadata service in the OpenStack Administrator Guide.
nova-compute service  A worker daemon that creates and terminates virtual machine instances through hy-
pervisor APIs. For example:
• XenAPI for XenServer/XCP
• libvirt for KVM or QEMU
• VMwareAPI for VMware
Processing is fairly complex. Basically, the daemon accepts actions from the queue and performs a series
of system commands such as launching a KVM instance and updating its state in the database.
nova-placement-api service  Tracks the inventory and usage of each provider. For details, see Placement
API.
nova-scheduler service  Takes a virtual machine instance request from the queue and determines on which
compute server host it runs.
nova-conductor module  Mediates interactions between the nova-compute service and the database. It
eliminates direct accesses to the cloud database made by the nova-compute service. The nova-
conductor module scales horizontally. However, do not deploy it on nodes where the nova-compute
service runs. For more information, see Configuration Reference Guide.
nova-cert module  A server daemon that serves the Nova Cert service for X509 certificates. Used to generate
certificates for euca-bundle-image. Only needed for the EC2 API.
nova-consoleauth daemon  Authorizes tokens for users that console proxies provide. See nova-
novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. You
can run proxies of either type against a single nova-consoleauth service in a cluster configuration. For
information, see About nova-consoleauth.
nova-novncproxy daemon  Provides a proxy for accessing running instances through a VNC connection.
Supports browser-based novnc clients.
nova-spicehtml5proxy daemon  Provides a proxy for accessing running instances through a SPICE connec-
tion. Supports browser-based HTML5 client.
nova-xvpvncproxy daemon  Provides a proxy for accessing running instances through a VNC connection.
Supports an OpenStack-specific Java client.
The queue  A central hub for passing messages between daemons. Usually implemented with RabbitMQ, also
can be implemented with another AMQP message queue, such as ZeroMQ.
SQL database  Stores most build-time and run-time states for a cloud infrastructure, including:
• Available instance types
• Instances in use
• Available networks
• Projects
Theoretically, OpenStack Compute can support any database that SQLAlchemy supports. Common
databases are SQLite3 for test and development work, MySQL, MariaDB, and PostgreSQL.
1、yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api

2. Edit the   /etc/nova/nova.conf  file and complete the following actions:
• In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
• In the [api_database] and [database] sections, configure database access:
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
• In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
• In the [api] and [keystone_authtoken] sections, configure Identity service access:
Note: Comment out or remove any other options in the [keystone_authtoken] section.
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password =   NOVA_PASS
• In the [DEFAULT] section, configure the my_ip option to use the management interface IP address
of the controller node (192.168.1.3 in this deployment; the official example architecture uses 10.0.0.11):
[DEFAULT]
# ...
my_ip = 192.168.1.3
• In the [DEFAULT] section, enable support for the Networking service:
Note: By default, Compute uses an internal firewall driver. Since the Networking service includes
a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
• In the [vnc] section, configure the VNC proxy to use the management interface IP address of the con-
troller node:
[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
• In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers =   http://controller:9292
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
• In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password =   PLACEMENT_PASS

• Due to a packaging bug, you must enable access to the Placement API by adding the following configu-
ration to   /etc/httpd/conf.d/00-nova-placement-api.conf:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
• Restart the httpd service:
systemctl restart httpd
3、 Populate the nova-api database:
Note: Ignore any deprecation messages in this output.
# su -s /bin/sh -c "nova-manage api_db sync" nova
4、Register the cell0 database:
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
5、Create the cell1 cell:
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650
6、Populate the nova database:
# su -s /bin/sh -c "nova-manage db sync" nova
7、Verify nova cell0 and cell1 are registered correctly:
# nova-manage cell_v2 list_cells
Finalize installation
• Start the Compute services and configure them to start when the system boots:
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
1、# yum install openstack-nova-compute
2、Edit the   /etc/nova/nova.conf  file and complete the following actions:
• In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
• In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
• In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password =   NOVA_PASS
• In the [DEFAULT] section, configure the my_ip option:
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network
interface on your compute node, typically 10.0.0.31 for the first node in the example architecture
(192.168.1.10 for the compute node in this deployment).
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
• In the [DEFAULT] section, enable support for the Networking service:
Note: By default, Compute uses an internal firewall service. Since Networking includes a fire-
wall service, you must disable the Compute firewall service by using the nova.virt.firewall.
NoopFirewallDriver firewall driver.
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
• In the [vnc] section, enable and configure remote console access:
The server component listens on all IP addresses and the proxy component only listens on the
management interface IP address of the compute node. The base URL indicates the location where
you can use a web browser to access remote consoles of instances on this compute node.
Note: If the web browser to access remote consoles resides on a host that cannot resolve the
controller hostname, you must replace controller with the management interface IP address
of the controller node.
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url =   http://controller:6080/vnc_auto.html
• In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers =   http://controller:9292
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...

lock_path = /var/lib/nova/tmp
• In the [placement] section, configure the Placement API:
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
username = placement
password = PLACEMENT_PASS

Finalize installation

1. Determine whether your compute node supports hardware acceleration for virtual machines (check that hardware virtualization is supported):
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration
which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and
you must configure libvirt to use QEMU instead of KVM. (A return value of 1 or more means hardware acceleration is supported; a return value of 0 means it is not, and you need the configuration below.)
• Edit the [libvirt] section in the   /etc/nova/nova.conf  file as follows:
[libvirt]
# ...
virt_type = qemu
2. Start the Compute service including its dependencies and configure them to start automatically when the
system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
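Optional check (an addition to the guide): confirm that both services are active and review the compute log for
errors before moving on:
# systemctl status libvirtd.service openstack-nova-compute.service
# tail -n 20 /var/log/nova/nova-compute.log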

Note: If the nova-compute service fails to start, check   /var/log/nova/nova-compute.log. The error
message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the con-
troller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node:
sudo firewall-cmd --zone=public --add-port=5672/tcp --permanent
sudo firewall-cmd --reload
and then restart the nova-compute service on the compute node.

Add the compute node to the cell database
Important: Run   the following commands on the controller node.
1. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts
in the database:
$ . admin-openrc
$ openstack hypervisor list
2. Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Note: When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts
on the controller node to register those new compute nodes. Alternatively, you can set an appropriate
interval in   /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300

Note: Perform these commands on the controller node.

1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. List service components to verify successful launch and registration of each process:
Note: This output should indicate three service components enabled on the controller node and one
service component enabled on the compute node.
$ openstack compute service list
3. List API endpoints in the Identity service to verify connectivity with the Identity service:
Note: The endpoint list below may differ depending on which OpenStack components are installed.
Note: Ignore any warnings in this output.
$ openstack catalog list
4. List images in the Image service to verify connectivity with the Image service:
$ openstack image list
5. Check the cells and placement API are working successfully:
# nova-status upgrade check


Networking service

(Concepts; this part can be skipped.)
Networking (neutron) concepts
OpenStack Networking (neutron) manages all networking facets for the Virtual Networking Infrastructure
(VNI) and the access layer aspects of the Physical Networking Infrastructure (PNI) in your OpenStack en-
vironment. OpenStack Networking enables projects to create advanced virtual network topologies which may
include services such as a firewall, a load balancer, and a virtual private network (VPN).
Networking provides networks, subnets, and routers as object abstractions. Each abstraction has functional-
ity that mimics its physical counterpart: networks contain subnets, and routers route traffic between different
subnets and networks.
Any given Networking set up has at least one external network. Unlike the other networks, the external network
is not merely a virtually defined network. Instead, it represents a view into a slice of the physical, external
network accessible outside the OpenStack installation. IP addresses on the external network are accessible by
anybody physically on the outside network.
In addition to external networks, any Networking set up has one or more internal networks. These software-
defined networks connect directly to the VMs. Only the VMs on any given internal network, or those on subnets
connected through interfaces to a similar router, can access VMs connected to that network directly.
For the outside network to access VMs, and vice versa, routers between the networks are needed. Each router
has one gateway that is connected to an external network and one or more interfaces connected to internal
networks. Like a physical router, subnets can access machines on other subnets that are connected to the same
router, and machines can access the outside network through the gateway for the router.
Additionally, you can allocate IP addresses on external networks to ports on the internal network. Whenever
something is connected to a subnet, that connection is called a port. You can associate external network IP
addresses with ports to VMs. This way, entities on the outside network can access VMs.
Networking also supports security groups. Security groups enable administrators to define firewall rules in
groups. A VM can belong to one or more security groups, and Networking applies the rules in those security
groups to block or unblock ports, port ranges, or traffic types for that VM.
Each plug-in that Networking uses has its own concepts. While not vital to operating the VNI and OpenStack
environment, understanding these concepts can help you set up Networking. All Networking installations use
a core plug-in and a security group plug-in (or just the No-Op security group plug-in). Additionally, Firewall-
as-a-Service (FWaaS) and Load-Balancer-as-a-Service (LBaaS) plug-ins are available.
Prerequisites:
1、
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
2. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
3. To create the service credentials, complete these steps:
• Create the neutron user:
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
• Add the admin role to the neutron user:
Note: This command provides no output.
$ openstack role add --project service --user neutron admin
• Create the neutron service entity:
$ openstack service create --name neutron \
--description "OpenStack Networking" network
4. Create the Networking service API endpoints:
$ openstack endpoint create --region RegionOne \
network public   http://controller:9696
$ openstack endpoint create --region RegionOne \
network internal   http://controller:9696
$ openstack endpoint create --region RegionOne \
network admin   http://controller:9696


Networking service

It includes the following components:
neutron-server  Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents  Plug and unplug ports, create networks or subnets, and provide
IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the
particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical
switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product.
The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.
Messaging queue  Used by most OpenStack Networking installations to route information between the
neutron-server and various agents. Also acts as a database to store networking state for particular plug-
ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity for its
instances.
Networking Option 1: Provider networks
Install and configure the Networking components   on the controller node.
Install the components
# yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
• Edit the   /etc/neutron/neutron.conf  file and complete the following actions:
– In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-
ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topol-
ogy changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
– In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking
infrastructure for instances.
• Edit the   /etc/neutron/plugins/ml2/ml2_conf.ini  file and complete the following actions:
– In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan
– In the [ml2] section, disable self-service networks:
[ml2]
# ...
tenant_network_types =
– In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
# ...
mechanism_drivers = linuxbridge
Warning: After you configure the ML2 plug-in, removing values in the type_drivers
option can lead to database inconsistency.
– In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [securitygroup] section, enable ipset to increase efficiency of security group rules:
[securitygroup]
# ...
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances
and handles security groups.
• Edit the   /etc/neutron/plugins/ml2/linuxbridge_agent.ini  file and complete the following ac-
tions:
– In the [linux_bridge] section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network
interface. See Host networking for more information.
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge iptables
firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
• Edit the   /etc/neutron/dhcp_agent.ini  file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and
enable isolated metadata so instances on provider networks can access metadata over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Return to Networking controller node configuration.

Configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.
• Edit the   /etc/neutron/metadata_agent.ini  file and complete the following actions:
– In the [DEFAULT] section, configure the metadata host and shared secret:
[DEFAULT]
# ...
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret for the metadata proxy.
Configure the Compute service to use the Networking service
• Edit the   /etc/nova/nova.conf  file and perform the following actions:
– In the [neutron] section, configure access parameters, enable the metadata proxy, and configure
the secret:
[neutron]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password =   NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret =   METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
Finalize installation
1. The Networking service initialization scripts expect a symbolic link   /etc/neutron/plugin.ini  point-
ing to the ML2 plug-in configuration file,   /etc/neutron/plugins/ml2/ml2_conf.ini.  If this sym-
bolic link does not exist, create it using the following command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
2. Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note: Database population occurs later for Networking because the script requires complete server and
plug-in configuration files.
If you receive the following Python exception, Could not parse rfc1738 URL from string, move
the connection option from the [default] section to the [database] section. Then, remove the single
quotes from the value in the neutron.conf file.
3. Restart the Compute API service:
# systemctl restart openstack-nova-api.service
4. Start the Networking services and configure them to start when the system boots.
For both networking options:
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
# yum install openstack-neutron-linuxbridge ebtables ipset

Configure the common component
• Edit the   /etc/neutron/neutron.conf  file and complete the following actions:
– In the [database] section, comment out any connection options because compute nodes do not  directly access the database.
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri =   http://controller:5000
auth_url =   http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS

– In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Networking Option 1: Provider networks
Configure the Networking components   on a compute node.
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances
and handles security groups.
• Edit the   /etc/neutron/plugins/ml2/linuxbridge_agent.ini  file and complete the following ac-
tions:
– In the [linux_bridge] section, map the provider virtual network to the provider physical network
interface:
[linux_bridge]
physical_interface_mappings =provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network
interface. See Host networking for more information.
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge iptables
firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Return to Networking compute node configuration.

Configure the Compute service to use the Networking service
• Edit the   /etc/nova/nova.conf  file and complete the following actions:
– In the [neutron] section, configure access parameters:
[neutron]
# ...
auth_url =   http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password =   NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Finalize installation
1. Restart the Compute service:
# systemctl restart openstack-nova-compute.service
2. Start the Linux bridge agent and configure it to start when the system boots:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

Verify operation
Note: Perform these commands on the controller node.
1. Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
2. List loaded extensions to verify successful launch of the neutron-server process:
Note: Actual output may differ slightly from this example.
$ openstack extension list --network
Networking Option 1: Provider networks
• List agents to verify successful launch of the neutron agents:
$ openstack network agent list
The output should indicate three agents on the controller node and one agent on each compute node.


Dashboard

Install and configure
This section describes how to install and configure   the dashboard on the controller node.
The only core service required by the dashboard is the Identity service. You can use the dashboard in combi-
nation with other services, such as Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.
Note: This section assumes proper installation, configuration, and operation of the Identity service using the
Apache HTTP server and Memcached service as described in the Install and configure the Identity service
section.
# yum install openstack-dashboard
2. Edit the   /etc/openstack-dashboard/local_settings  file and complete the following actions:
• Configure the dashboard to use OpenStack services on the controller node:
OPENSTACK_HOST = "controller"
• Allow your hosts to access the dashboard:
Note: ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development
work, but is potentially insecure and should not be used in production.
ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
Or, to allow all clients to access the dashboard:
ALLOWED_HOSTS = ['*', ]
• Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}

• Enable the Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
• Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
• Configure API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
• Configure Default as the default domain for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
• Configure user as the default role for users that you create via the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
• If you chose networking option 1, disable support for layer-3 networking services:
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
• Optionally, configure the time zone:
TIME_ZONE = "TIME_ZONE"
Replace TIME_ZONE with an appropriate time zone identifier, for example:
TIME_ZONE = "Asia/Shanghai"
For more information, see the list of time zones.
Finalize installation
• Restart the web server and session storage service:
# systemctl restart httpd.service memcached.service
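Optional check (an addition to the guide; assumes curl is available on the controller): confirm that the
dashboard responds before testing it in a browser. A 200 response or a redirect to the login page both indicate
that httpd is serving the dashboard:
$ curl -I http://controller/dashboard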


Verify operation
Verify operation of the dashboard.
Access the dashboard using a web browser at   http://controller/dashboard .
Authenticate using admin or demo user and default domain credentials.








Problems you may encounter:
1. The package python-oslo-privsep-lang-1.16 reports FAILED and cannot be installed.
    Set the DNS servers for the network interface: DNS1=8.8.8.8
                                                   DNS2=114.114.114.114
    Restart the network: service network restart

2. The httpd service will not start.
    Open the httpd configuration file and check whether ServerName and the port number are configured correctly.

3. The dashboard page cannot be reached.
    Stop the firewall on the controller: systemctl stop firewalld

4. Networks cannot be created.
    Edit /etc/neutron/plugins/ml2/ml2_conf.ini and set:
     tenant_network_types = vlan
Then specify the VLAN range (for example, a line such as network_vlan_ranges = default:3001:4000, which is what the text below describes):
Such a configuration defines a VLAN network labeled "default" with VLAN IDs in the range 3001-4000. This range applies to ordinary users creating networks in their own tenants: ordinary users cannot choose a VLAN ID, so Neutron assigns IDs from this range in order.
The admin user is not limited by this range and can create VLAN networks with any ID from 1 to 4094.
Next, specify the mapping between the VLAN networks and the physical NIC:
network_vlan_ranges =1:4000
Restart the neutron service:
systemctl  restart   neutron-server

/etc/neutron/plugins/ml2/ml2_conf.ini (quick reference for the type driver options):

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vlan        (or vxlan, or flat)

[ml2_type_vxlan]
vni_ranges = 1:1000

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999,physnet2





