Deploying OpenStack HA on Physical Machines with Fuel 5.1, Using Ceph for Storage

How to Install Mirantis Fuel 5.1 OpenStack with Ceph

Author: @法不荣情  [Original link] http://weibo.com/p/2304189cacdb3d0102v55r

I am new to OpenStack and not yet familiar with everything. I started with RDO to quickly deploy a single-node OpenStack, then installed OpenStack manually once, typing commands from the installation guide; some parts were hard to follow and the whole process was tedious, never mind deploying a multi-node OpenStack HA environment. Fortunately Mirantis OpenStack provides Fuel, a tool that can deploy an OpenStack cloud quickly. I had previously used Fuel 5.0 on VMware Workstation 10 to deploy an OpenStack HA environment; it worked well and came up quickly. Now that version 5.1 is out, and after reading the related documentation, I am deploying an OpenStack HA environment on real physical hardware, using Ceph as the unified storage backend and adding two storage nodes.

Thanks to Luo Yong and the other authors for their excellent documentation, and of course to Mirantis for its contribution. What follows are my notes from the deployment process; if there are any mistakes, please point them out.

1. About Mirantis

Mirantis is a top-tier OpenStack service integrator. Among the top five community contributors it is the only company that makes its living purely from software and services (the others being Red Hat, HP, IBM and Rackspace). Compared with the other community distributions, Fuel has a fast release cadence, delivering a relatively stable community version roughly every two months.

2. About Fuel

Fuel is a tool designed for end-to-end, one-click deployment of OpenStack. It covers automated PXE-based operating system installation, DHCP, orchestration, and Puppet-based configuration management, plus very useful extras such as health checks for key OpenStack services and real-time log viewing.

Fuel 5.1 is based on the Icehouse release of OpenStack; the supported operating systems are CentOS 6.5 and Ubuntu 12.04.4.

Fuel's advantages:

·        Automatic node discovery and pre-deployment validation

·        Simple, fast configuration

·        Support for multiple operating systems and distributions, and for HA deployments

·        An external API for managing and configuring the environment, e.g. dynamically adding compute or storage nodes (a sketch follows this list)

·        A built-in health-check tool

·        Neutron support, e.g. GRE and namespaces are included, and each subnet can be bound to a specific physical NIC
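To illustrate the API point above: a minimal, hedged sketch of querying Fuel's REST API (Nailgun) with curl. The master address 10.20.0.2 and port 8000 are the Fuel defaults used later in this post; verify the exact endpoints and whether a token is required against the Fuel 5.1 API documentation.

# Sketch only: list environments and discovered nodes through the Nailgun API.
# Assumes the API is reachable on the admin network and needs no auth token
# (add an X-Auth-Token header if authentication is enabled on your master).
FUEL_API="http://10.20.0.2:8000/api"
curl -s "$FUEL_API/clusters" | python -m json.tool   # all OpenStack environments
curl -s "$FUEL_API/nodes"    | python -m json.tool   # all discovered nodes, including MACs and roles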

Fuel architecture



Image source: http://www.openstack.cn/p692.html

For deploying OpenStack with Fuel on virtual machines, see the following document; it is very well written and detailed:

http://www.openstack.cn/p692.html

3. Environment topology



During deployment, however, because this is a test environment the NICs were limited: each server has only two NICs, so only two switches are used. The switches are DELL PowerConnect 5448.

4. Switch configuration

   Create the required VLANs (101 and 102 here) and enable flow control on the switch ports. All switches carrying the Private, Management and Storage networks must allow the required VLANs: configure the ports in use as trunk ports and permit those VLANs on them. The configuration is as follows (other switch models may differ):

switch > enable

switch # configure

switch (config) # vlan database

switch (config) # vlan 101-102

switch (config) # interface range ethernet all

switch (config) # switchport mode trunk

switch (config) # switchport trunk allowed vlan add all

 

If the switch is not configured this way, Fuel's network verification will fail, because VLAN tagging is in use.
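If you want to confirm the trunk configuration independently of Fuel's check, a quick manual test between two Linux hosts on the same switch ports can help. This is only a sketch; the interface name eth2 and the 172.16.101.x test addresses are placeholders.

# On host A (host B mirrors this with .11), create a tagged sub-interface on VLAN 101:
modprobe 8021q                                       # load the VLAN module if it is not built in
ip link add link eth2 name eth2.101 type vlan id 101
ip addr add 172.16.101.10/24 dev eth2.101
ip link set eth2.101 up
ping -c 3 172.16.101.11                              # should succeed if VLAN 101 is trunked correctly
ip link del eth2.101                                 # clean up the test interface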

5. Installing the Fuel master

This is just an OS installation plus a little configuration. As shown below, on the installation welcome screen you can press the Tab key to edit the IP settings, or change showmenu=no to showmenu=yes and press Enter to open the detailed configuration menu. Here the defaults are used, so simply press Enter and the installation completes in one pass.



After the installation finishes, the screen looks like this:



This screen shows the root login password, along with the URL, username and password for the Fuel web UI. The web login page looks as follows:
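Optionally, you can sanity-check the master from the command line before moving on. A rough sketch, assuming the default admin-network address (10.20.0.2 unless changed during setup) and the default credentials noted in the reference material at the end of this post:

# SSH to the Fuel master (default password: r00tme) and check the CLI:
ssh root@10.20.0.2
fuel release        # lists the OpenStack releases Fuel 5.1 can deploy
fuel node           # lists discovered nodes (empty until slaves have PXE-booted)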

 



6. Deployment process

6.1 Creating a new OpenStack environment

Log in with username admin and password admin; the screen below appears.



Click "New OpenStack Environment" to start creating the environment, then click "Next" to continue;

Enter the environment name and choose the OpenStack release. This is really a choice of operating system, since the OpenStack version is fixed at Icehouse. Click "Next" to continue;

Choose the deployment mode. There are two modes, Multi-node with HA and Multi-node; Multi-node with HA requires at least three controller nodes. Choose "Multi-node with HA" and click "Next";

Because this environment is deployed on physical machines, choose KVM. On virtual machines choose QEMU, and for a vCenter environment choose vCenter. Click "Next";

Choose the GRE network mode here and click "Next";

For the storage backend select "Ceph". Note that this option requires two or more additional nodes to act as storage nodes. Click "Next";

Additional services: none are selected here. Click "Next";

Click "Create" to finish creating the OpenStack environment.

6.2 Node discovery

This test environment uses two NICs per server (three would be better), and they must support PXE boot. Enable the "virtualization technology" option in the servers' BIOS and set them to boot from the PXE network.
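Before relying on the KVM hypervisor choice made earlier, it may be worth confirming that the BIOS virtualization option actually took effect. A quick check from any Linux shell on the server (sketch):

egrep -c '(vmx|svm)' /proc/cpuinfo   # greater than 0 means the VT-x/AMD-V CPU flags are exposed
lsmod | grep kvm                     # after deployment, kvm plus kvm_intel or kvm_amd should be loaded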

After PXE booting, the node automatically boots into the bootstrap image by default. Once the "bootstrap login" prompt appears on the console, the Fuel web UI will discover the node.



When Fuel web discovers a node, it shows a notification like the following:



After the nodes are discovered, the next step is to add them. Open the newly created OpenStack environment, click "Add Nodes" in the upper right corner, tick the "Controller" role, and then select the servers for that role. It is best to record each server's NIC MAC addresses beforehand, because otherwise there is no way to tell which entry corresponds to which physical server. Alternatively, proceed in stages: first PXE-boot only the servers intended as controllers (at least three), add them, and only afterwards boot and select the compute and storage servers.
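If you have SSH access to the master, the Fuel CLI offers another way to match the discovered entries to physical servers by MAC address. A sketch, assuming the fuel CLI shipped with the 5.1 master:

fuel node                      # the output table includes a mac column for every discovered node
fuel node | grep -i 'a1:b2'    # hypothetical filter: grep for the last hex digits of a known NIC MAC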


After the nodes are added they appear as shown below, initially with the status "Pending Addition"; the screenshot below shows an already deployed environment.



6.3 Deployment and configuration

Select a server to configure its disks and network interfaces.

Disk configuration is shown below; the defaults are used here.

The interface configuration is changed as shown below;

Next comes the environment-wide network configuration: click "Networks" and configure it as shown in the figure.

 

Finally, verify the network. If the switches were not configured correctly in the earlier step, an error will be reported here; forcing the deployment regardless may lead to errors during deployment.

Click "Settings" to configure the OpenStack and storage options; everything else is left at the defaults.

Storage uses Ceph.

Once everything is configured, click "Deploy Changes" to start the deployment.

When the deployment completes, the web login information is displayed as shown below.
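As a rough post-deployment check (not part of the Fuel workflow itself), you can log into one of the controllers and confirm that the OpenStack services and the Ceph cluster are healthy. A sketch, using the /root/openrc credentials file that Fuel places on the controllers (see the reference material below):

source /root/openrc    # load the admin credentials
nova service-list      # all nova services should show state "up"
cinder service-list    # cinder-volume should be up and, with this setup, backed by Ceph/RBD
ceph -s                # the Ceph cluster should report HEALTH_OK
ceph osd tree          # the OSDs on the two storage nodes should be up and in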



References

1. http://community.mellanox.com/docs/DOC-1474

This post shows how to set up and configure Mirantis Fuel ver. 5.1/5.1.1 (OpenStack Icehouse based on CentOS 6.5) to support Mellanox ConnectX-3 adapters to work in SR-IOV mode for the VMs on the compute nodes, and in iSER (iSCSI over RDMA) transport mode for the storage nodes.
Related references:

 

Before reading this post, make sure you are familiar with the Mirantis Fuel 5.1/5.1.1 installation procedures.

It is also recommended to watch the movie HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters

 

Video: https://www.youtube.com/embed/5Ga28Rp7K_I

 

Setup Diagram:

Setup Diagram.jpg
Note :  Besides the Fuel Master node, all nodes should be connected to all five networks.

 

Note: Server’s IPMI and the switches management interfaces wiring and configuration are out of scope.

You need to ensure that there is management access (SSH) to Mellanox Ethernet switch SX1036 to perform the configuration.

 

 

 

 

Setup BOM:

Fuel Master server (quantity: 1)
  DELL PowerEdge R620
  • CPU: 2 x E5-2650 @ 2.00GHz
  • MEM: 128 GB
  • HD: 2 x 900GB SAS 10k in RAID-1

Cloud Controller and Compute servers (3 x Controllers, 3 x Computes; quantity: 6)
  DELL PowerEdge R620
  • CPU: 2 x E5-2650 @ 2.00GHz
  • MEM: 128 GB
  • HD: 2 x 900GB SAS 10k in RAID-1
  • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

Cloud Storage server (quantity: 1)
  Supermicro X9DR3-F
  • CPU: 2 x E5-2650 @ 2.00GHz
  • MEM: 128 GB
  • HD: 24 x 6Gb/s SATA Intel SSD DC S3500 Series 480GB (SSDSC2BB480G4)
  • RAID Ctrl: LSI Logic MegaRAID SAS 2208 with battery
  • NIC: Mellanox ConnectX-3 Pro VPI (MCX353-FCCT)

Admin (PXE) and Public switch (quantity: 1)
  1Gb switch with VLANs configured to support both networks

Cloud Ethernet Switch (quantity: 1)
  Mellanox SX1036 40/56Gb 36-port Ethernet switch

Cables
  • 16 x 1Gb CAT-6e for Admin (PXE) and Public networks
  • 7 x 56GbE copper cables up to 2m (MC2207130-XXX)


Note:
You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
Storage server RAID Setup:
  • 2 SSD drives in bays 0-1 configured in RAID-1 (Mirror): The OS will be installed on it.
  • 22 SSD drives in bays 3-24 configured in RAID-10: The Cinder volume will be configured on the RAID drive.

Network Physical Setup:
  1. Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
    It is recommended to write the MAC address of the Controller and Storage servers to make Cloud installation easier (see Controller Node section below in Nodes tab).
  2. Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
  3. Connect port #1 (eth2) of ConnectX-3 Pro to SX1036 Ethernet switch (Private, Management, Storage networks).
     
Note:  The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 adapter family.
Rack Setup Example:
2.Rack Diagram.jpg
Fuel Node:
3.Fuel Node.JPG.jpg
Compute and Controller Nodes:
4.Compute and Controller.JPG.jpg
5.Storage.JPG.jpg
   4. Configure the required VLANs and enable flow control on the Ethernet switch ports.
All related VLANs should be enabled on the 40/56GbE switch (Private, Management, Storage networks). On Mellanox switches, use the commands below to enable the VLANs (e.g. VLAN 1-100 on all ports).

Note: Refer to the MLNX-OS User Manual to get familiar with the switch software (located at support.mellanox.com).
Note: Before you start using the Mellanox switch, it is recommended to upgrade it to the latest MLNX-OS version.
switch > enable
switch # configure terminal
switch (config) #  vlan 1-100
switch (config vlan 1-100) # exit
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
...
switch (config) # interface ethernet 1/36 switchport mode hybrid
switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
Flow control is required when running iSER (RDMA over RoCE - Ethernet). On Mellanox switches, run the following command to enable flow control on the switches (on all ports in this example):
switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
To save the configuration (permanently), run:
switch (config) # configuration write
Note:  Flow control (global pause) is normally enabled by default on the servers. If it is disabled, run:
# ethtool -A <interface-name> rx on tx on
   6. If you are running 56GbE, follow this example to set up the link between the servers and the switch.
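As a follow-up to the flow-control note above, a hedged way to confirm the pause settings on the server side (eth2 is the ConnectX-3 port used in this example):

ethtool -a eth2                # "RX: on" and "TX: on" mean pause frames are enabled
ethtool -A eth2 rx on tx on    # enable them if either direction reports "off"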
Networks Allocation (Example)
The example in this post is based on the network allocation defined in this table:
Network: Admin (PXE)
Subnet/Mask: 10.20.0.0/24, Gateway: N/A
Notes: This network is used to provision and manage Cloud nodes via the Fuel Master. It is enclosed within a 1Gb switch and has no routing outside. This is the default Fuel network.

Network: Management
Subnet/Mask: 192.168.0.0/24, Gateway: N/A
Notes: This is the Cloud Management network. It uses VLAN 2 in the SX1036 over the 40/56Gb interconnect. This is the default Fuel network.

Network: Storage
Subnet/Mask: 192.168.1.0/24, Gateway: N/A
Notes: This network is used to provide storage services. It uses VLAN 3 in the SX1036 over the 40/56Gb interconnect. This is the default Fuel network.

Network: Public and Neutron L3
Subnet/Mask: 10.7.208.0/24, Gateway: 10.7.208.1
Notes: The Public network is used to connect Cloud nodes to an external network. Neutron L3 is used to provide Floating IPs for tenant VMs. Both networks are represented by IP ranges within the same subnet, with routing to external networks. Every Cloud node with a Public IP needs an address, and the HA functionality requires one additional Virtual IP, so for our example with 7 Cloud nodes we need 8 IPs in the Public network range. Consider a larger range if you plan to add more servers to the cloud later.

In our build we will use the IP range 10.7.208.53 >> 10.7.208.76 for both Public and Neutron L3. The IP allocation is as follows:

  • Fuel Master IP: 10.7.208.53
  • Public Range: 10.7.208.54 >> 10.7.208.61 (used for physical servers)
  • Neutron L3 Range: 10.7.208.62 >> 10.7.208.76 (used for the Floating IP pool)
Install the Fuel Master via ISO Image:
  1. Boot Fuel Master Server from the ISO as a virtual CD (click here for the image).
  2. Press the <TAB> key on the very first installation screen which says "Welcome to Fuel Installer" and update the kernel option from showmenu=no to showmenu=yes and hit Enter. It will now install Fuel and reboot the server.
    6.Fuel.Iso.Show_Menue.JPG.jpg
  3. After the reboot, boot from the local disk. The Fuel menu window will start.
  4. Network setup:
    1. Configure eth0 - PXE (Admin) network interface.
      Ensure the default Gateway entry is empty for the interface – the network is enclosed within the switch and has no routing outside.
      Select Apply.
      7.Net.eth0.jpg
    2. Configure eth1 – Public network interface.
      The interface is routable to LAN/internet and will be used to access the server.
      Configure static IP address, netmask and default gateway on the public network interface.
      Select Apply.
      8.Net.eth1.jpg
  5. PXE Setup
    The PXE network is enclosed within the switch.
    Do not make changes – proceed with defaults.
    Press Check button to ensure no errors are found.
    9.Net.PXE.jpg
  6. Time Sync
    Check NTP availability (e.g. ntp.org) via Time Sync tab on the left.
    Configure NTP server entries suitable for your infrastructure.
    Press Check to verify settings.9.Net.NTP.jpg
  7. Navigate to Quit Setup and select Save and Quit to proceed with the installation.
    10.Net.Save.jpg
  8. Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
    Access Fuel Web UI by http://10.7.208.53:8000. Use "admin" for both login and password.
    11.Net.END.jpg
OpenStack Environment:
Log into Fuel
  1. Open in WEB browser (for example: http://10.7.208.53:8000)
  2. Log into Fuel using "admin" for both login and password.
Creating a new OpenStack environment:
  1. Open a new environment in the Fuel dashboard. A configuration wizard will start.
    13.NewEnv.jpg
  2. Configure the new environment wizard as follows:
    • Name and Release
      • Name: TEST
      • Release: Icehouse on CentOS 6.5 (2014.1.1-5.1)
    • Deployment Mode
      • Multi-node with HA
    • Compute
      • KVM
    • Network
      • Neutron with VLAN segmentation
    • Storage Backend
      • Cinder: Default
      • Glance : Default
    • Additional Services
      • None
    • Finish
      • Click Create button
  3. When done, a new TEST environment will be created. Click on it and proceed with environment configuration.
    14.NewEnvDone.jpg
Configuring the OpenStack Environment:
Settings Tab
15.Bar.Settings.jpg
Kernel parameters
If you wish to enable iSER or SR-IOV, add this to the list of kernel parameters:  intel_iommu=on.
16.KernelParams.jpg
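A hedged way to confirm that the parameter took effect once the nodes have been deployed and rebooted (run on a compute node):

grep -o 'intel_iommu=on' /proc/cmdline    # should print intel_iommu=on
dmesg | grep -iE 'dmar|iommu' | head      # IOMMU/DMAR initialization messages should appear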
Mellanox Neutron component
To work with SR-IOV mode, select  Install Mellanox drivers and Mellanox SR-IOV plugin
17.SRIOV.jpg
Note:  The default number of supported virtual functions (VF) is 16. If you want to have more vNICs available, please contact Mellanox Support.
Configure Storage
  1. To use high performance block storage, check ISER protocol for volumes (Cinder) in the Storage section.
    18.iSER.jpg
    Note: "Cinder LVM over iSCSI for volumes" should remain checked (default).
  2. Save the settings.
Public Network Assignment
  1. Make sure Assign public network to all nodes is checked
    19.AssignPubIP.jpg
Nodes Tab
20.Bar.Nodes.jpg
Servers Discovery by Fuel
This section will assign Cloud roles to servers.
First of all, servers should be discovered by Fuel. For this to happen, make sure the servers are configured for PXE boot over Admin (PXE) network.
When done, reboot the servers and wait for them to be discovered.
Discovered nodes will be counted in the top right corner of the Fuel dashboard.
21.Discovered.Nodes.jpg
Now you may add the UNALLOCATED NODES to the setup.
Add the Controller and Storage nodes first, and then the Compute nodes.
Add Controller Nodes
  1. Click Add Node.
  2. Identify the 3 controller nodes. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
    Assign each node the Controller role.
  3. Click Apply Changes.
    22.Add.Controller.jpg
Add Storage Node
  1. Click Add Node.
  2. Identify your storage node. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
    In our example this is the only Supermicro server, so identification is easy.
    Select this node to be a Storage - Cinder LVM node.
  3. Click Apply Changes.
    23.AddStorage.jpg
Add Compute Nodes
  1. Click Add Node.
  2. Select all the nodes that are left and assign them the Compute role.
  3. Click Apply Changes.
Configure Interfaces
In this step, we will map each network to a physical interface for each node.
You can choose and configure multiple nodes in parallel. 
Fuel will not let you proceed with bulk configuration if HW differences between the selected nodes (such as the number of network ports) are detected.
In this case the Configure Interfaces button will have an error icon (see below).
24.ConfInterfaces.Error.jpg
The example below allows configuring 6 nodes in parallel. The 7th node (Supermicro storage node) will be configured separately. 
25.ConfIntByGroup.jpg
  1. In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.
  2. The Storage, Private and Management networks should run on the ConnectX-3 adapters 40/56GbE port.
    This is an example:
    26.ConfIntByGroup.2.jpg
  3. Click Back To Node List and perform network configuration for Storage Node
Note:  Port bonding is not supported when using SR-IOV over ConnectX-3 Pro adapter family.
Configure Disks
There is no need to change the defaults for Controller and Compute nodes unless you are sure changes are required.
For the Storage node it is recommended to allocate only the high-performing RAID as Cinder storage. The small disk should be allocated to the Base System.
  1. Select Storage node
    27.ChooseStrNode.jpg
  2. Press Configure Disks button
    28.CfgDskButton.jpg
  3. Click on sda disk bar, set Cinder allowed space to 0 MB and make Base System occupy the entire drive – press USE ALL ALLOWED SPACE.
    29.CfgDsk.jpg
  4. Press Apply.
Networks Tab
30.Bar.Network.jpg
Public
Note: In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.
31.PublicNet.jpg

Management
In this example, we select VLAN 2 for the management network. The CIDR is left untouched.
32.MngNet.jpg
Storage
In this example, we select VLAN 3 for the storage network. The CIDR is left untouched.
33.StrNet.jpg
Neutron L2 Configuration
In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above). 
The base MAC is left untouched.
34.NeutronL2Net.jpg
Neutron L3 Configuration:
Floating IP range : Configure it to be part of your Public network range, in this example, we select 10.7.208.62-10.7.208.76.
Internal Network:  Leave CIDR and Gateway with no changes.
Name servers:  Leave DNS servers with no changes.
35.NeutronL3Net.jpg

Save Configuration
Click Save Settings at the bottom of page
Verify Networks
Click Verify Networks.
You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log file for troubleshooting. 
36.NetVerify.jpg
Deployment
Click the Deploy Changes button, then follow the installation progress on the Nodes tab and in the logs.
37.Deploy.jpg
Health Test
  1. Click the Health Test tab.
    38.Health.jpg
  2. Check the Select All checkbox.
  3. Uncheck Platform services functional tests (image with special packages is required).
  4. Click Run Tests.
All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud.
Click the dashboard link at the top of the page.
39.END.jpg

Usernames and Passwords:
  • Fuel server Dashboard user / password: admin / admin
  • Fuel server SSH user / password: root / r00tme
  • TestVM SSH user / password: cirros / cubswin:)
  • To get controller node CLI permissions run:  # source /root/openrc

 

 

Prepare Linux VM Image for CloudX:

In order to have network and RoCE support on the VM, MLNX_OFED (2.2-1 or later) should be installed on the VM environment.

MLNX_OFED may be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers

(For CentOS/RHEL, you can use virt-manager to open an existing VM image and perform the MLNX_OFED installation.)
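A rough sketch of the in-guest installation steps; the archive name below is a placeholder and will differ per MLNX_OFED version and guest distribution:

tar xzf MLNX_OFED_LINUX-<version>-<distro>-x86_64.tgz   # use the actual package you downloaded
cd MLNX_OFED_LINUX-<version>-<distro>-x86_64
./mlnxofedinstall                                        # Mellanox-provided installer script
/etc/init.d/openibd restart                              # reload the drivers inside the guest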

 

 

Known Issues:

Issue #1: The default number of supported virtual functions (VFs), 16, is not sufficient.
  Workaround: To have more vNICs available, contact Mellanox Support.

Issue #2: Hypervisor crash on instance (VM) termination.
  Workaround: Please contact Mellanox Support.

Issue #3: 56Gb links are discovered by Fuel as 10Gb.
  Workaround: No action is required. The actual port speed is 56Gb; after deployment the ports are re-discovered as 56Gb.
