Release 2.20
June 2015
Introduction
These release notes accompany Release 2.20 of Juniper Networks Contrail. They describe new features, limitations, and known problems.
New and Changed Features
The features listed in this section are new or changed as of Contrail Release 2.20. A brief description of each new feature is included. Full documentation for each of these features is available on the main documentation site at https://techwiki.juniper.net/Documentation/Contrail.
Contrail Integration with VMware vCenter
Starting with Contrail Release 2.20, you can install Contrail to work with VMware vCenter Server, so that it operates with existing or already provisioned vSphere deployments that use VMware vCenter as the main orchestrator.
The Contrail VMware vCenter solution comprises the following main components:
- Control and management, which runs the following components as needed per Contrail system:
- A VMware vCenter Server installation that is independent and not managed by Juniper Contrail. The Contrail software provisions vCenter with Contrail components and creates the entities required to run Contrail.
- The Contrail controller, including the configuration nodes, control nodes, analytics, database, and Web UI, which is installed, provisioned, and managed by Contrail software.
- A VMware vCenter plugin provided with Contrail that typically resides on the Contrail configuration node.
- VMware ESXi virtualization platforms forming the compute cluster, with Contrail data plane (vRouter) components running inside an Ubuntu-based virtual machine. The virtual machine, named ContrailVM, forms the compute personality while performing Contrail installs. The ContrailVM is set up and provisioned by Contrail. There is one ContrailVM running on each ESXi host.
See: Contrail Integration with VMware vCenter: Installation and Contrail Integration with VMware vCenter: User Interfaces and Feature Configuration.
TOR Service Node High Availability (TSN HA)
With Contrail 2.20, high availability of TOR service nodes is supported. When top-of-rack switches are managed via OVSDB in Contrail, high availability of the TOR agent is needed. This feature provides TOR agent high availability via HAProxy. Along with this, many enhancements have been made so that the feature works at scale.
See TSN HA for more details.
Underlay Overlay Mapping using Contrail Analytics (beta)
Today’s cloud data centers consist of large collections of interconnected servers that provide computing and storage capacity to run a variety of applications. The servers are connected with redundant TOR switches, which in turn are connected to a spine layer. The cloud deployment is typically shared by multiple tenants, each of which usually needs multiple isolated networks. Multiple isolated networks can be provided by overlay networks, which are created by forming tunnels (for example, GRE, IP-in-IP, or MAC-in-MAC) over the underlay or physical network.
As data flows in the overlay network, Contrail can provide statistics and visualization of the traffic in the underlay network.
Starting with Contrail Release 2.20, you can view a variety of analytics related to underlay and overlay traffic in the Contrail Web user interface.
The following are some of the analytics that Contrail provides for statistics and visualization of overlay and underlay traffic:
- Topology Discovery - View the underlay network topology. A user interface view of the physical underlay network and the connected servers, with a current snapshot and a historical view of the topology.
- Flow Mapping - View the underlay path for a given overlay flow. Given an overlay flow, get the underlay path used for that flow and map the path in the topology view. You can also see where there are drops or high utilization on the interfaces in the selected paths.
- Traceflows
Note: By default, this feature is not enabled in the Contrail UI. To enable it, add the following lines to /etc/contrail/config.global.js and then run 'service supervisor-webui restart':
config.features = {};
config.features.disabled = [];
Ceilometer support in Contrail Cloud (beta)
With Release 2.20 of Contrail Cloud, OpenStack Ceilometer is supported on the following OpenStack release and OS combination:
- OpenStack release Juno on Ubuntu 14.04.1 LTS
The prerequisites for Ceilometer installation are:
- A Contrail Cloud installation
- Provisioned using Fabric with enable_ceilometer = True in testbed.py, or
- Provisioned through Server Manager with enable_ceilometer = True in cluster.json or the cluster configuration.
Note: Ceilometer services are installed only on the first OpenStack controller node and are not currently HA capable.
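As a sketch, the Fabric path amounts to adding a single line to testbed.py; everything else stays as in a normal Contrail Cloud install. The cluster.json form shown in the comment is an assumption to verify against your Server Manager version.

```python
# testbed.py fragment: turn on Ceilometer provisioning for the cluster.
# Everything else in testbed.py stays as in a normal Contrail Cloud install.
enable_ceilometer = True

# Server Manager alternative (format assumed): the same flag goes into
# cluster.json, for example under the cluster parameters:
#   "parameters": { "enable_ceilometer": "True" }
```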
OpenStack Nova Docker Support in Contrail
Starting with Contrail Release 2.20, it is possible to configure a compute node in a Contrail cluster to support Docker containers.
OpenStack Nova Docker containers can be used instead of virtual machines for specific use cases. DockerDriver is a driver for launching Docker containers. See: OpenStack Nova Docker Support in Contrail.
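As a rough illustration, the nova-docker project enables its driver through nova.conf on the compute node; the option value below follows the upstream nova-docker documentation and should be verified for your build:

```ini
[DEFAULT]
# Launch instances as Docker containers instead of virtual machines.
compute_driver = novadocker.virt.docker.DockerDriver
```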
Server Manager Changes and New Features
This section presents new and changed features in the Contrail Server Manager, as of Contrail Release 2.20.
In the Contrail 2.20 release, Contrail Server Manager provides the following additional functionality:
- Simplified Server Manager installation and upgrade procedures for the Ubuntu 12.04 and Ubuntu 14.04 platforms
- Provisioning of the Contrail controller in high availability mode, with orchestration (sequencing of provisioning steps) in a multi-server cluster
- Server inventory and monitoring information through the Web UI
- Support for Contrail software upgrade of target clusters from Release 2.10 to Release 2.20
- Support for the OpenStack Juno release on Ubuntu 14.04 targets
- Support for provisioning the LBaaS/SNAT, OpenStack Heat, and Ceilometer (without HA) features on target clusters
MD5 Checksum Support for BGP Peers
When configuring BGP peers on the control node, users can enable an MD5 checksum as per RFC 2385.
See: MD5 authentication for BGP sessions.
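As a hypothetical illustration, the setting is carried in the BGP router's configuration; the field names below follow the Contrail configuration schema but should be checked against this release. The address, AS number, and key value are placeholders.

```python
# Hypothetical sketch of bgp-router-parameters carrying MD5 authentication
# data, shown as a Python dict. All values are placeholders.
bgp_router_parameters = {
    "address": "10.0.0.11",            # control-node address (placeholder)
    "autonomous_system": 64512,
    "auth_data": {
        "key_type": "md5",             # MD5 checksum per RFC 2385
        "key_items": [{"key": "s3cr3t", "key_id": 0}],  # placeholder secret
    },
}
```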
Addition of Virtual Machine Interface UVE, Logical Interface UVE and Physical Interface UVE
In R2.20, new UVEs are added that are sent from contrail-vrouter-agent. These UVEs correspond to VM interfaces, logical interfaces, and physical interfaces. Prior to R2.20, this information was sent in the parent objects, namely the VM UVE and Prouter UVE.
The issue with the interface UVEs being part of the parent object was twofold:
1. A parent object had to exist before UVEs for the interfaces could be created.
2. The UVE mechanism requires the whole field (in this case, the list of interface structures) to be sent even when only one field in one interface has changed.
Both issues led to creating the interface UVEs at the top level and carrying only the keys in the parent object.
- UveVMInterfaceAgentTrace: UVE to represent a virtual-machine interface. This UVE carries all the information about the virtual-machine interface, including statistics. Before R2.20, this information was sent in the UveVirtualMachineAgentTrace and VirtualMachineStatsTrace UVEs.
- UveLogicalInterfaceAgentTrace: UVE to represent logical interface information. Before R2.20, this information was sent in UveProuterAgent.
- UvePhysicalInterfaceAgentTrace: UVE to represent physical interface information. Before R2.20, this information was sent in UveProuterAgent.
Effect of this change on network statistics
Earlier releases used the following structures for sending interface and floating IP statistics:
struct UveVirtualMachineAgent {
1: string name (key="ObjectVMTable")
...
13: optional list<VmFloatingIPStats> fip_stats_list;
...
}
struct VirtualMachineStats {
1: string name (key="ObjectVMTable")
...
3: optional list<VmFloatingIPStatSamples> fip_stats (tags=".vn,.iface_name,.ip_address")
4: optional list<VmInterfaceStats> if_stats (tags=".name")
...
}
With these changes, the statistics are moved under the VMInterface UVE, as shown below; the APIs to retrieve these statistics change accordingly.
struct UveVMInterfaceAgent {
1: string name (key="ObjectVMITable")
...
16: optional list<VmFloatingIPStats> fip_agg_stats;
17: optional list<VmFloatingIPStats> fip_diff_stats (tags=".virtual_network,.ip_address")
18: optional VmInterfaceStats if_stats (tags="vm_name,virtual_network,vm_uuid")
...
}
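For example, with the per-interface UVE now at the top level, interface statistics are retrieved from the analytics REST API under the virtual-machine-interface UVE rather than from the VM UVE. The URL shape below (default analytics port 8081 and the UVE type path segment) is an assumption to verify against your analytics node.

```python
# Hedged sketch: build the analytics-API query URL for the new
# virtual-machine-interface UVE. The UVE type segment is an assumption.
def vmi_uve_url(analytics_node, vmi_name="*"):
    """Return the UVE query URL for a virtual-machine-interface."""
    return ("http://%s:8081/analytics/uves/virtual-machine-interface/%s?flat"
            % (analytics_node, vmi_name))

print(vmi_uve_url("10.0.0.10"))
# fetch with e.g.: json.load(urllib2.urlopen(vmi_uve_url("10.0.0.10")))
```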
Supported Platforms
Contrail Release 2.20 is supported on the OpenStack Juno and Icehouse releases. Juno is supported only on Ubuntu 14.04.2. It will be supported on CentOS when CentOS 7.1 is supported; currently, Juno support on CentOS 7.1 is in beta.
Contrail networking is supported on Red Hat RHOSP 5.0, which is supported only on OpenStack Icehouse.
Contrail Release 2.20 also brings in support for VMware vCenter 5.5. vCenter support is limited to Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic) in this release.
The supported Linux distributions are the following:
- CentOS 6.5 (Linux kernel version: 2.6.32-358.el6.x86_64)
- Redhat 7/RHOSP 5.0 (Linux kernel version: 3.10.0-229.el7.x86_64)
- Ubuntu 12.04.4 (Linux kernel version: 3.13.0-34-generic)
- Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic)
Known Behavior
- Adding a rule containing Classless Inter-Domain Routing (CIDR) doesn't create a link between routing instances; therefore, no routes are exchanged. A link can be created using another policy rule or through logical routers.
- When a subnet X is added to a router, the router will route for all subnets of the virtual network to which subnet X belongs.
Example: VN1 has three subnets: 10.1.1.0/24, 11.1.1.0/24, and 12.1.1.0/24, and another virtual network, public_vn.
When a router is added to the subnet of public_vn and 10.1.1.0/24, all the routes in VN1, including 11.1.1.0/24 and 12.1.1.0/24, are routable from public_vn.
- Before upgrading a vRouter, the administrator needs to shut down the guests running on the server.
Live migration of a virtual machine is not supported unless Contrail storage is configured.
The following steps for upgrading a compute node are recommended:
- Pause all instances on the server, using the OpenStack Horizon interface.
- Upgrade the vRouter on the compute node, using fab scripts.
- Reboot.
- Restart guest virtual machines.
- Do not modify images used for launching service instances using Horizon Web UI or Glance shell commands.
- Only administrators can launch virtual machines on specific compute nodes.
- For new projects, users need to create an IPAM and network policy before creating a virtual network.
- If you are using the Internet Explorer Web browser, the minimum requirement is IE 9.0 and the recommended version is IE 10.0.
- In Contrail Release 1.10, all of the daemons gained support for reading their configuration files upon startup, in addition to reading command-line options. Command-line options from previous releases have been deprecated. The daemon names were changed to follow a common naming scheme. Every attempt is made to keep this change transparent to users. The Fabric upgrade scripts handle the configuration file migration. Tools such as contrail-status support the new daemon names.
More details of these changes can be found in the following blog articles on GitHub: Contrail daemons' configuration and Contrail process names' changes in R1.10.
Known Issues
This section lists known limitations with this release. Bug numbers are listed and can be researched in Launchpad.net.
R2.20 Upgrade Instructions
NOTE: If you are installing Contrail for the first time, refer to the full documentation and installation instructions in the Contrail Getting Started Guide at: https://techwiki.juniper.net/Documentation/Contrail/Contrail_Controller_Getting_Started_Guide/202Installation.
Upgrading Contrail Software from Release 2.0 or Greater to Release 2.20
Use the following procedure to upgrade an installation of Contrail software from one release to a more recent release. This procedure is valid starting with Contrail Release 2.0.
Instructions are given for both CentOS and Ubuntu versions. The only Ubuntu versions supported for upgrading are Ubuntu 12.04 and 14.04.2.
To upgrade Contrail software from Contrail Release 2.0 or greater:
1. Download the file contrail-install-packages-x.xx-xxx.xxx.noarch.rpm | deb from http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the /tmp directory on the config node, as follows:
CentOS: scp <id@server>:/path/to/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm /tmp
Ubuntu: scp <id@server>:/path/to/contrail-install-packages-x.xx-xx~icehouse_all.deb /tmp
Note: The variables x.xx, xxx, and so on represent the release and build numbers that are present in the names of the installation packages that you download.
2. Install the contrail-install-packages, using the correct command for your operating system:
CentOS: yum localinstall /tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm
Ubuntu: dpkg -i /tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb
3. Set up the local repository by running setup.sh:
cd /opt/contrail/contrail_packages; ./setup.sh
4. Ensure that the testbed.py that was used to set up the cluster with Contrail is intact at /opt/contrail/utils/fabfile/testbeds/.
- Ensure that testbed.py has been set up with a combined control_data section (required as of Contrail Release 1.10).
- Ensure that the do_parallel flag is set to True in testbed.py; see bug 1426522 in Launchpad.net.
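The two testbed.py requirements above can be sketched as follows; the host entries and addresses are placeholders:

```python
# Illustrative testbed.py fragment. host1/host2 would normally be the
# 'user@ip' strings defined earlier in the file.
host1 = 'root@10.84.1.1'   # placeholder
host2 = 'root@10.84.1.2'   # placeholder

# Combined control_data section (required as of Contrail Release 1.10).
control_data = {
    host1: {'ip': '192.168.10.1/24', 'gw': '192.168.10.254', 'device': 'eth1'},
    host2: {'ip': '192.168.10.2/24', 'gw': '192.168.10.254', 'device': 'eth1'},
}

# Required for upgrade; see bug 1426522 in Launchpad.net.
do_parallel = True
```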
5. Upgrade the software, using the correct set of commands to match your operating system and vrouter, as described in the following:
Change to the utils folder:
cd /opt/contrail/utils
and perform the correct upgrade procedure to match your operating system and vrouter. In the following, <from> refers to the currently installed release number, such as 2.0, 2.01, or 2.1.
CentOS Upgrade Procedure:
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm;
Ubuntu 12.04 Upgrade Procedure:
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
Ubuntu 14.04 Upgrade Procedure:
There are two different upgrade procedures for an Ubuntu 14.04 upgrade to Contrail Release 2.20, depending on which vrouter package (contrail-vrouter-3.13.0-35-generic or contrail-vrouter-dkms) is installed in the current setup.
As of Contrail Release 2.20, the recommended kernel version for an Ubuntu 14.04-based system is 3.13.0-40.
- Ubuntu 14.04 upgrade procedure for a system with contrail-vrouter-3.13.0-35-generic installed:
The command sequence upgrades the kernel version and also reboots the compute nodes when finished.
fab install_pkg_all:/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
fab migrate_compute_kernel;
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
fab upgrade_kernel_all;
fab restart_openstack_compute;
- Ubuntu 14.04 upgrade procedure for a system with the recommended version of contrail-vrouter-3.13.0-40-generic or contrail-vrouter-dkms installed:
fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
All nodes in the cluster can be upgraded to kernel version 3.13.0-40 by using the following fabric-utils command:
fab upgrade_kernel_all
- On the OpenStack node, soft reboot all of the virtual machines.
You can do this in the OpenStack dashboard, or log in to the node that has the openstack role and issue the following:
source /etc/contrail/openstackrc ; nova reboot <vm-name>
You can also use the following fab command to reboot all virtual machines:
fab reboot vm
- Check to ensure that openstack-nova-novncproxy is still running:
service openstack-nova-novncproxy status
If necessary, restart the service:
service openstack-nova-novncproxy restart
- (For the Contrail Storage option only.) Contrail Storage has its own packages.
To upgrade Contrail Storage, download the file contrail-storage-packages_x.x-xx*.deb from http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the /tmp directory on the config node, as follows:
Ubuntu: scp <id@server>:/path/to/contrail-storage-packages_x.x-xx*.deb /tmp
Note: Use only Icehouse packages (for example, contrail-storage-packages_2.0-22~icehouse_all.deb) because OpenStack Havana is no longer supported.
Use the following statement to upgrade the software:
cd /opt/contrail/utils
Ubuntu: fab upgrade_storage:<from>,/tmp/contrail-storage-packages_2.0-22~icehouse_all.deb;
When upgrading to R2.20, add the following steps if you have live migration configured. Upgrades to Release 2.0 do not require these steps. Select the command that matches your live migration configuration:
fab setup_nfs_livem
or
fab setup_nfs_livem_global
Note: To upgrade from Contrail releases prior to 2.0, refer to the Contrail release notes for that release.
Documentation Feedback
We encourage you to provide feedback, comments, and suggestions so that we can improve the documentation. You can provide feedback by using either of the following methods:
· Online feedback rating system—On any page at the Juniper Networks Technical Documentation site at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content, and use the pop-up form to provide us with information about your experience. Alternately, you can use the online feedback form at https://www.juniper.net/cgi-bin/docbugreport/.
· E-mail—Send your comments to techpubs-comments@juniper.net. Include the document or topic name, URL or page number, and software version (if applicable).
Requesting Technical Support
Technical product support is available through the Juniper Networks Technical Assistance Center (JTAC). If you are a customer with an active J-Care or JNASC support contract, or are covered under warranty, and need post-sales technical support, you can access our tools and resources online or open a case with JTAC.
· JTAC policies—For a complete understanding of our JTAC procedures and policies, review the JTAC User Guide located at http://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.
· Product warranties—For product warranty information, visit http://www.juniper.net/support/warranty/.
· JTAC hours of operation—The JTAC centers have resources available 24 hours a day, 7 days a week, 365 days a year.
Self-Help Online Tools and Resources
For quick and easy problem resolution, Juniper Networks has designed an online self-service portal called the Customer Support Center (CSC) that provides you with the following features:
· Find CSC offerings: http://www.juniper.net/customers/support/
· Search for known bugs: http://www2.juniper.net/kb/
· Find product documentation: http://www.juniper.net/techpubs/
· Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
· Download the latest versions of software and review release notes: http://www.juniper.net/customers/csc/software/
· Search technical bulletins for relevant hardware and software notifications: https://www.juniper.net/alerts/
· Join and participate in the Juniper Networks Community Forum: http://www.juniper.net/company/communities/
· Open a case online in the CSC Case Management tool: http://www.juniper.net/cm/
To verify service entitlement by product serial number, use our Serial Number Entitlement (SNE) Tool: https://tools.juniper.net/SerialNumberEntitlementSearch/
Opening a Case with JTAC
You can open a case with JTAC on the Web or by telephone.
· Use the Case Management tool in the CSC at http://www.juniper.net/cm/.
· Call 1-888-314-JTAC (1-888-314-5822 toll-free in the USA, Canada, and Mexico).
For international or direct-dial options in countries without toll-free numbers, see http://www.juniper.net/support/requesting-support.html.