Release Notes for Contrail Release 2.20

http://techwiki.juniper.net/Documentation/Contrail/Contrail_Release_Notes/090_Release_Notes_for_Contrail_Release_2.20

Release 2.20

June 2015

Introduction

These release notes accompany Release 2.20 of Juniper Networks Contrail. They describe new features, limitations, and known problems.

New and Changed Features

The features listed in this section are new or changed as of Contrail Release 2.20. A brief description of each new feature is included. Full documentation for each of these features is available on the main documentation site at  https://techwiki.juniper.net/Documentation/Contrail

Contrail Integration with VMware vCenter

Starting with Contrail Release 2.20, you can install Contrail to work with VMware vCenter Server, so that it integrates with existing or already provisioned vSphere deployments that use VMware vCenter as the main orchestrator.

The Contrail VMware vCenter solution comprises the following main components:

  1. Control and management, which runs the following components as needed per Contrail system:
    1. An independent VMware vCenter Server installation that is not managed by Juniper Contrail. The Contrail software provisions vCenter with Contrail components and creates the entities required to run Contrail.
    2. The Contrail controller, including the configuration nodes, control nodes, analytics, database, and Web UI, which is installed, provisioned, and managed by Contrail software.
    3. A VMware vCenter plugin provided with Contrail that typically resides on the Contrail configuration node.
  2. VMware ESXi virtualization platforms forming the compute cluster, with Contrail data plane (vRouter) components running inside an Ubuntu-based virtual machine. The virtual machine, named ContrailVM, forms the compute personality while performing Contrail installs. The ContrailVM is set up and provisioned by Contrail. There is one ContrailVM running on each ESXi host.

See: Contrail Integration with VMware vCenter: Installation and Contrail Integration with VMware vCenter: User Interfaces and Feature Configuration.

TOR Service Node High Availability (TSN HA)

With Contrail 2.20, high availability of TOR service nodes is supported. When top-of-rack (TOR) switches are managed via OVSDB in Contrail, high availability of the TOR agent is needed. This feature provides TOR agent high availability via HAProxy. In addition, many enhancements have been made so that this feature works at scale.

See TSN HA for more details.

Underlay Overlay Mapping using Contrail Analytics (beta)

Today’s cloud data centers consist of large collections of interconnected servers that provide computing and storage capacity to run a variety of applications. The servers are connected with redundant TOR switches, which in turn are connected to a spine layer. The cloud deployment is typically shared by multiple tenants, each of which usually needs multiple isolated networks. Multiple isolated networks can be provided by overlay networks that are created by forming tunnels (for example, GRE, IP-in-IP, MAC-in-MAC) over the underlay or physical network.

As data flows in the overlay network, Contrail can provide statistics and visualization of the traffic in the underlay network.

Starting with Contrail Release 2.20, you can view a variety of analytics related to underlay and overlay traffic in the Contrail Web user interface. 

The following are some of the analytics that Contrail provides for statistics and visualization of underlay and overlay traffic.

  • Topology Discovery - View the underlay network topology.
    A user interface view of the physical underlay network and the connected servers, with a current snapshot and a historical view of the topology.
  • Flow Mapping - View the underlay path for a given overlay flow.
    Given an overlay flow, get the underlay path used for that flow and map the path in the topology view. You can also see where there are drops or high utilization on the interfaces in the selected paths.
  • Traceflows

Note: This feature is not enabled by default in the Contrail UI. To enable it, do the following.

Add the following lines to /etc/contrail/config.global.js and then run 'service supervisor-webui restart'.

config.features = {};
config.features.disabled = [];
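For example, the change above can be scripted as follows. This is a minimal sketch: the file path and service name are taken from the note above, and CONF defaults to a temporary file so the snippet can be dry-run off-box.

```shell
# Sketch: append the feature flags to the web UI configuration.
# CONF defaults to a temp file for a dry run; on a real config node,
# set CONF=/etc/contrail/config.global.js before running.
CONF="${CONF:-$(mktemp)}"
printf '%s\n' 'config.features = {};' 'config.features.disabled = [];' >> "$CONF"
echo "feature flags appended to $CONF"
# then apply the change on the config node:
#   service supervisor-webui restart
```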

Ceilometer support in Contrail Cloud (beta)

With Release 2.20 of Contrail Cloud, OpenStack Ceilometer is supported on the following OpenStack release and OS combination.

  1. OpenStack release Juno on Ubuntu 14.04.1 LTS

Prerequisites for Ceilometer installation:

  • A Contrail Cloud installation

  • Provisioned using Fabric with enable_ceilometer = True in testbed.py, or

  • Provisioned through Server Manager with enable_ceilometer = True in cluster.json or the cluster configuration.
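As a hedged sketch of the Fabric variant: the snippet below adds the knob described above and verifies it. The real testbed.py lives under /opt/contrail/utils/fabfile/testbeds/; TESTBED defaults to a temporary file here so the snippet can be dry-run.

```shell
# Sketch: add the Ceilometer knob to testbed.py and verify it is set.
# On a real install, point TESTBED at your testbed.py under
# /opt/contrail/utils/fabfile/testbeds/ before provisioning with fab.
TESTBED="${TESTBED:-$(mktemp)}"
cat >> "$TESTBED" <<'EOF'
# enable Ceilometer provisioning (OpenStack Juno on Ubuntu 14.04.1 only)
enable_ceilometer = True
EOF
grep -n 'enable_ceilometer' "$TESTBED"
```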

Note: Ceilometer services are installed only on the first OpenStack controller node and are not currently HA capable.

See: Ceilometer support in Contrail Cloud

OpenStack Nova Docker Support in Contrail

Starting with Contrail Release 2.20, it is possible to configure a compute node in a Contrail cluster to support Docker containers.

OpenStack Nova Docker containers can be used instead of virtual machines for specific use cases. DockerDriver is a driver for launching Docker containers. See: OpenStack Nova Docker Support in Contrail.
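A hedged sketch of the resulting workflow follows. The image and flavor names are examples, not from these notes, and the glance/nova invocations assume a standard Juno client setup with the DockerDriver configured on the compute node. The commands are printed here rather than executed:

```shell
# Sketch: import a Docker image into Glance and boot it through Nova
# on a compute node configured with the DockerDriver. The commands are
# collected and printed only; run them on a provisioned cluster.
CMDS=$(cat <<'EOF'
docker pull busybox
docker save busybox | glance image-create --name busybox \
    --container-format docker --disk-format raw --is-public True
nova boot --image busybox --flavor m1.tiny docker-vm1
EOF
)
printf '%s\n' "$CMDS"
```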

Server Manager Changes and New Features

This section presents new and changed features in the Contrail Server Manager, as of Contrail Release 2.20.

In the Contrail 2.20 release, Contrail Server Manager adds the following functionality:

  • Simplified Server Manager installation and upgrade procedure for the Ubuntu 12.04 and Ubuntu 14.04 platforms
  • Provisioning of the Contrail controller in high availability mode with orchestration (sequencing of provisioning steps) in a multi-server cluster
  • Provides server inventory and monitoring information through the Web UI interface
  • Supports Contrail software upgrade for target clusters from Release 2.10 to Release 2.20
  • Supports the OpenStack Juno release for Ubuntu 14.04 targets
  • Supports LBaaS/SNAT, OpenStack Heat, and Ceilometer (without HA) feature provisioning of target clusters


See: Server Manager in Contrail

MD5 Checksum Support for BGP Peers

While configuring BGP peers on the control node, you can enable MD5 authentication as described in RFC 2385.

See: MD5 authentication for BGP sessions.

Addition of Virtual Machine Interface UVE, Logical Interface UVE and Physical Interface UVE

In R2.20, new UVEs sent from contrail-vrouter-agent are added. These UVEs correspond to virtual machine interfaces, logical interfaces, and physical interfaces. Prior to R2.20, this information was sent in the parent objects, namely the VM UVE and the Prouter UVE.

The issues with the interface UVEs being part of a parent object are twofold:
1. A parent object must exist before the UVEs for these interfaces can be created.
2. The UVE mechanism requires the whole field (in this case, the list of interface structures) to be sent even when only one field in one interface has changed.

Both of these issues led us to create interface UVEs at the top level and carry only the keys in the parent object.

  • UveVMInterfaceAgentTrace: UVE to represent a virtual machine interface. This UVE carries all the information about the virtual machine interface, including statistics. Before release R2.20, this information was sent in the UveVirtualMachineAgentTrace and VirtualMachineStatsTrace UVEs.
  • UveLogicalInterfaceAgentTrace: UVE to represent logical interface information. Before release R2.20, this information was sent in UveProuterAgent.
  • UvePhysicalInterfaceAgentTrace: UVE to represent physical interface information. Before release R2.20, this information was sent in UveProuterAgent.

Effect of this change on network statistics

Earlier, the following structures were used for sending interface and floating IP statistics.

struct UveVirtualMachineAgent {
    1: string name (key="ObjectVMTable")
    ...
    13: optional list<VmFloatingIPStats> fip_stats_list;
    ...
}

struct VirtualMachineStats {
    1: string name (key="ObjectVMTable")
    ...
    3: optional list<VmFloatingIPStatSamples> fip_stats (tags=".vn,.iface_name,.ip_address")
    4: optional list<VmInterfaceStats> if_stats (tags=".name")
    ...
}

With these changes, the statistics move under the VM interface UVE, as shown below; consequently, the APIs to retrieve these statistics change as well.

struct UveVMInterfaceAgent {
    1: string name (key="ObjectVMITable")
    ...
    16: optional list<VmFloatingIPStats> fip_agg_stats;
    17: optional list<VmFloatingIPStats> fip_diff_stats (tags=".virtual_network,.ip_address")
    18: optional VmInterfaceStats if_stats (tags="vm_name,virtual_network,vm_uuid")
    ...
}
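The new UVEs can be retrieved through the contrail-analytics-api REST interface. A hedged sketch follows: the host name, the standard analytics API port 8081, and the exact UVE table path are assumptions; list the available tables at /analytics/uves on your analytics node.

```shell
# Sketch: build the query URL for the new per-interface UVE.
# ANALYTICS defaults to a placeholder host; the curl line is commented
# out so the snippet can be dry-run without a live analytics node.
ANALYTICS="${ANALYTICS:-analytics-node}"
URL="http://${ANALYTICS}:8081/analytics/uves/vm-interface"
echo "$URL"
# on a live cluster:
#   curl -s "$URL" | python -m json.tool
```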

 

Supported Platforms

Contrail Release 2.20 is supported on the OpenStack Juno and Icehouse releases. Juno is supported only on Ubuntu 14.04.2. It will be supported on CentOS when CentOS 7.1 is supported; currently, Juno support on CentOS 7.1 is in beta.

Contrail networking is supported on Red Hat RHOSP 5.0, which is supported only on OpenStack Icehouse. 

Contrail Release 2.20 also brings in support for VMware vCenter 5.5. vCenter support is limited to Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic) in this release.

The following Linux distributions are supported:

  • CentOS 6.5 (Linux kernel version: 2.6.32-358.el6.x86_64)
  • Red Hat 7/RHOSP 5.0 (Linux kernel version: 3.10.0-229.el7.x86_64)
  • Ubuntu 12.04.4 (Linux kernel version: 3.13.0-34-generic)
  • Ubuntu 14.04.2 (Linux kernel version: 3.13.0-40-generic)

Known Behavior

  • Adding a rule containing Classless Inter-Domain Routing (CIDR) doesn't create a link between routing instances; therefore, no routes are exchanged. A link can be created using another policy rule or through logical routers.
  • When a subnet is added to a router, the router routes for all subnets of the virtual network to which the subnet belongs.
    Example: VN1 has three subnets: 10.1.1.0/24, 11.1.1.0/24, and 12.1.1.0/24, and another virtual network, public_vn. 
    When a router is added to the subnet of public_vn and 10.1.1.0/24, all the routes in VN1, including 11.1.1.0/24 and 12.1.1.0/24, are routable from public_vn.
  • Before upgrading a vRouter, the administrator needs to shut down the guest running on the server. 
    Live migration of a virtual machine is not supported unless Contrail storage is configured. 
    The following steps for upgrading a compute node are recommended:
  1. Pause all instances on the server, using the OpenStack Horizon interface.
  2. Upgrade the vRouter on the compute node, using fab scripts.
  3. Reboot.
  4. Restart guest virtual machines.
  • Do not modify images used for launching service instances using Horizon Web UI or Glance shell commands.
  • Only administrators can launch virtual machines on specific compute nodes.
  • For new projects, users need to create an IPAM and network policy before creating a virtual network.
  • If you are using the Internet Explorer Web browser, the minimum requirement is IE 9.0 and the recommended version is IE 10.0.
  • In Contrail release 1.10, all of the daemons now support reading the configuration file upon startup, in addition to reading command-line options. Command-line options of previous releases have been deprecated. The daemon names are changed to follow a common naming scheme. Every attempt is made to keep this change transparent to users. The Fabric upgrade scripts handle the configuration file migration. Tools such as contrail-status support the new daemon names.
    More details of these changes can be found in the following blog articles on GitHub: Contrail daemons' configuration and Contrail process names' changes in R1.10.

Known Issues

This section lists known limitations with this release. Bug numbers are listed and can be researched in Launchpad.net. 

Storage:
  • Creating a bootable volume from a raw image fails with errors. As a workaround, use Cinder to create a volume from the raw image, then use Nova to launch the instance from the volume.
  • When this occurs, resolve the errors by spawning only one virtual machine instance at a time.
  • Boot from volume fails after an HA failover occurs. To avoid this, the patch mentioned in the bug should be applied on all OpenStack nodes.
Contrail Networking:
  • Sometimes the login credential expires before the 1-hour timeout period.
  • In a scaled setup, a control node crash is sometimes observed.
  • The VPC API is not supported with Juno. It is planned to be supported in a subsequent release.
  • The Sandesh client TCP connection doesn't fail over immediately, which causes updates to the Analytics node to fail.
  • This happens only in cases where the SNAT instance and the destination floating IP are on the same compute node.
  • On a highly scaled setup, it takes up to 40 minutes for the API server and schema transformer to converge.
  • If a VN is deleted and added quickly, the TOR switch may go into a bad state.
  • A Docker instance doesn't learn the DNS address provided by the vRouter.
  • The analyzer doesn't work properly if the source and destination VMs coexist with the analyzer VM on the same compute node.
  • When bulk-deleting 1,000 or more ports, the API sometimes times out.
  • If the network is isolated for a long time, sometimes ZooKeeper doesn't cluster.
  • This leads the TOR to drop traffic while doing the master RE failover, leading to traffic loss.
  • Currently, the control node doesn't implement the graceful restart feature, so MAC routes are immediately withdrawn on TOR agent switchover, leading to traffic loss.
  • The root cause is unknown; however, it is expected to be the same as 1468474.
  • Duplicate VXLAN IDs are not checked for in this release.
  • Device Manager supports only one subnet for the public VN.
  • A single-node CentOS installation runs into an API server exception. More investigation is needed.
  • We observed some return ping traffic for an unknown MAC getting lost. It has not been fully root-caused.
  • The updated name is displayed correctly in the Neutron net-list and in Horizon.
  • The floating IP association model in Juno has changed. This new requirement doesn't apply to Contrail, so on stock Juno Horizon a patch is needed. This patch is included in the Horizon package included in Release 2.20.
  • The vCenter plugin's connection to the API server through HAProxy gets reset after a period of inactivity. The plugin retries, HAProxy re-establishes the connection, and the operation succeeds. More investigation is needed.
 
VXLAN routing and other features involving the MX gateway are tested with a private image. All the fixes in these releases will be released as part of Junos 14.2R4. Additionally, multiple MX gateway support requires a fix for bug 1466328. Due to this bug, the QFX TOR doesn't program the L2 route corresponding to the VRRP MAC. This bug is fixed in Junos and will be released as part of the D30 release.

R2.20 Upgrade Instructions

 

NOTE: If you are installing Contrail for the first time, refer to the full documentation and installation instructions in the Contrail Getting Started Guide at: https://techwiki.juniper.net/Documentation/Contrail/Contrail_Controller_Getting_Started_Guide/202Installation .

Upgrading Contrail Software from Release 2.0  or Greater to Release 2.20

Use the following procedure to upgrade an installation of Contrail software from one release to a more recent release. This procedure is valid for Contrail Release 2.0 and greater.

Instructions are given for both CentOS and Ubuntu versions. The only Ubuntu versions supported for upgrading are Ubuntu 12.04 and 14.04.2.

To upgrade Contrail software from Contrail Release 2.0 or greater:

  1. Download the file
    contrail-install-packages-x.xx-xxx.xxx.noarch.rpm | deb
    from: http://www.juniper.net/support/downloads/?p=contrail#sw and copy it to the /tmp directory on the config node, as follows:
    CentOS: scp <id@server>:/path/to/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm /tmp
    Ubuntu: scp <id@server>:/path/to/contrail-install-packages-x.xx-xx~icehouse_all.deb /tmp

    Note: The variables xxx.xxx and so on represent the release and build numbers that are present in the name of the installation packages that you download.

  2. Install the contrail-install-packages package, using the correct command for your operating system:

    CentOS: yum localinstall /tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm
    Ubuntu: dpkg -i /tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb

  3. Set up the local repository by running setup.sh:
    cd /opt/contrail/contrail_packages; ./setup.sh

  4. Ensure that the testbed.py that was used to set up the cluster with Contrail is intact at /opt/contrail/utils/fabfile/testbeds.
      -  Ensure that testbed.py has been set up with a combined control_data section (required as of Contrail Release 1.10).
      -  Ensure that the do_parallel flag is set to True in testbed.py; see bug 1426522 in Launchpad.net.

  5. Upgrade the software, using the correct set of commands to match your operating system and vRouter, as described in the following.

    Change to the utils folder:

    cd /opt/contrail/utils

    and perform the correct upgrade procedure to match your operating system and vRouter. In the following, <from> refers to the currently installed release number, such as 2.0, 2.01, 2.1.

CentOS Upgrade Procedure:

    fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx.xxx.noarch.rpm;

Ubuntu 12.04 Upgrade Procedure:

    fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;

Ubuntu 14.04 Upgrade Procedure:

There are two different upgrade procedures for an Ubuntu 14.04 upgrade to Contrail Release 2.20, depending on the vRouter package (contrail-vrouter-3.13.0-35-generic or contrail-vrouter-dkms) installed in the current setup.

As of Contrail Release 2.20, the recommended kernel version for an Ubuntu 14.04-based system is 3.13.0-40.

  • Ubuntu 14.04 upgrade procedure for a system with contrail-vrouter-3.13.0-35-generic installed:

    The command sequence upgrades the kernel version and also reboots the compute nodes when finished.

        fab install_pkg_all:/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
        fab migrate_compute_kernel;
        fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;
        fab upgrade_kernel_all;
        fab restart_openstack_compute;

  • Ubuntu 14.04 upgrade procedure for a system with the recommended version, contrail-vrouter-3.13.0-40-generic, or contrail-vrouter-dkms installed:

        fab upgrade_contrail:<from>,/tmp/contrail-install-packages-x.xx-xxx~icehouse_all.deb;

    All nodes in the cluster can be upgraded to kernel version 3.13.0-40 by using the following fabric-utils command:

        fab upgrade_kernel_all

  6. On the OpenStack node, soft reboot all of the virtual machines.
    You can do this in the OpenStack dashboard, or log in to the node that uses the openstack role and issue the following:
    source /etc/contrail/openstackrc ; nova reboot <vm-name>

    You can also use the following fab command to reboot all virtual machines:
    fab reboot vm
     
  7. Check to ensure the openstack-nova-novncproxy is still running:
    service openstack-nova-novncproxy status
    If necessary, restart the service:
    service openstack-nova-novncproxy restart

  8. (For the Contrail Storage option only.)
    Contrail Storage has its own packages. 
    To upgrade Contrail Storage, download the file: 
    contrail-storage-packages_x.x-xx*.deb 
    from http://www.juniper.net/support/downloads/?p=contrail#sw 
    and copy it to the /tmp directory on the config node, as follows: 
    Ubuntu: scp <id@server>:/path/to/contrail-storage-packages_x.x-xx*.deb  /tmp​
    Note: Use only Icehouse packages (for example, contrail-storage-packages_2.0-22~icehouse_all.deb)  because OpenStack Havana is no longer supported.

 

Use the following commands to upgrade the software:

    cd /opt/contrail/utils
    Ubuntu: fab upgrade_storage:<from>,/tmp/contrail-storage-packages_2.0-22~icehouse_all.deb;

When upgrading to R2.10 or later, add the following steps if you have live migration configured. Upgrades to Release 2.0 do not require these steps. Select the command that matches your live migration configuration.

fab setup_nfs_livem

or

fab setup_nfs_livem_global

Note: To upgrade from Contrail releases prior to 2.0, refer to the Contrail release notes for that version.

Documentation Feedback

 

We encourage you to provide feedback, comments, and suggestions so that we can improve the documentation. You can provide feedback by using either of the following methods:

·    Online feedback rating system—On any page at the Juniper Networks Technical Documentation site at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content, and use the pop-up form to provide us with information about your experience. Alternately, you can use the online feedback form at https://www.juniper.net/cgi-bin/docbugreport/.

·    E-mail—Send your comments to techpubs-comments@juniper.net. Include the document or topic name, URL or page number, and software version (if applicable).

Requesting Technical Support

Technical product support is available through the Juniper Networks Technical Assistance Center (JTAC). If you are a customer with an active J-Care or JNASC support contract, or are covered under warranty, and need post-sales technical support, you can access our tools and resources online or open a case with JTAC.

·    JTAC policies—For a complete understanding of our JTAC procedures and policies, review the JTAC User Guide located at http://www.juniper.net/us/en/local/pdf/resource-guides/7100059-en.pdf.

·    Product warranties—For product warranty information, visit http://www.juniper.net/support/warranty/.

·    JTAC hours of operation—The JTAC centers have resources available 24 hours a day, 7 days a week, 365 days a year.

Self-Help Online Tools and Resources


For quick and easy problem resolution, Juniper Networks has designed an online self-service portal called the Customer Support Center (CSC) that provides you with the following features:

·    Find CSC offerings: http://www.juniper.net/customers/support/

·    Search for known bugs: http://www2.juniper.net/kb/

·    Find product documentation: http://www.juniper.net/techpubs/

·    Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/

·    Download the latest versions of software and review release notes: http://www.juniper.net/customers/csc/software/

·    Search technical bulletins for relevant hardware and software notifications: https://www.juniper.net/alerts/

·    Join and participate in the Juniper Networks Community Forum: http://www.juniper.net/company/communities/

·    Open a case online in the CSC Case Management tool: http://www.juniper.net/cm/

To verify service entitlement by product serial number, use our Serial Number Entitlement (SNE) Tool:https://tools.juniper.net/SerialNumberEntitlementSearch/

Opening a Case with JTAC


You can open a case with JTAC on the Web or by telephone.

·    Use the Case Management tool in the CSC at http://www.juniper.net/cm/.

·    Call 1-888-314-JTAC (1-888-314-5822 toll-free in the USA, Canada, and Mexico).

For international or direct-dial options in countries without toll-free numbers, see http://www.juniper.net/support/requesting-support.html.
