http://docs.openstack.org/releasenotes/neutron/newton.html
9.0.0-4
Upgrade Notes
A new option ha_keepalived_state_change_server_threads has been added to configure the number of concurrent threads spawned for keepalived server connection requests. Higher values increase the CPU load on the agent nodes. The default value is half of the number of CPUs present on the node. This allows operators to tune the number of threads to suit their environment. With more threads, simultaneous state change requests for multiple HA routers can be handled faster.
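For illustration, a minimal sketch of the option, assuming it is set in the [DEFAULT] section of the L3 agent configuration file (l3_agent.ini); the value shown is only an example:

    [DEFAULT]
    # Number of concurrent threads handling keepalived state change
    # connection requests (illustrative value).
    ha_keepalived_state_change_server_threads = 4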
9.0.0
Add options to the designate external DNS driver of Neutron for SSL-based connections. This makes it possible to use Neutron with designate in scenarios where endpoints are SSL-based. Users can choose to skip certificate validation or specify the path to a valid certificate in the [designate] section of the neutron.conf file.
Call the dhcp_release6 command line utility when releasing unused IPv6 leases for DHCPv6 stateful subnets. dhcp_release6 first appeared in dnsmasq 2.76.
Add ip_allocation attribute to port resources
The default value for ‘external_network_bridge’ in the L3 agent is now ‘’.
Prior to Newton, the neutron-openvswitch-agent used the ‘ovs-ofctl’ of_interface driver by default. In Newton, ‘of_interface’ defaults to ‘native’. This mostly eliminates spawning ovs-ofctl and improves performance a little.
Properly calculate overlay (tunnel) protocol overhead for environments using IPv4 or IPv6 endpoints. The ML2 plug-in configuration file contains a new configuration option, ‘overlay_ip_version’, in the ‘[ml2]’ section that indicates the IP version of all overlay network endpoints. Use ‘4’ for IPv4 and ‘6’ for IPv6. Defaults to ‘4’. Additionally, all layer-2 agents must use the same IP version for endpoints.
Prior to Newton, the default option for ‘ovsdb_interface’ was ‘vsctl’. In Newton, ‘ovsdb_interface’ defaults to ‘native’. This change switches communication with OVSDB from the ovs-vsctl tool to the Open vSwitch Python API to improve out-of-the-box performance for typical deployments.
The internal pluggable IPAM implementation – added in the Liberty release – is now the default for both old and new deployments. Old deployments are unconditionally switched to pluggable IPAM during upgrade. Old non-pluggable IPAM is deprecated and removed from code base.
Remove the ‘quota_items’ configuration option from the neutron.conf file. This option had been deprecated since the Liberty release and has no effect now.
Remove the ‘router_id’ configuration option from the l3_agent.ini file. The ‘router_id’ option was defined to associate an L3 agent with a specific router when use_namespaces=False. It was deprecated after use_namespaces was removed in the Mitaka release.
The created_at and updated_at fields available on Neutron resources now include a timezone indicator at the end. Because this is a change in format, the old ‘timestamp_core’ extension has been removed and replaced with a ‘timestamp’ extension.
The “vlan-aware-vms” feature allows Nova users to launch VMs on a single port (trunk parent port) that connects multiple Neutron logical networks together.
New Features
Two new options have been added to the [designate] section to support SSL; see the example below.
The first option, insecure, allows skipping SSL validation when creating a keystone session to initiate a designate client. The default value is False, which means the connection is always verified.
The second option, ca_cert, allows setting the path to a valid certificate file. The default is None.
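For illustration, a minimal sketch of the [designate] section in neutron.conf; the certificate path is a hypothetical example:

    [designate]
    # Keep the default behaviour of verifying the SSL connection.
    insecure = False
    # Path to a valid certificate file (example path).
    ca_cert = /etc/neutron/designate-ca.crt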
SR-IOV now supports egress minimum bandwidth configuration.
The port resource now has an ip_allocation attribute. The value of this attribute will be set to ‘immediate’, ‘deferred’, or ‘none’ at the time the port is created. It will not be changed when the port is updated. ‘immediate’ means that the port is expected to have an IP address and Neutron attempted IP allocation on port creation. ‘deferred’ means that the port is expected to have an IP address but Neutron deferred IP allocation until a port update provides the host to which the port will be bound. ‘none’ means that the port was created explicitly with no addresses by passing [] in fixed_ips when creating it.
Subnets now have a new property ‘service_types’. This is a list of port device owners, such that only ports with a matching device owner will be given an IP from this subnet. If no matching service subnet exists for the given device owner, or no service subnets have been defined on the network, the port will be assigned an IP from a subnet with no service-types. This preserves backwards compatibility with older deployments.
The net-mtu extension now recalculates network MTU on each network access, not just on creation. This allows operators to tweak MTU-related configuration options and see them applied to all network resources, both old and new, right after a controller restart.
The new l2_adjacency extension adds an l2_adjacency field to the network, to indicate whether or not there is guaranteed L2 adjacency between the ports on that Network. Routed network implementations would typically set l2_adjacency to False.
The neutron L3 agent now has the ability to load agent extensions, which allows other services to integrate without additional agent changes. An API for exposing the l3 agent’s router info data to the extensions is also provided so that extensions can remain consistent with router state.
Neutron switched to using the oslo.cache library to cache port state in the metadata agent. With it, more caching backends are now available, including Memcached and Mongo. More details can be found in the oslo.cache documentation.
The Networking API now supports the ‘project_id’ field in requests and responses, for compatibility with the Identity (Keystone) API V3. A new API extension, ‘project-id’, has been added to allow API users to detect if the ‘project_id’ field is supported. Note that the ‘tenant_id’ field is still supported, and the two fields are functionally equivalent.
Users can now apply a QoS rule to a port or network to set up the minimum egress bandwidth per queue and port. The minimum egress bandwidth rule is applied to each port individually.
New API extensions, ‘sorting’ and ‘pagination’, have been added to allow API users to detect if sorting and pagination features are enabled. These features are controlled by allow_sorting and allow_pagination configuration options.
The feature “vlan-aware-vms” is available. To enable it, a service plugin named ‘trunk’ must be added to the option service_plugins in your neutron.conf, as shown in the sketch below. The plugin exposes two new extensions trunk and trunk_details. The plugin can work with multiple backends and, in particular, Neutron has support for ML2/openvswitch and ML2/linuxbridge. Even though Neutron API compatibility should be preserved for ports associated to trunks, since this is the first release where the feature is available, it is reasonable to expect possible functionality gaps for one or both drivers. These will be filled over time as they are reported. The CLI is available via openstackclient, and python-neutronclient 5.1.0 or above. For more details, please check the networking guide.
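A minimal sketch of the neutron.conf change, assuming the router plugin is already enabled; keep whatever service plugins you already have and append ‘trunk’:

    [DEFAULT]
    # Existing plugins plus the new trunk service plugin.
    service_plugins = router,trunk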
Known Issues
Absence of dhcp_release6 when DHCPv6 stateful addressing is in use may lead to bug 1521666. Neutron supports dhcp_release6 now, but if the tool is not available this leads to increased log warnings. Read bug report 1622002 for more details.
Upgrade Notes
A version of dnsmasq that includes dhcp_release6 should be installed on systems running the DHCP agent. Failure to do this could cause DHCPv6 stateful addressing to not function properly.
The rootwrap filters file dhcp.filters must be updated to include dhcp_release6, otherwise trying to run the utility will result in a NoFilterMatched exception.
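A sketch of the required entry, assuming the standard rootwrap CommandFilter syntax used elsewhere in dhcp.filters:

    [Filters]
    # Allow the DHCP agent to run dhcp_release6 as root.
    dhcp_release6: CommandFilter, dhcp_release6, root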
All existing ports are considered to have ‘immediate’ IP allocation. Any ports that do not have this attribute should also be considered to have immediate IP allocation.
A new table ‘subnet_service_types’ has been added to support the subnet service_types feature. It uses the ID field from the ‘subnets’ table as a foreign key.
The default value for ‘external_network_bridge’ has been changed to ‘’ since that is the preferred way to configure the L3 agent and will be the only way in future releases. If you have not explicitly set this value and you use the L3 agent, you will need to set this value to ‘br-ex’ to match the old default. If you are using ‘br-ex’, you should switch to ‘’, ensure your external network has a flat segment and ensure your L2 agent has a bridge_mapping entry between the external network’s flat segment physnet and ‘br-ex’ to get the same connectivity. If the external network did not already have the flat segment, you will need to detach all routers from the external networks, delete the incorrect segment type, add the flat segment, and re-attach the routers.
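A minimal sketch of the recommended setup, assuming an Open vSwitch L2 agent; the physnet name ‘public’ and bridge name ‘br-ex’ are illustrative:

    # l3_agent.ini
    [DEFAULT]
    external_network_bridge =

    # openvswitch_agent.ini
    [ovs]
    # Map the external network's flat segment physnet to the external bridge.
    bridge_mappings = public:br-ex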
The configuration option dhcp_lease_time was deprecated in the Havana cycle. This option is no longer supported. The option was replaced by dhcp_lease_duration.
The configuration option dnsmasq_dns_server was deprecated in the Kilo cycle. This value is no longer supported.
API sorting and pagination features are now enabled by default.
Existing networks with MTU values that don’t reflect configuration will receive new MTU values after controller upgrade. Note that to propagate new correct MTU values to your backend, you may need to resync all agents that set up ports, as well as re-attach VIFs to affected instances.
To retain the old default for neutron-openvswitch-agent, use ‘of_interface = ovs-ofctl’ in the ‘[ovs]’ section of your openvswitch agent configuration file.
By default, the native interface will have the Ryu controller listen on 127.0.0.1:6633. The listen address can be configured with of_listen_address and of_listen_port options. Ensure that the controller has permission to listen at the configured address.
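A sketch of the relevant ‘[ovs]’ settings in the openvswitch agent configuration file; the listen address and port shown are the defaults mentioned above:

    [ovs]
    # Uncomment to keep the pre-Newton behaviour:
    # of_interface = ovs-ofctl
    of_interface = native
    of_listen_address = 127.0.0.1
    of_listen_port = 6633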
Define the ‘overlay_ip_version’ option and value appropriate for the environment. Only required if not using the default of ‘4’.
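For example, an environment using IPv6 tunnel endpoints would set the following in ml2_conf.ini:

    [ml2]
    # IP version of all overlay (tunnel) network endpoints; defaults to 4.
    overlay_ip_version = 6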
To keep the old default value use ‘ovsdb_interface = vsctl’ in ‘[ovs]’ section of openvswitch_agent.ini (common path ‘/etc/neutron/plugins/ml2/openvswitch_agent.ini’) if there is a separate openvswitch agent configuration file; otherwise apply changes mentioned above to ml2_conf.ini (common path ‘/etc/neutron/plugins/ml2/ml2_conf.ini’).
The native interface configures ovsdb-server to listen for connections on 127.0.0.1:6640 by default. The address can be configured with the ovsdb_connection config option. Ensure that ovsdb-server has permissions to listen on the configured address.
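A sketch of the corresponding ‘[ovs]’ settings; the connection string shown is the default mentioned above:

    [ovs]
    # Uncomment to keep the pre-Newton behaviour:
    # ovsdb_interface = vsctl
    ovsdb_interface = native
    ovsdb_connection = tcp:127.0.0.1:6640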
During the upgrade, the ‘internal’ IPAM driver becomes the default for the ‘ipam_driver’ config option, and data is migrated to the new tables using an alembic migration.
The network_device_mtu option is removed. Existing users of the option are advised to adopt new configuration options to accommodate their underlying physical infrastructure. The relevant options are global_physnet_mtu for all plugins, and also path_mtu and physical_network_mtus for ML2; see the sketch below.
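A minimal sketch of the replacement options, with illustrative values:

    # neutron.conf
    [DEFAULT]
    # MTU of the underlying physical network, used by all plugins.
    global_physnet_mtu = 9000

    # ml2_conf.ini (ML2 only)
    [ml2]
    # Maximum MTU on the path to overlay tunnel endpoints.
    path_mtu = 9000
    # Optional per-physnet overrides when they differ from global_physnet_mtu.
    physical_network_mtus = physnet1:1500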
Remove ‘quota_items’ configuration option from neutron.conf file.
Remove ‘router_id’ configuration option from the l3_agent.ini file.
The configuration options for default_ipv4_subnet_pool and default_ipv6_subnet_pool have been removed. Please use the is_default option of the create/update subnetpool API instead.
The tenant_id column has been renamed to project_id. This database migration must be applied as an offline migration.
The ‘timestamp_core’ extension has been removed and replaced with the ‘standard-attr-timestamp’ extension. Objects will still have timestamps in the ‘created_at’ and ‘updated_at’ fields, but they will have the timestamp appended to the end of them to be consistent with other OpenStack projects.
Deprecation Notes
The allow_sorting and allow_pagination configuration options are deprecated and will be removed in a future release.
The Neutron controller service currently allows loading service_providers options from some files that are not passed to it via the --config-dir or --config-file CLI options. This behaviour is now deprecated and will be disabled in Ocata. Current users are advised to switch to the aforementioned CLI options.
The option min_l3_agents_per_router is deprecated and will be removed for the Ocata release where the scheduling of new HA routers will always be allowed.
The ‘supported_pci_vendor_devs’ option is deprecated in Newton and will be removed in Ocata. The validation of supported pci vendors is done in nova-scheduler through the pci_passthrough_whitelist option when it selects a suitable hypervisor, hence the option is considered redundant.
The cache_url configuration option is deprecated as of Newton, and will be removed in Ocata. Please configure the metadata cache using the [cache] group, setting enabled = True and configuring your backend.
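A sketch of such a [cache] section in the configuration file read by the metadata agent; the backend name and server address are illustrative, and the full option set is described in the oslo.cache documentation:

    [cache]
    enabled = True
    backend = dogpile.cache.memcached
    memcache_servers = 127.0.0.1:11211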
The non-pluggable IPAM implementation is deprecated and will be removed in the Newton release cycle.
Security Issues
When working with the ML2/openvswitch driver, the “vlan-aware-vms” feature has the following limitations:
security groups do not work in conjunction with the iptables-based firewall driver.
if security groups are desired, the use of the stateful OVS firewall is required; however, that prevents the use of the DPDK datapath for OVS versions 2.5 or lower.
Bug Fixes
In order to fix communication issues between SR-IOV instances and regular instances, the FDB population extension has been added to the OVS or Linuxbridge agent. The cause was that messages from an instance with an SR-IOV direct port to instances with normal ports located on the same hypervisor were sent directly to the wire because the FDB table was not yet updated. The FDB population extension tracks instance boot/delete operations using the handle_port and delete_port extension interface messages and updates the hypervisor’s FDB table accordingly. Please note this L2 agent extension doesn’t support the allowed address pairs extension.
Allow the SR-IOV agent to run with 0 VFs.
Bug 1561200 has been fixed by including the timezone with Neutron ‘created_at’ and ‘updated_at’ fields.
Other Notes
In order to use the QoS egress minimum bandwidth limit feature, ‘ip link’ must support the extended VF management parameter min_tx_rate. The minimum version of iproute2 supporting this parameter is iproute2-ss140804, git tag v3.16.0.
The value of the ‘overlay_ip_version’ option determines whether 20 bytes (IPv4) or 40 bytes (IPv6) are added when calculating the total tunnel overhead.
At the time of writing, Neutron bandwidth booking is not integrated with the Compute scheduler, which means that minimum bandwidth is not guaranteed but provided on a best-effort basis.
8.1.0
Support configuration of greenthreads pool for WSGI.
The Neutron server no longer needs to be configured with a firewall driver and it can support mixed environments of hybrid iptables firewalls and the pure OVS firewall.
Support for IPv6 addresses as tunnel endpoints in OVS.
OFAgent has been removed in the Newton cycle.
By default, the QoS driver for the Open vSwitch and Linuxbridge agents calculates the burst value as 80% of the available bandwidth.
New Features
Return code for quota delete for a tenant whose quota has not been previously defined has been changed from 204 to 404.
The Neutron server now learns the appropriate firewall wiring behavior from each OVS agent so it no longer needs to be configured with the firewall_driver. This means it also supports multiple agents with different types of firewalls.
The local_ip value in ml2_conf.ini can now be set to an IPv6 address configured on the system.
Upgrade Notes
OSprofiler support was introduced. To allow its usage, the api-paste.ini file needs to be modified to contain the osprofiler middleware. Also, a [profiler] section needs to be added to the neutron.conf file with the enabled, hmac_keys and trace_sqlalchemy flags defined.
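A sketch of the [profiler] section; the HMAC key is a placeholder:

    [profiler]
    enabled = True
    hmac_keys = SECRET_KEY
    trace_sqlalchemy = False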
In case you rely on the default ML2 path_mtu value of 1500 to cap MTU used for new network resources, please set it explicitly in your ml2_conf.ini file.
Deprecation Notes
The ‘advertise_mtu’ option is deprecated and will be removed in Ocata. There should be no use case to disable the feature, hence the option is considered redundant. DHCP and L3 agents will continue advertising MTU values to instances. Other plugins not using those agents are also encouraged to advertise MTU to instances. The actual implementation of MTU advertisement depends on the plugin in use, but it’s assumed that at least the DHCP option for IPv4 clients and Router Advertisements for IPv6 clients are supported.
The tool neutron-debug is now deprecated, to be replaced with a new set of troubleshooting and diagnostic tools. There is no plan for removal in the immediate term, and it will not be removed until comparable tools are adequate enough to supplant neutron-debug altogether. For more information, please see https://blueprints.launchpad.net/neutron/+spec/troubleshooting
The option [AGENT] prevent_arp_spoofing has been deprecated and will be removed in the Ocata release. ARP spoofing protection should always be enabled unless it is explicitly disabled via the port security extension in the API. The primary reason it was a config option is that it was merged at the end of the Kilo development cycle, so it was not considered stable. It has been enabled by default since Liberty, is considered stable, and there is no reason to keep it configurable.
Security Issues
OSprofiler support requires passing trace information between various OpenStack services. This information is securely signed with one of the HMAC keys defined in the neutron.conf configuration file. To allow cross-project tracing, users should use a key that is common among all the OpenStack services they want to trace.
Bug Fixes
Missing OSprofiler support was added. This cross-project profiling library makes it possible to trace various OpenStack requests through all OpenStack services that support it. To initiate OpenStack request tracing, the --profile <HMAC_KEY> option needs to be added to the CLI command. This key needs to be one of the secret keys defined with the hmac_keys option under the [profiler] configuration section of the neutron.conf configuration file. To enable or disable Neutron profiling, the enabled option under the same section needs to be set to either True or False. By default Neutron will trace all API and RPC requests, but it is also possible to trace DB requests by setting the trace_sqlalchemy option to True. As a prerequisite, the OSprofiler library and its storage backend need to be installed in the environment. If so (and if profiling is enabled in neutron.conf), a trace can be generated via the command: $ neutron --profile SECRET_KEY <subcommand>. At the end of the output there will be a message with a <trace_id>; to plot nice HTML graphs, the following command should be used: $ osprofiler trace show <trace_id> --html --out result.html
The default value for ML2 path_mtu option is changed from 1500 to 0, effectively disabling its participation in network MTU calculation unless it’s overridden in the ml2_conf.ini configuration file.
Fixes bug 1572670
Other Notes
Operators may want to tune the max_overflow and wsgi_default_pool_size configuration options according to the investigations outlined in this mailing list post. The default value of wsgi_default_pool_size inherits from that of oslo.config, which is currently 100. This is a change in default from the previous Neutron-specific value of 1000.
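A sketch of where these options can be tuned, assuming wsgi_default_pool_size lives in the [DEFAULT] section of neutron.conf and max_overflow in its [database] section; the values are illustrative:

    [DEFAULT]
    # Size of the greenthread pool used by the WSGI server.
    wsgi_default_pool_size = 100

    [database]
    # Additional SQLAlchemy connections allowed beyond the pool size.
    max_overflow = 50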
Requires OVS version 2.5 or higher with Linux kernel 4.3 or higher. More info at the OVS GitHub page.
The OpenFlow Agent (OFAgent) mechanism driver and its agent have been removed in favor of the OpenvSwitch mechanism driver with “native” of_interface in the Newton cycle.
8.0.0
The ML2 plug-in supports calculating the MTU for instances using overlay networks by subtracting the overlay protocol overhead from the value of ‘path_mtu’, ideally the physical (underlying) network MTU, and providing the smaller value to instances via DHCP. Prior to Mitaka, ‘path_mtu’ defaults to 0 which disables this feature. In Mitaka, ‘path_mtu’ defaults to 1500, a typical MTU for physical networks, to improve the “out of box” experience for typical deployments.
The ML2 plug-in supports calculating the MTU for networks that are realized as flat or VLAN networks, by consulting the ‘segment_mtu’ option. Prior to Mitaka, ‘segment_mtu’ defaults to 0 which disables this feature. This creates slightly confusing API results when querying Neutron networks, since the plugins that support the MTU API extension would return networks with the MTU equal to zero. Networks with an MTU of zero make little sense, since nothing could ever be transmitted. In Mitaka, ‘segment_mtu’ now defaults to 1500 which is the standard MTU for Ethernet networks in order to improve the “out of box” experience for typical deployments.
The LinuxBridge agent now supports QoS bandwidth limiting.
External networks can now be controlled using the RBAC framework that was added in Liberty. This allows networks to be made available to specific tenants (as opposed to all tenants) to be used as an external gateway for routers and floating IPs.
DHCP and L3 Agent scheduling is availability zone aware.
The “get-me-a-network” feature simplifies the process for launching an instance with basic network connectivity (via an externally connected private tenant network).
Support integration with external DNS service.
Add popular IP protocols to the security group code. End-users can specify protocol names instead of protocol numbers in both RESTful API and python-neutronclient CLI.
ML2: ports can now recover from binding failed state.
RBAC support for QoS policies
Add description field to security group rules, networks, ports, routers, floating IPs, and subnet pools.
Add tag mechanism for network resources
Timestamp fields have been added to neutron core resources.
Announcement of tenant prefixes and host routes for floating IPs via BGP is supported.
Allowed address pairs can now be cleared by passing None in addition to an empty list. This is to make it possible to use the --action=clear option with the neutron client: neutron port-update --allowed-address-pairs action=clear
Core configuration files are automatically generated.
max_fixed_ips_per_port has been deprecated and will be removed in the Newton or Ocata cycle, depending on when all identified use cases of the option are satisfied via another quota system.
OFAgent is decomposed and deprecated in the Mitaka cycle.
Add new VNIC type for SR-IOV physical functions.
A new rule has been added to the API that allows for tagging traffic with DSCP values. This is currently supported by the Open vSwitch QoS driver.
High Availability (HA) of SNAT service is supported for Distributed Virtual Routers (DVRs).
An OVS agent configured to run in DVR mode will fail to start if it cannot get proper DVR configuration values from the server on start-up. The agent will no longer fallback to non-DVR mode, since it may lead to inconsistency in the DVR-enabled cluster as the Neutron server does not distinguish between DVR and non-DVR OVS agents.
Improve DVR’s resiliency during Nova VM live migration events.
The Linuxbridge agent now supports L2 agent extensions.
Add the MacVtap ML2 driver and L2 agent as a new vswitch choice.
Support for MTU selection and advertisement.
Neutron now provides network IP availability information.
Neutron is integrated with Guru Meditation Reports library.
Schedule networks on DHCP agents with access to the network.
oslo.messaging.notify.drivers entry points are deprecated
Several NICs per physical network can be used with SR-IOV.
New Features
In Mitaka, the combination of ‘path_mtu’ defaulting to 1500 and ‘advertise_mtu’ defaulting to True provides a value of MTU accounting for any overlay protocol overhead on the network to instances using DHCP. For example, an instance attaching to a VXLAN network receives a 1450 MTU from DHCP accounting for 50 bytes of overhead from the VXLAN overlay protocol if using IPv4 endpoints.
In Mitaka, queries to the Networking API for network objects will now return network objects that contain a sane MTU value.
The LinuxBridge agent can now configure basic bandwidth limiting QoS rules set for ports and networks. It introduces two new config options for the LinuxBridge agent; see the sketch below. The first is the ‘kernel_hz’ option, which is the value of the host kernel HZ setting. It is necessary for proper calculation of the minimum burst value in the tbf qdisc setting. The second is ‘tbf_latency’, which is the latency value to be configured in the tc-tbf setting. Details about this option can be found in the tc-tbf manual.
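A sketch with illustrative values; the section placement inside the LinuxBridge agent configuration file is an assumption, so check the sample linuxbridge_agent.ini for your release:

    [qos]
    # Host kernel HZ value, used to calculate the minimum tbf burst.
    kernel_hz = 250
    # Latency value passed to the tc-tbf qdisc.
    tbf_latency = 50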
External networks can now be controlled using the RBAC framework that was added in Liberty. This allows networks to be made available to specific tenants (as opposed to all tenants) to be used as an external gateway for routers and floating IPs. By default this feature will also allow regular tenants to make their networks available as external networks to other individual tenants (or even themselves), but they are prevented from using the wildcard to share to all tenants. This behavior can be adjusted via policy.json by the operator if desired.
A DHCP agent is assigned to an availability zone; the network will be hosted by the DHCP agent with availability zone specified by the user.
An L3 agent is assigned to an availability zone; the router will be hosted by the L3 agent with availability zone specified by the user. This supports the use of availability zones with HA routers. DVR isn’t supported now because L3HA and DVR integration isn’t finished.
Once Nova takes advantage of the “get-me-a-network” feature, a user can launch an instance without explicitly provisioning network resources.
Floating IPs can have dns_name and dns_domain attributes associated with them
Ports can have a dns_name attribute associated with them. The network where a port is created can have a dns_domain associated with it
Floating IPs and ports will be published in an external DNS service if they have dns_name and dns_domain attributes associated with them.
The reference driver integrates neutron with designate
Drivers for other DNSaaS can be implemented
The driver is configured in the [DEFAULT] section of neutron.conf using the parameter ‘external_dns_driver’; see the sketch below.
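A sketch of the neutron.conf setting for the reference designate driver; connection details for designate itself belong in the [designate] section and are omitted here:

    [DEFAULT]
    # Load the reference external DNS driver that integrates with designate.
    external_dns_driver = designate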
Ports that failed to bind when an L2 agent was offline can now recover after the agent is back online.
Neutron now supports sharing of QoS policies between a subset of tenants.
Security group rules, networks, ports, routers, floating IPs, and subnet pools may now contain an optional description which allows users to easily store details about entities.
Users can set tags on their network resources.
Networks can be filtered by tags. The supported filters are ‘tags’, ‘tags-any’, ‘not-tags’ and ‘not-tags-any’.
Add the timestamp fields created_at and updated_at to Neutron core resources, for example networks, subnets, ports and subnetpools.
These resources can now be queried by changed-since, which returns the resources changed after a specific time string like YYYY-MM-DDTHH:MM:SS.
By default, the DHCP agent provides a network MTU value to instances using the corresponding DHCP option if the core plugin calculates the value. For the ML2 plugin, the calculation mechanism is enabled by setting the [ml2] path_mtu option to a value greater than zero.
Allow non-admin users to define “external” extra-routes.
Announcement of tenant subnets via BGP using centralized Neutron router gateway port as the next-hop
Announcement of floating IP host routes via BGP using the centralized Neutron router gateway port as the next-hop
Announcement of floating IP host routes via BGP using the floating IP agent gateway as the next-hop when the floating IP is associated through a distributed router
Neutron no longer includes static example configuration files. Instead, use tools/generate_config_file_samples.sh to generate them. The files are generated with a .sample extension.
Add derived attributes to the network to tell users which address scopes the network is in.
The subnet API now includes a new use_default_subnetpool attribute. This attribute can be specified on creating a subnet in lieu of a subnetpool_id. The two are mutually exclusive. If it is specified as True, the default subnet pool for the requested ip_version will be looked up and used. If no default exists, an error will be returned.
Neutron now supports creation of ports for exposing physical functions as network devices to guests.
Neutron can apply a QoS rule to ports that mark outgoing traffic’s type of service packet header field.
The Open vSwitch Neutron agent has been extended to mark the Type of Service IP header field of packets egressing from the VM when the QoS rule has been applied.
High Availability support for SNAT services on Distributed Virtual Routers. Routers can now be created with the flags distributed=True and ha=True. The created routers will provide Distributed Virtual Routing as well as SNAT high availability on the l3 agents configured for dvr_snat mode.
Use the value of the network ‘mtu’ attribute for the MTU of virtual network interfaces such as veth pairs, patch ports, and tap devices involving a particular network.
Enable end-to-end support for arbitrary MTUs including jumbo frames between instances and provider networks by moving MTU disparities between flat or VLAN networks and overlay networks from layer-2 devices to layer-3 devices that support path MTU discovery (PMTUD).
The Linuxbridge agent can now be extended by 3rd parties using a pluggable mechanism.
Libvirt qemu/kvm instances can now be attached via MacVtap in bridge mode to a network. VLAN and FLAT attachments are supported. Attachments other than compute are not supported.
When advertise_mtu is set in the config, Neutron supports advertising the LinkMTU using Router Advertisements.
A new API endpoint, /v2.0/network-ip-availabilities, is available that allows an admin to quickly get counts of used_ips and total_ips for one or more networks. The new endpoint allows filtering by network_id, network_name, tenant_id, and ip_version. The response returns network and nested subnet data that includes used and total IPs.
SriovNicSwitchMechanismDriver driver now exposes a new VIF type ‘hostdev_physical’ for ports with vnic type ‘direct-physical’ (used for SR-IOV PF passthrough). This will enable Nova to provision PFs as Neutron ports.
The RPC and notification queues have been separated into different queues. Specify the transport_url to be used for notifications within the [oslo_messaging_notifications] section of the configuration file. If no transport_url is specified in [oslo_messaging_notifications], the transport_url used for RPC will be used.
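A sketch with a placeholder broker URL:

    [oslo_messaging_notifications]
    # Broker used only for notifications; RPC keeps using the main transport.
    transport_url = rabbit://user:password@notification-host:5672/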
Neutron services now respond to the SIGUSR2 signal by dumping valuable debug information to standard error output.
A new security groups firewall driver has been introduced. It is based on OpenFlow and uses connection tracking.
DHCP schedulers use the “filter_host_with_network_access” plugin method to filter hosts with access to the DHCP network. Plugins can override it to define their own filtering logic. In particular, the ML2 plugin delegates the filtering to mechanism drivers.
Neutron can interact with keystone v3.
Known Issues
The combination of ‘path_mtu’ and ‘advertise_mtu’ only adjusts the MTU for instances rather than all virtual network components between instances and provider/public networks. In particular, setting ‘path_mtu’ to a value greater than 1500 can cause packet loss even if the physical network supports it. Also, the calculation does not consider additional overhead from IPv6 endpoints.
When using DVR, if a floating IP is associated to a fixed IP direct access to the fixed IP is not possible when traffic is sent from outside of a Neutron tenant network (north-south traffic). Traffic sent between tenant networks (east-west traffic) is not affected. When using a distributed router, the floating IP will mask the fixed IP making it inaccessible, even though the tenant subnet is being announced as accessible through the centralized SNAT router. In such a case, traffic sent to the instance should be directed to the floating IP. This is a limitation of the Neutron L3 agent when using DVR and will be addressed in a future release.
Only creation of DVR/HA routers is currently supported. Upgrading other types of routers to DVR/HA routers is not supported in this release.
More synchronization between Nova and Neutron is needed to properly handle live migration failures on either side. For instance, if live migration is reverted or canceled, some dangling Neutron resources may be left on the destination host.
To ensure any kind of migration works between all compute nodes, make sure that the same physical_interface_mappings is configured on each MacVtap compute node. Having different mappings could cause live migration to fail (if the configured physical network interface does not exist on the target host), or even worse, result in an instance placed on the wrong physical network (if the physical network interface exists on the target host, but is used by another physical network or not used at all by OpenStack). Such an instance does not have access to its configured networks anymore. It then has layer 2 connectivity to either another OpenStack network or one of the host's networks.
The OVS firewall driver doesn’t work well with other features using OpenFlow.
Upgrade Notes
Operators using the ML2 plug-in with ‘path_mtu’ defaulting to 0 may need to perform a database migration to update the MTU for existing networks and possibly disable existing workarounds for MTU problems such as increasing the physical network MTU to 1550.
Operators using the ML2 plug-in with existing data may need to perform a database migration to update the MTU for existing networks
Add popular IP protocols to security group code.
To disable, use [DEFAULT] advertise_mtu = False.
The router_id option is deprecated and will be removed in the ‘N’ cycle.
Does not change MTU for existing virtual network interfaces.
Actions that create virtual network interfaces on an existing network with the ‘mtu’ attribute containing a value greater than zero could cause issues for network traffic traversing existing and new virtual network interfaces.
The Hyper-V Neutron Agent has been fully decomposed from Neutron. The neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver firewall driver has been deprecated and will be removed in the ‘O’ cycle. Update the neutron_hyperv_agent.conf files on the Hyper-V nodes to use hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver, which is the networking_hyperv security groups driver.
When using ML2 and the Linux Bridge agent, the default value for the ARP Responder under L2Population has changed. The responder is now disabled to improve compatibility with the allowed-address-pair extension and to match the default behavior of the ML2 OVS agent. The logical network will now utilize traditional flood and learn through the overlay. When upgrading, existing vxlan devices will retain their old setup and be unimpacted by changes to this flag. To apply this to older devices created with the Liberty agent, the vxlan device must be removed and then the Mitaka agent restarted. The agent will recreate the vxlan devices with the current settings upon restart. To maintain pre-Mitaka behavior, enable the arp_responder in the Linux Bridge agent VXLAN config file prior to starting the updated agent.
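A sketch of re-enabling the responder, assuming the option lives in the VXLAN section of the Linux Bridge agent configuration file:

    [vxlan]
    # Restore the pre-Mitaka behaviour of answering ARP locally with l2population.
    arp_responder = True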
Neutron depends on keystoneauth instead of keystoneclient.
Deprecation Notes
The default_subnet_pools option is now deprecated and will be removed in the Newton release. The same functionality is now provided by setting is_default attribute on subnetpools to True using the API or client.
The ‘force_gateway_on_subnet’ option is deprecated and will be removed in the ‘Newton’ cycle.
The ‘network_device_mtu’ option is deprecated and will be removed in the ‘Newton’ cycle. Please use the system-wide segment_mtu setting which the agents will take into account when wiring VIFs.
max_fixed_ips_per_port has been deprecated and will be removed in the Newton or Ocata cycle, depending on when all identified use cases of the option are satisfied via another quota system. If you depend on this configuration option to stop tenants from consuming IP addresses, please leave a comment on the bug report.
The ‘segment_mtu’ option of the ML2 configuration has been deprecated and replaced with the ‘global_physnet_mtu’ option in the main Neutron configuration. This option is meant to be used by all plugins for an operator to reference their physical network’s MTU, regardless of the backend plugin. Plugins should access this config option via the ‘get_deployment_physnet_mtu’ method added to neutron.plugins.common.utils to avoid being broken on any potential renames in the future.
Bug Fixes
Prior to Mitaka, the settings that control the frequency of router advertisements transmitted by the radvd daemon were not able to be adjusted. Larger deployments may wish to decrease the frequency in which radvd sends multicast traffic. The ‘min_rtr_adv_interval’ and ‘max_rtr_adv_interval’ settings in the L3 agent configuration file map directly to the ‘MinRtrAdvInterval’ and ‘MaxRtrAdvInterval’ in the generated radvd.conf file. Consult the manpage for radvd.conf for more detailed information.
Fixes bug 1537734
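A sketch of the L3 agent settings mentioned above, with illustrative values in seconds:

    [DEFAULT]
    # Map to MinRtrAdvInterval and MaxRtrAdvInterval in the generated radvd.conf.
    min_rtr_adv_interval = 30
    max_rtr_adv_interval = 100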
Prior to Mitaka, name resolution in instances requires specifying DNS resolvers via the ‘dnsmasq_dns_servers’ option in the DHCP agent configuration file or via neutron subnet options. In this case, the data plane must provide connectivity between instances and upstream DNS resolvers. Omitting both of these methods causes the dnsmasq service to offer the IP address on which it resides to instances for name resolution. However, the static dnsmasq ‘--no-resolv’ process argument prevents name resolution via dnsmasq, leaving instances without name resolution. Mitaka introduces the ‘dnsmasq_local_resolv’ option, default value False to preserve backward compatibility, that enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. In this case, the data plane must provide connectivity between the host and upstream DNS resolvers rather than between the instances and upstream DNS resolvers. Specifying DNS resolvers via the ‘dnsmasq_dns_servers’ option in the DHCP agent configuration overrides the ‘dnsmasq_local_resolv’ option for all subnets using the DHCP agent.
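A sketch of the DHCP agent setting:

    [DEFAULT]
    # Forward instance DNS queries to the resolvers configured on the host
    # running the DHCP agent.
    dnsmasq_local_resolv = True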
Before Mitaka, when a default subnetpool was defined in the configuration, a request to create a subnet would fall back to using it if no specific subnet pool was specified. This behavior broke the semantics of subnet create calls in this scenario and is now considered an API bug. This bug has been fixed so that there is no automatic fallback with the presence of a default subnet pool. Workflows which depended on this new behavior will have to be modified to set the new use_default_subnetpool attribute when creating a subnet.
Create DVR router namespaces pro-actively on the destination node during live migration events. This helps minimize packet loss to floating IP traffic.
Explicitly configure MTU of virtual network interfaces rather than using default values or incorrect values that do not account for overlay protocol overhead.
The server will fail to start if any of the declared required extensions, as needed by core and service plugins, are not properly configured.
partially closes bug 1468803
The Linuxbridge agent now supports the ability to toggle the local ARP responder when L2Population is enabled. This ensures compatibility with the allowed-address-pairs extension. closes bug 1445089
Fix SR-IOV agent macvtap assigned VF check when linux kernel < 3.13
Fixes Bug 1548193, removing ‘force_gateway_on_subnet’ configuration option. This will always allow adding gateway outside the subnet, and gateway cannot be forced onto the subnet range.
The ‘physical_device_mappings’ of sriov_nic configuration now can accept more than one NIC per physical network. For example, if ‘physnet2’ is connected to enp1s0f0 and enp1s0f1, ‘physnet2:enp1s0f0,physnet2:enp1s0f1’ will be a valid option.
Loaded agent extensions of SR-IOV agent are now shown in agent state API.
Other Notes
Please read the OpenStack Networking Guide.
For overlay networks managed by ML2 core plugin, the calculation algorithm subtracts the overlay protocol overhead from the value of [ml2] path_mtu. The DHCP agent provides the resulting (smaller) MTU to instances using overlay networks.
The [DEFAULT] advertise_mtu option must contain a consistent value on all hosts running the DHCP agent.
Typical networks can use [ml2] path_mtu = 1500.
The OpenFlow Agent (OFAgent) mechanism driver was completely decomposed from the Neutron tree in Mitaka. The OFAgent driver and its agent are also deprecated in favor of the OpenvSwitch mechanism driver with “native” of_interface in Mitaka and will be removed in the next release.
For details please read Blueprint mtu-selection-and-advertisement.
The OVS firewall driver requires OVS version 2.5 or higher with Linux kernel 4.3 or higher. More info at the OVS GitHub page.
The configuration option ‘force_gateway_on_subnet’ is removed. This will always allow adding gateway outside the subnet, and gateway cannot be forced onto the subnet range.
The oslo.messaging.notify.drivers entry points that were left in tree for backward compatibility with Icehouse are deprecated and will be removed after liberty-eol. Configure notifications using the oslo_messaging configuration options in neutron.conf.