Router Does Not Forward Multicast Packets


Background Information

When you troubleshoot multicast routing, the primary concern is the source address. Multicast has a concept of a Reverse Path Forwarding (RPF) check. When a multicast packet arrives on an interface, the RPF process checks to ensure that this incoming interface is the outgoing interface used by unicast routing to reach the source of the multicast packet. This RPF check prevents loops. Multicast routing does not forward a packet unless the source of the packet passes the RPF check. Once a packet passes the RPF check, multicast routing forwards the packet based only on the destination address.
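
As a simple, hypothetical illustration of the check (the addresses and interfaces here are not part of the example network that follows): if a multicast packet sourced from 10.1.1.1 arrives on Serial0, the router looks up 10.1.1.1 in its unicast routing table to find the interface it would use to send traffic back toward the source.

router#show ip route 10.1.1.1
Routing entry for 10.1.1.0/24
  Known via "ospf 1", distance 110, metric 20
  Routing Descriptor Blocks:
  * 10.2.2.1, via Ethernet0

Here the unicast route back to the source points out Ethernet0, so multicast packets from 10.1.1.1 pass the RPF check only if they arrive on Ethernet0; copies that arrive on Serial0 fail the check and are dropped.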

Like unicast routing, multicast routing has several available protocols, such as Protocol Independent Multicast dense mode (PIM-DM), PIM sparse mode (PIM-SM), Distance Vector Multicast Routing Protocol (DVMRP), Multicast Border Gateway Protocol (MBGP), and Multicast Source Discovery Protocol (MSDP). The case studies in this document walk you through the process of troubleshooting various problems. You will see which commands are used to quickly pinpoint the problem and learn how to resolve it. The case studies listed here are generic across the protocols, except where noted.

Router Does Not Forward Multicast Packets to Host Due to RPF Failure

This section provides a solution to the common problem of an IP multicast Reverse Path Forwarding (RPF) failure. This network diagram is used as an example.

[Network diagram: multicast server 1.1.1.1 -- E0/0 Router 75a E0/1 -- E3/1 Router 72a E3/2 -- receiving hosts]

In the figure above, multicast packets come into E0/0 of Router 75a from a server whose IP address is 1.1.1.1 and which sends to group 224.1.1.1. This is known as an (S,G) pair, or (1.1.1.1, 224.1.1.1).

Diagnose the Problem

Hosts directly connected to Router 75a receive the multicast feed, but hosts directly connected to Router 72a do not. First, issue the show ip mroute 224.1.1.1 command to see what is going on with Router 75a. This command examines the multicast route (mroute) for the group address 224.1.1.1:

75a#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Advertised via MSDP
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:01:23/00:02:59, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 00:01:23/00:00:00

(1.1.1.1, 224.1.1.1), 00:01:23/00:03:00, flags: TA
  Incoming interface: Ethernet0/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet0/1, Forward/Sparse-Dense, 00:01:23/00:00:00

Since the router runs PIM dense mode (the D flag tells you it is dense mode), ignore the (*, G) entry and focus on the (S, G) entry. This entry tells you that the multicast packets are sourced from a server whose address is 1.1.1.1 and that it sends to the multicast group 224.1.1.1. The packets come in on the Ethernet0/0 interface and are forwarded out the Ethernet0/1 interface. This is a perfect scenario.

Issue the show ip pim neighbor command to see whether Router 72a is showing the upstream router (75a) as a PIM neighbor:

ip22-72a#show ip pim neighbor
PIM Neighbor Table
Neighbor Address  Interface      Uptime    Expires   Ver  Mode
2.1.1.1           Ethernet3/1    2d00h     00:01:15  v2

From the show ip pim neighbor command output, the PIM neighbor relationship looks good.

Issue the show ip mroute command to see whether Router 72a has a good mroute:

ip22-72a#show ip mroute 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:10:42/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet3/1, Forward/Dense, 00:10:42/00:00:00
    Ethernet3/2, Forward/Dense, 00:10:42/00:00:00

(1.1.1.1, 224.1.1.1), 00:01:10/00:02:48, flags:
  Incoming interface: Ethernet2/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet3/1, Forward/Dense, 00:01:10/00:00:00
    Ethernet3/2, Forward/Dense, 00:00:16/00:00:00
ip22-72a#

You can see from the show ip mroute 224.1.1.1 command output that the incoming interface is Ethernet2/0, while Ethernet3/1 is expected.

Issue the show ip mroute 224.1.1.1 count command to see whether any multicast traffic for this group arrives at Router 72a and what happens to it next:

ip22-72a#show ip mroute 224.1.1.1 count
IP Multicast Statistics
3 routes using 2032 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.1.1.1, Source count: 1, Packets forwarded: 0, Packets received: 471
  Source: 1.1.1.1/32, Forwarding: 0/0/0/0, Other: 471/471/0
ip22-72a#

You can see from the Other counts that the traffic is dropped because of RPF failure: 471 packets dropped in total, and all 471 of them failed the RPF check.

Issue the show ip rpf <source> command to see if there is an RPF error:

ip22-72a#show ip rpf 1.1.1.1
RPF information for ? (1.1.1.1)  
RPF interface: Ethernet2/0  
RPF neighbor: ? (0.0.0.0)  
RPF route/mask: 1.1.1.1/32  
RPF type: unicast (static)  
RPF recursion count: 0  
Doing distance-preferred lookups across tables
ip22-72a#

Cisco IOS® calculates the RPF interface in this way. The possible sources of RPF information are the unicast routing table, the MBGP routing table, the DVMRP routing table, and the static mroute table. When the RPF interface is calculated, administrative distance is primarily used to determine exactly which source of information the RPF calculation is based on. The specific rules are listed here; a short illustration follows the list:

  • All preceding sources of RPF data are searched for a match on the source IP address. When using Shared Trees, the RP address is used instead of the source address.

  • If more than one matching route is found, the route with the lowest administrative distance is used.

  • If the admin distances are equal, then this order of preference is used:

    1. Static mroutes

    2. DVMRP routes

    3. MBGP routes

    4. Unicast routes

  • If multiple entries for a route occur within the same route table, the longest match route is used.
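
As a hedged illustration of these rules (the prefixes and next hops are hypothetical, and the distances shown are the usual IOS defaults): suppose the source 10.1.1.1 is matched both by an OSPF route in the unicast routing table and by a static mroute that was configured with an explicit distance of 1.

router#show ip route 10.1.1.1
Routing entry for 10.1.1.0/24
  Known via "ospf 1", distance 110, metric 20

router#show running-config | include ip mroute
ip mroute 10.1.1.0 255.255.255.0 192.168.1.1 1

The static mroute carries the lower administrative distance (1 versus 110), so it wins the RPF lookup and show ip rpf 10.1.1.1 resolves toward the 192.168.1.1 next hop. If the two distances were equal, the static mroute would still be preferred, because static mroutes come first in the order of preference.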

The show ip rpf 1.1.1.1 command output shows that the RPF interface is E2/0, but it needs to be E3/1, the interface on which the multicast traffic actually arrives.

Issue the show ip route 1.1.1.1 command to see why the RPF interface is different from what was expected.

ip22-72a#show ip route 1.1.1.1
Routing entry for 1.1.1.1/32
  Known via "static", distance 1, metric 0 (connected)
  Routing Descriptor Blocks:
  * directly connected, via Ethernet2/0
      Route metric is 0, traffic share count is 1

You can see from this show ip route 1.1.1.1 command output that there is a static /32 route, which causes the wrong interface to be chosen as the RPF interface.
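
If you want to see how that route was configured, a look at the running configuration can help. This is only a hypothetical sketch, since the actual configuration line is not shown in this example:

ip22-72a#show running-config | include ip route 1.1.1.1
ip route 1.1.1.1 255.255.255.255 Ethernet2/0

A static host route pointed out an interface like this would be consistent with what show ip route 1.1.1.1 reports: known via "static" and directly connected via Ethernet2/0.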

Issue some further debug commands:

ip22-72a#debug ip mpacket 224.1.1.1 
*Jan 14 09:45:32.972: IP: s=1.1.1.1 (Ethernet3/1) d=224.1.1.1 len 60, not RPF interface 
*Jan 14 09:45:33.020: IP: s=1.1.1.1 (Ethernet3/1) d=224.1.1.1 len 60, not RPF interface 
*Jan 14 09:45:33.072: IP: s=1.1.1.1 (Ethernet3/1) d=224.1.1.1 len 60, not RPF interface 
*Jan 14 09:45:33.120: IP: s=1.1.1.1 (Ethernet3/1) d=224.1.1.1 len 60, not RPF interface

The packets are coming in on E3/1, which is correct. However, they are being dropped because that is not the interface the unicast routing table uses for the RPF check.

Note: Debug packets with care. Packet debugging triggers process switching of the multicast packets, which is CPU intensive. Also, packet debugging can produce huge output, which can hang the router completely because of slow output to the console port. Before you debug packets, take special care to disable logging output to the console and to enable logging to the memory buffer. In order to achieve this, configure no logging console and logging buffered debugging. The results of the debug can be seen with the show logging command.
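
As a minimal sketch of that preparation with standard IOS logging commands:

ip22-72a(config)#no logging console
ip22-72a(config)#logging buffered debugging
ip22-72a(config)#end
ip22-72a#debug ip mpacket 224.1.1.1
ip22-72a#show logging

With this in place, the debug messages go to the memory buffer, and show logging displays them without the risk that console output overwhelms the router.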

Possible Fixes

You can either change the unicast routing table to satisfy this requirement or you can add a static mroute to force multicast to RPF out a particular interface, regardless of what the unicast routing table states. Add a static mroute:

ip22-72a(config)#ip mroute 1.1.1.1 255.255.255.255 2.1.1.1

This static mroute states that to get to the address 1.1.1.1, for RPF, use 2.1.1.1 as the next hop, which is out interface E3/1.
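
The other option mentioned earlier is to correct the unicast routing table itself so that the route back to 1.1.1.1 points out Ethernet3/1. This is only a hedged sketch, because the original static route configuration is not shown here; also keep in mind that a change like this alters unicast forwarding toward 1.1.1.1 as well:

ip22-72a(config)#no ip route 1.1.1.1 255.255.255.255 Ethernet2/0
ip22-72a(config)#ip route 1.1.1.1 255.255.255.255 2.1.1.1

The rest of this example continues with the static mroute fix. Verify that the RPF information now points to the expected interface: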

ip22-72a#show ip rpf 1.1.1.1 
RPF information for ? (1.1.1.1)   
RPF interface: Ethernet3/1   
RPF neighbor: ? (2.1.1.1)   
RPF route/mask: 1.1.1.1/32   
RPF type: static mroute   
RPF recursion count: 0   
Doing distance-preferred lookups across tables 

The output of show ip mroute and debug ip mpacket also looks good, the number of forwarded packets in the show ip mroute count output increases, and Host A receives the packets.

ip22-72a#show ip mroute 224.1.1.1 
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, C - Connected, L - Local, P - Pruned
       R - RP-bit set, F - Register flag, T - SPT-bit set, J - Join SPT
       M - MSDP created entry, X - Proxy Join Timer Running
       A - Advertised via MSDP
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 224.1.1.1), 00:01:15/00:02:59, RP 0.0.0.0, flags: DJC 
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet3/1, Forward/Sparse-Dense, 00:01:15/00:00:00
    Ethernet3/2, Forward/Sparse-Dense, 00:00:58/00:00:00
(1.1.1.1, 224.1.1.1), 00:00:48/00:02:59, flags: CTA 
  Incoming interface: Ethernet3/1, RPF nbr 2.1.1.1, Mroute
  Outgoing interface list:
    Ethernet3/2, Forward/Sparse-Dense, 00:00:48/00:00:00
ip22-72a#show ip mroute 224.1.1.1 count
IP Multicast Statistics
3 routes using 2378 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
 
Group: 224.1.1.1, Source count: 1, Packets forwarded: 1019, Packets received: 1019
  Source: 1.1.1.1/32, Forwarding: 1019/1/100/0, Other: 1019/0/0
 
ip22-72a#show ip mroute 224.1.1.1 count
IP Multicast Statistics
3 routes using 2378 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)
 
Group: 224.1.1.1, Source count: 1, Packets forwarded: 1026, Packets received: 1026
  Source: 1.1.1.1/32, Forwarding: 1026/1/100/0, Other: 1026/0/0
ip22-72a#
 
ip22-72a#debug ip mpacket 224.1.1.1 
*Jan 14 10:18:29.951: IP: s=1.1.1.1 (Ethernet3/1)
d=224.1.1.1 (Ethernet3/2) len 60, mforward
*Jan 14 10:18:29.999: IP: s=1.1.1.1 (Ethernet3/1)
d=224.1.1.1 (Ethernet3/2) len 60, mforward
*Jan 14 10:18:30.051: IP: s=1.1.1.1 (Ethernet3/1)
d=224.1.1.1 (Ethernet3/2) len 60, mforward
 