As the previous section showed, MSI-X interrupt support ultimately comes down to the function ixgbe_acquire_msix_vectors, so this section walks through that function in detail to see how the Linux source code gathers MSI-X interrupt information.
/**
 * ixgbe_acquire_msix_vectors - acquire MSI-X vectors
 * @adapter: board private structure
 *
 * Attempts to acquire a suitable range of MSI-X vector interrupts. Will
 * return a negative error code if unable to acquire MSI-X vectors for any
 * reason.
 */
static int ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter)
{
    struct ixgbe_hw *hw = &adapter->hw;
    int i, vectors, vector_threshold;

    /* We start by asking for one vector per queue pair with XDP queues
     * being stacked with TX queues.
     */
    vectors = max(adapter->num_rx_queues, adapter->num_tx_queues);
    vectors = max(vectors, adapter->num_xdp_queues);

    /* It is easy to be greedy for MSI-X vectors. However, it really
     * doesn't do much good if we have a lot more vectors than CPUs. We'll
     * be somewhat conservative and only ask for (roughly) the same number
     * of vectors as there are CPUs.
     */
    vectors = min_t(int, vectors, num_online_cpus());

    /* Some vectors are necessary for non-queue interrupts */
    vectors += NON_Q_VECTORS;

    /* Hardware can only support a maximum of hw.mac->max_msix_vectors.
     * With features such as RSS and VMDq, we can easily surpass the
     * number of Rx and Tx descriptor queues supported by our device.
     * Thus, we cap the maximum in the rare cases where the CPU count also
     * exceeds our vector limit
     */
    vectors = min_t(int, vectors, hw->mac.max_msix_vectors);

    /* We want a minimum of two MSI-X vectors for (1) a TxQ[0] + RxQ[0]
     * handler, and (2) an Other (Link Status Change, etc.) handler.
     */
    vector_threshold = MIN_MSIX_COUNT;

    adapter->msix_entries = kcalloc(vectors,
                                    sizeof(struct msix_entry),
                                    GFP_KERNEL);
    if (!adapter->msix_entries)
        return -ENOMEM;

    for (i = 0; i < vectors; i++)
        adapter->msix_entries[i].entry = i;

    vectors = pci_enable_msix_range(adapter->pdev, adapter->msix_entries,
                                    vector_threshold, vectors);
    if (vectors < 0) {
        /* A negative count of allocated vectors indicates an error in
         * acquiring within the specified range of MSI-X vectors
         */
        e_dev_warn("Failed to allocate MSI-X interrupts. Err: %d\n",
                   vectors);
        adapter->flags &= ~IXGBE_FLAG_MSIX_ENABLED;
        kfree(adapter->msix_entries);
        adapter->msix_entries = NULL;
        return vectors;
    }

    /* we successfully allocated some number of vectors within our
     * requested range.
     */
    adapter->flags |= IXGBE_FLAG_MSIX_ENABLED;

    /* Adjust for only the vectors we'll use, which is minimum
     * of max_q_vectors, or the number of vectors we were allocated.
     */
    vectors -= NON_Q_VECTORS;
    adapter->num_q_vectors = min_t(int, vectors, adapter->max_q_vectors);

    return 0;
}
The first half of the code computes how many interrupt vectors to request: it takes the larger of the Rx and Tx queue counts (and of the XDP queue count), because the design pairs each Rx/Tx queue pair with one MSI-X vector. As the in-code comment notes, the count should not exceed the number of online CPUs: multiple vectors improve interrupt handling only when they can be spread across different CPUs, and several interrupts stacked on the same CPU are still serviced one after another, so nothing is gained. The standalone sketch below plays this arithmetic out with concrete numbers.
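The following is a minimal user-space sketch of the same clamping arithmetic, not driver code; the numbers (16 Rx/Tx queues, no XDP queues, 8 online CPUs, NON_Q_VECTORS taken to be 1, a hardware limit of 64) are hypothetical.
#include <stdio.h>

#define NON_Q_VECTORS    1   /* assumption: one "other" (link status) vector */
#define MAX_MSIX_VECTORS 64  /* assumption: hardware MSI-X table limit */

static int max_i(int a, int b) { return a > b ? a : b; }
static int min_i(int a, int b) { return a < b ? a : b; }

int main(void)
{
    int num_rx_queues = 16, num_tx_queues = 16, num_xdp_queues = 0;
    int num_online_cpus = 8;
    int vectors;

    vectors = max_i(num_rx_queues, num_tx_queues);   /* 16 */
    vectors = max_i(vectors, num_xdp_queues);        /* 16 */
    vectors = min_i(vectors, num_online_cpus);       /* capped at 8 CPUs */
    vectors += NON_Q_VECTORS;                        /* 9: queues + "other" */
    vectors = min_i(vectors, MAX_MSIX_VECTORS);      /* still 9 */

    printf("vectors requested = %d\n", vectors);     /* prints 9 */
    return 0;
}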
After the vector count has been determined, the driver allocates the msix_entries array with kcalloc(), one entry per vector. The msix_entry structure is defined (in include/linux/pci.h) as follows:
struct msix_entry {
    u32 vector; /* Kernel uses to write allocated vector */
    u16 entry;  /* Driver uses to specify entry, OS writes */
};
In the msix_entry structure, entry is filled in by the driver to select a slot in the device's MSI-X table; the loop above simply assigns 0, 1, 2, and so on. The vector field is deliberately left untouched at this point: the kernel writes the allocated Linux IRQ number into it once pci_enable_msix_range() succeeds.
Finally, pci_enable_msix_range() is called to actually enable MSI-X. It returns the number of vectors it managed to allocate (somewhere between vector_threshold and the requested count) or a negative error code; on failure the driver clears IXGBE_FLAG_MSIX_ENABLED and frees the array. Once it succeeds, each msix_entries[i].vector can be handed to request_irq(), as the sketch below shows.
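The following is a minimal sketch (not ixgbe code) of how a driver typically consumes the array after pci_enable_msix_range() succeeds; the names my_dev, my_msix_handler and my_request_msix_irqs are hypothetical.
#include <linux/interrupt.h>
#include <linux/pci.h>

/* Hypothetical device structure holding the msix_entry array. */
struct my_dev {
    struct msix_entry *msix_entries;
};

static irqreturn_t my_msix_handler(int irq, void *data)
{
    /* In a real driver: acknowledge the interrupt cause and
     * schedule NAPI polling for the associated queue pair.
     */
    return IRQ_HANDLED;
}

static int my_request_msix_irqs(struct my_dev *dev, int nvecs)
{
    int i, err;

    for (i = 0; i < nvecs; i++) {
        /* .vector now holds the Linux IRQ number the kernel
         * assigned during pci_enable_msix_range().
         */
        err = request_irq(dev->msix_entries[i].vector,
                          my_msix_handler, 0, "my_dev", dev);
        if (err)
            return err;
    }
    return 0;
}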
Supplementary note:
XDP is an event-driven hook attached at the network card (a hook is a control module inserted into a normally running program; without the hook, that point falls through to the default processing). When a packet arrives at the NIC, the XDP program gets to process it first and decide its fate: drop it, forward it out another NIC, or pass it straight up to the normal network protocol stack.
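For a feel of what such a program looks like, here is a minimal XDP sketch in restricted C (compiled with clang to BPF bytecode and then attached to a NIC); it passes every packet up to the stack, and returning XDP_DROP instead would discard them.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_pass_all(struct xdp_md *ctx)
{
    /* XDP_PASS hands the packet to the normal kernel network stack.
     * Other verdicts: XDP_DROP discards it, XDP_TX bounces it back
     * out of the receiving NIC, XDP_REDIRECT steers it elsewhere.
     */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";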
For more details, see "XDP-内核可编程数据包处理方案" (XDP: a programmable in-kernel packet-processing scheme): https://blog.csdn.net/hbhgyu/article/details/109354273