Intel(R) Ethernet 800 Series Network Adapter ice Driver Installation

This document describes the installation steps and considerations for the ice driver for Intel(R) Ethernet 800 Series network adapters, including identifying the adapter, configuring SR-IOV, and Firmware Recovery Mode. It covers building and installing the driver, both manually and from an RPM package, as well as the driver's command line parameters and additional features. Note the compatibility constraints between SR-IOV, RDMA, and Link Aggregation during installation.

Overview
========
This driver supports kernel versions 3.10.0 and newer. However, some features
may require a newer kernel version. The associated Virtual Function (VF) driver
for this driver is iavf. Driver information can be obtained using ethtool,
lspci, and ip.

Instructions on updating ethtool can be found in the "Additional Features and
Configurations" section later in this document.

This driver is only supported as a loadable module at this time. Intel is not
supplying patches against the kernel source.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel adapter. All hardware requirements listed apply to use
with Linux.

This driver supports XDP (Express Data Path) on kernel 4.14 and later and
AF_XDP zero-copy on kernel 4.18 and later. Note that XDP is blocked for frame
sizes larger than 3KB.
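
For reference, the following standard commands report driver and device
information (replace <ethX> with your interface name; output fields vary by
tool version):

# ethtool -i <ethX>
# lspci | grep -i ethernet
# ip link show <ethX>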


Identifying Your Adapter
========================
The driver is compatible with the following devices:

* Intel(R) Ethernet Controller E810-C
* Intel(R) Ethernet Controller E810-XXV

For information on how to identify your adapter, and for the latest Intel
network drivers, refer to the Intel Support website:
http://www.intel.com/support

Important Notes
===============

Configuring SR-IOV for improved network security
------------------------------------------------
In a virtualized environment, on Intel(R) Ethernet Network Adapters that
support SR-IOV, the virtual function (VF) may be subject to malicious behavior.
Software-generated layer two frames, like IEEE 802.3x (link flow control),
IEEE 802.1Qbb (priority-based flow control), and others of this type, are not
expected and can throttle traffic between the host and the virtual switch,
reducing performance. To resolve this issue, and to ensure isolation from
unintended traffic streams, configure all SR-IOV enabled ports for VLAN tagging
from the administrative interface on the PF. This configuration allows
unexpected, and potentially malicious, frames to be dropped.

See "Configuring VLAN Tagging on SR-IOV Enabled Adapter Ports" later in this
README for configuration instructions.
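
As a minimal sketch of that configuration, a port VLAN can be assigned to a VF
from the administrative (PF) side with the standard ip tool (VF index 0 and
VLAN ID 100 are placeholder values):

# ip link set <PF_interface> vf 0 vlan 100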


Do Not Unload Port Driver If VF with Active VM Is Bound to It
-------------------------------------------------------------
Do not unload a port's driver if a Virtual Function (VF) with an active Virtual
Machine (VM) is bound to it. Doing so will cause the port to appear to hang.
Once the VM shuts down, or otherwise releases the VF, the command will
complete.


Firmware Recovery Mode
----------------------
A device will enter Firmware Recovery mode if it detects a problem that
requires the firmware to be reprogrammed. When a device is in Firmware Recovery
mode, it will not pass traffic or allow any configuration; you can only attempt
to recover the device's firmware. Refer to the "Intel(R) Ethernet Adapters and
Devices User Guide" for details on Firmware Recovery Mode and how to recover
from it.


Important notes for SR-IOV, RDMA, and Link Aggregation
------------------------------------------------------
Link Aggregation is mutually exclusive with SR-IOV and RDMA.
- If Link Aggregation is active, RDMA peers will not be able to register with
the PF, and SR-IOV VFs cannot be created on the PF.
- If either RDMA or SR-IOV is active, you cannot set up Link Aggregation on the
interface.

Bridging and MACVLAN are also affected by this. If you wish to use bridging or
MACVLAN with RDMA/SR-IOV, you must set up bridging or MACVLAN before enabling
RDMA or SR-IOV. If you are using bridging or MACVLAN in conjunction with SR-IOV
and/or RDMA, and you want to remove the interface from the bridge or MACVLAN,
you must follow these steps:
1. Remove RDMA if it is active
2. Destroy SR-IOV VFs if they exist
3. Remove the interface from the bridge or MACVLAN
4. Reactivate RDMA and recreate SR-IOV VFs as needed (see the sketch below)
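
A minimal command sketch of steps 1-3, assuming the RDMA function is provided
by the irdma module, <ethX> is the PF interface, and the interface is enslaved
to a bridge:

# modprobe -r irdma
# echo 0 > /sys/class/net/<ethX>/device/sriov_numvfs
# ip link set <ethX> nomaster

To reactivate afterwards (step 4), reload the RDMA module and write the desired
VF count back to sriov_numvfs.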


Building and Installation
=========================
The ice driver requires the Dynamic Device Personalization (DDP) package file
to enable advanced features (such as dynamic tunneling, Flow Director, RSS, and
ADQ, or others). The driver installation process installs the default DDP
package file and creates a soft link ice.pkg to the physical package
ice-x.x.x.x.pkg in the firmware root directory (typically /lib/firmware/ or
/lib/firmware/updates/). The driver install process also puts both the driver
module and the DDP file in the initramfs/initrd image.
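
As a quick sanity check after installation, you can confirm that the soft link
is in place (the path below assumes the /lib/firmware/updates/ firmware root;
adjust it for your distribution):

# ls -l /lib/firmware/updates/intel/ice/ddp/ice.pkg

The output should list ice.pkg as a symbolic link pointing to the versioned
ice-x.x.x.x.pkg file.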

NOTE: When the driver loads, it looks for intel/ice/ddp/ice.pkg in the firmware
root. If this file exists, the driver will download it into the device. If not,
the driver will go into Safe Mode where it will use the configuration contained
in the device's NVM. This is NOT a supported configuration and many advanced
features will not be functional. See "Dynamic Device Personalization" later for
more information.


To build a binary RPM package of this driver
--------------------------------------------
Note: RPM functionality has only been tested in Red Hat distributions.

1. Run the following command, where <x.x.x> is the version number for the
   driver tar file.

   # rpmbuild -tb ice-<x.x.x>.tar.gz

   NOTE: For the build to work properly, the currently running kernel MUST
   match the version and configuration of the installed kernel sources. If
   you have just recompiled the kernel, reboot the system before building.

2. After building the RPM, the last few lines of the tool output contain the
   location of the RPM file that was built. Install the RPM with one of the
   following commands, where <RPM> is the location of the RPM file:

   # rpm -Uvh <RPM>
       or
   # dnf/yum localinstall <RPM>

NOTES:
- To compile the driver on some kernel/arch combinations, you may need to
install a package with the development version of libelf (e.g. libelf-dev,
libelf-devel, elfutils-libelf-devel).
- When compiling an out-of-tree driver, details will vary by distribution.
However, you will usually need a kernel-devel RPM or some RPM that provides the
kernel headers at a minimum. The RPM kernel-devel will usually fill in the link
at /lib/modules/`uname -r`/build.


To manually build the driver
----------------------------
1. Move the base driver tar file to the directory of your choice.
   For example, use '/home/username/ice' or '/usr/local/src/ice'.

2. Untar/unzip the archive, where <x.x.x> is the version number for the
   driver tar file:

   # tar zxf ice-<x.x.x>.tar.gz

3. Change to the driver src directory, where <x.x.x> is the version number
   for the driver tar:

   # cd ice-<x.x.x>/src/

4. Compile the driver module:

   # make install

   The binary will be installed as:
   /lib/modules/<KERNEL VER>/updates/drivers/net/ethernet/intel/ice/ice.ko

   The install location listed above is the default location. This may differ
   for various Linux distributions.

   NOTE: To build the driver using the schema for unified ethtool statistics
   defined in https://sourceforge.net/p/e1000/wiki/Home/, use the following
   command:

   # make CFLAGS_EXTRA='-DUNIFIED_STATS' install

   NOTE: To compile the driver with ADQ (Application Device Queues) flags set,
   use the following command, where <nproc> is the number of logical cores:

   # make -j<nproc> CFLAGS_EXTRA='-DADQ_PERF_COUNTERS' install

   (This will also apply the above 'make install' command.)

5. Load the module using the modprobe command.

   To check the version of the driver and then load it:

   # modinfo ice
   # modprobe ice

   Alternately, make sure that any older ice drivers are removed from the
   kernel before loading the new module:

   # rmmod ice; modprobe ice

NOTE: To enable verbose debug messages in the kernel log, use the dynamic debug
feature (dyndbg). See "Dynamic Debug" later in this README for more information.
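
As a sketch of the standard kernel dynamic debug interface (this requires a
kernel built with CONFIG_DYNAMIC_DEBUG and debugfs mounted), verbose ice
messages can be enabled either at load time or at runtime:

# modprobe ice dyndbg=+p
# echo "module ice +p" > /sys/kernel/debug/dynamic_debug/control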

6. Assign an IP address to the interface by entering the following,
   where <ethX> is the interface name that was shown in dmesg after modprobe:

   # ip address add <IP_address>/<netmask bits> dev <ethX>

7. Verify that the interface works. Enter the following, where IP_address
   is the IP address for another machine on the same subnet as the interface
   that is being tested:

   # ping <IP_address>


Command Line Parameters
=======================
The only command line parameter the ice driver supports is the debug parameter
that can control the default logging verbosity of the driver. (Note: dyndbg
also provides dynamic debug information.)
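
For example, you can list the parameters exposed by the installed module with
modinfo and pass a value at load time (the verbosity values accepted by the
debug parameter depend on the driver version):

# modinfo ice | grep -i parm
# modprobe ice debug=<value>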

In general, use ethtool and other OS-specific commands to configure
user-changeable parameters after the driver is loaded.


Additional Features and Configurations
======================================

ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest ethtool
version is required for this functionality. Download it at:
https://kernel.org/pub/software/network/ethtool/

NOTE: The rx_bytes value of ethtool does not match the rx_bytes value of
Netdev, due to the 4-byte CRC being stripped by the device. The difference
between the two rx_bytes values will be 4 x the number of Rx packets. For
example, if Rx packets are 10 and Netdev (software statistics) displays
rx_bytes as "X", then ethtool (hardware statistics) will display rx_bytes as
"X+40" (4 bytes CRC x 10 packets).


Viewing Link Messages
---------------------
Link messages will not be displayed to the console if the distribution is
restricting system messages. In order to see network driver link messages on
your console, set dmesg to eight by entering the following:

# dmesg -n 8

NOTE: This setting is not saved across reboots.


Dynamic Device Personalization
------------------------------
Dynamic Device Personalization (DDP) allows you to change the packet processing
pipeline of a device by applying a profile package to the device at runtime.
Profiles can be used to, for example, add support for new protocols, change
existing protocols, or change default settings. DDP profiles can also be rolled
back without rebooting the system.

The ice driver automatically installs the default DDP package file during
driver installation. NOTE: It's important to do 'make install' during initial
ice driver installation so that the driver loads the DDP package automatically.

The DDP package loads during device initialization. The driver looks for
intel/ice/ddp/ice.pkg in your firmware root (typically /lib/firmware/ or
/lib/firmware/updates/) and checks that it contains a valid DDP package file.

If the driver is unable to load the DDP package, the device will enter Safe
Mode. Safe Mode disables advanced and performance features and supports only
basic traffic and minimal functionality, such as updating the NVM or
downloading a new driver or DDP package. Safe Mode only applies to the affected
physical function and does not impact any other PFs. See the "Intel(R) Ethernet
Adapters and Devices User Guide" for more details on DDP and Safe Mode.

NOTES:
- If you encounter issues with the DDP package file, you may need to download
an updated driver or DDP package file. See the log messages for more
information.

- The ice.pkg file is a symbolic link to the default DDP package file installed
by the Linux-firmware software package or the ice out-of-tree driver
installation.

- You cannot update the DDP package if any PF drivers are already loaded. To
overwrite a package, unload all PFs and then reload the driver with the new
package.

- Only the first loaded PF per device can download a package for that device.

You can install specific DDP package files for different physical devices in
the same system. To install a specific DDP package file:

1. Download the DDP package file you want for your device.

2. Rename the file ice-xxxxxxxxxxxxxxxx.pkg, where 'xxxxxxxxxxxxxxxx' is the
unique 64-bit PCI Express device serial number (in hex) of the device you want
the package downloaded on. The filename must include the complete serial number
(including leading zeros) and be all lowercase. For example, if the 64-bit
serial number is b887a3ffffca0568, then the file name would be
ice-b887a3ffffca0568.pkg.

To find the serial number from the PCI bus address, you can use the following
command:

# lspci -vv -s af:00.0 | grep -i Serial
Capabilities: [150 v1] Device Serial Number b8-87-a3-ff-ff-ca-05-68

You can use the following command to format the serial number without the
dashes:

# lspci -vv -s af:00.0 | grep -i Serial | awk '{print $7}' | sed s/-//g
b887a3ffffca0568

3. Copy the renamed DDP package file to /lib/firmware/updates/intel/ice/ddp/.
If the directory does not yet exist, create it before copying the file.

4. Unload all of the PFs on the device.

5. Reload the driver with the new package.
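
Putting steps 3-5 together, a minimal sketch using the example serial number
above (note that rmmod ice unloads every PF bound to the driver, which covers
step 4):

# mkdir -p /lib/firmware/updates/intel/ice/ddp/
# cp ice-b887a3ffffca0568.pkg /lib/firmware/updates/intel/ice/ddp/
# rmmod ice
# modprobe ice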

NOTE: The presence of a device-specific DDP package file overrides the loading
of the default DDP package file (ice.pkg).


RDMA (Remote Direct Memory Access)
----------------------------------
Remote Direct Memory Access, or RDMA, allows a network device to transfer data
directly to and from application memory on another system, increasing
throughput and lowering latency in certain networking environments.

The ice driver supports the following RDMA protocols:
- iWARP (Internet Wide Area RDMA Protocol)
- RoCEv2 (RDMA over Converged Ethernet)
The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses
UDP.

For detailed installation and configuration information, see the README file in
the RDMA driver tarball.
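
Once the RDMA driver is installed and loaded, one quick way to confirm that an
RDMA device is exposed is with the standard rdma tool from iproute2 or
ibv_devices from libibverbs (availability of these tools depends on your
distribution):

# rdma link show
# ibv_devices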

Notes:
- Devices based on the Intel(R) Ethernet 800 Series do not support RDMA when
operating in multiport mode with more than 4 ports.

- You cannot use RDMA or SR-IOV when link aggregation (LAG)/bonding is active,
and vice versa. To enforce this, on kernels 4.5 and above, the driver checks
for this mutual exclusion. On kernels older than 4.5, the driver cannot check
for this exclusion and is unaware of bonding events.


NVM Express* (NVMe) over TCP and Fabrics
----------------------------------------
RDMA provides a high throughput, low latency means to directly access NVM
Express* (NVMe*) drives on a remote server.

Refer to the following for details on supported operating systems and how to
set up and configure your server and client systems:
- NVM Express over TCP for Intel(R) Ethernet Products Configuration Guide
- NVM Express over Fabrics for Intel(R) Ethernet Products with RDMA
Configuration Guide

Both guides are available on the Intel Technical Library at:
https://www.intel.com/content/www/us/en/design/products-and-solutions/networking
-and-io/ethernet-controller-e810/technical-library.html


Application Device Queues (ADQ)
-------------------------------
Application Device Queues (ADQ) allow you to dedicate one or more queues to a
specific application. This can reduce latency for the specified application,
and allow Tx traffic to be rate limited per application.

The ADQ information contained here is specific to the ice driver. For more
details, refer to the E810 ADQ Configuration Guide at:
https://cdrdv2.intel.com/v1/dl/getContent/609008

Requirements:
- Kernel version 4.19.58 or later
- Operating system: Red Hat* Enterprise Linux* 7.5+ or SUSE* Linux Enterprise
Server* 12+
- The sch_mqprio, act_mirred and cls_flower modules must be loaded. For example:
# modprobe sch_mqprio
# modprobe act_mirred
# modprobe cls_flower
- The latest version of iproute2
# cd iproute2
# ./configure
# make DESTDIR=/opt/iproute2 install
- The latest ice driver and NVM image (Note: You must compile the ice driver
with the ADQ flag as shown in the "Building and Installation" section.)
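
For orientation only, the general shape of an ADQ configuration uses tc with
mqprio and flower, along the lines of the following sketch (the traffic class
layout, queue counts, IP address, and port are placeholder values; refer to the
ADQ Configuration Guide above for the authoritative procedure):

# tc qdisc add dev <ethX> root mqprio num_tc 2 map 0 1 queues 4@0 4@4 hw 1 mode channel
# tc qdisc add dev <ethX> ingress
# tc filter add dev <ethX> protocol ip parent ffff: prio 1 flower dst_ip <IP_address>/32 ip_proto tcp dst_port <port> skip_sw hw_tc 1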

When ADQ is enabled:
- You cannot change RSS parameters, the number of queues, or the MAC address in
the PF or VF. Delete the ADQ configuration before changing these settings.
- The driver supports subnet masks for IP addresses in the PF and VF. When you
add a subnet mask filter, the driver forwards packets to the ADQ VSI instead of
the main VSI.
- When the PF adds or deletes a port VLAN filter for the VF, it will extend to
all the VSIs within that VF.

Known issues:
- The latest RHEL and SLES distros have kernels with back-ported support for
ADQ. For all other operating system distributions, you must use a kernel
version of 4.19.58 or later (see the Requirements above).
