Linux kernel driver development
RDMA Priority Flow Control
update kernel headers
rdma-core/irdma: Implement device supported verb APIs
rdma-core/irdma: Add library setup and utility
rdma-core/irdma: Add user/kernel shared libraries
0001-rdma-core-irdma-Add-irdma-to-Makefiles-distro-files-.patch
Chapter3.docx
Chapter2.docx
Linux is available for a wide range of architectures, so an architecture-independent way of describing memory is needed. This chapter describes the structures used to keep account of memory banks, pages and flags that affect VM behavior.
312_On-demand-paging_LLiss.pdf
内存管理Introduction.docx (Memory Management Introduction)
1Introduction to the Linux Kernel.docx
2. Getting Started with the Kernel
3Process Management.docx
This chapter introduces the concept of the process, one of the fundamental abstractions in Unix operating systems. It defines the process, as well as related concepts such as threads, and then discusses how the Linux kernel manages each process: how they are enumerated within the kernel
Advanced programmability and recent updates with tc’s cls bpf
eBPF: an efficient, generic in-kernel bytecode engine. Today it is used mainly in networking, tracing and sandboxing: tc, XDP, socket filters/demuxing, perf, bcc, seccomp, LSM, ... cls_bpf is a programmable classifier and action in the tc subsystem, attachable to the ingress and egress of the kernel's networking data path (a minimal classifier is sketched below).
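A minimal sketch of such a cls_bpf classifier, written in restricted C and compiled to BPF bytecode with clang; the section name and the trivial match-all logic are illustrative assumptions, not content from the slides:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>

/* Place the program in an ELF section that tc's BPF loader looks for. */
#ifndef __section
#define __section(NAME) __attribute__((section(NAME), used))
#endif

/* Match-all classifier: every packet is accepted unchanged.
 * A real classifier would parse skb data and return a class id
 * or a TC_ACT_* verdict based on the match. */
__section("classifier")
int cls_match_all(struct __sk_buff *skb)
{
	return TC_ACT_OK;
}

char __license[] __section("license") = "GPL";

It could then be attached with, for example, tc qdisc add dev eth0 clsact followed by tc filter add dev eth0 ingress bpf da obj cls.o sec classifier (device and object file names assumed).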
4Process Scheduling.docx
The prior chapter discussed processes, the operating system abstraction of active program code. This chapter discusses the process scheduler, the kernel subsystem that puts those processes to work.
OVS_Offload_using_ASAP2_Direct_User_Manual_v3.3.pdf
Open vSwitch (OVS) allows Virtual Machines (VMs) to communicate with each other and with the outside world. OVS traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows. The OVS software-based solution is CPU intensive, affecting system performance and preventing full utilization of available bandwidth.
OVS kernel intro - By Maor.pptx
ASAP^2 Direct design
SR-IOV, legacy/switchdev
Software based vs Hardware based
ASAP2 Software Architecture
ASAP^2 Features
Classification fields (Matches)
Actions
Feature matrix
BF-2 Virtio-net_.docx
This document will show how to configure and test virtio-net emulation using BlueField-2.
AVS Offload Tech Discussions (November 2018).pdf
VirtIO-connected VMs, multiple interfaces per VM (vPorts)
Accelerate AVS virtual switching using hardware offload
Overlay networking using Ali-VXLAN/Ali-VXLAN-GPE
NAT (modify IP/port, adjust TCP sequence number)
Routing (TTL decrement, header modify)
Aging acceleration
Rate metering
Introduction to InfiniBand.ppt
InfiniBand features
Glossary
Get familiar with IB HW/SW entities
Get familiar with data path operations
Get familiar with IB SW objects
Intro to RDMA.pptx
Motivation
Background
Kernel Bypass & Transport Offload
RDMA programming APIs
FAE training - ASAP^2 - Gavi.pptx
PCIe device presents multiple instances to the OS/Hypervisor
Enables application direct access
Bare metal performance for VM
Reduces CPU overhead
Enables many advanced features – DPDK, RDMA, etc.
ASAP2_Hardware_Offloading_for_vSwitches_User_Manual_v4.4
Open vSwitch (OVS) allows Virtual Machines (VMs) to communicate with each other and with the outside world. OVS traditionally resides in the hypervisor, and switching is based on twelve-tuple matching on flows. The OVS software-based solution is CPU intensive, affecting system performance and preventing full utilization of available bandwidth.
Mellanox Adapters Programmer’s Reference Manual (PRM)
This Programmer’s Reference Manual (PRM) describes the interface used by developers to develop Mellanox adapter-based solutions and to write a driver for the supported adapter devices. The following Mellanox adapters are supported in this document: • Connect-IB® • ConnectX®-4 • ConnectX®-4 Lx • C…
Verbs programming tutorial-final.pdf
SR-IOV and IOMMU/VT-d must be enabled in BIOS
The intel_iommu=on option must be specified on the kernel command line. To check: cat /proc/cmdline. To set up: edit and configure the bootloader files (GRUB/GRUB2).
vswitch diagram (002).pptx
The diagram describes the whole process that a packet passes through.
vSwitch_Data_Path_HW_Offload_UM.pdf
This manual describes the proper use of DPDK APIs to efficiently offload a part or all of the vSwitch data path to the device.
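The manual above concerns offloading the vSwitch data path through DPDK; as a rough sketch of the kind of API involved, the rte_flow interface lets an application hand a match/action rule to the NIC. The port, pattern, and queue index below are placeholder assumptions, not taken from the manual:

#include <stdint.h>
#include <rte_flow.h>

/* Sketch: offload a rule that steers all IPv4 traffic on a port to one RX queue.
 * Error handling and the port/queue numbers are illustrative only. */
static struct rte_flow *offload_ipv4_to_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = queue_id };

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* rte_flow_validate() could be called first to check device support. */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}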
RDMA overview for verification team.pptx
What is RDMA?
Direct Memory Access (DMA) is the ability of a device to access host memory directly for reads and writes without involving the CPU
RDMA is the ability to do DMA on a remote machine (a minimal verbs sketch follows below)
Kernel and TCP/IP bypass
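A minimal libibverbs sketch of the remote-DMA idea above: once a buffer is registered and a connected QP exists, the local HCA writes directly into the peer's memory with no remote CPU involvement. The QP, MR, remote address and rkey are assumed to have been set up and exchanged already:

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Sketch: post one RDMA WRITE on an already-connected RC QP.
 * 'buf'/'mr' describe a locally registered buffer; 'remote_addr'/'rkey'
 * describe the peer's registered buffer (exchanged out of band). */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
			   void *buf, uint32_t len,
			   uint64_t remote_addr, uint32_t rkey)
{
	struct ibv_sge sge = {
		.addr   = (uintptr_t)buf,
		.length = len,
		.lkey   = mr->lkey,
	};
	struct ibv_send_wr wr, *bad_wr = NULL;

	memset(&wr, 0, sizeof(wr));
	wr.opcode              = IBV_WR_RDMA_WRITE;
	wr.sg_list             = &sge;
	wr.num_sge             = 1;
	wr.send_flags          = IBV_SEND_SIGNALED;   /* generate a completion */
	wr.wr.rdma.remote_addr = remote_addr;
	wr.wr.rdma.rkey        = rkey;

	return ibv_post_send(qp, &wr, &bad_wr);
}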
0-BlueField SmartNic Overview.pptx
QPs can be initialized and set up without coordination
Use TCP to exchange parameters (the exchanged blob is sketched below)
Connection Management
Out-of-band: uses GSI (QP1) MADs
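The "exchange parameters over TCP" step usually amounts to sending a small blob of addressing information before moving the QPs to RTR/RTS. A hedged sketch of such a blob, modeled on common perftest-style examples; the field layout is an assumption, not from the slides:

#include <stdint.h>

/* Sketch: per-side connection parameters exchanged over an ordinary
 * TCP socket before transitioning the QPs to RTR/RTS.  Field layout
 * is illustrative; real code should fix the byte order (htobe64 etc.)
 * before sending the struct over the wire. */
struct cm_con_data {
	uint64_t addr;    /* buffer address the peer may RDMA into  */
	uint32_t rkey;    /* remote key for that buffer             */
	uint32_t qp_num;  /* local QP number                        */
	uint16_t lid;     /* local port LID (IB); 0 on RoCE         */
	uint8_t  gid[16]; /* GID, needed for RoCE or IB routing     */
	uint32_t psn;     /* initial packet sequence number         */
} __attribute__((packed));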
OVS_Offload_using_ASAP2_Performance_Tuning_Guide_v3.0.pdf
This document describes the performance verification procedure for Open vSwitch (OVS) offload using Mellanox "Accelerated Switching And Packet Processing" (ASAP2) Direct technology. Additionally, it describes the proper way to bring up a system for optimized packet processing performance.
OVS_Offload_using_ASAP2_Performance_Tuning_Guide_v1.0.pdf
This document describes the performance verification procedure for Open vSwitch (OVS) offload using Mellanox "Accelerated Switching And Packet Processing" (ASAP2) Direct technology.
ASAP2 Arch.pdf
ASAP2 takes advantage of ConnectX-4/5 capabilities to offload the "in-host" network stack
Introduction to InfiniBand.ppt
InfiniBand is a network architecture for interconnecting processor nodes and I/O nodes to form a system area network. The architecture is independent of the host operating system (OS) and processor platform.
IB is an open standard (not proprietary)
IB has low latency (< 3 usec) and high bandwidth (up to …)
Intro to RDMA.PPTX
RC (~= TCP)
Reliable, connection-oriented transport. Guarantees full, in-order delivery of messages and RDMA
UD (~= UDP)
Unreliable, connection-less transport. Best effort to deliver messages. Optional multicast support
UC
Unreliable, connection-oriented transport. Best-effort, in-order delivery (QP creation selecting the transport type is sketched below)
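The transport service type is selected when the QP is created; a minimal libibverbs sketch is below, where IBV_QPT_RC could be replaced by IBV_QPT_UD or IBV_QPT_UC to pick the other transports listed above (the queue depths are placeholder values):

#include <infiniband/verbs.h>

/* Sketch: create a QP of the requested transport type on an existing PD/CQ.
 * Passing IBV_QPT_RC, IBV_QPT_UC or IBV_QPT_UD selects the service type. */
static struct ibv_qp *create_qp(struct ibv_pd *pd, struct ibv_cq *cq,
				enum ibv_qp_type type)
{
	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.qp_type = type,             /* e.g. IBV_QPT_RC */
		.cap = {
			.max_send_wr  = 16,  /* illustrative queue depths */
			.max_recv_wr  = 16,
			.max_send_sge = 1,
			.max_recv_sge = 1,
		},
	};

	return ibv_create_qp(pd, &attr);
}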
Virtio Full Emulation - Spec.docx
This spec describes the SW architecture for Virtio device (net and block) emulation on top of BlueField2.
BlueField2_MOC3.0_POC_0.5.pptx
Progress:
ASAP2 OVS-DPDK offload setup finished. VFs inside the local host and between two hosts can run forwarding
Preliminary forwarding performance numbers obtained
Full-emulation virtio-net demo in the Mellanox setup delivered to Alibaba
RoCE perftest latency and bandwidth results obtained
Need:
Full-emulation virtio-net performance…
BlueField2_MOC3.0_POC_0.3.pptx
High-priority cases which need to be finished in Aug
Case1 local forwarding
Case2 forwarding
Case3 boot from local NVMe disk
Case4 VirtIO emulation
Other cases which need to be finished in Sep
Case5 RoCE HPCC
Case6 IPSEC/KTLS
BlueField Software Documentation v2.5.0.11176__03_05_2020.pdf
The release note pages provide information for the BlueField software, such as changes and new features, supported platforms, and reports on known software issues as well as bug fixes.
sFlow-ovs.pptx
sFlow test environment
OVS flow dump
sflowtool result