DPDK usage:
receive and send packets within the minimum number of CPU cycles (usually less than 80 cycles)
develop fast packet capture algorithms (tcpdump-like)
run third-party fast path stacks
The main techniques DPDK relies on include:
UIO (user-space drivers, polling, zero-copy), hugepages (DPDK builds its own memory management system on top of these large pages), and CPU affinity (on multi-core architectures, binding worker threads to physical cores).
The main benefit of hugetlbpage is more efficient memory use: with large pages, the same amount of memory needs far fewer page-table entries, so there are far fewer TLB misses.
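As a rough illustration of why large pages help (this is not DPDK's own allocator, only the underlying mmap mechanism), the sketch below maps a single 2 MB hugepage with MAP_HUGETLB; it assumes hugepages have already been reserved, e.g. through /proc/sys/vm/nr_hugepages.

/* Minimal sketch: back a buffer with one 2 MB hugepage via MAP_HUGETLB.
 * Assumes hugepages were reserved first, e.g.:
 *   echo 64 > /proc/sys/vm/nr_hugepages
 * DPDK manages hugepage memory through its own EAL allocator; this only
 * shows the mechanism underneath. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_2MB (2UL * 1024 * 1024)

int main(void)
{
    void *buf = mmap(NULL, HUGE_2MB, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");   /* fails if no hugepages are reserved */
        return 1;
    }
    memset(buf, 0, HUGE_2MB);          /* the whole 2 MB sits behind one TLB entry */
    munmap(buf, HUGE_2MB);
    return 0;
}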
UIO is the supporting mechanism for running drivers in user space. Since DPDK is an application-level platform, the NIC drivers tightly coupled with it (mainly Intel's own gigabit igb and 10-gigabit ixgbe drivers) run in user space via the UIO mechanism.
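To make the UIO mechanism concrete, here is a minimal sketch of a generic user-space UIO client, assuming a device already bound at /dev/uio0: it mmaps the device's first memory region and blocks on read() for interrupt notifications. The device path and map size are assumptions; a real DPDK poll-mode driver layers its own polling and ring handling on top of this interface.

/* Minimal sketch of the generic UIO user-space interface (not DPDK's PMD code).
 * Assumes a bound device at /dev/uio0 whose first memory map (map0) is one page. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);          /* device node is an assumption */
    if (fd < 0) { perror("open /dev/uio0"); return 1; }

    /* Map region 0 of the device; the mmap offset is map_index * page_size. */
    long pagesz = sysconf(_SC_PAGESIZE);
    void *map = mmap(NULL, pagesz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
    volatile uint32_t *regs = map;               /* device registers, if map0 is register space */

    /* Each read() blocks until an interrupt and returns the interrupt count. */
    uint32_t irq_count;
    if (read(fd, &irq_count, sizeof(irq_count)) == (ssize_t)sizeof(irq_count))
        printf("interrupt #%u, first register word = 0x%x\n", irq_count, regs[0]);

    munmap(map, pagesz);
    close(fd);
    return 0;
}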
The CPU-affinity mechanism is a product of multi-core CPUs. On machines with more and more cores, the most intuitive way to improve the efficiency of devices and programs is to dedicate each core to one job: for example, if NICs eth0 and eth1 are both receiving packets, cpu0 can focus on eth0 and cpu1 on eth1, instead of cpu0 bouncing back and forth between eth0 and eth1; the same reasoning applies to a multi-queue NIC. DPDK uses CPU affinity mainly to pin the control-plane thread and each data-plane thread to different CPUs, eliminating the cost of repeated rescheduling: each thread sits in its own while loop and concentrates on its own work without interfering with the others (there is still communication, of course, e.g. the control plane receives user configuration and passes the resulting parameter settings to the data plane).
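The pinning itself can be shown with plain pthreads; the sketch below is only the underlying idea (DPDK's EAL performs the equivalent binding when it launches one thread per lcore): each worker is bound to one core and then runs its own tight loop.

/* Minimal sketch of core pinning with pthread_setaffinity_np
 * (DPDK's EAL does the equivalent when launching one thread per lcore). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    long core = (long)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Stand-in for a data-plane busy loop (e.g. polling one NIC queue). */
    for (volatile long i = 0; i < 100000000L; i++)
        ;
    printf("worker pinned to core %ld done\n", core);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);   /* e.g. handles eth0 */
    pthread_create(&t1, NULL, worker, (void *)1L);   /* e.g. handles eth1 */
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}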
In addition there are lock-free queues and a multi-process architecture (the DPDK kit was designed as a single-process framework, so if several DPDK-based processes need to interact there must be some mechanism for it; DPDK ships examples of this, essentially letting two processes share a block of memory and address it by offsets).
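The offset idea can be sketched with ordinary POSIX shared memory. This is an illustration only, not DPDK's actual multi-process code, and the region name /demo_region is made up: because each process may map the region at a different virtual address, data inside the region is referenced by its offset from the mapping base rather than by a raw pointer.

/* Minimal sketch of "share a memory block and address it by offset".
 * Run once with the argument "primary", then again without arguments.
 * Link with -lrt on older glibc. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct region_hdr {
    uint64_t msg_off;            /* offset of the message, valid in any mapping */
};

int main(int argc, char **argv)
{
    int primary = (argc > 1 && strcmp(argv[1], "primary") == 0);
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);  /* name is an assumption */
    if (fd < 0) { perror("shm_open"); return 1; }
    (void)ftruncate(fd, 4096);
    char *base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }
    struct region_hdr *hdr = (struct region_hdr *)base;

    if (primary) {
        /* Place a message somewhere in the region and publish its offset. */
        strcpy(base + 128, "hello from the primary process");
        hdr->msg_off = 128;
    } else {
        /* The secondary may get a different base address, but base + offset
         * still points at the same bytes. */
        printf("secondary read: %s\n", base + hdr->msg_off);
    }
    munmap(base, 4096);
    close(fd);
    return 0;
}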
PF_RING was created by Luca Deri to improve the kernel's packet-processing efficiency while remaining compatible with applications such as libpcap and tcpdump, as well as auxiliary tools (such as ntop for viewing and analyzing network traffic). PF_RING is a new type of network socket that dramatically improves packet-capture speed. Features:
Available for Linux kernels 2.6.32 and newer.
No need to patch the kernel: just load the kernel module.
PF_RING-aware drivers for increased packet capture acceleration.
10 Gbit Hardware Packet Filtering using commodity network adapters.
User-space DNA (Direct NIC Access) drivers for extreme packet capture/transmission speed, since the NIC NPU (Network Process Unit) pushes packets to and fetches packets from userland without any kernel intervention. Using the 10 Gbit DNA driver you can send/receive at wire speed at any packet size.
Libzero for DNA, for distributing packets in zero-copy across threads and applications.
Device driver independent.
Kernel-based packet capture and sampling.
Libpcap support for seamless integration with existing pcap-based applications (see the capture sketch after this list).
Ability to specify hundreds of header filters in addition to BPF.
Content inspection, so that only packets matching the payload filter are passed.
PF_RING plugins for advanced packet parsing and content filtering.
Ability to work in transparent mode (i.e. packets are also forwarded to the upper layers, so existing applications keep working as usual).
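Because of the libpcap compatibility noted above, a standard pcap program such as the sketch below should run unmodified once linked against a PF_RING-enabled libpcap; the interface name eth0 and the build command are assumptions.

/* Minimal pcap capture sketch; with a PF_RING-enabled libpcap it runs
 * unchanged and uses the PF_RING socket underneath.
 * Build (assumption): gcc capture.c -lpcap    Interface "eth0" is an assumption. */
#include <pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("captured %u bytes (wire length %u)\n", hdr->caplen, hdr->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535, 1 /* promisc */,
                                    1000 /* ms timeout */, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    pcap_loop(handle, 10, on_packet, NULL);   /* capture 10 packets, then stop */
    pcap_close(handle);
    return 0;
}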
NAPI (New API) is a technique Linux uses to improve network-processing efficiency. Its core idea is not to read data purely via interrupts: instead, an interrupt first wakes the data-receive service routine, which then switches to polling to drain the remaining data.
If you like flexibility you should use PF_RING; if you want pure speed, PF_RING DNA is the solution.
In DNA mode NAPI polling does not take place, hence PF_RING features such as reflection and packet filtering are not supported.
The basic idea of zero-copy is to reduce the number of data copies and system calls as a packet travels from the network device to the user program's address space, ideally with no CPU involvement at all, eliminating that part of the CPU load entirely. The main techniques used to achieve zero-copy are DMA transfers and memory-region mapping. PF_RING DNA is zero-copy; plain PF_RING is not.
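The memory-region-mapping half of this can be illustrated without PF_RING at all. The sketch below uses the stock Linux AF_PACKET PACKET_RX_RING: the kernel places frames into a ring that is mmap'ed into the process, so the application reads packets from shared memory instead of paying a copy plus a system call per packet (PF_RING DNA goes further and maps the NIC's own rings, bypassing the kernel entirely). The ring sizes here are arbitrary assumptions.

/* Sketch of the "memory-region mapping" idea using the kernel's own
 * AF_PACKET PACKET_RX_RING (not PF_RING DNA). Needs root (CAP_NET_RAW). */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* 64 frames of 2 KB in one 128 KB block (sizes are an assumption). */
    struct tpacket_req req = {
        .tp_block_size = 1 << 17, .tp_block_nr = 1,
        .tp_frame_size = 1 << 11, .tp_frame_nr = 64,
    };
    setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

    size_t ring_len = (size_t)req.tp_block_size * req.tp_block_nr;
    char *ring = mmap(NULL, ring_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    /* Watch the first slot: the kernel flips tp_status when a frame is ready. */
    struct tpacket_hdr *slot = (struct tpacket_hdr *)ring;
    while (!(slot->tp_status & TP_STATUS_USER))
        usleep(1000);                       /* simple wait, just for the sketch */
    printf("got a %u-byte frame via the mmap'ed ring\n", (unsigned)slot->tp_len);
    slot->tp_status = TP_STATUS_KERNEL;     /* hand the slot back to the kernel */

    munmap(ring, ring_len);
    close(fd);
    return 0;
}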