JPerf/iPerf: Speed Test Tools for TCP and UDP

Introduction

Because our customers in China and Taiwan use JPerf and iPerf to test network bandwidth, this article introduces what JPerf and iPerf are and how to use them.

JPerf

We usually use JPerf under Windows, because it is a graphical tool. We can download it from the following link; this is the latest version I could find:

http://www.softpedia.com/get/Network-Tools/Network-Testing/JPerf.shtml

How do I use it?

JPerf requires a Java environment; before running it, you should download the latest Java package and install it on your computer.

Download the Jperf-2.0.2.zip file from above.
Extract the contents of the zip file to a location on your computer.
Run jperf.bat

We can divide the entire pane into three parts:

Choose iPerf Mode: the JPerf program needs to run on two machines, one acting as server and one as client. When you run in server mode, you can select the port to listen on and limit which client may connect. When you run in client mode, you select the server address and port to connect to. The Parallel Streams option sets how many parallel streams are sent to the server at the same time; the default is 1.
Application layer options:

    Transmit: time in seconds to transmit for (default 10 secs).

    Output Format: format to report: Kbits, Mbits, KBytes, MBytes.

    Testing Mode:

        Dual: do a bidirectional test simultaneously; the server and client send test streams to each other at the same time. The default is client to server only.

        Trade: do a bidirectional test individually; when the client-to-server test is complete, the server then sends a test stream back to the client.

Transport layer options (their iperf command-line equivalents are sketched after this list):

    TCP:
        Measure bandwidth.
        Report MSS/MTU size and observed read sizes.
        Support for TCP window size via socket buffers.
    UDP:
        Client can create UDP streams of specified bandwidth.
        Measure packet loss.
        Measure delay jitter.
        Multicast capable.
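
Since JPerf simply drives iperf underneath, these choices map directly onto iperf flags. A minimal sketch using flags from the iperf 2.0 help output shown later in this article; the address, port, and rate are illustrative:

iperf -s -p 5001                       # server mode, listening on port 5001
iperf -c 13.1.1.2 -p 5001 -P 4 -t 10   # client mode: TCP, 4 parallel streams, 10 seconds
iperf -c 13.1.1.2 -p 5001 -u -b 100M   # client mode: UDP stream at a specified 100 Mbits/sec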

Example

topology:

PC(server)-----------(1G port)ASR9K(1G)--------------(client)PC

TCP test for bandwidth:

Testing with a TCP stream, the result is between 700 Mbits/sec and 800 Mbits/sec, close to the 1G line rate. To tune it, you can adjust the TCP window size or buffer length, but the improvement is not very obvious.
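
On the iperf command line, the corresponding knobs are -w (TCP window size via socket buffers) and -l (buffer length); the values below are illustrative starting points, not tuned recommendations:

iperf -c 13.1.1.2 -w 512K   # client with a larger TCP window (socket buffer)
iperf -c 13.1.1.2 -l 128K   # client with a larger read/write buffer length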

You can also increase the PC MTU and the ASR9K MTU; with that change, the end result reached 900 Mbits/sec. The following is the PC setting method.

In the network adapter's properties, enable the jumbo frame option at the largest size supported (the default is disabled):
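
As a rough command-line equivalent of the GUI steps on Windows (the interface name is illustrative, and the adapter's jumbo frame driver setting must also allow the larger size):

netsh interface ipv4 show subinterfaces
:: check the current MTU of each interface
netsh interface ipv4 set subinterface "Ethernet" mtu=9000 store=persistent
:: raise the MTU on the chosen interface to a jumbo-frame size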

UDP test for loss and jitter:

I have tried many methods but have not found a way to test bandwidth with UDP in JPerf; if you have any suggestions, you can contact me. (The iperf 3.0 UDP example later in this article works around this with "-b 0".)

JPerf Tips

According to my observation, the PC's CPU usage rises greatly during the test, so I suspect the result may be related to the hardware.
JPerf uses iperf as a back end to run all of the tests, and the bundled iperf version is too old. You can find iperf in the "bin" folder and run it from the CMD command line to show its version, as sketched below. For this reason we do not recommend that customers use JPerf.
Since JPerf uses iperf as a back end to run all of the tests, we can still use JPerf to help us build useful iperf commands.
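
A minimal way to check the bundled version from CMD, assuming JPerf was extracted to C:\jperf-2.0.2 (the path is illustrative):

cd C:\jperf-2.0.2\bin
iperf.exe -v
:: prints the version of the iperf binary that JPerf drives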

Iperf 2.0

iPerf is no longer being developed by its original maintainers; this includes the iperf version used by JPerf.

Beginning in 2014, another developer began fixing bugs and enhancing functionality.

We can download it at the following link; if you use it on Linux, you should first install the "gcc" and "gcc-c++" packages:

https://sourceforge.net/projects/iperf2
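
A minimal build-from-source sketch on CentOS; the tarball version number is illustrative (iperf 2 uses a standard autotools build):

yum install -y gcc gcc-c++            # the compiler packages mentioned above
tar -xzf iperf-2.0.10.tar.gz          # extract the downloaded source tarball
cd iperf-2.0.10
./configure && make && make install   # configure, compile, and install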

We can see that the parameters of iperf 2.0 are similar to those used by JPerf, so it is clear that iperf 2.0 has not changed much relative to the version bundled with JPerf.

[root@vm ~]# iperf
Usage: iperf [-s|-c host] [options]
Try `iperf --help' for more information.
[root@vm ~]# iperf --help
Usage: iperf [-s|-c host] [options]
iperf [-h|--help] [-v|--version]

Client/Server:
-b, --bandwidth #[kmgKMG | pps] bandwidth to send at in bits/sec or packets per second
-e, --enhancedreports use enhanced reporting giving more tcp/udp and traffic information
-f, --format [kmgKMG] format to report: Kbits, Mbits, KBytes, MBytes
-i, --interval # seconds between periodic bandwidth reports
-l, --len #[kmKM] length of buffer in bytes to read or write (Defaults: TCP=128K, v4 UDP=1470, v6 UDP=1450)
-m, --print_mss print TCP maximum segment size (MTU - TCP/IP header)
-o, --output <filename> output the report or error message to this specified file
-p, --port # server port to listen on/connect to
-u, --udp use UDP rather than TCP
--udp-counters-64bit use 64 bit sequence numbers with UDP
-w, --window #[KM] TCP window size (socket buffer size)
-z, --realtime request realtime scheduler
-B, --bind <host> bind to <host>, an interface or multicast address
-C, --compatibility for use with older versions does not sent extra msgs
-M, --mss # set TCP maximum segment size (MTU - 40 bytes)
-N, --nodelay set TCP no delay, disabling Nagle's Algorithm
-S, --tos # set the socket's IP_TOS (byte) field

Server specific:
-s, --server run in server mode
-t, --time # time in seconds to listen for new connections as well as to receive traffic (default not set)
-U, --single_udp run in single threaded UDP mode
-D, --daemon run the server as a daemon
-V, --ipv6_domain Enable IPv6 reception by setting the domain and socket to AF_INET6 (Can receive on both IPv4 and IPv6)

Client specific:
-c, --client <host> run in client mode, connecting to <host>
-d, --dualtest Do a bidirectional test simultaneously
-n, --num #[kmgKMG] number of bytes to transmit (instead of -t)
-r, --tradeoff Do a bidirectional test individually
-t, --time # time in seconds to transmit for (default 10 secs)
-B, --bind [<ip> | <ip:port>] bind src addr(s) from which to originate traffic
-F, --fileinput <name> input the data to be transmitted from a file
-I, --stdin input the data to be transmitted from stdin
-L, --listenport # port to receive bidirectional tests back on
-P, --parallel # number of parallel client threads to run
-R, --reverse reverse the test (client receives, server sends)
-T, --ttl # time-to-live, for multicast (default 1)
-V, --ipv6_domain Set the domain to IPv6 (send packets over IPv6)
-X, --peer-detect perform server version detection and version exchange
-Z, --linux-congestion <algo> set TCP congestion control algorithm (Linux only)

Miscellaneous:
-x, --reportexclude [CDMSV] exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
-y, --reportstyle C report as a Comma-Separated Values
-h, --help print this message and quit
-v, --version print version information and quit
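
As a concrete example built from the flags above, a bidirectional TCP test between the two PCs in the earlier topology (addresses are illustrative):

# on the server PC:
iperf -s -p 5001
# on the client PC: -d runs the bidirectional test simultaneously (JPerf's "Dual" mode);
# use -r instead for the sequential "Trade" mode
iperf -c 13.1.1.2 -p 5001 -d -t 30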

Iperf 3.0

What is iperf 3.0?
iperf3 is a new implementation, developed by ESnet since 2010. Primary development for iperf3 takes place on CentOS Linux, FreeBSD, and macOS; the latest version is iperf 3.5, released on 2018-03-02.

Note that iperf3 is not backwards compatible with the original iperf 2.0 or JPerf.

Links:

iperf3 — iperf3 3.5 documentation --- Linux

https://iperf.fr/ --- Windows

What is the difference between iperf 2 and iperf 3?
New options and deprecated options: see the iperf3 documentation linked above for the full lists.

If you have used both iperf 2.0 and iperf 3.0, you will find that iperf 2.0's parameters are more cumbersome than iperf 3.0's. For iperf 3.0, many parameters can be set on the client; the server-side results can also be obtained on the client with the "--get-server-output" parameter.
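
For instance, a client-side run that also pulls back the server's report; the address and port match the example below:

iperf3 -c 13.1.1.2 -p 6666 -t 10 --get-server-output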

Example
We use the above topology, modify the MTU, and use iperf 3.0 to test.

The following is the result of the experiment:


server:
iperf3 -s -p 6666

TCP test:

C:\Users\xuxing\Desktop\iperf-3.1.3-win64>iperf3.exe -c 13.1.1.2 -p 6666 -t 50
Connecting to host 13.1.1.2, port 6666
[ 4] local 12.1.1.2 port 7751 connected to 13.1.1.2 port 6666
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 105 MBytes 881 Mbits/sec
[ 4] 1.00-2.00 sec 111 MBytes 928 Mbits/sec
[ 4] 2.00-3.00 sec 110 MBytes 927 Mbits/sec
[ 4] 3.00-4.00 sec 111 MBytes 929 Mbits/sec
[ 4] 4.00-5.00 sec 110 MBytes 925 Mbits/sec
[ 4] 5.00-6.00 sec 111 MBytes 929 Mbits/sec
[ 4] 6.00-7.00 sec 111 MBytes 928 Mbits/sec
[ 4] 7.00-8.00 sec 110 MBytes 925 Mbits/sec
[ 4] 8.00-9.00 sec 111 MBytes 929 Mbits/sec
[ 4] 9.00-10.00 sec 111 MBytes 928 Mbits/sec
[ 4] 10.00-11.00 sec 110 MBytes 926 Mbits/sec
[ 4] 11.00-12.00 sec 110 MBytes 927 Mbits/sec
[ 4] 12.00-13.00 sec 111 MBytes 929 Mbits/sec
[ 4] 13.00-14.00 sec 111 MBytes 929 Mbits/sec
[ 4] 14.00-15.00 sec 109 MBytes 913 Mbits/sec
[ 4] 15.00-16.00 sec 111 MBytes 928 Mbits/sec
[ 4] 16.00-17.00 sec 110 MBytes 926 Mbits/sec
[ 4] 17.00-18.00 sec 111 MBytes 929 Mbits/sec
[ 4] 18.00-19.00 sec 110 MBytes 922 Mbits/sec
[ 4] 19.00-20.00 sec 111 MBytes 929 Mbits/sec
[ 4] 20.00-21.00 sec 111 MBytes 928 Mbits/sec
[ 4] 21.00-22.00 sec 111 MBytes 928 Mbits/sec
[ 4] 22.00-23.00 sec 111 MBytes 929 Mbits/sec
[ 4] 23.00-24.00 sec 111 MBytes 928 Mbits/sec
[ 4] 24.00-25.00 sec 111 MBytes 928 Mbits/sec
[ 4] 25.00-26.00 sec 111 MBytes 929 Mbits/sec
[ 4] 26.00-27.00 sec 110 MBytes 921 Mbits/sec
[ 4] 27.00-28.00 sec 111 MBytes 930 Mbits/sec
[ 4] 28.00-29.00 sec 111 MBytes 927 Mbits/sec
[ 4] 29.00-30.00 sec 111 MBytes 929 Mbits/sec
[ 4] 30.00-31.00 sec 110 MBytes 924 Mbits/sec
[ 4] 31.00-32.00 sec 111 MBytes 929 Mbits/sec
[ 4] 32.00-33.00 sec 110 MBytes 922 Mbits/sec
[ 4] 33.00-34.00 sec 109 MBytes 912 Mbits/sec
[ 4] 34.00-35.00 sec 110 MBytes 921 Mbits/sec
[ 4] 35.00-36.00 sec 110 MBytes 926 Mbits/sec
[ 4] 36.00-37.00 sec 110 MBytes 926 Mbits/sec
[ 4] 37.00-38.00 sec 110 MBytes 923 Mbits/sec
[ 4] 38.00-39.00 sec 110 MBytes 925 Mbits/sec
[ 4] 39.00-40.00 sec 97.4 MBytes 817 Mbits/sec
[ 4] 40.00-41.00 sec 107 MBytes 897 Mbits/sec
[ 4] 41.00-42.00 sec 110 MBytes 924 Mbits/sec
[ 4] 42.00-43.00 sec 110 MBytes 926 Mbits/sec
[ 4] 43.00-44.00 sec 111 MBytes 929 Mbits/sec
[ 4] 44.00-45.00 sec 111 MBytes 928 Mbits/sec
[ 4] 45.00-46.00 sec 109 MBytes 917 Mbits/sec
[ 4] 46.00-47.00 sec 110 MBytes 927 Mbits/sec
[ 4] 47.00-48.00 sec 110 MBytes 925 Mbits/sec
[ 4] 48.00-49.00 sec 110 MBytes 924 Mbits/sec
[ 4] 49.00-50.00 sec 110 MBytes 925 Mbits/sec


[ ID] Interval Transfer Bandwidth
[ 4] 0.00-50.00 sec 5.37 GBytes 922 Mbits/sec sender
[ 4] 0.00-50.00 sec 5.37 GBytes 922 Mbits/sec receiver

iperf Done.


UDP test (note that "-b 0" in the command below removes iperf3's default UDP bandwidth cap, so the client sends as fast as it can):

C:\Users\xuxing\Desktop\iperf-3.1.3-win64>iperf3.exe -c 13.1.1.2 -p 6666 -u -t 50 -V -b 0
iperf 3.1.3
CYGWIN_NT-10.0 XUXING-LAVVH 2.5.1(0.297/5/3) 2016-04-21 22:14 x86_64
Time: Mon, 12 Mar 2018 12:35:57 GMT
Connecting to host 13.1.1.2, port 6666
Cookie: XUXING-LAVVH.1520858157.569039.50b58
[ 4] local 12.1.1.2 port 64392 connected to 13.1.1.2 port 6666
Starting Test: protocol: UDP, 1 streams, 8192 byte blocks, omitting 0 seconds, 50 second test
[ ID] Interval Transfer Bandwidth Total Datagrams
[ 4] 0.00-1.00 sec 114 MBytes 953 Mbits/sec 14580
[ 4] 1.00-2.00 sec 109 MBytes 910 Mbits/sec 13900
[ 4] 2.00-3.00 sec 113 MBytes 946 Mbits/sec 14430
[ 4] 3.00-4.00 sec 117 MBytes 982 Mbits/sec 14980
[ 4] 4.00-5.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 5.00-6.00 sec 117 MBytes 985 Mbits/sec 15030
[ 4] 6.00-7.00 sec 118 MBytes 987 Mbits/sec 15050
[ 4] 7.00-8.00 sec 117 MBytes 985 Mbits/sec 15030
[ 4] 8.00-9.00 sec 118 MBytes 985 Mbits/sec 15040
[ 4] 9.00-10.00 sec 117 MBytes 985 Mbits/sec 15030
[ 4] 10.00-11.00 sec 116 MBytes 972 Mbits/sec 14840
[ 4] 11.00-12.00 sec 117 MBytes 980 Mbits/sec 14950
[ 4] 12.00-13.00 sec 117 MBytes 985 Mbits/sec 15030
[ 4] 13.00-14.00 sec 117 MBytes 985 Mbits/sec 15030
[ 4] 14.00-15.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 15.00-16.00 sec 117 MBytes 981 Mbits/sec 14970
[ 4] 16.00-17.00 sec 117 MBytes 983 Mbits/sec 15010
[ 4] 17.00-18.00 sec 117 MBytes 984 Mbits/sec 15010
[ 4] 18.00-19.00 sec 117 MBytes 984 Mbits/sec 15010
[ 4] 19.00-20.00 sec 117 MBytes 983 Mbits/sec 15000
[ 4] 20.00-21.00 sec 117 MBytes 982 Mbits/sec 14980
[ 4] 21.00-22.00 sec 118 MBytes 986 Mbits/sec 15040
[ 4] 22.00-23.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 23.00-24.00 sec 118 MBytes 988 Mbits/sec 15070
[ 4] 24.00-25.00 sec 118 MBytes 987 Mbits/sec 15070
[ 4] 25.00-26.00 sec 117 MBytes 984 Mbits/sec 15010
[ 4] 26.00-27.00 sec 117 MBytes 980 Mbits/sec 14960
[ 4] 27.00-28.00 sec 114 MBytes 957 Mbits/sec 14590
[ 4] 28.00-29.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 29.00-30.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 30.00-31.00 sec 117 MBytes 982 Mbits/sec 14980
[ 4] 31.00-32.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 32.00-33.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 33.00-34.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 34.00-35.00 sec 115 MBytes 964 Mbits/sec 14700
[ 4] 35.00-36.00 sec 118 MBytes 988 Mbits/sec 15070
[ 4] 36.00-37.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 37.00-38.00 sec 116 MBytes 976 Mbits/sec 14900
[ 4] 38.00-39.00 sec 117 MBytes 983 Mbits/sec 15000
[ 4] 39.00-40.00 sec 118 MBytes 988 Mbits/sec 15070
[ 4] 40.00-41.00 sec 117 MBytes 982 Mbits/sec 14980
[ 4] 41.00-42.00 sec 117 MBytes 982 Mbits/sec 14980
[ 4] 42.00-43.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 43.00-44.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 44.00-45.00 sec 118 MBytes 986 Mbits/sec 15050
[ 4] 45.00-46.00 sec 117 MBytes 983 Mbits/sec 15010
[ 4] 46.00-47.00 sec 118 MBytes 986 Mbits/sec 15040
[ 4] 47.00-48.00 sec 118 MBytes 987 Mbits/sec 15070
[ 4] 48.00-49.00 sec 118 MBytes 987 Mbits/sec 15060
[ 4] 49.00-50.00 sec 118 MBytes 986 Mbits/sec 15050


Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 4] 0.00-50.00 sec 5.71 GBytes 981 Mbits/sec 0.016 ms 8149/748210 (1.1%)
[ 4] Sent 748210 datagrams
CPU Utilization: local/sender 88.3% (2.2%u/86.1%s), remote/receiver 26.2% (8.2%u/18.0%s)


If you want more information on parameter tuning, performance improvements, or 40G/100G testing, please refer to the following article:

http://software.es.net/iperf/faq.html

Summary

If the customer needs a graphical interface, JPerf is recommended, but its bundled iperf version is too old, so the results can only be used as a reference.
To run a bidirectional test, the latest version of iperf 2.0 is recommended.
If the customer needs more accurate results and a UDP bandwidth test, the latest version of iperf 3.0 is recommended.
According to my observation, the PC's CPU usage rises greatly during the test, so the results may be related to the hardware.
It is recommended that the test client run Linux, because more parameters are available there; by adjusting these parameters we can reduce the deviation in results caused by the hardware and operating system.

Reposted from: https://blog.51cto.com/superxing/2091498
