Testing VMkernel network connectivity with the vmkping command (1003728)

Last Updated: 2021/3/29

Purpose

For troubleshooting purposes, it may be necessary to test VMkernel network connectivity between ESXi hosts in your environment.

This article provides you with the steps to perform a vmkping test between your ESXi hosts.
 

Resolution

The vmkping command sources a ping from the local VMkernel port.

To test VMkernel network connectivity with vmkping:

  1. Connect to the ESXi host using an SSH session. For more information, see Using ESXi Shell in ESXi 5.x, 6.x and 7.x (2004746).
     
  2. In the command shell, run this command:

    vmkping -I vmkX x.x.x.x

    where x.x.x.x is the hostname or IP address of the server that you want to ping, and vmkX is the VMkernel interface to source the ping from.
     
  3. If you have Jumbo Frames configured in your environment, run the vmkping command with the -s and -d options:

    vmkping -I vmkX -d -s 8972 x.x.x.x

    Note: The -d option sets the DF (Don't Fragment) bit on the IPv4 packet. 8972 is the largest ICMP payload that fits in a 9000 MTU frame in ESXi: 9000 minus 20 bytes of IPv4 header and 8 bytes of ICMP header.

To test 1500 MTU, run the command:

vmkping -I vmkX x.x.x.x -d -s 1472
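The -s values used above follow from simple arithmetic: the ICMP payload is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A minimal sketch of that calculation (the mtu_payload helper is illustrative, not an ESXi command):

```shell
#!/bin/sh
# Largest ICMP payload (-s value) that fits in a given MTU without
# fragmentation: MTU - 20 (IPv4 header) - 8 (ICMP header).
mtu_payload() {
    echo $(( $1 - 20 - 8 ))
}

mtu_payload 9000   # jumbo frames   -> 8972
mtu_payload 1500   # standard frame -> 1472
```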

To verify the MTU size of your network interfaces, run this command in the SSH session:

esxcfg-nics -l

Output should be similar to:

esxcfg-nics -l

Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:02:00.00 e1000 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
vmnic1 0000:02:01.00 e1000 Up 1000Mbps Full xx:xx:xx:xx:xx:xx 9000 Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

esxcfg-vmknic -l

Output should be similar to:

esxcfg-vmknic -l

Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type

vmk1 iSCSI IPv4 10.10.10.10 255.255.255.0 10.10.10.255 XX:XX:XX:XX:XX:XX 9000 65535 true STATIC
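To check the VMkernel MTU programmatically, the MTU column of esxcfg-vmknic -l output can be extracted with standard text tools. A sketch assuming the column layout shown above (field positions shift if a port group name contains spaces, so treat this as illustrative; on a host you would pipe the live command output into it):

```shell
#!/bin/sh
# Print interface name and MTU (fields 1 and 8 in the sample layout
# above) from esxcfg-vmknic -l style output.
extract_mtu() {
    awk '/^vmk/ { print $1, $8 }'
}

# Sample line from the article, standing in for live output:
echo 'vmk1 iSCSI IPv4 10.10.10.10 255.255.255.0 10.10.10.255 XX:XX:XX:XX:XX:XX 9000 65535 true STATIC' | extract_mtu
```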

A successful ping response is similar to:

vmkping -I vmk0 10.0.0.1

PING server(10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=10.245 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.926 ms
--- server ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.926/4.035/10.245 ms

An unsuccessful ping response is similar to:

vmkping 10.0.0.2
PING server (10.0.0.2) 56(84) bytes of data.
--- server ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 3017ms

Note: The commands shown above are the same for IPv6; just add the -6 option and replace x.x.x.x with an IPv6 address, for example:

vmkping -6 -I vmkX xx:xx:xx:xx:xx:xx:xx:xx

The full list of vmkping options is:

vmkping [args] [host]
 

arg              use
-4               use IPv4 (default)
-6               use IPv6
-c <count>       set packet count
-d               set DF bit (do not fragment) in IPv4, or disable fragmentation in IPv6
-D               VMkernel TCP stack debug mode
-i <interval>    set interval (secs)
-I <interface>   set outgoing interface, such as "-I vmk1"
-N <next_hop>    set IP*_NEXTHOP (bypasses routing lookup); for IPv4, -I is required to use -N
-s <size>        set the number of ICMP data bytes to be sent. The default is 56, which becomes a 64-byte ICMP message once the 8-byte ICMP header is added (these sizes do not include the IP header)
-t <ttl>         set IPv4 Time To Live or IPv6 Hop Limit
-v               verbose
-W <time>        set timeout to wait if no responses are received (secs)
-X               XML output format for esxcli framework
-S               set the network stack instance name; if unspecified, the default stack is used (IPv4 only, not IPv6)


Notes:

  • If you see intermittent ping success, this might indicate you have incompatible NICs teamed on the vmkernel port. Either team compatible NICs or set one of the NICs to standby.
  • If you do not see a response when pinging the server by hostname, ping its IP address instead. If the ping by IP address succeeds, the problem is with hostname resolution. When testing connectivity to a VMkernel port on another server, use that VMkernel port's IP address, because the server's hostname usually resolves to the service console address on the remote server.
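When several targets need the same test, the per-host commands above can be scripted. A minimal dry-run sketch that only prints the jumbo-frame vmkping test for each host in a list (the HOSTS and IFACE values are illustrative; on an ESXi host, replace echo with direct execution):

```shell
#!/bin/sh
# Dry run: print the jumbo-frame vmkping test for each target host.
HOSTS="10.0.0.1 10.0.0.2"   # illustrative target IPs
IFACE="vmk1"                # VMkernel interface to source the ping from

for h in $HOSTS; do
    # Replace 'echo' with the bare command to actually run the test.
    echo "vmkping -I $IFACE -d -s 8972 $h"
done
```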

Related Information

VMware Skyline Health Diagnostics for vSphere - FAQ
Troubleshooting vMotion fails with network errors
