Subject: Netvsc PMD on Hyper-V VM could not receive packets
From: madhukar mythri @ 2025-09-10 16:28 UTC
To: users


Hi,

We are migrating our DPDK app from the “failsafe” PMD to the “netvsc” PMD, as
recommended by Microsoft.
The “failsafe” PMD works well with the DPDK “testpmd” app on both Azure cloud
VMs and on-prem Hyper-V (Linux) VMs.

The “netvsc” PMD, however, works only on Azure cloud (with Accelerated
Networking (AN) both enabled and disabled); it does not work on the on-prem
local Hyper-V Linux VM (based on RHEL-9).
We could not receive any packets on the synthetic device when testing with
DPDK “testpmd”. The network adapter (without SR-IOV enabled) is connected
properly from the Hyper-V switch to the VM, and we could receive packets on
the kernel-mode “hv_netvsc” network interface; but once we bind the VMbus
network device to “uio_hv_generic” as shown below and start the “testpmd”
app, the port stats show no Rx packets at all.
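
(Before rebinding, we confirmed Rx on the kernel “hv_netvsc” interface roughly
as in the sketch below; “eth0” here is only an assumption for the synthetic
interface name.)
====================
# Watch Rx counters on the synthetic interface while traffic is being sent
ip -s link show eth0
# Or capture a few frames directly (only while still bound to hv_netvsc)
tcpdump -ni eth0 -c 5
====================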

Steps to bind the network device from “hv_netvsc” to “uio_hv_generic” and
start the “testpmd” app:
======================
# Class GUID for Hyper-V synthetic network devices
NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
modprobe uio_hv_generic
# Let uio_hv_generic claim synthetic network devices, then move this
# device (DEV_UUID) from hv_netvsc over to uio_hv_generic
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind

./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem -- -i --mbcache=64
================
Here, DEV_UUID is obtained from the synthetic kernel interface via “cat
/sys/class/net/eth0/device/device_id”. Once “testpmd” is started as above,
the driver name is reported correctly as “net_netvsc” and the DPDK ports
“start” without any errors, as shown below.
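
For reference, a small sketch of how DEV_UUID is derived and how the rebind
can be checked (“eth0” is an assumption for the synthetic interface name):
====================
IFACE=eth0   # assumption: kernel name of the synthetic NIC
DEV_UUID=$(cat /sys/class/net/$IFACE/device/device_id)
echo $DEV_UUID

# After the bind steps above, the device GUID should be listed under uio_hv_generic
ls /sys/bus/vmbus/drivers/uio_hv_generic/
====================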

These steps work well on an Azure cloud VM with AN enabled or disabled, and
the Rx stats show the traffic being received.
It looks like a Hyper-V setup issue, since the same steps work fine on the
Azure VM but not on the local Hyper-V (Windows 2016 based) VM (Linux RHEL-9).

Has anybody tried running DPDK apps on a Linux VM on a local on-prem Windows
Hyper-V host? If so, please let me know of any suggestions on this issue.

Linux-kernel version on VM: 5.15.0
DPDK-version: 24.11

Sample output of the “testpmd” app is as follows:
====================
./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem  -- -i  --mbcache=64
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be
adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=149504, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:2E:CC:1E
Configuring Port 1 (socket 0)
Port 1: 00:15:5D:2E:CC:1F
Checking link statuses...
Done
testpmd> sh port info 0
Command not found
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 00:15:5D:2E:CC:1E
Device name: 7cac5c55-1a7c-4690-a70c-13d4acbb35ac
Driver name: net_netvsc
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 16128
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 64
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 64
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none
testpmd>
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 3 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd>
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0

############################################################################
testpmd>
====================
We had pumped traffic from another machine toward this MAC address, but as
shown above the Rx counters remain zero.
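
(For completeness, a hypothetical sketch of how such traffic can be aimed at
the port’s MAC from a Linux peer; the IP address and interface name below are
placeholders, and a static neighbour entry is used so the frames carry that
destination MAC even though nothing answers ARP for it.)
====================
# On the traffic-generator machine (placeholder address/interface)
ip neigh replace 192.168.100.50 lladdr 00:15:5d:2e:cc:1e nud permanent dev eth1
ping -c 100 192.168.100.50   # test IP must be routed on-link via eth1
====================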

Thanks,
Madhukar.
