Subject: Netvsc PMD on Hyper-V VM could not receive packets
From: madhukar mythri
Date: Wed, 10 Sep 2025 21:58:49 +0530
To: users@dpdk.org

Hi,

We are migrating our DPDK app from the "failsafe" PMD to the "netvsc" PMD, as recommended by Microsoft.
The "failsafe" PMD works well on both Azure cloud and on-prem Hyper-V (Linux) VMs when using the DPDK "testpmd" app.

The "netvsc" PMD, however, works only on Azure cloud (with Accelerated Networking (AN) enabled or disabled); it does not work on the on-prem local Hyper-V Linux VM (based on RHEL-9).

We cannot receive any packets on the synthetic device when testing with DPDK "testpmd". The network adapter (without SR-IOV enabled) has a good connection from the Hyper-V switch to the VM: we receive packets on the kernel-mode "hv_netvsc" network interface. But once we bind the VMBus network device to "uio_hv_generic" as follows and start the "testpmd" app, no packets are received on Rx according to the port stats.

Steps to bind the network device from "hv_netvsc" to "uio_hv_generic" and start the "testpmd" app:
======================
NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e"
modprobe uio_hv_generic
echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind

./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem -- -i --mbcache=64
======================
Here, DEV_UUID is taken from the synthetic kernel interface with "cat /sys/class/net/eth0/device/device_id". Once "testpmd" is started as above, the driver name is reported correctly as "net_netvsc" and the DPDK ports "start" without any errors, as shown below.
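For completeness, a minimal sketch of how DEV_UUID could be set before running the bind commands above; it assumes eth0 is the synthetic interface still bound to "hv_netvsc", so adjust the name to match the actual setup:
======================
# Assumption: eth0 is the synthetic hv_netvsc interface on this VM.
DEV_UUID=$(cat /sys/class/net/eth0/device/device_id)
echo "DEV_UUID=$DEV_UUID"
======================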
These steps work well on an Azure cloud VM with AN enabled or disabled, and we receive traffic in the Rx stats there.
It looks like a Hyper-V setup issue, since the same steps work fine on the Azure VM but not on the local Hyper-V (Windows 2016 based) VM (Linux, RHEL-9).

Has anybody run DPDK apps on a local on-prem Windows Hyper-V (Linux) VM? If so, please let me know any suggestions on this issue.

Linux kernel version on the VM: 5.15.0
DPDK version: 24.11

Sample output of the "testpmd" app follows:
====================
./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem -- -i --mbcache=64
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=149504, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:2E:CC:1E
Configuring Port 1 (socket 0)
Port 1: 00:15:5D:2E:CC:1F
Checking link statuses...
Done
testpmd> sh port info 0
Command not found
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 00:15:5D:2E:CC:1E
Device name: 7cac5c55-1a7c-4690-a70c-13d4acbb35ac
Driver name: net_netvsc
Firmware-version: not available
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp
Minimum size of RX buffer: 1024
Maximum configurable length of RX packet: 16128
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 64
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 64
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs number alignment: 1
Max segment number per packet: 65535
Max segment number per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none
testpmd>
testpmd> start
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 3 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd>
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
testpmd>
====================
We pumped traffic from another machine to this MAC address.

Thanks,
Madhukar.
Hi,

We are migrating our DPDK App from =E2= =80=9Cfailsafe=E2=80=9D to =E2=80=9Cnetvsc=E2=80=9D PMD as per the recommen= dation of Microsoft.
=E2=80=9CFailsafe=E2=80=9D PMD works well on both A= zure cloud and on-prem Hyper-V VM(Linux) machines using the DPDK =E2=80=9Ct= estpmd=E2=80=9D app.

Whereas =E2=80=9CNetvsc=E2=80=9D PMD works well= only on Azure cloud with both(AN enabled/disabled options), But not workin= g on the on-prem local Hyper-V Linux VM(based on RHEL-9).
We could not r= eceive any packets on the synthetic device when we test with DPDK =E2=80=9C= testpmd=E2=80=9D. The Network Adapter(without SR-IOV enable) connection is = good from the Hyper-V switch to VM, we could receive packets on the Kernel-= mode driver =E2=80=9Chv_netvsc=E2=80=9D network interface, but, once we bin= d the VMbus network device to =E2=80=9Cuio_hv_generic=E2=80=9D as follows a= nd load =E2=80=9Ctestpmd=E2=80=9D App, we could not receive any pkts on Rx = as per the port stats.

Steps to bind network device from =E2=80=9Chv= _netvsc=E2=80=9D to =E2=80=9Cuio_hv_generic=E2=80=9D and start the =E2=80= =9Ctestpmd=E2=80=9D app:
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D
NET_UUID=3D"f8615163-df3e-46c5-913f-f2d2f965e= d0e"
modprobe uio_hv_generic
echo $NET_UUID > /sys/bus/vmbus/= drivers/uio_hv_generic/new_id
echo $DEV_UUID > /sys/bus/vmbus/drivers= /hv_netvsc/unbind
echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_gene= ric/bind

=C2=A0./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem =C2=A0-- -= i =C2=A0--mbcache=3D64
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D<= br>Here, DEV_UUID got from the synthetic kernel interface using =E2=80=9Cca= t =C2=A0 /sys/class/net/eth0/device/device_id=E2=80=9D. Once we start the = =E2=80=9Ctestpmd=E2=80=9D as mentioned above we could see the driver-name p= roperly as =E2=80=9Cnet_netvsc=E2=80=9D and we could =E2=80=9Cstart=E2=80= =9D the DPDK ports well without any Errors as shown below.

These ste= ps works well on Azure cloud VM with AN enabled/disabled and we could recei= ve the traffic on Rx stats well.
It looks like, Hyper-V setup issue, as = the above steps works fine on Azure-VM and not=C2=A0working on local Hyper-= V(Windows 2016 based) VM(Linux RHEL-9).

Has anybody tried local on-p= rem Windows Hyper-V VM(Linux) DPDK App=E2=80=99s ?=C2=A0 If so, please let = me know, any suggestions on this issue.

Linux-kernel version on VM: = 5.15.0
DPDK-version: 24.11

Sample output of =E2=80=9Ctestpmd=E2= =80=9D App as follows:
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D
./dpdk-testpmd -l 2-3 -n 2 -v --legacy-mem =C2=A0-- -i =C2= =A0--mbcache=3D64
EAL: Detected CPU lcores: 8
EAL: Detected NUMA node= s: 1
EAL: Static memory layout is selected, amount of reserved memory ca= n be adjusted with -m or --socket-mem
EAL: Detected static linkage of DP= DK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selecte= d IOVA mode 'PA'
Interactive-mode selected
testpmd: create a = new mbuf pool <mb_pool_0>: n=3D149504, size=3D2176, socket=3D0
tes= tpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (soc= ket 0)
Port 0: 00:15:5D:2E:CC:1E
Configuring Port 1 (socket 0)
Por= t 1: 00:15:5D:2E:CC:1F
Checking link statuses...
Done
testpmd> = sh port info 0
Command not found
testpmd> show port info 0

= ********************* Infos for port 0 =C2=A0*********************
MAC a= ddress: 00:15:5D:2E:CC:1E
Device name: 7cac5c55-1a7c-4690-a70c-13d4acbb3= 5ac
Driver name: net_netvsc
Firmware-version: not available
Connec= t to socket: 0
memory allocation on the socket: 0
Link status: up
= Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: On
MT= U: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maxi= mum number of MAC addresses: 1
Maximum number of MAC addresses of hash f= iltering: 0
VLAN offload:
=C2=A0 strip off, filter off, extend off, q= inq strip off
Hash key size in bytes: 40
Redirection table size: 128<= br>Supported RSS offload flow types:
=C2=A0 ipv4 =C2=A0ipv4-tcp =C2=A0ip= v4-udp =C2=A0ipv6 =C2=A0ipv6-tcp
Minimum size of RX buffer: 1024
Maxi= mum configurable length of RX packet: 16128
Maximum configurable size of= LRO aggregated packet: 0
Current number of RX queues: 1
Max possible= RX queues: 64
Max possible number of RXDs per queue: 65535
Min possi= ble number of RXDs per queue: 0
RXDs number alignment: 1
Current numb= er of TX queues: 1
Max possible TX queues: 64
Max possible number of = TXDs per queue: 4096
Min possible number of TXDs per queue: 1
TXDs nu= mber alignment: 1
Max segment number per packet: 65535
Max segment nu= mber per MTU/TSO: 65535
Device capabilities: 0x0( )
Device error hand= ling mode: none
Device private info:
=C2=A0 none
testpmd>
te= stpmd> start
io packet forwarding - ports=3D2 - cores=3D1 - streams= =3D2 - NUMA support enabled, MP allocation mode: native
Logical Core 3 (= socket 0) forwards packets on 2 streams:
=C2=A0 RX P=3D0/Q=3D0 (socket 0= ) -> TX P=3D1/Q=3D0 (socket 0) peer=3D02:00:00:00:00:01
=C2=A0 RX P= =3D1/Q=3D0 (socket 0) -> TX P=3D0/Q=3D0 (socket 0) peer=3D02:00:00:00:00= :00

=C2=A0 io packet forwarding packets/burst=3D32
=C2=A0 nb forw= arding cores=3D1 - nb forwarding ports=3D2
=C2=A0 port 0: RX queue numbe= r: 1 Tx queue number: 1
=C2=A0 =C2=A0 Rx offloads=3D0x0 Tx offloads=3D0x= 0
=C2=A0 =C2=A0 RX queue: 0
=C2=A0 =C2=A0 =C2=A0 RX desc=3D512 - RX f= ree threshold=3D0
=C2=A0 =C2=A0 =C2=A0 RX threshold registers: pthresh= =3D0 hthresh=3D0 =C2=A0wthresh=3D0
=C2=A0 =C2=A0 =C2=A0 RX Offloads=3D0x= 0
=C2=A0 =C2=A0 TX queue: 0
=C2=A0 =C2=A0 =C2=A0 TX desc=3D512 - TX f= ree threshold=3D0
=C2=A0 =C2=A0 =C2=A0 TX threshold registers: pthresh= =3D0 hthresh=3D0 =C2=A0wthresh=3D0
=C2=A0 =C2=A0 =C2=A0 TX offloads=3D0x= 0 - TX RS bit threshold=3D0
=C2=A0 port 1: RX queue number: 1 Tx queue n= umber: 1
=C2=A0 =C2=A0 Rx offloads=3D0x0 Tx offloads=3D0x0
=C2=A0 =C2= =A0 RX queue: 0
=C2=A0 =C2=A0 =C2=A0 RX desc=3D512 - RX free threshold= =3D0
=C2=A0 =C2=A0 =C2=A0 RX threshold registers: pthresh=3D0 hthresh=3D= 0 =C2=A0wthresh=3D0
=C2=A0 =C2=A0 =C2=A0 RX Offloads=3D0x0
=C2=A0 =C2= =A0 TX queue: 0
=C2=A0 =C2=A0 =C2=A0 TX desc=3D512 - TX free threshold= =3D0
=C2=A0 =C2=A0 =C2=A0 TX threshold registers: pthresh=3D0 hthresh=3D= 0 =C2=A0wthresh=3D0
=C2=A0 =C2=A0 =C2=A0 TX offloads=3D0x0 - TX RS bit t= hreshold=3D0
testpmd>
testpmd> show port stats 0

=C2=A0 = ######################## NIC statistics for port 0 =C2=A0##################= ######
=C2=A0 RX-packets: 0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RX-missed:= 0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0RX-bytes: =C2=A00
=C2=A0 RX-errors:= 0
=C2=A0 RX-nombuf: =C2=A00
=C2=A0 TX-packets: 0 =C2=A0 =C2=A0 =C2= =A0 =C2=A0 =C2=A0TX-errors: 0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0TX-bytes: = =C2=A00

=C2=A0 Throughput (since last show)
=C2=A0 Rx-pps: =C2=A0= =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0Rx-b= ps: =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00
=C2=A0 Tx-pps: =C2=A0 =C2= =A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0Tx-bps: = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A00
=C2=A0 ######################= ######################################################
testpmd>
= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D
We had pump= ed the traffic from another machine to this MAC address.

=
Thanks,
Madhukar.
--000000000000147a08063e74eb7d--