DPDK usage discussions
* Intermittent Azure NetVSC initialisation issues (DPDK 23.11.2) - dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
From: Peter Morrow @ 2025-09-01  9:32 UTC
  To: users


Hi Folks,

Azure VM Size:  Standard DS3 v2
Kernel: 6.1.0-38-amd64
Distro: Based on Debian 12
DPDK version: 23.11.2

Our VM has one kernel-managed port and two DPDK-managed ports (the DPDK ports have accelerated networking enabled; the kernel-managed port does not). We are seeing the following continuous warning/error logs from DPDK via VPP:

Aug 25 03:04:03.603881 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:03.978595 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:03.978942 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:03.978980 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:03.979025 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:04.302893 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:04.303720 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:04.603327 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:04.603901 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:04.978651 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:04.978789 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:04.978827 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:04.978893 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:05.302950 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:05.303121 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:05.603390 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:05.603966 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:05.978785 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:05.978911 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:05.978949 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:05.978980 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:06.303010 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:06.303813 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying
Aug 25 03:04:06.603457 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_attach(): Couldn't find port for VF
Aug 25 03:04:06.604092 tunnel-terminator-30000000015 vpp[777]: dpdk: hn_vf_add(): RNDIS reports VF but device not found, retrying


What has happened is that after a reboot one of the AN ports fails to initialise, while the other is completely fine and operational (sending and receiving packets): the netvsc PMD keeps getting the RNDIS indication that a VF exists, but never finds a matching DPDK port for it, so it retries indefinitely. We used to run into issues like this intermittently and usually put them down to "Azure host issues"; reprovisioning the VM typically got us out of the hole. However, the problem seems to be becoming more frequent, hence this query.
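
In case it helps, this is the sort of state we plan to capture the next time it reproduces. These are generic diagnostics rather than output from the failing VM (paths and grep patterns are illustrative only); the intent is to confirm whether the Mellanox VF is actually present on the PCI bus, and what it is bound to, while the netvsc PMD is still retrying:

# Is the accelerated-networking VF visible on the PCI bus at all?
lspci -nn | grep -i mellanox

# Which driver, if any, is each PCI device bound to?
ls -l /sys/bus/pci/devices/*/driver

# Kernel-side view: has a VF netdev appeared alongside the synthetic NIC,
# and did the mlx4/mlx5 or netvsc drivers log any probe errors?
ip -d link show
dmesg | grep -iE 'mlx|netvsc|vf'

# DPDK's view of device bindings
dpdk-devbind.py --status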

I've pasted the "show hardware-interfaces" output for one of the ports in its working state below, in case it is of any use in debugging:

vppctl show hardware-interfaces
Started at 2025-08-26T19:38:51,164557937+00:00
==================================================
              Name                Idx   Link  Hardware
GigabitEthernet1                   2     up   GigabitEthernet1
  Link speed: 50 Gbps
  RX Queues:
    queue thread         mode
    0     vpp_wk_1 (2)   polling
    1     vpp_wk_2 (3)   polling
    2     vpp_wk_0 (1)   polling
    3     vpp_wk_1 (2)   polling
  TX Queues:
    TX Hash: [name: hash-eth-l34 priority: 52 description: Hash ethernet L34 headers]
    queue shared thread(s)
    0     no     0
    1     no     1
    2     no     2
    3     no     3
  Ethernet address 00:22:48:23:58:2f
  Microsoft Hyper-V Netvsc
    carrier up full duplex max-frame-size 2044
    flags: admin-up tx-offload rx-ip4-cksum
    Devargs:
    rx: queues 4 (max 64), desc 1024 (min 0 max 65535 align 1)
    tx: queues 4 (max 64), desc 2048 (min 1 max 4096 align 1)
    max rx packet len: 16128
    promiscuous: unicast off all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum rss-hash
    rx offload active: ipv4-cksum
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum tcp-tso
                       multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         ipv4-tcp ipv4-udp ipv4 ipv6-tcp ipv6
    rss active:        ipv4-tcp ipv4 ipv6-tcp ipv6
    tx burst function: (not available)
    rx burst function: (not available)

    tx errors                                           1457
    rx frames ok                                        1334
    rx bytes ok                                       138643
    extended stats:
      rx_good_packets                                   1334
      rx_good_bytes                                   138643
      tx_errors                                         1457
      rx_q0_packets                                      101
      rx_q0_bytes                                       8221
      rx_q1_packets                                       35
      rx_q1_bytes                                       2974
      rx_q2_packets                                     1124
      rx_q2_bytes                                     121060
      rx_q3_packets                                       74
      rx_q3_bytes                                       6388
      tx_q0_errors                                         2
      tx_q0_multicast_packets                              2
      tx_q0_size_65_127_packets                            1
      tx_q0_size_128_255_packets                           1
      tx_q1_errors                                       839
      tx_q1_multicast_packets                            110
      tx_q1_broadcast_packets                            507
      tx_q1_size_65_127_packets                          284
      tx_q1_size_128_255_packets                          49
      tx_q1_size_256_511_packets                         507
      tx_q2_errors                                        25
      tx_q2_multicast_packets                             25
      tx_q2_size_65_127_packets                           25
      tx_q3_errors                                       591
      tx_q3_multicast_packets                             60
      tx_q3_size_65_127_packets                          652
      rx_q0_good_packets                                 101

Are there any known issues in this area? Is there anything else I can collect to assist in further triage? Are there any steps we can take to mitigate this?
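
One mitigation we have been considering, though it is only an untested sketch (the drop-in name and the 60-second budget are our own assumptions; 15b3 is the Mellanox PCI vendor ID), is to delay starting vpp until a VF is actually visible on the PCI bus, e.g. with a systemd drop-in:

# /etc/systemd/system/vpp.service.d/wait-for-vf.conf  (hypothetical)
[Service]
# Retry for up to ~60 seconds (3 x 20s) waiting for a Mellanox VF to appear
# before vpp/DPDK start probing ports.
ExecStartPre=/bin/sh -c 'for n in 1 2 3; do lspci -d 15b3: | grep -q . && exit 0; sleep 20; done; exit 1'

If there is a better-supported way of handling late VF arrival in the netvsc PMD itself, we would of course prefer that.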

Thanks,
Peter.

