Observation: two instances of testpmd. Only one report shows correct stats when you run 16 RX queues with the default RSS config (i.e. rss-ip, rss-udp, etc.); you only see the counter for a single queue. How I know: I took the last report of the first testpmd instance at the end of the run and added up all the bytes. Since I know how long the run was, I can compute the pps and correlate it with the PPS on the actual switch. So either ICE or IAVF doesn't report stats (only for queue 0).

On Fri, Apr 18, 2025 at 10:30 PM spyroot wrote:

> Hi Folks,
>
> I am observing that DPDK test-pmd with the IAVF PMD (ICE PF driver)
> reports statistics incorrectly when the RX side receives a UDP flow that
> randomizes or increments IP/UDP header data (IP/port, etc.).
>
> I tested all 23.x stable releases and all branches.
>
> - If I use a *single* flow (on the TX side all tuples are the same, so on
> the RX side HASH() produces the same result), there is no issue: on the
> RX side I see zero packet drops and the correct pps value reported by
> test-pmd.
>
> - If I increase the number of flows (IP/UDP, etc.), the PPS counter and
> the byte/packet counters report only a single queue (i.e. it looks to me
> like it uses some default queue 0 or something and skips the remaining
> 15; in my case --rxq=16). It could be IAVF doing that or ICE reporting
> that; I'm not sure.
>
> For example, the counter I'm referring to is the test-pmd Rx-pps counter:
>
> Rx-pps: 4158531 Rx-bps: 2129167832
>
> I'm also observing a PMD "fail to fetch stats" error message:
>
> iavf_query_stats(): fail to execute command OP_GET_STATS
> iavf_dev_stats_get(): Get statistics failed
>
> My question:
>
> If I have two instances,
>
> test-pmd --allow X
> test-pmd --allow Y
>
> where X is the PCI address of VF X and Y the PCI address of VF Y, both
> from the same PF, I expect to see the total stats (pps/bytes, etc., i.e.
> the combined value for all 16 queues of port 0), RX-PPS and bytes per
> port, on both instances. Yes/no?
>
> Has anyone had a similar issue in the past?
>
> Thank you,
> MB
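
To cross-check the counters outside of test-pmd, something along the lines of dpdk-proc-info can be used: attach to the running test-pmd as a DPDK secondary process and dump the basic per-port and per-queue RX counters. Below is only a rough sketch of that idea, not my actual setup: it assumes you launch it with --proc-type=secondary plus the same --allow (and --file-prefix, if any) EAL options as the test-pmd instance, that the PMD fills in q_ipackets[], and that it allows stats queries from a secondary process (not every PMD does).

/*
 * Rough sketch, modeled on what dpdk-proc-info does: attach to a running
 * test-pmd as a DPDK secondary process and dump the basic per-port and
 * per-queue RX counters. Port/queue numbering below is illustrative,
 * not taken from the report above.
 */
#include <stdio.h>
#include <inttypes.h>

#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0) {
		printf("EAL init failed\n");
		return -1;
	}

	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_stats st;

		if (rte_eth_stats_get(port_id, &st) != 0) {
			printf("port %u: rte_eth_stats_get() failed\n", port_id);
			continue;
		}

		printf("port %u: ipackets=%" PRIu64 " ibytes=%" PRIu64
		       " imissed=%" PRIu64 "\n",
		       port_id, st.ipackets, st.ibytes, st.imissed);

		/* Basic per-queue stats only exist for the first
		 * RTE_ETHDEV_QUEUE_STAT_CNTRS queues, and only if the PMD
		 * fills them in; all-zero rows here do not by themselves
		 * prove that RSS is putting everything on queue 0. */
		for (unsigned int q = 0; q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
			printf("  rxq %u: packets=%" PRIu64 " bytes=%" PRIu64 "\n",
			       q, st.q_ipackets[q], st.q_ibytes[q]);
	}

	return rte_eal_cleanup();
}

FWIW, inside test-pmd itself, "show fwd stats all" (per forwarding stream, i.e. per port/queue pair) and "show port xstats 0" are other quick ways to see whether anything beyond queue 0 is actually receiving; whether the xstats include per-queue counters depends on the PMD.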