Thanks, Asaf, for your reply.

We want to use shared Rx queues between the PF and the VF representor. I tested this feature with the testpmd shared_rxq forwarding mode, passing the "--rxq-share" option on the testpmd command line. On server A we were able to create shared Rx queues between the PF and VF_rep0, but on server B we were not; server B instead created individual Rx queues on the PF and VF_rep0. We confirmed this by printing the stats of both ports.

As stated in the mlx5 PMD documentation: "Shared Rx queue: Counters of received packets and bytes number of devices in same share group are same. Counters of received packets and bytes number of queues in same group and queue ID are same."

Since the PF and VF_rep0 should therefore report the same Rx stats, we printed the port stats on both servers: server A shows identical stats on the PF and VF_rep0, but server B does not. Please review the test configurations and port stats from both servers below.

Configurations on SERVER A and SERVER B:
$ sudo ./build_22_07/app/dpdk-testpmd -l 0-1 -n 4 -a 0000:04:00.0,representor=vf[0] -- -i --nb-cores=1 --rxq-share=2
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0

testpmd> set fwd shared_rxq
testpmd> start

SEND PACKET FROM SCAPY:
A packet sent from the wire is received on the PF, and on server A the port stats of both ports were updated correctly:
sendp(Ether()/Dot1Q(vlan=123)/IP()/UDP(),iface='enp130s0f0',count=24)

SHOW STATS ON SERVER A:
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0        
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0        
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

SHOW STATS ON SERVER B:

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0        
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            1          Rx-bps:          808
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0        
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

Regards,
Haider

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, October 6, 2022 11:44 AM
To: Haider Ali <haider@dreambigsemi.com>; users <users@dpdk.org>
Subject: Re: mlx5 - shared_rxq not working on some Connect x6 DX
 
Hello Haider,
Besides the info print issue, can you describe the use case?
What are you trying to achieve, and do you see any issue?

Regards,
Asaf Penso

From: Haider Ali <haider@dreambigsemi.com>
Sent: Tuesday, October 4, 2022 4:24:03 PM
To: users <users@dpdk.org>
Cc: Asaf Penso <asafp@nvidia.com>
Subject: mlx5 - shared_rxq not working on some Connect x6 DX
 
Hi,

I used two servers, A and B. Server A has ConnectX-5 and ConnectX-6 Dx cards, while server B has a ConnectX-6 Dx card.

I checked the ConnectX-5 and ConnectX-6 Dx cards on server A, and both report the RXQ_SHARE device capability below:

testpmd> show port info all

Device capabilities: 0x14( RXQ_SHARE FLOW_SHARED_OBJECT_KEEP )

Server A setting:
# ofed_info -s
MLNX_OFED_LINUX-5.4-3.1.0.0:

# ethtool -i enp129s0f0
driver: mlx5_core
version: 5.4-3.1.0
firmware-version: 16.32.1010 (MT_0000000080)
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes


But when I moved to server B with another ConnectX-6 Dx card, I could not see this capability.

testpmd> show port info all

Device capabilities: 0x14( FLOW_SHARED_OBJECT_KEEP )

Server B Settings:
# ofed_info -s
MLNX_OFED_LINUX-5.4-3.5.8.0:

# ethtool -i enp132s0f0
driver: mlx5_core
version: 5.4-3.5.8
firmware-version: 22.31.1014 (MT_0000000436)
expansion-rom-version:
bus-info: 0000:84:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes


Although server B has newer OFED and firmware versions, my question is: do I need to enable or disable any firmware settings? Or are there any other configurations we need to apply?

Regards,
Haider