DPDK usage discussions
From: Haider Ali <haider@dreambigsemi.com>
To: Asaf Penso <asafp@nvidia.com>, users <users@dpdk.org>
Subject: Re: mlx5 - shared_rxq not working on some ConnectX-6 Dx
Date: Thu, 6 Oct 2022 07:48:26 +0000	[thread overview]
Message-ID: <MW5PR22MB3395BDBFC6C722F8844994B6A75C9@MW5PR22MB3395.namprd22.prod.outlook.com> (raw)
In-Reply-To: <DM5PR1201MB255597C7B84036E55092C7B7CD5C9@DM5PR1201MB2555.namprd12.prod.outlook.com>


Thanks, Asaf, for your reply.

We want to use shared Rx queues (shared_rxqs) between the PF and VF_rep. I tried to test this feature with the testpmd shared_rxq forwarding mode, running testpmd with the "--rxq-share" command-line option. On server A we were able to create shared Rx queues between the PF and VF_rep0, but on server B we were not; instead, server B created individual Rx queues on the PF and VF_rep0. We confirmed this by printing the stats of both ports.

As the mlx5 documentation states: "Shared Rx queue: Counters of received packets and bytes number of devices in same share group are same. Counters of received packets and bytes number of queues in same group and queue ID are same."

Since the PF and VF_rep0 should therefore report the same Rx stats, printing the port stats on server A shows identical counters on both the PF and VF_rep, but server B does not. Please review the test configurations and port stats for both servers below.
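
For context, this is roughly how a shared Rx queue is set up through the ethdev API. This is a minimal sketch only, not our actual code: port_id, queue_id and mb_pool are assumed to come from the application, and share group 2 is chosen to match the --rxq-share=2 run below.

#include <errno.h>
#include <rte_ethdev.h>

/* Sketch: place one Rx queue of this port into share group 2.
 * Queues configured with the same share_group/share_qid are backed by
 * the same hardware Rx queue, which is why the mlx5 doc says their
 * packet/byte counters are identical. */
static int
setup_shared_rxq(uint16_t port_id, uint16_t queue_id,
                 struct rte_mempool *mb_pool)
{
        struct rte_eth_dev_info info;
        struct rte_eth_rxconf rxconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
                return ret;

        /* The PMD must advertise the shared Rx queue capability. */
        if (!(info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE))
                return -ENOTSUP;

        rxconf = info.default_rxconf;
        rxconf.share_group = 2;      /* same group on the PF and VF_rep0 */
        rxconf.share_qid = queue_id; /* shared queue index within the group */

        return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mb_pool);
}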

Configurations on SERVER A and SERVER B:
$ sudo ./build_22_07/app/dpdk-testpmd -l 0-1 -n 4 -a 0000:04:00.0,representor=vf[0] -- -i --nb-cores=1 --rxq-share=2
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0

testpmd> set fwd shared_rxq
testpmd> start

SEND PACKET FROM SCAPY:
The packets sent from the wire are received on the PF, and with a shared Rx queue the port stats of both ports should be updated:
sendp(Ether()/Dot1Q(vlan=123)/IP()/UDP(),iface='enp130s0f0',count=24)

SHOW STATS ON SERVER A:
testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

SHOW STATS ON SERVER B:

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 24         RX-missed: 0          RX-bytes:  1440
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            1          Rx-bps:          808
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 0          RX-missed: 0          RX-bytes:  0
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

Regards,
Haider
________________________________
From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, October 6, 2022 11:44 AM
To: Haider Ali <haider@dreambigsemi.com>; users <users@dpdk.org>
Subject: Re: mlx5 - shared_rxq not working on some ConnectX-6 Dx

Hello Haider,
Besides the info print issue, can you describe the use case?
What are you trying to achieve, and do you see any issue?

Regards,
Asaf Penso
________________________________
From: Haider Ali <haider@dreambigsemi.com>
Sent: Tuesday, October 4, 2022 4:24:03 PM
To: users <users@dpdk.org>
Cc: Asaf Penso <asafp@nvidia.com>
Subject: mlx5 - shared_rxq not working on some ConnectX-6 Dx

Hi,

I used two servers, A and B. Server A has ConnectX-5 and ConnectX-6 Dx cards, while server B has a ConnectX-6 Dx card.

I checked the ConnectX-5 and ConnectX-6 Dx cards on server A, and both report the RXQ_SHARE device capability shown below:

testpmd> show port info all

Device capabilities: 0x14( RXQ_SHARE FLOW_SHARED_OBJECT_KEEP )

Server A setting:
# ofed_info -s
MLNX_OFED_LINUX-5.4-3.1.0.0:

# ethtool -i enp129s0f0
driver: mlx5_core
version: 5.4-3.1.0
firmware-version: 16.32.1010 (MT_0000000080)
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes


But when I moved to server B, with another ConnectX-6 Dx card, I could not see this capability.

testpmd> show port info all

Device capabilities: 0x14( FLOW_SHARED_OBJECT_KEEP )
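
In case it helps, the capability that testpmd prints here can also be queried directly from an application. A minimal sketch, with port_id assumed to come from the caller:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: report whether a port advertises the shared Rx queue
 * capability, i.e. whether RXQ_SHARE would show up in testpmd's
 * "Device capabilities" line. */
static void
print_rxq_share_capa(uint16_t port_id)
{
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
                return;

        printf("port %u: dev_capa=0x%" PRIx64 ", RXQ_SHARE %s\n",
               port_id, info.dev_capa,
               (info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) ?
               "supported" : "not supported");
}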

Server B Settings:
# ofed_info -s
MLNX_OFED_LINUX-5.4-3.5.8.0:

# ethtool -i enp132s0f0
driver: mlx5_core
version: 5.4-3.5.8
firmware-version: 22.31.1014 (MT_0000000436)
expansion-rom-version:
bus-info: 0000:84:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes


Although server B has newer OFED and firmware versions, my question is: do I need to enable or disable any firmware settings? Or are there any other configurations we need to apply?

Regards,
Haider


Thread overview: 7+ messages
2022-10-04 13:24 Haider Ali
2022-10-06  6:44 ` Asaf Penso
2022-10-06  7:48   ` Haider Ali [this message]
2022-10-18  6:04   ` Xueming(Steven) Li
2022-10-18  6:10     ` Haider Ali
2022-10-18  6:30       ` Xueming(Steven) Li
2022-10-19 10:30         ` Haider Ali
