From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/other Bug 1533] testpmd performance drops with Mellanox ConnectX6 when using 8 cores 8 queues
Date: Thu, 05 Sep 2024 12:53:59 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1533
Bug ID:           1533
Summary:          testpmd performance drops with Mellanox ConnectX6 when using 8 cores 8 queues
Product:          DPDK
Version:          23.11
Hardware:         All
OS:               Linux
Status:           UNCONFIRMED
Severity:         normal
Priority:         Normal
Component:        other
Assignee:         dev@dpdk.org
Reporter:         wangliangxing@hygon.cn
Target Milestone: ---

Created attachment 287 [details]
mpps and packets stats of 8 cores 8 queues

Environment: Intel Cascade Lake server running CentOS 7.

The Mellanox ConnectX6 NIC and the cores used are on the same NUMA node.
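This can be cross-checked from sysfs and lscpu (a minimal sketch; assumes the PCI address 0000:af:00.0 used in the testpmd commands below):

    # NUMA node the NIC is attached to
    cat /sys/bus/pci/devices/0000:af:00.0/numa_node

    # NUMA node ranges of the CPUs (cores 24-32 are passed via -l below)
    lscpu | grep 'NUMA node'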

Input traffic is always line rate 100 Gbps, 64-byte packets, 256 flows.
Test duration is 30 seconds.

Run testpmd in io mode with 7 cores and 7 queues:

    ./dpdk-testpmd -l 24-32 -n 4 -a af:00.0 -- --nb-cores=7 --rxq=7 --txq=7 -i

Rx/Tx throughput is 91.6/91.6 MPPS. No TX-dropped packets.
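The throughput and drop numbers are the counters printed at the testpmd interactive prompt (a sketch of the assumed measurement flow; "show port stats all" and the "stop" summary are standard testpmd commands):

    testpmd> start
    # ... traffic generator sends 64-byte packets for 30 seconds ...
    testpmd> show port stats all    # per-port Rx-pps / Tx-pps
    testpmd> stop                   # prints accumulated forward stats, incl. RX/TX-dropped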

However, run testpmd in io mode with 8 cores and 8 queues:

    ./dpdk-testpmd -l 24-32 -n 4 -a af:00.0 -- --nb-cores=8 --rxq=8 --txq=8 -i

Rx/Tx throughput is 113.6/85.4 MPPS. The Tx rate is lower than the Tx rate with 7 cores, and there are a lot of TX-dropped packets. Please refer to the attached picture.
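To see which forwarding cores and queues account for the drops, per-stream stats can be dumped while traffic is running (a sketch; "show fwd stats all" is a standard testpmd command):

    testpmd> show fwd stats all     # per RX/TX queue-pair counters, incl. TX-dropped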

I notice a similar issue on other x86 and aarch64 servers too.


You are receiving this mail because:
  • You are the assignee for the bug.