DPDK usage discussions
From: Asaf Penso <asafp@nvidia.com>
To: "Дмитрий Степанов" <stepanov.dmit@gmail.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: Mellanox performance degradation with more than 12 lcores
Date: Fri, 18 Feb 2022 13:30:22 +0000	[thread overview]
Message-ID: <MWHPR1201MB2557C600084627D089346E7DCD379@MWHPR1201MB2557.namprd12.prod.outlook.com> (raw)
In-Reply-To: <CA+-SuJ3FqMW5aTcuEpuqoKLffothhqn5karWTAC=fETpOs_3Rw@mail.gmail.com>


Hello Dmitry,

Could you please paste the testpmd command line for each experiment?

Also, have you looked into the dpdk.org performance reports to see how to tune for best results?

Regards,
Asaf Penso
________________________________
From: Дмитрий Степанов <stepanov.dmit@gmail.com>
Sent: Friday, February 18, 2022 9:32:59 AM
To: users@dpdk.org <users@dpdk.org>
Subject: Mellanox performance degradation with more than 12 lcores

Hi folks!

I'm using Mellanox ConnectX-6 Dx EN adapter card (100GbE; Dual-port QSFP56; PCIe 4.0/3.0 x16) with DPDK 21.11 on a server with AMD EPYC 7702 64-Core Processor (NUMA system with 2 sockets). Hyperthreading is turned off.
I'm testing the maximum receive throughput I can get from a single port using the testpmd utility (shipped with DPDK). My generator produces random UDP packets with zero payload length.

I get the maximum performance with 8-12 lcores (120-125 Mpps overall on the receive path of a single port):

numactl -N 1 -m 1 /opt/dpdk-21.11/build/app/dpdk-testpmd -l 64-127 -n 4  -a 0000:c1:00.0 -- --stats-period 1 --nb-cores=12 --rxq=12 --txq=12 --rxd=512

With more than 12 lcores, overall receive performance drops. With 16-32 lcores I get 100-110 Mpps, and there is a sharp fall at 33 lcores (84 Mpps). With 63 lcores I get only 35 Mpps overall.
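For reference, the runs above can be reproduced by sweeping the queue/core count; a minimal dry-run sketch (the testpmd path, PCI address, and NUMA pinning are taken from the command above, while the sweep values and the `gen_cmds` helper are illustrative):

```shell
#!/bin/sh
# Print the testpmd invocation for each lcore count under test.
# Dry run: pipe the output to sh (or drop the echo) to actually execute.
TESTPMD=/opt/dpdk-21.11/build/app/dpdk-testpmd   # path from the report above
PCI_ADDR=0000:c1:00.0                            # port under test

gen_cmds() {
    for n in 8 12 16 32 33 63; do                # sweep values are illustrative
        # one RX/TX queue per forwarding lcore, 512 RX descriptors as above
        echo "numactl -N 1 -m 1 $TESTPMD -l 64-127 -n 4 -a $PCI_ADDR --" \
             "--stats-period 1 --nb-cores=$n --rxq=$n --txq=$n --rxd=512"
    done
}

gen_cmds
```

Recording the reported Mpps per run makes the knee at 12 lcores and the cliff at 33 easy to plot.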

Is there any limit on the total number of receive queues (and thus lcores) that can serve a single port on this NIC?

Thanks,
Dmitriy Stepanov



Thread overview: 5+ messages
2022-02-18  7:32 Дмитрий Степанов
2022-02-18 13:30 ` Asaf Penso [this message]
2022-02-18 13:49   ` Дмитрий Степанов
2022-02-18 13:39 ` Dmitry Kozlyuk
2022-02-18 16:14   ` Дмитрий Степанов
