From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
To: Ivan Malov <ivan.malov@arknetworks.am>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users <users@dpdk.org>
Subject: RE: dpdk Tx falling short
Date: Tue, 8 Jul 2025 16:31:15 +0000
Message-ID: <CH3PR01MB8470A19065C3F7780CF31E638F4EA@CH3PR01MB8470.prod.exchangelabs.com>
In-Reply-To: <1b7533d3-a3de-b5e9-8838-2d6608f2c8e5@arknetworks.am>
Hi Ivan,
Thanks, this clears up my confusion. The mempool created with API [2] and shared by the network ports' Rx and Tx queues must be MP/MC, and the CPU cycles spent in common_ring_mp_enqueue increase as more ports are transmitting. So, because there is only one mempool, the transmit path and the Rx queues of all the ports end up fighting for access to it. Is that the right way to read it?
This is why you suggested creating two mempools, one for each pair of ports.
If I go this route, what precautions do I need to take?
I will try RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload flag first.
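For the two-mempool route, this is roughly what I have in mind (a sketch only; the pool names, sizes and port pairing below are placeholders, not our actual configuration):

    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    /* One mbuf pool per port pair, so the pairs never contend on the same
     * internal mempool ring. Sizes are placeholders. */
    struct rte_mempool *pool_pair_a, *pool_pair_b;

    pool_pair_a = rte_pktmbuf_pool_create("mbufs_ports_0_1", 262144, 512, 0,
                                          RTE_MBUF_DEFAULT_BUF_SIZE,
                                          rte_socket_id());
    pool_pair_b = rte_pktmbuf_pool_create("mbufs_ports_2_3", 262144, 512, 0,
                                          RTE_MBUF_DEFAULT_BUF_SIZE,
                                          rte_socket_id());

    /* rte_eth_rx_queue_setup() for ports 0 and 1 would then be given
     * pool_pair_a, and ports 2 and 3 pool_pair_b. */

I assume the main precaution is sizing each pool for the worst-case number of in-flight mbufs of its own pair, plus the per-lcore caches, but please correct me if there is more to it.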
Thanks,
Ed
-----Original Message-----
From: Ivan Malov <ivan.malov@arknetworks.am>
Sent: Tuesday, July 8, 2025 10:49 AM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; users <users@dpdk.org>
Subject: RE: dpdk Tx falling short
On Tue, 8 Jul 2025, Lombardo, Ed wrote:
> Hi Ivan,
> Yes, only the user space created rings.
> Can you add more to your thoughts?
I was seeking to address the probable confusion here. If the application creates an SP / SC ring for its own pipeline logic using API [1] and then invokes another API [2] to create a common "mbuf mempool" to be used with the Rx and Tx queues of the network ports, then the observed appearance of "common_ring_mp_enqueue" is likely attributable to the fact that API [2] creates a ring-based mempool internally, and in MP / MC mode by default. The latter ring is not the same as the one created by the application logic; these are two independent rings.
BTW, does your application set RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload flag when configuring Tx port/queue offloads on the network ports?
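If not, a minimal sketch of requesting it (assuming the PMD advertises the capability; the variable names here are illustrative):

    #include <rte_ethdev.h>

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
            port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

    /* Note: the offload requires that all mbufs freed on a given Tx queue
     * come from the same mempool and have reference count 1. */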
Thank you.
[1] https://doc.dpdk.org/api-25.03/rte__ring_8h.html#a155cb48ef311eddae9b2e34808338b17
[2] https://doc.dpdk.org/api-25.03/rte__mbuf_8h.html#a8f4abb0d54753d2fde515f35c1ba402a
[3] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5
>
> Ed
>
> -----Original Message-----
> From: Ivan Malov <ivan.malov@arknetworks.am>
> Sent: Tuesday, July 8, 2025 10:19 AM
> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; users <users@dpdk.org>
> Subject: RE: dpdk Tx falling short
>
> Hi Ed,
>
> On Tue, 8 Jul 2025, Lombardo, Ed wrote:
>
>> Hi Stephen,
>> When I replace rte_eth_tx_burst() with an mbuf bulk free, I do not see the tx ring fill up. I think this is valuable information. Also, perf analysis of the tx thread shows common_ring_mp_enqueue and rte_atomic32_cmpset, which I did not expect to see, since I created all the Tx rings as SP and SC (and the worker and ack rings as well, essentially all 16 rings).
>>
>> Perf report snippet:
>> + 57.25% DPDK_TX_1 test [.] common_ring_mp_enqueue
>> + 25.51% DPDK_TX_1 test [.] rte_atomic32_cmpset
>> + 9.13% DPDK_TX_1 test [.] i40e_xmit_pkts
>> + 6.50% DPDK_TX_1 test [.] rte_pause
>> 0.21% DPDK_TX_1 test [.] rte_mempool_ops_enqueue_bulk.isra.0
>> 0.20% DPDK_TX_1 test [.] dpdk_tx_thread
>>
>> The traffic load is a constant 10 Gbps of 84-byte packets with no idle gaps. The burst size of 512 is the desired number of mbufs; however, the tx thread will transmit whatever it can get from the Tx ring.
>>
>> I think resolving why the perf analysis shows the ring as MP, when it was created as SP / SC, should resolve this issue.
>
> The 'common_ring_mp_enqueue' is the enqueue method of the 'ring' mempool variant, that is, a mempool based on an RTE ring internally. When you say the ring has been created as SP / SC, you seemingly refer to the regular RTE ring created by your application logic, not to the internal ring of the mempool. Am I missing something?
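>
> For illustration, this is just the default behaviour of the two APIs (the names below are made up):
>
>     /* The ring the application creates explicitly is SP / SC because the
>      * flags say so (rte_ring.h). */
>     struct rte_ring *r = rte_ring_create("app_tx_ring", 4096, rte_socket_id(),
>                                          RING_F_SP_ENQ | RING_F_SC_DEQ);
>
>     /* The mbuf mempool's *internal* ring is MP / MC by default (rte_mbuf.h),
>      * which is where common_ring_mp_enqueue comes from when the PMD frees
>      * mbufs on Tx completion. */
>     struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", 262144, 512,
>                                          0, RTE_MBUF_DEFAULT_BUF_SIZE,
>                                          rte_socket_id());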
>
> Thank you.
>
>>
>> Thanks,
>> ed
>>
>> -----Original Message-----
>> From: Stephen Hemminger <stephen@networkplumber.org>
>> Sent: Tuesday, July 8, 2025 9:47 AM
>> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
>> Cc: Ivan Malov <ivan.malov@arknetworks.am>; users <users@dpdk.org>
>> Subject: Re: dpdk Tx falling short
>>
>> On Tue, 8 Jul 2025 04:10:05 +0000
>> "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
>>
>>> Hi Stephen,
>>> I ensured that every pipeline stage that enqueues or dequeues mbufs uses the burst versions; perf had shown the repercussions of doing single-mbuf dequeues and enqueues.
>>> For the receive stage rte_eth_rx_burst() is used, and for the Tx stage we use rte_eth_tx_burst(). The burst size used in the tx_thread for the dequeue burst is 512 mbufs.
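>>> The shape of the tx thread's inner loop is roughly the following (simplified for illustration, not the actual code):
>>>
>>>     nb = rte_ring_dequeue_burst(tx_ring, (void **)pkts, 512, NULL);
>>>     if (nb > 0) {
>>>             sent = rte_eth_tx_burst(port_id, 0, pkts, nb);
>>>             /* anything not sent stays in pkts[] and is retried later */
>>>     }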
>>
>> You might try buffering like rte_eth_tx_buffer does.
>> You would need to add an additional mechanism to ensure that the buffer gets flushed when you detect an idle period.
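>>
>> A rough sketch of that approach (the port/queue ids, buffer size and idle detection below are placeholders, not from your application):
>>
>>     #include <rte_ethdev.h>
>>     #include <rte_malloc.h>
>>
>>     struct rte_eth_dev_tx_buffer *buf;
>>
>>     buf = rte_zmalloc_socket("tx_buffer", RTE_ETH_TX_BUFFER_SIZE(512), 0,
>>                              rte_socket_id());
>>     rte_eth_tx_buffer_init(buf, 512);
>>
>>     /* hot path: buffers the mbuf and transmits automatically once 512
>>      * packets have accumulated */
>>     rte_eth_tx_buffer(port_id, 0, buf, m);
>>
>>     /* when an idle period is detected, push out whatever is left */
>>     rte_eth_tx_buffer_flush(port_id, 0, buf);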
>>
>