From: Ivan Malov <ivan.malov@arknetworks.am>
To: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>, users <users@dpdk.org>
Subject: RE: dpdk Tx falling short
Date: Tue, 8 Jul 2025 18:49:11 +0400 (+04)
Message-ID: <1b7533d3-a3de-b5e9-8838-2d6608f2c8e5@arknetworks.am>
In-Reply-To: <CH3PR01MB8470A4E2F5D9FDB9AFCB804E8F4EA@CH3PR01MB8470.prod.exchangelabs.com>
On Tue, 8 Jul 2025, Lombardo, Ed wrote:
> Hi Ivan,
> Yes, only the user space created rings.
> Can you add more to your thoughts?
I was trying to clear up a probable source of confusion here. If the application
creates an SP / SC ring for its own pipeline logic using API [1] and then invokes
another API [2] to create a common "mbuf mempool" to be used with the Rx and Tx
queues of the network ports, then the appearance of "common_ring_mp_enqueue" in
the profile is likely explained by the fact that API [2] internally creates a
ring-based mempool, and in MP / MC mode by default. That internal ring is not the
same as the one created by the application logic; these are two independent rings.
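If the goal is to avoid the MP / MC path inside the mempool as well, the pool can
be created with explicit ring ops, for instance via rte_pktmbuf_pool_create_by_ops().
A rough sketch only; the sizes and names are placeholders, and "ring_sp_sc" is
only safe if exactly one lcore ever allocates and exactly one lcore ever frees
mbufs of this pool, because the per-lcore caches do not serialise access to the
backing ring:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_sp_sc_mbuf_pool(void)
{
        /* Illustrative sizes; substitute the real pool dimensions. */
        const unsigned int nb_mbufs = 8192;
        const unsigned int cache_size = 256;

        /* Back the mbuf pool with an SP / SC ring instead of the
         * default "ring_mp_mc" ops. */
        return rte_pktmbuf_pool_create_by_ops("mbuf_pool", nb_mbufs,
                        cache_size, 0 /* priv_size */,
                        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(),
                        "ring_sp_sc");
}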
BTW, does your application set the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload flag
when configuring Tx port/queue offloads on the network ports?
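For reference, a hedged fragment of what enabling that offload at port
configuration time could look like (assuming "port_id" is the application's port
handle; the capability should be checked first, and the offload requires all
mbufs sent on the queue to come from a single mempool and to not be cloned):

struct rte_eth_dev_info dev_info;
struct rte_eth_conf port_conf;

memset(&port_conf, 0, sizeof(port_conf));
rte_eth_dev_info_get(port_id, &dev_info);
if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
        /* Valid only when every mbuf on the queue comes from one
         * mempool and has refcnt == 1. */
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
}
/* ... then rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf) */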
Thank you.
[1] https://doc.dpdk.org/api-25.03/rte__ring_8h.html#a155cb48ef311eddae9b2e34808338b17
[2] https://doc.dpdk.org/api-25.03/rte__mbuf_8h.html#a8f4abb0d54753d2fde515f35c1ba402a
[3] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5
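An easy way to double-check which ops actually back the mbuf mempool is to print
them. A minimal sketch; "mp" stands for the pool handle returned by API [2]:

#include <stdio.h>
#include <rte_mempool.h>

static void
dump_pool_ops(const struct rte_mempool *mp)
{
        /* Prints e.g. "ring_mp_mc" for the default ring-based mempool. */
        printf("mempool '%s' uses ops '%s'\n", mp->name,
               rte_mempool_get_ops(mp->ops_index)->name);
}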
>
> Ed
>
> -----Original Message-----
> From: Ivan Malov <ivan.malov@arknetworks.am>
> Sent: Tuesday, July 8, 2025 10:19 AM
> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; users <users@dpdk.org>
> Subject: RE: dpdk Tx falling short
>
>
> Hi Ed,
>
> On Tue, 8 Jul 2025, Lombardo, Ed wrote:
>
>> Hi Stephen,
>> When I replace rte_eth_tx_burst() with mbuf free bulk I do not see the tx ring fill up. I think this is valuable information. Also, perf analysis of the tx thread shows common_ring_mp_enqueue and rte_atomic32_cmpset, which I did not expect to see, since I created all the Tx rings as SP and SC (and the worker and ack rings as well, essentially all 16 rings).
>>
>> Perf report snippet:
>> + 57.25% DPDK_TX_1 test [.] common_ring_mp_enqueue
>> + 25.51% DPDK_TX_1 test [.] rte_atomic32_cmpset
>> + 9.13% DPDK_TX_1 test [.] i40e_xmit_pkts
>> + 6.50% DPDK_TX_1 test [.] rte_pause
>> 0.21% DPDK_TX_1 test [.] rte_mempool_ops_enqueue_bulk.isra.0
>> 0.20% DPDK_TX_1 test [.] dpdk_tx_thread
>>
>> The traffic load is a constant 10 Gbps of 84-byte packets with no idle gaps. The burst size of 512 is the desired number of mbufs; however, the tx thread will transmit whatever it can get from the Tx ring.
>>
>> I think resolving why the perf analysis shows the ring as MP, when it was created as SP / SC, should resolve this issue.
>
> The 'common_ring_mp_enqueue' is the enqueue method of the 'ring' mempool variant, that is, a mempool backed by an RTE ring internally. When you say the ring has been created as SP / SC, you seem to be referring to the regular RTE ring created by your application logic, not to the mempool's internal ring. Am I missing something?
>
> Thank you.
>
>>
>> Thanks,
>> ed
>>
>> -----Original Message-----
>> From: Stephen Hemminger <stephen@networkplumber.org>
>> Sent: Tuesday, July 8, 2025 9:47 AM
>> To: Lombardo, Ed <Ed.Lombardo@netscout.com>
>> Cc: Ivan Malov <ivan.malov@arknetworks.am>; users <users@dpdk.org>
>> Subject: Re: dpdk Tx falling short
>>
>>
>> On Tue, 8 Jul 2025 04:10:05 +0000
>> "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
>>
>>> Hi Stephen,
>> I ensured that every pipeline stage that enqueues or dequeues mbufs uses the burst version; perf showed the repercussions of doing single-mbuf dequeues and enqueues.
>> The receive stage uses rte_eth_rx_burst() and the Tx stage uses rte_eth_tx_burst(). The burst size used in the tx thread for the dequeue burst is 512 mbufs.
>>
>> You might try buffering, like rte_eth_tx_buffer() does.
>> You would need an additional mechanism to ensure the buffer gets flushed when you detect an idle period.
>>
>
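For what it's worth, here is a rough sketch of the rte_eth_tx_buffer() approach
Stephen suggests above. "tx_ring", "port_id" and "queue_id" stand for the
application's own objects and are assumptions, not taken from the original code:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

#define TX_BURST 512

static void
tx_loop(struct rte_ring *tx_ring, uint16_t port_id, uint16_t queue_id)
{
        struct rte_eth_dev_tx_buffer *txb;
        struct rte_mbuf *pkts[TX_BURST];
        unsigned int i, n;

        txb = rte_zmalloc_socket("tx_buffer",
                        RTE_ETH_TX_BUFFER_SIZE(TX_BURST), 0,
                        rte_socket_id());
        if (txb == NULL)
                return;
        rte_eth_tx_buffer_init(txb, TX_BURST);

        for (;;) {
                n = rte_ring_dequeue_burst(tx_ring, (void **)pkts,
                                TX_BURST, NULL);
                if (n == 0) {
                        /* Idle: flush whatever is still buffered. */
                        rte_eth_tx_buffer_flush(port_id, queue_id, txb);
                        continue;
                }
                for (i = 0; i < n; i++) {
                        /* Transmits automatically once TX_BURST packets
                         * have accumulated in the buffer. */
                        rte_eth_tx_buffer(port_id, queue_id, txb, pkts[i]);
                }
        }
}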