* DPDK Ring Q
From: Lombardo, Ed @ 2025-05-29  5:20 UTC
  To: dev


Hi,
I have an issue with DPDK 24.11.1 and a 2-port 100G Intel NIC (E810-C) in a dual-socket server with 22-core CPUs.

A dedicated CPU core retrieves packets from DPDK with rte_eth_rx_burst() and enqueues the mbufs onto a worker ring; this thread does nothing else.  The NIC is dropping packets at 8.5 Gbps per port.
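
For reference, the receive loop looks roughly like this (a minimal sketch, not my exact code; the burst size and the handling of a full ring are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ring.h>

    #define BURST_SIZE 32   /* placeholder burst size */

    /* RX thread: pull packets from queue 0 of one port and hand them
     * to the worker ring; the thread does nothing else. */
    static void
    rx_loop(uint16_t port_id, struct rte_ring *worker_ring)
    {
        struct rte_mbuf *bufs[BURST_SIZE];

        for (;;) {
            uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs,
                    BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* Single-producer enqueue onto the worker ring. */
            unsigned int nb_enq = rte_ring_sp_enqueue_burst(worker_ring,
                    (void **)bufs, nb_rx, NULL);

            /* Free anything the ring could not take. */
            for (unsigned int i = nb_enq; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);
        }
    }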

Studying the perf report, common_ring_mc_dequeue() caught my attention: perf shows it at 92.86% Self and 92.86% Children.

Further down, perf shows rte_ring_enqueue_bulk() and rte_ring_enqueue_bulk_elem() at 0.00% Self and 0.05% Children.
It also shows rte_ring_sp_enqueue_bulk_elem (inlined), which is what I wanted to see: the single-producer path enqueueing the mbuf pointers onto the worker ring.
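
For completeness, the worker ring itself is created with the single-producer flag, roughly like this (a sketch; the name and size are placeholders):

    #include <rte_lcore.h>
    #include <rte_ring.h>

    /* Worker ring: single producer (the RX thread), workers dequeue. */
    struct rte_ring *worker_ring = rte_ring_create("worker_ring",
            4096,               /* ring size, power of 2 (placeholder) */
            rte_socket_id(),    /* allocate on the RX core's socket */
            RING_F_SP_ENQ);     /* single-producer enqueue path */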

Is it possible to change common_ring_mc_dequeue() to common_ring_sc_dequeue()?  Can it be set to a single consumer, since I only use queue 0?
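
If it helps clarify what I am after: my understanding is that common_ring_mc_dequeue comes from the mbuf mempool's backing ring, so I am effectively asking whether the pool can be created with single-consumer ops.  Something like the sketch below, assuming rte_pktmbuf_pool_create_by_ops() with the "ring_sp_sc" ops name is the right mechanism (the pool name and sizes are placeholders, and sp/sc is only safe with exactly one core allocating from and one core freeing to the pool):

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    struct rte_mempool *mp = rte_pktmbuf_pool_create_by_ops(
            "mbuf_pool",                /* pool name (placeholder) */
            8191,                       /* number of mbufs (placeholder) */
            256,                        /* per-lcore cache size */
            0,                          /* private data size */
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room size */
            rte_socket_id(),            /* socket of the RX core */
            "ring_sp_sc");              /* sp/sc ring mempool ops */
    if (mp == NULL)
        printf("mbuf pool creation failed: %s\n",
                rte_strerror(rte_errno));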

I believe this is what keeps my setup from reaching 90 Gbps or higher, which is my goal.

I made sure the E810-C firmware is up to date: NIC FW version 4.80 0x80020543 1.3805.0.

Perf report shows:
   - 99.65% input_thread
      - 99.35% rte_eth_rx_burst (inlined)
         - ice_recv_scattered_pkts
              92.83% common_ring_mc_dequeue

Any thoughts or suggestions?

Thanks,
Ed

