From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 8 Jul 2025 18:49:11 +0400 (+04)
From: Ivan Malov
To: "Lombardo, Ed"
cc: Stephen Hemminger, users
Subject: RE: dpdk Tx falling short
Message-ID: <1b7533d3-a3de-b5e9-8838-2d6608f2c8e5@arknetworks.am>
References: <20250704074957.5848175a@hermes.local> <20250705120834.78849e56@hermes.local>
 <20250706090232.635bd36e@hermes.local> <9ae56e38-0d29-4c7c-0bc2-f92912146da2@arknetworks.am>
 <20250707160409.75fbc2f1@hermes.local> <20250708064707.583df905@hermes.local>
 <4b43a1ce-2dc6-5d46-12e0-b26d13a60633@arknetworks.am>
List-Id: DPDK usage discussions

On Tue, 8 Jul 2025, Lombardo, Ed wrote:

> Hi Ivan,
> Yes, only the user space created rings.
> Can you add more to your thoughts?

I was seeking to address a probable point of confusion here. If the application
creates an SP / SC ring for its own pipeline logic using API [1] and then
invokes another API [2] to create a common "mbuf mempool" to be used with the
Rx and Tx queues of the network ports, then the observed appearance of
"common_ring_mp_enqueue" is likely explained by the fact that API [2] creates a
ring-based mempool internally, in MP / MC mode by default. That internal ring
is not the same as the one created by the application logic; these are two
independent rings.

BTW, does your application set the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload
flag when configuring Tx port/queue offloads on the network ports?

Thank you.
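To illustrate the two-rings point, here is a minimal sketch (the names, sizes
and the port id are illustrative placeholders, not taken from your application;
error handling is trimmed):

#include <stdint.h>

#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Illustrative sizes only. */
#define APP_RING_SIZE 4096
#define NB_MBUF       8192
#define MBUF_CACHE    256

static int
setup_sketch(uint16_t port_id)
{
	/* Application pipeline ring, explicitly SP enqueue / SC dequeue ([1]). */
	struct rte_ring *app_ring = rte_ring_create("app_tx_ring", APP_RING_SIZE,
			rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);

	/*
	 * Mbuf mempool for the NIC Rx/Tx queues ([2]). This is a separate object,
	 * backed by its own internal ring with the default "ring_mp_mc" ops
	 * (unless the build overrides the default), which is where
	 * common_ring_mp_enqueue shows up when the PMD returns transmitted
	 * mbufs to the pool.
	 */
	struct rte_mempool *mb_pool = rte_pktmbuf_pool_create("mbuf_pool",
			NB_MBUF, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());

	if (app_ring == NULL || mb_pool == NULL)
		return -1;

	/*
	 * RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE lets the PMD free mbufs in bulk,
	 * provided all mbufs of a queue come from one mempool and have a
	 * reference count of 1. Check dev_info.tx_offload_capa before
	 * requesting it.
	 */
	struct rte_eth_conf port_conf = {
		.txmode = { .offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE },
	};

	return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}

Note also that rte_pktmbuf_pool_create_by_ops() can request "ring_sp_sc" ops
for the mempool's internal ring, but that is only safe when no more than one
lcore ever puts mbufs back to the pool and no more than one ever gets them,
which is typically not the case once the PMD itself frees mbufs on Tx
completion.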
[1] https://doc.dpdk.org/api-25.03/rte__ring_8h.html#a155cb48ef311eddae9b2e34808338b17
[2] https://doc.dpdk.org/api-25.03/rte__mbuf_8h.html#a8f4abb0d54753d2fde515f35c1ba402a
[3] https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5

>
> Ed
>
> -----Original Message-----
> From: Ivan Malov
> Sent: Tuesday, July 8, 2025 10:19 AM
> To: Lombardo, Ed
> Cc: Stephen Hemminger; users
> Subject: RE: dpdk Tx falling short
>
> Hi Ed,
>
> On Tue, 8 Jul 2025, Lombardo, Ed wrote:
>
>> Hi Stephen,
>> When I replace rte_eth_tx_burst() with mbuf free bulk, I do not see the Tx ring fill up. I think this is valuable information. Also, perf analysis of the tx thread shows common_ring_mp_enqueue and rte_atomic32_cmpset, which I did not expect to see if I created all the Tx rings as SP and SC (and the worker and ack rings as well, essentially all 16 rings).
>>
>> Perf report snippet:
>> +   57.25%  DPDK_TX_1  test  [.] common_ring_mp_enqueue
>> +   25.51%  DPDK_TX_1  test  [.] rte_atomic32_cmpset
>> +    9.13%  DPDK_TX_1  test  [.] i40e_xmit_pkts
>> +    6.50%  DPDK_TX_1  test  [.] rte_pause
>>      0.21%  DPDK_TX_1  test  [.] rte_mempool_ops_enqueue_bulk.isra.0
>>      0.20%  DPDK_TX_1  test  [.] dpdk_tx_thread
>>
>> The traffic load is a constant 10 Gbps of 84-byte packets with no idle periods. The burst size of 512 is the desired burst of mbufs; however, the tx thread will transmit whatever it can get from the Tx ring.
>>
>> I think resolving why the perf analysis shows the ring as MP, when it has been created as SP / SC, should resolve this issue.
>
> The 'common_ring_mp_enqueue' is the enqueue method of the mempool variant 'ring', that is, the one based on an RTE ring internally. When you say that the ring has been created as SP / SC, you seemingly refer to the regular RTE ring created by your application logic, not to the internal ring of the mempool. Am I missing something?
>
> Thank you.
>
>>
>> Thanks,
>> ed
>>
>> -----Original Message-----
>> From: Stephen Hemminger
>> Sent: Tuesday, July 8, 2025 9:47 AM
>> To: Lombardo, Ed
>> Cc: Ivan Malov; users
>> Subject: Re: dpdk Tx falling short
>>
>> On Tue, 8 Jul 2025 04:10:05 +0000
>> "Lombardo, Ed" wrote:
>>
>>> Hi Stephen,
>>> I ensured that every pipeline stage that enqueues or dequeues mbufs uses the burst version; perf showed the repercussions of doing single-mbuf dequeue and enqueue.
>>> For the receive stage rte_eth_rx_burst() is used, and in the Tx stage we use rte_eth_tx_burst(). The burst size used in tx_thread for the dequeue burst is 512 mbufs.
>>
>> You might try buffering like rte_eth_tx_buffer does.
>> You would need to add an additional mechanism to ensure that the buffer gets flushed when you detect an idle period.
>>
>
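P.S. Regarding the rte_eth_tx_buffer() approach Stephen suggests in the quoted
part above, a minimal sketch of how it could look; the buffer depth, port/queue
identifiers and the idle-detection condition are placeholders, not taken from
your application:

#include <stdbool.h>
#include <stdint.h>

#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

#define TX_BUF_PKTS 512 /* illustrative buffer depth */

/* One-time setup: allocate and initialise a Tx buffer for one queue. */
static struct rte_eth_dev_tx_buffer *
tx_buffer_setup(uint16_t port_id)
{
	struct rte_eth_dev_tx_buffer *buf = rte_zmalloc_socket("tx_buffer",
			RTE_ETH_TX_BUFFER_SIZE(TX_BUF_PKTS), 0,
			rte_eth_dev_socket_id(port_id));
	if (buf != NULL)
		rte_eth_tx_buffer_init(buf, TX_BUF_PKTS);
	return buf;
}

/*
 * In the Tx thread: rte_eth_tx_buffer() queues the mbuf and transparently
 * calls rte_eth_tx_burst() once TX_BUF_PKTS packets have accumulated.
 * An explicit flush when the pipeline ring turns up empty keeps tail
 * packets from lingering in the buffer.
 */
static void
tx_enqueue_one(uint16_t port_id, uint16_t queue_id,
	       struct rte_eth_dev_tx_buffer *buf, struct rte_mbuf *m,
	       bool ring_idle)
{
	rte_eth_tx_buffer(port_id, queue_id, buf, m);
	if (ring_idle)
		rte_eth_tx_buffer_flush(port_id, queue_id, buf);
}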