From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 8 Jul 2025 18:18:58 +0400 (+04)
From: Ivan Malov
To: "Lombardo, Ed"
Cc: Stephen Hemminger, users
Subject: RE: dpdk Tx falling short
Message-ID: <4b43a1ce-2dc6-5d46-12e0-b26d13a60633@arknetworks.am>
References: <20250704074957.5848175a@hermes.local>
 <20250705120834.78849e56@hermes.local>
 <20250706090232.635bd36e@hermes.local>
 <9ae56e38-0d29-4c7c-0bc2-f92912146da2@arknetworks.am>
 <20250707160409.75fbc2f1@hermes.local>
 <20250708064707.583df905@hermes.local>
MIME-Version: 1.0
List-Id: DPDK usage discussions

Hi Ed,

On Tue, 8 Jul 2025, Lombardo, Ed wrote:

> Hi Stephen,
> When I replace rte_eth_tx_burst() with mbuf free bulk, I do not see the Tx
> ring fill up. I think this is valuable information. Also, perf analysis of
> the Tx thread shows common_ring_mp_enqueue and rte_atomic32_cmpset, which I
> did not expect to see, since I created all the Tx rings as SP and SC (the
> worker and ack rings as well; essentially all 16 rings).
>
> Perf report snippet:
> +   57.25%  DPDK_TX_1  test  [.] common_ring_mp_enqueue
> +   25.51%  DPDK_TX_1  test  [.] rte_atomic32_cmpset
> +    9.13%  DPDK_TX_1  test  [.] i40e_xmit_pkts
> +    6.50%  DPDK_TX_1  test  [.] rte_pause
>      0.21%  DPDK_TX_1  test  [.] rte_mempool_ops_enqueue_bulk.isra.0
>      0.20%  DPDK_TX_1  test  [.] dpdk_tx_thread
>
> The traffic load is a constant 10 Gbps of 84-byte packets with no idle
> periods. The burst size of 512 is the desired number of mbufs per burst;
> the Tx thread transmits whatever it can get from the Tx ring.
>
> I think resolving why perf shows the ring as MP, when it was created as
> SP/SC, should resolve this issue.

'common_ring_mp_enqueue' is the enqueue method of the mempool variant
'ring', that is, of the mempool that is internally based on an RTE ring.
When you say the ring was created as SP/SC, you seem to refer to the
regular RTE rings created by your application logic, not to the internal
ring of the mempool. Am I missing something?

Thank you.

> Thanks,
> Ed
>
> -----Original Message-----
> From: Stephen Hemminger
> Sent: Tuesday, July 8, 2025 9:47 AM
> To: Lombardo, Ed
> Cc: Ivan Malov; users
> Subject: Re: dpdk Tx falling short
>
> External Email: This message originated outside of NETSCOUT. Do not click
> links or open attachments unless you recognize the sender and know the
> content is safe.
>
> On Tue, 8 Jul 2025 04:10:05 +0000
> "Lombardo, Ed" wrote:
>
>> Hi Stephen,
>> I ensured that every pipeline stage that enqueues or dequeues mbufs uses
>> the burst versions; perf showed the repercussions of doing single-mbuf
>> dequeue and enqueue.
>> For the receive stage rte_eth_rx_burst() is used, and in the Tx stage we
>> use rte_eth_tx_burst(). The burst size used in tx_thread for the dequeue
>> burst is 512 mbufs.
>
> You might try buffering like rte_eth_tx_buffer() does.
> You would need an additional mechanism to ensure the buffer gets flushed
> when you detect an idle period.
>
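[Editor's note] Ivan's point above is that the mempool's internal ring ops
are selected independently of any application-created RTE rings: a pool made
with rte_pktmbuf_pool_create() gets the default "ring_mp_mc" ops, so frees
go through common_ring_mp_enqueue() regardless of how the app's rings were
created. A hedged sketch of selecting SP/SC ops instead (the function name,
cache size, and element count are illustrative, not from this thread):

```c
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: build an mbuf pool whose internal ring uses SP/SC ops.
 * Only safe if exactly one lcore allocates and one lcore frees;
 * otherwise keep the default MP/MC ops. */
static struct rte_mempool *
create_spsc_mbuf_pool(const char *name, unsigned int n, int socket_id)
{
    const uint16_t data_room = RTE_MBUF_DEFAULT_BUF_SIZE;
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty(name, n,
            sizeof(struct rte_mbuf) + data_room,
            256 /* per-lcore cache */,
            sizeof(struct rte_pktmbuf_pool_private),
            socket_id, 0);
    if (mp == NULL)
        return NULL;

    /* Replace the default "ring_mp_mc" ops before populating. */
    if (rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL) != 0)
        goto fail;

    rte_pktmbuf_pool_init(mp, NULL);
    if (rte_mempool_populate_default(mp) < 0)
        goto fail;
    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
    return mp;

fail:
    rte_mempool_free(mp);
    return NULL;
}
```

Note that with a per-lcore cache most frees never reach the ring at all; a
high common_ring_mp_enqueue share in perf can also indicate the cache is too
small for the burst sizes in use.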
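[Editor's note] Stephen's buffering suggestion, with the idle-flush
mechanism he mentions, could look roughly like this (port/queue IDs, the
ring name, and the idle threshold are illustrative assumptions):

```c
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_malloc.h>
#include <rte_ring.h>

#define TX_BURST 512

/* Sketch: batch packets through the rte_eth_tx_buffer() helper and
 * flush explicitly once the source has been idle for ~100 us, so
 * buffered mbufs do not linger when traffic stops. */
static void
tx_loop(uint16_t port_id, uint16_t queue_id, struct rte_ring *tx_ring)
{
    const uint64_t idle_cycles = rte_get_tsc_hz() / 10000; /* ~100 us */
    struct rte_eth_dev_tx_buffer *buf;
    struct rte_mbuf *pkts[TX_BURST];
    uint64_t last_activity = rte_get_tsc_cycles();

    buf = rte_zmalloc_socket("tx_buffer",
            RTE_ETH_TX_BUFFER_SIZE(TX_BURST), 0,
            rte_eth_dev_socket_id(port_id));
    rte_eth_tx_buffer_init(buf, TX_BURST);

    for (;;) {
        unsigned int n = rte_ring_dequeue_burst(tx_ring, (void **)pkts,
                                                TX_BURST, NULL);
        if (n > 0) {
            for (unsigned int i = 0; i < n; i++)
                rte_eth_tx_buffer(port_id, queue_id, buf, pkts[i]);
            last_activity = rte_get_tsc_cycles();
        } else if (rte_get_tsc_cycles() - last_activity > idle_cycles) {
            /* Idle period detected: push out whatever is buffered. */
            rte_eth_tx_buffer_flush(port_id, queue_id, buf);
            last_activity = rte_get_tsc_cycles();
        }
    }
}
```

rte_eth_tx_buffer() only calls rte_eth_tx_burst() once the buffer fills, so
the idle-time flush is what prevents the last sub-burst from being stranded.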