From: Morten Brørup
To: Shijith Thotton, dev@dpdk.org
Subject: mbuf performance optimization
Date: Sat, 3 Dec 2022 18:13:00 +0100
List-Id: DPDK patches and discussions

I have been playing around with the idea of making some changes to avoid using the mbuf's 2nd cache line in many common cases, which would reduce the cache pressure significantly, and thus improve performance. I would like to discuss whether it is doable. (And let's just assume that ABI breakage is an acceptable tradeoff.)

Move 'tx_offload' to the 1st cache line
---------------------------------------

Under all circumstances:

We would need to move the 'tx_offload' field to the 1st cache line. This field is set by the application's packet forwarding pipeline stage and read by the PMD TX function. In most cases, these two stages directly follow each other.
This also means that we must make room for it by moving a 64-bit field from the 1st to the 2nd cache line. It could be the 'next' or the 'pool' field, as discussed below.

The 'next' field - make it conditional
--------------------------------------

Optimization for (1) non-segmented packets:

We could avoid touching the 'next' field by making it depend on something in the first cache line. E.g.:

- Use the 'ol_flags' field. Add an RTE_MBUF_F_MORE_SEGS flag, to be set/cleared when setting/clearing the 'next' field.
- Use the 'nb_segs' field. Set the 'nb_segs' field to a value >1 when setting the 'next' field, and set it to 1 when clearing the 'next' field.

The 'pool' field - use it less frequently
-----------------------------------------

Optimizations for (2) single-mempool TX queues and (3) single-mempool applications:

The 'pool' field seems to be used only when a PMD frees a burst of mbufs that it has finished transmitting. Please correct me if I am wrong here.

We could introduce a sibling to RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, with the only requirement that the mbufs come from the same mempool. When set, only the first mbuf in a burst gets its 'pool' field read, thus avoiding that read for the remaining mbufs in the burst.

For single-mempool applications, we could introduce a global 'mbuf_pool' variable, to be used instead of the mbuf's 'pool' field, if set.

Med venlig hilsen / Kind regards,
-Morten Brørup