From: Morten Brørup
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Slava Ovsiienko, Shahaf Shuler, Olivier Matz
Cc: dev@dpdk.org, Morten Brørup, Dengdui Huang
Subject: [PATCH v3] mbuf: add fast free bulk and raw alloc bulk functions
Date: Tue, 21 Jan 2025 13:40:28 +0000
Message-ID: <20250121134028.20733-1-mb@smartsharesystems.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250114162544.125448-1-mb@smartsharesystems.com>
References: <20250114162544.125448-1-mb@smartsharesystems.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When putting an mbuf back into its mempool, there are certain
requirements on the mbuf. Specifically, some of its fields must be
initialized. These requirements are in fact invariants about free mbufs
held in mempools, and thus also apply when allocating an mbuf from a
mempool.

With this in mind, the additional assertions in rte_mbuf_raw_free()
were moved to __rte_mbuf_raw_sanity_check(). Furthermore, the assertion
regarding pinned external buffers was enhanced; it now also asserts
that the referenced pinned external buffer has refcnt == 1.

The description of RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE was updated to
include the remaining requirements, which were previously missing.

Finally, a new rte_mbuf_fast_free_bulk() inline function was added for
the benefit of ethdev drivers supporting fast release of mbufs. It
asserts these requirements and that the mbufs belong to the specified
mempool, and then calls rte_mempool_put_bulk().

For symmetry, a new rte_mbuf_raw_alloc_bulk() inline function was also
added.

Signed-off-by: Morten Brørup
Acked-by: Dengdui Huang
---
v2:
* Fixed missing inline.
v3:
* Fixed missing experimental warning. (Stephen)
* Added raw alloc bulk function.
---
 lib/ethdev/rte_ethdev.h |  6 ++--
 lib/mbuf/rte_mbuf.h     | 80 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 82 insertions(+), 4 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 1f71cad244..e9267fca79 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1612,8 +1612,10 @@ struct rte_eth_conf {
 #define RTE_ETH_TX_OFFLOAD_MULTI_SEGS RTE_BIT64(15)
 /**
  * Device supports optimization for fast release of mbufs.
- * When set application must guarantee that per-queue all mbufs comes from
- * the same mempool and has refcnt = 1.
+ * When set, the application must guarantee that, per queue, all mbufs come from the same mempool,
+ * are direct, have refcnt=1, next=NULL and nb_segs=1, as done by rte_pktmbuf_prefree_seg().
+ *
+ * @see rte_mbuf_fast_free_bulk()
  */
 #define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE RTE_BIT64(16)
 #define RTE_ETH_TX_OFFLOAD_SECURITY RTE_BIT64(17)
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 0d2e0e64b3..1e40e7fcf7 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -568,6 +568,10 @@ __rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
 	RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
 	RTE_ASSERT(m->next == NULL);
 	RTE_ASSERT(m->nb_segs == 1);
+	RTE_ASSERT(!RTE_MBUF_CLONED(m));
+	RTE_ASSERT(!RTE_MBUF_HAS_EXTBUF(m) ||
+			(RTE_MBUF_HAS_PINNED_EXTBUF(m) &&
+			 rte_mbuf_ext_refcnt_read(m->shinfo) == 1));
 	__rte_mbuf_sanity_check(m, 0);
 }
 
@@ -606,6 +610,43 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 	return ret.m;
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Allocate a bulk of uninitialized mbufs from mempool *mp*.
+ *
+ * This function can be used by PMDs (especially in RX functions) to
+ * allocate a bulk of uninitialized mbufs. The driver is responsible for
+ * initializing all the required fields. See rte_pktmbuf_reset().
+ * For standard needs, prefer rte_pktmbuf_alloc_bulk().
+ *
+ * The caller can expect that the following fields of the mbuf structure
+ * are initialized: buf_addr, buf_iova, buf_len, refcnt=1, nb_segs=1,
+ * next=NULL, pool, priv_size. The other fields must be initialized
+ * by the caller.
+ *
+ * @param mp
+ *   The mempool from which mbufs are allocated.
+ * @param mbufs
+ *   Array of pointers to mbufs.
+ * @param count
+ *   Array size.
+ * @return
+ *   - 0: Success.
+ *   - -ENOENT: Not enough entries in the mempool; no mbufs are retrieved.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_mbuf_raw_alloc_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int count)
+{
+	int rc = rte_mempool_get_bulk(mp, (void **)mbufs, count);
+	if (likely(rc == 0))
+		for (unsigned int idx = 0; idx < count; idx++)
+			__rte_mbuf_raw_sanity_check(mbufs[idx]);
+	return rc;
+}
+
 /**
  * Put mbuf back into its original mempool.
  *
@@ -623,12 +664,47 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
 static __rte_always_inline void
 rte_mbuf_raw_free(struct rte_mbuf *m)
 {
-	RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
-		  (!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
 	__rte_mbuf_raw_sanity_check(m);
 	rte_mempool_put(m->pool, m);
 }
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
+ *
+ * Put a bulk of mbufs allocated from the same mempool back into the mempool.
+ *
+ * The caller must ensure that the mbufs come from the specified mempool,
+ * are direct and properly reinitialized (refcnt=1, next=NULL, nb_segs=1), as done by
+ * rte_pktmbuf_prefree_seg().
+ *
+ * This function should be used with care, when optimization is
+ * required. For standard needs, prefer rte_pktmbuf_free_bulk().
+ *
+ * @see RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+ *
+ * @param mp
+ *   The mempool to which the mbufs belong.
+ * @param mbufs
+ *   Array of pointers to packet mbufs.
+ *   The array must not contain NULL pointers.
+ * @param count
+ *   Array size.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_mbuf_fast_free_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int count)
+{
+	for (unsigned int idx = 0; idx < count; idx++) {
+		const struct rte_mbuf *m = mbufs[idx];
+		RTE_ASSERT(m != NULL);
+		RTE_ASSERT(m->pool == mp);
+		__rte_mbuf_raw_sanity_check(m);
+	}
+
+	rte_mempool_put_bulk(mp, (void **)mbufs, count);
+}
+
 /**
  * The packet mbuf constructor.
  *
-- 
2.43.0
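
A minimal usage sketch of the two new functions follows. The driver structures (my_txq, my_rxq), the array sizes and the helper functions are hypothetical, invented for illustration; only the rte_mbuf/rte_mempool calls are the real APIs used or added by this patch. A TX path that has negotiated RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE already knows its completed mbufs satisfy the fast-free requirements, so it can return them in a single bulk mempool operation; an RX refill path can raw-allocate a burst and initialize only the fields its hardware needs.

/* Example only: my_txq/my_rxq and the helpers are hypothetical. */
#include <errno.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct my_txq {
	struct rte_mempool *fast_free_mp; /* the single mempool negotiated via FAST_FREE */
	struct rte_mbuf *done[64];        /* completed TX mbufs collected by the driver */
	unsigned int nb_done;
};

/* TX completion: the offload contract guarantees the mbufs are direct,
 * come from fast_free_mp and have refcnt=1, next=NULL, nb_segs=1,
 * so they can be returned in one bulk mempool operation. */
static void
my_txq_release_completed(struct my_txq *txq)
{
	if (txq->nb_done == 0)
		return;
	rte_mbuf_fast_free_bulk(txq->fast_free_mp, txq->done, txq->nb_done);
	txq->nb_done = 0;
}

struct my_rxq {
	struct rte_mempool *mp;
	struct rte_mbuf *ring[32]; /* mbufs backing the RX descriptors */
};

/* RX refill: raw-allocate a burst; the mempool invariants checked by
 * __rte_mbuf_raw_sanity_check() already hold, and the driver initializes
 * only what it needs (here just data_off) before programming descriptors. */
static int
my_rxq_refill(struct my_rxq *rxq, unsigned int n)
{
	unsigned int i;

	if (rte_mbuf_raw_alloc_bulk(rxq->mp, rxq->ring, n) != 0)
		return -ENOENT; /* nothing was refilled */

	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rxq->ring[i];

		m->data_off = RTE_PKTMBUF_HEADROOM;
		/* program m->buf_iova + m->data_off into the RX descriptor here */
	}
	return 0;
}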