DPDK patches and discussions
* [PATCH] mbuf: optimize detach direct buffer
@ 2026-01-03 17:40 Morten Brørup
  2026-01-05 10:31 ` Bruce Richardson
  0 siblings, 1 reply; 4+ messages in thread
From: Morten Brørup @ 2026-01-03 17:40 UTC (permalink / raw)
  To: dev; +Cc: Morten Brørup

When rte_pktmbuf_prefree_seg() resets an mbuf about to be freed, it
does not write fields that already hold the required values, saving
one memory store operation for each such field.

This patch applies the same optimization to __rte_pktmbuf_free_direct(),
improving the performance of freeing a direct buffer being detached from
a packet mbuf: memory stores are skipped for fields of the buffer being
freed that already hold the required values.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/mbuf/rte_mbuf.h | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 2004391f57..592af2388c 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -1334,17 +1334,23 @@ static inline void
 __rte_pktmbuf_free_direct(struct rte_mbuf *m)
 {
 	struct rte_mbuf *md;
+	bool refcnt_not_one;
 
 	RTE_ASSERT(RTE_MBUF_CLONED(m));
 
 	md = rte_mbuf_from_indirect(m);
 
-	if (rte_mbuf_refcnt_update(md, -1) == 0) {
-		md->next = NULL;
-		md->nb_segs = 1;
+	refcnt_not_one = unlikely(rte_mbuf_refcnt_read(md) != 1);
+	if (refcnt_not_one && __rte_mbuf_refcnt_update(md, -1) != 0)
+		return;
+
+	if (refcnt_not_one)
 		rte_mbuf_refcnt_set(md, 1);
-		rte_mbuf_raw_free(md);
-	}
+	if (md->nb_segs != 1)
+		md->nb_segs = 1;
+	if (md->next != NULL)
+		md->next = NULL;
+	rte_mbuf_raw_free(md);
 }
 
 /**
-- 
2.43.0


^ permalink raw reply	[flat|nested] 4+ messages in thread
* [PATCH] mbuf: optimize detach direct buffer
@ 2026-01-03 17:34 Morten Brørup
  2026-01-03 17:41 ` Morten Brørup
  0 siblings, 1 reply; 4+ messages in thread
From: Morten Brørup @ 2026-01-03 17:34 UTC (permalink / raw)
  To: dev; +Cc: Morten Brørup

When rte_pktmbuf_prefree_seg() resets an mbuf about to be freed, it
does not write fields that already hold the required values, saving
one memory store operation for each such field.

This patch applies the same optimization to __rte_pktmbuf_free_direct(),
improving the performance of freeing a direct buffer being detached from
a packet mbuf: memory stores are skipped for fields of the buffer being
freed that already hold the required values.

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/mbuf/rte_mbuf.h | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 2004391f57..592af2388c 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -1334,17 +1334,23 @@ static inline void
 __rte_pktmbuf_free_direct(struct rte_mbuf *m)
 {
 	struct rte_mbuf *md;
+	bool refcnt_not_one;
 
 	RTE_ASSERT(RTE_MBUF_CLONED(m));
 
 	md = rte_mbuf_from_indirect(m);
 
-	if (rte_mbuf_refcnt_update(md, -1) == 0) {
-		md->next = NULL;
-		md->nb_segs = 1;
+	refcnt_not_one = unlikely(rte_mbuf_refcnt_read(md) != 1);
+	if (refcnt_not_one && __rte_mbuf_refcnt_update(md, -1) != 0)
+		return;
+
+	if (refcnt_not_one)
 		rte_mbuf_refcnt_set(md, 1);
-		rte_mbuf_raw_free(md);
-	}
+	if (md->nb_segs != 1)
+		md->nb_segs = 1;
+	if (md->next != NULL)
+		md->next = NULL;
+	rte_mbuf_raw_free(md);
 }
 
 /**
-- 
2.43.0



end of thread, other threads:[~2026-01-05 10:32 UTC | newest]

Thread overview: 4+ messages
-- links below jump to the message on this page --
2026-01-03 17:40 [PATCH] mbuf: optimize detach direct buffer Morten Brørup
2026-01-05 10:31 ` Bruce Richardson
  -- strict thread matches above, loose matches on Subject: below --
2026-01-03 17:34 Morten Brørup
2026-01-03 17:41 ` Morten Brørup
