From: Thomas Monjalon <thomas@monjalon.net>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: Bruce Richardson <bruce.richardson@intel.com>,
dev@dpdk.org, Stephen Hemminger <stephen@networkplumber.org>,
Konstantin Ananyev <konstantin.ananyev@huawei.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Ivan Malov <ivan.malov@arknetworks.am>,
Chengwen Feng <fengchengwen@huawei.com>
Subject: Re: [PATCH v8 3/3] mbuf: optimize reset of reinitialized mbufs
Date: Sun, 19 Oct 2025 18:59:11 +0200 [thread overview]
Message-ID: <2602540.zToM8qfIzz@thomas> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35F654B7@smartserver.smartshare.dk>
09/10/2025 19:35, Morten Brørup:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > + m->pkt_len = 0;
> > > + m->tx_offload = 0;
> > > + m->vlan_tci = 0;
> > > + m->vlan_tci_outer = 0;
> > > + m->port = RTE_MBUF_PORT_INVALID;
> >
> > Have you considered doing all initialization using 64-bit stores? It's
> > generally cheaper to do a single 64-bit store than e.g. a set of 16-bit
> > ones.
>
> The code is basically copy-paste from rte_pktmbuf_reset().
> I kept it the same way for readability.
>
> > This also means that we could remove the restriction on having refcnt
> > and nb_segs already set. As in PMDs, a single store can init data_off,
> > refcnt, nb_segs and port.
>
> Yes, I have given the concept a lot of thought already.
> If we didn't require mbufs residing in the mempool to have any fields initialized, specifically "next" and "nb_segs", it would improve performance for drivers freeing mbufs back to the mempool, because writing to the mbufs would no longer be required at that point; the mbufs could simply be freed back to the mempool. Instead, we would require the driver to initialize these fields - which it probably does on RX anyway, if it supports segmented packets.
> But I consider this concept a major API change, which would also affect applications that assume these fields are initialized when allocating raw mbufs from the mempool. So I haven't pursued it.
>
> >
> > Similarly for packet_type and pkt_len, and data_len/vlan_tci and rss
> > fields etc. For max performance, the whole of the mbuf cleared here can
> > be done in 40 bytes, or 5 64-bit stores. If we do the stores in order,
> > possibly the compiler can even opportunistically coalesce more stores,
> > so we could even end up getting 128-bit or larger stores depending on
> > the ISA compiled for.
> > [Maybe the compiler will do this even if they are not in order, but I'd
> > like to maximize my chances here! :-)]

Morten, you didn't reply to this.
Can we optimize more with such wide stores?
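
For illustration, here is a rough sketch of what such wide stores could look
like. This is purely hypothetical: the struct below is not the real
struct rte_mbuf layout, the constants (headroom, invalid port) are stand-ins,
and little-endian packing is assumed.

/*
 * Hypothetical sketch of a wide-store reset -- not this patch and not the
 * actual struct rte_mbuf. It groups the fields to reinitialize so they can
 * be written with one 64-bit "rearm"-style store plus a few coalesced
 * zeroing stores.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, contiguous subset of mbuf-like fields (40 bytes). */
struct toy_mbuf_hdr {
	/* first 8 bytes: written by a single 64-bit store */
	uint16_t data_off;
	uint16_t refcnt;
	uint16_t nb_segs;
	uint16_t port;
	/* remaining 32 bytes: all zero after reset */
	uint64_t ol_flags;
	uint32_t packet_type;
	uint32_t pkt_len;
	uint16_t data_len;
	uint16_t vlan_tci;
	uint32_t rss;
	uint16_t vlan_tci_outer;
	uint16_t pad[3];
};

static inline void toy_mbuf_reset(struct toy_mbuf_hdr *m)
{
	/* Pack data_off/refcnt/nb_segs/port into one 64-bit value
	 * (little-endian layout assumed) and write it in a single store,
	 * similar to the "rearm" stores many PMD RX paths use. */
	const uint64_t rearm = (uint64_t)128        <<  0   /* data_off: headroom  */
	                     | (uint64_t)1          << 16   /* refcnt:   1         */
	                     | (uint64_t)1          << 32   /* nb_segs:  1         */
	                     | (uint64_t)UINT16_MAX << 48;  /* port:     "invalid" */
	memcpy(&m->data_off, &rearm, sizeof(rearm));

	/* Zero the rest of the region in one call; since the bytes are
	 * contiguous and in address order, compilers typically lower this to
	 * a handful of 64-bit (or wider SIMD) stores. */
	memset(&m->ol_flags, 0, sizeof(*m) - offsetof(struct toy_mbuf_hdr, ol_flags));
}

Using memcpy()/memset() over a contiguous region sidesteps the strict-aliasing
issues that casting field pointers to uint64_t * could raise, while still
letting the compiler emit the wide stores.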
Thread overview: 33+ messages
2025-08-21 15:02 [PATCH v5 0/3] mbuf: simplify handling " Morten Brørup
2025-08-21 15:02 ` [PATCH v5 1/3] mbuf: de-inline sanity checking a reinitialized mbuf Morten Brørup
2025-08-21 15:02 ` [PATCH v5 2/3] promote reinitialized mbuf free and alloc bulk functions as stable Morten Brørup
2025-08-21 15:02 ` [PATCH v5 3/3] mbuf: no need to reset all fields on reinitialized mbufs Morten Brørup
2025-08-22 12:47 ` [PATCH v6 0/3] mbuf: simplify handling of " Morten Brørup
2025-08-22 12:47 ` [PATCH v6 1/3] mbuf: de-inline sanity checking a reinitialized mbuf Morten Brørup
2025-08-22 14:26 ` Morten Brørup
2025-08-22 12:47 ` [PATCH v6 2/3] mbuf: promote raw free and alloc bulk functions as stable Morten Brørup
2025-08-22 12:47 ` [PATCH v6 3/3] mbuf: no need to reset all fields on reinitialized mbufs Morten Brørup
2025-08-22 23:45 ` [PATCH v7 0/3] mbuf: simplify handling of " Morten Brørup
2025-08-22 23:45 ` [PATCH v7 1/3] mbuf: de-inline sanity checking a reinitialized mbuf Morten Brørup
2025-08-22 23:45 ` [PATCH v7 2/3] mbuf: promote raw free and alloc bulk functions as stable Morten Brørup
2025-08-22 23:45 ` [PATCH v7 3/3] mbuf: optimize reset of reinitialized mbufs Morten Brørup
2025-08-23 6:29 ` [PATCH v8 0/3] mbuf: simplify handling " Morten Brørup
2025-08-23 6:30 ` [PATCH v8 1/3] mbuf: de-inline sanity checking a reinitialized mbuf Morten Brørup
2025-10-09 16:49 ` Bruce Richardson
2025-10-09 17:12 ` Morten Brørup
2025-10-09 17:29 ` Thomas Monjalon
2025-10-09 17:55 ` Morten Brørup
2025-10-19 20:22 ` Thomas Monjalon
2025-08-23 6:30 ` [PATCH v8 2/3] mbuf: promote raw free and alloc bulk functions as stable Morten Brørup
2025-10-09 16:53 ` Bruce Richardson
2025-08-23 6:30 ` [PATCH v8 3/3] mbuf: optimize reset of reinitialized mbufs Morten Brørup
2025-08-23 14:28 ` Stephen Hemminger
2025-10-09 17:15 ` Bruce Richardson
2025-10-09 17:35 ` Morten Brørup
2025-10-10 7:43 ` Bruce Richardson
2025-10-10 9:22 ` Morten Brørup
2025-10-19 16:59 ` Thomas Monjalon [this message]
2025-10-19 18:45 ` Morten Brørup
2025-10-19 20:45 ` Stephen Hemminger
2025-10-06 14:43 ` [PATCH v8 0/3] mbuf: simplify handling " Morten Brørup
2025-10-19 20:42 ` Thomas Monjalon