To help with the definition of the fast free flag, I would like the
documentation to be better. Something like:

  The fast free flag allows the driver to avoid expensive atomic
  operations on the mbuf reference count, and to assume that all mbufs
  in a burst come from the same mempool.

Should also add debug asserts in drivers implementing fast free; a rough
sketch of what that could look like is at the bottom of this mail.

On Mon, Oct 6, 2025, 16:40 Morten Brørup wrote:
>
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Friday, 3 October 2025 11.18
> > Subject: Minutes of techboard meeting, 2025-10-01
> >
> > * Use of FAST_FREE and multi-buffer/scattered mbuf flags
> >   - The flags for enabling fast-free and supporting multi-mbuf packets
> >     are now documented as incompatible.
> >   - Previously they were not defined as incompatible, but that seems to
> >     have been assumed for some usages.
> >   - Techboard discussed how best to resolve this incompatibility with
> >     regards to:
> >     - ensuring correctness
> >     - avoiding major churn to DPDK code
> >     - avoiding churn to end-user code
> >   - Options discussed:
> >     1. Change the definition back so the settings are not incompatible:
> >        this necessitates checking drivers for correctness.
> >     2. Keep them explicitly incompatible and report an error if both are
> >        specified: this could break end-user apps, and requires changes
> >        to example apps.
> >     3. Drop the fast-free flag if multi-segment mbufs are also
> >        specified: "hides" the issue, but probably minimises changes.
> >        Would need to decide whether the dropping of the flag is done in
> >        drivers or at the ethdev level.
> >     Pros and cons to both options. Needs clear documenting.
> >   - No firm decision reached, will discuss more over email.
>
> IMO, the patch [1] making MBUF_FAST_FREE and MULTI_SEGS explicitly
> incompatible should be reverted, at least for RC1.
> That will take the project back to the state it was in before we started
> this discussion.
> And all the examples broken by the patch (because they use both TX
> offloads) will not need fixing.
>
> [1]:
> https://patchwork.dpdk.org/project/dpdk/patch/20250803194218.683318-3-mb@smartsharesystems.com/
>
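
As a rough, hypothetical sketch of the above (the function, parameter and
variable names are made up for illustration and are not taken from any
existing driver), a TX completion path honouring fast free could look
something like this, with the suggested debug asserts:

#include <stdint.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical TX-completion helper; all names are illustrative only. */
static inline void
txq_free_done_mbufs(struct rte_mbuf **pkts, uint16_t nb, uint64_t tx_offloads)
{
	uint16_t i;

	if (nb == 0)
		return;

	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
		/*
		 * Fast free: the application guarantees that every mbuf is
		 * direct, has refcnt == 1 and comes from the same mempool,
		 * so the driver skips per-mbuf atomic refcount handling and
		 * returns the whole burst to a single pool in one bulk call.
		 */
		for (i = 0; i < nb; i++) {
			/* Debug asserts to catch misuse, as proposed above
			 * (only active when built with RTE_ENABLE_ASSERT). */
			RTE_ASSERT(RTE_MBUF_DIRECT(pkts[i]));
			RTE_ASSERT(rte_mbuf_refcnt_read(pkts[i]) == 1);
			RTE_ASSERT(pkts[i]->pool == pkts[0]->pool);
		}
		rte_mempool_put_bulk(pkts[0]->pool, (void **)pkts, nb);
	} else {
		/* Generic path: per-segment free with full refcount logic. */
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free_seg(pkts[i]);
	}
}

Whether such asserts belong in each driver or in a common helper is open;
the point is just that a debug build could catch applications requesting
fast free while violating its assumptions.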