Date: Tue, 28 Sep 2021 11:39:36 +0200
From: Morten Brørup
To: Slava Ovsiienko, Thomas Monjalon, Olivier Matz, Ali Alnubani
Cc: dev@dpdk.org, David Marchand, Alexander Kozyrev, Ferruh Yigit, Andrew Rybchenko, Konstantin Ananyev, Ajit Khaparde
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH v4] mbuf: fix reset on mbuf free

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Slava Ovsiienko
> Sent: Tuesday, 28 September 2021 11.01
>
> Hi,
>
> I've re-read
> the entire thread.
> If I understand correctly, the root problem was (in the initial patch):
>
> > m1 = rte_pktmbuf_alloc(mp);
> > rte_pktmbuf_append(m1, 500);
> > m2 = rte_pktmbuf_alloc(mp);
> > rte_pktmbuf_append(m2, 500);
> > rte_pktmbuf_chain(m1, m2);
> > m0 = rte_pktmbuf_alloc(mp);
> > rte_pktmbuf_append(m0, 500);
> > rte_pktmbuf_chain(m0, m1);
> >
> > As rte_pktmbuf_chain() does not reset nb_seg in the initial m1 segment
> > (this is not required), after this code the mbuf chain has 3 segments:
> > - m0: next=m1, nb_seg=3
> > - m1: next=m2, nb_seg=2
> > - m2: next=NULL, nb_seg=1
> >
> The proposed fix was to ALWAYS set the next and nb_seg fields in
> mbuf_free(), regardless of the next field's content. That performs an
> unconditional write to the mbuf, and might affect configurations where
> there are no multi-segment packets at all. mbuf_free() is a "backbone"
> API, it is used everywhere; all scenarios are affected.
>
> As far as I know, the current approach for the nb_seg field is that it
> contains a value other than 1 only in the first mbuf; for the following
> segments it should not be considered at all (only the first segment's
> fields are valid), and it is supposed to contain 1, as it was initially
> allocated from the pool.
>
> In the example above, the problem was introduced by rte_pktmbuf_chain().
> Could we consider fixing rte_pktmbuf_chain() (used in potentially fewer
> common scenarios) instead of touching the extremely common
> rte_mbuf_free()?
>
> With best regards,
> Slava

Great idea, Slava!

That changes the invariant for 'nb_segs' so that it must be 1, except in the first segment of a segmented packet.

Thinking further about it, perhaps we can achieve even higher performance by a minor additional modification: Use 0 instead of 1? Or offset 'nb_segs' by -1, so it reflects the number of additional segments?

And perhaps combining the invariants for 'nb_segs' and 'next' could provide even more performance improvements.
I don't know, just sharing a thought.

Anyway, I vote for fixing the bug. One way or the other!

-Morten

> > -----Original Message-----
> > From: Thomas Monjalon
> > Sent: Tuesday, September 28, 2021 11:29
> >
> > Follow-up again:
> > We have added a note in 21.08, we should fix it in 21.11.
> > If there is no counter proposal, I suggest applying this patch, no
> > matter the performance regression.
> >
> >
> > 30/07/2021 16:54, Thomas Monjalon:
> > > 30/07/2021 16:35, Morten Brørup:
> > > > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > > > Sent: Friday, 30 July 2021 14.37
> > > > >
> > > > > Hi Thomas,
> > > > >
> > > > > On Sat, Jul 24, 2021 at 10:47:34AM +0200, Thomas Monjalon wrote:
> > > > > > What's the follow-up for this patch?
> > > > >
> > > > > Unfortunately, I still don't have the time to work on this topic yet.
> > > > >
> > > > > In my initial tests, in our lab, I didn't notice any performance
> > > > > regression, but Ali has seen an impact (0.5M PPS, but I don't
> > > > > know how much in percent).
> > > > >
> > > > > > 19/01/2021 15:04, Slava Ovsiienko:
> > > > > > > Hi, All
> > > > > > >
> > > > > > > Could we postpone this patch at least to rc2? We would like
> > > > > > > to conduct more investigations.
> > > > > > >
> > > > > > > With best regards, Slava
> > > > > > >
> > > > > > > From: Olivier Matz
> > > > > > > > On Mon, Jan 18, 2021 at 05:52:32PM +0000, Ali Alnubani wrote:
> > > > > > > > > Hi,
> > > > > > > > > (Sorry, had to resend this to some recipients due to
> > > > > > > > > mail server problems).
> > > > > > > > >
> > > > > > > > > Just confirming that I can still reproduce the regression
> > > > > > > > > with a single core and 64B frames on other servers.
> > > > > > > >
> > > > > > > > Many thanks for the feedback.
> > > > > > > > Can you please detail the amount of performance loss in
> > > > > > > > percent, and confirm the test case? (I suppose it is
> > > > > > > > testpmd io forward.)
> > > > > > > >
> > > > > > > > Unfortunately, I won't be able to spend a lot of time on
> > > > > > > > this soon (sorry for that). So I see at least these 2 options:
> > > > > > > >
> > > > > > > > - postpone the patch again, until I can find more time to
> > > > > > > >   analyze and optimize
> > > > > > > > - apply the patch if the performance loss is acceptable
> > > > > > > >   compared to the added value of fixing a bug
> > > > > > > >
> > > > > > > [...]
> > > > >
> > > > > Status quo...
> > > > >
> > > > > Olivier
> > > >
> > > > The decision should be simple:
> > > >
> > > > Does the DPDK project support segmented packets?
> > > > If yes, then apply the patch to fix the bug!
> > > >
> > > > If anyone seriously cares about the regression it introduces,
> > > > optimization patches are welcome later. We shouldn't wait for them.
> > >
> > > You're right, but the regression is flagged on a 4-year-old patch,
> > > which is why I don't consider it urgent.
> > >
> > > > If the patch is not applied, the documentation must be updated to
> > > > mention that we are releasing DPDK with a known bug: segmented
> > > > packets are handled incorrectly in the scenario described in this
> > > > patch.
> > >
> > > Yes, it would be good to document the known issue, no matter how old
> > > it is.
> > >
> > > > Generally, there could be some performance to gain by not
> > > > supporting segmented packets at all, as a compile-time option.
> > > > But that is a different discussion.