Date: Wed, 15 Nov 2017 18:30:59 +0100
From: Olivier MATZ
To: "Hanoch Haim (hhaim)"
Cc: Ilya Matveychikov, dev@dpdk.org, Konstantin Ananyev
Subject: Re: [dpdk-dev] [PATCH v3] mbuf: cleanup rte_pktmbuf_lastseg(), fix atomic usage

Hi,

On Wed, Nov 15, 2017 at 12:46:15PM +0000, Hanoch Haim (hhaim) wrote:
> +Olivier,
> Ilya, I assume there is a reason why Olivier did it that way; I only consolidated the code.
> He didn't want to *write* the same value to the mbuf field.
> In the common case the pointer was already cleared by the driver, so it is
> better to just read it. From a cache synchronization perspective it will run faster.
>
> Thanks,
> Hanoh
>
>
> -----Original Message-----
> From: Ilya Matveychikov [mailto:matvejchikov@gmail.com]
> Sent: Wednesday, November 15, 2017 1:14 PM
> To: Hanoch Haim (hhaim)
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3] mbuf: cleanup rte_pktmbuf_lastseg(), fix atomic usage
>
> > On Nov 15, 2017, at 1:14 PM, Hanoh Haim wrote:

I think the patch should be renamed to something like:

  mbuf: fix mbuf free performance with non-atomic refcnt

A description of the problem in the commit log would also be welcome.

It looks like it is a regression introduced by commit 8f094a9ac5d7. In that
case, we should also have:

  Fixes: 8f094a9ac5d7 ("mbuf: set mbuf fields while in pool")

> > Signed-off-by: Hanoh Haim
> > ---
> >  lib/librte_mbuf/rte_mbuf.h | 27 +++++++++++++--------------
> >  1 file changed, 13 insertions(+), 14 deletions(-)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 7e326bb..ab110f8 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -1159,6 +1159,15 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
> >  	__rte_mbuf_sanity_check(m, 1);
> >  }
> >
> > +
> > +static __rte_always_inline void rte_pktmbuf_reset_before_free(struct
> > +rte_mbuf *m) {
> > +	if (m->next != NULL) {
> > +		m->next = NULL;
> > +		m->nb_segs = 1;
> > +	}
> > +}
> > +
>
> Probably it would be cleaner to add something like __rte_pktmbuf_reset_nb_segs(),
> without the check for (m->next != NULL), and use it everywhere in the mbuf code,
> not only in the rte_pktmbuf_prefree_seg() function. And I think it would be
> better to have a separate patch for that.

I'm not convinced that:

	__rte_pktmbuf_reset_nb_segs(m);

is clearer than:

	m->next = NULL;
	m->nb_segs = 1;

Anyway, I agree this should not be part of this patch. We should only keep
the fix.
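Just to make the comparison concrete, here is a rough, untested sketch of how
Ilya's helper could sit next to the one from the patch. The first name is
purely illustrative, nothing like it exists in rte_mbuf.h today:

#include <rte_mbuf.h>

/* Hypothetical unconditional reset, as suggested by Ilya. */
static __rte_always_inline void
__rte_pktmbuf_reset_nb_segs(struct rte_mbuf *m)
{
	m->next = NULL;
	m->nb_segs = 1;
}

/* The conditional helper from the patch could then wrap it. In the common
 * single-segment case m->next is already NULL, so the cache line holding
 * these fields is only read, never dirtied. */
static __rte_always_inline void
rte_pktmbuf_reset_before_free(struct rte_mbuf *m)
{
	if (m->next != NULL)
		__rte_pktmbuf_reset_nb_segs(m);
}

Whether the extra level of indirection really reads better is exactly the
question above, so I'd keep it out of this fix in any case.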
> >  /**
> >   * Allocate a new mbuf from a mempool.
> >   *
> > @@ -1323,8 +1332,7 @@ static inline void rte_pktmbuf_detach(struct rte_mbuf *m)
> >  	m->ol_flags = 0;
> >
> >  	if (rte_mbuf_refcnt_update(md, -1) == 0) {
> > -		md->next = NULL;
> > -		md->nb_segs = 1;
>
> Using rte_pktmbuf_reset_before_free() here adds an unnecessary check for
> m->next in that path.

Yes, agree with Ilya.

> > +		rte_pktmbuf_reset_before_free(md);
> >  		rte_mbuf_refcnt_set(md, 1);
> >  		rte_mbuf_raw_free(md);
> >  	}
> > @@ -1354,25 +1362,16 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
> >  		if (RTE_MBUF_INDIRECT(m))
> >  			rte_pktmbuf_detach(m);
> >
> > -		if (m->next != NULL) {
> > -			m->next = NULL;
> > -			m->nb_segs = 1;
> > -		}
> > -
> > +		rte_pktmbuf_reset_before_free(m);
> >  		return m;
> >
> > -	} else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
> > -
> > +	} else if (rte_mbuf_refcnt_update(m, -1) == 0) {

I agree with Konstantin's comment made in another thread [1]:

'''
That would cause an extra read; cmp (and a possible slowdown) for the atomic
refcnt. If that really needs to be fixed - probably we need to introduce a new
function that would do the update without trying to read the refcnt first -
rte_mbuf_refcnt_update_blind() or so.
'''

However I'm not sure a new function is really needed: the name is not ideal,
and it would only be used once. What about the patch below?

==============================
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1361,8 +1361,18 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 
 		return m;
 
-	} else if (rte_atomic16_add_return(&m->refcnt_atomic, -1) == 0) {
+	} else {
+		/* We don't use rte_mbuf_refcnt_update() because we already
+		 * tested that refcnt != 1.
+		 */
+#ifdef RTE_MBUF_REFCNT_ATOMIC
+		ret = rte_atomic16_add_return(&m->refcnt_atomic, -1);
+#else
+		ret = --m->refcnt;
+#endif
+		if (ret != 0)
+			return NULL;
 
 		if (RTE_MBUF_INDIRECT(m))
 			rte_pktmbuf_detach(m);
@@ -1375,7 +1385,6 @@ rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
 
 		return m;
 	}
-	return NULL;
 }
 
 /* deprecated, replaced by rte_pktmbuf_prefree_seg() */
==============================

[1] http://dpdk.org/dev/patchwork/patch/31378/

Regards,
Olivier
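P.S. For completeness, a rough, untested sketch of the kind of "blind" update
helper Konstantin had in mind could look like the following. The name and the
helper itself are hypothetical, not an existing DPDK API; it simply mirrors
the #ifdef block in the patch above:

#include <rte_mbuf.h>

/* Hypothetical helper: update the reference counter without reading it
 * first and return the new value, for callers that have already ruled out
 * the refcnt == 1 fast path. */
static __rte_always_inline uint16_t
__rte_mbuf_refcnt_update_blind(struct rte_mbuf *m, int16_t value)
{
#ifdef RTE_MBUF_REFCNT_ATOMIC
	return (uint16_t)rte_atomic16_add_return(&m->refcnt_atomic, value);
#else
	m->refcnt = (uint16_t)(m->refcnt + value);
	return m->refcnt;
#endif
}

With such a helper, the else branch above would reduce to a single
"if (__rte_mbuf_refcnt_update_blind(m, -1) != 0) return NULL;", but as said,
I'm not sure the extra function is worth it for a single call site.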