From: Vadim Suraev
To: "Ananyev, Konstantin"
Cc: dev@dpdk.org
Date: Fri, 27 Feb 2015 14:18:40 +0200
Subject: Re: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
In-Reply-To: <2601191342CEEE43887BDE71AB977258213F2C93@irsmsx105.ger.corp.intel.com>
References: <1424992506-20484-1-git-send-email-vadim.suraev@gmail.com>
 <2601191342CEEE43887BDE71AB977258213F2C93@irsmsx105.ger.corp.intel.com>

Hi, Konstantin,

> Seems really useful.
> One thought - why to introduce the limitation that all mbufs have to be from the same mempool?
> I think you can reorder it a bit, so it can handle the situation when chained mbufs belong to different mempools.

I had a doubt; my concern was how practical that (multiple mempools) case is.
Do you think there should be two versions: a lightweight one (with the
restriction) and a generic one?

> Actually probably would be another useful function to have:
> rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num);

Yes, this could be a sub-routine of rte_pktmbuf_free_chain().

Regards,
Vadim.

On Feb 27, 2015 3:18 PM, "Ananyev, Konstantin" wrote:

> Hi Vadim,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of vadim.suraev@gmail.com
> > Sent: Thursday, February 26, 2015 11:15 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH] rte_mbuf: scattered pktmbufs freeing optimization
> >
> > From: "vadim.suraev@gmail.com"
> >
> > The new function rte_pktmbuf_free_bulk makes freeing long
> > scattered (chained) pktmbufs belonging to the same pool
> > more efficient by using rte_mempool_put_bulk rather than calling
> > rte_mempool_put for each segment. Unlike rte_pktmbuf_free,
> > which calls rte_pktmbuf_free_seg, this function calls
> > __rte_pktmbuf_prefree_seg. If non-NULL is returned, the pointer
> > is placed in an array. When the array is filled, or when the last
> > segment is processed, rte_mempool_put_bulk is called. In the
> > multiple-producer case it performs 3 times better.
> > > >
> > Signed-off-by: vadim.suraev@gmail.com
> > ---
> >  lib/librte_mbuf/rte_mbuf.h | 55 ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 55 insertions(+)
> >
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index 17ba791..1d6f848 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -824,6 +824,61 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> >  	}
> >  }
> >
> > +/* This macro defines the maximum bulk size of mbufs to free in rte_pktmbuf_free_bulk */
> > +#define MAX_MBUF_FREE_SIZE 32
> > +
> > +/* If RTE_LIBRTE_MBUF_DEBUG is enabled, check that all mbufs belong to the same mempool */
> > +#ifdef RTE_LIBRTE_MBUF_DEBUG
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK1(m) struct rte_mempool *first_buffers_mempool = (m) ? (m)->pool : NULL
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK2(m) RTE_MBUF_ASSERT(first_buffers_mempool == (m)->pool)
> > +
> > +#else
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK1(m)
> > +
> > +#define RTE_MBUF_MEMPOOL_CHECK2(m)
> > +
> > +#endif
> > +
> > +/**
> > + * Free a chained (scattered) mbuf into its original mempool.
> > + *
> > + * All the mbufs in the chain must belong to the same mempool.
>
> Seems really useful.
> One thought - why to introduce the limitation that all mbufs have to be from the same mempool?
> I think you can reorder it a bit, so it can handle the situation when chained mbufs belong to different mempools.
> Something like:
> ...
> mbufs[mbufs_count] = head;
> if (unlikely(head->pool != mbufs[0]->pool || mbufs_count == RTE_DIM(mbufs) - 1)) {
>         rte_mempool_put_bulk(mbufs[0]->pool, mbufs, mbufs_count);
>         mbufs[0] = mbufs[mbufs_count];
>         mbufs_count = 0;
> }
> mbufs_count++;
> ...
>
> Another nit: probably better to name it rte_pktmbuf_free_chain() or something?
> For me _bulk implies that we have an array of mbufs that we need to free.
> Actually probably would be another useful function to have:
> rte_pktmbuf_free_seg_bulk(struct rte_mbuf *m[], uint32_t num);
>
> Konstantin
>
> > + *
> > + * @param head
> > + *   The head of the mbuf chain to be freed
> > + */
> > +static inline void __attribute__((always_inline))
> > +rte_pktmbuf_free_bulk(struct rte_mbuf *head)
> > +{
> > +	void *mbufs[MAX_MBUF_FREE_SIZE];
> > +	unsigned mbufs_count = 0;
> > +	struct rte_mbuf *next;
> > +
> > +	RTE_MBUF_MEMPOOL_CHECK1(head);
> > +
> > +	while (head) {
> > +		next = head->next;
> > +		head->next = NULL;
> > +		if (__rte_pktmbuf_prefree_seg(head)) {
> > +			RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(head) == 0);
> > +			RTE_MBUF_MEMPOOL_CHECK2(head);
> > +			mbufs[mbufs_count++] = head;
> > +		}
> > +		head = next;
> > +		if (mbufs_count == MAX_MBUF_FREE_SIZE) {
> > +			rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool, mbufs, mbufs_count);
> > +			mbufs_count = 0;
> > +		}
> > +	}
> > +	if (mbufs_count > 0) {
> > +		rte_mempool_put_bulk(((struct rte_mbuf *)mbufs[0])->pool, mbufs, mbufs_count);
> > +	}
> > +}
> > +
> >  /**
> >   * Creates a "clone" of the given packet mbuf.
> >   *
> > --
> > 1.7.9.5
> >