DPDK patches and discussions
From: Neil Horman <nhorman@tuxdriver.com>
To: Vadim Suraev <vadim.suraev@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
Date: Mon, 30 Mar 2015 16:15:19 -0400	[thread overview]
Message-ID: <20150330201519.GB24100@hmsreliant.think-freely.org> (raw)
In-Reply-To: <CAJ0CJ8mn9R4CAKp-M1NE2Td=L+GQnxGvpdeBOVQKDa0jYg9RZw@mail.gmail.com>

On Mon, Mar 30, 2015 at 10:04:20PM +0300, Vadim Suraev wrote:
> Hi, Neil
> 
> >I think what you need to do here is enhance the underlying pktmbuf interface
> >such that an rte_mbuf structure has a destructor method associated with it
> >which is called when its refcnt reaches zero.  That way the
> >rte_pktmbuf_bulk_free function can just decrement the refcnt on each
> >mbuf structure, and the pool as a whole can be returned when the destructor
> >function discovers that all mbufs in that bulk pool are freed.
> 
> I thought again, and it looks to me that if mempool_cache is enabled,
> rte_pktmbuf_bulk_free and rte_pktmbuf_free_chain are redundant, because the
> logic would be very similar to what is already implemented in rte_mempool.
> Probably only rte_pktmbuf_alloc_bulk makes sense in this patch?
> 
> Regards,
>  Vadim.
> 
Looking at it, yes, I agree, using an externally allocated large contiguous
block of memory, mapped with rte_mempool_xmem_create, then allocating with
rte_pktmbuf_alloc would likely work in exactly the same way.  I'd argue that
even the bulk alloc function isn't really needed, as its implementation seems
like it would just be a for loop with 2-3 lines in it.
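For illustration only, such a loop could be sketched like this (this is not
part of the patch; it just uses the existing rte_pktmbuf_alloc()/
rte_pktmbuf_free() calls, and the helper name is made up):

#include <errno.h>
#include <rte_mbuf.h>

/* Sketch of a caller-side bulk allocation loop; on failure, everything
 * allocated so far is returned to the pool. */
static inline int
pktmbuf_alloc_loop(struct rte_mempool *pool, struct rte_mbuf **mbufs,
		   unsigned count)
{
	unsigned i;

	for (i = 0; i < count; i++) {
		mbufs[i] = rte_pktmbuf_alloc(pool);
		if (mbufs[i] == NULL) {
			while (i-- > 0)
				rte_pktmbuf_free(mbufs[i]);
			return -ENOMEM;
		}
	}
	return 0;
}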

Neil

> On Wed, Mar 18, 2015 at 10:58 PM, Neil Horman <nhorman@tuxdriver.com> wrote:
> 
> > On Wed, Mar 18, 2015 at 10:21:18PM +0200, vadim.suraev@gmail.com wrote:
> > > From: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>
> > >
> > > This patch adds mbuf bulk allocation/freeing functions and a unit test
> > >
> > > Signed-off-by: Vadim Suraev <vadim.suraev@gmail.com>
> > > ---
> > > New in v2:
> > >     - function rte_pktmbuf_alloc_bulk added
> > >     - function rte_pktmbuf_bulk_free added
> > >     - function rte_pktmbuf_free_chain added
> > >     - applied reviewers' comments
> > >
> > >  app/test/test_mbuf.c       |   94 +++++++++++++++++++++++++++++++++++++++++++-
> > >  lib/librte_mbuf/rte_mbuf.h |   91 ++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 184 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > index 1ff66cb..b20c6a4 100644
> > > --- a/app/test/test_mbuf.c
> > > +++ b/app/test/test_mbuf.c
> > > @@ -77,6 +77,7 @@
> > >  #define REFCNT_RING_SIZE        (REFCNT_MBUF_NUM * REFCNT_MAX_REF)
> > >
> > >  #define MAKE_STRING(x)          # x
> > > +#define MBUF_POOL_LOCAL_CACHE_SIZE 32
> > >
> > >  static struct rte_mempool *pktmbuf_pool = NULL;
> > >
> > > @@ -405,6 +406,84 @@ test_pktmbuf_pool(void)
> > >       return ret;
> > >  }
> > >
> > ><snip>
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 17ba791..fabeae2 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -825,6 +825,97 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
> > >  }
> > >
> > >  /**
> > > + * Allocate a bulk of mbufs, initialize their refcnt and reset them
> > > + *
> > > + * @param pool
> > > + *    memory pool to allocate from
> > > + * @param mbufs
> > > + *    Array of pointers to mbuf
> > > + * @param count
> > > + *    Array size
> > > + */
> > > +static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
> > > +                                      struct rte_mbuf **mbufs,
> > > +                                      unsigned count)
> > > +{
> > > +     unsigned idx;
> > > +     int rc = 0;
> > > +
> > > +     rc = rte_mempool_get_bulk(pool, (void **)mbufs, count);
> > > +     if (unlikely(rc))
> > > +             return rc;
> > > +
> > > +     for (idx = 0; idx < count; idx++) {
> > > +             RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 0);
> > > +             rte_mbuf_refcnt_set(mbufs[idx], 1);
> > > +             rte_pktmbuf_reset(mbufs[idx]);
> > > +     }
> > > +     return rc;
> > > +}
> > > +
> > > +/**
> > > + * Free a bulk of mbufs into their original mempool.
> > > + * This function assumes:
> > > + * - refcnt equals 1
> > > + * - mbufs are direct
> > > + * - all mbufs must belong to the same mempool
> > > + *
> > > + * @param mbufs
> > > + *    Array of pointers to mbuf
> > > + * @param count
> > > + *    Array size
> > > + */
> > > +static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs,
> > > +                                      unsigned count)
> > > +{
> > > +     unsigned idx;
> > > +
> > > +     RTE_MBUF_ASSERT(count > 0);
> > > +
> > > +     for (idx = 0; idx < count; idx++) {
> > > +             RTE_MBUF_ASSERT(mbufs[idx]->pool == mbufs[0]->pool);
> > > +             RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 1);
> > > +             rte_mbuf_refcnt_set(mbufs[idx], 0);
> > This is really a misuse of the API.  The entire point of reference counting
> > is to know when an mbuf has no more references and can be freed.  By forcing
> > all the reference counts to zero here, you allow the refcnt infrastructure
> > to be circumvented, causing memory leaks.
> >
> > I think what you need to do here is enhance the underlying pktmbuf interface
> > such that an rte_mbuf structure has a destructor method associated with it
> > which is called when its refcnt reaches zero.  That way the
> > rte_pktmbuf_bulk_free function can just decrement the refcnt on each
> > mbuf structure, and the pool as a whole can be returned when the destructor
> > function discovers that all mbufs in that bulk pool are freed.
> >
> > Neil
> >
> >
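
For illustration, the destructor idea floated in this thread could be sketched
roughly as below. None of these hooks exist in the rte_mbuf API under
discussion; the type, field, and function names are invented purely to show
the shape of the suggestion.

#include <rte_mbuf.h>

/* Hypothetical per-mbuf destructor hook (not existing DPDK API). */
typedef void (*mbuf_destructor_t)(struct rte_mbuf *m, void *arg);

struct mbuf_dtor_ctx {
	mbuf_destructor_t cb;   /* invoked when the refcnt drops to zero */
	void *arg;              /* e.g. the bulk block the mbuf belongs to */
};

/* A free path along these lines only decrements the refcnt; the
 * destructor decides when the whole bulk block can be returned. */
static inline void
mbuf_free_with_dtor(struct rte_mbuf *m, struct mbuf_dtor_ctx *ctx)
{
	if (rte_mbuf_refcnt_update(m, -1) == 0)
		ctx->cb(m, ctx->arg);
}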


Thread overview: 22+ messages
2015-03-18 20:21 vadim.suraev
2015-03-18 20:58 ` Neil Horman
2015-03-19  8:41   ` Olivier MATZ
2015-03-19 10:06     ` Ananyev, Konstantin
2015-03-19 13:16     ` Neil Horman
2015-03-23 16:44       ` Olivier MATZ
2015-03-23 17:31         ` Vadim Suraev
2015-03-23 23:48           ` Ananyev, Konstantin
2015-03-24  7:53             ` Vadim Suraev
     [not found]               ` <2601191342CEEE43887BDE71AB977258214071C0@irsmsx105.ger.corp.intel.com>
2015-03-24 11:00                 ` Ananyev, Konstantin
2015-03-23 18:45         ` Neil Horman
2015-03-30 19:04   ` Vadim Suraev
2015-03-30 20:15     ` Neil Horman [this message]
  -- strict thread matches above, loose matches on Subject: below --
2015-03-17 21:36 vadim.suraev
2015-03-17 23:46 ` Ananyev, Konstantin
2015-03-18  5:19   ` Vadim Suraev
     [not found]     ` <2601191342CEEE43887BDE71AB977258213F7053@irsmsx105.ger.corp.intel.com>
2015-03-18  9:56       ` Ananyev, Konstantin
2015-03-18 10:41         ` Vadim Suraev
     [not found]           ` <2601191342CEEE43887BDE71AB977258213F7136@irsmsx105.ger.corp.intel.com>
2015-03-18 15:13             ` Ananyev, Konstantin
2015-03-19  8:13               ` Olivier MATZ
2015-03-19 10:47                 ` Ananyev, Konstantin
2015-03-19 10:54                   ` Olivier MATZ
