DPDK patches and discussions
 help / color / mirror / Atom feed
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "vadim.suraev@gmail.com" <vadim.suraev@gmail.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
Date: Tue, 24 Mar 2015 11:00:07 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB97725821407246@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <2601191342CEEE43887BDE71AB977258214071C0@irsmsx105.ger.corp.intel.com>



Hi Vadim,
 
> From: Vadim Suraev [mailto:vadim.suraev@gmail.com]
> Sent: Tuesday, March 24, 2015 7:53 AM
> To: Ananyev, Konstantin
> Cc: Olivier MATZ; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
> 
> Hi, Konstantin,
> 
> >Though from my point, such function should be generic as rte_pktmbuf_free_chain() -
> >no special limitations like all mbufs from one pool, refcnt==1, etc.
> I misunderstood, I'll rework.

Ok, thanks.
Konstantin

> Regards,
>  Vadim.
> 
> On Tue, Mar 24, 2015 at 1:48 AM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> Hi Vadim,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vadim Suraev
> > Sent: Monday, March 23, 2015 5:31 PM
> > To: Olivier MATZ
> > Cc: dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] rte_mbuf: mbuf bulk alloc/free functions added + unittest
> >
> > Hi, Olivier,
> > No, I personally need to free a chain using mempool bulk. If I'm not
> > mistaken, it has been proposed by one of reviewers to have lower level
> > function, so it was done (I'm sorry if misunderstood)
> 
> Was it me?
> As I remember, I said it would be good to create rte_pktmbuf_bulk_free() or rte_pktmbuf_seg_bulk_free() -
> that would free a bulk of mbufs segments in the same manner as rte_pktmbuf_free_chain() does:
> count number of consecutive mbufs from the same mempool to be freed and then put them back into the pool at one go.
> Such function would be useful inside PMD code.
> In fact we already using analogy of such function inside vPMD TX code.
> Though from my point, such function should be generic as rte_pktmbuf_free_chain() -
> no special limitations like all mbufs from one pool, refcnt==1, etc.
> So if it was me who confused you - I am sorry.
> Konstantin
> 
> > Regards,
> > Vadim.
> > On Mar 23, 2015 8:44 PM, "Olivier MATZ" <olivier.matz@6wind.com> wrote:
> >
> > > Hi Neil,
> > >
> > > On 03/19/2015 02:16 PM, Neil Horman wrote:
> > > >> On 03/18/2015 09:58 PM, Neil Horman wrote:
> > > >>>> +/**
> > > >>>> + * Free a bulk of mbufs into its original mempool.
> > > >>>> + * This function assumes:
> > > >>>> + * - refcnt equals 1
> > > >>>> + * - mbufs are direct
> > > >>>> + * - all mbufs must belong to the same mempool
> > > >>>> + *
> > > >>>> + * @param mbufs
> > > >>>> + *    Array of pointers to mbuf
> > > >>>> + * @param count
> > > >>>> + *    Array size
> > > >>>> + */
> > > >>>> +static inline void rte_pktmbuf_bulk_free(struct rte_mbuf **mbufs,
> > > >>>> +                                   unsigned count)
> > > >>>> +{
> > > >>>> +  unsigned idx;
> > > >>>> +
> > > >>>> +  RTE_MBUF_ASSERT(count > 0);
> > > >>>> +
> > > >>>> +  for (idx = 0; idx < count; idx++) {
> > > >>>> +          RTE_MBUF_ASSERT(mbufs[idx]->pool == mbufs[0]->pool);
> > > >>>> +          RTE_MBUF_ASSERT(rte_mbuf_refcnt_read(mbufs[idx]) == 1);
> > > >>>> +          rte_mbuf_refcnt_set(mbufs[idx], 0);
> > > >>> This is really a misuse of the API.  The entire point of reference
> > > counting is
> > > >>> to know when an mbuf has no more references and can be freed.  By
> > > forcing all
> > > >>> the reference counts to zero here, you allow the refcnt infrastructure
> > > to be
> > > >>> circumvented, causing memory leaks.
> > > >>>
> > > >>> I think what you need to do here is enhance the underlying pktmbuf
> > > interface
> > > >>> such that an rte_mbuf structure has a destructor method association
> > > with it
> > > >>> which is called when its refcnt reaches zero.  That way the
> > > >>> rte_pktmbuf_bulk_free function can just decrement the refcnt on each
> > > >>> mbuf_structure, and the pool as a whole can be returned when the
> > > destructor
> > > >>> function discovers that all mbufs in that bulk pool are freed.
> > > >>
> > > >> I don't really understand what's the problem here. The API explicitly
> > > >> describes the conditions for calling this functions: the segments are
> > > >> directs, they belong to the same mempool, and their refcnt is 1.
> > > >>
> > > >> This function could be useful in a driver which knows that the mbuf
> > > >> it allocated matches this conditions. I think an application that
> > > >> only uses direct mbufs and one mempool could also use this function.
> > > >
> > > >
> > > > That last condition is my issue with this patch, that the user has to
> > > know what
> > > > refcnts are.  It makes this api useful for little more than the test
> > > case that
> > > > is provided with it.  Its irritating enough that for singly allocated
> > > mbufs the
> > > > user has to know what the refcount of a buffer is before freeing, but at
> > > least
> > > > they can macrotize a {rte_pktmbuf_refcnt_update;
> > > if(rte_pktmbuf_refct_read) then
> > > > free} operation.
> > > >
> > > > With this, you've placed the user in charge of not only doing that, but
> > > also of
> > > > managing the relationship between pktmbufs and the pool they came from.
> > > while
> > > > that makes sense for the test case, it really doesn't in any general use
> > > case in
> > > > which packet processing is ever deferred or queued, because it means
> > > that the
> > > > application is now responsible for holding a pointer to every packet it
> > > > allocates and checking its refcount periodically until it completes.
> > > >
> > > > There is never any reason that an application won't need to do this
> > > management,
> > > > so making it the purview of the application to handle rather than
> > > properly
> > > > integrating that functionality in the library is really a false savings.
> > >
> > > There are some places where you know that the prerequisites are met,
> > > so you can save cycles by using this function.
> > >
> > > From what I imagine, if in a driver you allocate mbufs, chain them and
> > > for some reason you realize you have to free them, you can use this
> > > function instead of freeing them one by one.
> > >
> > > Also, as it's up to the application to decide how many mbuf pools are
> > > created, and whether indirect mbufs are used or not, the application
> > > can take the short path of using this function in some conditions.
> > >
> > > Vadim, maybe you have another reason or use case for adding this
> > > function? Could you detail why you need it and how it improves your
> > > use case?
> > >
> > > Regards,
> > > Olivier
> > >


  parent reply	other threads:[~2015-03-24 11:03 UTC|newest]

Thread overview: 22+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2015-03-18 20:21 vadim.suraev
2015-03-18 20:58 ` Neil Horman
2015-03-19  8:41   ` Olivier MATZ
2015-03-19 10:06     ` Ananyev, Konstantin
2015-03-19 13:16     ` Neil Horman
2015-03-23 16:44       ` Olivier MATZ
2015-03-23 17:31         ` Vadim Suraev
2015-03-23 23:48           ` Ananyev, Konstantin
2015-03-24  7:53             ` Vadim Suraev
     [not found]               ` <2601191342CEEE43887BDE71AB977258214071C0@irsmsx105.ger.corp.intel.com>
2015-03-24 11:00                 ` Ananyev, Konstantin [this message]
2015-03-23 18:45         ` Neil Horman
2015-03-30 19:04   ` Vadim Suraev
2015-03-30 20:15     ` Neil Horman
  -- strict thread matches above, loose matches on Subject: below --
2015-03-17 21:36 vadim.suraev
2015-03-17 23:46 ` Ananyev, Konstantin
2015-03-18  5:19   ` Vadim Suraev
     [not found]     ` <2601191342CEEE43887BDE71AB977258213F7053@irsmsx105.ger.corp.intel.com>
2015-03-18  9:56       ` Ananyev, Konstantin
2015-03-18 10:41         ` Vadim Suraev
     [not found]           ` <2601191342CEEE43887BDE71AB977258213F7136@irsmsx105.ger.corp.intel.com>
2015-03-18 15:13             ` Ananyev, Konstantin
2015-03-19  8:13               ` Olivier MATZ
2015-03-19 10:47                 ` Ananyev, Konstantin
2015-03-19 10:54                   ` Olivier MATZ

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=2601191342CEEE43887BDE71AB97725821407246@irsmsx105.ger.corp.intel.com \
    --to=konstantin.ananyev@intel.com \
    --cc=dev@dpdk.org \
    --cc=vadim.suraev@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).