DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "<dev@dpdk.org>" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
Date: Mon, 29 Sep 2014 13:31:59 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB97725821387685@IRSMSX104.ger.corp.intel.com> (raw)
In-Reply-To: <20140929123415.GA16652@BRICHA3-MOBL>



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 29, 2014 1:34 PM
> To: Ananyev, Konstantin
> Cc: Wiles, Roger Keith (Wind River); <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> 
> On Mon, Sep 29, 2014 at 01:25:11PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Monday, September 29, 2014 1:06 PM
> > > To: Wiles, Roger Keith (Wind River)
> > > Cc: Ananyev, Konstantin; <dev@dpdk.org>
> > > Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > >
> > > On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> > > >
> > > > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> > > >
> > > > >
> > > > >
> > > > >> -----Original Message-----
> > > > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> > > > >> Sent: Sunday, September 28, 2014 6:52 PM
> > > > >> To: <dev@dpdk.org>
> > > > >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > > > >>
> > > > >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine;
> > > > >> please look at the code below and see if these seem to fix some concerns in how the ring is handled.
> > > > >>
> > > > >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req.
> > > > >> This also means that cache->len may still not satisfy the request from the cache.
> > > > >>
> > > > >> The second issue is that, if you accept the change above, we then have to account for it in the stats.
> > > > >>
> > > > >> Let me know what you think.
> > > > >> ++Keith
> > > > >> ---
> > > > >>
> > > > >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > > > >> index 199a493..b1b1f7a 100644
> > > > >> --- a/lib/librte_mempool/rte_mempool.h
> > > > >> +++ b/lib/librte_mempool/rte_mempool.h
> > > > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > > >>                   unsigned n, int is_mc)
> > > > >> {
> > > > >>        int ret;
> > > > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > > > >> -       unsigned n_orig = n;
> > > > >> -#endif
> > > > >
> > > > > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > > > > Though, on the other hand, it is harmless.
> > > > >
> > > > >> +
> > > > >> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > > > >>        struct rte_mempool_cache *cache;
> > > > >>        uint32_t index, len;
> > > > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > > >>                        goto ring_dequeue;
> > > > >>                }
> > > > >>
> > > > >> -               cache->len += req;
> > > > >> +               cache->len += ret;      // Need to adjust len by ret, not req, as ret may not equal req
> > > > >> +
> > > > >
> > > > > rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > > > > or would not get any entries from the ring and return a negative value (failure).
> > > > > So this change is erroneous.
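
To illustrate the all-or-nothing semantics described above, here is a minimal
sketch (illustration only, not part of the original patch; 'toy_cache' is a
simplified stand-in for the real per-lcore cache, not the rte_mempool
internals):

#include <rte_ring.h>

/* Simplified stand-in for the mempool's per-lcore object cache. */
struct toy_cache {
	unsigned int len;
	void *objs[512];
};

/*
 * Refill sketch assuming the bulk semantics described above:
 * rte_ring_mc_dequeue_bulk() either dequeues all 'req' objects and returns 0,
 * or dequeues nothing and returns a negative value. Because it is
 * all-or-nothing, 'cache->len += req' (not '+= ret') is the right update on
 * the success path.
 */
static inline int
toy_refill_cache(struct rte_ring *r, struct toy_cache *cache, unsigned int req)
{
	int ret = rte_ring_mc_dequeue_bulk(r, &cache->objs[cache->len], req);

	if (ret < 0)
		return -1;	/* ring could not supply all 'req' objects; got none */

	cache->len += req;	/* success: exactly 'req' objects were dequeued */
	return 0;
}
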
> > > >
> > > > Sorry, I combined my thoughts on changing the get_bulk behavior; you are correct for the current design. This is why I
> > > > decided to make it an RFC :-)
> > > > >
> > > > >> +               if ( cache->len < n ) {
> > > > >
> > > > > If n > cache_size, then we go straight to 'ring_dequeue', see line 959.
> > > > > So there is no need for that check here.
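
For reference, a small sketch of the guard being referred to (a paraphrase,
not a verbatim quote of rte_mempool.h): requests that cannot fit in the
per-lcore cache bypass it entirely, so the refill path never runs with
n >= cache_size.

#include <stdbool.h>
#include <stdint.h>
#include <rte_branch_prediction.h>

/* Paraphrased condition under which __mempool_get_bulk() skips the per-lcore
 * cache and dequeues straight from the ring; the proposed
 * 'if (cache->len < n)' check would therefore never be reached for large n. */
static inline bool
mempool_bypasses_cache(uint32_t cache_size, int is_mc, unsigned int n)
{
	return unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size);
}
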
> > > >
> > > > My thinking (at the time) was that get_bulk should return ’n’ instead of zero, which I feel is the better coding. You are
> > > > correct that it does not make sense unless you factor in my thinking at the time :-(
> > > > >
> > > > >> +                       /*
> > > > >> +                        * The number (ret + cache->len) may not be >= n, as
> > > > >> +                        * the 'ret' value may be zero or less than 'req'.
> > > > >> +                        *
> > > > >> +                        * Note:
> > > > >> +                        * Ordering between the cache and the common pool could
> > > > >> +                        * be an issue if (cache->len != 0 and less than n), but
> > > > >> +                        * in the normal case it should be OK. If the user needs
> > > > >> +                        * to preserve the order of packets, then he must set
> > > > >> +                        * cache_size == 0.
> > > > >> +                        */
> > > > >> +                       goto ring_dequeue;
> > > > >> +               }
> > > > >>        }
> > > > >>
> > > > >>        /* Now fill in the response ... */
> > > > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > > > >>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > > > >>
> > > > >>        if (ret < 0)
> > > > >> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > > > >> -       else
> > > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> > > > >> +       else {
> > > > >>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
> > > > >> +               // Catch the case when ret != n, adding zero should not be a problem.
> > > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > > > >
> > > > > As I said above, ret == 0 on success, so there is no need for that change.
> > > > > Just n (or n_orig) is OK here.
> > > > >
> > > > >> +       }
> > > > >>
> > > > >>        return ret;
> > > > >> }
> > > > >>
> > > > >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > > >
> > > > Do we think it is worth it to change the behavior of get_bulk to return ’n’ instead of zero on success? It would remove a few
> > > > tests, IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code did not try to
> > > > follow the success case by mistake.
> > >
> > > If you want to have such a function, I think it should align with the
> > > functions on the rings. In this case, this would mean having a get_burst
> > > function, which returns less than or equal to the number of elements
> > > requested. I would not change the behaviour of the existing function
> > > without also changing the rings "bulk" function to match.
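
For concreteness, a hypothetical sketch of what such a function might look
like (no mempool_get_burst() exists in the tree at this point; the sketch
simply forwards to the ring burst dequeue, skipping the per-lcore cache, and
assumes mp->ring is the pool's backing ring as in this era's rte_mempool.h):

#include <rte_mempool.h>
#include <rte_ring.h>

/*
 * Hypothetical burst-style getter mirroring the ring semantics: it returns
 * the number of objects actually obtained, anywhere from 0 to n, rather than
 * the all-or-nothing 0 / negative result of rte_mempool_get_bulk().
 */
static inline unsigned int
mempool_get_burst(struct rte_mempool *mp, void **obj_table, unsigned int n)
{
	return rte_ring_mc_dequeue_burst(mp->ring, obj_table, n);
}
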
> >
> > Do you mean a mempool_get_burst() that could return fewer objects than you requested?
> > If so, I wonder what the usage model for it would be?
> > To me it sounds a bit strange - like a malloc() (or mmap()) that could allocate only part of the memory you requested.
> >
> 
> I would expect it to be as useful as the rings versions. :-)

My understanding is that the ring_*_burst() functions are mainly used for
passing packets between different threads/processes (a sort of IPC).
For that case, burst() seems a much better choice than bulk().

> I don't actually see a problem with it. For example: if you have 10 messages
> you want to create and send, I see no reason why, if there are only 8
> buffers free, you can't create and send 8 and then try again for the last 2
> once you are finished with those 8.
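
A sketch of that usage pattern, building on the hypothetical
mempool_get_burst() above (send_one() is a placeholder for whatever the
application does with each buffer, and is assumed to return buffers to the
pool when done):

/* Placeholder for the application's per-buffer work. */
static void send_one(void *buf);

static void
send_n_messages(struct rte_mempool *mp, unsigned int total)
{
	void *bufs[32];
	unsigned int left = total;

	while (left > 0) {
		unsigned int want = left < 32 ? left : 32;
		/* May return fewer than 'want' when the pool is short. */
		unsigned int got = mempool_get_burst(mp, bufs, want);
		unsigned int i;

		for (i = 0; i < got; i++)
			send_one(bufs[i]);

		left -= got;
		/* Real code should back off or do other useful work when
		 * got == 0, rather than spinning on an empty pool. */
	}
}
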

The whole idea of partially successful memory allocation still sounds a bit strange to me.
But if you believe there is a use case for it - I am ok with it :)
 
Konstantin

> That being said, the reason for suggesting that behaviour was to align with
> the rings APIs. If we are looking at breaking backward compatibility (for
> both rings and mempools) other options can be considered too.
> 
> /Bruce
> 
> > Konstantin
> >
> >
> > > /Bruce
> > >
> > > > >
> > > > > NACK in summary.
> > > > > Konstantin
> > > >
> > > > Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > > >


Thread overview: 8+ messages
2014-09-28 17:52 Wiles, Roger Keith
2014-09-28 20:42 ` Neil Horman
2014-09-28 22:41 ` Ananyev, Konstantin
2014-09-28 23:17   ` Wiles, Roger Keith
2014-09-29 12:06     ` Bruce Richardson
2014-09-29 12:25       ` Ananyev, Konstantin
2014-09-29 12:34         ` Bruce Richardson
2014-09-29 13:31           ` Ananyev, Konstantin [this message]
