DPDK patches and discussions
* [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
@ 2014-09-28 17:52 Wiles, Roger Keith
  2014-09-28 20:42 ` Neil Horman
  2014-09-28 22:41 ` Ananyev, Konstantin
  0 siblings, 2 replies; 8+ messages in thread
From: Wiles, Roger Keith @ 2014-09-28 17:52 UTC
  To: <dev@dpdk.org>

Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look at the code below and see whether these changes address some concerns in how the ring is handled.

The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means cache->len may still not be able to satisfy the request from the cache.

The second issue is that, if you accept the change above, we also have to account for it in the stats.

Let me know what you think.
++Keith
———

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 199a493..b1b1f7a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
                   unsigned n, int is_mc)
 {
        int ret;
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-       unsigned n_orig = n;
-#endif
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
        struct rte_mempool_cache *cache;
        uint32_t index, len;
@@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
                        goto ring_dequeue;
                }
 
-               cache->len += req;
+               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
+
+               if ( cache->len < n ) {
+                       /*
+                        * The number (ret + cache->len) may not be >= n, as
+                        * the 'ret' value may be zero or less than 'req'.
+                        *
+                        * Note:
+                        * Ordering between the cache and the common pool could
+                        * be an issue if (cache->len != 0 and less than n), but in
+                        * the normal case it should be OK. If the user needs to
+                        * preserve the order of packets then he must set cache_size == 0.
+                        */
+                       goto ring_dequeue;
+               }
        }
 
        /* Now fill in the response ... */
@@ -1002,9 +1014,12 @@ ring_dequeue:
                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
 
        if (ret < 0)
-               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
-       else
+               __MEMPOOL_STAT_ADD(mp, get_fail, n);
+       else {
                __MEMPOOL_STAT_ADD(mp, get_success, ret);
+               // Catch the case when ret != n; adding zero should not be a problem.
+               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
+       }
 
        return ret;
 }

Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-28 17:52 [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk() Wiles, Roger Keith
@ 2014-09-28 20:42 ` Neil Horman
  2014-09-28 22:41 ` Ananyev, Konstantin
  1 sibling, 0 replies; 8+ messages in thread
From: Neil Horman @ 2014-09-28 20:42 UTC
  To: Wiles, Roger Keith; +Cc: <dev@dpdk.org>

On Sun, Sep 28, 2014 at 05:52:16PM +0000, Wiles, Roger Keith wrote:
> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look at the code below and see whether these changes address some concerns in how the ring is handled.
> 
> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means cache->len may still not be able to satisfy the request from the cache.
> 
ret == req should be guaranteed.  As documented, rte_ring_mc_dequeue_bulk, when
called with behavior == FIXED (which it is internally), returns 0 iff the entire
request was satisfied, so we can safely add req.  That said, I've not validated
that it always does what's documented, but if it doesn't, the fix should likely
be internal to the function, not external to it.
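
As a minimal sketch of that documented contract (assuming it holds as
described; this is illustrative, not the DPDK source):

    /* FIXED-behavior dequeue is all-or-nothing: it either dequeues all
     * 'req' objects and returns 0, or dequeues nothing and returns a
     * negative value, so there is no partial success to account for. */
    ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
    if (ret == 0)
            cache->len += req;      /* all 'req' objects were dequeued */
    else
            goto ring_dequeue;      /* ret < 0: nothing was dequeued */
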
Neil

> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> 
> Let me know what you think.
> ++Keith
> ———
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 199a493..b1b1f7a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                    unsigned n, int is_mc)
>  {
>         int ret;
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -       unsigned n_orig = n;
> -#endif
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>         struct rte_mempool_cache *cache;
>         uint32_t index, len;
> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                         goto ring_dequeue;
>                 }
>  
> -               cache->len += req;
> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> +
> +               if ( cache->len < n ) {
> +                       /*
> +                        * The number (ret + cache->len) may not be >= n, as
> +                        * the 'ret' value may be zero or less than 'req'.
> +                        *
> +                        * Note:
> +                        * Ordering between the cache and the common pool could
> +                        * be an issue if (cache->len != 0 and less than n), but in
> +                        * the normal case it should be OK. If the user needs to
> +                        * preserve the order of packets then he must set cache_size == 0.
> +                        */
> +                       goto ring_dequeue;
> +               }
>         }
>  
>         /* Now fill in the response ... */
> @@ -1002,9 +1014,12 @@ ring_dequeue:
>                 ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>  
>         if (ret < 0)
> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> -       else
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> +       else {
>                 __MEMPOOL_STAT_ADD(mp, get_success, ret);
> +               // Catch the case when ret != n; adding zero should not be a problem.
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> +       }
>  
>         return ret;
>  }
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
> 


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-28 17:52 [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk() Wiles, Roger Keith
  2014-09-28 20:42 ` Neil Horman
@ 2014-09-28 22:41 ` Ananyev, Konstantin
  2014-09-28 23:17   ` Wiles, Roger Keith
  1 sibling, 1 reply; 8+ messages in thread
From: Ananyev, Konstantin @ 2014-09-28 22:41 UTC
  To: Wiles, Roger Keith (Wind River), <dev@dpdk.org>



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> Sent: Sunday, September 28, 2014 6:52 PM
> To: <dev@dpdk.org>
> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> 
> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> at the code below and see whether these changes address some concerns in how the ring is handled.
> 
> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
> cache->len may still not be able to satisfy the request from the cache.
> 
> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> 
> Let me know what you think.
> ++Keith
> ---
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 199a493..b1b1f7a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                    unsigned n, int is_mc)
>  {
>         int ret;
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -       unsigned n_orig = n;
> -#endif

Yep, as I said in my previous mail, n_orig could be removed entirely.
Though, on the other hand, it is harmless.

> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>         struct rte_mempool_cache *cache;
>         uint32_t index, len;
> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                         goto ring_dequeue;
>                 }
> 
> -               cache->len += req;
> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> +

rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
or would not get any entries from the ring and return a negative value (failure).
So this change is erroneous.
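
Concretely, under that all-or-nothing contract ret == 0 on success, so the
patched line would add nothing even though 'req' objects were just placed
into the cache. A sketch of the resulting failure:

    ret = rte_ring_mc_dequeue_bulk(mp->ring, &cache->objs[cache->len], req);
    /* on success ret == 0 and 'req' objects now sit in the cache ... */
    cache->len += ret;      /* ... but this adds 0, so they are "lost" */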

> +               if ( cache->len < n ) {

If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
So there is no need for that check here.
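
A sketch of the guard being referred to (paraphrased from memory; treat the
exact condition as approximate):

    /* cache disabled, single consumer, or request too big for the cache:
     * bypass the cache and dequeue from the ring directly */
    if (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))
            goto ring_dequeue;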

> +                       /*
> +                        * The number (ret + cache->len) may not be >= n, as
> +                        * the 'ret' value may be zero or less than 'req'.
> +                        *
> +                        * Note:
> +                        * Ordering between the cache and the common pool could
> +                        * be an issue if (cache->len != 0 and less than n), but in
> +                        * the normal case it should be OK. If the user needs to
> +                        * preserve the order of packets then he must set cache_size == 0.
> +                        */
> +                       goto ring_dequeue;
> +               }
>         }
> 
>         /* Now fill in the response ... */
> @@ -1002,9 +1014,12 @@ ring_dequeue:
>                 ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> 
>         if (ret < 0)
> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> -       else
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> +       else {
>                 __MEMPOOL_STAT_ADD(mp, get_success, ret);
> +               // Catch the case when ret != n; adding zero should not be a problem.
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);

As I said above, ret == 0 on success, so there is no need for that change.
Just n (or n_orig) is OK here.

> +       }
> 
>         return ret;
>  }
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533

NACK in summary.
Konstantin


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-28 22:41 ` Ananyev, Konstantin
@ 2014-09-28 23:17   ` Wiles, Roger Keith
  2014-09-29 12:06     ` Bruce Richardson
  0 siblings, 1 reply; 8+ messages in thread
From: Wiles, Roger Keith @ 2014-09-28 23:17 UTC
  To: ANANYEV, KONSTANTIN; +Cc: <dev@dpdk.org>


On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:

> 
> 
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
>> Sent: Sunday, September 28, 2014 6:52 PM
>> To: <dev@dpdk.org>
>> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
>> 
>> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
>> at the code below and see whether these changes address some concerns in how the ring is handled.
>> 
>> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
>> cache->len may still not be able to satisfy the request from the cache.
>> 
>> The second issue is that, if you accept the change above, we also have to account for it in the stats.
>> 
>> Let me know what you think.
>> ++Keith
>> ---
>> 
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 199a493..b1b1f7a 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>                   unsigned n, int is_mc)
>> {
>>        int ret;
>> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>> -       unsigned n_orig = n;
>> -#endif
> 
> Yep, as I said in my previous mail, n_orig could be removed entirely.
> Though, on the other hand, it is harmless.
> 
>> +
>> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>        struct rte_mempool_cache *cache;
>>        uint32_t index, len;
>> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>                        goto ring_dequeue;
>>                }
>> 
>> -               cache->len += req;
>> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
>> +
> 
> rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
> or would not get any entries from the ring and return a negative value (failure).
> So this change is erroneous.

Sorry, I combined my thoughts on changing the get_bulk behavior with this patch; you are correct for the current design. This is why I decided to make it an RFC :-)
> 
>> +               if ( cache->len < n ) {
> 
> If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
> So there is no need for that check here.

My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better design. You are correct that it does not make sense unless you factor in my thinking at the time :-(
> 
>> +                       /*
>> +                        * The number (ret + cache->len) may not be >= n, as
>> +                        * the 'ret' value may be zero or less than 'req'.
>> +                        *
>> +                        * Note:
>> +                        * Ordering between the cache and the common pool could
>> +                        * be an issue if (cache->len != 0 and less than n), but in
>> +                        * the normal case it should be OK. If the user needs to
>> +                        * preserve the order of packets then he must set cache_size == 0.
>> +                        */
>> +                       goto ring_dequeue;
>> +               }
>>        }
>> 
>>        /* Now fill in the response ... */
>> @@ -1002,9 +1014,12 @@ ring_dequeue:
>>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>> 
>>        if (ret < 0)
>> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
>> -       else
>> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
>> +       else {
>>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
>> +               // Catch the case when ret != n; adding zero should not be a problem.
>> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> 
> As I said above, ret == 0 on success, so there is no need for that change.
> Just n (or n_orig) is OK here.
> 
>> +       }
>> 
>>        return ret;
>> }
>> 
>> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533

Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests, IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code does not follow the success path by mistake.
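
Something like this, as a sketch of the proposed (hypothetical) contract:

    /* Proposed: return the number of objects obtained on success. */
    ret = rte_mempool_get_bulk(mp, obj_table, n);
    if (ret == (int)n) {
            /* success: all n objects were obtained */
    } else {
            /* ret < 0: failure, nothing was obtained */
    }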
> 
> NACK in summary.
> Konstantin

Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-28 23:17   ` Wiles, Roger Keith
@ 2014-09-29 12:06     ` Bruce Richardson
  2014-09-29 12:25       ` Ananyev, Konstantin
  0 siblings, 1 reply; 8+ messages in thread
From: Bruce Richardson @ 2014-09-29 12:06 UTC
  To: Wiles, Roger Keith; +Cc: <dev@dpdk.org>

On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> 
> On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> 
> > 
> > 
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> >> Sent: Sunday, September 28, 2014 6:52 PM
> >> To: <dev@dpdk.org>
> >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> >> 
> >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> >> at the code below and see whether these changes address some concerns in how the ring is handled.
> >> 
> >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
> >> cache->len may still not be able to satisfy the request from the cache.
> >> 
> >> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> >> 
> >> Let me know what you think.
> >> ++Keith
> >> ---
> >> 
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index 199a493..b1b1f7a 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>                   unsigned n, int is_mc)
> >> {
> >>        int ret;
> >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> >> -       unsigned n_orig = n;
> >> -#endif
> > 
> > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > Though, on the other hand, it is harmless.
> > 
> >> +
> >> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> >>        struct rte_mempool_cache *cache;
> >>        uint32_t index, len;
> >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>                        goto ring_dequeue;
> >>                }
> >> 
> >> -               cache->len += req;
> >> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> >> +
> > 
> > rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > or would not get any entries from the ring and return a negative value (failure).
> > So this change is erroneous.
> 
> Sorry, I combined my thoughts on changing the get_bulk behavior with this patch; you are correct for the current design. This is why I decided to make it an RFC :-)
> > 
> >> +               if ( cache->len < n ) {
> > 
> > If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
> > So there is no need for that check here.
> 
> My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better design. You are correct that it does not make sense unless you factor in my thinking at the time :-(
> > 
> >> +                       /*
> >> +                        * The number (ret + cache->len) may not be >= n, as
> >> +                        * the 'ret' value may be zero or less than 'req'.
> >> +                        *
> >> +                        * Note:
> >> +                        * Ordering between the cache and the common pool could
> >> +                        * be an issue if (cache->len != 0 and less than n), but in
> >> +                        * the normal case it should be OK. If the user needs to
> >> +                        * preserve the order of packets then he must set cache_size == 0.
> >> +                        */
> >> +                       goto ring_dequeue;
> >> +               }
> >>        }
> >> 
> >>        /* Now fill in the response ... */
> >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> >>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> >> 
> >>        if (ret < 0)
> >> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> >> -       else
> >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> >> +       else {
> >>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
> >> +               // Catch the case when ret != n; adding zero should not be a problem.
> >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > 
> > As I said above, ret == 0 on success, so there is no need for that change.
> > Just n (or n_orig) is OK here.
> > 
> >> +       }
> >> 
> >>        return ret;
> >> }
> >> 
> >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
> Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests, IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code does not follow the success path by mistake.

If you want to have such a function, I think it should align with the
functions on the rings. In this case, this would mean having a get_burst
function, which returns less than or equal to the number of elements
requested. I would not change the behaviour of the existing function
without also changing the rings "bulk" function to match.
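
A sketch of what such a (hypothetical) function might look like, mirroring
the ring burst API; the name and signature here are illustrative only:

    /* Hypothetical: dequeue up to n objects from the pool; returns the
     * number actually obtained, anywhere from 0 to n, in the style of
     * rte_ring_*_dequeue_burst(). */
    static inline unsigned
    rte_mempool_get_burst(struct rte_mempool *mp, void **obj_table, unsigned n);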

/Bruce

> > 
> > NACK in summary.
> > Konstantin
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-29 12:06     ` Bruce Richardson
@ 2014-09-29 12:25       ` Ananyev, Konstantin
  2014-09-29 12:34         ` Bruce Richardson
  0 siblings, 1 reply; 8+ messages in thread
From: Ananyev, Konstantin @ 2014-09-29 12:25 UTC
  To: Richardson, Bruce, Wiles, Roger Keith (Wind River); +Cc: <dev@dpdk.org>



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 29, 2014 1:06 PM
> To: Wiles, Roger Keith (Wind River)
> Cc: Ananyev, Konstantin; <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> 
> On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> >
> > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> >
> > >
> > >
> > >> -----Original Message-----
> > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> > >> Sent: Sunday, September 28, 2014 6:52 PM
> > >> To: <dev@dpdk.org>
> > >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > >>
> > >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> > >> at the code below and see whether these changes address some concerns in how the ring is handled.
> > >>
> > >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
> > >> cache->len may still not be able to satisfy the request from the cache.
> > >>
> > >> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> > >>
> > >> Let me know what you think.
> > >> ++Keith
> > >> ---
> > >>
> > >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > >> index 199a493..b1b1f7a 100644
> > >> --- a/lib/librte_mempool/rte_mempool.h
> > >> +++ b/lib/librte_mempool/rte_mempool.h
> > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > >>                   unsigned n, int is_mc)
> > >> {
> > >>        int ret;
> > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > >> -       unsigned n_orig = n;
> > >> -#endif
> > >
> > > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > > Though, on the other hand, it is harmless.
> > >
> > >> +
> > >> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > >>        struct rte_mempool_cache *cache;
> > >>        uint32_t index, len;
> > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > >>                        goto ring_dequeue;
> > >>                }
> > >>
> > >> -               cache->len += req;
> > >> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> > >> +
> > >
> > > rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > > or would not get any entries from the ring and return a negative value (failure).
> > > So this change is erroneous.
> >
> > Sorry, I combined my thoughts on changing the get_bulk behavior with this patch; you are correct for the current design. This is why I
> > decided to make it an RFC :-)
> > >
> > >> +               if ( cache->len < n ) {
> > >
> > > If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
> > > So there is no need for that check here.
> >
> > My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better design. You are correct that it does
> > not make sense unless you factor in my thinking at the time :-(
> > >
> > >> +                       /*
> > >> +                        * The number (ret + cache->len) may not be >= n, as
> > >> +                        * the 'ret' value may be zero or less than 'req'.
> > >> +                        *
> > >> +                        * Note:
> > >> +                        * Ordering between the cache and the common pool could
> > >> +                        * be an issue if (cache->len != 0 and less than n), but in
> > >> +                        * the normal case it should be OK. If the user needs to
> > >> +                        * preserve the order of packets then he must set cache_size == 0.
> > >> +                        */
> > >> +                       goto ring_dequeue;
> > >> +               }
> > >>        }
> > >>
> > >>        /* Now fill in the response ... */
> > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > >>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > >>
> > >>        if (ret < 0)
> > >> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > >> -       else
> > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> > >> +       else {
> > >>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
> > >> +               // Catch the case when ret != n; adding zero should not be a problem.
> > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > >
> > > As I said above, ret == 0 on success, so there is no need for that change.
> > > Just n (or n_orig) is OK here.
> > >
> > >> +       }
> > >>
> > >>        return ret;
> > >> }
> > >>
> > >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> >
> > Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests,
> > IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code does not follow the success
> > path by mistake.
> 
> If you want to have such a function, I think it should align with the
> functions on the rings. In this case, this would mean having a get_burst
> function, which returns less than or equal to the number of elements
> requested. I would not change the behaviour of the existing function
> without also changing the rings "bulk" function to match.

Do you mean a mempool_get_burst() that could return fewer objects than you requested?
If so, I wonder what the usage model for it would be.
To me it sounds a bit strange - like a malloc() (or mmap) that could allocate only part of the memory you requested.

Konstantin

 
> /Bruce
> 
> > >
> > > NACK in summary.
> > > Konstantin
> >
> > Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> >


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-29 12:25       ` Ananyev, Konstantin
@ 2014-09-29 12:34         ` Bruce Richardson
  2014-09-29 13:31           ` Ananyev, Konstantin
  0 siblings, 1 reply; 8+ messages in thread
From: Bruce Richardson @ 2014-09-29 12:34 UTC
  To: Ananyev, Konstantin; +Cc: <dev@dpdk.org>

On Mon, Sep 29, 2014 at 01:25:11PM +0100, Ananyev, Konstantin wrote:
> 
> 
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Monday, September 29, 2014 1:06 PM
> > To: Wiles, Roger Keith (Wind River)
> > Cc: Ananyev, Konstantin; <dev@dpdk.org>
> > Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > 
> > On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> > >
> > > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> > >
> > > >
> > > >
> > > >> -----Original Message-----
> > > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> > > >> Sent: Sunday, September 28, 2014 6:52 PM
> > > >> To: <dev@dpdk.org>
> > > >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > > >>
> > > >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> > > >> at the code below and see whether these changes address some concerns in how the ring is handled.
> > > >>
> > > >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
> > > >> cache->len may still not be able to satisfy the request from the cache.
> > > >>
> > > >> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> > > >>
> > > >> Let me know what you think.
> > > >> ++Keith
> > > >> ---
> > > >>
> > > >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > > >> index 199a493..b1b1f7a 100644
> > > >> --- a/lib/librte_mempool/rte_mempool.h
> > > >> +++ b/lib/librte_mempool/rte_mempool.h
> > > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > >>                   unsigned n, int is_mc)
> > > >> {
> > > >>        int ret;
> > > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > > >> -       unsigned n_orig = n;
> > > >> -#endif
> > > >
> > > > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > > > Though, on the other hand, it is harmless.
> > > >
> > > >> +
> > > >> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > > >>        struct rte_mempool_cache *cache;
> > > >>        uint32_t index, len;
> > > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > >>                        goto ring_dequeue;
> > > >>                }
> > > >>
> > > >> -               cache->len += req;
> > > >> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> > > >> +
> > > >
> > > > rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > > > or would not get any entries from the ring and return a negative value (failure).
> > > > So this change is erroneous.
> > >
> > > Sorry, I combined my thoughts on changing the get_bulk behavior with this patch; you are correct for the current design. This is why I
> > > decided to make it an RFC :-)
> > > >
> > > >> +               if ( cache->len < n ) {
> > > >
> > > > If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
> > > > So there is no need for that check here.
> > >
> > > My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better design. You are correct that it does
> > > not make sense unless you factor in my thinking at the time :-(
> > > >
> > > >> +                       /*
> > > >> +                        * The number (ret + cache->len) may not be >= n, as
> > > >> +                        * the 'ret' value may be zero or less than 'req'.
> > > >> +                        *
> > > >> +                        * Note:
> > > >> +                        * Ordering between the cache and the common pool could
> > > >> +                        * be an issue if (cache->len != 0 and less than n), but in
> > > >> +                        * the normal case it should be OK. If the user needs to
> > > >> +                        * preserve the order of packets then he must set cache_size == 0.
> > > >> +                        */
> > > >> +                       goto ring_dequeue;
> > > >> +               }
> > > >>        }
> > > >>
> > > >>        /* Now fill in the response ... */
> > > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > > >>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > > >>
> > > >>        if (ret < 0)
> > > >> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > > >> -       else
> > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> > > >> +       else {
> > > >>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
> > > >> +               // Catch the case when ret != n; adding zero should not be a problem.
> > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > > >
> > > > As I said above, ret == 0 on success, so there is no need for that change.
> > > > Just n (or n_orig) is OK here.
> > > >
> > > >> +       }
> > > >>
> > > >>        return ret;
> > > >> }
> > > >>
> > > >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > >
> > > Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests,
> > > IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code does not follow the success
> > > path by mistake.
> > 
> > If you want to have such a function, I think it should align with the
> > functions on the rings. In this case, this would mean having a get_burst
> > function, which returns less than or equal to the number of elements
> > requested. I would not change the behaviour of the existing function
> > without also changing the rings "bulk" function to match.
> 
> Do you mean a mempool_get_burst() that could return fewer objects than you requested?
> If so, I wonder what the usage model for it would be.
> To me it sounds a bit strange - like a malloc() (or mmap) that could allocate only part of the memory you requested.
> 

I would expect it to be as useful as the ring versions. :-)
I don't actually see a problem with it. For example, if you have 10 messages
you want to create and send, and there are only 8 buffers free, I see no
reason why you can't create and send 8 and then try again for the last 2
once you are finished with those 8.
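
In code, that usage might look roughly like this (using the hypothetical
burst API sketched above; create_and_send() stands in for application logic):

    unsigned want = 10;
    void *bufs[10];

    /* take however many buffers are available right now (0..want) */
    unsigned got = rte_mempool_get_burst(mp, bufs, want);
    create_and_send(bufs, got);             /* e.g. use the 8 we got */
    /* ... later, come back for the remaining 'want - got' buffers ... */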

That being said, the reason for suggesting that behaviour was to align with 
the rings APIs. If we are looking at breaking backward compatibility (for 
both rings and mempools), other options can be considered too.

/Bruce

> Konstantin
> 
>  
> > /Bruce
> > 
> > > >
> > > > NACK in summary.
> > > > Konstantin
> > >
> > > Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > >


* Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
  2014-09-29 12:34         ` Bruce Richardson
@ 2014-09-29 13:31           ` Ananyev, Konstantin
  0 siblings, 0 replies; 8+ messages in thread
From: Ananyev, Konstantin @ 2014-09-29 13:31 UTC
  To: Richardson, Bruce; +Cc: <dev@dpdk.org>



> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 29, 2014 1:34 PM
> To: Ananyev, Konstantin
> Cc: Wiles, Roger Keith (Wind River); <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> 
> On Mon, Sep 29, 2014 at 01:25:11PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Monday, September 29, 2014 1:06 PM
> > > To: Wiles, Roger Keith (Wind River)
> > > Cc: Ananyev, Konstantin; <dev@dpdk.org>
> > > Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > >
> > > On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> > > >
> > > > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> > > >
> > > > >
> > > > >
> > > > >> -----Original Message-----
> > > > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> > > > >> Sent: Sunday, September 28, 2014 6:52 PM
> > > > >> To: <dev@dpdk.org>
> > > > >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > > > >>
> > > > >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> > > > >> at the code below and see whether these changes address some concerns in how the ring is handled.
> > > > >>
> > > > >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means
> > > > >> cache->len may still not be able to satisfy the request from the cache.
> > > > >>
> > > > >> The second issue is that, if you accept the change above, we also have to account for it in the stats.
> > > > >>
> > > > >> Let me know what you think.
> > > > >> ++Keith
> > > > >> ---
> > > > >>
> > > > >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > > > >> index 199a493..b1b1f7a 100644
> > > > >> --- a/lib/librte_mempool/rte_mempool.h
> > > > >> +++ b/lib/librte_mempool/rte_mempool.h
> > > > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > > >>                   unsigned n, int is_mc)
> > > > >> {
> > > > >>        int ret;
> > > > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > > > >> -       unsigned n_orig = n;
> > > > >> -#endif
> > > > >
> > > > > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > > > > Though, on the other hand, it is harmless.
> > > > >
> > > > >> +
> > > > >> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > > > >>        struct rte_mempool_cache *cache;
> > > > >>        uint32_t index, len;
> > > > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > > >>                        goto ring_dequeue;
> > > > >>                }
> > > > >>
> > > > >> -               cache->len += req;
> > > > >> +               cache->len += ret;      // Adjust len by ret, not req, as ret may not equal req
> > > > >> +
> > > > >
> > > > > rte_ring_mc_dequeue_bulk(..., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > > > > or would not get any entries from the ring and return a negative value (failure).
> > > > > So this change is erroneous.
> > > >
> > > > Sorry, I combined my thoughts on changing the get_bulk behavior with this patch; you are correct for the current design. This is why I
> > > > decided to make it an RFC :-)
> > > > >
> > > > >> +               if ( cache->len < n ) {
> > > > >
> > > > > If n > cache_size, then we will go straight to 'ring_dequeue'; see line 959.
> > > > > So there is no need for that check here.
> > > >
> > > > My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better design. You are correct that it does
> > > > not make sense unless you factor in my thinking at the time :-(
> > > > >
> > > > >> +                       /*
> > > > >> +                        * The number (ret + cache->len) may not be >= n, as
> > > > >> +                        * the 'ret' value may be zero or less than 'req'.
> > > > >> +                        *
> > > > >> +                        * Note:
> > > > >> +                        * Ordering between the cache and the common pool could
> > > > >> +                        * be an issue if (cache->len != 0 and less than n), but in
> > > > >> +                        * the normal case it should be OK. If the user needs to
> > > > >> +                        * preserve the order of packets then he must set cache_size == 0.
> > > > >> +                        */
> > > > >> +                       goto ring_dequeue;
> > > > >> +               }
> > > > >>        }
> > > > >>
> > > > >>        /* Now fill in the response ... */
> > > > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > > > >>                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > > > >>
> > > > >>        if (ret < 0)
> > > > >> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > > > >> -       else
> > > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> > > > >> +       else {
> > > > >>                __MEMPOOL_STAT_ADD(mp, get_success, ret);
> > > > >> +               // Catch the case when ret != n; adding zero should not be a problem.
> > > > >> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > > > >
> > > > > As I said above, ret == 0 on success, so there is no need for that change.
> > > > > Just n (or n_orig) is OK here.
> > > > >
> > > > >> +       }
> > > > >>
> > > > >>        return ret;
> > > > >> }
> > > > >>
> > > > >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > > >
> > > > Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests,
> > > > IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code does not follow the success
> > > > path by mistake.
> > >
> > > If you want to have such a function, I think it should align with the
> > > functions on the rings. In this case, this would mean having a get_burst
> > > function, which returns less than or equal to the number of elements
> > > requested. I would not change the behaviour of the existing function
> > > without also changing the rings "bulk" function to match.
> >
> > Do you mean a mempool_get_burst() that could return fewer objects than you requested?
> > If so, I wonder what the usage model for it would be.
> > To me it sounds a bit strange - like a malloc() (or mmap) that could allocate only part of the memory you requested.
> >
> 
> I would expect it to be as useful as the ring versions. :-)

My understanding is that the ring_*_burst() functions are mainly used for
passing packets between different threads/processes (a sort of IPC).
For that case burst() seems a much better choice than bulk().

> I don't actually see a problem with it. For example, if you have 10 messages
> you want to create and send, and there are only 8 buffers free, I see no
> reason why you can't create and send 8 and then try again for the last 2
> once you are finished with those 8.

The whole idea of partially successful memory allocation still sounds a bit strange to me.
But if you believe there is a use case for it - I am OK with it :)
 
Konstantin

> That being said, the reason for suggesting that behaviour was to align with
> the rings APIs. If we are looking at breaking backward compatibility (for
> both rings and mempools), other options can be considered too.
> 
> /Bruce
> 
> > Konstantin
> >
> >
> > > /Bruce
> > >
> > > > >
> > > > > NACK in summary.
> > > > > Konstantin
> > > >
> > > > Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> > > >

