DPDK patches and discussions
* [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
@ 2014-09-28 17:52 Wiles, Roger Keith
  2014-09-28 20:42 ` Neil Horman
  2014-09-28 22:41 ` Ananyev, Konstantin
  0 siblings, 2 replies; 8+ messages in thread
From: Wiles, Roger Keith @ 2014-09-28 17:52 UTC (permalink / raw)
  To: <dev@dpdk.org>

Here is a request for comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look at the code below and see whether these changes address some concerns in how the ring is handled.

The first issue: I believe cache->len should be increased by ret, not req, since we do not know that ret == req. This also means cache->len may still not be enough to satisfy the request from the cache.
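
For illustration only (the helper name below is mine, not part of rte_mempool.h), this is the arithmetic I am concerned about: if the ring backfill yields ret objects when req were requested, adding req overstates the cache contents whenever ret < req.

/* Sketch only: how the cache length should grow after a backfill that
 * may return fewer objects ('ret') than were requested. */
static inline unsigned
cache_len_after_refill(unsigned cache_len, int ret)
{
	/* Treat a negative return (dequeue failure) as zero objects. */
	return cache_len + (ret > 0 ? (unsigned)ret : 0);
}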

The second issue: if the change above is correct, then the stats also need to account for the case where fewer objects than requested are returned.

Let me know what you think.
++Keith
———

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 199a493..b1b1f7a 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
                   unsigned n, int is_mc)
 {
        int ret;
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-       unsigned n_orig = n;
-#endif
+
 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
        struct rte_mempool_cache *cache;
        uint32_t index, len;
@@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
                        goto ring_dequeue;
                }
 
-               cache->len += req;
+               cache->len += ret;      /* adjust len by ret, not req, since ret may not equal req */
+
+               if (cache->len < n) {
+                       /*
+                        * (ret + cache->len) may still be less than n,
+                        * since 'ret' may be zero or less than 'req'.
+                        *
+                        * Note:
+                        * Ordering of objects from the cache versus the
+                        * common pool could be an issue if (cache->len != 0
+                        * and less than n), but the normal case should be
+                        * fine. To preserve packet order, set cache_size == 0.
+                        */
+                       goto ring_dequeue;
+               }
        }
 
        /* Now fill in the response ... */
@@ -1002,9 +1014,12 @@ ring_dequeue:
                ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
 
        if (ret < 0)
-               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
-       else
+               __MEMPOOL_STAT_ADD(mp, get_fail, n);
+       else {
                __MEMPOOL_STAT_ADD(mp, get_success, ret);
+               /* Account for the case where ret != n; adding zero is harmless. */
+               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
+       }
 
        return ret;
 }
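
For reference, here is a minimal caller sketch (the function name and burst size are mine); rte_mempool_get_bulk() is documented to return 0 on success or a negative errno, so from the caller's point of view the operation is all-or-nothing:

#include <rte_mempool.h>

#define BURST 32

/* Sketch only: grab a burst of objects, use them, and return them. */
static int
grab_burst(struct rte_mempool *mp, void *objs[BURST])
{
	if (rte_mempool_get_bulk(mp, objs, BURST) < 0)
		return -1;              /* no objects were allocated */

	/* ... use the objects ... */

	rte_mempool_put_bulk(mp, objs, BURST);
	return 0;
}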

Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533


Thread overview: 8+ messages
2014-09-28 17:52 [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk() Wiles, Roger Keith
2014-09-28 20:42 ` Neil Horman
2014-09-28 22:41 ` Ananyev, Konstantin
2014-09-28 23:17   ` Wiles, Roger Keith
2014-09-29 12:06     ` Bruce Richardson
2014-09-29 12:25       ` Ananyev, Konstantin
2014-09-29 12:34         ` Bruce Richardson
2014-09-29 13:31           ` Ananyev, Konstantin
