From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 28 Sep 2014 16:42:33 -0400
From: Neil Horman
To: "Wiles, Roger Keith"
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
Message-ID: <20140928204233.GA4012@localhost.localdomain>
In-Reply-To: <3B9A624B-ABBF-4A20-96CD-8D5607006FEA@windriver.com>
List-Id: patches and discussions about DPDK

On Sun, Sep 28, 2014 at 05:52:16PM +0000, Wiles, Roger Keith wrote:
> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe
> I am seeing a few more issues in this routine; please look at the code below
> and see whether these changes address some concerns in how the ring is
> handled.
>
> The first issue, I believe, is that cache->len should be increased by ret and
> not req, as we do not know whether ret == req. This also means cache->len may
> still not satisfy the request from the cache.
>
ret == req should be guaranteed. As documented, rte_ring_mc_dequeue_bulk, when
called with behavior == FIXED (which it is internally), returns 0 iff the
entire request was satisfied, so we can safely add req.
That said, I've not validated that it always does what's documented, but if it
doesn't, the fix should likely be internal to the function, not external to it.

Neil

> The second issue is that, if you believe the above, then we also have to
> account for that case in the stats.
>
> Let me know what you think.
> ++Keith
> ---
>
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 199a493..b1b1f7a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  		   unsigned n, int is_mc)
>  {
>  	int ret;
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -	unsigned n_orig = n;
> -#endif
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>  	struct rte_mempool_cache *cache;
>  	uint32_t index, len;
> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>  			goto ring_dequeue;
>  		}
>
> -		cache->len += req;
> +		cache->len += ret; // Adjust len by ret, not req, as (ret != req) is possible
> +
> +		if (cache->len < n) {
> +			/*
> +			 * (ret + cache->len) may not be >= n, as the 'ret'
> +			 * value may be zero or less than 'req'.
> +			 *
> +			 * Note:
> +			 * Ordering between the cache and the common pool could
> +			 * be an issue if (cache->len != 0 and less than n), but
> +			 * in the normal case it should be OK. If the user needs
> +			 * to preserve the order of packets, then cache_size
> +			 * must be set to 0.
> +			 */
> +			goto ring_dequeue;
> +		}
>  	}
>
>  	/* Now fill in the response ... */
> @@ -1002,9 +1014,12 @@ ring_dequeue:
>  	ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>
>  	if (ret < 0)
> -		__MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> -	else
> +		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> +	else {
>  		__MEMPOOL_STAT_ADD(mp, get_success, ret);
> +		// Catch the case when ret != n; adding zero should not be a problem.
> +		__MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> +	}
>
>  	return ret;
>  }
>
> Keith Wiles, Principal Technologist with CTO office, Wind River
> mobile 972-213-5533