From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 29 Sep 2014 13:06:13 +0100
From: Bruce Richardson
To: "Wiles, Roger Keith"
Cc: ""
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
Message-ID: <20140929120613.GG12072@BRICHA3-MOBL>
In-Reply-To: <4F9CE4A3-600B-42E0-B5C0-71D3AF7F0CF5@windriver.com>
References: <3B9A624B-ABBF-4A20-96CD-8D5607006FEA@windriver.com> <2601191342CEEE43887BDE71AB977258213851D2@IRSMSX104.ger.corp.intel.com> <4F9CE4A3-600B-42E0-B5C0-71D3AF7F0CF5@windriver.com>

On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> 
> On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin wrote:
> 
> > 
> > 
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> >> Sent: Sunday, September 28, 2014 6:52 PM
> >> To: 
> >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> >> 
> >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
> >> at the code below and see if these seem to fix some concerns in how the ring is handled.
> >> 
> >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know if ret == req. This also means cache->len
> >> may still not satisfy the request from the cache.
> >> 
> >> The second issue is that, if you accept the change above, then we also have to account for it in the stats.
> >> 
> >> Let me know what you think.
> >> ++Keith
> >> ---
> >> 
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index 199a493..b1b1f7a 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>  		   unsigned n, int is_mc)
> >>  {
> >>  	int ret;
> >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> >> -	unsigned n_orig = n;
> >> -#endif
> > 
> > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > Though, on the other hand, it is harmless.
> > 
> >> +
> >>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> >>  	struct rte_mempool_cache *cache;
> >>  	uint32_t index, len;
> >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>  			goto ring_dequeue;
> >>  		}
> >> 
> >> -		cache->len += req;
> >> +		cache->len += ret;	// Need to adjust len by ret, not req, as (ret != req)
> >> +
> > 
> > rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get all req objects from the ring and return 0 (success),
> > or wouldn't get any entry from the ring and return a negative value (failure).
> > So this change is erroneous.
> 
> Sorry, I combined my thoughts on changing the get_bulk behavior, and you would be correct for the current design. This is why I decided to make it an RFC :-)
> > 
> >> +		if ( cache->len < n ) {
> > 
> > If n > cache_size, then we will go straight to 'ring_dequeue', see line 959.
> > So no need for that check here.
> 
> My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better coding. You are correct, it does not make sense unless you factor in my thinking at the time :-(
> > 
> >> +			/*
> >> +			 * Number (ret + cache->len) may not be >= n. As
> >> +			 * the 'ret' value may be zero or less than 'req'.
> >> +			 *
> >> +			 * Note:
> >> +			 * Ordering between the cache and the common pool could
> >> +			 * be an issue if (cache->len != 0 and less than n), but in the
> >> +			 * normal case it should be OK. If the user needs to preserve
> >> +			 * the order of packets then he must set cache_size == 0.
> >> +			 */
> >> +			goto ring_dequeue;
> >> +		}
> >>  	}
> >> 
> >>  	/* Now fill in the response ... */
> >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> >>  		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> >> 
> >>  	if (ret < 0)
> >> -		__MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> >> -	else
> >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> >> +	else {
> >>  		__MEMPOOL_STAT_ADD(mp, get_success, ret);
> >> +		// Catch the case when ret != n; adding zero should not be a problem.
> >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > 
> > As I said above, ret == 0 on success, so no need for that change.
> > Just n (or n_orig) is OK here.
> > 
> >> +	}
> >> 
> >>  	return ret;
> >>  }
> >> 
> >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
> Do we think it is worth changing the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests, IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code did not try to follow the success path by mistake.

If you want to have such a function, I think it should align with the functions on the rings. In this case, this would mean having a get_burst function, which returns less than or equal to the number of elements requested. I would not change the behaviour of the existing function without also changing the rings' "bulk" function to match. A small sketch of that burst-style contract follows below the quoted text.

/Bruce

> > 
> > NACK in summary.
> > Konstantin
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
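For illustration, a minimal sketch of the burst-style contract described above, built only on the public mempool API. The helper name mempool_get_burst() and its single-object fallback path are assumptions made for this example, not existing DPDK code; a real implementation would live inside librte_mempool and pull the remainder from the pool's ring with the rte_ring_*_dequeue_burst() calls instead.

/*
 * Hypothetical helper sketching the "get_burst" contract: return however
 * many objects could be obtained (0..n) rather than the all-or-nothing
 * behaviour of rte_mempool_get_bulk().  Name and fallback strategy are
 * assumptions for illustration only.
 */
#include <rte_mempool.h>

static inline unsigned
mempool_get_burst(struct rte_mempool *mp, void **obj_table, unsigned n)
{
	unsigned got = 0;

	/* The existing bulk call either hands back all n objects ... */
	if (rte_mempool_get_bulk(mp, obj_table, n) == 0)
		return n;

	/* ... or none at all.  Fall back to single gets so the caller still
	 * receives whatever the pool can currently supply. */
	while (got < n && rte_mempool_get(mp, &obj_table[got]) == 0)
		got++;

	return got;	/* 0 <= got <= n, mirroring the ring burst semantics */
}

A caller would then treat a short count the same way it treats a short return from rte_ring_mc_dequeue_burst(): use the objects actually delivered and retry or back off for the rest.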