From: Bruce Richardson
To: "Ananyev, Konstantin"
Cc: dev@dpdk.org
Date: Mon, 29 Sep 2014 13:34:15 +0100
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()

On Mon, Sep 29, 2014 at 01:25:11PM +0100, Ananyev, Konstantin wrote:
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Monday, September 29, 2014 1:06 PM
> > To: Wiles, Roger Keith (Wind River)
> > Cc: Ananyev, Konstantin; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> >
> > On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> > >
> > > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin wrote:
> > >
> > > >> -----Original Message-----
> > > >> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> > > >> Sent: Sunday, September 28, 2014 6:52 PM
> > > >> To: dev@dpdk.org
> > > >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> > > >>
> > > >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe
> > > >> I am seeing a few more issues in it; please look at the code below and see
> > > >> whether these changes address some concerns in how the ring is handled.
> > > >>
> > > >> The first issue is that cache->len should be increased by ret and not req,
> > > >> as we do not know that ret == req. This also means cache->len may still not
> > > >> be enough to satisfy the request from the cache.
> > > >>
> > > >> The second issue is that, if you accept the change above, we also have to
> > > >> account for it in the stats.
> > > >>
> > > >> Let me know what you think?
> > > >> ++Keith
> > > >> ---
> > > >>
> > > >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > > >> index 199a493..b1b1f7a 100644
> > > >> --- a/lib/librte_mempool/rte_mempool.h
> > > >> +++ b/lib/librte_mempool/rte_mempool.h
> > > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > >>  		   unsigned n, int is_mc)
> > > >>  {
> > > >>  	int ret;
> > > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > > >> -	unsigned n_orig = n;
> > > >> -#endif
> > > >
> > > > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > > > Though, on the other hand, it is harmless.
> > > >
> > > >> +
> > > >>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > > >>  	struct rte_mempool_cache *cache;
> > > >>  	uint32_t index, len;
> > > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> > > >>  			goto ring_dequeue;
> > > >>  		}
> > > >>
> > > >> -		cache->len += req;
> > > >> +		cache->len += ret;	// Need to adjust len by ret, not req, as (ret != req)
> > > >> +
> > > >
> > > > rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get all req
> > > > objects from the ring and return 0 (success), or would not get any entry
> > > > from the ring and return a negative value (failure).
> > > > So this change is erroneous.
> > >
> > > Sorry, I combined my thoughts on changing the get_bulk behaviour with the
> > > current design; you are correct for the current design. This is why I
> > > decided to make it an RFC :-)
> > >
> > > >> +		if (cache->len < n) {
> > > >
> > > > If n > cache_size, then we go straight to 'ring_dequeue'; see line 959.
> > > > So there is no need for that check here.
> > >
> > > My thinking (at the time) was that get_bulk should return 'n' instead of
> > > zero, which I feel is the better coding. You are correct: it does not make
> > > sense unless you factor in my thinking at the time :-(
> > >
> > > >> +			/*
> > > >> +			 * The number (ret + cache->len) may not be >= n, as
> > > >> +			 * the 'ret' value may be zero or less than 'req'.
> > > >> +			 *
> > > >> +			 * Note:
> > > >> +			 * Ordering between the cache and the common pool could
> > > >> +			 * be an issue if (cache->len != 0 and less than n), but
> > > >> +			 * in the normal case it should be OK. If the user needs
> > > >> +			 * to preserve the order of packets, then he must set
> > > >> +			 * cache_size == 0.
> > > >> +			 */
> > > >> +			goto ring_dequeue;
> > > >> +		}
> > > >>  	}
> > > >>
> > > >>  	/* Now fill in the response ... */
> > > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > > >>  	ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > > >>
> > > >>  	if (ret < 0)
> > > >> -		__MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > > >> -	else
> > > >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> > > >> +	else {
> > > >>  		__MEMPOOL_STAT_ADD(mp, get_success, ret);
> > > >> +		// Catch the case when ret != n; adding zero should not be a problem.
> > > >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > > >
> > > > As I said above, ret == 0 on success, so there is no need for that change.
> > > > Just n (or n_orig) is OK here.
> > > >
> > > >> +	}
> > > >>
> > > >>  	return ret;
> > > >> }
> > > >>
> > > >> Keith Wiles, Principal Technologist with CTO office, Wind River
> > > >> mobile 972-213-5533
> > >
> > > Do we think it is worth it to change the behaviour of get_bulk so that it
> > > returns 'n' instead of zero on success? It would remove a few tests, IMO,
> > > in a couple of places. We could also return < 0 in the zero case as well,
> > > just to make sure code did not try to follow the success path by mistake.
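
Just to make the current contract explicit before going further: today's API
is strictly all-or-nothing, so a caller only ever has to handle two outcomes.
A minimal, untested sketch of caller-side code under the existing semantics
(mp is an already-created pool):

	void *objs[10];

	/* 0 on success: all 10 objects were taken from the cache/ring.
	 * Negative on failure: no objects were taken at all. */
	if (rte_mempool_get_bulk(mp, objs, 10) == 0) {
		/* ... use all 10 objects ... */
		rte_mempool_put_bulk(mp, objs, 10);
	} else {
		/* nothing was allocated; objs[] contents are undefined */
	}
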
> >
> > If you want to have such a function, I think it should align with the
> > functions on the rings. In this case, this would mean having a get_burst
> > function, which returns less than or equal to the number of elements
> > requested. I would not change the behaviour of the existing function
> > without also changing the rings' "bulk" function to match.
>
> Do you mean a mempool_get_burst() that could return fewer objects than you
> requested?
> If so, I wonder what the usage model for it would be?
> To me it sounds a bit strange - like a malloc() (or mmap) that could allocate
> only part of the memory you requested?
>

I would expect it to be as useful as the rings versions. :-)
I don't actually see a problem with it. For example: if you have 10 messages
you want to create and send, I see no reason why, if there were only 8 buffers
free, you can't create and send 8 and then try again for the last 2 once you
were finished with those 8.

That being said, the reason for suggesting that behaviour was to align with
the rings APIs. If we are looking at breaking backward compatibility (for
both rings and mempools), other options can be considered too.

/Bruce

> Konstantin
>
> >
> > /Bruce
> >
> > >
> > > > NACK in summary.
> > > > Konstantin
> > >
> > > Keith Wiles, Principal Technologist with CTO office, Wind River
> > > mobile 972-213-5533
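
P.S. To make the suggested usage model concrete, below is a rough, untested
sketch of how I would expect a burst-style allocation to work for the
10-messages example above. rte_mempool_get_burst() and send_messages() are
hypothetical names - neither exists in the tree today:

	void *bufs[10];
	unsigned sent = 0, got;

	while (sent < 10) {
		/* hypothetical: returns how many objects (0..req) were taken */
		got = rte_mempool_get_burst(mp, bufs, 10 - sent);
		if (got == 0)
			continue;	/* pool empty for now; real code would back off */
		/* build and send 'got' messages; the send path frees the
		 * buffers back to the pool once done (app-specific) */
		send_messages(bufs, got);
		sent += got;
	}
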