From: "Wiles, Roger Keith"
To: "ANANYEV, KONSTANTIN"
Date: Sun, 28 Sep 2014 23:17:34 +0000
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
Message-ID: <4F9CE4A3-600B-42E0-B5C0-71D3AF7F0CF5@windriver.com>
In-Reply-To: <2601191342CEEE43887BDE71AB977258213851D2@IRSMSX104.ger.corp.intel.com>

On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin wrote:

>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
>> Sent: Sunday, September 28, 2014 6:52 PM
>> To:
>> Subject: [dpdk-dev] [RFC] More changes for
rte_mempool.h:__mempool_get_bulk()
>>
>> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look
>> at the code below and see if these seem to fix some concerns in how the ring is handled.
>>
>> The first issue I believe is that cache->len is increased by ret and not req, as we do not know if ret == req. This also means cache->len
>> may still not satisfy the request from the cache.
>>
>> The second issue is that, if you believe the above change, then we have to account for that issue in the stats.
>>
>> Let me know what you think.
>> ++Keith
>> ---
>>
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 199a493..b1b1f7a 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>                    unsigned n, int is_mc)
>>  {
>>         int ret;
>> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>> -       unsigned n_orig = n;
>> -#endif
>
> Yep, as I said in my previous mail, n_orig could be removed entirely.
> Though on the other hand, it is harmless.
>
>> +
>>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>         struct rte_mempool_cache *cache;
>>         uint32_t index, len;
>> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>                         goto ring_dequeue;
>>                 }
>>
>> -               cache->len += req;
>> +               cache->len += ret;      // Need to adjust len by ret not req, as (ret != req)
>> +
>
> rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get all req objects from the ring and return 0 (success),
> or wouldn't get any entry from the ring and return a negative value (failure).
> So this change is erroneous.

Sorry, I combined my thoughts on changing the get_bulk behavior, and you would be correct for the current design. This is why I decided to make it an RFC :-)

>
>> +               if ( cache->len < n ) {
>
> If n > cache_size, then we will go straight to 'ring_dequeue', see line 959.
> So no need for that check here.

My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better coding. You are correct; it does not make sense unless you factor in my thinking at the time :-(

>
>> +                       /*
>> +                        * The number (ret + cache->len) may not be >= n, as
>> +                        * the 'ret' value may be zero or less than 'req'.
>> +                        *
>> +                        * Note:
>> +                        * Ordering between the cache and the common pool could
>> +                        * be an issue if (cache->len != 0 and less than n), but in the
>> +                        * normal case it should be OK. If the user needs to preserve
>> +                        * the order of packets, then he must set cache_size == 0.
>> +                        */
>> +                       goto ring_dequeue;
>> +               }
>>         }
>>
>>         /* Now fill in the response ... */
>> @@ -1002,9 +1014,12 @@ ring_dequeue:
>>         ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>>
>>         if (ret < 0)
>> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
>> -       else
>> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
>> +       else {
>>                 __MEMPOOL_STAT_ADD(mp, get_success, ret);
>> +               // Catch the case when ret != n; adding zero should not be a problem.
>> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
>
> As I said above, ret == 0 on success, so no need for that change.
> Just n (or n_orig) is OK here.
>
>> +       }
>>
>>         return ret;
>> }
>>
>> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533

Do we think it is worth it to change the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests, IMO, in a couple of places. We could also return < 0 in the zero case as well, just to make sure code did not try to follow the success path by mistake.

>
> NACK in summary.
> Konstantin

Keith Wiles, Principal Technologist with CTO office, Wind River
mobile 972-213-5533