From: "Ananyev, Konstantin"
To: "Wiles, Roger Keith (Wind River)", ""
Date: Sun, 28 Sep 2014 22:41:49 +0000
Message-ID: <2601191342CEEE43887BDE71AB977258213851D2@IRSMSX104.ger.corp.intel.com>
In-Reply-To: <3B9A624B-ABBF-4A20-96CD-8D5607006FEA@windriver.com>
Subject: Re: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wiles, Roger Keith
> Sent: Sunday, September 28, 2014 6:52 PM
> To:
> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
>
> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine;
> please look at the code below and see if these seem to fix some concerns in how the ring is handled.
>
> The first issue, I believe, is that cache->len is increased by ret and not req, as we do not know if ret == req. This also means
> cache->len may still not satisfy the request from the cache.
>
> The second issue is that, if you accept the change above, we then have to account for it in the stats.
>
> Let me know what you think.
> ++Keith
> ---
>
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 199a493..b1b1f7a 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                     unsigned n, int is_mc)
>  {
>         int ret;
> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> -       unsigned n_orig = n;
> -#endif

Yep, as I said in my previous mail, n_orig could be removed entirely.
On the other hand, it is harmless.
> +
>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>         struct rte_mempool_cache *cache;
>         uint32_t index, len;
> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>                         goto ring_dequeue;
>                 }
>
> -               cache->len += req;
> +               cache->len += ret;    // Need to adjust len by ret not req, as (ret != req)
> +

rte_ring_mc_dequeue_bulk(.., req) at line 971 either gets all req objects from the ring
and returns 0 (success), or gets no entries from the ring and returns a negative value (failure).
So this change is erroneous.

> +               if ( cache->len < n ) {

If n > cache_size, then we go straight to 'ring_dequeue', see line 959.
So there is no need for that check here.

> +                       /*
> +                        * The number (ret + cache->len) may not be >= n, as
> +                        * the 'ret' value may be zero or less than 'req'.
> +                        *
> +                        * Note:
> +                        * Ordering between the cache and the common pool could
> +                        * be an issue if (cache->len != 0 and less than n), but in the
> +                        * normal case it should be OK. If the user needs to preserve
> +                        * the order of packets, then he must set cache_size == 0.
> +                        */
> +                       goto ring_dequeue;
> +               }
>         }
>
>         /* Now fill in the response ... */
> @@ -1002,9 +1014,12 @@ ring_dequeue:
>         ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
>
>         if (ret < 0)
> -               __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> -       else
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n);
> +       else {
>                 __MEMPOOL_STAT_ADD(mp, get_success, ret);
> +               // Catch the case when ret != n; adding zero should not be a problem.
> +               __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);

As I said above, ret == 0 on success, so there is no need for that change.
Just n (or n_orig) is OK here.

> +       }
>
>         return ret;
> }
>
> Keith Wiles, Principal Technologist with CTO office, Wind River   mobile 972-213-5533

NACK in summary.
Konstantin
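
As a side note, the arithmetic behind the NACK can be checked with the small self-contained
toy model below. It is only a sketch: dequeue_bulk_all_or_nothing() is a hypothetical stand-in
for rte_ring_mc_dequeue_bulk() under the all-or-nothing contract described above, and the
cache_size/cache_len/n values are arbitrary examples, not DPDK code. It shows that when the
bulk refill succeeds, cache->len += req always leaves at least n objects in the cache, so the
proposed 'if (cache->len < n)' check can never fire.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for rte_ring_mc_dequeue_bulk(): all-or-nothing semantics.
 * Returns 0 and hands out exactly 'req' objects, or a negative value and
 * hands out nothing when the ring holds fewer than 'req' objects. */
static int dequeue_bulk_all_or_nothing(uint32_t *ring_count, uint32_t req)
{
        if (*ring_count < req)
                return -1;      /* failure: nothing taken from the ring */
        *ring_count -= req;
        return 0;               /* success: exactly 'req' objects taken */
}

int main(void)
{
        uint32_t cache_size = 32;       /* per-lcore cache size (example value) */
        uint32_t cache_len = 5;         /* objects currently in the cache */
        uint32_t ring_count = 1000;     /* objects available in the common ring */
        uint32_t n = 16;                /* request size; n < cache_size, otherwise
                                         * we would have gone straight to ring_dequeue */

        if (cache_len < n) {
                /* Refill: enough for this request plus topping the cache
                 * back up to cache_size, so req >= n by construction. */
                uint32_t req = n + (cache_size - cache_len);

                if (dequeue_bulk_all_or_nothing(&ring_count, req) < 0) {
                        puts("fallback: dequeue n directly from the ring");
                        return 0;
                }
                /* Success means exactly 'req' objects arrived, so 'req'
                 * (not the return value, which is 0) is the right increment... */
                cache_len += req;
                /* ...and the cache now necessarily covers the request. */
                assert(cache_len == n + cache_size && cache_len >= n);
        }

        printf("serving %u objects from a cache of %u\n",
               (unsigned)n, (unsigned)cache_len);
        cache_len -= n;
        return 0;
}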