From: Andrew Rybchenko
To: Olivier Matz, Vamsi Krishna Attunuru
CC: Thomas Monjalon, Anatoly Burakov, Jerin Jacob Kollanukkaran, Kokkilagadda, Ferruh Yigit
Subject: Re: [dpdk-dev] [RFC 4/4] mempool: prevent objects from being across pages
Date: Wed, 7 Aug 2019 18:21:58 +0300
Message-ID: <002c206e-b963-a932-1f57-6e7edb17c74b@solarflare.com>
In-Reply-To: <20190719133845.32432-5-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com> <20190719133845.32432-5-olivier.matz@6wind.com>

On 7/19/19 4:38 PM, Olivier Matz wrote:
> When using iova contiguous memory and objets smaller than page size,
> ensure that objects are not located across several pages.

It looks like an attempt to make an exception the generic rule, and I don't
think it is a good idea. The mempool has a notion of IOVA
contiguous/non-contiguous objects depending on whether PA or VA is used.
rte_mempool_op_populate_default() gets a memory chunk which is contiguous
in VA and, if the iova is not RTE_BAD_IOVA, IOVA-contiguous as well. The
patch always enforces page boundaries even when it is not required.
For example, if the memory chunk is IOVA_PA contiguous, the patch could
result in holes and extra memory usage.
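To put rough numbers on the "holes and extra memory usage" concern, here is a
toy calculation (my own sketch, not DPDK code; the 4 KiB page and 2688-byte
element sizes are arbitrary):

#include <stdio.h>
#include <stddef.h>

int main(void)
{
	const size_t pg_sz = 4096;		/* hypothetical page size */
	const size_t elt_sz = 2688;		/* header + obj + trailer */
	const size_t chunk_len = 2 * 1024 * 1024; /* IOVA-contiguous chunk */

	/* current behaviour: elements are packed back to back */
	size_t packed = chunk_len / elt_sz;

	/* with the patch: an element may not cross a page boundary, so with
	 * this element size only one element fits per page and the tail of
	 * every page becomes a hole */
	size_t per_page = pg_sz / elt_sz;
	size_t bounded = (chunk_len / pg_sz) * per_page;

	printf("packed: %zu objects, page-bounded: %zu objects\n",
	       packed, bounded);
	printf("extra memory lost to page holes: %zu bytes\n",
	       (packed - bounded) * elt_sz);
	return 0;
}

With these numbers the chunk holds 780 objects today but only 512 with the
patch, i.e. roughly a third of an IOVA-contiguous 2 MB chunk is lost to
padding that nothing requires.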
> Signed-off-by: Vamsi Krishna Attunuru
> Signed-off-by: Olivier Matz
> ---
>  lib/librte_mempool/rte_mempool_ops_default.c | 39 ++++++++++++++++++++++++++--
>  1 file changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> index 4e2bfc82d..2bbd67367 100644
> --- a/lib/librte_mempool/rte_mempool_ops_default.c
> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> @@ -45,19 +45,54 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>  	return mem_size;
>  }
>
> +/* Returns -1 if object falls on a page boundary, else returns 0 */
> +static inline int
> +mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
> +{
> +	uintptr_t page_end, elt_addr = (uintptr_t)obj;
> +	uint32_t pg_shift;
> +	uint64_t page_mask;
> +
> +	if (pg_sz == 0)
> +		return 0;
> +	if (elt_sz > pg_sz)
> +		return 0;
> +
> +	pg_shift = rte_bsf32(pg_sz);
> +	page_mask = ~((1ull << pg_shift) - 1);
> +	page_end = (elt_addr & page_mask) + pg_sz;
> +
> +	if (elt_addr + elt_sz > page_end)
> +		return -1;
> +
> +	return 0;
> +}
> +
>  int
>  rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
>  		void *vaddr, rte_iova_t iova, size_t len,
>  		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
>  {
> -	size_t total_elt_sz;
> +	size_t total_elt_sz, pg_sz;
>  	size_t off;
>  	unsigned int i;
>  	void *obj;
>
> +	rte_mempool_get_page_size(mp, &pg_sz);
> +
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
>
> -	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
> +	for (off = 0, i = 0; i < max_objs; i++) {
> +		/* align offset to next page start if required */
> +		if (mempool_check_obj_bounds((char *)vaddr + off,
> +				pg_sz, total_elt_sz) < 0) {
> +			off += RTE_PTR_ALIGN_CEIL((char *)vaddr + off, pg_sz) -
> +				((char *)vaddr + off);
> +		}
> +
> +		if (off + total_elt_sz > len)
> +			break;
> +
>  		off += mp->header_size;
>  		obj = (char *)vaddr + off;
>  		obj_cb(mp, obj_cb_arg, obj,
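For reference, a standalone toy that mimics the check and the offset bump
above (my own re-creation with made-up sizes and addresses, not the patch
code) shows how the tail of every page turns into a hole once an element
would straddle the boundary:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Standalone copy of the check above (pg_sz assumed to be a power of two). */
static int
check_obj_bounds(uintptr_t elt_addr, uint64_t pg_sz, size_t elt_sz)
{
	uintptr_t page_end;

	if (pg_sz == 0 || elt_sz > pg_sz)
		return 0;
	page_end = (elt_addr & ~(uintptr_t)(pg_sz - 1)) + pg_sz;
	return (elt_addr + elt_sz > page_end) ? -1 : 0;
}

int main(void)
{
	const uint64_t pg_sz = 4096;	/* hypothetical page size */
	const size_t elt_sz = 2688;	/* hypothetical element size */
	uintptr_t base = 0x100000;	/* pretend page-aligned chunk start */
	size_t off = 0, hole = 0;
	unsigned int i;

	for (i = 0; i < 8; i++) {
		/* same offset bump the patched populate loop does */
		if (check_obj_bounds(base + off, pg_sz, elt_sz) < 0) {
			size_t skip = pg_sz - ((base + off) & (pg_sz - 1));
			hole += skip;
			off += skip;
		}
		printf("obj %u placed at offset %zu\n", i, off);
		off += elt_sz;
	}
	printf("bytes skipped as holes: %zu\n", hole);
	return 0;
}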