From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [dpdk-dev] [PATCH 2/5] mempool: reduce wasted space on mempool populate
From: Andrew Rybchenko
To: Olivier Matz
CC: Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda,
 Stephen Hemminger, Thomas Monjalon, Vamsi Krishna Attunuru
Date: Tue, 29 Oct 2019 13:09:01 +0300
Message-ID: <7b34ec2a-5ec8-043d-ec5c-811984ce6171@solarflare.com>
In-Reply-To: <20191028140122.9592-3-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-3-olivier.matz@6wind.com>
List-Id: DPDK patches and discussions

On 10/28/19 5:01 PM, Olivier Matz wrote:
> The size returned by rte_mempool_op_calc_mem_size_default() is aligned
> to the specified page size. Therefore, with big pages, the returned size
> can be much more than what we really need to populate the mempool.
>
> For instance, populating a mempool that requires 1.1GB of memory with
> 1GB hugepages can result in allocating 2GB of memory.
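As a quick illustration of the over-allocation (a standalone sketch; the
element size and object count below are assumptions picked so that roughly
1.1GB of objects is requested, not values taken from the patch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            /* Assumed example: 1GB hugepages, ~1.1GB worth of objects. */
            uint64_t pg_sz = 1ULL << 30;   /* page size: 1GB */
            uint64_t total_elt_sz = 2176;  /* header + element + trailer */
            uint64_t obj_num = 524288;

            uint64_t obj_per_page = pg_sz / total_elt_sz;

            /* Old computation: number of pages, rounded up, times the
             * page size; the whole area also had to be page-aligned.
             */
            uint64_t pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
            uint64_t mem_size = pg_num * pg_sz;

            printf("needed:   %.2f GB\n",
                   obj_num * total_elt_sz / (double)pg_sz);
            printf("returned: %.2f GB\n", mem_size / (double)pg_sz);
            return 0;
    }

With these numbers it reports about 1.06 GB actually needed against
2.00 GB returned, i.e. the 1.1GB/2GB case above.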
>
> This problem is hidden most of the time due to the allocation method of
> rte_mempool_populate_default(): when try_iova_contig_mempool=true, it
> first tries to allocate an iova contiguous area, without the alignment
> constraint. If it fails, it falls back to an aligned allocation that
> does not need to be iova-contiguous. This can also fall back to several
> smaller aligned allocations.
>
> This commit changes rte_mempool_op_calc_mem_size_default() to relax the
> alignment constraint to a cache line and to return a smaller size.
>
> Signed-off-by: Olivier Matz

One maybe-unrelated question below.

Reviewed-by: Andrew Rybchenko

[snip]

> diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> index 4e2bfc82d..f6aea7662 100644
> --- a/lib/librte_mempool/rte_mempool_ops_default.c
> +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> @@ -12,7 +12,7 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>  		     size_t *min_chunk_size, size_t *align)
>  {
>  	size_t total_elt_sz;
> -	size_t obj_per_page, pg_num, pg_sz;
> +	size_t obj_per_page, pg_sz, objs_in_last_page;
>  	size_t mem_size;
>  
>  	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> @@ -33,14 +33,30 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
>  			mem_size =
>  				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
>  		} else {
> -			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
> -			mem_size = pg_num << pg_shift;
> +			/* In the best case, the allocator will return a
> +			 * page-aligned address. For example, with 5 objs,
> +			 * the required space is as below:
> +			 *  |     page0     |     page1     |  page2 (last) |
> +			 *  |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
> +			 *  <------------- mem_size ------------->
> +			 */
> +			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
> +			/* room required for the last page */
> +			mem_size = objs_in_last_page * total_elt_sz;
> +			/* room required for other pages */
> +			mem_size += ((obj_num - objs_in_last_page) /
> +				obj_per_page) << pg_shift;
> +
> +			/* In the worst case, the allocator returns a
> +			 * non-aligned pointer, wasting up to
> +			 * total_elt_sz. Add a margin for that.
> +			 */
> +			mem_size += total_elt_sz - 1;
>  		}
>  	}
>  
> -	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
> -
> -	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
> +	*min_chunk_size = total_elt_sz;
> +	*align = RTE_CACHE_LINE_SIZE;

Not directly related to the patch, but maybe RTE_MEMPOOL_ALIGN should be
used?

>  
>  	return mem_size;
>  }
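For comparison, here is the earlier sketch updated to the relaxed
computation this patch introduces, using the same assumed numbers (again
just an illustration, not the real librte_mempool code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t pg_shift = 30;        /* 1GB pages, as before */
            uint64_t pg_sz = 1ULL << pg_shift;
            uint64_t total_elt_sz = 2176;  /* assumed example value */
            uint64_t obj_num = 524288;

            uint64_t obj_per_page = pg_sz / total_elt_sz;

            /* New computation: exact room for the objects in the last
             * page, full pages for the rest, plus a worst-case
             * misalignment margin of total_elt_sz - 1.
             */
            uint64_t objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
            uint64_t mem_size = objs_in_last_page * total_elt_sz;
            mem_size += ((obj_num - objs_in_last_page) / obj_per_page)
                        << pg_shift;
            mem_size += total_elt_sz - 1;

            printf("returned: %.2f GB\n", mem_size / (double)pg_sz);
            return 0;
    }

With these numbers it reports roughly 1.06 GB instead of the 2.00 GB of
the old computation, at the cost of requiring only cache-line alignment
plus the total_elt_sz - 1 margin.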