From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Olivier Matz
CC: Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda,
 Stephen Hemminger, Thomas Monjalon, Vamsi Krishna Attunuru
Subject: Re: [dpdk-dev] [PATCH 1/5] mempool: allow unaligned addr/len in
 populate virt
Date: Tue, 29 Oct 2019 12:21:27 +0300
Message-ID: <6db9f943-235c-7093-4e9d-eacbc6114a80@solarflare.com>
In-Reply-To: <20191028140122.9592-2-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-2-olivier.matz@6wind.com>
List-Id: DPDK patches and discussions
Sender: "dev" dev-bounces@dpdk.org

On 10/28/19 5:01 PM, Olivier Matz wrote:
> rte_mempool_populate_virt() currently requires that both addr
> and length are page-aligned.
>
> Remove this uneeded constraint which can be annoying with big
> hugepages (ex: 1GB).
>
> Signed-off-by: Olivier Matz

One note below, other than that
Reviewed-by: Andrew Rybchenko

> ---
>  lib/librte_mempool/rte_mempool.c | 18 +++++++-----------
>  lib/librte_mempool/rte_mempool.h |  3 +--
>  2 files changed, 8 insertions(+), 13 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 0f29e8712..76cbacdf3 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -368,17 +368,11 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	size_t off, phys_len;
>  	int ret, cnt = 0;
>
> -	/* address and len must be page-aligned */
> -	if (RTE_PTR_ALIGN_CEIL(addr, pg_sz) != addr)
> -		return -EINVAL;
> -	if (RTE_ALIGN_CEIL(len, pg_sz) != len)
> -		return -EINVAL;
> -
>  	if (mp->flags & MEMPOOL_F_NO_IOVA_CONTIG)
>  		return rte_mempool_populate_iova(mp, addr, RTE_BAD_IOVA,
>  			len, free_cb, opaque);
>
> -	for (off = 0; off + pg_sz <= len &&
> +	for (off = 0; off < len &&
>  		mp->populated_size < mp->size; off += phys_len) {
>
>  		iova = rte_mem_virt2iova(addr + off);
> @@ -389,7 +383,10 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  		}
>
>  		/* populate with the largest group of contiguous pages */
> -		for (phys_len = pg_sz; off + phys_len < len; phys_len += pg_sz) {
> +		for (phys_len = RTE_PTR_ALIGN_CEIL(addr + off + 1, pg_sz) -
> +				(addr + off);
> +				off + phys_len < len;

If the condition is false on the first check, below we will populate
memory outside of the specified length. So we should either apply
RTE_MIN above, when phys_len is initialized, or drop RTE_MIN on the
next line and instead apply it when rte_mempool_populate_iova() is
called.

A bonus question, not directly related to the patch, is the
iova == RTE_BAD_IOVA case when !rte_eal_has_hugepages(): is it
expected that we still do the iova + phys_len arithmetic in this
case? I guess the comparison will always be false and the pages will
never be merged, but it looks suspicious anyway.
> +			phys_len = RTE_MIN(phys_len + pg_sz, len - off)) {
>  			rte_iova_t iova_tmp;
>
>  			iova_tmp = rte_mem_virt2iova(addr + off + phys_len);
> @@ -575,8 +572,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  		 * have
>  		 */
>  		mz = rte_memzone_reserve_aligned(mz_name, 0,
> -			mp->socket_id, flags,
> -			RTE_MAX(pg_sz, align));
> +			mp->socket_id, flags, align);
>  	}
>  	if (mz == NULL) {
>  		ret = -rte_errno;
> @@ -601,7 +597,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  				(void *)(uintptr_t)mz);
>  		else
>  			ret = rte_mempool_populate_virt(mp, mz->addr,
> -				RTE_ALIGN_FLOOR(mz->len, pg_sz), pg_sz,
> +				mz->len, pg_sz,
>  				rte_mempool_memchunk_mz_free,
>  				(void *)(uintptr_t)mz);
>  	if (ret < 0) {
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 8053f7a04..0fe8aa7b8 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1042,9 +1042,8 @@ int rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
>   *   A pointer to the mempool structure.
>   * @param addr
>   *   The virtual address of memory that should be used to store objects.
> - *   Must be page-aligned.
>   * @param len
> - *   The length of memory in bytes. Must be page-aligned.
> + *   The length of memory in bytes.
>   * @param pg_sz
>   *   The size of memory pages in this virtual area.
>   * @param free_cb