From mboxrd@z Thu Jan 1 00:00:00 1970
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: Anatoly Burakov, Andrew Rybchenko, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 Thomas Monjalon, Vamsi Krishna Attunuru
Date: Mon, 28 Oct 2019 15:01:17 +0100
Message-Id: <20191028140122.9592-1-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH 0/5] mempool: avoid objects allocations across pages
List-Id: DPDK patches and discussions

KNI assumes that mbufs are contiguous in kernel virtual memory. This may
not be true when using the IOVA=VA mode. To fix this, one possibility is
to ensure that objects do not cross page boundaries in the mempool. This
patchset implements this in the last patch (5/5).

The previous patches prepare the work:

- allow populating with an unaligned virtual area (1/5).
- reduce the space wasted in the mempool size calculation when not using
  the iova-contiguous allocation (2/5).
- remove the iova-contiguous allocation when populating a mempool (3/5):
  a va-contiguous allocation does the job as well if we want to populate
  without crossing page boundaries, so simplify the mempool populate
  function.
- export a function to get the minimum page size used in a mempool (4/5).

Memory consumption impact when using hugepages:
- worst case: +~0.1% for a mbuf pool (objsize ~= 2368)
- best case: -50% if the pool size is just above the page size

With 4K pages in IOVA=VA mode, however, a mbuf pool could consume up to
75% more memory, because there will be only one mbuf per page. Not sure
how common this use case is.

Caveat: this changes the behavior of the mempool (calc_mem_size and
populate), and there is a small risk of breaking things, especially with
alternate mempool drivers.

rfc -> v1

* remove first cleanup patch, it was pushed separately:
  a2b5a8722f20 ("mempool: clarify default populate function")
* add missing change in rte_mempool_op_calc_mem_size_default()
* allow unaligned addr/len in populate virt
* better split patches
* try to better explain the change
* use DPDK align macros when relevant

Olivier Matz (5):
  mempool: allow unaligned addr/len in populate virt
  mempool: reduce wasted space on mempool populate
  mempool: remove optimistic IOVA-contiguous allocation
  mempool: introduce function to get mempool page size
  mempool: prevent objects from being across pages

 lib/librte_mempool/rte_mempool.c             | 140 +++++++------------
 lib/librte_mempool/rte_mempool.h             |  12 +-
 lib/librte_mempool/rte_mempool_ops.c         |   4 +-
 lib/librte_mempool/rte_mempool_ops_default.c |  57 ++++++--
 lib/librte_mempool/rte_mempool_version.map   |   1 +
 5 files changed, 114 insertions(+), 100 deletions(-)

-- 
2.20.1
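P.S. For readers unfamiliar with the idea in patch 5/5, here is a minimal
standalone sketch (not the actual DPDK implementation; helper names are
hypothetical) of the page-boundary rule described above: an object is
skipped to the start of the next page whenever it would otherwise span a
page boundary. Page sizes are powers of two, so the check can use masking.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: return nonzero if an object of elt_sz bytes
 * placed at addr would span a pg_sz page boundary. pg_sz must be a
 * power of two (true for all page/hugepage sizes). */
int
obj_crosses_page(uintptr_t addr, size_t elt_sz, size_t pg_sz)
{
	/* Compare the page containing the first byte with the page
	 * containing the last byte of the object. */
	return (addr & ~(pg_sz - 1)) !=
		((addr + elt_sz - 1) & ~(pg_sz - 1));
}

/* Hypothetical helper: where to place the next object. If it would
 * cross a page boundary, align the address up to the next page. */
uintptr_t
next_obj_addr(uintptr_t addr, size_t elt_sz, size_t pg_sz)
{
	if (obj_crosses_page(addr, elt_sz, pg_sz))
		addr = (addr + pg_sz - 1) & ~(pg_sz - 1);
	return addr;
}
```

With 4K pages and a 2368-byte mbuf, the second object (offset 2368) would
end at byte 4735 and cross into the next page, so it is moved up to 4096:
only one object fits per page, which is where the up-to-75% overhead figure
above comes from (4096/2368 - 1 ~= 73%). With 2M hugepages the per-page
remainder is tiny relative to the page, hence the ~0.1% worst case.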