From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <arybchenko@solarflare.com>,
 Ferruh Yigit <ferruh.yigit@linux.intel.com>,
 "Giridharan, Ganesan" <ggiridharan@rbbn.com>,
 Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
 "Kiran Kumar Kokkilagadda" <kirankumark@marvell.com>,
 Stephen Hemminger <sthemmin@microsoft.com>,
 Thomas Monjalon <thomas@monjalon.net>,
 Vamsi Krishna Attunuru <vattunuru@marvell.com>,
 Hemant Agrawal <hemant.agrawal@nxp.com>, Nipun Gupta <nipun.gupta@nxp.com>
Date: Mon,  4 Nov 2019 16:12:47 +0100
Message-Id: <20191104151254.6354-1-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v3 0/7] mempool: avoid objects allocations across
	pages

KNI assumes that mbufs are contiguous in kernel virtual memory. This
may not be true when using the IOVA=VA mode. To fix this, one option is
to ensure that objects do not cross page boundaries in the mempool.
This patchset implements that in patch 6/7.
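
To make the constraint concrete, here is a minimal sketch of the idea
(illustrative only, not the code added by this series; off,
total_obj_sz and pg_sz are hypothetical names): when objects are laid
out one after another, an object that would straddle a page boundary is
instead pushed to the start of the next page.

#include <stddef.h>

/*
 * Illustrative only: return the (possibly adjusted) offset at which an
 * object of total_obj_sz bytes can be placed so that it does not cross
 * a pg_sz page boundary. pg_sz == 0 means "no constraint".
 */
static size_t
place_without_crossing(size_t off, size_t total_obj_sz, size_t pg_sz)
{
	if (pg_sz == 0 || total_obj_sz > pg_sz)
		return off; /* no constraint, or object cannot fit in one page */

	if (off / pg_sz != (off + total_obj_sz - 1) / pg_sz)
		off = ((off + pg_sz - 1) / pg_sz) * pg_sz; /* skip to next page */

	return off;
}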

The preceding patches prepare for this:
- allow populating with an unaligned virtual area (1/7).
- reduce the space wasted in the mempool size calculation when not
  using the iova-contiguous allocation (2/7).
- remove the iova-contiguous allocation when populating the mempool
  (3/7): a va-contiguous allocation does the job as well if we want to
  populate without crossing page boundaries, so simplify the mempool
  populate function.
- export a function to get the minimum page size used in a mempool
  (4/7); a usage sketch follows this list.
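
As a rough illustration of how a caller might use the getter from patch
4/7 (a sketch based on this description only; the "page size 0 means no
constraint" convention is an assumption here, and the function is new
API introduced by this series):

#include <limits.h>
#include <rte_mempool.h>

/*
 * Sketch: how many objects of total_obj_sz bytes (assumed > 0) fit in
 * one page of the given mempool without crossing a page boundary.
 */
static unsigned int
objs_per_page(struct rte_mempool *mp, size_t total_obj_sz)
{
	size_t pg_sz;

	if (rte_mempool_get_page_size(mp, &pg_sz) < 0)
		return 0; /* error */

	if (pg_sz == 0)
		return UINT_MAX; /* assumed: no page-boundary constraint */

	return pg_sz / total_obj_sz;
}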

Memory consumption impact when using hugepages:
- worst case: +~0.1% for an mbuf pool (objsize ~= 2368)
- best case: -50% if the pool size is just above the page size

With 4K pages in IOVA=VA mode, however, the impact could be up to 75%
more memory for an mbuf pool, because there will be only one mbuf per
page. It is not clear how common this use case is.
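
For reference, that figure is consistent with simple arithmetic (a
back-of-the-envelope check, not taken from the patches): with a
~2368-byte object, only one object fits in a 4K page once crossing is
forbidden, so each object effectively consumes 4096 bytes instead of
~2368, i.e. roughly 4096/2368 - 1 ~= 73% extra, or ~75% with per-object
overhead.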

Caveat: this changes the behavior of the mempool (calc_mem_size and
populate), and there is a small risk of breaking things, especially
with alternative mempool drivers.

v3

* introduce new helpers to calculate the required memory size and to
  populate the mempool, and use them in the drivers: the alignment
  constraint of octeontx/octeontx2 is handled in this common code (a
  short illustrative sketch follows this changelog).
* fix the octeontx mempool driver by taking the alignment constraint
  into account, like in octeontx2
* fix the bucket mempool driver with 4K pages: limit the bucket size in
  this case to ensure that objects do not cross page boundaries. With
  larger pages it was already fine, because the bucket size (64K) is
  smaller than a page.
* fix some API comments in the mempool header file
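
For the alignment constraint mentioned in the first item above, a
purely illustrative sketch (hypothetical helper, not the code added by
the patches; reading the constraint as "each object starts at a
multiple of its total size" is my interpretation of the series):

#include <stddef.h>

/*
 * Illustrative only: round an offset up so that the next object starts
 * at a multiple of its total size, as octeontx-style mempools require,
 * in addition to the page-boundary rule shown earlier.
 */
static size_t
align_obj_start(size_t off, size_t total_obj_sz)
{
	return ((off + total_obj_sz - 1) / total_obj_sz) * total_obj_sz;
}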

v2

* update octeontx2 driver to keep alignment constraint (issue seen by
  Vamsi)
* add a new patch to use RTE_MEMPOOL_ALIGN (Andrew)
* fix initialization of for loop in rte_mempool_populate_virt() (Andrew)
* use rte_mempool_populate_iova() if mz_flags has
  RTE_MEMZONE_IOVA_CONTIG (Andrew)
* check rte_mempool_get_page_size() return value (Andrew)
* some other minor style improvements

rfc -> v1

* remove first cleanup patch, it was pushed separately
  a2b5a8722f20 ("mempool: clarify default populate function")
* add missing change in rte_mempool_op_calc_mem_size_default()
* allow unaligned addr/len in populate virt
* better split patches
* try to better explain the change
* use DPDK align macros when relevant

Olivier Matz (7):
  mempool: allow unaligned addr/len in populate virt
  mempool: reduce wasted space on mempool populate
  mempool: remove optimistic IOVA-contiguous allocation
  mempool: introduce function to get mempool page size
  mempool: introduce helpers for populate and calc mem size
  mempool: prevent objects from being across pages
  mempool: use the specific macro for object alignment

 drivers/mempool/bucket/Makefile               |   2 +
 drivers/mempool/bucket/meson.build            |   3 +
 drivers/mempool/bucket/rte_mempool_bucket.c   |  10 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   4 +-
 drivers/mempool/octeontx/Makefile             |   3 +
 drivers/mempool/octeontx/meson.build          |   3 +
 .../mempool/octeontx/rte_mempool_octeontx.c   |  21 +--
 drivers/mempool/octeontx2/Makefile            |   6 +
 drivers/mempool/octeontx2/meson.build         |   6 +
 drivers/mempool/octeontx2/otx2_mempool_ops.c  |  21 ++-
 lib/librte_mempool/rte_mempool.c              | 147 +++++++-----------
 lib/librte_mempool/rte_mempool.h              |  64 ++++++--
 lib/librte_mempool/rte_mempool_ops.c          |   4 +-
 lib/librte_mempool/rte_mempool_ops_default.c  | 113 +++++++++++---
 lib/librte_mempool/rte_mempool_version.map    |   3 +
 16 files changed, 272 insertions(+), 142 deletions(-)

-- 
2.20.1