DPDK patches and discussions
From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: thomas@monjalon.net, dev@dpdk.org
Cc: hemant.agrawal@nxp.com, akhil.goyal@nxp.com,
	anatoly.burakov@intel.com,
	Shreyansh Jain <shreyansh.jain@nxp.com>
Subject: [dpdk-dev] [PATCH v2 3/3] bus/dpaa: optimize physical to virtual address searching
Date: Fri, 27 Apr 2018 22:50:58 +0530	[thread overview]
Message-ID: <20180427172058.26850-4-shreyansh.jain@nxp.com> (raw)
In-Reply-To: <20180427172058.26850-1-shreyansh.jain@nxp.com>

With memory hotplugging support, memsegs are now ordered as virtually
contiguous rather than physically contiguous. The DPAA bus and drivers
depend on PA-to-VA address conversion for I/O.

This patch creates a list of the memory blocks requested to be pinned
to the DPAA mempool. A physical address being searched is expected to
belong to this list (it comes from the hardware pool), which makes the
lookup less expensive than a full memseg walk. There is, however, a
marginal performance drop compared to the legacy mode with physically
contiguous memsegs.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>

---
An optimized algorithm, based on some recent hotplugging patches, is
being worked on and should recover the lost performance. Until then,
this patch should be treated as a stop-gap solution.
---
 drivers/bus/dpaa/rte_dpaa_bus.h     | 27 ++++++++++++++++++++++++++-
 drivers/mempool/dpaa/dpaa_mempool.c | 33 ++++++++++++++++++++++++++++++++-
 2 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index 89aeac2d1..ca32b7f2f 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -95,9 +95,34 @@ struct dpaa_portal {
 	uint64_t tid;/**< Parent Thread id for this portal */
 };
 
-/* TODO - this is costly, need to write a fast coversion routine */
+/* Various structures representing contiguous memory maps */
+struct dpaa_memseg {
+	TAILQ_ENTRY(dpaa_memseg) next;
+	char *vaddr;
+	rte_iova_t iova;
+	size_t len;
+};
+
+TAILQ_HEAD(dpaa_memseg_list, dpaa_memseg);
+extern struct dpaa_memseg_list dpaa_memsegs;
+
+/* Either iterate over the list of internal memseg references or fallback to
+ * EAL memseg based iova2virt.
+ */
 static inline void *rte_dpaa_mem_ptov(phys_addr_t paddr)
 {
+	struct dpaa_memseg *ms;
+
+	/* Check if the address is already part of the memseg list internally
+	 * maintained by the dpaa driver.
+	 */
+	TAILQ_FOREACH(ms, &dpaa_memsegs, next) {
+		if (paddr >= ms->iova && paddr <
+			ms->iova + ms->len)
+			return RTE_PTR_ADD(ms->vaddr, (paddr - ms->iova));
+	}
+
+	/* If not, fall back to a full memseg list search */
 	return rte_mem_iova2virt(paddr);
 }
 
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 580e4640c..9d6277f82 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -27,6 +27,13 @@
 
 #include <dpaa_mempool.h>
 
+/* List of all the memseg information locally maintained in dpaa driver. This
+ * is to optimize the PA_to_VA searches until a better mechanism (algo) is
+ * available.
+ */
+struct dpaa_memseg_list dpaa_memsegs
+	= TAILQ_HEAD_INITIALIZER(dpaa_memsegs);
+
 struct dpaa_bp_info rte_dpaa_bpid_info[DPAA_MAX_BPOOLS];
 
 static int
@@ -287,10 +294,34 @@ dpaa_populate(struct rte_mempool *mp, unsigned int max_objs,
 	/* Detect pool area has sufficient space for elements in this memzone */
 	if (len >= total_elt_sz * mp->size)
 		bp_info->flags |= DPAA_MPOOL_SINGLE_SEGMENT;
+	struct dpaa_memseg *ms;
+
+	/* For each memory chunk pinned to the Mempool, a linked list of the
+	 * contained memsegs is created for searching when PA to VA
+	 * conversion is required.
+	 */
+	ms = rte_zmalloc(NULL, sizeof(struct dpaa_memseg), 0);
+	if (!ms) {
+		DPAA_MEMPOOL_ERR("Unable to allocate internal memory.");
+		DPAA_MEMPOOL_WARN("Fast Physical to Virtual Addr translation would not be available.");
+		/* If the element is not added, the lookup will simply miss
+		 * and fall back to the traditional DPDK memseg traversal
+		 * code. So this is not a blocking error - a warning is
+		 * printed, but 0 is returned.
+		 */
+		return 0;
+	}
+
+	ms->vaddr = vaddr;
+	ms->iova = paddr;
+	ms->len = len;
+	/* Insert at the head: lookups tend to hit recently pinned areas
+	 * first, since buffers are typically picked from them.
+	 */
+	TAILQ_INSERT_HEAD(&dpaa_memsegs, ms, next);
 
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, paddr, len,
 					       obj_cb, obj_cb_arg);
-
 }
 
 struct rte_mempool_ops dpaa_mpool_ops = {
-- 
2.14.1


Thread overview: 12+ messages
2018-04-27 16:25 [dpdk-dev] [PATCH 0/3] Optimization for DPAA/DPAA2 for PA/VA Addressing Shreyansh Jain
2018-04-27 16:25 ` [dpdk-dev] [PATCH 1/3] crypto/dpaa_sec: remove ctx based offset for PA-VA conversion Shreyansh Jain
2018-04-27 16:25 ` [dpdk-dev] [PATCH 2/3] bus/fslmc: optimize physical to virtual address searching Shreyansh Jain
2018-04-27 16:25 ` [dpdk-dev] [PATCH 3/3] bus/dpaa: " Shreyansh Jain
2018-04-27 17:20 ` [dpdk-dev] [PATCH v2 0/3] Optimization for DPAA/DPAA2 for PA/VA Addressing Shreyansh Jain
2018-04-27 17:20   ` [dpdk-dev] [PATCH v2 1/3] crypto/dpaa_sec: remove ctx based offset for PA-VA conversion Shreyansh Jain
2018-04-27 17:20   ` [dpdk-dev] [PATCH v2 2/3] bus/fslmc: optimize physical to virtual address searching Shreyansh Jain
2018-04-27 18:49     ` Thomas Monjalon
2018-04-27 19:24       ` Thomas Monjalon
2018-04-27 17:20   ` Shreyansh Jain [this message]
2018-04-27 19:32     ` [dpdk-dev] [PATCH v2 3/3] bus/dpaa: " Thomas Monjalon
2018-04-27 19:38   ` [dpdk-dev] [PATCH v2 0/3] Optimization for DPAA/DPAA2 for PA/VA Addressing Thomas Monjalon
