DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org, huilongx.xu@intel.com, waterman.cao@intel.com,
	yuanhan.liu@intel.com, weichunx.chen@intel.com,
	yu.y.liu@intel.com
Cc: thomas.monjalon@6wind.com
Subject: [dpdk-dev] [PATCH 3/4] eal: fix retrieval of phys address with Xen Dom0
Date: Mon, 11 Jul 2016 12:20:27 +0200	[thread overview]
Message-ID: <1468232428-24088-4-git-send-email-olivier.matz@6wind.com> (raw)
In-Reply-To: <1468232428-24088-1-git-send-email-olivier.matz@6wind.com>

When using Xen Dom0, /proc/self/pagemap appears to always return 0.
This breaks the creation of mbuf pools.
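
For context, the sketch below shows roughly how a pagemap-based lookup
works; it is a minimal, self-contained illustration of the Linux
/proc/self/pagemap ABI (the pagemap_virt2phys() helper is hypothetical,
not the DPDK implementation). When every entry reads back as 0, the
"present" bit is clear and no valid physical address can be derived,
which is the situation observed under Xen Dom0.

  #include <fcntl.h>
  #include <stdint.h>
  #include <unistd.h>

  /* Each virtual page has a 64-bit pagemap entry; bit 63 is the
   * "present" flag and bits 0-54 hold the PFN when the page is mapped. */
  static uint64_t
  pagemap_virt2phys(const void *virtaddr)
  {
          int fd = open("/proc/self/pagemap", O_RDONLY);
          long pgsz = sysconf(_SC_PAGESIZE);
          uint64_t entry = 0;
          off_t off = ((uintptr_t)virtaddr / pgsz) * sizeof(uint64_t);

          if (fd >= 0) {
                  if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry))
                          entry = 0;
                  close(fd);
          }
          if ((entry & (1ULL << 63)) == 0)
                  return 0; /* not present (or unreadable), as seen on Dom0 */
          return (entry & ((1ULL << 55) - 1)) * pgsz +
                  ((uintptr_t)virtaddr % pgsz);
  }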

We can work around this in rte_mem_virt2phy() by browsing the DPDK memory
segments. This only works for DPDK memory, but that is enough to fix
mempool creation.
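
As an illustration of the behavior this restores, a caller holding a
pointer into DPDK-managed memory (for instance a mempool object) can
translate it and detect failure as sketched below. This is a
hypothetical helper, assuming the usual <rte_memory.h> declarations of
phys_addr_t, rte_mem_virt2phy() and RTE_BAD_PHYS_ADDR; it is not part
of this patch.

  #include <rte_memory.h>

  /* Translate an object expected to live in DPDK memory; returns -1 if
   * the address falls outside the known memsegs (or the lookup fails). */
  static int
  get_dpdk_phys(const void *obj, phys_addr_t *pa)
  {
          phys_addr_t addr = rte_mem_virt2phy(obj);

          if (addr == RTE_BAD_PHYS_ADDR)
                  return -1;
          *pa = addr;
          return 0;
  }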

Fixes: c042ba20674a ("mempool: rework support of Xen dom0")
Fixes: 3097de6e6bfb ("mem: get physical address of any pointer")

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index b663244..42a29fa 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -164,6 +164,29 @@ rte_mem_virt2phy(const void *virtaddr)
 	int page_size;
 	off_t offset;
 
+	/* when using dom0, /proc/self/pagemap always returns 0, check in
+	 * dpdk memory by browsing the memsegs */
+	if (rte_xen_dom0_supported()) {
+		struct rte_mem_config *mcfg;
+		struct rte_memseg *memseg;
+		unsigned i;
+
+		mcfg = rte_eal_get_configuration()->mem_config;
+		for (i = 0; i < RTE_MAX_MEMSEG; i++) {
+			memseg = &mcfg->memseg[i];
+			if (memseg->addr == NULL)
+				break;
+			if (virtaddr > memseg->addr &&
+					virtaddr < RTE_PTR_ADD(memseg->addr,
+						memseg->len)) {
+				return memseg->phys_addr +
+					RTE_PTR_DIFF(virtaddr, memseg->addr);
+			}
+		}
+
+		return RTE_BAD_PHYS_ADDR;
+	}
+
 	/* Cannot parse /proc/self/pagemap, no need to log errors everywhere */
 	if (!proc_pagemap_readable)
 		return RTE_BAD_PHYS_ADDR;
-- 
2.8.1


Thread overview: 8+ messages
2016-07-11 10:20 [dpdk-dev] [PATCH 0/4] fix mempool creation " Olivier Matz
2016-07-11 10:20 ` [dpdk-dev] [PATCH 1/4] eal: fix typo in Xen Dom0 specific code Olivier Matz
2016-07-11 10:20 ` [dpdk-dev] [PATCH 2/4] mbuf: set errno on pool creation error Olivier Matz
2016-07-11 10:20 ` Olivier Matz [this message]
2016-07-11 10:20 ` [dpdk-dev] [PATCH 4/4] mempool: fix creation with Xen Dom0 Olivier Matz
2016-07-11 16:23 ` [dpdk-dev] [PATCH 0/4] fix mempool " Olivier Matz
2016-07-11 17:09 ` Thomas Monjalon
2016-07-12  0:31   ` Chen, WeichunX

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1468232428-24088-4-git-send-email-olivier.matz@6wind.com \
    --to=olivier.matz@6wind.com \
    --cc=dev@dpdk.org \
    --cc=huilongx.xu@intel.com \
    --cc=thomas.monjalon@6wind.com \
    --cc=waterman.cao@intel.com \
    --cc=weichunx.chen@intel.com \
    --cc=yu.y.liu@intel.com \
    --cc=yuanhan.liu@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.