DPDK patches and discussions
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: <olivier.matz@6wind.com>
Cc: <dev@dpdk.org>
Subject: [dpdk-dev] [PATCH 1/2] mempool: indicate the usages of multi memzones
Date: Wed, 6 Dec 2017 18:01:12 +0530
Message-ID: <1512563473-19969-1-git-send-email-hemant.agrawal@nxp.com>

This is required for optimizations with respect to hw mempools.
Hardware mempool drivers can apply a different kind of optimization
(for example, a fast physical-to-virtual address conversion) when all
buffers come from a single contiguous memzone.
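
For illustration only (not part of this patch): a hardware mempool
driver could key a fast address-conversion path off the new flag.
The helper below is a hypothetical sketch; pool_paddr_to_vaddr() is
invented for this example, while the rte_mempool fields and macros
it uses are existing DPDK API.

    #include <rte_mempool.h>

    /* Hypothetical driver helper: translate a buffer's IOVA/physical
     * address back to a virtual address.  With a single contiguous
     * memzone one base-offset computation suffices; otherwise walk
     * the pool's memory chunk list.
     */
    static void *
    pool_paddr_to_vaddr(const struct rte_mempool *mp, rte_iova_t paddr)
    {
        struct rte_mempool_memhdr *memhdr;

        if (!(mp->flags & MEMPOOL_F_MULTI_MEMZONE)) {
            /* Single memzone: constant offset from its base. */
            memhdr = STAILQ_FIRST(&mp->mem_list);
            return RTE_PTR_ADD(memhdr->addr, paddr - memhdr->iova);
        }

        /* Multiple memzones: find the chunk containing paddr. */
        STAILQ_FOREACH(memhdr, &mp->mem_list, next) {
            if (paddr >= memhdr->iova &&
                paddr < memhdr->iova + memhdr->len)
                return RTE_PTR_ADD(memhdr->addr,
                                   paddr - memhdr->iova);
        }
        return NULL;
    }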

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 lib/librte_mempool/rte_mempool.c | 6 ++++--
 lib/librte_mempool/rte_mempool.h | 5 +++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d50dba4..9d3737c 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -387,13 +387,15 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	/* Detect pool area has sufficient space for elements */
-	if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
-		if (len < total_elt_sz * mp->size) {
+	if (len < total_elt_sz * mp->size) {
+		if (mp->flags & MEMPOOL_F_CAPA_PHYS_CONTIG) {
 			RTE_LOG(ERR, MEMPOOL,
 				"pool area %" PRIx64 " not enough\n",
 				(uint64_t)len);
 			return -ENOSPC;
 		}
+		/* Memory will be allocated from multiple memzones */
+		mp->flags |= MEMPOOL_F_MULTI_MEMZONE;
 	}
 
 	memhdr = rte_zmalloc("MEMPOOL_MEMHDR", sizeof(*memhdr), 0);
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 721227f..394a4fe 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -292,6 +292,11 @@ struct rte_mempool {
  */
 #define MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS 0x0080
 
+/* Indicates that the mempool buffers are allocated from multiple memzones;
+ * the buffers may or may not be physically contiguous.
+ */
+#define MEMPOOL_F_MULTI_MEMZONE 0x0100
+
 /**
  * @internal When debug is enabled, store some statistics.
  *
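
As a usage note (illustrative, not part of this patch): each successful
call to rte_mempool_populate_iova() appends one memory chunk to the
pool, so once population is complete the same property can also be
derived from the chunk count. The helper below is hypothetical; the
nb_mem_chunks and flags fields are existing rte_mempool members.

    #include <rte_mempool.h>

    /* Illustrative check on a fully populated pool: a pool carved out
     * of one memzone has exactly one memory chunk, so nb_mem_chunks
     * conveys the same information as MEMPOOL_F_MULTI_MEMZONE.
     */
    static inline int
    pool_is_single_memzone(const struct rte_mempool *mp)
    {
        return mp->nb_mem_chunks == 1 &&
               !(mp->flags & MEMPOOL_F_MULTI_MEMZONE);
    }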
-- 
2.7.4

Thread overview: 14+ messages
2017-12-06 12:31 Hemant Agrawal [this message]
2017-12-06 12:31 ` [dpdk-dev] [PATCH 2/2] mempool/dpaa: optimize phy to virt conversion Hemant Agrawal
2017-12-19 10:24 ` [dpdk-dev] [PATCH 1/2] mempool: indicate the usages of multi memzones Olivier MATZ
2017-12-19 10:46   ` Hemant Agrawal
2017-12-19 11:02     ` Olivier MATZ
2017-12-19 13:08       ` Hemant Agrawal
2017-12-20 11:59         ` Hemant Agrawal
2017-12-22 13:59           ` Olivier MATZ
2017-12-22 16:18             ` Hemant Agrawal
2018-01-16 13:51               ` Olivier Matz
2018-01-17  7:49                 ` Hemant Agrawal
2018-01-05 10:52       ` santosh
2018-01-17  8:51 ` [dpdk-dev] [PATCH v2] mempool/dpaa: optimize phy to virt conversion Hemant Agrawal
2018-01-18 23:28   ` Thomas Monjalon
