DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: <santosh.shukla@caviumnetworks.com>
Cc: <dev@dpdk.org>, <thomas.monjalon@6wind.com>,
	<jerin.jacob@caviumnetworks.com>, <hemant.agrawal@nxp.com>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api
Date: Tue, 31 Jan 2017 11:31:51 +0100	[thread overview]
Message-ID: <20170131113151.3f8e07a0@platinum> (raw)
In-Reply-To: <1484925221-18431-1-git-send-email-santosh.shukla@caviumnetworks.com>

Hi Santosh,

I guess this patch is targeted for 17.05, right?

Please see some other comments below.

On Fri, 20 Jan 2017 20:43:41 +0530, <santosh.shukla@caviumnetworks.com>
wrote:
> From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> 
> HW pool manager e.g. Cavium SoC need s/w to program start and
> end address of pool. Currently there is no such api in ext-mempool.

Today, the mempool objects are not necessarily contiguous in virtual or
physical memory. The only assumption that can be made is that each
object is itself contiguous (virtually and physically). If the
MEMPOOL_F_NO_PHYS_CONTIG flag is passed, each object is only guaranteed
to be virtually contiguous.
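
To make this concrete: a pool may end up backed by several memory
chunks, which can be walked with rte_mempool_mem_iter(). A minimal
sketch (the dump_chunk() name is mine, just for illustration):

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_mempool.h>

  /* Print each memory chunk backing the pool: within a chunk the
   * memory is contiguous, but different chunks need not be.
   */
  static void
  dump_chunk(struct rte_mempool *mp, void *opaque,
             struct rte_mempool_memhdr *memhdr, unsigned mem_idx)
  {
          (void)mp;
          (void)opaque;
          printf("chunk %u: va=%p pa=0x%" PRIx64 " len=%zu\n",
                 mem_idx, memhdr->addr,
                 (uint64_t)memhdr->phys_addr, memhdr->len);
  }

  /* usage: rte_mempool_mem_iter(mp, dump_chunk, NULL); */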

> So introducing _populate_mz_range API which will let HW(pool manager)
> know about hugepage mapped virtual start and end address.

rte_mempool_ops_populate_mz_range() looks a bit long. What about
rte_mempool_ops_populate() instead?

> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 1c2aed8..9a39f5c 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -568,6 +568,10 @@ static unsigned optimize_object_size(unsigned obj_size)
>  		else
>  			paddr = mz->phys_addr;
>  
> +		/* Populate mz range */
> +		if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0)
> +			rte_mempool_ops_populate_mz_range(mp, mz);
> +
>  		if (rte_eal_has_hugepages()

Given what I've said above, I think the populate() callback should be
in rte_mempool_populate_phys() instead of
rte_mempool_populate_default(). It would be called for each contiguous
zone.
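
The wrapper called from there could look roughly like this;
rte_mempool_ops_populate() and the ops->populate field are the proposed
names, not existing DPDK symbols (a sketch only):

  /* Proposed wrapper, invoked from rte_mempool_populate_phys() once
   * per physically contiguous zone added to the pool.
   */
  void
  rte_mempool_ops_populate(struct rte_mempool *mp, char *vaddr,
                           phys_addr_t paddr, size_t len)
  {
          struct rte_mempool_ops *ops;

          ops = rte_mempool_get_ops(mp->ops_index);
          if (ops->populate != NULL)
                  ops->populate(mp, vaddr, paddr, len);
  }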

> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>   */
>  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> +/**
> + * Set the memzone va/pa addr range and len in the external pool.
> + */
> +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool *mp,
> +		const struct rte_memzone *mz);
> +

And this API would be:
typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
		char *vaddr, phys_addr_t paddr, size_t len)
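
On the driver side, the callback could then simply record the zone
boundaries for the hardware. A hypothetical sketch (my_hw_populate,
struct my_hw_state and its fields are made up for illustration):

  /* Hypothetical driver state, stored in mp->pool_data by alloc(). */
  struct my_hw_state {
          phys_addr_t start;
          phys_addr_t end;
  };

  /* Matches the proposed rte_mempool_populate_t prototype: remember
   * the lowest/highest physical address seen, so the HW pool can be
   * programmed with the whole range afterwards.
   */
  static void
  my_hw_populate(struct rte_mempool *mp, char *vaddr,
                 phys_addr_t paddr, size_t len)
  {
          struct my_hw_state *st = mp->pool_data;

          (void)vaddr;
          if (st->start == 0 || paddr < st->start)
                  st->start = paddr;
          if (paddr + len > st->end)
                  st->end = paddr + len;
  }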



If your hw absolutely needs contiguous memory, a solution could be:

- add a new flag MEMPOOL_F_CONTIG (maybe a better name could be
  found) saying that all the mempool objects must be contiguous.
- add the ops_populate() callback in rte_mempool_populate_phys(), as
  suggested above

Then:

  /* create an empty mempool */
  rte_mempool_create_empty(...);

  /* set the handler:
   *   in the ext handler, the mempool flags are updated with
   *   MEMPOOL_F_CONTIG
   */
  rte_mempool_set_ops_byname(..., "my_hardware");

  /* if MEMPOOL_F_CONTIG is set, all populate() functions should ensure
   * that there is only one contiguous zone
   */
  rte_mempool_populate_default(...);
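
For completeness, the handler registration could look roughly like
this; MEMPOOL_F_CONTIG is the proposed flag, all my_hardware_* names
are placeholders, and the other callbacks are elided:

  /* Hypothetical alloc() callback: mark the pool as requiring a
   * single contiguous zone (MEMPOOL_F_CONTIG is the proposed flag,
   * not an existing one).
   */
  static int
  my_hardware_alloc(struct rte_mempool *mp)
  {
          mp->flags |= MEMPOOL_F_CONTIG;
          /* allocate driver private data into mp->pool_data here */
          return 0;
  }

  static const struct rte_mempool_ops my_hardware_ops = {
          .name = "my_hardware",
          .alloc = my_hardware_alloc,
          .free = my_hardware_free,
          .enqueue = my_hardware_enqueue,
          .dequeue = my_hardware_dequeue,
          .get_count = my_hardware_get_count,
  };

  MEMPOOL_REGISTER_OPS(my_hardware_ops);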

 
Regards,
Olivier


Thread overview: 7+ messages
2017-01-20 14:20 [dpdk-dev] [PATCH] " santosh.shukla
2017-01-20 14:38 ` Jerin Jacob
2017-01-20 15:13 ` [dpdk-dev] [PATCH v2] " santosh.shukla
2017-01-31 10:31   ` Olivier Matz [this message]
2017-01-31 14:32     ` Santosh Shukla
2017-02-06 17:01       ` Olivier Matz
2017-02-07  4:00         ` Santosh Shukla
