DPDK patches and discussions
From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
To: Olivier Matz <olivier.matz@6wind.com>
Cc: <dev@dpdk.org>, <thomas.monjalon@6wind.com>,
	<jerin.jacob@caviumnetworks.com>, <hemant.agrawal@nxp.com>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api
Date: Tue, 31 Jan 2017 20:02:26 +0530
Message-ID: <20170131143211.GA31464@santosh-Latitude-E5530-non-vPro>
In-Reply-To: <20170131113151.3f8e07a0@platinum>

Hi Olivier,

Reply inline.

On Tue, Jan 31, 2017 at 11:31:51AM +0100, Olivier Matz wrote:
> Hi Santosh,
> 
> I guess this patch is targeted for 17.05, right?
>

Yes.

> Please see some other comments below.
> 
> On Fri, 20 Jan 2017 20:43:41 +0530, <santosh.shukla@caviumnetworks.com>
> wrote:
> > From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
> > 
> > HW pool managers, e.g. the Cavium SoC, need s/w to program the start and
> > end address of the pool. Currently there is no such API in ext-mempool.
> 
> Today, the mempool objects are not necessarily contiguous in
> virtual or physical memory. The only assumption that can be made is
> that each object is contiguous (virtually and physically). If the flag
> MEMPOOL_F_NO_PHYS_CONTIG is passed, each object is assured to be
> contiguous virtually.
>

Some clarification: IIUC, the pa/va addresses within each hugepage (aka mz)
are contiguous, but there is no guarantee of contiguity across hugepages
(aka multiple mzs). Right?

If that is correct, then a case where the pool start address is in one mz
and the end address is in another mz is a problem scenario. Correct? That
being the case, ext-mempool drivers would be affected.
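
To make that concrete, here is a quick sketch (not part of the patch; it
only uses the existing rte_mempool_mem_iter() API, and the helper names are
mine) that dumps every contiguous chunk a pool ends up with. If more than
one chunk is printed, a single start/end address pair cannot describe the
pool:

#include <stdio.h>
#include <inttypes.h>
#include <rte_mempool.h>

/* called once for each registered memory chunk of the pool */
static void
dump_chunk(struct rte_mempool *mp, void *opaque,
	   struct rte_mempool_memhdr *memhdr, unsigned mem_idx)
{
	(void)mp;
	(void)opaque;
	printf("chunk %u: vaddr=%p paddr=0x%" PRIx64 " len=%zu\n",
	       mem_idx, memhdr->addr, (uint64_t)memhdr->phys_addr,
	       memhdr->len);
}

/* print every contiguous chunk backing the mempool */
static void
dump_pool_chunks(struct rte_mempool *mp)
{
	rte_mempool_mem_iter(mp, dump_chunk, NULL);
}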

> > So this patch introduces the _populate_mz_range API, which will let the
> > HW (pool manager) know the hugepage-mapped virtual start and end address.
> 
> rte_mempool_ops_populate_mz_range() looks a bit long. What about
> rte_mempool_ops_populate() instead?

Ok.

> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 1c2aed8..9a39f5c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -568,6 +568,10 @@ static unsigned optimize_object_size(unsigned obj_size)
> >  		else
> >  			paddr = mz->phys_addr;
> >  
> > +		/* Populate mz range */
> > +		if ((mp->flags & MEMPOOL_F_POOL_CREATED) == 0)
> > +			rte_mempool_ops_populate_mz_range(mp, mz);
> > +
> >  		if (rte_eal_has_hugepages()
> 
> Given what I've said above, I think the populate() callback should be
> in rte_mempool_populate_phys() instead of
> rte_mempool_populate_default(). It would be called for each contiguous
> zone.
>

Ok.

> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> >   */
> >  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> > +/**
> > + * Set the memzone va/pa addr range and len in the external pool.
> > + */
> > +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool *mp,
> > +		const struct rte_memzone *mz);
> > +
> 
> And this API would be:
> typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
> 		char *vaddr, phys_addr_t paddr, size_t len)
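
Just to check my understanding of that signature, a driver-side
implementation could look roughly like the sketch below (nothing here exists
in the tree; my_hw_program_range() is only a stand-in for the SoC-specific
call that programs the pool hardware):

#include <inttypes.h>
#include <rte_mempool.h>
#include <rte_log.h>

/* stand-in for the SoC-specific call programming the pool hardware
 * with one contiguous address range (hypothetical) */
static void
my_hw_program_range(void *vaddr, phys_addr_t paddr, size_t len)
{
	RTE_LOG(INFO, MEMPOOL, "range va=%p pa=0x%" PRIx64 " len=%zu\n",
		vaddr, (uint64_t)paddr, len);
}

/* would be called once per contiguous chunk if the proposed
 * populate callback is added to struct rte_mempool_ops */
static void
my_hw_pool_populate(struct rte_mempool *mp, char *vaddr,
		    phys_addr_t paddr, size_t len)
{
	(void)mp;
	my_hw_program_range(vaddr, paddr, len);
}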
>  
> 
> If your hw absolutely needs contiguous memory, a solution could be:
> 
> - add a new flag MEMPOOL_F_CONTIG (maybe a better name could be
>   found) saying that all the mempool objects must be contiguous.
> - add the ops_populate() callback in rte_mempool_populate_phys(), as
>   suggested above
> 
> Then:
> 
>   /* create an empty mempool */
>   rte_mempool_create_empty(...);
> 
>   /* set the handler:
>    *   in the ext handler, the mempool flags are updated with
>    *   MEMPOOL_F_CONTIG
>    */
>   rte_mempool_set_ops_byname(..., "my_hardware");
> 

You mean the ext handler would set the mempool flag using the 'pool_config'
param, something like rte_mempool_set_ops_byname(..., "my_hardware", MEMPOOL_F_CONTIG)?
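
Or did you rather mean that the handler sets the flag itself from its own
alloc() callback? Something like the sketch below (MEMPOOL_F_CONTIG does not
exist yet; the value used here is only a placeholder for the proposed flag,
and the callback name is mine):

#include <rte_mempool.h>

/* placeholder value for the proposed flag; not a real DPDK define */
#ifndef MEMPOOL_F_CONTIG
#define MEMPOOL_F_CONTIG 0x0040
#endif

/* hypothetical ext-handler alloc callback: mark the pool as requiring
 * a single contiguous zone before it gets populated */
static int
my_hw_pool_alloc(struct rte_mempool *mp)
{
	mp->flags |= MEMPOOL_F_CONTIG;
	/* driver-private pool state would be allocated and attached here */
	return 0;
}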

>   /* if MEMPOOL_F_CONTIG is set, all populate() functions should ensure
>    * that there is only one contiguous zone
>    */
>   rte_mempool_populate_default(...);
> 

I am not too sure about the scope of MEMPOOL_F_CONTIG. How can the
MEMPOOL_F_CONTIG flag {set by the application/driver by calling
rte_mempool_create(..., flag)} make sure there is only one contiguous zone?
I would like to understand this from you.

In my understanding:
rte_mempool_populate_default() will request a memzone based on the mp->size
value; and if mp->size is larger than one hugepage_sz {i.e. one mz request is
not enough}, then it will request more hugepages {i.e. more mz requests}. So
IIUC, contiguity is not guaranteed.
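
For reference, the application-level flow you describe would then look
something like the sketch below (assuming the handler registers itself as
"my_hardware" and that populate is where the single-zone requirement would
be enforced; sizes are arbitrary):

#include <rte_mempool.h>

#define NUM_OBJS   8192
#define OBJ_SIZE   2048
#define CACHE_SIZE 256

static struct rte_mempool *
create_hw_pool(void)
{
	struct rte_mempool *mp;

	/* 1) create an empty mempool: no memory is populated yet */
	mp = rte_mempool_create_empty("hw_pool", NUM_OBJS, OBJ_SIZE,
				      CACHE_SIZE, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* 2) select the external HW handler; per the proposal its alloc()
	 *    would set the (proposed) MEMPOOL_F_CONTIG flag */
	if (rte_mempool_set_ops_byname(mp, "my_hardware", NULL) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* 3) populate: each contiguous chunk would be reported to the
	 *    handler through the proposed populate callback */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}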

> Regards,
> Olivier


Thread overview: 7+ messages
2017-01-20 14:20 [dpdk-dev] [PATCH] " santosh.shukla
2017-01-20 14:38 ` Jerin Jacob
2017-01-20 15:13 ` [dpdk-dev] [PATCH v2] " santosh.shukla
2017-01-31 10:31   ` Olivier Matz
2017-01-31 14:32     ` Santosh Shukla [this message]
2017-02-06 17:01       ` Olivier Matz
2017-02-07  4:00         ` Santosh Shukla
