From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
To: Olivier Matz <olivier.matz@6wind.com>
Cc: <dev@dpdk.org>, <thomas.monjalon@6wind.com>,
	<jerin.jacob@caviumnetworks.com>, <hemant.agrawal@nxp.com>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: Introduce _populate_mz_range api
Date: Tue, 7 Feb 2017 09:30:01 +0530
Message-ID: <20170207035959.GA23401@santosh-Latitude-E5530-non-vPro>
In-Reply-To: <20170206180115.480e07af@platinum>

Hi Olivier,

On Mon, Feb 06, 2017 at 06:01:15PM +0100, Olivier Matz wrote:
> On Tue, 31 Jan 2017 20:02:26 +0530, Santosh Shukla
> <santosh.shukla@caviumnetworks.com> wrote:
> > Hi Olivier,
> > 
> > Reply inline.
> > 
> > On Tue, Jan 31, 2017 at 11:31:51AM +0100, Olivier Matz wrote:
> > > Hi Santosh,
> > > 
> > > I guess this patch is targeted for 17.05, right?
> > >  
> > 
> > Yes.
> > 
> > > Please see some other comments below.
> > > 
> > > On Fri, 20 Jan 2017 20:43:41 +0530,
> > > <santosh.shukla@caviumnetworks.com> wrote:  
> > > > From: Santosh Shukla <santosh.shukla@caviumnetworks.com>

[snip..]

> > > > --- a/lib/librte_mempool/rte_mempool.h
> > > > +++ b/lib/librte_mempool/rte_mempool.h
> > > > @@ -387,6 +387,12 @@ typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> > > >   */
> > > >  typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
> > > > +/**
> > > > + * Set the memzone va/pa addr range and len in the external pool.
> > > > + */
> > > > +typedef void (*rte_mempool_populate_mz_range_t)(struct rte_mempool *mp,
> > > > +		const struct rte_memzone *mz);
> > > > +
> > > 
> > > And this API would be:
> > > typedef void (*rte_mempool_populate_t)(struct rte_mempool *mp,
> > > 		char *vaddr, phys_addr_t paddr, size_t len)
> > >  
> > > 
> > > If your hw absolutely needs contiguous memory, a solution could be:
> > > 
> > > - add a new flag MEMPOOL_F_CONTIG (maybe a better name could be
> > >   found) saying that all the mempool objects must be contiguous.
> > > - add the ops_populate() callback in rte_mempool_populate_phys(), as
> > >   suggested above
> > > 
> > > Then:
> > > 
> > >   /* create an empty mempool */
> > >   rte_mempool_create_empty(...);
> > > 
> > >   /* set the handler:
> > >    *   in the ext handler, the mempool flags are updated with
> > >    *   MEMPOOL_F_CONTIG
> > >    */
> > >   rte_mempool_set_ops_byname(..., "my_hardware");
> > >   
> > 
> > You mean, the ext handler will set the mempool flag using the
> > 'pool_config' param; something like
> > rte_mempool_set_ops_byname(..., "my_hardware", MEMPOOL_F_CONTIG); ?
> 
> I don't mean changing the API of rte_mempool_set_ops_byname().
> I was suggesting something like this:
> 
> static int your_handler_alloc(struct rte_mempool *mp)
> {
> 	/* handler init */
> 	...
> 
> 	mp->flags |= MEMPOOL_F_CONTIG;
> 	return 0;
> }
> 

Ok. Will do in v3.
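
Just to make sure I follow where that alloc hook plugs in: something
like the below is what I have in mind (rough sketch only; the ops
struct name and the other callbacks are placeholders, not the v3 code):

static const struct rte_mempool_ops my_hardware_ops = {
	.name = "my_hardware",
	.alloc = your_handler_alloc,	/* sets MEMPOOL_F_CONTIG in mp->flags */
	.free = your_handler_free,
	.enqueue = your_handler_enqueue,
	.dequeue = your_handler_dequeue,
	.get_count = your_handler_get_count,
};

MEMPOOL_REGISTER_OPS(my_hardware_ops);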

> And update rte_mempool_populate_*() functions to manage this flag:
> instead of segmenting, just fail if it cannot fit in one segment. It won't
> really solve the issue, but at least it will be properly detected, and
> the user could take dispositions to have more contiguous memory (ex:
> use larger hugepages, allocate hugepages at boot time).
>

Agree.
Will take care in v3.
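
Roughly what I have in mind for that check (sketch only; the exact
placement and error code can be settled in v3):

	/* in rte_mempool_populate_phys(): if the handler asked for a
	 * single contiguous zone and a memory chunk is already
	 * registered, a second chunk would break contiguity, so fail
	 * instead of silently segmenting the pool.
	 */
	if ((mp->flags & MEMPOOL_F_CONTIG) && mp->nb_mem_chunks != 0)
		return -EINVAL;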

> > 
> > >   /* if MEMPOOL_F_CONTIG is set, all populate() functions should
> > >    * ensure that there is only one contiguous zone
> > >    */
> > >   rte_mempool_populate_default(...);
> > >   
> > 
> > I am not too sure about the scope of MEMPOOL_F_CONTIG. How can the
> > MEMPOOL_F_CONTIG flag {set by the application/driver by calling
> > rte_mempool_create(..., flag)} make sure there is only one contiguous
> > zone? I'd like to understand this from you.
> 
> As described above, there would be no change from application point of
> view. The handler would set the mempool flag by itself to change the
> behavior of the populate functions.
> 
> 

Right.

> > 
> > In my understanding:
> > rte_mempool_populate_default() will request a memzone based on the
> > mp->size value; and if mp->size is more than one hugepage_sz {i.e. one
> > mz request is not enough}, then it will request more hugepages {i.e.
> > more mz requests}. So IIUC contiguity is not guaranteed.
> 
> Yes, that's how it works today. As EAL cannot guarantee that the
> hugepages are physically contiguous, it tries to segment the mempool
> objects into several memory zones.
> 
> 
> Regards,
> Olivier
> 
> 
> 

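That also matches my reading of the current
rte_mempool_populate_default() loop; heavily simplified (paraphrased,
not the exact code), it does something like:

	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
		size = rte_mempool_xmem_size(n, total_elt_sz, pg_shift);
		mz = rte_memzone_reserve(mz_name, size, socket_id, flags);
		if (mz == NULL) {
			/* the full size is not available as one zone:
			 * fall back to the biggest zone EAL can give us
			 */
			mz = rte_memzone_reserve(mz_name, 0, socket_id, flags);
		}
		/* each iteration may land in a different, physically
		 * non-contiguous memzone
		 */
		ret = rte_mempool_populate_phys(mp, mz->addr, mz->phys_addr,
						mz->len, free_cb, mz);
	}
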
Regards,
Santosh


Thread overview: 7+ messages
2017-01-20 14:20 [dpdk-dev] [PATCH] " santosh.shukla
2017-01-20 14:38 ` Jerin Jacob
2017-01-20 15:13 ` [dpdk-dev] [PATCH v2] " santosh.shukla
2017-01-31 10:31   ` Olivier Matz
2017-01-31 14:32     ` Santosh Shukla
2017-02-06 17:01       ` Olivier Matz
2017-02-07  4:00         ` Santosh Shukla [this message]
