DPDK patches and discussions
From: Olivier MATZ <olivier.matz@6wind.com>
To: Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: Thomas Monjalon <thomas.monjalon@6wind.com>, <dev@dpdk.org>,
	<shreyansh.jain@nxp.com>
Subject: Re: [dpdk-dev] [PATCH] mempool: introduce flag to indicate hw mempool
Date: Tue, 4 Apr 2017 11:13:37 +0200	[thread overview]
Message-ID: <20170404111337.7ecf8502@neon> (raw)
In-Reply-To: <d4b6b1d8-75bd-d0a5-2818-34c74bfdf120@nxp.com>

Hi Hemant,

On Tue, 4 Apr 2017 12:59:08 +0530
Hemant Agrawal <hemant.agrawal@nxp.com> wrote:

> Hi Thomas/Olivier,
> 
> On 4/4/2017 12:28 PM, Thomas Monjalon wrote:
> > 2017-04-04 11:05, Hemant Agrawal:  
> >> Hi Olivier,
> >>
> >> On 4/3/2017 8:49 PM, Olivier Matz wrote:  
> >>> Hi Hemant,
> >>>
> >>> On Mon, 3 Apr 2017 14:42:09 +0530, Hemant Agrawal
> >>> <hemant.agrawal@nxp.com> wrote:  
> >>>> Hardware pools need to distinguish between buffers allocated
> >>>> from software-backed and hardware-backed pools.
> >>>>
> >>>> Some HW NICs may choose to autonomously free the packets during
> >>>> transmit if the packet is from a HW pool, while they should not
> >>>> do so for software-backed pools.
> >>>>
> >>>> Such a flag would also help when multiple pools are being
> >>>> handled by a PMD, saving costly compare operations against any
> >>>> internal marker.
> >>>>
> >>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> >>>> ---
> >>>>  lib/librte_mempool/rte_mempool.h | 5 +++++
> >>>>  1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >>>> index 991feaa..91dbd21 100644
> >>>> --- a/lib/librte_mempool/rte_mempool.h
> >>>> +++ b/lib/librte_mempool/rte_mempool.h
> >>>> @@ -263,6 +263,11 @@ struct rte_mempool {
> >>>>  #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
> >>>>  #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
> >>>>  #define MEMPOOL_F_NO_PHYS_CONTIG 0x0020 /**< Don't need physically contiguous objs. */
> >>>> +#define MEMPOOL_F_HW_POOL        (1 << ((sizeof(int) * 8) - 1)) /**< Internal:
> >>>> +	* Hardware offloaded pool. This information may be used by the
> >>>> +	* NIC or other hw. Some NICs autonomously free the HW backed pool packets. */
> >>>> +
> >>>> +/**< Don't need physically contiguous objs. */
> >>>>
> >>>>  /**
> >>>>   * @internal When debug is enabled, store some statistics.  
> >>>
> >>>
> >>> One thing is still not clear to me: in your driver, you check
> >>> this flag:
> >>> - if it is unset, you reallocate a packet from your hw pool, you
> >>> copy some metadata, and you send it to the hw.
> >>> - if it is set, you assume that you can call mempool_to_bpid(mp)
> >>> and directly send it to the hw.
> >>>
> >>> I think this is not correct. The test you want to do in your
> >>> driver is: "is it the pool that I registered for my hardware"?
> >>> It is not:
> >>> "is it a hardware managed pool?".
> >>> I think what you are doing here prevents to use 2 hardware
> >>> mempools at the same time, because they would all have this flag,
> >>> and mempool_to_bpid() would probably crash.
> >>>  
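
In rough pseudo-C, the two branches described above would look
something like the sketch below. dpaa2_xmit(), hw_send(),
copy_metadata() and hw_mp are made-up names for illustration, not
the real driver API; only mempool_to_bpid() is the driver helper
mentioned in this thread:

  #include <errno.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Hypothetical helpers, for illustration only. */
  static struct rte_mempool *hw_mp;  /* pool registered with the hw */
  static uint16_t mempool_to_bpid(struct rte_mempool *mp);
  static int hw_send(uint16_t bpid, struct rte_mbuf *m);
  static void copy_metadata(struct rte_mbuf *dst,
                            const struct rte_mbuf *src);

  static int
  dpaa2_xmit(struct rte_mbuf *m)
  {
          struct rte_mempool *mp = m->pool;

          if (mp->flags & MEMPOOL_F_HW_POOL) {
                  /* The hw is assumed to know this pool: map it to
                   * a buffer pool id and send the mbuf directly. */
                  return hw_send(mempool_to_bpid(mp), m);
          }

          /* Software-backed pool: take a replacement mbuf from the
           * hw pool, copy the metadata, and send the copy. */
          struct rte_mbuf *hw_m = rte_pktmbuf_alloc(hw_mp);

          if (hw_m == NULL)
                  return -ENOMEM;
          copy_metadata(hw_m, m);
          return hw_send(mempool_to_bpid(hw_mp), hw_m);
  }
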
> >>
> >> No, I am only trying to differentiate between hw and software
> >> pool packets. I don't see a possibility of having two different,
> >> orthogonal hw mempool types working in the same system. At any
> >> point in time, when you are running DPDK on a particular type of
> >> hardware, you will only have *one* type of hardware-backed pool
> >> in your implementation. There may be many mempool instances, but
> >> all of them will work with mempool_to_bpid().
> >
> > No, you could have different HW mempools on one system.
> > Please imagine PCI NICs which provide a mempool.
> > (other argument: never say never ;)
> >  
> Thanks. Good Advice :)
> 
> >> The application may send a packet allocated from a *ring* pool
> >> instead of using the "hw" pool.
> >>
> >> So, it is sufficient to just check whether the pool is offloaded
> >> or not. The HW can take care of all the supported pools.
> >>  
> >>> Instead, can't you just compare the mempool pointer to a value
> >>> stored internally in the driver?  
> >>
> >> There can be more than one instance of mempool; the driver is
> >> capable of supporting multiple hw offloaded mempools, and each
> >> dpaa2 PMD port may have a different mempool instance registered.
> >>
> >> So, pointer comparison is not practical unless I start storing
> >> the mempool driver pointer.
> >
> > Is it difficult to store this pointer?
> >  
> 
> Yes, there is something workable here.
> The PMD can store the "rte_mempool_ops_table" ops_index for dpaa2
> (the default buffer pool). The mbuf contains the pool pointer, and
> the pool carries the same ops_index, so the comparison can be done
> on a per-packet basis.
> 
> Olivier, do you see any issue with the above approach?
> 

Sorry, I missed this mail. Yes, I think this approach should work.
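
For the record, a minimal sketch of that per-packet test, assuming
the PMD saves its ops index once at pool registration time. The
names dpaa2_mp_ops_index and is_dpaa2_pool() are illustrative, not
taken from the driver:

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Stored once by the PMD when it registers its default pool:
   * the index of the dpaa2 ops within rte_mempool_ops_table. */
  static int dpaa2_mp_ops_index = -1;

  /* Per-packet test: was this mbuf allocated from a pool driven
   * by the dpaa2 mempool ops (hence mappable to a bpid), rather
   * than from a software ring-backed pool? */
  static inline int
  is_dpaa2_pool(const struct rte_mbuf *m)
  {
          return m->pool->ops_index == dpaa2_mp_ops_index;
  }
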

Olivier
