DPDK patches and discussions
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: Hemant Agrawal <hemant.agrawal@nxp.com>, shreyansh.jain@nxp.com
Cc: Ferruh Yigit <ferruh.yigit@intel.com>,
	Olivier Matz <olivier.matz@6wind.com>,
	dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v1] mempool/dpaa2: add DPAA2 hardware offloaded mempool
Date: Thu, 30 Mar 2017 15:52:22 +0200	[thread overview]
Message-ID: <4494502.Baz0AuQ6It@xps13> (raw)
In-Reply-To: <31349bbe-0e58-6962-ca41-b231796bd0ba@nxp.com>

2017-03-30 18:20, Hemant Agrawal:
> On 3/30/2017 4:59 PM, Ferruh Yigit wrote:
> > On 3/28/2017 10:45 AM, Hemant Agrawal wrote:
> >> Hi Olivier,
> >>
> >> On 3/27/2017 10:00 PM, Olivier Matz wrote:
> >>> Hi Hemant,
> >>>
> >>> On Fri, 24 Mar 2017 17:42:46 +0100, Olivier Matz <olivier.matz@6wind.com> wrote:
> >>>>>> From a high level, I'm still a little puzzled by the amount of references
> >>>>>> to mbuf in a mempool handler code, which should theoretically handle any
> >>>>>> kind of object.
> >>>>>>
> >>>>>> Is it planned to support other kinds of objects?
> >>
> >> We do have a plan. However, we also have reservations about using hw
> >> mempools for non-packet objects. They generally give an advantage when
> >> working seamlessly with NICs for rx/tx of packets.
> >>
> >>>>>> Does this driver pass the mempool autotest?
> >>
> >> We have tested it internally by manually changing the mempool autotest
> >> (mempool name from "stack" to "dpaa2"). We still need to figure out
> >> how to pass the default pool name to the autotest.
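> >> For illustration only, a minimal sketch of the idea, using the existing
> >> rte_mempool_set_ops_byname() hook; the wrapper name and the pool
> >> parameters below are just placeholders:
> >>
> >> #include <rte_mempool.h>
> >>
> >> /* sketch: create a test pool and attach a handler chosen by name,
> >>  * e.g. "dpaa2" instead of the default "ring_mp_mc" */
> >> static struct rte_mempool *
> >> create_test_pool(const char *ops_name)
> >> {
> >> 	struct rte_mempool *mp;
> >>
> >> 	mp = rte_mempool_create_empty("autotest_pool", 1024, 256,
> >> 				      0, 0, SOCKET_ID_ANY, 0);
> >> 	if (mp == NULL)
> >> 		return NULL;
> >>
> >> 	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) < 0 ||
> >> 	    rte_mempool_populate_default(mp) < 0) {
> >> 		rte_mempool_free(mp);
> >> 		return NULL;
> >> 	}
> >> 	return mp;
> >> }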
> >>
> >>>>>> Can the user be aware of these limitations?
> >>
> >> That opens a new question: do we need documentation for
> >> drivers/mempools as well, or, for the time being, can we add this to
> >> the NXP PMD driver limitations?
> >>
> >>>
> >>> Some more comments.
> >>>
> >>> I think the mempool model as it is today in DPDK does not match your
> >>> driver model.
> >>>
> >>> For instance, the fact that the hardware is able to return the mbuf to the
> >>> pool by itself makes me think that the mbuf rework patchset [1] can break
> >>> your driver. Especially this patch [2], which expects that m->refcnt=1,
> >>> m->nb_segs=1 and m->next=NULL when allocating from a pool.
> >>>
> >> Yes! We will need to send a small patch once your patch is applied.
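> >> Roughly, such a patch would just restore the invariants your rework
> >> expects whenever the hardware hands a buffer back; a minimal sketch,
> >> assuming the current mbuf field names:
> >>
> >> #include <rte_mbuf.h>
> >>
> >> /* sketch: make an mbuf returned by the hw pool look freshly allocated */
> >> static inline void
> >> hw_pool_reset_mbuf(struct rte_mbuf *m)
> >> {
> >> 	rte_mbuf_refcnt_set(m, 1);
> >> 	m->nb_segs = 1;
> >> 	m->next = NULL;
> >> }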
> >>
> >>> - Can this handler be used with another driver?
> >>
> >> The NXP mempool is specific to NXP hw only. It is designed to work with
> >> NXP DPAA2 type NICs. There is no limitation in using it with any other
> >> PCI NIC connected to an NXP board. We have tested it with ixgbe (82599)
> >> interworking with DPAA2 interfaces.
> >>
> >>> - Can your driver be used with another mempool handler?
> >> No, the NXP DPAA2 PMD needs the NXP mempool - at least for RX packets.
> >> For TX, we can send packets from non-NXP DPAA2 pools. (The HW will not
> >> free them autonomously; a TX confirm will be required.)
> >>
> >>> - Is the dpaa driver the only driver that would take advantage of
> >>>   the mempool handler? Will it work with cloned mbufs?
> >>>
> >> For now, the dpaa driver is the only user. We will send cloned-mbuf
> >> support patches once the basic driver is upstream.
> >>
> >>> Defining a flag like this in your private code should not be done:
> >>>
> >>>    #define MEMPOOL_F_HW_PKT_POOL (1 << ((sizeof(int) * 8) - 1))
> >>>
> >>> Nothing prevents it from breaking if someone touches the generic flags in
> >>> mempool. And we can only hope that no other driver does the same :)
> >>
> >> Yes! I agree. We need to work with you to improve the overall hw mempool
> >> support infrastructure:
> >>
> >> 1. When transmitting a packet, the HW needs to differentiate between
> >> packets from HW-supported pools and packets from other pools. (An
> >> application may choose to have multiple pools of different types.)
> >>
> >> 2. An option to use a different default mempool when running with
> >> virtio-net in a VM. You shared your opinion and some possible ways a
> >> while back. Now that hw mempools are actually coming to DPDK, we need
> >> to restart this discussion.
> >>
> >>>
> >>> Maybe you can do the same without a flag, for instance by checking if
> >>> (m->pool == pmd->pool)?
> >>
> >> This may not work if more than one instance of the hw mempool is in use.
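> >> A check on the handler name, rather than on the pool pointer, would cover
> >> multiple instances; a minimal sketch of the idea, assuming the handler is
> >> registered as "dpaa2":
> >>
> >> #include <string.h>
> >> #include <rte_mbuf.h>
> >> #include <rte_mempool.h>
> >>
> >> /* true if the mbuf comes from any pool using the dpaa2 handler,
> >>  * whichever pool instance it belongs to */
> >> static inline int
> >> is_hw_pool_mbuf(const struct rte_mbuf *m)
> >> {
> >> 	struct rte_mempool_ops *ops =
> >> 		rte_mempool_get_ops(m->pool->ops_index);
> >>
> >> 	return strcmp(ops->name, "dpaa2") == 0;
> >> }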
> >>
> >>>
> >>>
> >>> I think a new mempool handler should pass the mempool tests, or at least
> >>> we should add a new API that would describe the capabilities or something
> >>> like that (for instance: support mbuf pool only, support multiprocess).
> >>>
> >> Let me start working on this ASAP. We will experiment and send some RFCs.
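> >> As a strawman for the RFC, the capability description could be little more
> >> than a flags query added to the ops structure; everything below is
> >> hypothetical, nothing like it exists in the mempool API today:
> >>
> >> /* hypothetical capability bits a handler could advertise */
> >> #define MEMPOOL_OPS_CAP_MBUF_ONLY    (1 << 0) /* handles packet mbufs only */
> >> #define MEMPOOL_OPS_CAP_MULTIPROCESS (1 << 1) /* usable from secondary processes */
> >>
> >> /* hypothetical op that could be added to struct rte_mempool_ops */
> >> typedef unsigned int (*rte_mempool_get_capabilities_t)(
> >> 		const struct rte_mempool *mp);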
> >>
> >>>
> >>> To conclude, I'm quite reserved.
> >>> Well, all the code is in driver/, meaning it does not pollute the rest.
> >>
> >> Thanks, and we understand your concerns.
> >
> > The discussion on the patch is still ongoing and there are valid concerns.
> > The patch is already in the next-net tree, but it may not be merged while
> > the discussion is open.
> >
> > Btw, since the dpaa2 net driver depends on this one, we are talking about
> > all the dpaa2 code (bus, mempool, net).
> >
> > I think we have following options:
> >
> > 1) Drop the dpaa2 patches from next-net.
> >    Work on reaching an agreement.
> >    If all parties agree, target RC2 of this release.
> >    If not, it will slip into the next release.
> >
> > 2) Since these are drivers, the scope is limited:
> >    take the PMD as it is for this release, and work on fixes in the next one.
> >    This has the risk that proper fixes may not happen.
> 
> The dpaa2 hw mempool driver is a very specific driver and not a
> general purpose mempool. It will only be used by the dpaa2 PMD and can
> only work on NXP hardware.
> 
> We are already working on the suggestions to make sure that
> mempool_autotest works and to document the mempool limitations in the
> NXP PMD. The patches will be out shortly.
> 
> The changes in the common/lib area require a lot of discussion. Some of
> the discussions started around the mempool changes for hw support never
> reached a conclusion. They can easily be handled sequentially.
>
> This driver is not making any changes in the common/lib area. All
> changes are confined to the NXP-specific driver only.
> 
> If we keep going like this, I am afraid that the NXP PMD will get stuck
> indefinitely.
> 
> >
> >
> > I am for doing it right the first time, so I would pick option 1, but this
> > PMD has also been waiting for some time. So this is a tricky decision.
> 
> Currently, all our work and planning is pending on the PMD being
> integrated into DPDK. Once the basic support is available, we will also
> be able to contribute to other areas like eventdev, tm and many other
> items. Even our crypto device patches are pending, subject to the PMD
> being integrated into master.
> 
> >
> > Thomas, Hemant, et al,
> >
> > Can you please comment?

Yes, it is a tricky decision.
It would not be reasonable to postpone this patch to 17.08.
However, I agree with your position of "doing it right the first time".
Anyway, the right merge order is to first make sure the move of mempool
handlers to mempool drivers works fine.
We must also merge the mbuf rework (which has not been acknowledged much).
Then the DPAA2 mempool should be adapted.
Can we wait for RC2 for a better DPAA2 integration?


Thread overview: 54+ messages
2017-03-17 12:47 [dpdk-dev] [PATCH v1] NXP DPAA2 External Mempool Driver Hemant Agrawal
2017-03-17 12:47 ` [dpdk-dev] [PATCH v1] mempool/dpaa2: add DPAA2 hardware offloaded mempool Hemant Agrawal
2017-03-22  6:09   ` Jianbo Liu
2017-03-23 16:57     ` Hemant Agrawal
2017-03-24 14:57   ` Ferruh Yigit
2017-03-24 15:59     ` Ferruh Yigit
2017-03-24 16:31       ` Olivier Matz
2017-03-24 16:38         ` Ferruh Yigit
2017-03-24 16:42           ` Olivier Matz
2017-03-27 16:30             ` Olivier Matz
2017-03-28  9:45               ` Hemant Agrawal
2017-03-30 11:29                 ` Ferruh Yigit
2017-03-30 12:50                   ` Hemant Agrawal
2017-03-30 13:52                     ` Thomas Monjalon [this message]
2017-04-03 14:50                       ` Ferruh Yigit
2017-04-09  7:59   ` [dpdk-dev] [PATCH v2] NXP DPAA2 External Mempool Driver Hemant Agrawal
2017-04-09  7:59     ` [dpdk-dev] [PATCH v2] mempool/dpaa2: add DPAA2 hardware offloaded mempool Hemant Agrawal
2017-04-10 19:58       ` Olivier MATZ
2017-04-11  5:58         ` Hemant Agrawal
2017-04-11  7:50           ` Thomas Monjalon
2017-04-11  8:39             ` Ferruh Yigit
2017-04-11 12:50               ` Thomas Monjalon
2017-04-11 12:56                 ` Olivier MATZ
2017-04-11 13:54                   ` Hemant Agrawal
2017-04-11 13:42     ` [dpdk-dev] [PATCH v3] NXP DPAA2 External Mempool Driver Hemant Agrawal
2017-04-11 13:42       ` [dpdk-dev] [PATCH v3] mempool/dpaa2: add DPAA2 hardware offloaded mempool Hemant Agrawal
2017-04-12 11:31       ` [dpdk-dev] [PATCH v3] NXP DPAA2 External Mempool Driver Ferruh Yigit
2017-04-12 13:50       ` Ferruh Yigit
2017-03-17 13:42 ` [dpdk-dev] [PATCH v1] " Thomas Monjalon
2017-03-17 17:12   ` Hemant Agrawal
2017-03-17 17:22     ` Olivier Matz
2017-03-20 10:08       ` Hemant Agrawal
2017-04-09  7:50 ` [dpdk-dev] [PATCH v3 00/21] NXP DPAA2 FSLMC Bus driver Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 01/21] mk/dpaa2: add the crc support to the machine type Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 02/21] bus/fslmc: introducing fsl-mc bus driver Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 03/21] bus/fslmc: add QBMAN driver to bus Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 04/21] bus/fslmc: introduce MC object functions Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 05/21] bus/fslmc: add mc dpio object support Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 06/21] bus/fslmc: add mc dpbp " Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 07/21] eal/vfio: adding vfio utility functions in map file Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 08/21] bus/fslmc: add vfio support Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 09/21] bus/fslmc: scan for net and sec device Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 10/21] bus/fslmc: add debug log support Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 11/21] bus/fslmc: dpio portal driver Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 12/21] bus/fslmc: introduce support for hardware mempool object Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 13/21] bus/fslmc: affine dpio to crypto threads Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 14/21] bus/fslmc: define queues for DPAA2 devices Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 15/21] bus/fslmc: define hardware annotation area size Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 16/21] bus/fslmc: introduce true and false macros Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 17/21] bus/fslmc: define VLAN header length Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 18/21] bus/fslmc: add packet FLE definitions Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 19/21] bus/fslmc: add physical-virtual address translation helpers Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 20/21] bus/fslmc: add support for DMA mapping for ARM SMMU Hemant Agrawal
2017-04-09  7:50   ` [dpdk-dev] [PATCH v3 21/21] bus/fslmc: frame queue based dq storage alloc Hemant Agrawal
