DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Honnappa Nagarahalli" <Honnappa.Nagarahalli@arm.com>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: <dev@dpdk.org>, "nd" <nd@arm.com>, <thomas@monjalon.net>,
	"Feifei Wang" <Feifei.Wang2@arm.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>,
	"Andrew Rybchenko" <andrew.rybchenko@oktetlabs.ru>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Xing,  Beilei" <beilei.xing@intel.com>
Subject: RE: [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side
Date: Fri, 28 Jan 2022 12:29:11 +0100	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D86E57@smartserver.smartshare.dk> (raw)
In-Reply-To: <DBAPR08MB581402E6560B4DD7FEB82D9798219@DBAPR08MB5814.eurprd08.prod.outlook.com>

> From: Morten Brørup
> Sent: Thursday, 27 January 2022 18.14
> 
> > From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> > Sent: Thursday, 27 January 2022 05.07
> >
> > Thanks Morten, appreciate your comments. Few responses inline.
> >
> > > -----Original Message-----
> > > From: Morten Brørup <mb@smartsharesystems.com>
> > > Sent: Sunday, December 26, 2021 4:25 AM
> > >
> > > > From: Feifei Wang [mailto:feifei.wang2@arm.com]
> > > > Sent: Friday, 24 December 2021 17.46
> > > >
> > <snip>
> >
> > > >
> > > > However, this solution poses several constraints:
> > > >
> > > > 1) The receive queue needs to know which transmit queue it
> > > > should take the buffers from. The application logic decides
> > > > which transmit port to use to send out the packets. In many use
> > > > cases the NIC might have a single port ([1], [2], [3]), in which
> > > > case a given transmit queue is always mapped to a single receive
> > > > queue (1:1 Rx queue: Tx queue). This is easy to configure.
> > > >
> > > > If the NIC has 2 ports (there are several references), then we
> > > > will have a 1:2 (Rx queue: Tx queue) mapping, which is still
> > > > easy to configure. However, if this is generalized to 'N' ports,
> > > > the configuration can become long. Moreover, the PMD would have
> > > > to scan a list of transmit queues to pull the buffers from.
> > >
> > > I disagree with the description of this constraint.
> > >
> > > As I understand it, it doesn't matter how many ports or queues
> > > are in a NIC or system.
> > >
> > > The constraint is more narrow:
> > >
> > > This patch requires that all packets ingressing on some port/queue
> > > must egress on the specific port/queue that it has been configured
> > > to re-arm its buffers from. I.e. an application cannot route
> > > packets between multiple ports with this patch.
> > Agree, this patch as is has this constraint. It is not a constraint
> > that would apply for NICs with a single port. The above text is
> > describing some of the issues associated with generalizing the
> > solution for N number of ports. If N is small, the configuration is
> > small and scanning should not be bad.

But I think N is the number of queues, not the number of ports.

> >
> 
> Perhaps we can live with the 1:1 limitation, if that is the primary use
> case.

Or some similar limitation for NICs with dual ports for redundancy.
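
To illustrate what the configuration amounts to in the 1:1 case (a
hypothetical sketch; the function name is made up for illustration,
not the RFC's actual API):

    /* Hypothetical: pair each Rx queue with the Tx queue it re-arms
     * its buffers from - 1:1 on a single-port NIC. */
    for (uint16_t q = 0; q < nb_queue_pairs; q++)
        direct_rearm_map(port_id, /* rx_queue */ q, /* tx_queue */ q);

With N ports, each Rx queue may instead need a list of candidate Tx
queues, which is where the configuration and the PMD-side scanning
start to hurt.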

> 
> Alternatively, the feature could fall back to using the mempool if
> unable to get/put buffers directly from/to a participating NIC. In this
> case, I envision a library serving as a shim layer between the NICs and
> the mempool. In other words: Take a step back from the implementation,
> and discuss the high level requirements and architecture of the
> proposed feature.

Please ignore my comment above. I had missed the fact that the direct re-arm feature only works inside a single NIC, and not across multiple NICs. And it is not going to work across multiple NICs, unless they are exactly the same type, because their internal descriptor structures may differ. Also, taking a deeper look at the i40e part of the patch, I notice that it already falls back to using the mempool.
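
For reference, the fallback pattern I have in mind is roughly the
following (a simplified sketch of the idea, not the actual i40e code;
pull_free_mbufs_from_txq() is a made-up placeholder for the PMD logic
that harvests mbufs from already-completed Tx descriptors):

    static int
    rearm_with_fallback(void *txq, struct rte_mempool *mp,
            struct rte_mbuf **mbufs, uint16_t n)
    {
        /* Preferred path: recycle mbufs straight from the paired
         * Tx queue. */
        uint16_t got = pull_free_mbufs_from_txq(txq, mbufs, n);

        /* Fallback: take the remainder from the mempool as usual. */
        if (got < n && rte_mempool_get_bulk(mp,
                (void **)&mbufs[got], n - got) < 0)
            return -1; /* mempool exhausted too; retry next burst */
        return 0;
    }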

> 
> > >
> > > >
> >
> > <snip>
> >
> > > >
> > >
> > > You are missing the fourth constraint:
> > >
> > > 4) The application must transmit all received packets
> > > immediately, i.e. QoS queueing and similar is prohibited.
> > I do not understand this, can you please elaborate? Even if there
> > is QoS queuing, there would be a steady stream of packets being
> > transmitted. These transmitted packets will fill the buffers on the
> > RX side.
> 
> E.g. an appliance may receive packets on a 10 Gbps backbone port, and
> queue some of the packets up for a customer with a 20 Mbit/s
> subscription. When there is a large burst of packets towards that
> subscriber, they will queue up in the QoS queue dedicated to that
> subscriber. During that traffic burst, there is much more RX than TX.
> And after the traffic burst, there will be more TX than RX.
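
(To put numbers on it: 10 Gbit/s ingress against a 20 Mbit/s egress
queue is a 500:1 ratio, so during such a burst the RX side consumes
mbufs up to 500 times faster than that TX queue releases them.)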
> 
> >
> > >
> > <snip>
> >
> > > >
> > >
> > > The patch provides a significant performance improvement, but I
> > > am wondering if any real world applications exist that would use
> > > this. Only a "router on a stick" (i.e. a single-port router)
> > > comes to my mind, and that is probably sufficient to call it
> > > useful in the real world. Do you have any other examples to
> > > support the usefulness of this patch?
> > SmartNIC is a clear and dominant use case; typically they have a
> > single port for data plane traffic (dual ports are mostly for
> > redundancy). This patch avoids a good amount of store operations.
> > The smaller CPUs found in SmartNICs have smaller store buffers,
> > which can become bottlenecks. Avoiding the lcore cache saves
> > valuable HW cache space.
> 
> OK. This is an important use case!

Some NICs have many queues, so the number of RX/TX queue mappings is big. Aren't SmartNICs going to use many RX/TX queues?

> 
> >
> > >
> > > Anyway, the patch doesn't do any harm if unused, and the only
> > > performance cost is the "if (rxq->direct_rxrearm_enable)" branch
> > > in the Ethdev driver. So I don't oppose it.

If a PMD maintainer agrees to maintaining such a feature, I don't oppose either.

The PMDs are full of cruft already, so why bother complaining about more, if the performance impact is negligible. :-)
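
For the record, what we are talking about is on the scale of the
following (simplified; the flag name is quoted from the patch, the
two helpers are made up):

    if (rxq->direct_rxrearm_enable)
        rxq_direct_rearm(rxq);       /* recycle from mapped Tx queue */
    else
        rxq_rearm_from_mempool(rxq); /* the usual allocation path */

One well-predicted branch per Rx burst, so negligible indeed.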


Thread overview: 67+ messages
2021-12-24 16:46 Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 1/4] net/i40e: enable direct re-arm mode Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 2/4] ethdev: add API for " Feifei Wang
2021-12-24 19:38   ` Stephen Hemminger
2021-12-26  9:49     ` 回复: " Feifei Wang
2021-12-26 10:31       ` Morten Brørup
2021-12-24 16:46 ` [RFC PATCH v1 3/4] net/i40e: add direct re-arm mode internal API Feifei Wang
2021-12-24 16:46 ` [RFC PATCH v1 4/4] examples/l3fwd: give an example for direct rearm mode Feifei Wang
2021-12-26 10:25 ` [RFC PATCH v1 0/4] Direct re-arming of buffers on receive side Morten Brørup
2021-12-28  6:55   ` 回复: " Feifei Wang
2022-01-18 15:51     ` Ferruh Yigit
2022-01-18 16:53       ` Thomas Monjalon
2022-01-18 17:27         ` Morten Brørup
2022-01-27  5:24           ` Honnappa Nagarahalli
2022-01-27 16:45             ` Ananyev, Konstantin
2022-02-02 19:46               ` Honnappa Nagarahalli
2022-01-27  5:16         ` Honnappa Nagarahalli
2023-02-28  6:43       ` 回复: " Feifei Wang
2023-02-28  6:52         ` Feifei Wang
2022-01-27  4:06   ` Honnappa Nagarahalli
2022-01-27 17:13     ` Morten Brørup
2022-01-28 11:29     ` Morten Brørup [this message]
2023-03-23 10:43 ` [PATCH v4 0/3] Recycle buffers from Tx to Rx Feifei Wang
2023-03-23 10:43   ` [PATCH v4 1/3] ethdev: add API for buffer recycle mode Feifei Wang
2023-03-23 11:41     ` Morten Brørup
2023-03-29  2:16       ` Feifei Wang
2023-03-23 10:43   ` [PATCH v4 2/3] net/i40e: implement recycle buffer mode Feifei Wang
2023-03-23 10:43   ` [PATCH v4 3/3] net/ixgbe: " Feifei Wang
2023-03-30  6:29 ` [PATCH v5 0/3] Recycle buffers from Tx to Rx Feifei Wang
2023-03-30  6:29   ` [PATCH v5 1/3] ethdev: add API for buffer recycle mode Feifei Wang
2023-03-30  7:19     ` Morten Brørup
2023-03-30  9:31       ` Feifei Wang
2023-03-30 15:15         ` Morten Brørup
2023-03-30 15:58         ` Morten Brørup
2023-04-26  6:59           ` Feifei Wang
2023-04-19 14:46     ` Ferruh Yigit
2023-04-26  7:29       ` Feifei Wang
2023-03-30  6:29   ` [PATCH v5 2/3] net/i40e: implement recycle buffer mode Feifei Wang
2023-03-30  6:29   ` [PATCH v5 3/3] net/ixgbe: " Feifei Wang
2023-04-19 14:46     ` Ferruh Yigit
2023-04-26  7:36       ` Feifei Wang
2023-03-30 15:04   ` [PATCH v5 0/3] Recycle buffers from Tx to Rx Stephen Hemminger
2023-04-03  2:48     ` Feifei Wang
2023-04-19 14:56   ` Ferruh Yigit
2023-04-25  7:57     ` Feifei Wang
2023-05-25  9:45 ` [PATCH v6 0/4] Recycle mbufs from Tx queue to Rx queue Feifei Wang
2023-05-25  9:45   ` [PATCH v6 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-05-25 15:08     ` Morten Brørup
2023-05-31  6:10       ` Feifei Wang
2023-06-05 12:53     ` Константин Ананьев
2023-06-06  2:55       ` Feifei Wang
2023-06-06  7:10         ` Konstantin Ananyev
2023-06-06  7:31           ` Feifei Wang
2023-06-06  8:34             ` Konstantin Ananyev
2023-06-07  0:00               ` Ferruh Yigit
2023-06-12  3:25                 ` Feifei Wang
2023-05-25  9:45   ` [PATCH v6 2/4] net/i40e: implement " Feifei Wang
2023-06-05 13:02     ` Константин Ананьев
2023-06-06  3:16       ` Feifei Wang
2023-06-06  7:18         ` Konstantin Ananyev
2023-06-06  7:58           ` Feifei Wang
2023-06-06  8:27             ` Konstantin Ananyev
2023-06-12  3:05               ` Feifei Wang
2023-05-25  9:45   ` [PATCH v6 3/4] net/ixgbe: " Feifei Wang
2023-05-25  9:45   ` [PATCH v6 4/4] app/testpmd: add recycle mbufs engine Feifei Wang
2023-06-05 13:08     ` Константин Ананьев
2023-06-06  6:32       ` Feifei Wang
