From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: "jerinjacobk@gmail.com" <jerinjacobk@gmail.com>,
"konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>
Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
"dev@dpdk.org" <dev@dpdk.org>,
"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v1] ethdev: introduce shared Rx queue
Date: Wed, 29 Sep 2021 14:54:30 +0000
Message-ID: <f4fe94929e493f1a5ba7f066fd94b6a90e7a198a.camel@nvidia.com>
In-Reply-To: <DM6PR11MB4491FB8AC51B86EE7BDB6CC99AA99@DM6PR11MB4491.namprd11.prod.outlook.com>
On Wed, 2021-09-29 at 12:35 +0000, Ananyev, Konstantin wrote:
>
> > -----Original Message-----
> > From: Xueming(Steven) Li <xuemingl@nvidia.com>
> > Sent: Wednesday, September 29, 2021 1:09 PM
> > To: jerinjacobk@gmail.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; Yigit, Ferruh
> > <ferruh.yigit@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v1] ethdev: introduce shared Rx queue
> >
> > On Wed, 2021-09-29 at 09:52 +0000, Ananyev, Konstantin wrote:
> > >
> > > > -----Original Message-----
> > > > From: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > Sent: Wednesday, September 29, 2021 10:13 AM
> > > > To: jerinjacobk@gmail.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; Yigit, Ferruh
> > > > <ferruh.yigit@intel.com>
> > > > Subject: Re: [dpdk-dev] [PATCH v1] ethdev: introduce shared Rx queue
> > > >
> > > > On Wed, 2021-09-29 at 00:26 +0000, Ananyev, Konstantin wrote:
> > > > > > > > > > > > > > > > > > > In the current DPDK framework, each Rx queue is
> > > > > > > > > > > > > > > > > > > pre-loaded with mbufs for incoming packets. When
> > > > > > > > > > > > > > > > > > > the number of representors scales out in a switch
> > > > > > > > > > > > > > > > > > > domain, the memory consumption becomes significant.
> > > > > > > > > > > > > > > > > > > More importantly, polling all ports leads to high
> > > > > > > > > > > > > > > > > > > cache miss rates, high latency and low throughput.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > This patch introduces the shared Rx queue. Ports
> > > > > > > > > > > > > > > > > > > with the same configuration in a switch domain
> > > > > > > > > > > > > > > > > > > could share an Rx queue set by specifying a sharing
> > > > > > > > > > > > > > > > > > > group. Polling any queue using the same shared Rx
> > > > > > > > > > > > > > > > > > > queue receives packets from all member ports. The
> > > > > > > > > > > > > > > > > > > source port is identified by mbuf->port.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Port queue numbers in a shared group should be
> > > > > > > > > > > > > > > > > > > identical. The queue index is 1:1 mapped in the
> > > > > > > > > > > > > > > > > > > shared group.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > A shared Rx queue is supposed to be polled on the
> > > > > > > > > > > > > > > > > > > same thread.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Multiple groups are supported by group ID.
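(A minimal configuration sketch of the above, for illustration only: the
offload name matches the one discussed later in this thread, but the
share_group field, port list and qid are assumptions, not the final API.)

	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;
	uint16_t ports[] = { pf_port, repr0_port, repr1_port }; /* one switch domain */
	unsigned int i;

	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
	rxconf.share_group = 1; /* ports in group 1 share one Rx queue set */
	for (i = 0; i < RTE_DIM(ports); i++)
		/* queue index qid is 1:1 mapped across all member ports */
		rte_eth_rx_queue_setup(ports[i], qid, nb_rxd,
				       rte_eth_dev_socket_id(ports[i]),
				       &rxconf, mbuf_pool);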
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Is this offload specific to the representor?
> > > > > > > > > > > > > > > > > > If so, can this name be changed to be
> > > > > > > > > > > > > > > > > > representor-specific?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Yes, PF and representor ports in a switch
> > > > > > > > > > > > > > > > > domain could take advantage of it.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > If it is for a generic case, how will the
> > > > > > > > > > > > > > > > > > flow ordering be maintained?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Not quite sure that I understood your question.
> > > > > > > > > > > > > > > > > The control path is almost the same as before:
> > > > > > > > > > > > > > > > > PF and representor ports are still needed, and
> > > > > > > > > > > > > > > > > rte_flow is not impacted. Queues are still
> > > > > > > > > > > > > > > > > needed for each member port; descriptors
> > > > > > > > > > > > > > > > > (mbufs) will be supplied from the shared Rx
> > > > > > > > > > > > > > > > > queue in my PMD implementation.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > My question was: if we create a generic
> > > > > > > > > > > > > > > > RTE_ETH_RX_OFFLOAD_SHARED_RXQ offload and
> > > > > > > > > > > > > > > > multiple ethdev receive queues land in the same
> > > > > > > > > > > > > > > > receive queue, how is the flow order maintained
> > > > > > > > > > > > > > > > for the respective receive queues?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I guess the question is about the testpmd
> > > > > > > > > > > > > > > forward stream? The forwarding logic has to be
> > > > > > > > > > > > > > > changed slightly in case of a shared rxq:
> > > > > > > > > > > > > > > basically, for each packet in the rx_burst
> > > > > > > > > > > > > > > result, look up the source stream according to
> > > > > > > > > > > > > > > mbuf->port and forward to the target fs. Packets
> > > > > > > > > > > > > > > from the same source port could be grouped into
> > > > > > > > > > > > > > > a small burst to process; this accelerates
> > > > > > > > > > > > > > > performance if traffic comes from a limited
> > > > > > > > > > > > > > > number of ports. I'll introduce a common API to
> > > > > > > > > > > > > > > do shared rxq forwarding, called with a
> > > > > > > > > > > > > > > packet-handling callback, so it suits all
> > > > > > > > > > > > > > > forwarding engines, as sketched below. Will send
> > > > > > > > > > > > > > > patches soon.
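(A sketch of the per-port grouping described above; pkts/nb_rx are the
rx_burst result, and lookup_stream() and forward_cb() are placeholders for
the common API mentioned here, not the actual testpmd code.)

	/* Split the rx_burst result into sub-bursts of contiguous packets
	 * with the same mbuf->port and forward each via the callback. */
	uint16_t i, start = 0;

	for (i = 1; i <= nb_rx; i++) {
		if (i == nb_rx || pkts[i]->port != pkts[start]->port) {
			struct fwd_stream *src_fs =
				lookup_stream(pkts[start]->port);
			forward_cb(src_fs, &pkts[start], i - start);
			start = i;
		}
	}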
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > All ports will put the packets into the same
> > > > > > > > > > > > > > queue (the shared queue), right? Does this mean
> > > > > > > > > > > > > > only a single core will poll it? What will happen
> > > > > > > > > > > > > > if multiple cores poll it, won't that cause
> > > > > > > > > > > > > > problems?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > And if this requires specific changes in the
> > > > > > > > > > > > > > application, I am not sure about the solution;
> > > > > > > > > > > > > > can't this work in a way that is transparent to
> > > > > > > > > > > > > > the application?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Discussed with Jerin: a new API introduced in v3
> > > > > > > > > > > > > 2/8 aggregates the ports in the same group into one
> > > > > > > > > > > > > new port. Users could schedule polling on the
> > > > > > > > > > > > > aggregated port instead of on all member ports, as
> > > > > > > > > > > > > sketched below.
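(A sketch of that polling model, assuming the aggregation API hands back an
ordinary ethdev port id; aggregate_group_ports() and handle_pkt() are
placeholders, not the proposed function names.)

	uint16_t agg_port = aggregate_group_ports(group_id);
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t i, nb_rx;

	/* One poll on the aggregated port covers every member port, */
	nb_rx = rte_eth_rx_burst(agg_port, 0, pkts, BURST_SZ);
	/* and mbuf->port still identifies the real source port. */
	for (i = 0; i < nb_rx; i++)
		handle_pkt(pkts[i]->port, pkts[i]);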
> > > > > > > > > > > >
> > > > > > > > > > > > The v3 still has testpmd changes in the fastpath,
> > > > > > > > > > > > right? IMO, for this feature we should not change the
> > > > > > > > > > > > fastpath of the testpmd application. Instead, testpmd
> > > > > > > > > > > > can use aggregated ports, probably as a separate
> > > > > > > > > > > > fwd_engine, to show how to use this feature.
> > > > > > > > > > >
> > > > > > > > > > > Good point to discuss :) There are two strategies for
> > > > > > > > > > > polling a shared rxq:
> > > > > > > > > > > 1. Polling each member port.
> > > > > > > > > > >    All forwarding engines can be reused to work as
> > > > > > > > > > >    before. My testpmd patches are efforts in this
> > > > > > > > > > >    direction. Does your PMD support this?
> > > > > > > > > >
> > > > > > > > > > Unfortunately not. More than that, every application
> > > > > > > > > > needs to change to support this model.
> > > > > > > > >
> > > > > > > > > Both strategies need the user application to resolve the
> > > > > > > > > port ID from the mbuf and process accordingly.
> > > > > > > > > This one doesn't require an aggregated port, and the
> > > > > > > > > polling schedule is unchanged.
> > > > > > > >
> > > > > > > > I was thinking the mbuf would be updated by the
> > > > > > > > driver/aggregator port by the time it reaches the
> > > > > > > > application.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > 2. Polling the aggregated port.
> > > > > > > > > > >    Besides the forwarding engine, more work is needed
> > > > > > > > > > >    to demo it. This is an optional API, not supported
> > > > > > > > > > >    by my PMD yet.
> > > > > > > > > >
> > > > > > > > > > We are thinking of implementing this in the PMD when it
> > > > > > > > > > comes to it, i.e. without application changes in the
> > > > > > > > > > fastpath logic.
> > > > > > > > >
> > > > > > > > > The fastpath has to resolve the port ID anyway and forward
> > > > > > > > > according to its logic. Forwarding engines need to adapt to
> > > > > > > > > support shared rxq. Fortunately, in testpmd this can be
> > > > > > > > > done with an abstract API.
> > > > > > > > >
> > > > > > > > > Let's defer part 2 until some PMD really supports it and it
> > > > > > > > > has been tested. What do you think?
> > > > > > > >
> > > > > > > > We are not planning to use this feature, so either way is OK
> > > > > > > > with me. I leave it to the ethdev maintainers to decide
> > > > > > > > between 1 and 2.
> > > > > > > >
> > > > > > > > I do have a strong opinion against changing the basic testpmd
> > > > > > > > forwarding engines for this feature. I would like to keep
> > > > > > > > them simple and fastpath-optimized, and would like to add a
> > > > > > > > separate forwarding engine as a means to verify this feature.
> > > > > > >
> > > > > > > +1 to that.
> > > > > > > I don't think it is a 'common' feature,
> > > > > > > so a separate fwd mode seems like the best choice to me.
> > > > > >
> > > > > > -1 :)
> > > > > > There was an internal requirement from the test team: they need
> > > > > > to verify that all features, like packet content, RSS, VLAN,
> > > > > > checksum, rte_flow and so on, work on top of a shared Rx queue.
> > > > >
> > > > > Then I suppose you'll need to write a really comprehensive
> > > > > fwd-engine to satisfy your test team :)
> > > > > Speaking seriously, I still don't understand why you need all
> > > > > available fwd-engines to verify this feature.
> > > > > From what I understand, the main purpose of your testpmd changes
> > > > > is to allow forwarding a packet through a different fwd_stream
> > > > > (TX through a different HW queue).
> > > > > In theory, if implemented in a generic and extendable way, that
> > > > > might be a useful add-on to testpmd fwd functionality.
> > > > > But the current implementation looks very case-specific.
> > > > > And as I don't think it is a common case, I don't see much point
> > > > > in polluting the basic fwd cases with it.
> > > > >
> > > > > BTW, as a side note, the code below looks bogus to me:
> > > > >
> > > > > +void
> > > > > +forward_shared_rxq(struct fwd_stream *fs, uint16_t nb_rx,
> > > > > +		   struct rte_mbuf **pkts_burst, packet_fwd_cb fwd)
> > > > > +{
> > > > > +	uint16_t i, nb_fs_rx = 1, port;
> > > > > +
> > > > > +	/* Locate real source fs according to mbuf->port. */
> > > > > +	for (i = 0; i < nb_rx; ++i) {
> > > > > +		rte_prefetch0(pkts_burst[i + 1]);
> > > > >
> > > > > You access pkts_burst[] beyond the array boundaries, and you also
> > > > > ask the CPU to prefetch an unknown and possibly invalid address.
> > > >
> > > > Sorry, I forgot this topic. It's too late to prefetch the current
> > > > packet, so prefetching the next one is better. Prefetching an
> > > > invalid address at the end of a loop doesn't hurt; it's common in
> > > > DPDK.
> > >
> > > First of all, it is usually never 'OK' to access an array beyond its
> > > bounds.
> > > Second, prefetching an invalid address *does* hurt performance badly
> > > on many CPUs (TLB misses, consumed memory bandwidth, etc.).
> > > As a reference: https://lwn.net/Articles/444346/
> > > If some existing DPDK code really does that, then I believe it is an
> > > issue and has to be addressed.
> > > More importantly, it is a really bad attitude to submit bogus code to
> > > the DPDK community and pretend that it is 'OK'.
> >
> > Thanks for the link!
> > From the instruction spec: "The PREFETCHh instruction is merely a hint
> > and does not affect program behavior."
> > There are three choices here:
> > 1. No prefetch. A D$ miss will happen on each packet; the time cost
> >    depends on where the data sits (close or far) and on the burst size.
> > 2. Prefetch with a loop-end check to avoid a random address. The pro is
> >    no TLB miss per burst; the con is an "if" instruction per packet.
> >    The cost depends on the burst size.
> > 3. Brute-force prefetch. The cost is a TLB miss, but no additional
> >    instructions per packet. Not sure how random the last address could
> >    be in testpmd and how many TLB misses could happen.
>
> There are plenty of standard techniques to avoid that issue while
> keeping the prefetch in place.
> Probably the easiest one:
>
> 	for (i = 0; i < nb_rx - 1; i++) {
> 		prefetch(pkt[i + 1]);
> 		/* do your stuff with pkt[i] here */
> 	}
>
> 	/* do your stuff with pkt[nb_rx - 1] */
Thanks, will update in next version.
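(For completeness, a self-contained form of that bounded-prefetch pattern
using DPDK's rte_prefetch0(); process_pkt() is a placeholder.)

	#include <rte_mbuf.h>
	#include <rte_prefetch.h>

	static void
	fwd_with_prefetch(struct rte_mbuf **pkts, uint16_t nb_rx)
	{
		uint16_t i;

		if (nb_rx == 0)
			return;
		for (i = 0; i < nb_rx - 1; i++) {
			/* Hint the next mbuf into cache; index stays in bounds. */
			rte_prefetch0(pkts[i + 1]);
			process_pkt(pkts[i]);
		}
		process_pkt(pkts[nb_rx - 1]);
	}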
>
> > Based on my experience of performance optimization, IIRC, option 3 has
> > the best performance. But for this case, the result depends on how many
> > sub-bursts are inside and how each sub-burst gets processed; maybe the
> > callback will flush the prefetched data completely, maybe not. So it's
> > hard to draw a conclusion; my point is that such code in a PMD driver
> > should have a reason.
> >
> > On the other hand, the latency and throughput savings of this feature
> > on multiple ports are huge, so I prefer to downplay this prefetch
> > discussion, if you agree.
> >
>
> Honestly, I don't know how else to explain to you that there is a bug in
> that piece of code.
> From my perspective it is a trivial bug, with a trivial fix.
> But you simply keep ignoring the arguments.
> Until it gets fixed and the other comments are addressed, my vote is
> NACK for this series. I don't think we need bogus code in testpmd.
>
>