From mboxrd@z Thu Jan  1 00:00:00 1970
MIME-Version: 1.0
References: <20210727034204.20649-1-xuemingl@nvidia.com>
 <20210809114716.22035-1-xuemingl@nvidia.com>
 <CALBAE1N5Zfv-oOXU6U8yZ0x8-Qw92jD6tb9iCNHYA=KZ41RJDw@mail.gmail.com>
 <DM4PR12MB53734EA00A2AB3C44A98D49BA1F69@DM4PR12MB5373.namprd12.prod.outlook.com>
 <CALBAE1Nr=JWZaTZKYLD9AVEbGkk-pq1ZXtRat9B-EMN7h3s62Q@mail.gmail.com>
 <DM4PR12MB5373977D5E3725AAB7BF0740A1F89@DM4PR12MB5373.namprd12.prod.outlook.com>
 <c436b5ad-eb02-5c34-b573-0d63dbb20f1d@intel.com>
 <6d4308c307a72d62b0be4d61f70f8d0c64a4e7ba.camel@nvidia.com>
 <CALBAE1PzW2KiFMybJEza6GQJJ7U9AN0a=tHP2G1Wr-Q6yaLCdw@mail.gmail.com>
 <15b1590a8899d85e85bb4f7c104b9399654ee160.camel@nvidia.com>
 <CALBAE1Ng8XhvBnOSPYv4-azquZJpj-tCvQYeHUnhkVp_V+S3xg@mail.gmail.com>
 <b52db379c722348558be7f05015dbfe042adb606.camel@nvidia.com>
 <CALBAE1PDNOGgfvCtk88EK0rUksXGh8-DUjkV8Btw8Jek+S-ufA@mail.gmail.com>
 <DM6PR11MB4491F28CE51AC38FD81EC59F9AA89@DM6PR11MB4491.namprd11.prod.outlook.com>
 <543d0e4ac61633fa179506906f88092a6d928fe6.camel@nvidia.com>
In-Reply-To: <543d0e4ac61633fa179506906f88092a6d928fe6.camel@nvidia.com>
From: Jerin Jacob <jerinjacobk@gmail.com>
Date: Tue, 28 Sep 2021 20:29:30 +0530
Message-ID: <CALBAE1NUO9TpPamKRt9dgbuWaHz2GZs3+RKPeySaE5cNHhi=Ng@mail.gmail.com>
To: "Xueming(Steven) Li" <xuemingl@nvidia.com>
Cc: "konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>, 
 NBU-Contact-Thomas Monjalon <thomas@monjalon.net>, 
 "andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
 "dev@dpdk.org" <dev@dpdk.org>, 
 "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v1] ethdev: introduce shared Rx queue

On Tue, Sep 28, 2021 at 8:10 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
>
> On Tue, 2021-09-28 at 13:59 +0000, Ananyev, Konstantin wrote:
> > >
> > > On Tue, Sep 28, 2021 at 6:55 PM Xueming(Steven) Li
> > > <xuemingl@nvidia.com> wrote:
> > > >
> > > > On Tue, 2021-09-28 at 18:28 +0530, Jerin Jacob wrote:
> > > > > On Tue, Sep 28, 2021 at 5:07 PM Xueming(Steven) Li
> > > > > <xuemingl@nvidia.com> wrote:
> > > > > >
> > > > > > On Tue, 2021-09-28 at 15:05 +0530, Jerin Jacob wrote:
> > > > > > > On Sun, Sep 26, 2021 at 11:06 AM Xueming(Steven) Li
> > > > > > > <xuemingl@nvidia.com> wrote:
> > > > > > > >
> > > > > > > > On Wed, 2021-08-11 at 13:04 +0100, Ferruh Yigit wrote:
> > > > > > > > > On 8/11/2021 9:28 AM, Xueming(Steven) Li wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > > Sent: Wednesday, August 11, 2021 4:03 PM
> > > > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas
> > > > > > > > > > > Monjalon
> > > <thomas@monjalon.net>;
> > > > > > > > > > > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v1] ethdev:
> > > > > > > > > > > introduce shared Rx queue
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Aug 9, 2021 at 7:46 PM Xueming(Steven) Li
> > > > > > > > > > > <xuemingl@nvidia.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Hi,
> > > > > > > > > > > >
> > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > > > > Sent: Monday, August 9, 2021 9:51 PM
> > > > > > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > > > > > <ferruh.yigit@intel.com>;
> > > > > > > > > > > > > NBU-Contact-Thomas Monjalon
> > > > > > > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v1] ethdev:
> > > > > > > > > > > > > introduce shared Rx queue
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Mon, Aug 9, 2021 at 5:18 PM Xueming Li
> > > > > > > > > > > > > <xuemingl@nvidia.com> wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > In the current DPDK framework, each Rx queue is pre-loaded
> > > > > > > > > > > > > > with mbufs for incoming packets. When the number of
> > > > > > > > > > > > > > representors scales out in a switch domain, the memory
> > > > > > > > > > > > > > consumption becomes significant. More importantly, polling
> > > > > > > > > > > > > > all ports leads to high cache miss, high latency and low
> > > > > > > > > > > > > > throughput.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > This patch introduces the shared Rx queue. Ports with the
> > > > > > > > > > > > > > same configuration in a switch domain can share an Rx queue
> > > > > > > > > > > > > > set by specifying a sharing group. Polling any queue that
> > > > > > > > > > > > > > uses the same shared Rx queue receives packets from all
> > > > > > > > > > > > > > member ports. The source port is identified by mbuf->port.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The number of queues of each port in a shared group should
> > > > > > > > > > > > > > be identical. Queue indexes are mapped 1:1 within a shared
> > > > > > > > > > > > > > group.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > A shared Rx queue is supposed to be polled on the same thread.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Multiple groups are supported by group ID.
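To make the usage concrete, a minimal setup sketch based only on what the
patch itself adds (the RTE_ETH_RX_OFFLOAD_SHARED_RXQ flag and the
rte_eth_rxconf::shared_group field); the queue size and the group id below
are placeholders, not values mandated by the patch:

  #include <errno.h>
  #include <rte_ethdev.h>

  /* Sketch: enable the shared Rx queue offload on one member port.
   * The same queue index and shared_group value would be used on every
   * port of the group. Error handling is mostly omitted.
   */
  static int
  setup_shared_rxq(uint16_t port_id, uint16_t queue_id,
                   struct rte_mempool *mb_pool)
  {
          struct rte_eth_dev_info dev_info;
          struct rte_eth_conf port_conf = { 0 };
          struct rte_eth_rxconf rxconf;

          rte_eth_dev_info_get(port_id, &dev_info);
          if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SHARED_RXQ))
                  return -ENOTSUP; /* device cannot share Rx queues */

          port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
          rte_eth_dev_configure(port_id, 1, 1, &port_conf);

          rxconf = dev_info.default_rxconf;
          rxconf.offloads |= RTE_ETH_RX_OFFLOAD_SHARED_RXQ;
          rxconf.shared_group = 1; /* placeholder group id */
          return rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                        rte_eth_dev_socket_id(port_id),
                                        &rxconf, mb_pool);
  }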
> > > > > > > > > > > > >
> > > > > > > > > > > > > Is this offload specific to the representor? If
> > > > > > > > > > > > > so, can the name be changed to make it specific
> > > > > > > > > > > > > to the representor case?
> > > > > > > > > > > >
> > > > > > > > > > > > Yes, both the PF and representors in a switch
> > > > > > > > > > > > domain could take advantage of it.
> > > > > > > > > > > >
> > > > > > > > > > > > > If it is for a generic case, how will the flow
> > > > > > > > > > > > > ordering be maintained?
> > > > > > > > > > > >
> > > > > > > > > > > > Not quite sure that I understood your question.
> > > > > > > > > > > > The control path is almost the same as before:
> > > > > > > > > > > > the PF and representor ports are still needed,
> > > > > > > > > > > > and rte_flow rules are not impacted. Queues are
> > > > > > > > > > > > still needed for each member port; descriptors
> > > > > > > > > > > > (mbufs) will be supplied from the shared Rx
> > > > > > > > > > > > queue in my PMD implementation.
> > > > > > > > > > >
> > > > > > > > > > > My question was: if we create a generic
> > > > > > > > > > > RTE_ETH_RX_OFFLOAD_SHARED_RXQ offload and multiple
> > > > > > > > > > > ethdev receive queues land in the same receive
> > > > > > > > > > > queue, how is the flow order maintained for the
> > > > > > > > > > > respective receive queues?
> > > > > > > > > >
> > > > > > > > > > I guess the question is about the testpmd forward
> > > > > > > > > > stream? The forwarding logic has to be changed
> > > > > > > > > > slightly in the case of a shared rxq:
> > > > > > > > > > basically, for each packet in the rx_burst result,
> > > > > > > > > > look up the source stream according to mbuf->port
> > > > > > > > > > and forward it to the target fs.
> > > > > > > > > > Packets from the same source port could be grouped
> > > > > > > > > > into a small burst to process; this accelerates
> > > > > > > > > > performance if traffic comes from a limited number
> > > > > > > > > > of ports. I'll introduce a common API to do shared
> > > > > > > > > > rxq forwarding, called with a packet handling
> > > > > > > > > > callback, so it suits all forwarding engines. Will
> > > > > > > > > > send patches soon.
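As a rough illustration of the lookup/grouping described above (a sketch
only: handle_burst(), MAX_PORTS and the burst size are placeholder names,
not the actual testpmd helpers being proposed):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32
  #define MAX_PORTS  64 /* placeholder: assumes port ids below this bound */

  /* Receive one burst from any member port of the shared Rx queue,
   * split it into per-source-port groups using mbuf->port, then hand
   * each group to a per-stream handler.
   */
  static void
  shared_rxq_forward(uint16_t poll_port, uint16_t queue_id,
                     void (*handle_burst)(uint16_t src_port,
                                          struct rte_mbuf **pkts,
                                          uint16_t nb))
  {
          struct rte_mbuf *burst[BURST_SIZE];
          struct rte_mbuf *grp[MAX_PORTS][BURST_SIZE];
          uint16_t grp_nb[MAX_PORTS] = { 0 };
          uint16_t nb, i;

          nb = rte_eth_rx_burst(poll_port, queue_id, burst, BURST_SIZE);
          for (i = 0; i < nb; i++) {
                  uint16_t src = burst[i]->port; /* real source port id */

                  grp[src][grp_nb[src]++] = burst[i];
          }
          for (i = 0; i < MAX_PORTS; i++)
                  if (grp_nb[i] != 0)
                          handle_burst(i, grp[i], grp_nb[i]);
  }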
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > All ports will put the packets into the same queue
> > > > > > > > > (shared queue), right? Does this mean only a single
> > > > > > > > > core will poll it? What will happen if there are
> > > > > > > > > multiple cores polling, won't it cause a problem?
> > > > > > > > >
> > > > > > > > > And if this requires specific changes in the
> > > > > > > > > application, I am not sure about the solution; can't
> > > > > > > > > this work in a way that is transparent to the
> > > > > > > > > application?
> > > > > > > >
> > > > > > > > As discussed with Jerin, a new API is introduced in v3
> > > > > > > > 2/8 that aggregates the ports in the same group into one
> > > > > > > > new port. Users could schedule polling on the aggregated
> > > > > > > > port instead of on all member ports.
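If the aggregated port ends up looking like an ordinary ethdev port, the
application-side polling could stay as simple as the sketch below; how the
aggregated port id is obtained belongs to the v3 series and is intentionally
not shown here (this is an assumption, not the v3 API itself):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* Sketch: poll only the aggregated port; the member port each packet
   * really arrived on is still read back from mbuf->port.
   */
  static uint16_t
  poll_aggregated_port(uint16_t agg_port, uint16_t queue_id)
  {
          struct rte_mbuf *pkts[32];
          uint16_t nb, i;

          nb = rte_eth_rx_burst(agg_port, queue_id, pkts, 32);
          for (i = 0; i < nb; i++) {
                  /* dispatch on pkts[i]->port here; freeing the mbuf is
                   * only placeholder handling
                   */
                  rte_pktmbuf_free(pkts[i]);
          }
          return nb;
  }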
> > > > > > >
> > > > > > > The v3 still has testpmd changes in the fastpath, right?
> > > > > > > IMO, for this feature we should not change the fastpath of
> > > > > > > the testpmd application. Instead, testpmd can probably use
> > > > > > > aggregated ports as a separate fwd_engine to show how to
> > > > > > > use this feature.
> > > > > >
> > > > > > Good point to discuss :) There are two strategies for
> > > > > > polling a shared Rxq:
> > > > > > 1. polling each member port
> > > > > >    All forwarding engines can be reused to work as before.
> > > > > >    My testpmd patches are efforts in this direction.
> > > > > >    Does your PMD support this?
> > > > >
> > > > > Unfortunately, not. More than that, every application needs
> > > > > to change to support this model.
> > > >
> > > > Both strategies need the user application to resolve the port
> > > > ID from the mbuf and process accordingly.
> > > > This one doesn't demand an aggregated port and needs no change
> > > > to the polling schedule.
> > >
> > > I was thinking the mbuf would be updated by the driver/aggregator
> > > port by the time it comes to the application.
> > >
> > > >
> > > > >
> > > > > > 2. polling the aggregated port
> > > > > >    Besides the forwarding engine, more work is needed to demo it.
> > > > > >    This is an optional API, not supported by my PMD yet.
> > > > >
> > > > > We are thinking of implementing this in the PMD when it comes
> > > > > to it, i.e. without application changes in the fastpath logic.
> > > >
> > > > The fastpath has to resolve the port ID anyway and forward
> > > > according to its logic. Forwarding engines need to adapt to
> > > > support a shared Rxq. Fortunately, in testpmd, this can be done
> > > > with an abstract API.
> > > >
> > > > Let's defer part 2 until some PMD really supports it and it has
> > > > been tested. What do you think?
> > >
> > > We are not planning to use this feature, so either way it is OK
> > > with me. I leave it to the ethdev maintainers to decide between 1
> > > and 2.
> > >
> > > I do have a strong opinion on not changing the testpmd basic
> > > forward engines for this feature. I would like to keep them simple
> > > and fastpath-optimized, and would like to add a separate forwarding
> > > engine as a means to verify this feature.
> >
> > +1 to that.
> > I don't think it is a 'common' feature.
> > So a separate FWD mode seems like the best choice to me.
>
> -1 :)
> There was an internal requirement from the test team: they need to verify

Internal QA requirements may not be the driving factor :-)

> all features like packet content, RSS, VLAN, checksum, rte_flow... to
> be working based on the shared Rx queue. Based on the patch, I believe
> the impact has been minimized.


>
> >
> > >
> > >
> > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Overall, is this for optimizing memory for the port
> > > > > > > > > representors? If so, can't we have a port-representor-
> > > > > > > > > specific solution? Reducing the scope can reduce the
> > > > > > > > > complexity it brings.
> > > > > > > > >
> > > > > > > > > > > If this offload is only useful for the representor
> > > > > > > > > > > case, can we make it specific to the representor
> > > > > > > > > > > case by changing its name and scope?
> > > > > > > > > >
> > > > > > > > > > It works for both the PF and representors in the
> > > > > > > > > > same switch domain; for an application like OVS,
> > > > > > > > > > few changes are needed to apply it.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > > > > > > ---
> > > > > > > > > > > > > >  doc/guides/nics/features.rst                    | 11 +++++++++++
> > > > > > > > > > > > > >  doc/guides/nics/features/default.ini            |  1 +
> > > > > > > > > > > > > >  doc/guides/prog_guide/switch_representation.rst | 10 ++++++++++
> > > > > > > > > > > > > >  lib/ethdev/rte_ethdev.c                         |  1 +
> > > > > > > > > > > > > >  lib/ethdev/rte_ethdev.h                         |  7 +++++++
> > > > > > > > > > > > > >  5 files changed, 30 insertions(+)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> > > > > > > > > > > > > > index a96e12d155..2e2a9b1554 100644
> > > > > > > > > > > > > > --- a/doc/guides/nics/features.rst
> > > > > > > > > > > > > > +++ b/doc/guides/nics/features.rst
> > > > > > > > > > > > > > @@ -624,6 +624,17 @@ Supports inner packet L4 checksum.
> > > > > > > > > > > > > >    ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > +.. _nic_features_shared_rx_queue:
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +Shared Rx queue
> > > > > > > > > > > > > > +---------------
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +Supports shared Rx queue for ports in same switch domain.
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SHARED_RXQ``.
> > > > > > > > > > > > > > +* **[provides] mbuf**: ``mbuf.port``.
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > >  .. _nic_features_packet_type_parsing:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >  Packet type parsing
> > > > > > > > > > > > > > diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
> > > > > > > > > > > > > > index 754184ddd4..ebeb4c1851 100644
> > > > > > > > > > > > > > --- a/doc/guides/nics/features/default.ini
> > > > > > > > > > > > > > +++ b/doc/guides/nics/features/default.ini
> > > > > > > > > > > > > > @@ -19,6 +19,7 @@ Free Tx mbuf on demand =
> > > > > > > > > > > > > >  Queue start/stop     =
> > > > > > > > > > > > > >  Runtime Rx queue setup =
> > > > > > > > > > > > > >  Runtime Tx queue setup =
> > > > > > > > > > > > > > +Shared Rx queue      =
> > > > > > > > > > > > > >  Burst mode info      =
> > > > > > > > > > > > > >  Power mgmt address monitor =
> > > > > > > > > > > > > >  MTU update           =
> > > > > > > > > > > > > > diff --git a/doc/guides/prog_guide/switch_representation.rst b/doc/guides/prog_guide/switch_representation.rst
> > > > > > > > > > > > > > index ff6aa91c80..45bf5a3a10 100644
> > > > > > > > > > > > > > --- a/doc/guides/prog_guide/switch_representation.rst
> > > > > > > > > > > > > > +++ b/doc/guides/prog_guide/switch_representation.rst
> > > > > > > > > > > > > > @@ -123,6 +123,16 @@ thought as a software "patch panel" front-end for applications.
> > > > > > > > > > > > > >  .. [1] `Ethernet switch device driver model (switchdev)
> > > > > > > > > > > > > >     <https://www.kernel.org/doc/Documentation/networking/switchdev.txt>`_
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > +- Memory usage of representors is huge when number of representor grows,
> > > > > > > > > > > > > > +  because PMD always allocate mbuf for each descriptor of Rx queue.
> > > > > > > > > > > > > > +  Polling the large number of ports brings more CPU load, cache miss and
> > > > > > > > > > > > > > +  latency. Shared Rx queue can be used to share Rx queue between PF and
> > > > > > > > > > > > > > +  representors in same switch domain. ``RTE_ETH_RX_OFFLOAD_SHARED_RXQ``
> > > > > > > > > > > > > > +  is present in Rx offloading capability of device info. Setting the
> > > > > > > > > > > > > > +  offloading flag in device Rx mode or Rx queue configuration to enable
> > > > > > > > > > > > > > +  shared Rx queue. Polling any member port of shared Rx queue can return
> > > > > > > > > > > > > > +  packets of all ports in group, port ID is saved in ``mbuf.port``.
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > >  Basic SR-IOV
> > > > > > > > > > > > > >  ------------
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > > > > > > > > > > > > > index 9d95cd11e1..1361ff759a 100644
> > > > > > > > > > > > > > --- a/lib/ethdev/rte_ethdev.c
> > > > > > > > > > > > > > +++ b/lib/ethdev/rte_ethdev.c
> > > > > > > > > > > > > > @@ -127,6 +127,7 @@ static const struct {
> > > > > > > > > > > > > >         RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
> > > > > > > > > > > > > >         RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
> > > > > > > > > > > > > >         RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
> > > > > > > > > > > > > > +       RTE_ETH_RX_OFFLOAD_BIT2STR(SHARED_RXQ),
> > > > > > > > > > > > > >  };
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR
> > > > > > > > > > > > > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > > > > > index d2b27c351f..a578c9db9d 100644
> > > > > > > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
> > > > > > > > > > > > > > +       uint32_t shared_group; /**< Shared port group index in switch domain. */
> > > > > > > > > > > > > >         /**
> > > > > > > > > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > > > > > > >          * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> > > > > > > > > > > > > > @@ -1373,6 +1374,12 @@ struct rte_eth_conf {
> > > > > > > > > > > > > >  #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > > > > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH         0x00080000
> > > > > > > > > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > > > > > > +/**
> > > > > > > > > > > > > > + * Rx queue is shared among ports in same switch domain to save memory,
> > > > > > > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > > > > > > + */
> > > > > > > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > > > > > > >                                  DEV_RX_OFFLOAD_UDP_CKSUM | \
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > > 2.25.1
> > > > > > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > >
> > > > > >
> > > >
>