DPDK patches and discussions
From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: "jerinjacobk@gmail.com" <jerinjacobk@gmail.com>,
	NBU-Contact-Thomas Monjalon <thomas@monjalon.net>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	"konstantin.ananyev@intel.com" <konstantin.ananyev@intel.com>,
	"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
	Lior Margalit <lmargalit@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v8 0/6] ethdev: introduce shared Rx queue
Date: Mon, 18 Oct 2021 13:05:36 +0000	[thread overview]
Message-ID: <cf848aad2d162f81dec2b9537cc1c3cf7d363dc0.camel@nvidia.com> (raw)
In-Reply-To: <20211018120842.2058637-1-xuemingl@nvidia.com>

Sorry, I forgot to reply to the original thread, so I am resending this here.

Please ignore this series.


On Mon, 2021-10-18 at 20:08 +0800, Xueming Li wrote:
> In the current DPDK framework, all Rx queues are pre-loaded with mbufs
> for incoming packets. When the number of representors scales up in a
> switch domain, this memory consumption becomes significant. Moreover,
> polling all ports leads to a high cache-miss rate, high latency and low
> throughput.
> 
> This patch series introduces the shared Rx queue. A PF and representors
> in the same Rx domain and switch domain can share an Rx queue set by
> specifying a non-zero share-group value in the Rx queue configuration.
> 
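As a concrete illustration of the configuration described above, here is a minimal self-contained sketch. The field names (`share_group`, `share_qid`) and the capability flag are taken from this series, but the stub types below are stand-ins for illustration only, not the real `rte_ethdev.h` definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the two fields this series adds to struct rte_eth_rxconf. */
struct rxconf_stub {
	uint16_t share_group; /* 0 = not shared; > 0 selects the share group */
	uint16_t share_qid;   /* shared queue index inside the group (v8)    */
};

/* Stand-in for the device capability bit (RTE_ETH_DEV_CAPA_RXQ_SHARE). */
#define DEV_CAPA_RXQ_SHARE_STUB (1ULL << 0)

/*
 * Mark queue `qid` of a port as shared within `group`, provided the
 * device advertises the shared-RxQ capability.
 * Returns 0 on success, -1 if the capability is missing.
 */
static int setup_shared_rxq(uint64_t dev_capa, uint16_t qid,
			    uint16_t group, struct rxconf_stub *conf)
{
	if (!(dev_capa & DEV_CAPA_RXQ_SHARE_STUB))
		return -1;
	conf->share_group = group; /* non-zero => queue is shared */
	conf->share_qid = qid;     /* queues map 1:1 within the group */
	return 0;
}
```

In the real API the equivalent values would be filled into the `rte_eth_rxconf` passed to `rte_eth_rx_queue_setup()`, after checking the device capability reported by `rte_eth_dev_info_get()`.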
> All ports that share an Rx queue actually share the hardware descriptor
> queue and are fed from a single descriptor supply, which saves memory.
> 
> Polling any queue of a shared Rx queue receives packets from all member
> ports. The source port is identified by mbuf->port.
> 
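The receive path described above can be modeled with a self-contained sketch: one shared descriptor queue holds packets from several member ports, a single poll drains them all, and the caller demultiplexes on the packet's port field (`mbuf->port` in the real API). The types here are illustrative stand-ins, not DPDK structures:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for an mbuf: only the source-port field matters here. */
struct mbuf_stub {
	uint16_t port;
};

/* One shared queue holding packets from all member ports, in arrival order. */
struct shared_rxq {
	struct mbuf_stub ring[8];
	unsigned int head; /* next packet to deliver */
	unsigned int tail; /* one past the last enqueued packet */
};

/*
 * Polling the shared queue returns packets from ANY member port;
 * the caller identifies the origin of each packet via pkts[i].port.
 * Returns the number of packets delivered.
 */
static unsigned int poll_shared(struct shared_rxq *q,
				struct mbuf_stub *pkts, unsigned int max)
{
	unsigned int n = 0;

	while (q->head < q->tail && n < max)
		pkts[n++] = q->ring[q->head++];
	return n;
}
```

With a PF on port 0 and a representor on port 1 in the same group, a single `poll_shared()` call returns the interleaved traffic of both ports, which is the behavior the real `rte_eth_rx_burst()` exhibits on a shared queue.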
> Multiple groups are supported and selected by group ID. The number of
> port queues in a shared group must be identical, and queue indexes are
> mapped 1:1 within a shared group.
> An example of two share groups:
>  Group1, 4 shared Rx queues per member port: PF, repr0, repr1
>  Group2, 2 shared Rx queues per member port: repr2, repr3, ... repr127
>  Poll first port for each group:
>   core	port	queue
>   0	0	0
>   1	0	1
>   2	0	2
>   3	0	3
>   4	2	0
>   5	2	1
> 
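The example deployment above can be expressed as a small lookup: each polling core is assigned the first member port of its group plus one shared queue index. This is just the cover letter's table encoded as code; the function is illustrative, not part of the series:

```c
#include <assert.h>
#include <stdint.h>

/* Group 1: 4 shared Rx queues, first member port 0 (the PF).
 * Group 2: 2 shared Rx queues, first member port 2 (repr2). */
#define G1_NQ   4
#define G2_NQ   2
#define G1_PORT 0
#define G2_PORT 2

/*
 * Map a polling core ID to the (port, queue) pair it should poll,
 * following the example table: cores 0-3 poll port 0 queues 0-3,
 * cores 4-5 poll port 2 queues 0-1.
 */
static void core_to_pq(unsigned int core, uint16_t *port, uint16_t *queue)
{
	if (core < G1_NQ) {
		*port = G1_PORT;
		*queue = (uint16_t)core;
	} else {
		*port = G2_PORT;
		*queue = (uint16_t)(core - G1_NQ);
	}
}
```

Since polling any member port drains the whole group, only the first port of each group needs to appear in the mapping; the other members' traffic arrives through the same shared queues.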
> A shared Rx queue must be polled on a single thread or core. If both PF0
> and representor0 joined the same share group, pf0rxq0 cannot be polled on
> core1 while rep0rxq0 is polled on core2. In practice, polling one port
> within a share group is sufficient, since polling any port in the group
> returns packets for every port in the group.
> 
> There was some discussion about aggregating the member ports of a group
> into a dummy port, and there are several ways to achieve it. Since this is
> optional, we need to collect more feedback and requirements from users
> before making a decision later.
> 
> v1:
>   - initial version
> v2:
>   - add testpmd patches
> v3:
>   - change common forwarding API to a macro for performance, thanks Jerin.
>   - save global variables accessed in forwarding to flowstream to minimize
>     cache misses
>   - combined patches for each forwarding engine
>   - support multiple groups in testpmd "--share-rxq" parameter
>   - new api to aggregate shared rxq group
> v4:
>   - spelling fixes
>   - remove shared-rxq support for all forwarding engines
>   - add dedicate shared-rxq forwarding engine
> v5:
>  - fix grammars
>  - remove aggregate api and leave it for later discussion
>  - add release notes
>  - add deployment example
> v6:
>  - replace RxQ offload flag with device offload capability flag
>  - add Rx domain
>  - RxQ is shared when share group > 0
>  - update testpmd accordingly
> v7:
>  - fix testpmd share group id allocation
>  - change rx_domain to 16 bits
> v8:
>  - add new patch for testpmd to show device Rx domain ID and capability
>  - new share_qid in RxQ configuration
> 
> Xueming Li (6):
>   ethdev: introduce shared Rx queue
>   app/testpmd: dump device capability and Rx domain info
>   app/testpmd: new parameter to enable shared Rx queue
>   app/testpmd: dump port info for shared Rx queue
>   app/testpmd: force shared Rx queue polled on same core
>   app/testpmd: add forwarding engine for shared Rx queue
> 
>  app/test-pmd/config.c                         | 114 +++++++++++++-
>  app/test-pmd/meson.build                      |   1 +
>  app/test-pmd/parameters.c                     |  13 ++
>  app/test-pmd/shared_rxq_fwd.c                 | 148 ++++++++++++++++++
>  app/test-pmd/testpmd.c                        |  25 ++-
>  app/test-pmd/testpmd.h                        |   5 +
>  app/test-pmd/util.c                           |   3 +
>  doc/guides/nics/features.rst                  |  13 ++
>  doc/guides/nics/features/default.ini          |   1 +
>  .../prog_guide/switch_representation.rst      |  11 ++
>  doc/guides/rel_notes/release_21_11.rst        |   6 +
>  doc/guides/testpmd_app_ug/run_app.rst         |   8 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst   |   5 +-
>  lib/ethdev/rte_ethdev.c                       |   8 +
>  lib/ethdev/rte_ethdev.h                       |  24 +++
>  15 files changed, 379 insertions(+), 6 deletions(-)
>  create mode 100644 app/test-pmd/shared_rxq_fwd.c
> 



Thread overview: 9+ messages
2021-10-18 12:08 Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 1/6] " Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 2/6] app/testpmd: dump device capability and Rx domain info Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 3/6] app/testpmd: new parameter to enable shared Rx queue Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 4/6] app/testpmd: dump port info for " Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 5/6] app/testpmd: force shared Rx queue polled on same core Xueming Li
2021-10-18 12:08 ` [dpdk-dev] [PATCH v8 6/6] app/testpmd: add forwarding engine for shared Rx queue Xueming Li
2021-10-18 13:05 ` Xueming(Steven) Li [this message]
  -- strict thread matches above, loose matches on Subject: below --
2021-07-27  3:42 [dpdk-dev] [RFC] ethdev: introduce " Xueming Li
2021-10-18 12:59 ` [dpdk-dev] [PATCH v8 0/6] " Xueming Li
