DPDK patches and discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Dariusz Sosnowski <dsosnowski@nvidia.com>
Cc: Ferruh Yigit <ferruh.yigit@xilinx.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	dev@dpdk.org, Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	Matan Azrad <matan@nvidia.com>, Ori Kam <orika@nvidia.com>,
	Wisam Jaddo <wisamm@nvidia.com>,
	Aman Singh <aman.deep.singh@intel.com>,
	Yuying Zhang <yuying.zhang@intel.com>
Subject: Re: [PATCH v2 0/8] ethdev: introduce hairpin memory capabilities
Date: Sat, 08 Oct 2022 18:31:31 +0200	[thread overview]
Message-ID: <37847984.10thIPus4b@thomas> (raw)
In-Reply-To: <20221006110105.2986966-1-dsosnowski@nvidia.com>

06/10/2022 13:00, Dariusz Sosnowski:
> Hairpin queues are used to transmit packets received on the wire back to the wire.
> How hairpin queues are implemented and configured is decided internally by the PMD,
> and applications have no control over the configuration of Rx and Tx hairpin queues.
> This patchset addresses that by:
> 
> - Extending the hairpin queue capabilities reported by PMDs
>   (a hedged capability-check sketch follows this list).
> - Exposing new configuration options for Rx and Tx hairpin queues.
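> 
> For illustration, a minimal capability-check sketch is given below.
> The rx_cap field and its locked_device_memory/rte_memory bits are
> assumed from patch 1/8 and may differ from the final API:
> 
>     #include <stdio.h>
>     #include <rte_ethdev.h>
> 
>     /* Report which memory placements a port supports for hairpin Rx queues
>      * (capability field names are assumed from this patchset). */
>     static void
>     print_hairpin_rx_memory_caps(uint16_t port_id)
>     {
>         struct rte_eth_hairpin_cap cap;
> 
>         if (rte_eth_dev_hairpin_capability_get(port_id, &cap) != 0) {
>             printf("port %u: hairpin not supported\n", port_id);
>             return;
>         }
>         printf("port %u Rx hairpin: locked device memory=%s, RTE memory=%s\n",
>                port_id,
>                cap.rx_cap.locked_device_memory ? "yes" : "no",
>                cap.rx_cap.rte_memory ? "yes" : "no");
>     }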
> 
> The main goal of this patchset is to allow applications to provide configuration hints
> regarding the memory placement of hairpin queues.
> These hints specify whether buffers of hairpin queues should be placed in host memory
> or in dedicated device memory.
> 
> For example, in the context of NVIDIA ConnectX and BlueField devices,
> this distinction is important for several reasons:
> 
> - By default, data buffers and packet descriptors are placed in a device memory region
>   which is shared with other resources (e.g. flow rules).
>   This results in memory contention on the device,
>   which may lead to degraded performance under heavy load.
> - Placing hairpin queues in dedicated device memory can decrease latency of hairpinned traffic,
>   since hairpin queue processing will not be memory starved by other operations.
>   A side effect of this memory configuration is that it leaves less memory for other resources,
>   possibly causing memory contention for non-hairpin traffic.
> - Placing hairpin queues in host memory can increase throughput of hairpinned
>   traffic at the cost of increasing latency.
>   Each packet processed by hairpin queues will incur additional PCI transactions (increasing latency),
>   but memory contention on the device is avoided.
> 
> Depending on the workload and whether throughput or latency has a higher priority,
> it would be beneficial if developers could choose the hairpin configuration best suited to their use case.
> 
> To address that, this patchset adds the following configuration options (in the rte_eth_hairpin_conf struct):
> 
> - use_locked_device_memory - If set, the PMD will allocate specialized on-device memory for the queue.
> - use_rte_memory - If set, the PMD will use DPDK-managed memory for the queue.
> - force_memory - If set, the PMD must use the requested memory configuration.
>   If no appropriate resources are available, the queue allocation will fail.
>   If unset and no appropriate resources are available, the PMD will fall back to its default behavior.
> 
> Implementing support for these flags is optional, and applications are allowed to leave all of these new flags unset.
> In that case, the PMD's default memory configuration is used;
> application developers should consult the PMD documentation for details.
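> 
> As an illustration, a minimal sketch of requesting DPDK-managed memory
> for a hairpin Rx queue with the flags above (the helper name and its
> parameters are hypothetical):
> 
>     #include <rte_ethdev.h>
> 
>     /* Set up hairpin Rx queue queue_id on port_id, peered with Tx queue
>      * peer_queue on peer_port, asking for DPDK-managed memory and failing
>      * if the PMD cannot honor the request. */
>     static int
>     setup_hairpin_rx_queue(uint16_t port_id, uint16_t queue_id,
>                            uint16_t peer_port, uint16_t peer_queue)
>     {
>         struct rte_eth_hairpin_conf conf = {
>             .peer_count = 1,
>             .use_rte_memory = 1, /* place queue buffers in DPDK-managed memory */
>             .force_memory = 1,   /* fail instead of falling back to the PMD default */
>         };
> 
>         conf.peers[0].port = peer_port;
>         conf.peers[0].queue = peer_queue;
> 
>         /* nb_rx_desc = 0 lets the PMD pick its default ring size. */
>         return rte_eth_rx_hairpin_queue_setup(port_id, queue_id, 0, &conf);
>     }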
> 
> These changes were originally proposed in http://patches.dpdk.org/project/dpdk/patch/20220811120530.191683-1-dsosnowski@nvidia.com/.
> 
> Dariusz Sosnowski (8):
>   ethdev: introduce hairpin memory capabilities
>   common/mlx5: add hairpin SQ buffer type capabilities
>   common/mlx5: add hairpin RQ buffer type capabilities
>   net/mlx5: allow hairpin Tx queue in RTE memory
>   net/mlx5: allow hairpin Rx queue in locked memory
>   doc: add notes for hairpin to mlx5 documentation
>   app/testpmd: add hairpin queues memory modes
>   app/flow-perf: add hairpin queue memory config

Doc patch squashed into the mlx5 commits.
Applied, thanks.




Thread overview: 30+ messages
2022-09-19 16:37 [PATCH 0/7] " Dariusz Sosnowski
2022-09-19 16:37 ` [PATCH 1/7] " Dariusz Sosnowski
2022-10-04 16:50   ` Thomas Monjalon
2022-10-06 11:21     ` Dariusz Sosnowski
2022-09-19 16:37 ` [PATCH 2/7] common/mlx5: add hairpin SQ buffer type capabilities Dariusz Sosnowski
2022-09-27 13:03   ` Slava Ovsiienko
2022-09-19 16:37 ` [PATCH 3/7] common/mlx5: add hairpin RQ " Dariusz Sosnowski
2022-09-27 13:04   ` Slava Ovsiienko
2022-09-19 16:37 ` [PATCH 4/7] net/mlx5: allow hairpin Tx queue in RTE memory Dariusz Sosnowski
2022-09-27 13:05   ` Slava Ovsiienko
2022-09-19 16:37 ` [PATCH 5/7] net/mlx5: allow hairpin Rx queue in locked memory Dariusz Sosnowski
2022-09-27 13:04   ` Slava Ovsiienko
2022-11-25 14:06   ` Kenneth Klette Jonassen
2022-09-19 16:37 ` [PATCH 6/7] app/testpmd: add hairpin queues memory modes Dariusz Sosnowski
2022-09-19 16:37 ` [PATCH 7/7] app/flow-perf: add hairpin queue memory config Dariusz Sosnowski
2022-10-04 12:24   ` Wisam Monther
2022-10-06 11:06     ` Dariusz Sosnowski
2022-10-04 16:44 ` [PATCH 0/7] ethdev: introduce hairpin memory capabilities Thomas Monjalon
2022-10-06 11:08   ` Dariusz Sosnowski
2022-10-06 11:00 ` [PATCH v2 0/8] " Dariusz Sosnowski
2022-10-06 11:00   ` [PATCH v2 1/8] " Dariusz Sosnowski
2022-10-06 11:00   ` [PATCH v2 2/8] common/mlx5: add hairpin SQ buffer type capabilities Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 3/8] common/mlx5: add hairpin RQ " Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 4/8] net/mlx5: allow hairpin Tx queue in RTE memory Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 5/8] net/mlx5: allow hairpin Rx queue in locked memory Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 6/8] doc: add notes for hairpin to mlx5 documentation Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 7/8] app/testpmd: add hairpin queues memory modes Dariusz Sosnowski
2022-10-06 11:01   ` [PATCH v2 8/8] app/flow-perf: add hairpin queue memory config Dariusz Sosnowski
2022-10-15 16:30     ` Wisam Monther
2022-10-08 16:31   ` Thomas Monjalon [this message]
