DPDK patches and discussions
From: <shaibran@amazon.com>
To: <ferruh.yigit@amd.com>
Cc: <dev@dpdk.org>, Shai Brandes <shaibran@amazon.com>
Subject: [PATCH v2 31/33] net/ena: support max large llq depth from the device
Date: Mon, 4 Mar 2024 14:29:40 +0200
Message-ID: <20240304122942.3496-32-shaibran@amazon.com>
In-Reply-To: <20240304122942.3496-1-shaibran@amazon.com>

From: Shai Brandes <shaibran@amazon.com>

Selected AWS instances from later generations enable
large LLQ by default, allowing the transmission of
packets with headers exceeding 96 bytes.

Due to the overall ENA memory BAR size limitation,
large LLQ has the side effect of halving the maximum
number of LLQ entries (from 1024 to 512).

ENA-Express, powered by AWS Scalable Reliable Datagram
(SRD) technology, requires a Tx queue with 1024 entries.
Selected AWS instances from upcoming generations will have
an ENA memory BAR of double the size, enabling ENA-Express
to work with a large LLQ of 1024 entries.

The initial default large LLQ size will remain 512.
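
The resulting depth selection can be sketched as follows (an
illustrative helper with hypothetical names, not the driver code;
the diff below is the authoritative change):

#include <stdint.h>

/*
 * Sketch only: choose the Tx queue depth under the large-LLQ (256B
 * entry) policy. A device that cannot fit wide entries at full depth
 * reports a wide-LLQ depth of 0, so the driver halves the queue
 * (e.g. 1024 -> 512); otherwise the depth is capped at the
 * device-reported maximum.
 */
static inline uint32_t
wide_llq_tx_depth(uint32_t max_tx_queue_size, uint16_t max_wide_llq_depth)
{
	if (max_wide_llq_depth == 0)
		return max_tx_queue_size >> 1;
	if (max_wide_llq_depth < max_tx_queue_size)
		return max_wide_llq_depth;
	return max_tx_queue_size;
}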

Signed-off-by: Shai Brandes <shaibran@amazon.com>
Reviewed-by: Amit Bernstein <amitbern@amazon.com>
---
 doc/guides/rel_notes/release_24_03.rst        |  2 +
 drivers/net/ena/ena_ethdev.c                  | 38 ++++++++++++-------
 drivers/net/ena/hal/ena_defs/ena_admin_defs.h |  4 +-
 3 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 2a22bb07ed..9823616eeb 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -107,6 +107,8 @@ New Features
   * Added support for sub-optimal configuration notifications from the device.
   * Restructured fast release of mbufs when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE optimization is enabled.
   * Replaced `enable_llq` and `large_llq_hdr` devargs with a new devarg `llq_policy`.
+  * Added support for LLQ header size recommendation from the device.
+  * Allowed large LLQ with 1024 entries when the device supports enlarged memory BAR.
 
 * **Updated Atomic Rules' Arkville driver.**
 
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index d73e321d0f..43693ee2ee 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -42,6 +42,8 @@
 
 #define DECIMAL_BASE 10
 
+#define MAX_WIDE_LLQ_DEPTH_UNSUPPORTED 0
+
 /*
  * We should try to keep ENA_CLEANUP_BUF_SIZE lower than
  * RTE_MEMPOOL_CACHE_MAX_SIZE, so we can fit this in mempool local cache.
@@ -1071,7 +1073,7 @@ static int
 ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
 		       bool use_large_llq_hdr)
 {
-	struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq;
+	struct ena_admin_feature_llq_desc *dev = &ctx->get_feat_ctx->llq;
 	struct ena_com_dev *ena_dev = ctx->ena_dev;
 	uint32_t max_tx_queue_size;
 	uint32_t max_rx_queue_size;
@@ -1086,7 +1088,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
 		if (ena_dev->tx_mem_queue_type ==
 		    ENA_ADMIN_PLACEMENT_POLICY_DEV) {
 			max_tx_queue_size = RTE_MIN(max_tx_queue_size,
-				llq->max_llq_depth);
+				dev->max_llq_depth);
 		} else {
 			max_tx_queue_size = RTE_MIN(max_tx_queue_size,
 				max_queue_ext->max_tx_sq_depth);
@@ -1106,7 +1108,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
 		if (ena_dev->tx_mem_queue_type ==
 		    ENA_ADMIN_PLACEMENT_POLICY_DEV) {
 			max_tx_queue_size = RTE_MIN(max_tx_queue_size,
-				llq->max_llq_depth);
+				dev->max_llq_depth);
 		} else {
 			max_tx_queue_size = RTE_MIN(max_tx_queue_size,
 				max_queues->max_sq_depth);
@@ -1122,18 +1124,28 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
 	max_rx_queue_size = rte_align32prevpow2(max_rx_queue_size);
 	max_tx_queue_size = rte_align32prevpow2(max_tx_queue_size);
 
-	if (use_large_llq_hdr) {
-		if ((llq->entry_size_ctrl_supported &
-		     ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
-		    (ena_dev->tx_mem_queue_type ==
-		     ENA_ADMIN_PLACEMENT_POLICY_DEV)) {
-			max_tx_queue_size /= 2;
-			PMD_INIT_LOG(INFO,
-				"Forcing large headers and decreasing maximum Tx queue size to %d\n",
+	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV && use_large_llq_hdr) {
+		/* intersection between driver configuration and device capabilities */
+		if (dev->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B) {
+			if (dev->max_wide_llq_depth == MAX_WIDE_LLQ_DEPTH_UNSUPPORTED) {
+				/* Devices that do not support the double-sized ENA memory BAR will
+				 * report max_wide_llq_depth as 0. In such a case, the driver
+				 * halves the queue depth when working in the large LLQ policy.
+				 */
+				max_tx_queue_size >>= 1;
+				PMD_INIT_LOG(INFO,
+					"large LLQ policy requires limiting Tx queue size to %u entries\n",
 				max_tx_queue_size);
+			} else if (dev->max_wide_llq_depth < max_tx_queue_size) {
+				/* If the queue depth calculated by the driver exceeds the
+				 * maximum value allowed by the device, limit it to that
+				 * maximum value.
+				 */
+				max_tx_queue_size = dev->max_wide_llq_depth;
+			}
 		} else {
-			PMD_INIT_LOG(ERR,
-				"Forcing large headers failed: LLQ is disabled or device does not support large headers\n");
+			PMD_INIT_LOG(INFO,
+				"Forcing large LLQ headers failed since device lacks this support\n");
 		}
 	}
 
diff --git a/drivers/net/ena/hal/ena_defs/ena_admin_defs.h b/drivers/net/ena/hal/ena_defs/ena_admin_defs.h
index 2adce75ed3..cff6451c96 100644
--- a/drivers/net/ena/hal/ena_defs/ena_admin_defs.h
+++ b/drivers/net/ena/hal/ena_defs/ena_admin_defs.h
@@ -696,8 +696,8 @@ struct ena_admin_feature_llq_desc {
 	 */
 	uint8_t entry_size_recommended;
 
-	/* reserved */
-	uint8_t reserved1[2];
+	/* max depth of wide llq, or 0 for N/A */
+	uint16_t max_wide_llq_depth;
 
 	/* accelerated low latency queues requirement. driver needs to
 	 * support those requirements in order to use accelerated llq
-- 
2.17.1


Thread overview: 34+ messages
2024-03-04 12:29 [PATCH v2 00/33] net/ena: v2.9.0 driver release shaibran
2024-03-04 12:29 ` [PATCH v2 01/33] net/ena: rework the metrics multi-process functions shaibran
2024-03-04 12:29 ` [PATCH v2 02/33] net/ena: report new supported link speed capabilities shaibran
2024-03-04 12:29 ` [PATCH v2 03/33] net/ena: update imissed stat with Rx overruns shaibran
2024-03-04 12:29 ` [PATCH v2 04/33] net/ena: sub-optimal configuration notifications support shaibran
2024-03-04 12:29 ` [PATCH v2 05/33] net/ena: fix fast mbuf free shaibran
2024-03-04 12:29 ` [PATCH v2 06/33] net/ena: rename base folder to hal shaibran
2024-03-04 12:29 ` [PATCH v2 07/33] net/ena: restructure the llq policy setting process shaibran
2024-03-04 12:29 ` [PATCH v2 08/33] net/ena/hal: exponential backoff exp limit shaibran
2024-03-04 12:29 ` [PATCH v2 09/33] net/ena/hal: add a new csum offload bit shaibran
2024-03-04 12:29 ` [PATCH v2 10/33] net/ena/hal: added a bus parameter to ena memcpy macro shaibran
2024-03-04 12:29 ` [PATCH v2 11/33] net/ena/hal: optimize Rx ring submission queue shaibran
2024-03-04 12:29 ` [PATCH v2 12/33] net/ena/hal: rename fields in completion descriptors shaibran
2024-03-04 12:29 ` [PATCH v2 13/33] net/ena/hal: use correct read once on u8 field shaibran
2024-03-04 12:29 ` [PATCH v2 14/33] net/ena/hal: add completion descriptor corruption check shaibran
2024-03-04 12:29 ` [PATCH v2 15/33] net/ena/hal: malformed Tx descriptor error reason shaibran
2024-03-04 12:29 ` [PATCH v2 16/33] net/ena/hal: phc feature modifications shaibran
2024-03-04 12:29 ` [PATCH v2 17/33] net/ena/hal: restructure interrupt handling shaibran
2024-03-04 12:29 ` [PATCH v2 18/33] net/ena/hal: add unlikely to error checks shaibran
2024-03-04 12:29 ` [PATCH v2 19/33] net/ena/hal: missing admin interrupt reset reason shaibran
2024-03-04 12:29 ` [PATCH v2 20/33] net/ena/hal: check for existing keep alive notification shaibran
2024-03-04 12:29 ` [PATCH v2 21/33] net/ena/hal: modify memory barrier comment shaibran
2024-03-04 12:29 ` [PATCH v2 22/33] net/ena/hal: rework Rx ring submission queue shaibran
2024-03-04 12:29 ` [PATCH v2 23/33] net/ena/hal: remove operating system type enum shaibran
2024-03-04 12:29 ` [PATCH v2 24/33] net/ena/hal: handle command abort shaibran
2024-03-04 12:29 ` [PATCH v2 25/33] net/ena/hal: add support for device reset request shaibran
2024-03-04 12:29 ` [PATCH v2 26/33] net/ena: cosmetic changes shaibran
2024-03-04 12:29 ` [PATCH v2 27/33] net/ena/hal: modify customer metrics memory management shaibran
2024-03-04 12:29 ` [PATCH v2 28/33] net/ena/hal: cosmetic changes shaibran
2024-03-04 12:29 ` [PATCH v2 29/33] net/ena: update device-preferred size of rings shaibran
2024-03-04 12:29 ` [PATCH v2 30/33] net/ena: exhaust interrupt callbacks in device close shaibran
2024-03-04 12:29 ` shaibran [this message]
2024-03-04 12:29 ` [PATCH v2 32/33] net/ena: control path pure polling mode shaibran
2024-03-04 12:29 ` [PATCH v2 33/33] net/ena: upgrade driver version to 2.9.0 shaibran
