From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shai Brandes <shaibran@amazon.com>
To: <dev@dpdk.org>
CC: Shai Brandes
Subject: [PATCH] net/ena: restructure the llq policy user setting
Date: Sun, 6 Oct 2024 15:32:19 +0300
Message-ID: <20241006123219.3081-1-shaibran@amazon.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions <dev.dpdk.org>

From: Shai Brandes <shaibran@amazon.com>

Replaced `enable_llq`, `normal_llq_hdr` and `large_llq_hdr` devargs
with a new shared devarg named `llq_policy` that implements the same
logic and accepts the following values:
0 - Disable LLQ.
    Use with extreme caution as it leads to a huge performance
    degradation on AWS instances built with Nitro v4 onwards.
1 - Accept device recommended LLQ policy (Default).
    Device can recommend normal or large LLQ policy.
2 - Enforce normal LLQ policy.
3 - Enforce large LLQ policy.
    Required for packets with headers that exceed 96 bytes on
    AWS instances built with Nitro v2 and earlier.

Signed-off-by: Shai Brandes
Reviewed-by: Amit Bernstein
---
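Usage note: the new `llq_policy` devarg is passed through the standard EAL
device arguments, appended to the PCI allow-list entry of the port. The
sketch below is illustrative only: the PCI address 0000:00:06.0, the
application name and the chosen policy value (3, large LLQ) are assumptions,
not part of this patch.

/* Minimal sketch: request the large-LLQ policy for an ENA port by passing
 * the llq_policy devarg on the EAL allow-list entry. EAL forwards the
 * key/value pair to the PMD at probe time, where ena_parse_devargs() and
 * ena_process_llq_policy_devarg() consume it.
 */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
	char *eal_argv[] = {
		"llq_policy_example",              /* argv[0]: program name (hypothetical) */
		"-a", "0000:00:06.0,llq_policy=3", /* hypothetical ENA port + devarg */
	};
	int eal_argc = 3;

	if (rte_eal_init(eal_argc, eal_argv) < 0) {
		printf("EAL initialization failed\n");
		return -1;
	}

	/* ... normal port configuration and start would follow here ... */

	rte_eal_cleanup();
	return 0;
}

The same devarg string works on the command line of an existing application,
e.g. "dpdk-testpmd -a 0000:00:06.0,llq_policy=3 -- -i" (PCI address assumed).
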
 doc/guides/nics/ena.rst                |  20 ++---
 doc/guides/rel_notes/release_24_11.rst |   8 ++
 drivers/net/ena/ena_ethdev.c           | 103 ++++++++-----------------
 drivers/net/ena/ena_ethdev.h           |   3 -
 4 files changed, 48 insertions(+), 86 deletions(-)

diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index 2b105834a0..1467c0c190 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -107,11 +107,14 @@ Configuration
 Runtime Configuration
 ^^^^^^^^^^^^^^^^^^^^^
 
-   * **large_llq_hdr** (default 0)
+   * **llq_policy** (default 1)
 
-     Enables or disables usage of large LLQ headers. This option will have
-     effect only if the device also supports large LLQ headers. Otherwise, the
-     default value will be used.
+     Controls whether to use the device recommended header policy or to override it.
+     0 - Disable LLQ (use with extreme caution as it leads to a huge performance
+     degradation on AWS instances built with Nitro v4 onwards).
+     1 - Accept device recommended LLQ policy (Default).
+     2 - Enforce normal LLQ policy.
+     3 - Enforce large LLQ policy.
 
    * **normal_llq_hdr** (default 0)
 
@@ -126,15 +129,6 @@ Runtime Configuration
     timer service. Setting this parameter to 0 disables this feature.
     Maximum allowed value is 60 seconds.
 
-   * **enable_llq** (default 1)
-
-     Determines whenever the driver should use the LLQ (if it's available) or
-     not.
-
-     **NOTE: On the 6th generation AWS instances disabling LLQ may lead to a
-     huge performance degradation. In general disabling LLQ is highly not
-     recommended!**
-
   * **control_poll_interval** (default 0)
 
     Enable polling-based functionality of the admin queues,
diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index dfa4795f85..b88e05f349 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -75,6 +75,12 @@ New Features
   registers by module names and get the information (names, values and other
   attributes) of the filtered registers.
 
+* **Updated Amazon ENA (Elastic Network Adapter) net driver.**
+
+  * Modified the PMD API that controls the LLQ header policy.
+  * Replaced ``enable_llq``, ``normal_llq_hdr`` and ``large_llq_hdr`` devargs
+    with a new shared devarg ``llq_policy`` that keeps the same logic.
+
 * **Updated Cisco enic driver.**
 
   * Added SR-IOV VF support.
@@ -112,6 +118,8 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* drivers/net/ena: Removed ``enable_llq``, ``normal_llq_hdr`` and ``large_llq_hdr`` devargs
+  and replaced them with a new shared devarg ``llq_policy`` that keeps the same logic.
 
 ABI Changes
 -----------
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e0c239e88f..18e0be6d5c 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -79,18 +79,25 @@ struct ena_stats {
 	ENA_STAT_ENTRY(stat, srd)
 
 /* Device arguments */
-#define ENA_DEVARG_LARGE_LLQ_HDR "large_llq_hdr"
-#define ENA_DEVARG_NORMAL_LLQ_HDR "normal_llq_hdr"
+
+/* llq_policy controls whether to disable LLQ, use the device recommended
+ * header policy, or override the device recommendation.
+ * 0 - Disable LLQ. Use with extreme caution as it leads to a huge
+ *     performance degradation on AWS instances built with Nitro v4 onwards.
+ * 1 - Accept device recommended LLQ policy (Default).
+ *     Device can recommend normal or large LLQ policy.
+ * 2 - Enforce normal LLQ policy.
+ * 3 - Enforce large LLQ policy.
+ *     Required for packets with headers that exceed 96 bytes on
+ *     AWS instances built with Nitro v2 and Nitro v1.
+ */
+#define ENA_DEVARG_LLQ_POLICY "llq_policy"
+
 /* Timeout in seconds after which a single uncompleted Tx packet should be
  * considered as a missing.
  */
 #define ENA_DEVARG_MISS_TXC_TO "miss_txc_to"
-/*
- * Controls whether LLQ should be used (if available). Enabled by default.
- * NOTE: It's highly not recommended to disable the LLQ, as it may lead to a
- * huge performance degradation on 6th generation AWS instances.
- */
-#define ENA_DEVARG_ENABLE_LLQ "enable_llq"
+
 /*
  * Controls the period of time (in milliseconds) between two consecutive inspections of
  * the control queues when the driver is in poll mode and not using interrupts.
@@ -294,9 +301,9 @@ static int ena_xstats_get_by_id(struct rte_eth_dev *dev,
 			       const uint64_t *ids,
 			       uint64_t *values,
 			       unsigned int n);
-static int ena_process_bool_devarg(const char *key,
-				   const char *value,
-				   void *opaque);
+static int ena_process_llq_policy_devarg(const char *key,
+					 const char *value,
+					 void *opaque);
 static int ena_parse_devargs(struct ena_adapter *adapter,
 			     struct rte_devargs *devargs);
 static void ena_copy_customer_metrics(struct ena_adapter *adapter,
@@ -312,7 +319,6 @@ static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static int ena_configure_aenq(struct ena_adapter *adapter);
 static int ena_mp_primary_handle(const struct rte_mp_msg *mp_msg,
 				 const void *peer);
-static ena_llq_policy ena_define_llq_hdr_policy(struct ena_adapter *adapter);
 static bool ena_use_large_llq_hdr(struct ena_adapter *adapter, uint8_t recommended_entry_size);
 
 static const struct eth_dev_ops ena_dev_ops = {
@@ -2320,9 +2326,6 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* Assign default devargs values */
 	adapter->missing_tx_completion_to = ENA_TX_TIMEOUT;
-	adapter->enable_llq = true;
-	adapter->use_large_llq_hdr = false;
-	adapter->use_normal_llq_hdr = false;
 
 	/* Get user bypass */
 	rc = ena_parse_devargs(adapter, pci_dev->device.devargs);
@@ -2330,8 +2333,6 @@
 		PMD_INIT_LOG(CRIT, "Failed to parse devargs\n");
 		goto err;
 	}
-	adapter->llq_header_policy = ena_define_llq_hdr_policy(adapter);
-
 	rc = ena_com_allocate_customer_metrics_buffer(ena_dev);
 	if (rc != 0) {
 		PMD_INIT_LOG(CRIT, "Failed to allocate customer metrics buffer\n");
@@ -3734,44 +3735,29 @@ static int ena_process_uint_devarg(const char *key,
 	return 0;
 }
 
-static int ena_process_bool_devarg(const char *key,
-				   const char *value,
-				   void *opaque)
+static int ena_process_llq_policy_devarg(const char *key, const char *value, void *opaque)
 {
 	struct ena_adapter *adapter = opaque;
-	bool bool_value;
+	uint32_t policy;
 
-	/* Parse the value. */
-	if (strcmp(value, "1") == 0) {
-		bool_value = true;
-	} else if (strcmp(value, "0") == 0) {
-		bool_value = false;
+	policy = strtoul(value, NULL, DECIMAL_BASE);
+	if (policy < ENA_LLQ_POLICY_LAST) {
+		adapter->llq_header_policy = policy;
 	} else {
-		PMD_INIT_LOG(ERR,
-			"Invalid value: '%s' for key '%s'. Accepted: '0' or '1'\n",
-			value, key);
+		PMD_INIT_LOG(ERR, "Invalid value: '%s' for key '%s'. valid [0-3]\n", value, key);
 		return -EINVAL;
 	}
-
-	/* Now, assign it to the proper adapter field. */
-	if (strcmp(key, ENA_DEVARG_LARGE_LLQ_HDR) == 0)
-		adapter->use_large_llq_hdr = bool_value;
-	else if (strcmp(key, ENA_DEVARG_NORMAL_LLQ_HDR) == 0)
-		adapter->use_normal_llq_hdr = bool_value;
-	else if (strcmp(key, ENA_DEVARG_ENABLE_LLQ) == 0)
-		adapter->enable_llq = bool_value;
-
+	PMD_DRV_LOG(INFO,
+		"LLQ policy is %u [0 - disabled, 1 - device recommended, 2 - normal, 3 - large]\n",
+		adapter->llq_header_policy);
 	return 0;
 }
 
-static int ena_parse_devargs(struct ena_adapter *adapter,
-			     struct rte_devargs *devargs)
+static int ena_parse_devargs(struct ena_adapter *adapter, struct rte_devargs *devargs)
 {
 	static const char * const allowed_args[] = {
-		ENA_DEVARG_LARGE_LLQ_HDR,
-		ENA_DEVARG_NORMAL_LLQ_HDR,
+		ENA_DEVARG_LLQ_POLICY,
 		ENA_DEVARG_MISS_TXC_TO,
-		ENA_DEVARG_ENABLE_LLQ,
 		ENA_DEVARG_CONTROL_PATH_POLL_INTERVAL,
 		NULL,
 	};
@@ -3783,27 +3769,17 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
 
 	kvlist = rte_kvargs_parse(devargs->args, allowed_args);
 	if (kvlist == NULL) {
-		PMD_INIT_LOG(ERR, "Invalid device arguments: %s\n",
-			devargs->args);
+		PMD_INIT_LOG(ERR, "Invalid device arguments: %s\n", devargs->args);
 		return -EINVAL;
 	}
-
-	rc = rte_kvargs_process(kvlist, ENA_DEVARG_LARGE_LLQ_HDR,
-				ena_process_bool_devarg, adapter);
-	if (rc != 0)
-		goto exit;
-	rc = rte_kvargs_process(kvlist, ENA_DEVARG_NORMAL_LLQ_HDR,
-				ena_process_bool_devarg, adapter);
+	rc = rte_kvargs_process(kvlist, ENA_DEVARG_LLQ_POLICY,
+				ena_process_llq_policy_devarg, adapter);
 	if (rc != 0)
 		goto exit;
 	rc = rte_kvargs_process(kvlist, ENA_DEVARG_MISS_TXC_TO,
 				ena_process_uint_devarg, adapter);
 	if (rc != 0)
 		goto exit;
-	rc = rte_kvargs_process(kvlist, ENA_DEVARG_ENABLE_LLQ,
-				ena_process_bool_devarg, adapter);
-	if (rc != 0)
-		goto exit;
 	rc = rte_kvargs_process(kvlist, ENA_DEVARG_CONTROL_PATH_POLL_INTERVAL,
 				ena_process_uint_devarg, adapter);
 	if (rc != 0)
@@ -4027,9 +4003,7 @@ RTE_PMD_REGISTER_PCI(net_ena, rte_ena_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_ena, pci_id_ena_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_ena,
-	ENA_DEVARG_LARGE_LLQ_HDR "=<0|1> "
-	ENA_DEVARG_NORMAL_LLQ_HDR "=<0|1> "
-	ENA_DEVARG_ENABLE_LLQ "=<0|1> "
+	ENA_DEVARG_LLQ_POLICY "=<0|1|2|3> "
 	ENA_DEVARG_MISS_TXC_TO "="
 	ENA_DEVARG_CONTROL_PATH_POLL_INTERVAL "=<0-1000>");
 RTE_LOG_REGISTER_SUFFIX(ena_logtype_init, init, NOTICE);
@@ -4217,17 +4191,6 @@ ena_mp_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
 	return rte_mp_reply(&mp_rsp, peer);
 }
 
-static ena_llq_policy ena_define_llq_hdr_policy(struct ena_adapter *adapter)
-{
-	if (!adapter->enable_llq)
-		return ENA_LLQ_POLICY_DISABLED;
-	if (adapter->use_large_llq_hdr)
-		return ENA_LLQ_POLICY_LARGE;
-	if (adapter->use_normal_llq_hdr)
-		return ENA_LLQ_POLICY_NORMAL;
-	return ENA_LLQ_POLICY_RECOMMENDED;
-}
-
 static bool ena_use_large_llq_hdr(struct ena_adapter *adapter, uint8_t recommended_entry_size)
 {
 	if (adapter->llq_header_policy == ENA_LLQ_POLICY_LARGE) {
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 7d82d222ce..fe7d4a2d65 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -337,9 +337,6 @@ struct ena_adapter {
 	uint32_t active_aenq_groups;
 
 	bool trigger_reset;
-	bool enable_llq;
-	bool use_large_llq_hdr;
-	bool use_normal_llq_hdr;
 	ena_llq_policy llq_header_policy;
 
 	uint32_t last_tx_comp_qid;
-- 
2.17.1