From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michal Krawczyk <mk@semihalf.com>
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 Michal Krawczyk <mk@semihalf.com>
Date: Wed, 8 Apr 2020 10:29:10 +0200
Message-Id: <20200408082921.31000-20-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 19/30] net/ena: add support for large LLQ headers

The default LLQ (Low-latency queue) maximum header size is 96 bytes, which
can be too small for some types of packets, such as IPv6 packets with
multiple extension headers. This can be fixed by using large LLQ headers.

If the device supports larger LLQ headers, the user can enable them by
passing the device argument 'large_llq_hdr' with the value '1'. If the
device doesn't support this feature, the default value (96B) will be used.
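For illustration, a minimal sketch of how an application might enable this
devarg at EAL initialization time; the PCI address 0000:00:06.0 is a
placeholder and the -w (PCI whitelist) syntax follows the EAL conventions
of this release:

/*
 * Illustrative only: enable large LLQ headers for one ENA device via its
 * devargs. Adjust the PCI address to match the device in the system.
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
	char *eal_argv[] = {
		argv[0],
		"-w", "0000:00:06.0,large_llq_hdr=1", /* request 256B LLQ entries */
	};

	(void)argc;

	if (rte_eal_init(3, eal_argv) < 0) {
		fprintf(stderr, "EAL initialization failed\n");
		return 1;
	}

	printf("Probed %u ethdev port(s)\n", rte_eth_dev_count_avail());
	return 0;
}

The same argument can be appended to the device on an application command
line, e.g. testpmd -w 0000:00:06.0,large_llq_hdr=1 -- -i.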
Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
---
v2:
 * Use devargs instead of compilation options

v3:
 * Fix commit log - explain LLQ abbreviation, motivation behind this
   change and mention about new device argument
 * Update copyright date of the modified file
 * Add release notes

 doc/guides/nics/ena.rst                |  10 ++-
 doc/guides/rel_notes/release_20_05.rst |   6 ++
 drivers/net/ena/ena_ethdev.c           | 110 +++++++++++++++++++++++--
 drivers/net/ena/ena_ethdev.h           |   2 +
 4 files changed, 121 insertions(+), 7 deletions(-)

diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index bbf27f235a..0b9622ac85 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -1,5 +1,5 @@
 .. SPDX-License-Identifier: BSD-3-Clause
-   Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+   Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
    All rights reserved.
 
 ENA Poll Mode Driver
@@ -95,6 +95,14 @@ Configuration information
    * **CONFIG_RTE_LIBRTE_ENA_COM_DEBUG** (default n): Enables or disables debug
      logging of low level tx/rx logic in ena_com(base) within the ENA PMD driver.
 
+**Runtime Configuration Parameters**
+
+   * **large_llq_hdr** (default 0)
+
+     Enables or disables usage of large LLQ headers. This option will have
+     effect only if the device also supports large LLQ headers. Otherwise, the
+     default value will be used.
+
 **ENA Configuration Parameters**
 
    * **Number of Queues**
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index 2596269da5..7c73fe8fd5 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -78,6 +78,12 @@ New Features
   * Hierarchial Scheduling with DWRR and SP.
   * Single rate - Two color, Two rate - Three color shaping.
 
+* **Updated Amazon ena driver.**
+
+  Updated ena PMD with new features and improvements, including:
+
+  * Added support for large LLQ (Low-latency queue) headers.
+
 Removed Items
 -------------
 
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index d0cd0e49c8..fdcbe53c1c 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include <rte_kvargs.h>
 
 #include "ena_ethdev.h"
 #include "ena_logs.h"
@@ -82,6 +83,9 @@ struct ena_stats {
 #define ENA_STAT_GLOBAL_ENTRY(stat) \
 	ENA_STAT_ENTRY(stat, dev)
 
+/* Device arguments */
+#define ENA_DEVARG_LARGE_LLQ_HDR "large_llq_hdr"
+
 /*
  * Each rte_memzone should have unique name.
  * To satisfy it, count number of allocation and add it to name.
@@ -231,6 +235,11 @@ static int ena_xstats_get_by_id(struct rte_eth_dev *dev,
 				const uint64_t *ids,
 				uint64_t *values,
 				unsigned int n);
+static int ena_process_bool_devarg(const char *key,
+				   const char *value,
+				   void *opaque);
+static int ena_parse_devargs(struct ena_adapter *adapter,
+			     struct rte_devargs *devargs);
 
 static const struct eth_dev_ops ena_dev_ops = {
 	.dev_configure        = ena_dev_configure,
@@ -842,7 +851,8 @@ static int ena_check_valid_conf(struct ena_adapter *adapter)
 }
 
 static int
-ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx)
+ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
+		       bool use_large_llq_hdr)
 {
 	struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq;
 	struct ena_com_dev *ena_dev = ctx->ena_dev;
@@ -895,6 +905,21 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx)
 	max_rx_queue_size = rte_align32prevpow2(max_rx_queue_size);
 	max_tx_queue_size = rte_align32prevpow2(max_tx_queue_size);
 
+	if (use_large_llq_hdr) {
+		if ((llq->entry_size_ctrl_supported &
+		     ENA_ADMIN_LIST_ENTRY_SIZE_256B) &&
+		    (ena_dev->tx_mem_queue_type ==
+		     ENA_ADMIN_PLACEMENT_POLICY_DEV)) {
+			max_tx_queue_size /= 2;
+			PMD_INIT_LOG(INFO,
+				"Forcing large headers and decreasing maximum TX queue size to %d\n",
+				max_tx_queue_size);
+		} else {
+			PMD_INIT_LOG(ERR,
+				"Forcing large headers failed: LLQ is disabled or device does not support large headers\n");
+		}
+	}
+
 	if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) {
 		PMD_INIT_LOG(ERR, "Invalid queue size");
 		return -EFAULT;
@@ -1594,14 +1619,25 @@ static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer,
 }
 
 static inline void
-set_default_llq_configurations(struct ena_llq_configurations *llq_config)
+set_default_llq_configurations(struct ena_llq_configurations *llq_config,
+	struct ena_admin_feature_llq_desc *llq,
+	bool use_large_llq_hdr)
 {
 	llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER;
-	llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B;
 	llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY;
 	llq_config->llq_num_decs_before_header =
 		ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2;
-	llq_config->llq_ring_entry_size_value = 128;
+
+	if (use_large_llq_hdr &&
+	    (llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B)) {
+		llq_config->llq_ring_entry_size =
+			ENA_ADMIN_LIST_ENTRY_SIZE_256B;
+		llq_config->llq_ring_entry_size_value = 256;
+	} else {
+		llq_config->llq_ring_entry_size =
+			ENA_ADMIN_LIST_ENTRY_SIZE_128B;
+		llq_config->llq_ring_entry_size_value = 128;
+	}
 }
 
 static int
@@ -1740,6 +1776,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	snprintf(adapter->name, ENA_NAME_MAX_LEN, "ena_%d",
 		 adapter->id_number);
 
+	rc = ena_parse_devargs(adapter, pci_dev->device.devargs);
+	if (rc != 0) {
+		PMD_INIT_LOG(CRIT, "Failed to parse devargs\n");
+		goto err;
+	}
+
 	/* device specific initialization routine */
 	rc = ena_device_init(ena_dev, &get_feat_ctx, &wd_state);
 	if (rc) {
@@ -1748,7 +1790,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	}
 	adapter->wd_state = wd_state;
 
-	set_default_llq_configurations(&llq_config);
+	set_default_llq_configurations(&llq_config, &get_feat_ctx.llq,
+		adapter->use_large_llq_hdr);
 	rc = ena_set_queues_placement_policy(adapter, ena_dev,
 					     &get_feat_ctx.llq, &llq_config);
 	if (unlikely(rc)) {
@@ -1766,7 +1809,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
 
 	max_num_io_queues = ena_calc_max_io_queue_num(ena_dev, &get_feat_ctx);
-	rc = ena_calc_io_queue_size(&calc_queue_ctx);
+	rc = ena_calc_io_queue_size(&calc_queue_ctx,
+		adapter->use_large_llq_hdr);
 	if (unlikely((rc != 0) || (max_num_io_queues == 0))) {
 		rc = -EFAULT;
 		goto err_device_destroy;
@@ -2582,6 +2626,59 @@ static int ena_xstats_get_by_id(struct rte_eth_dev *dev,
 	return valid;
 }
 
+static int ena_process_bool_devarg(const char *key,
+				   const char *value,
+				   void *opaque)
+{
+	struct ena_adapter *adapter = opaque;
+	bool bool_value;
+
+	/* Parse the value. */
+	if (strcmp(value, "1") == 0) {
+		bool_value = true;
+	} else if (strcmp(value, "0") == 0) {
+		bool_value = false;
+	} else {
+		PMD_INIT_LOG(ERR,
+			"Invalid value: '%s' for key '%s'. Accepted: '0' or '1'\n",
+			value, key);
+		return -EINVAL;
+	}
+
+	/* Now, assign it to the proper adapter field. */
+	if (strcmp(key, ENA_DEVARG_LARGE_LLQ_HDR) == 0)
+		adapter->use_large_llq_hdr = bool_value;
+
+	return 0;
+}
+
+static int ena_parse_devargs(struct ena_adapter *adapter,
+			     struct rte_devargs *devargs)
+{
+	static const char * const allowed_args[] = {
+		ENA_DEVARG_LARGE_LLQ_HDR,
+	};
+	struct rte_kvargs *kvlist;
+	int rc;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, allowed_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "Invalid device arguments: %s\n",
+			devargs->args);
+		return -EINVAL;
+	}
+
+	rc = rte_kvargs_process(kvlist, ENA_DEVARG_LARGE_LLQ_HDR,
+		ena_process_bool_devarg, adapter);
+
+	rte_kvargs_free(kvlist);
+
+	return rc;
+}
+
 /*********************************************************************
  *  PMD configuration
  *********************************************************************/
@@ -2608,6 +2705,7 @@ static struct rte_pci_driver rte_ena_pmd = {
 RTE_PMD_REGISTER_PCI(net_ena, rte_ena_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_ena, pci_id_ena_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_ena, ENA_DEVARG_LARGE_LLQ_HDR "=<0|1>");
 
 RTE_INIT(ena_init_log)
 {
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 1f320088ac..ed3674b202 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -200,6 +200,8 @@ struct ena_adapter {
 	bool trigger_reset;
 
 	bool wd_state;
+
+	bool use_large_llq_hdr;
 };
 
 #endif /* _ENA_ETHDEV_H_ */
-- 
2.20.1
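For readers unfamiliar with the rte_kvargs API used above, a small,
self-contained sketch of the same parsing pattern; the handler and main()
below are illustrative only and are not part of the driver:

/*
 * Illustrative sketch of the rte_kvargs parse/process/free pattern used by
 * ena_parse_devargs(); standalone code, not driver code.
 */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include <rte_kvargs.h>

#define ENA_DEVARG_LARGE_LLQ_HDR "large_llq_hdr"

static int handle_bool_arg(const char *key, const char *value, void *opaque)
{
	bool *flag = opaque;

	if (strcmp(value, "1") == 0)
		*flag = true;
	else if (strcmp(value, "0") == 0)
		*flag = false;
	else
		return -1; /* only '0' and '1' are accepted */

	printf("key '%s' parsed as %d\n", key, *flag);
	return 0;
}

int main(void)
{
	/* Keys accepted by the parser; the list must be NULL-terminated. */
	static const char * const allowed_args[] = {
		ENA_DEVARG_LARGE_LLQ_HDR,
		NULL,
	};
	/* What the PMD receives when ",large_llq_hdr=1" is appended. */
	const char *devarg_str = "large_llq_hdr=1";
	struct rte_kvargs *kvlist;
	bool use_large_llq_hdr = false;

	kvlist = rte_kvargs_parse(devarg_str, allowed_args);
	if (kvlist == NULL)
		return 1;

	rte_kvargs_process(kvlist, ENA_DEVARG_LARGE_LLQ_HDR,
			   handle_bool_arg, &use_large_llq_hdr);
	rte_kvargs_free(kvlist);

	return use_large_llq_hdr ? 0 : 1;
}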