From: Michal Krawczyk
To: ferruh.yigit@intel.com
Cc: dev@dpdk.org, upstream@semihalf.com, shaibran@amazon.com, ndagan@amazon.com, igorch@amazon.com, Michal Krawczyk
Date: Fri, 15 Oct 2021 18:26:56 +0200
Message-Id: <20211015162701.16324-3-mk@semihalf.com>
In-Reply-To: <20211015162701.16324-1-mk@semihalf.com>
References: <20211014201858.9571-1-mk@semihalf.com> <20211015162701.16324-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v2 2/7] net/ena: support Tx/Rx free thresholds

The caller can pass a Tx or Rx free threshold value in the configuration
structure of each ring. It determines when the Tx/Rx function should start
cleaning up or refilling the descriptors. The ENA PMD was ignoring this
value and doing its own calculations.
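For reference, this is roughly how an application opts in through the
generic ethdev API. A minimal sketch, not part of this patch: the helper
name, queue index 0, and the 64/32 threshold values are made up for
illustration; any nonzero threshold overrides the driver's fallback.

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Hypothetical helper, illustration only. */
    static int
    setup_queues_with_thresholds(uint16_t port_id, uint16_t nb_desc,
                                 struct rte_mempool *mb_pool)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        struct rte_eth_rxconf rxconf;
        int rc;

        rc = rte_eth_dev_info_get(port_id, &dev_info);
        if (rc != 0)
            return rc;

        /* Start from the PMD defaults, then override the thresholds. */
        txconf = dev_info.default_txconf;
        txconf.tx_free_thresh = 64;
        rxconf = dev_info.default_rxconf;
        rxconf.rx_free_thresh = 32;

        rc = rte_eth_tx_queue_setup(port_id, 0, nb_desc,
                                    rte_socket_id(), &txconf);
        if (rc != 0)
            return rc;

        return rte_eth_rx_queue_setup(port_id, 0, nb_desc,
                                      rte_socket_id(), &rxconf, mb_pool);
    }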
Now the user can configure ENA's behavior using this parameter. If the
value is not set, the driver will continue with its old behavior and use
its own threshold value. The default is not provided by the driver in
ena_infos_get(), as it is determined dynamically based on the requested
ring size.

Note that the NULL check for tx_conf was removed from ena_tx_queue_setup(),
because at this point the configuration is either provided by the user or
filled in with defaults by the upper (rte_ethdev) layer.

The Tx threshold is not used as the Tx cleanup budget, as it can be
inadequate for the burst size. Instead, the PMD now tries to release mbufs
until the ring is depleted.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Shai Brandes
---
v2:
* Fixed calculation of the default tx_free_thresh when it is not provided
  by the user: RTE_MIN was replaced with RTE_MAX.

 doc/guides/rel_notes/release_21_11.rst |  7 ++++
 drivers/net/ena/ena_ethdev.c           | 44 ++++++++++++++++++--------
 drivers/net/ena/ena_ethdev.h           |  5 +++
 3 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1f033cf80c..45d5cbdc78 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -93,6 +93,13 @@ New Features
 
   * Disabled secondary process support.
 
+* **Updated Amazon ENA PMD.**
+
+  Updated the Amazon ENA PMD. The new driver version (v2.5.0) introduced
+  bug fixes and improvements, including:
+
+  * Support for the tx_free_thresh and rx_free_thresh configuration parameters.
+
 * **Updated Broadcom bnxt PMD.**
 
   * Added flow offload support for Thor.
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 227831a98c..87216f75a9 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1140,6 +1140,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	struct ena_ring *txq = NULL;
 	struct ena_adapter *adapter = dev->data->dev_private;
 	unsigned int i;
+	uint16_t dyn_thresh;
 
 	txq = &adapter->tx_ring[queue_idx];
 
@@ -1206,10 +1207,18 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	for (i = 0; i < txq->ring_size; i++)
 		txq->empty_tx_reqs[i] = i;
 
-	if (tx_conf != NULL) {
-		txq->offloads =
-			tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+	txq->offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	/* Check if caller provided the Tx cleanup threshold value. */
+	if (tx_conf->tx_free_thresh != 0) {
+		txq->tx_free_thresh = tx_conf->tx_free_thresh;
+	} else {
+		dyn_thresh = txq->ring_size -
+			txq->ring_size / ENA_REFILL_THRESH_DIVIDER;
+		txq->tx_free_thresh = RTE_MAX(dyn_thresh,
+			txq->ring_size - ENA_REFILL_THRESH_PACKET);
 	}
+
 	/* Store pointer to this queue in upper layer */
 	txq->configured = 1;
 	dev->data->tx_queues[queue_idx] = txq;
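To make the Tx fallback above concrete, a worked example (a sketch,
assuming the constants keep their existing driver values of
ENA_REFILL_THRESH_DIVIDER = 8 and ENA_REFILL_THRESH_PACKET = 256) for a
hypothetical 1024-descriptor Tx ring:

    dyn_thresh     = 1024 - 1024 / 8   = 896
    lower bound    = 1024 - 256        = 768
    tx_free_thresh = RTE_MAX(896, 768) = 896

Cleanup is then triggered once fewer than 896 descriptors are free, i.e.
once more than 128 are in flight. On larger rings the RTE_MAX() term wins
and keeps the in-flight allowance at no more than 256 descriptors.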
@@ -1228,6 +1237,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	struct ena_ring *rxq = NULL;
 	size_t buffer_size;
 	int i;
+	uint16_t dyn_thresh;
 
 	rxq = &adapter->rx_ring[queue_idx];
 	if (rxq->configured) {
@@ -1307,6 +1317,14 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->offloads = rx_conf->offloads |
 		dev->data->dev_conf.rxmode.offloads;
 
+	if (rx_conf->rx_free_thresh != 0) {
+		rxq->rx_free_thresh = rx_conf->rx_free_thresh;
+	} else {
+		dyn_thresh = rxq->ring_size / ENA_REFILL_THRESH_DIVIDER;
+		rxq->rx_free_thresh = RTE_MIN(dyn_thresh,
+			(uint16_t)(ENA_REFILL_THRESH_PACKET));
+	}
+
 	/* Store pointer to this queue in upper layer */
 	rxq->configured = 1;
 	dev->data->rx_queues[queue_idx] = rxq;
@@ -2134,7 +2152,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 {
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int free_queue_entries;
-	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
@@ -2216,12 +2233,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->next_to_clean = next_to_clean;
 
 	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
-	refill_threshold =
-		RTE_MIN(rx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
-		(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (free_queue_entries > refill_threshold) {
+	if (free_queue_entries >= rx_ring->rx_free_thresh) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
 		ena_populate_rx_queue(rx_ring, free_queue_entries);
 	}
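The Rx fallback mirrors the Tx one from the other side (again a sketch
under the same assumed constants of 8 and 256), for a hypothetical
1024-descriptor Rx ring:

    dyn_thresh     = 1024 / 8          = 128
    rx_free_thresh = RTE_MIN(128, 256) = 128

So the burst refill in eth_ena_recv_pkts() fires once at least 128
descriptors are free, amortizing the doorbell write and memory barrier
over many packets, while RTE_MIN() bounds the threshold at 256 on very
large rings.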
@@ -2588,12 +2602,12 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
 
 static void ena_tx_cleanup(struct ena_ring *tx_ring)
 {
-	unsigned int cleanup_budget;
 	unsigned int total_tx_descs = 0;
+	uint16_t cleanup_budget;
 	uint16_t next_to_clean = tx_ring->next_to_clean;
 
-	cleanup_budget = RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
-		(unsigned int)ENA_REFILL_THRESH_PACKET);
+	/* Attempt to release all Tx descriptors (ring_size - 1 -> size_mask) */
+	cleanup_budget = tx_ring->size_mask;
 
 	while (likely(total_tx_descs < cleanup_budget)) {
 		struct rte_mbuf *mbuf;
@@ -2634,6 +2648,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				  uint16_t nb_pkts)
 {
 	struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue);
+	int available_desc;
 	uint16_t sent_idx = 0;
 
 #ifdef RTE_ETHDEV_DEBUG_TX
@@ -2653,8 +2668,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			tx_ring->size_mask)]);
 	}
 
-	tx_ring->tx_stats.available_desc =
-		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	tx_ring->tx_stats.available_desc = available_desc;
 
 	/* If there are ready packets to be xmitted... */
 	if (likely(tx_ring->pkts_without_db)) {
@@ -2664,7 +2679,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_ring->pkts_without_db = false;
 	}
 
-	ena_tx_cleanup(tx_ring);
+	if (available_desc < tx_ring->tx_free_thresh)
+		ena_tx_cleanup(tx_ring);
 
 	tx_ring->tx_stats.available_desc =
 		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 26d425a893..176d713dff 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -142,6 +142,11 @@ struct ena_ring {
 	struct ena_com_io_cq *ena_com_io_cq;
 	struct ena_com_io_sq *ena_com_io_sq;
 
+	union {
+		uint16_t tx_free_thresh;
+		uint16_t rx_free_thresh;
+	};
+
 	struct ena_com_rx_buf_info ena_bufs[ENA_PKT_MAX_BUFS]
 		__rte_cache_aligned;
-- 
2.25.1
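For quick testing, testpmd already plumbs both values through its
--txfreet and --rxfreet options; the numbers below are arbitrary examples,
and the EAL arguments are placeholders:

    dpdk-testpmd -l 0-1 -n 4 -- --txfreet=64 --rxfreet=32

Leaving both options at their default of 0 keeps the driver's dynamic
thresholds described above.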