From mboxrd@z Thu Jan 1 00:00:00 1970
From: Satha Rao
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Cc: dev@dpdk.org
Subject: [PATCH] net/cnxk: flush SQ before configuring MTU
Date: Thu, 15 Jun 2023 01:04:23 -0400
Message-ID: <1686805463-19231-1-git-send-email-skoteshwar@marvell.com>
List-Id: DPDK patches and discussions

From: Satha Rao

Configuring the MTU to a lower value while larger packets are still
enqueued causes a runtime failure. To avoid error interrupts, flush all
SQs of this port before configuring the new MTU.
Signed-off-by: Satha Rao
---
 drivers/net/cnxk/cnxk_ethdev.h     |  1 +
 drivers/net/cnxk/cnxk_ethdev_ops.c | 47 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index e280d6c..45460ae 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -446,6 +446,7 @@ int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
 		   struct rte_pci_device *pci_dev);
 int cnxk_nix_remove(struct rte_pci_device *pci_dev);
 int cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
+int cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev);
 int cnxk_nix_mc_addr_list_configure(struct rte_eth_dev *eth_dev,
 				    struct rte_ether_addr *mc_addr_set,
 				    uint32_t nb_mc_addr);
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index bce6d59..da5ee19 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -496,6 +496,44 @@
 }
 
 int
+cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_eth_dev_data *data = eth_dev->data;
+	int i, rc = 0;
+
+	/* Flush all tx queues */
+	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+		struct roc_nix_sq *sq = &dev->sqs[i];
+
+		if (eth_dev->data->tx_queues[i] == NULL)
+			continue;
+
+		rc = roc_nix_tm_sq_aura_fc(sq, false);
+		if (rc) {
+			plt_err("Failed to disable sqb aura fc, rc=%d", rc);
+			goto exit;
+		}
+
+		/* Wait for sq entries to be flushed */
+		rc = roc_nix_tm_sq_flush_spin(sq);
+		if (rc) {
+			plt_err("Failed to drain sq, rc=%d\n", rc);
+			goto exit;
+		}
+		if (data->tx_queue_state[i] == RTE_ETH_QUEUE_STATE_STARTED) {
+			rc = roc_nix_tm_sq_aura_fc(sq, true);
+			if (rc) {
+				plt_err("Failed to enable sq aura fc, txq=%u, rc=%d", i, rc);
+				goto exit;
+			}
+		}
+	}
+exit:
+	return rc;
+}
+
+int
 cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 {
 	uint32_t old_frame_size, frame_size = mtu + CNXK_NIX_L2_OVERHEAD;
@@ -538,6 +576,15 @@
 		goto exit;
 	}
 
+	/* if new MTU was smaller than old one, then flush all SQs before MTU change */
+	if (old_frame_size > frame_size) {
+		if (data->dev_started) {
+			plt_err("Reducing MTU is not supported when device started");
+			goto exit;
+		}
+		cnxk_nix_sq_flush(eth_dev);
+	}
+
 	frame_size -= RTE_ETHER_CRC_LEN;
 
 	/* Update mtu on Tx */
-- 
1.8.3.1