From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bhagyada Modali <bhagyada.modali@amd.com>
To: ,
CC: , , Bhagyada Modali
Subject: [PATCH v2] net/axgbe: support segmented Tx
Date: Thu, 8 Sep 2022 14:15:03 -0400
Message-ID: <20220908181503.7584-1-bhagyada.modali@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220908165817.6536-1-bhagyada.modali@amd.com>
References: <20220908165817.6536-1-bhagyada.modali@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Enable segmented Tx support and add jumbo packet transmit capability.

Signed-off-by: Bhagyada Modali <bhagyada.modali@amd.com>
---
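Usage note (not part of the commit message): a minimal sketch of how an
application opts into the segmented Tx path once this patch is applied.
port_id, nb_rxq and nb_txq are placeholder variables; the structures and
calls are the standard DPDK ethdev API.

    struct rte_eth_conf port_conf = { 0 };
    struct rte_eth_dev_info dev_info;

    /* Query the PMD's Tx offload capabilities; with this patch axgbe
     * also reports RTE_ETH_TX_OFFLOAD_MULTI_SEGS here.
     */
    rte_eth_dev_info_get(port_id, &dev_info);

    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

    /* axgbe_dev_tx_queue_setup() sees the offload in
     * dev_conf.txmode.offloads and selects axgbe_xmit_pkts_seg(), so
     * chained (multi-segment) mbufs such as jumbo frames are sent with
     * one descriptor per segment.
     */
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

When the offload is not requested, the driver keeps the existing scalar
and vector Tx paths, so single-segment behaviour is unchanged.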
 drivers/net/axgbe/axgbe_ethdev.c |   1 +
 drivers/net/axgbe/axgbe_ethdev.h |   1 +
 drivers/net/axgbe/axgbe_rxtx.c   | 213 ++++++++++++++++++++++++++++++-
 drivers/net/axgbe/axgbe_rxtx.h   |   4 +
 4 files changed, 218 insertions(+), 1 deletion(-)

diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index e6822fa711..b071e4e460 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1228,6 +1228,7 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
 		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index e06d40f9eb..7f19321d88 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -582,6 +582,7 @@ struct axgbe_port {
 	unsigned int tx_pbl;
 	unsigned int tx_osp_mode;
 	unsigned int tx_max_fifo_size;
+	unsigned int multi_segs_tx;
 
 	/* Rx settings */
 	unsigned int rx_sf_mode;
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 8b43e8160b..881ffa01db 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -544,6 +544,7 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	unsigned int tsize;
 	const struct rte_memzone *tz;
 	uint64_t offloads;
+	struct rte_eth_dev_data *dev_data = dev->data;
 
 	tx_desc = nb_desc;
 	pdata = dev->data->dev_private;
@@ -611,7 +612,13 @@ int axgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (!pdata->tx_queues)
 		pdata->tx_queues = dev->data->tx_queues;
 
-	if (txq->vector_disable ||
+	if ((dev_data->dev_conf.txmode.offloads &
+			RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
+		pdata->multi_segs_tx = true;
+
+	if (pdata->multi_segs_tx)
+		dev->tx_pkt_burst = &axgbe_xmit_pkts_seg;
+	else if (txq->vector_disable ||
 			rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_128)
 		dev->tx_pkt_burst = &axgbe_xmit_pkts;
 	else
@@ -762,6 +769,29 @@ void axgbe_dev_enable_tx(struct rte_eth_dev *dev)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_TCR, TE, 1);
 }
 
+/* Free Tx conformed mbufs segments */
+static void
+axgbe_xmit_cleanup_seg(struct axgbe_tx_queue *txq)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->dirty);
+	while (txq->cur != txq->dirty) {
+		if (unlikely(idx == txq->nb_desc))
+			idx = 0;
+		desc = &txq->desc[idx];
+		/* Check for ownership */
+		if (AXGMAC_GET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN))
+			return;
+		memset((void *)&desc->desc2, 0, 8);
+		/* Free mbuf */
+		rte_pktmbuf_free_seg(txq->sw_ring[idx]);
+		txq->sw_ring[idx++] = NULL;
+		txq->dirty++;
+	}
+}
+
 /* Free Tx conformed mbufs */
 static void axgbe_xmit_cleanup(struct axgbe_tx_queue *txq)
 {
@@ -854,6 +884,187 @@ static int axgbe_xmit_hw(struct axgbe_tx_queue *txq,
 	return 0;
 }
 
+/* Tx Descriptor formation for segmented mbuf
+ * Each mbuf will require multiple descriptors
+ */
+
+static int
+axgbe_xmit_hw_seg(struct axgbe_tx_queue *txq,
+		struct rte_mbuf *mbuf)
+{
+	volatile struct axgbe_tx_desc *desc;
+	uint16_t idx;
+	uint64_t mask;
+	int start_index;
+	uint32_t pkt_len = 0;
+	int nb_desc_free;
+	struct rte_mbuf *tx_pkt;
+
+	nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+
+	if (mbuf->nb_segs > nb_desc_free) {
+		axgbe_xmit_cleanup_seg(txq);
+		nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+		if (unlikely(mbuf->nb_segs > nb_desc_free))
+			return RTE_ETH_TX_DESC_UNAVAIL;
+	}
+
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	desc = &txq->desc[idx];
+	/* Saving the start index for setting the OWN bit finally */
+	start_index = idx;
+
+	tx_pkt = mbuf;
+	/* Max_pkt len = 9018 ; need to update it according to Jumbo pkt size */
+	pkt_len = tx_pkt->pkt_len;
+
+	/* Update buffer address and length */
+	desc->baddr = rte_mbuf_data_iova(tx_pkt);
+	AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, HL_B1L,
+			tx_pkt->data_len);
+	/* Total msg length to transmit */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FL,
+			tx_pkt->pkt_len);
+	/* Timestamp enablement check */
+	if (mbuf->ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST)
+		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, TTSE, 1);
+
+	rte_wmb();
+	/* Mark it as First Descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, FD, 1);
+	/* Mark it as a NORMAL descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+	/* configure h/w Offload */
+	mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+	if (mask == RTE_MBUF_F_TX_TCP_CKSUM || mask == RTE_MBUF_F_TX_UDP_CKSUM)
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x3);
+	else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CIC, 0x1);
+	rte_wmb();
+
+	if (mbuf->ol_flags & (RTE_MBUF_F_TX_VLAN | RTE_MBUF_F_TX_QINQ)) {
+		/* Mark it as a CONTEXT descriptor */
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
+				CTXT, 1);
+		/* Set the VLAN tag */
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
+				VT, mbuf->vlan_tci);
+		/* Indicate this descriptor contains the VLAN tag */
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_CONTEXT_DESC3,
+				VLTV, 1);
+		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, VTIR,
+				TX_NORMAL_DESC2_VLAN_INSERT);
+	} else {
+		AXGMAC_SET_BITS_LE(desc->desc2, TX_NORMAL_DESC2, VTIR, 0x0);
+	}
+	rte_wmb();
+
+	/* Save mbuf */
+	txq->sw_ring[idx] = tx_pkt;
+	/* Update current index*/
+	txq->cur++;
+
+	tx_pkt = tx_pkt->next;
+
+	while (tx_pkt != NULL) {
+		idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+		desc = &txq->desc[idx];
+
+		/* Update buffer address and length */
+		desc->baddr = rte_mbuf_data_iova(tx_pkt);
+
+		AXGMAC_SET_BITS_LE(desc->desc2,
+				TX_NORMAL_DESC2, HL_B1L, tx_pkt->data_len);
+
+		rte_wmb();
+
+		/* Mark it as a NORMAL descriptor */
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, CTXT, 0);
+		/* configure h/w Offload */
+		mask = mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK;
+		if (mask == RTE_MBUF_F_TX_TCP_CKSUM ||
+				mask == RTE_MBUF_F_TX_UDP_CKSUM)
+			AXGMAC_SET_BITS_LE(desc->desc3,
+					TX_NORMAL_DESC3, CIC, 0x3);
+		else if (mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM)
+			AXGMAC_SET_BITS_LE(desc->desc3,
+					TX_NORMAL_DESC3, CIC, 0x1);
+
+		rte_wmb();
+
+		/* Set OWN bit */
+		AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN, 1);
+		rte_wmb();
+
+		/* Save mbuf */
+		txq->sw_ring[idx] = tx_pkt;
+		/* Update current index*/
+		txq->cur++;
+
+		tx_pkt = tx_pkt->next;
+	}
+
+	/* Set LD bit for the last descriptor */
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, LD, 1);
+	rte_wmb();
+
+	/* Update stats */
+	txq->bytes += pkt_len;
+
+	/* Set OWN bit for the first descriptor */
+	desc = &txq->desc[start_index];
+	AXGMAC_SET_BITS_LE(desc->desc3, TX_NORMAL_DESC3, OWN, 1);
+	rte_wmb();
+
+	return 0;
+}
+
+/* Eal supported tx wrapper- Segmented*/
+uint16_t
+axgbe_xmit_pkts_seg(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	struct axgbe_tx_queue *txq;
+	uint16_t nb_desc_free;
+	uint16_t nb_pkt_sent = 0;
+	uint16_t idx;
+	uint32_t tail_addr;
+	struct rte_mbuf *mbuf = NULL;
+
+	if (unlikely(nb_pkts == 0))
+		return nb_pkts;
+
+	txq = (struct axgbe_tx_queue *)tx_queue;
+
+	nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+	if (unlikely(nb_desc_free <= txq->free_thresh)) {
+		axgbe_xmit_cleanup_seg(txq);
+		nb_desc_free = txq->nb_desc - (txq->cur - txq->dirty);
+		if (unlikely(nb_desc_free == 0))
+			return 0;
+	}
+
+	while (nb_pkts--) {
+		mbuf = *tx_pkts++;
+
+		if (axgbe_xmit_hw_seg(txq, mbuf))
+			goto out;
+		nb_pkt_sent++;
+	}
+out:
+	/* Sync read and write */
+	rte_mb();
+	idx = AXGBE_GET_DESC_IDX(txq, txq->cur);
+	tail_addr = low32_value(txq->ring_phys_addr +
+			idx * sizeof(struct axgbe_tx_desc));
+	/* Update tail reg with next immediate address to kick Tx DMA channel*/
+	AXGMAC_DMA_IOWRITE(txq, DMA_CH_TDTR_LO, tail_addr);
+	txq->pkts += nb_pkt_sent;
+	return nb_pkt_sent;
+}
+
 /* Eal supported tx wrapper*/
 uint16_t
 axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/axgbe/axgbe_rxtx.h b/drivers/net/axgbe/axgbe_rxtx.h
index 2a330339cd..c19d6d9db1 100644
--- a/drivers/net/axgbe/axgbe_rxtx.h
+++ b/drivers/net/axgbe/axgbe_rxtx.h
@@ -167,6 +167,10 @@ int axgbe_dev_fw_version_get(struct rte_eth_dev *eth_dev,
 
 uint16_t axgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
+
+uint16_t axgbe_xmit_pkts_seg(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts);
+
 uint16_t axgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
 		uint16_t nb_pkts);
-- 
2.25.1