From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Long Li
CC: dpdk stable
Subject: patch 'net/mana: optimize completion queue by batch processing' has been queued to stable release 22.11.3
Date: Sun, 25 Jun 2023 14:34:09 +0800
Message-ID: <20230625063544.11183-32-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230625063544.11183-1-xuemingl@nvidia.com>
References: <20230625063544.11183-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 22.11.3

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 06/27/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if any rebasing was
needed to apply to the stable branch. If there were code changes for
rebasing (i.e., not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=ecd3e1f354f2732a67dea91eec433cad80d07332

Thanks.

Xueming Li

---
>From ecd3e1f354f2732a67dea91eec433cad80d07332 Mon Sep 17 00:00:00 2001
From: Long Li
Date: Fri, 17 Mar 2023 16:32:44 -0700
Subject: [PATCH] net/mana: optimize completion queue by batch processing
Cc: Xueming Li

[ upstream commit 311246198c5f34ca4c51573ee1811a966da7d8af ]

We can poll completion queues in a batch to speed up completion
processing. Also, the completion data doesn't need to be copied out of
the hardware queue; it can be passed as pointers to be consumed by the
RX/TX code.

Fixes: 517ed6e2d590 ("net/mana: add basic driver with build environment")

Signed-off-by: Long Li
---
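As a reviewing aid (illustrative only, not part of the applied patch),
the snippet below condenses the calling convention this commit
introduces: the caller passes a pre-allocated array of struct gdma_comp
plus a cap, and gdma_poll_completion_queue() returns how many CQEs it
harvested, each carrying a pointer into the hardware CQ ring rather
than a copy. Names are taken from the driver code in the diff below;
error handling is elided.

	uint32_t i, num_comp;

	/* harvest up to num_desc completions in a single call */
	num_comp = gdma_poll_completion_queue(&txq->gdma_cq,
					      txq->gdma_comp_buf,
					      txq->num_desc);

	for (i = 0; i < num_comp; i++) {
		/* cqe_data points at the CQE in the ring; no memcpy */
		struct mana_tx_comp_oob *oob = (struct mana_tx_comp_oob *)
			txq->gdma_comp_buf[i].cqe_data;

		/* ... consume oob, then advance desc_ring_tail ... */
	}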
 drivers/net/mana/gdma.c | 62 ++++++++++++++++++++++-------------------
 drivers/net/mana/mana.c | 22 +++++++++++++++
 drivers/net/mana/mana.h | 25 +++++++----------
 drivers/net/mana/rx.c   | 21 +++++---------
 drivers/net/mana/tx.c   | 11 +++++---
 5 files changed, 80 insertions(+), 61 deletions(-)

diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c
index 7d5bb08927..65685fe236 100644
--- a/drivers/net/mana/gdma.c
+++ b/drivers/net/mana/gdma.c
@@ -252,45 +252,51 @@ mana_ring_doorbell(void *db_page, enum gdma_queue_types queue_type,
 
 /*
  * Poll completion queue for completions.
  */
-int
-gdma_poll_completion_queue(struct mana_gdma_queue *cq, struct gdma_comp *comp)
+uint32_t
+gdma_poll_completion_queue(struct mana_gdma_queue *cq,
+			   struct gdma_comp *gdma_comp, uint32_t max_comp)
 {
 	struct gdma_hardware_completion_entry *cqe;
-	uint32_t head = cq->head % cq->count;
 	uint32_t new_owner_bits, old_owner_bits;
 	uint32_t cqe_owner_bits;
+	uint32_t num_comp = 0;
 	struct gdma_hardware_completion_entry *buffer = cq->buffer;
 
-	cqe = &buffer[head];
-	new_owner_bits = (cq->head / cq->count) & COMPLETION_QUEUE_OWNER_MASK;
-	old_owner_bits = (cq->head / cq->count - 1) &
-			 COMPLETION_QUEUE_OWNER_MASK;
-	cqe_owner_bits = cqe->owner_bits;
+	while (num_comp < max_comp) {
+		cqe = &buffer[cq->head % cq->count];
+		new_owner_bits = (cq->head / cq->count) &
+				 COMPLETION_QUEUE_OWNER_MASK;
+		old_owner_bits = (cq->head / cq->count - 1) &
+				 COMPLETION_QUEUE_OWNER_MASK;
+		cqe_owner_bits = cqe->owner_bits;
+
+		DP_LOG(DEBUG, "comp cqe bits 0x%x owner bits 0x%x",
+		       cqe_owner_bits, old_owner_bits);
+
+		/* No new entry */
+		if (cqe_owner_bits == old_owner_bits)
+			break;
+
+		if (cqe_owner_bits != new_owner_bits) {
+			DRV_LOG(ERR, "CQ overflowed, ID %u cqe 0x%x new 0x%x",
+				cq->id, cqe_owner_bits, new_owner_bits);
+			break;
+		}
 
-	DP_LOG(DEBUG, "comp cqe bits 0x%x owner bits 0x%x",
-	       cqe_owner_bits, old_owner_bits);
+		gdma_comp[num_comp].cqe_data = cqe->dma_client_data;
+		num_comp++;
 
-	if (cqe_owner_bits == old_owner_bits)
-		return 0; /* No new entry */
+		cq->head++;
 
-	if (cqe_owner_bits != new_owner_bits) {
-		DP_LOG(ERR, "CQ overflowed, ID %u cqe 0x%x new 0x%x",
-		       cq->id, cqe_owner_bits, new_owner_bits);
-		return -1;
+		DP_LOG(DEBUG, "comp new 0x%x old 0x%x cqe 0x%x wq %u sq %u head %u",
+		       new_owner_bits, old_owner_bits, cqe_owner_bits,
+		       cqe->wq_num, cqe->is_sq, cq->head);
 	}
 
-	/* Ensure checking owner bits happens before reading from CQE */
+	/* Make sure the CQE owner bits are checked before we access the data
+	 * in CQE
+	 */
 	rte_rmb();
 
-	comp->work_queue_number = cqe->wq_num;
-	comp->send_work_queue = cqe->is_sq;
-
-	memcpy(comp->completion_data, cqe->dma_client_data, GDMA_COMP_DATA_SIZE);
-
-	cq->head++;
-
-	DP_LOG(DEBUG, "comp new 0x%x old 0x%x cqe 0x%x wq %u sq %u head %u",
-	       new_owner_bits, old_owner_bits, cqe_owner_bits,
-	       comp->work_queue_number, comp->send_work_queue, cq->head);
-	return 1;
+	return num_comp;
 }
diff --git a/drivers/net/mana/mana.c b/drivers/net/mana/mana.c
index 8a782c0d63..2463f34c1e 100644
--- a/drivers/net/mana/mana.c
+++ b/drivers/net/mana/mana.c
@@ -487,6 +487,15 @@ mana_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		goto fail;
 	}
 
+	txq->gdma_comp_buf = rte_malloc_socket("mana_txq_comp",
+			sizeof(*txq->gdma_comp_buf) * nb_desc,
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (!txq->gdma_comp_buf) {
+		DRV_LOG(ERR, "failed to allocate txq comp");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
 	ret = mana_mr_btree_init(&txq->mr_btree,
 				 MANA_MR_BTREE_PER_QUEUE_N, socket_id);
 	if (ret) {
@@ -506,6 +515,7 @@ mana_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 
 fail:
+	rte_free(txq->gdma_comp_buf);
 	rte_free(txq->desc_ring);
 	rte_free(txq);
 	return ret;
@@ -518,6 +528,7 @@ mana_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 
 	mana_mr_btree_free(&txq->mr_btree);
 
+	rte_free(txq->gdma_comp_buf);
 	rte_free(txq->desc_ring);
 	rte_free(txq);
 }
@@ -557,6 +568,15 @@ mana_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->desc_ring_head = 0;
 	rxq->desc_ring_tail = 0;
 
+	rxq->gdma_comp_buf = rte_malloc_socket("mana_rxq_comp",
+			sizeof(*rxq->gdma_comp_buf) * nb_desc,
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (!rxq->gdma_comp_buf) {
+		DRV_LOG(ERR, "failed to allocate rxq comp");
+		ret = -ENOMEM;
+		goto fail;
+	}
+
 	ret = mana_mr_btree_init(&rxq->mr_btree,
 				 MANA_MR_BTREE_PER_QUEUE_N, socket_id);
 	if (ret) {
@@ -572,6 +592,7 @@ mana_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	return 0;
 
 fail:
+	rte_free(rxq->gdma_comp_buf);
 	rte_free(rxq->desc_ring);
 	rte_free(rxq);
 	return ret;
@@ -584,6 +605,7 @@ mana_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 
 	mana_mr_btree_free(&rxq->mr_btree);
 
+	rte_free(rxq->gdma_comp_buf);
 	rte_free(rxq->desc_ring);
 	rte_free(rxq);
 }
diff --git a/drivers/net/mana/mana.h b/drivers/net/mana/mana.h
index ce16e7efff..b653e1dd82 100644
--- a/drivers/net/mana/mana.h
+++ b/drivers/net/mana/mana.h
@@ -142,19 +142,6 @@ struct gdma_header {
 #define COMPLETION_QUEUE_OWNER_MASK \
 	((1 << (COMPLETION_QUEUE_ENTRY_OWNER_BITS_SIZE)) - 1)
 
-struct gdma_comp {
-	struct gdma_header gdma_header;
-
-	/* Filled by GDMA core */
-	uint32_t completion_data[GDMA_COMP_DATA_SIZE_IN_UINT32];
-
-	/* Filled by GDMA core */
-	uint32_t work_queue_number;
-
-	/* Filled by GDMA core */
-	bool send_work_queue;
-};
-
 struct gdma_hardware_completion_entry {
 	char dma_client_data[GDMA_COMP_DATA_SIZE];
 	union {
@@ -391,6 +378,11 @@ struct mana_gdma_queue {
 
 #define MANA_MR_BTREE_PER_QUEUE_N	64
 
+struct gdma_comp {
+	/* Filled by GDMA core */
+	char *cqe_data;
+};
+
 struct mana_txq {
 	struct mana_priv *priv;
 	uint32_t num_desc;
@@ -399,6 +391,7 @@ struct mana_txq {
 
 	struct mana_gdma_queue gdma_sq;
 	struct mana_gdma_queue gdma_cq;
+	struct gdma_comp *gdma_comp_buf;
 
 	uint32_t tx_vp_offset;
@@ -433,6 +426,7 @@ struct mana_rxq {
 
 	struct mana_gdma_queue gdma_rq;
 	struct mana_gdma_queue gdma_cq;
+	struct gdma_comp *gdma_comp_buf;
 
 	struct mana_stats stats;
 	struct mana_mr_btree mr_btree;
@@ -476,8 +470,9 @@ uint16_t mana_rx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
 uint16_t mana_tx_burst_removed(void *dpdk_rxq, struct rte_mbuf **pkts,
 			       uint16_t pkts_n);
 
-int gdma_poll_completion_queue(struct mana_gdma_queue *cq,
-			       struct gdma_comp *comp);
+uint32_t gdma_poll_completion_queue(struct mana_gdma_queue *cq,
+				    struct gdma_comp *gdma_comp,
+				    uint32_t max_comp);
 
 int mana_start_rx_queues(struct rte_eth_dev *dev);
 int mana_start_tx_queues(struct rte_eth_dev *dev);
diff --git a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c
index afd153424b..6e1c397be8 100644
--- a/drivers/net/mana/rx.c
+++ b/drivers/net/mana/rx.c
@@ -383,24 +383,17 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	uint8_t wqe_posted = 0;
 	struct mana_rxq *rxq = dpdk_rxq;
 	struct mana_priv *priv = rxq->priv;
-	struct gdma_comp comp;
 	struct rte_mbuf *mbuf;
 	int ret;
+	uint32_t num_pkts;
 
-	while (pkt_received < pkts_n &&
-	       gdma_poll_completion_queue(&rxq->gdma_cq, &comp) == 1) {
-		struct mana_rxq_desc *desc;
-		struct mana_rx_comp_oob *oob =
-			(struct mana_rx_comp_oob *)&comp.completion_data[0];
-
-		if (comp.work_queue_number != rxq->gdma_rq.id) {
-			DP_LOG(ERR, "rxq comp id mismatch wqid=0x%x rcid=0x%x",
-			       comp.work_queue_number, rxq->gdma_rq.id);
-			rxq->stats.errors++;
-			break;
-		}
+	num_pkts = gdma_poll_completion_queue(&rxq->gdma_cq, rxq->gdma_comp_buf, pkts_n);
+	for (uint32_t i = 0; i < num_pkts; i++) {
+		struct mana_rx_comp_oob *oob = (struct mana_rx_comp_oob *)
+			rxq->gdma_comp_buf[i].cqe_data;
+		struct mana_rxq_desc *desc =
+			&rxq->desc_ring[rxq->desc_ring_tail];
 
-		desc = &rxq->desc_ring[rxq->desc_ring_tail];
 		rxq->gdma_rq.tail += desc->wqe_size_in_bu;
 		mbuf = desc->pkt;
diff --git a/drivers/net/mana/tx.c b/drivers/net/mana/tx.c
index b593f98bb1..7f570181ad 100644
--- a/drivers/net/mana/tx.c
+++ b/drivers/net/mana/tx.c
@@ -170,17 +170,20 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct mana_txq *txq = dpdk_txq;
 	struct mana_priv *priv = txq->priv;
-	struct gdma_comp comp;
 	int ret;
 	void *db_page;
 	uint16_t pkt_sent = 0;
+	uint32_t num_comp;
 
 	/* Process send completions from GDMA */
-	while (gdma_poll_completion_queue(&txq->gdma_cq, &comp) == 1) {
+	num_comp = gdma_poll_completion_queue(&txq->gdma_cq,
+			txq->gdma_comp_buf, txq->num_desc);
+
+	for (uint32_t i = 0; i < num_comp; i++) {
 		struct mana_txq_desc *desc =
 			&txq->desc_ring[txq->desc_ring_tail];
-		struct mana_tx_comp_oob *oob =
-			(struct mana_tx_comp_oob *)&comp.completion_data[0];
+		struct mana_tx_comp_oob *oob = (struct mana_tx_comp_oob *)
+			txq->gdma_comp_buf[i].cqe_data;
 
 		if (oob->cqe_hdr.cqe_type != CQE_TX_OKAY) {
 			DP_LOG(ERR,
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if
non-empty):
---
--- -	2023-06-25 14:31:59.369989200 +0800
+++ 0031-net-mana-optimize-completion-queue-by-batch-processi.patch	2023-06-25 14:31:58.315773900 +0800
@@ -1 +1 @@
-From 311246198c5f34ca4c51573ee1811a966da7d8af Mon Sep 17 00:00:00 2001
+From ecd3e1f354f2732a67dea91eec433cad80d07332 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 311246198c5f34ca4c51573ee1811a966da7d8af ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
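
For anyone who wants to double-check the rebase locally before the
06/27/23 deadline, something along these lines should work; the fetch
URL is assumed to match the cgit link above (adjust for your mirror):

	# fetch the staging branch, then inspect the queued commit
	git fetch https://git.dpdk.org/dpdk-stable 22.11-staging
	git show ecd3e1f354f2732a67dea91eec433cad80d07332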