From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Wei Hu
CC: Long Li, dpdk stable
Subject: patch 'net/mana: add 32-bit short doorbell' has been queued to stable release 22.11.4
Date: Sun, 22 Oct 2023 22:20:46 +0800
Message-ID: <20231022142250.10324-18-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231022142250.10324-1-xuemingl@nvidia.com>
References: <20231022142250.10324-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 22.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/15/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs
the patch applied to the branch. This will indicate if there was any
rebasing needed to apply to the stable branch. If there were code changes
for rebasing (ie: not only metadata diffs), please double check that the
rebase was correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=0e82ee1363f32a269db86628489350b13de2c97a

Thanks.

Xueming Li

---
>From 0e82ee1363f32a269db86628489350b13de2c97a Mon Sep 17 00:00:00 2001
From: Wei Hu
Date: Thu, 21 Sep 2023 08:34:42 +0000
Subject: [PATCH] net/mana: add 32-bit short doorbell
Cc: Xueming Li

[ upstream commit 26c6bdf3d1169e6e9ab04691a1088937137a6d5c ]

Add 32-bit short doorbell support. Ring short doorbells when running
in 32-bit applications.

The mana hardware supports both 32-bit and 64-bit doorbells on the
same platform. 32-bit applications cannot use 64-bit doorbells.
64-bit applications can use 32-bit doorbells, but performance would
suffer greatly, so it is not recommended.
Signed-off-by: Wei Hu
Acked-by: Long Li
---
 .mailmap                |  1 +
 drivers/net/mana/gdma.c | 92 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/mana/mana.h | 26 ++++++++++++
 drivers/net/mana/rx.c   | 45 ++++++++++++++++++++
 drivers/net/mana/tx.c   | 25 +++++++++++
 5 files changed, 189 insertions(+)

diff --git a/.mailmap b/.mailmap
index 244b8d2544..bde21e6e3c 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1442,6 +1442,7 @@ Wei Dai
 Weifeng Li
 Weiguo Li
 Wei Huang
+Wei Hu
 Wei Hu (Xavier)
 WeiJie Zhuang
 Weiliang Luo
diff --git a/drivers/net/mana/gdma.c b/drivers/net/mana/gdma.c
index 65685fe236..7f66a7a7cf 100644
--- a/drivers/net/mana/gdma.c
+++ b/drivers/net/mana/gdma.c
@@ -166,6 +166,97 @@ gdma_post_work_request(struct mana_gdma_queue *queue,
 	return 0;
 }
 
+#ifdef RTE_ARCH_32
+union gdma_short_doorbell_entry {
+	uint32_t     as_uint32;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* Number of CQEs */
+		uint32_t id		: 12;
+		uint32_t reserved	: 3;
+		uint32_t arm		: 1;
+	} cq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
+		uint32_t id		: 12;
+		uint32_t reserved	: 4;
+	} rq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* In number of bytes */
+		uint32_t id		: 12;
+		uint32_t reserved	: 4;
+	} sq;
+
+	struct {
+		uint32_t tail_ptr_incr	: 16; /* Number of EQEs */
+		uint32_t id		: 12;
+		uint32_t reserved	: 3;
+		uint32_t arm		: 1;
+	} eq;
+}; /* HW DATA */
+
+enum {
+	DOORBELL_SHORT_OFFSET_SQ = 0x10,
+	DOORBELL_SHORT_OFFSET_RQ = 0x410,
+	DOORBELL_SHORT_OFFSET_CQ = 0x810,
+	DOORBELL_SHORT_OFFSET_EQ = 0xFF0,
+};
+
+/*
+ * Write to hardware doorbell to notify new activity.
+ */
+int
+mana_ring_short_doorbell(void *db_page, enum gdma_queue_types queue_type,
+			 uint32_t queue_id, uint32_t tail_incr, uint8_t arm)
+{
+	uint8_t *addr = db_page;
+	union gdma_short_doorbell_entry e = {};
+
+	if ((queue_id & ~GDMA_SHORT_DB_QID_MASK) ||
+	    (tail_incr & ~GDMA_SHORT_DB_INC_MASK)) {
+		DP_LOG(ERR, "%s: queue_id %u or "
+		       "tail_incr %u overflowed, queue type %d",
+		       __func__, queue_id, tail_incr, queue_type);
+		return -EINVAL;
+	}
+
+	switch (queue_type) {
+	case GDMA_QUEUE_SEND:
+		e.sq.id = queue_id;
+		e.sq.tail_ptr_incr = tail_incr;
+		addr += DOORBELL_SHORT_OFFSET_SQ;
+		break;
+
+	case GDMA_QUEUE_RECEIVE:
+		e.rq.id = queue_id;
+		e.rq.tail_ptr_incr = tail_incr;
+		addr += DOORBELL_SHORT_OFFSET_RQ;
+		break;
+
+	case GDMA_QUEUE_COMPLETION:
+		e.cq.id = queue_id;
+		e.cq.tail_ptr_incr = tail_incr;
+		e.cq.arm = arm;
+		addr += DOORBELL_SHORT_OFFSET_CQ;
+		break;
+
+	default:
+		DP_LOG(ERR, "Unsupported queue type %d", queue_type);
+		return -1;
+	}
+
+	/* Ensure all writes are done before ringing doorbell */
+	rte_wmb();
+
+	DP_LOG(DEBUG, "db_page %p addr %p queue_id %u type %u tail %u arm %u",
+	       db_page, addr, queue_id, queue_type, tail_incr, arm);
+
+	rte_write32(e.as_uint32, addr);
+	return 0;
+}
+#else
 union gdma_doorbell_entry {
 	uint64_t     as_uint64;
 
@@ -248,6 +339,7 @@ mana_ring_doorbell(void *db_page, enum gdma_queue_types queue_type,
 	rte_write64(e.as_uint64, addr);
 	return 0;
 }
+#endif
 
 /*
  * Poll completion queue for completions.
diff --git a/drivers/net/mana/mana.h b/drivers/net/mana/mana.h
index 7dfacd57f3..cd877afcf9 100644
--- a/drivers/net/mana/mana.h
+++ b/drivers/net/mana/mana.h
@@ -50,6 +50,21 @@ struct mana_shared_data {
 #define MAX_TX_WQE_SIZE 512
 #define MAX_RX_WQE_SIZE 256
 
+/* For 32 bit only */
+#ifdef RTE_ARCH_32
+#define GDMA_SHORT_DB_INC_MASK 0xffff
+#define GDMA_SHORT_DB_QID_MASK 0xfff
+
+#define GDMA_SHORT_DB_MAX_WQE	(0x10000 / GDMA_WQE_ALIGNMENT_UNIT_SIZE)
+
+#define TX_WQE_SHORT_DB_THRESHOLD			\
+	(GDMA_SHORT_DB_MAX_WQE -			\
+	(MAX_TX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))
+#define RX_WQE_SHORT_DB_THRESHOLD			\
+	(GDMA_SHORT_DB_MAX_WQE -			\
+	(MAX_RX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))
+#endif
+
 /* Values from the GDMA specification document, WQE format description */
 #define INLINE_OOB_SMALL_SIZE_IN_BYTES 8
 #define INLINE_OOB_LARGE_SIZE_IN_BYTES 24
@@ -424,6 +439,11 @@ struct mana_rxq {
 	 */
 	uint32_t desc_ring_head, desc_ring_tail;
 
+#ifdef RTE_ARCH_32
+	/* For storing wqe increment count btw each short doorbell ring */
+	uint32_t wqe_cnt_to_short_db;
+#endif
+
 	struct mana_gdma_queue gdma_rq;
 	struct mana_gdma_queue gdma_cq;
 	struct gdma_comp *gdma_comp_buf;
@@ -450,8 +470,14 @@ extern int mana_logtype_init;
 
 #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
+#ifdef RTE_ARCH_32
+int mana_ring_short_doorbell(void *db_page, enum gdma_queue_types queue_type,
+			     uint32_t queue_id, uint32_t tail_incr,
+			     uint8_t arm);
+#else
 int mana_ring_doorbell(void *db_page, enum gdma_queue_types queue_type,
 		       uint32_t queue_id, uint32_t tail, uint8_t arm);
+#endif
 int mana_rq_ring_doorbell(struct mana_rxq *rxq);
 
 int gdma_post_work_request(struct mana_gdma_queue *queue,
diff --git a/drivers/net/mana/rx.c b/drivers/net/mana/rx.c
index fdb56ce05d..371510b473 100644
--- a/drivers/net/mana/rx.c
+++ b/drivers/net/mana/rx.c
@@ -39,10 +39,18 @@ mana_rq_ring_doorbell(struct mana_rxq *rxq)
 	/* Hardware Spec specifies that software client should set 0 for
 	 * wqe_cnt for Receive Queues.
 	 */
+#ifdef RTE_ARCH_32
+	ret = mana_ring_short_doorbell(db_page, GDMA_QUEUE_RECEIVE,
+				       rxq->gdma_rq.id,
+				       rxq->wqe_cnt_to_short_db *
+					GDMA_WQE_ALIGNMENT_UNIT_SIZE,
+				       0);
+#else
 	ret = mana_ring_doorbell(db_page, GDMA_QUEUE_RECEIVE,
 				 rxq->gdma_rq.id,
 				 rxq->gdma_rq.head * GDMA_WQE_ALIGNMENT_UNIT_SIZE,
 				 0);
+#endif
 
 	if (ret)
 		DP_LOG(ERR, "failed to ring RX doorbell ret %d", ret);
@@ -97,6 +105,9 @@ mana_alloc_and_post_rx_wqe(struct mana_rxq *rxq)
 		/* update queue for tracking pending packets */
 		desc->pkt = mbuf;
 		desc->wqe_size_in_bu = wqe_size_in_bu;
+#ifdef RTE_ARCH_32
+		rxq->wqe_cnt_to_short_db += wqe_size_in_bu;
+#endif
 		rxq->desc_ring_head = (rxq->desc_ring_head + 1) % rxq->num_desc;
 	} else {
 		DP_LOG(DEBUG, "failed to post recv ret %d", ret);
@@ -115,12 +126,22 @@ mana_alloc_and_post_rx_wqes(struct mana_rxq *rxq)
 	int ret;
 	uint32_t i;
 
+#ifdef RTE_ARCH_32
+	rxq->wqe_cnt_to_short_db = 0;
+#endif
 	for (i = 0; i < rxq->num_desc; i++) {
 		ret = mana_alloc_and_post_rx_wqe(rxq);
 		if (ret) {
 			DP_LOG(ERR, "failed to post RX ret = %d", ret);
 			return ret;
 		}
+
+#ifdef RTE_ARCH_32
+		if (rxq->wqe_cnt_to_short_db > RX_WQE_SHORT_DB_THRESHOLD) {
+			mana_rq_ring_doorbell(rxq);
+			rxq->wqe_cnt_to_short_db = 0;
+		}
+#endif
 	}
 
 	mana_rq_ring_doorbell(rxq);
@@ -389,6 +410,10 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 	struct rte_mbuf *mbuf;
 	int ret;
 	uint32_t num_pkts;
+#ifdef RTE_ARCH_32
+	rxq->wqe_cnt_to_short_db = 0;
+#endif
+
 	num_pkts = gdma_poll_completion_queue(&rxq->gdma_cq, rxq->gdma_comp_buf, pkts_n);
 	for (uint32_t i = 0; i < num_pkts; i++) {
@@ -470,6 +495,16 @@ drop:
 		}
 
 		wqe_posted++;
+
+#ifdef RTE_ARCH_32
+		/* Ring short doorbell if approaching the wqe increment
+		 * limit.
+		 */
+		if (rxq->wqe_cnt_to_short_db > RX_WQE_SHORT_DB_THRESHOLD) {
+			mana_rq_ring_doorbell(rxq);
+			rxq->wqe_cnt_to_short_db = 0;
+		}
+#endif
 	}
 
 	if (wqe_posted)
@@ -478,6 +513,15 @@ drop:
 	return pkt_received;
 }
 
+#ifdef RTE_ARCH_32
+static int
+mana_arm_cq(struct mana_rxq *rxq __rte_unused, uint8_t arm __rte_unused)
+{
+	DP_LOG(ERR, "Do not support in 32 bit");
+
+	return -ENODEV;
+}
+#else
 static int
 mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
 {
@@ -491,6 +535,7 @@ mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
 	return mana_ring_doorbell(priv->db_page, GDMA_QUEUE_COMPLETION,
 				  rxq->gdma_cq.id, head, arm);
 }
+#endif
 
 int
 mana_rx_intr_enable(struct rte_eth_dev *dev, uint16_t rx_queue_id)
diff --git a/drivers/net/mana/tx.c b/drivers/net/mana/tx.c
index 39cc59550e..3e255157f9 100644
--- a/drivers/net/mana/tx.c
+++ b/drivers/net/mana/tx.c
@@ -174,6 +174,9 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	void *db_page;
 	uint16_t pkt_sent = 0;
 	uint32_t num_comp;
+#ifdef RTE_ARCH_32
+	uint32_t wqe_count = 0;
+#endif
 
 	/* Process send completions from GDMA */
 	num_comp = gdma_poll_completion_queue(&txq->gdma_cq,
@@ -391,6 +394,20 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			DP_LOG(DEBUG, "nb_pkts %u pkt[%d] sent",
 			       nb_pkts, pkt_idx);
+#ifdef RTE_ARCH_32
+			wqe_count += wqe_size_in_bu;
+			if (wqe_count > TX_WQE_SHORT_DB_THRESHOLD) {
+				/* wqe_count approaching to short doorbell
+				 * increment limit. Stop processing further
+				 * more packets and just ring short
+				 * doorbell.
+				 */
+				DP_LOG(DEBUG, "wqe_count %u reaching limit, "
+				       "pkt_sent %d",
+				       wqe_count, pkt_sent);
+				break;
+			}
+#endif
 		} else {
 			DP_LOG(DEBUG, "pkt[%d] failed to post send ret %d",
 			       pkt_idx, ret);
@@ -409,11 +426,19 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	}
 
 	if (pkt_sent) {
+#ifdef RTE_ARCH_32
+		ret = mana_ring_short_doorbell(db_page, GDMA_QUEUE_SEND,
+					       txq->gdma_sq.id,
+					       wqe_count *
+					       GDMA_WQE_ALIGNMENT_UNIT_SIZE,
+					       0);
+#else
 		ret = mana_ring_doorbell(db_page, GDMA_QUEUE_SEND,
 					 txq->gdma_sq.id,
 					 txq->gdma_sq.head *
 					 GDMA_WQE_ALIGNMENT_UNIT_SIZE,
 					 0);
+#endif
 		if (ret)
 			DP_LOG(ERR, "mana_ring_doorbell failed ret %d", ret);
 	}
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-10-22 22:17:35.082493000 +0800
+++ 0017-net-mana-add-32-bit-short-doorbell.patch	2023-10-22 22:17:34.146723700 +0800
@@ -1 +1 @@
-From 26c6bdf3d1169e6e9ab04691a1088937137a6d5c Mon Sep 17 00:00:00 2001
+From 0e82ee1363f32a269db86628489350b13de2c97a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 26c6bdf3d1169e6e9ab04691a1088937137a6d5c ]
@@ -14,2 +16,0 @@
-Cc: stable@dpdk.org
-
@@ -27 +28 @@
-index ec31ab8bd0..276325211c 100644
+index 244b8d2544..bde21e6e3c 100644
@@ -30 +31 @@
-@@ -1481,6 +1481,7 @@ Wei Dai
+@@ -1442,6 +1442,7 @@ Wei Dai
@@ -149 +150 @@
-index 5801491d75..74e37706be 100644
+index 7dfacd57f3..cd877afcf9 100644
@@ -174 +175 @@
-@@ -425,6 +440,11 @@ struct mana_rxq {
+@@ -424,6 +439,11 @@ struct mana_rxq {
@@ -186 +187 @@
-@@ -455,8 +475,14 @@ extern int mana_logtype_init;
+@@ -450,8 +470,14 @@ extern int mana_logtype_init;
@@ -202 +203 @@
-index 14d9085801..fc1587e206 100644
+index fdb56ce05d..371510b473 100644
@@ -257,4 +258,4 @@
-@@ -397,6 +418,10 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
- 	uint32_t i;
- 	int polled = 0;
-
+@@ -389,6 +410,10 @@ mana_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
+ 	struct rte_mbuf *mbuf;
+ 	int ret;
+ 	uint32_t num_pkts;
@@ -265,4 +266,6 @@
- repoll:
- 	/* Polling on new completions if we have no backlog */
- 	if (rxq->comp_buf_idx == rxq->comp_buf_len) {
-@@ -505,6 +530,16 @@ drop:
+
+ 	num_pkts = gdma_poll_completion_queue(&rxq->gdma_cq, rxq->gdma_comp_buf, pkts_n);
+ 	for (uint32_t i = 0; i < num_pkts; i++) {
+@@ -470,6 +495,16 @@ drop:
+ 	}
+
@@ -270,2 +272,0 @@
- 	if (pkt_received == pkts_n)
- 		break;
@@ -284,2 +285,2 @@
- 	rxq->backlog_idx = pkt_idx;
-@@ -525,6 +560,15 @@ drop:
+ 	if (wqe_posted)
+@@ -478,6 +513,15 @@ drop:
@@ -301 +302 @@
-@@ -538,6 +582,7 @@ mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
+@@ -491,6 +535,7 @@ mana_arm_cq(struct mana_rxq *rxq, uint8_t arm)
@@ -310 +311 @@
-index 11ba2ee1ac..1e2508e1f2 100644
+index 39cc59550e..3e255157f9 100644
@@ -313 +314 @@
-@@ -176,6 +176,9 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -174,6 +174,9 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -316 +317 @@
- 	uint32_t num_comp, i;
+ 	uint32_t num_comp;
@@ -323 +324 @@
-@@ -418,6 +421,20 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -391,6 +394,20 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -344 +345 @@
-@@ -436,11 +453,19 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -409,11 +426,19 @@ mana_tx_burst(void *dpdk_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
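
For reference, the threshold arithmetic the patch relies on (a 16-bit
tail_ptr_incr expressed in bytes, hence a cap on how many WQE alignment
units one short doorbell can report) can be checked with the small
standalone sketch below. This is only an illustration, not part of the
patch; it assumes GDMA_WQE_ALIGNMENT_UNIT_SIZE is 32 bytes, a value the
patch itself does not define, and it reuses the masks and thresholds
from the mana.h hunk above.

/*
 * Standalone sketch mirroring the short-doorbell limits used by the
 * patch. GDMA_WQE_ALIGNMENT_UNIT_SIZE is assumed to be 32 bytes here
 * purely for illustration; the driver defines the real value in its
 * own headers.
 */
#include <stdio.h>

#define GDMA_WQE_ALIGNMENT_UNIT_SIZE 32	/* assumed for this example */

/* Values taken from the mana.h hunk above */
#define GDMA_SHORT_DB_INC_MASK 0xffff	/* tail_ptr_incr is a 16-bit field */
#define GDMA_SHORT_DB_MAX_WQE	(0x10000 / GDMA_WQE_ALIGNMENT_UNIT_SIZE)

#define MAX_TX_WQE_SIZE 512
#define MAX_RX_WQE_SIZE 256

#define TX_WQE_SHORT_DB_THRESHOLD			\
	(GDMA_SHORT_DB_MAX_WQE -			\
	(MAX_TX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))
#define RX_WQE_SHORT_DB_THRESHOLD			\
	(GDMA_SHORT_DB_MAX_WQE -			\
	(MAX_RX_WQE_SIZE / GDMA_WQE_ALIGNMENT_UNIT_SIZE))

int
main(void)
{
	/* A short doorbell carries the tail increment in bytes in a 16-bit
	 * field, so one ring can cover at most
	 * 0x10000 / GDMA_WQE_ALIGNMENT_UNIT_SIZE WQE alignment units.
	 */
	printf("WQE units per short doorbell ring: %d\n",
	       GDMA_SHORT_DB_MAX_WQE);

	/* The thresholds sit one maximum-sized WQE below that ceiling,
	 * which is why the rx/tx burst loops ring the doorbell as soon as
	 * the accumulated count crosses the threshold instead of waiting
	 * for the end of the burst.
	 */
	printf("TX threshold: %d units (%d bytes)\n",
	       TX_WQE_SHORT_DB_THRESHOLD,
	       TX_WQE_SHORT_DB_THRESHOLD * GDMA_WQE_ALIGNMENT_UNIT_SIZE);
	printf("RX threshold: %d units (%d bytes)\n",
	       RX_WQE_SHORT_DB_THRESHOLD,
	       RX_WQE_SHORT_DB_THRESHOLD * GDMA_WQE_ALIGNMENT_UNIT_SIZE);

	return 0;
}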