From: Alexander Kozyrev
Subject: [PATCH v2] net/mlx5: replenish MPRQ buffers for miniCQEs
Date: Wed, 1 Nov 2023 16:57:10 +0200
Message-ID: <20231101145710.2356260-1-akozyrev@nvidia.com>
In-Reply-To: <20231101144354.2296367-1-akozyrev@nvidia.com>
References: <20231101144354.2296367-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
List-Id: DPDK patches and discussions

Keep unzipping if the next CQE is a miniCQE array in the rxq_cq_decompress_v() routine only for the non-MPRQ scenario; MPRQ requires buffer replenishment between the miniCQE arrays. Restore the check for an initial compressed CQE for SPRQ, and check that the current CQE is not compressed before copying it as a possible title CQE.
Signed-off-by: Alexander Kozyrev
---
 drivers/net/mlx5/mlx5_rxtx_vec.c         | 56 ++++++++++++++++++------
 drivers/net/mlx5/mlx5_rxtx_vec_altivec.h |  6 ++-
 drivers/net/mlx5/mlx5_rxtx_vec_neon.h    |  6 ++-
 drivers/net/mlx5/mlx5_rxtx_vec_sse.h     |  6 ++-
 4 files changed, 54 insertions(+), 20 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 2363d7ed27..1872bf310c 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -331,6 +331,15 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	}
 	/* At this point, there shouldn't be any remaining packets. */
 	MLX5_ASSERT(rxq->decompressed == 0);
+	/* Go directly to unzipping in case the first CQE is compressed. */
+	if (rxq->cqe_comp_layout) {
+		ret = check_cqe_iteration(cq, rxq->cqe_n, rxq->cq_ci);
+		if (ret == MLX5_CQE_STATUS_SW_OWN &&
+		    (MLX5_CQE_FORMAT(cq->op_own) == MLX5_COMPRESSED)) {
+			comp_idx = 0;
+			goto decompress;
+		}
+	}
 	/* Process all the CQEs */
 	nocmp_n = rxq_cq_process_v(rxq, cq, elts, pkts, pkts_n, err, &comp_idx);
 	/* If no new CQE seen, return without updating cq_db. */
@@ -345,18 +354,23 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	rcvd_pkt += nocmp_n;
 	/* Copy title packet for future compressed sessions. */
 	if (rxq->cqe_comp_layout) {
-		next = &(*rxq->cqes)[rxq->cq_ci & q_mask];
-		ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
-		if (ret != MLX5_CQE_STATUS_SW_OWN ||
-		    MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED)
-			rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
-				   sizeof(struct rte_mbuf));
+		ret = check_cqe_iteration(cq, rxq->cqe_n, rxq->cq_ci);
+		if (ret == MLX5_CQE_STATUS_SW_OWN &&
+		    (MLX5_CQE_FORMAT(cq->op_own) != MLX5_COMPRESSED)) {
+			next = &(*rxq->cqes)[rxq->cq_ci & q_mask];
+			ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
+			if (MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED ||
+			    ret != MLX5_CQE_STATUS_SW_OWN)
+				rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
+					   sizeof(struct rte_mbuf));
+		}
 	}
+decompress:
 	/* Decompress the last CQE if compressed. */
 	if (comp_idx < MLX5_VPMD_DESCS_PER_LOOP) {
 		MLX5_ASSERT(comp_idx == (nocmp_n % MLX5_VPMD_DESCS_PER_LOOP));
 		rxq->decompressed = rxq_cq_decompress_v(rxq, &cq[nocmp_n],
-							&elts[nocmp_n]);
+							&elts[nocmp_n], true);
 		rxq->cq_ci += rxq->decompressed;
 		/* Return more packets if needed. */
 		if (nocmp_n < pkts_n) {
@@ -482,6 +496,15 @@ rxq_burst_mprq_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	}
 	/* At this point, there shouldn't be any remaining packets. */
 	MLX5_ASSERT(rxq->decompressed == 0);
+	/* Go directly to unzipping in case the first CQE is compressed. */
+	if (rxq->cqe_comp_layout) {
+		ret = check_cqe_iteration(cq, rxq->cqe_n, rxq->cq_ci);
+		if (ret == MLX5_CQE_STATUS_SW_OWN &&
+		    (MLX5_CQE_FORMAT(cq->op_own) == MLX5_COMPRESSED)) {
+			comp_idx = 0;
+			goto decompress;
+		}
+	}
 	/* Process all the CQEs */
 	nocmp_n = rxq_cq_process_v(rxq, cq, elts, pkts, pkts_n, err, &comp_idx);
 	/* If no new CQE seen, return without updating cq_db. */
@@ -495,18 +518,23 @@ rxq_burst_mprq_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	rcvd_pkt += cp_pkt;
 	/* Copy title packet for future compressed sessions. */
 	if (rxq->cqe_comp_layout) {
-		next = &(*rxq->cqes)[rxq->cq_ci & q_mask];
-		ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
-		if (ret != MLX5_CQE_STATUS_SW_OWN ||
-		    MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED)
-			rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
-				   sizeof(struct rte_mbuf));
+		ret = check_cqe_iteration(cq, rxq->cqe_n, rxq->cq_ci);
+		if (ret == MLX5_CQE_STATUS_SW_OWN &&
+		    (MLX5_CQE_FORMAT(cq->op_own) != MLX5_COMPRESSED)) {
+			next = &(*rxq->cqes)[rxq->cq_ci & q_mask];
+			ret = check_cqe_iteration(next, rxq->cqe_n, rxq->cq_ci);
+			if (MLX5_CQE_FORMAT(next->op_own) == MLX5_COMPRESSED ||
+			    ret != MLX5_CQE_STATUS_SW_OWN)
+				rte_memcpy(&rxq->title_pkt, elts[nocmp_n - 1],
+					   sizeof(struct rte_mbuf));
+		}
 	}
+decompress:
 	/* Decompress the last CQE if compressed. */
 	if (comp_idx < MLX5_VPMD_DESCS_PER_LOOP) {
 		MLX5_ASSERT(comp_idx == (nocmp_n % MLX5_VPMD_DESCS_PER_LOOP));
 		rxq->decompressed = rxq_cq_decompress_v(rxq, &cq[nocmp_n],
-							&elts[nocmp_n]);
+							&elts[nocmp_n], false);
 		/* Return more packets if needed. */
 		if (nocmp_n < pkts_n) {
 			uint16_t n = rxq->decompressed;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index cccfa7f2d3..b2bbc4ba17 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -68,13 +68,15 @@ rxq_copy_mbuf_v(struct rte_mbuf **elts, struct rte_mbuf **pkts, uint16_t n)
  * @param elts
  *   Pointer to SW ring to be filled. The first mbuf has to be pre-built from
  *   the title completion descriptor to be copied to the rest of mbufs.
+ * @param keep
+ *   Keep unzipping if the next CQE is the miniCQE array.
  *
  * @return
  *   Number of mini-CQEs successfully decompressed.
  */
 static inline uint16_t
 rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
-		    struct rte_mbuf **elts)
+		    struct rte_mbuf **elts, bool keep)
 {
 	volatile struct mlx5_mini_cqe8 *mcq =
 		(void *)&(cq + !rxq->cqe_comp_layout)->pkt_info;
@@ -507,7 +509,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 		}
 	}
-	if (rxq->cqe_comp_layout) {
+	if (rxq->cqe_comp_layout && keep) {
 		int ret;
 		/* Keep unzipping if the next CQE is the miniCQE array. */
 		cq = &cq[mcqe_n];
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 3ed688191f..510f60b25d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -63,13 +63,15 @@ rxq_copy_mbuf_v(struct rte_mbuf **elts, struct rte_mbuf **pkts, uint16_t n)
  * @param elts
  *   Pointer to SW ring to be filled. The first mbuf has to be pre-built from
  *   the title completion descriptor to be copied to the rest of mbufs.
+ * @param keep
+ *   Keep unzipping if the next CQE is the miniCQE array.
  *
  * @return
  *   Number of mini-CQEs successfully decompressed.
  */
 static inline uint16_t
 rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
-		    struct rte_mbuf **elts)
+		    struct rte_mbuf **elts, bool keep)
 {
 	volatile struct mlx5_mini_cqe8 *mcq =
 		(void *)&(cq + !rxq->cqe_comp_layout)->pkt_info;
@@ -372,7 +374,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 		}
 	}
-	if (rxq->cqe_comp_layout) {
+	if (rxq->cqe_comp_layout && keep) {
 		int ret;
 		/* Keep unzipping if the next CQE is the miniCQE array. */
 		cq = &cq[mcqe_n];
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 2bdd1f676d..06bec45cdf 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -65,13 +65,15 @@ rxq_copy_mbuf_v(struct rte_mbuf **elts, struct rte_mbuf **pkts, uint16_t n)
  * @param elts
  *   Pointer to SW ring to be filled. The first mbuf has to be pre-built from
  *   the title completion descriptor to be copied to the rest of mbufs.
+ * @param keep
+ *   Keep unzipping if the next CQE is the miniCQE array.
  *
  * @return
  *   Number of mini-CQEs successfully decompressed.
  */
 static inline uint16_t
 rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
-		    struct rte_mbuf **elts)
+		    struct rte_mbuf **elts, bool keep)
 {
 	volatile struct mlx5_mini_cqe8 *mcq = (void *)(cq + !rxq->cqe_comp_layout);
 	/* Title packet is pre-built. */
@@ -361,7 +363,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
 			}
 		}
 	}
-	if (rxq->cqe_comp_layout) {
+	if (rxq->cqe_comp_layout && keep) {
 		int ret;
 		/* Keep unzipping if the next CQE is the miniCQE array. */
 		cq = &cq[mcqe_n];
-- 
2.18.2